From: "Alice Ferrazzi" <alicef@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:5.12 commit in: /
Date: Fri, 14 May 2021 14:02:03 +0000 (UTC)
Message-ID: <1621000886.0c96d61b12324bd9d6e4ea92c8c39ad05645f1af.alicef@gentoo>

commit:     0c96d61b12324bd9d6e4ea92c8c39ad05645f1af
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri May 14 14:01:09 2021 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri May 14 14:01:26 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0c96d61b

Linux patch 5.12.4

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README             |     4 +
 1003_linux-5.12.4.patch | 25407 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 25411 insertions(+)

diff --git a/0000_README b/0000_README
index 00f2c30..9e34155 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-5.12.3.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.12.3
 
+Patch:  1003_linux-5.12.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.12.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-5.12.4.patch b/1003_linux-5.12.4.patch
new file mode 100644
index 0000000..29dd42c
--- /dev/null
+++ b/1003_linux-5.12.4.patch
@@ -0,0 +1,25407 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 04545725f187f..835f810f2f26d 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -1869,13 +1869,6 @@
+ 			bypassed by not enabling DMAR with this option. In
+ 			this case, gfx device will use physical address for
+ 			DMA.
+-		forcedac [X86-64]
+-			With this option iommu will not optimize to look
+-			for io virtual address below 32-bit forcing dual
+-			address cycle on pci bus for cards supporting greater
+-			than 32-bit addressing. The default is to look
+-			for translation below 32-bit and if not available
+-			then look in the higher range.
+ 		strict [Default Off]
+ 			With this option on every unmap_single operation will
+ 			result in a hardware IOTLB flush operation as opposed
+@@ -1964,6 +1957,14 @@
+ 		nobypass	[PPC/POWERNV]
+ 			Disable IOMMU bypass, using IOMMU for PCI devices.
+ 
++	iommu.forcedac=	[ARM64, X86] Control IOVA allocation for PCI devices.
++			Format: { "0" | "1" }
++			0 - Try to allocate a 32-bit DMA address first, before
++			  falling back to the full range if needed.
++			1 - Allocate directly from the full usable range,
++			  forcing Dual Address Cycle for PCI cards supporting
++			  greater than 32-bit addressing.
++
+ 	iommu.strict=	[ARM64] Configure TLB invalidation behaviour
+ 			Format: { "0" | "1" }
+ 			0 - Lazy mode.
+diff --git a/Documentation/devicetree/bindings/serial/st,stm32-uart.yaml b/Documentation/devicetree/bindings/serial/st,stm32-uart.yaml
+index 8631678283f9a..865be05083c3d 100644
+--- a/Documentation/devicetree/bindings/serial/st,stm32-uart.yaml
++++ b/Documentation/devicetree/bindings/serial/st,stm32-uart.yaml
+@@ -80,7 +80,8 @@ required:
+   - interrupts
+   - clocks
+ 
+-additionalProperties: false
++additionalProperties:
++  type: object
+ 
+ examples:
+   - |
+diff --git a/Documentation/driver-api/xilinx/eemi.rst b/Documentation/driver-api/xilinx/eemi.rst
+index 9dcbc6f18d75d..c1bc47b9000dc 100644
+--- a/Documentation/driver-api/xilinx/eemi.rst
++++ b/Documentation/driver-api/xilinx/eemi.rst
+@@ -16,35 +16,8 @@ components running across different processing clusters on a chip or
+ device to communicate with a power management controller (PMC) on a
+ device to issue or respond to power management requests.
+ 
+-EEMI ops is a structure containing all eemi APIs supported by Zynq MPSoC.
+-The zynqmp-firmware driver maintain all EEMI APIs in zynqmp_eemi_ops
+-structure. Any driver who want to communicate with PMC using EEMI APIs
+-can call zynqmp_pm_get_eemi_ops().
+-
+-Example of EEMI ops::
+-
+-	/* zynqmp-firmware driver maintain all EEMI APIs */
+-	struct zynqmp_eemi_ops {
+-		int (*get_api_version)(u32 *version);
+-		int (*query_data)(struct zynqmp_pm_query_data qdata, u32 *out);
+-	};
+-
+-	static const struct zynqmp_eemi_ops eemi_ops = {
+-		.get_api_version = zynqmp_pm_get_api_version,
+-		.query_data = zynqmp_pm_query_data,
+-	};
+-
+-Example of EEMI ops usage::
+-
+-	static const struct zynqmp_eemi_ops *eemi_ops;
+-	u32 ret_payload[PAYLOAD_ARG_CNT];
+-	int ret;
+-
+-	eemi_ops = zynqmp_pm_get_eemi_ops();
+-	if (IS_ERR(eemi_ops))
+-		return PTR_ERR(eemi_ops);
+-
+-	ret = eemi_ops->query_data(qdata, ret_payload);
++Any driver who wants to communicate with PMC using EEMI APIs use the
++functions provided for each function.
+ 
+ IOCTL
+ ------
+diff --git a/Documentation/userspace-api/media/v4l/subdev-formats.rst b/Documentation/userspace-api/media/v4l/subdev-formats.rst
+index 7f16cbe46e5c2..e6a9faa811973 100644
+--- a/Documentation/userspace-api/media/v4l/subdev-formats.rst
++++ b/Documentation/userspace-api/media/v4l/subdev-formats.rst
+@@ -1567,8 +1567,8 @@ The following tables list existing packed RGB formats.
+       - MEDIA_BUS_FMT_RGB101010_1X30
+       - 0x1018
+       -
+-      - 0
+-      - 0
++      -
++      -
+       - r\ :sub:`9`
+       - r\ :sub:`8`
+       - r\ :sub:`7`
+diff --git a/Makefile b/Makefile
+index 53a4b1cb7bb06..0b1852621615c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 12
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Frozen Wasteland
+ 
+diff --git a/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts b/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
+index 6c9804d2f3b4f..6df1ce5450618 100644
+--- a/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
++++ b/arch/arm/boot/dts/aspeed-bmc-ibm-rainier.dts
+@@ -713,9 +713,9 @@
+ 	multi-master;
+ 	status = "okay";
+ 
+-	si7021-a20@20 {
++	si7021-a20@40 {
+ 		compatible = "silabs,si7020";
+-		reg = <0x20>;
++		reg = <0x40>;
+ 	};
+ 
+ 	tmp275@48 {
+diff --git a/arch/arm/boot/dts/exynos4210-i9100.dts b/arch/arm/boot/dts/exynos4210-i9100.dts
+index 304a8ee2364c5..d98c78207aaf1 100644
+--- a/arch/arm/boot/dts/exynos4210-i9100.dts
++++ b/arch/arm/boot/dts/exynos4210-i9100.dts
+@@ -136,7 +136,7 @@
+ 			compatible = "maxim,max17042";
+ 
+ 			interrupt-parent = <&gpx2>;
+-			interrupts = <3 IRQ_TYPE_EDGE_FALLING>;
++			interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+ 
+ 			pinctrl-0 = <&max17042_fuel_irq>;
+ 			pinctrl-names = "default";
+diff --git a/arch/arm/boot/dts/exynos4412-midas.dtsi b/arch/arm/boot/dts/exynos4412-midas.dtsi
+index 111c32bae02c0..fc77c1bfd844e 100644
+--- a/arch/arm/boot/dts/exynos4412-midas.dtsi
++++ b/arch/arm/boot/dts/exynos4412-midas.dtsi
+@@ -173,7 +173,7 @@
+ 		pmic@66 {
+ 			compatible = "maxim,max77693";
+ 			interrupt-parent = <&gpx1>;
+-			interrupts = <5 IRQ_TYPE_EDGE_FALLING>;
++			interrupts = <5 IRQ_TYPE_LEVEL_LOW>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&max77693_irq>;
+ 			reg = <0x66>;
+@@ -221,7 +221,7 @@
+ 		fuel-gauge@36 {
+ 			compatible = "maxim,max17047";
+ 			interrupt-parent = <&gpx2>;
+-			interrupts = <3 IRQ_TYPE_EDGE_FALLING>;
++			interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&max77693_fuel_irq>;
+ 			reg = <0x36>;
+@@ -665,7 +665,7 @@
+ 	max77686: pmic@9 {
+ 		compatible = "maxim,max77686";
+ 		interrupt-parent = <&gpx0>;
+-		interrupts = <7 IRQ_TYPE_NONE>;
++		interrupts = <7 IRQ_TYPE_LEVEL_LOW>;
+ 		pinctrl-0 = <&max77686_irq>;
+ 		pinctrl-names = "default";
+ 		reg = <0x09>;
+diff --git a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
+index 2b20d9095d9f2..eebe6a3952ce8 100644
+--- a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
++++ b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
+@@ -278,7 +278,7 @@
+ 	max77686: pmic@9 {
+ 		compatible = "maxim,max77686";
+ 		interrupt-parent = <&gpx3>;
+-		interrupts = <2 IRQ_TYPE_NONE>;
++		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&max77686_irq>;
+ 		reg = <0x09>;
+diff --git a/arch/arm/boot/dts/exynos4412-p4note.dtsi b/arch/arm/boot/dts/exynos4412-p4note.dtsi
+index b2f9d5448a188..9e750890edb87 100644
+--- a/arch/arm/boot/dts/exynos4412-p4note.dtsi
++++ b/arch/arm/boot/dts/exynos4412-p4note.dtsi
+@@ -146,7 +146,7 @@
+ 			pinctrl-0 = <&fuel_alert_irq>;
+ 			pinctrl-names = "default";
+ 			interrupt-parent = <&gpx2>;
+-			interrupts = <3 IRQ_TYPE_EDGE_FALLING>;
++			interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+ 			maxim,rsns-microohm = <10000>;
+ 			maxim,over-heat-temp = <600>;
+ 			maxim,over-volt = <4300>;
+@@ -322,7 +322,7 @@
+ 	max77686: pmic@9 {
+ 		compatible = "maxim,max77686";
+ 		interrupt-parent = <&gpx0>;
+-		interrupts = <7 IRQ_TYPE_NONE>;
++		interrupts = <7 IRQ_TYPE_LEVEL_LOW>;
+ 		pinctrl-0 = <&max77686_irq>;
+ 		pinctrl-names = "default";
+ 		reg = <0x09>;
+diff --git a/arch/arm/boot/dts/exynos5250-smdk5250.dts b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+index 8b5a79a8720c6..39bbe18145cf2 100644
+--- a/arch/arm/boot/dts/exynos5250-smdk5250.dts
++++ b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+@@ -134,7 +134,7 @@
+ 		compatible = "maxim,max77686";
+ 		reg = <0x09>;
+ 		interrupt-parent = <&gpx3>;
+-		interrupts = <2 IRQ_TYPE_NONE>;
++		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&max77686_irq>;
+ 		#clock-cells = <1>;
+diff --git a/arch/arm/boot/dts/exynos5250-snow-common.dtsi b/arch/arm/boot/dts/exynos5250-snow-common.dtsi
+index 6635f6184051e..2335c46873494 100644
+--- a/arch/arm/boot/dts/exynos5250-snow-common.dtsi
++++ b/arch/arm/boot/dts/exynos5250-snow-common.dtsi
+@@ -292,7 +292,7 @@
+ 	max77686: pmic@9 {
+ 		compatible = "maxim,max77686";
+ 		interrupt-parent = <&gpx3>;
+-		interrupts = <2 IRQ_TYPE_NONE>;
++		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&max77686_irq>;
+ 		wakeup-source;
+diff --git a/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts b/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts
+index 0cda654371ae7..56ee02ceba7d4 100644
+--- a/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts
++++ b/arch/arm/boot/dts/qcom-msm8974-lge-nexus5-hammerhead.dts
+@@ -575,7 +575,7 @@
+ 			maxim,rcomp = /bits/ 8 <0x4d>;
+ 
+ 			interrupt-parent = <&msmgpio>;
+-			interrupts = <9 IRQ_TYPE_EDGE_FALLING>;
++			interrupts = <9 IRQ_TYPE_LEVEL_LOW>;
+ 
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&fuelgauge_pin>;
+diff --git a/arch/arm/boot/dts/qcom-msm8974-samsung-klte.dts b/arch/arm/boot/dts/qcom-msm8974-samsung-klte.dts
+index a0f7f461f48c8..2dadb836c5fed 100644
+--- a/arch/arm/boot/dts/qcom-msm8974-samsung-klte.dts
++++ b/arch/arm/boot/dts/qcom-msm8974-samsung-klte.dts
+@@ -717,7 +717,7 @@
+ 			maxim,rcomp = /bits/ 8 <0x56>;
+ 
+ 			interrupt-parent = <&pma8084_gpios>;
+-			interrupts = <21 IRQ_TYPE_EDGE_FALLING>;
++			interrupts = <21 IRQ_TYPE_LEVEL_LOW>;
+ 
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&fuelgauge_pin>;
+diff --git a/arch/arm/boot/dts/r8a7790-lager.dts b/arch/arm/boot/dts/r8a7790-lager.dts
+index 09a152b915575..1d6f0c5d02e9a 100644
+--- a/arch/arm/boot/dts/r8a7790-lager.dts
++++ b/arch/arm/boot/dts/r8a7790-lager.dts
+@@ -53,6 +53,9 @@
+ 		i2c11 = &i2cexio1;
+ 		i2c12 = &i2chdmi;
+ 		i2c13 = &i2cpwr;
++		mmc0 = &mmcif1;
++		mmc1 = &sdhi0;
++		mmc2 = &sdhi2;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm/boot/dts/r8a7791-koelsch.dts b/arch/arm/boot/dts/r8a7791-koelsch.dts
+index f603cba5441fc..6af1727b82690 100644
+--- a/arch/arm/boot/dts/r8a7791-koelsch.dts
++++ b/arch/arm/boot/dts/r8a7791-koelsch.dts
+@@ -53,6 +53,9 @@
+ 		i2c12 = &i2cexio1;
+ 		i2c13 = &i2chdmi;
+ 		i2c14 = &i2cexio4;
++		mmc0 = &sdhi0;
++		mmc1 = &sdhi1;
++		mmc2 = &sdhi2;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm/boot/dts/r8a7791-porter.dts b/arch/arm/boot/dts/r8a7791-porter.dts
+index c6d563fb7ec7c..bf51e29c793a3 100644
+--- a/arch/arm/boot/dts/r8a7791-porter.dts
++++ b/arch/arm/boot/dts/r8a7791-porter.dts
+@@ -28,6 +28,8 @@
+ 		serial0 = &scif0;
+ 		i2c9 = &gpioi2c2;
+ 		i2c10 = &i2chdmi;
++		mmc0 = &sdhi0;
++		mmc1 = &sdhi2;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm/boot/dts/r8a7793-gose.dts b/arch/arm/boot/dts/r8a7793-gose.dts
+index abf487e8fe0f3..2b59a04913500 100644
+--- a/arch/arm/boot/dts/r8a7793-gose.dts
++++ b/arch/arm/boot/dts/r8a7793-gose.dts
+@@ -49,6 +49,9 @@
+ 		i2c10 = &gpioi2c4;
+ 		i2c11 = &i2chdmi;
+ 		i2c12 = &i2cexio4;
++		mmc0 = &sdhi0;
++		mmc1 = &sdhi1;
++		mmc2 = &sdhi2;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm/boot/dts/r8a7794-alt.dts b/arch/arm/boot/dts/r8a7794-alt.dts
+index 3f1cc5bbf3297..32025986b3b9b 100644
+--- a/arch/arm/boot/dts/r8a7794-alt.dts
++++ b/arch/arm/boot/dts/r8a7794-alt.dts
+@@ -19,6 +19,9 @@
+ 		i2c10 = &gpioi2c4;
+ 		i2c11 = &i2chdmi;
+ 		i2c12 = &i2cexio4;
++		mmc0 = &mmcif0;
++		mmc1 = &sdhi0;
++		mmc2 = &sdhi1;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm/boot/dts/r8a7794-silk.dts b/arch/arm/boot/dts/r8a7794-silk.dts
+index 677596f6c9c9a..af066ee5e2754 100644
+--- a/arch/arm/boot/dts/r8a7794-silk.dts
++++ b/arch/arm/boot/dts/r8a7794-silk.dts
+@@ -31,6 +31,8 @@
+ 		serial0 = &scif2;
+ 		i2c9 = &gpioi2c1;
+ 		i2c10 = &i2chdmi;
++		mmc0 = &mmcif0;
++		mmc1 = &sdhi1;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm/boot/dts/s5pv210-fascinate4g.dts b/arch/arm/boot/dts/s5pv210-fascinate4g.dts
+index ca064359dd308..b47d8300e536e 100644
+--- a/arch/arm/boot/dts/s5pv210-fascinate4g.dts
++++ b/arch/arm/boot/dts/s5pv210-fascinate4g.dts
+@@ -115,7 +115,7 @@
+ 	compatible = "maxim,max77836-battery";
+ 
+ 	interrupt-parent = <&gph3>;
+-	interrupts = <3 IRQ_TYPE_EDGE_FALLING>;
++	interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+ 
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&fg_irq>;
+diff --git a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+index 7b4249ed19833..060baa8b7e9d4 100644
+--- a/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
++++ b/arch/arm/boot/dts/stm32mp15-pinctrl.dtsi
+@@ -1891,10 +1891,15 @@
+ 	usart2_idle_pins_c: usart2-idle-2 {
+ 		pins1 {
+ 			pinmux = <STM32_PINMUX('D', 5, ANALOG)>, /* USART2_TX */
+-				 <STM32_PINMUX('D', 4, ANALOG)>, /* USART2_RTS */
+ 				 <STM32_PINMUX('D', 3, ANALOG)>; /* USART2_CTS_NSS */
+ 		};
+ 		pins2 {
++			pinmux = <STM32_PINMUX('D', 4, AF7)>; /* USART2_RTS */
++			bias-disable;
++			drive-push-pull;
++			slew-rate = <3>;
++		};
++		pins3 {
+ 			pinmux = <STM32_PINMUX('D', 6, AF7)>; /* USART2_RX */
+ 			bias-disable;
+ 		};
+@@ -1940,10 +1945,15 @@
+ 	usart3_idle_pins_b: usart3-idle-1 {
+ 		pins1 {
+ 			pinmux = <STM32_PINMUX('B', 10, ANALOG)>, /* USART3_TX */
+-				 <STM32_PINMUX('G', 8, ANALOG)>, /* USART3_RTS */
+ 				 <STM32_PINMUX('I', 10, ANALOG)>; /* USART3_CTS_NSS */
+ 		};
+ 		pins2 {
++			pinmux = <STM32_PINMUX('G', 8, AF8)>; /* USART3_RTS */
++			bias-disable;
++			drive-push-pull;
++			slew-rate = <0>;
++		};
++		pins3 {
+ 			pinmux = <STM32_PINMUX('B', 12, AF8)>; /* USART3_RX */
+ 			bias-disable;
+ 		};
+@@ -1976,10 +1986,15 @@
+ 	usart3_idle_pins_c: usart3-idle-2 {
+ 		pins1 {
+ 			pinmux = <STM32_PINMUX('B', 10, ANALOG)>, /* USART3_TX */
+-				 <STM32_PINMUX('G', 8, ANALOG)>, /* USART3_RTS */
+ 				 <STM32_PINMUX('B', 13, ANALOG)>; /* USART3_CTS_NSS */
+ 		};
+ 		pins2 {
++			pinmux = <STM32_PINMUX('G', 8, AF8)>; /* USART3_RTS */
++			bias-disable;
++			drive-push-pull;
++			slew-rate = <0>;
++		};
++		pins3 {
+ 			pinmux = <STM32_PINMUX('B', 12, AF8)>; /* USART3_RX */
+ 			bias-disable;
+ 		};
+diff --git a/arch/arm/boot/dts/uniphier-pxs2.dtsi b/arch/arm/boot/dts/uniphier-pxs2.dtsi
+index b0b15c97306b8..e81e5937a60ae 100644
+--- a/arch/arm/boot/dts/uniphier-pxs2.dtsi
++++ b/arch/arm/boot/dts/uniphier-pxs2.dtsi
+@@ -583,7 +583,7 @@
+ 			clocks = <&sys_clk 6>;
+ 			reset-names = "ether";
+ 			resets = <&sys_rst 6>;
+-			phy-mode = "rgmii";
++			phy-mode = "rgmii-id";
+ 			local-mac-address = [00 00 00 00 00 00];
+ 			socionext,syscon-phy-mode = <&soc_glue 0>;
+ 
+diff --git a/arch/arm/crypto/blake2s-core.S b/arch/arm/crypto/blake2s-core.S
+index bed897e9a181a..86345751bbf3a 100644
+--- a/arch/arm/crypto/blake2s-core.S
++++ b/arch/arm/crypto/blake2s-core.S
+@@ -8,6 +8,7 @@
+  */
+ 
+ #include <linux/linkage.h>
++#include <asm/assembler.h>
+ 
+ 	// Registers used to hold message words temporarily.  There aren't
+ 	// enough ARM registers to hold the whole message block, so we have to
+@@ -38,6 +39,23 @@
+ #endif
+ .endm
+ 
++.macro _le32_bswap	a, tmp
++#ifdef __ARMEB__
++	rev_l		\a, \tmp
++#endif
++.endm
++
++.macro _le32_bswap_8x	a, b, c, d, e, f, g, h,  tmp
++	_le32_bswap	\a, \tmp
++	_le32_bswap	\b, \tmp
++	_le32_bswap	\c, \tmp
++	_le32_bswap	\d, \tmp
++	_le32_bswap	\e, \tmp
++	_le32_bswap	\f, \tmp
++	_le32_bswap	\g, \tmp
++	_le32_bswap	\h, \tmp
++.endm
++
+ // Execute a quarter-round of BLAKE2s by mixing two columns or two diagonals.
+ // (a0, b0, c0, d0) and (a1, b1, c1, d1) give the registers containing the two
+ // columns/diagonals.  s0-s1 are the word offsets to the message words the first
+@@ -180,8 +198,10 @@ ENTRY(blake2s_compress_arch)
+ 	tst		r1, #3
+ 	bne		.Lcopy_block_misaligned
+ 	ldmia		r1!, {r2-r9}
++	_le32_bswap_8x	r2, r3, r4, r5, r6, r7, r8, r9,  r14
+ 	stmia		r12!, {r2-r9}
+ 	ldmia		r1!, {r2-r9}
++	_le32_bswap_8x	r2, r3, r4, r5, r6, r7, r8, r9,  r14
+ 	stmia		r12, {r2-r9}
+ .Lcopy_block_done:
+ 	str		r1, [sp, #68]		// Update message pointer
+@@ -268,6 +288,7 @@ ENTRY(blake2s_compress_arch)
+ 1:
+ #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ 	ldr		r3, [r1], #4
++	_le32_bswap	r3, r4
+ #else
+ 	ldrb		r3, [r1, #0]
+ 	ldrb		r4, [r1, #1]
+diff --git a/arch/arm/crypto/poly1305-glue.c b/arch/arm/crypto/poly1305-glue.c
+index 3023c1acfa194..c31bd8f7c0927 100644
+--- a/arch/arm/crypto/poly1305-glue.c
++++ b/arch/arm/crypto/poly1305-glue.c
+@@ -29,7 +29,7 @@ void __weak poly1305_blocks_neon(void *state, const u8 *src, u32 len, u32 hibit)
+ 
+ static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
+ 
+-void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 *key)
++void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 key[POLY1305_KEY_SIZE])
+ {
+ 	poly1305_init_arm(&dctx->h, key);
+ 	dctx->s[0] = get_unaligned_le32(key + 16);
+diff --git a/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908-asus-gt-ac5300.dts b/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908-asus-gt-ac5300.dts
+index 6e4ad66ff5360..8d5d368dbe90e 100644
+--- a/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908-asus-gt-ac5300.dts
++++ b/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908-asus-gt-ac5300.dts
+@@ -65,6 +65,7 @@
+ 	port@7 {
+ 		label = "sw";
+ 		reg = <7>;
++		phy-mode = "rgmii";
+ 
+ 		fixed-link {
+ 			speed = <1000>;
+diff --git a/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi b/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi
+index 9354077f74cd6..9e799328c6dbc 100644
+--- a/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi
++++ b/arch/arm64/boot/dts/broadcom/bcm4908/bcm4908.dtsi
+@@ -131,7 +131,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		ethernet-switch@80000 {
++		bus@80000 {
+ 			compatible = "simple-bus";
+ 			#size-cells = <1>;
+ 			#address-cells = <1>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173-evb.dts b/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
+index 6dffada2e66b4..28aa634c9780e 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
+@@ -294,7 +294,7 @@
+ 
+ &pwrap {
+ 	/* Only MT8173 E1 needs USB power domain */
+-	power-domains = <&scpsys MT8173_POWER_DOMAIN_USB>;
++	power-domains = <&spm MT8173_POWER_DOMAIN_USB>;
+ 
+ 	pmic: mt6397 {
+ 		compatible = "mediatek,mt6397";
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+index 80519a145f13f..16f4b1fc0fb92 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+@@ -983,6 +983,9 @@
+ 			compatible = "mediatek,mt8183-mmsys", "syscon";
+ 			reg = <0 0x14000000 0 0x1000>;
+ 			#clock-cells = <1>;
++			mboxes = <&gce 0 CMDQ_THR_PRIO_HIGHEST>,
++				 <&gce 1 CMDQ_THR_PRIO_HIGHEST>;
++			mediatek,gce-client-reg = <&gce SUBSYS_1400XXXX 0 0x1000>;
+ 		};
+ 
+ 		ovl0: ovl@14008000 {
+@@ -1058,6 +1061,7 @@
+ 			interrupts = <GIC_SPI 232 IRQ_TYPE_LEVEL_LOW>;
+ 			power-domains = <&spm MT8183_POWER_DOMAIN_DISP>;
+ 			clocks = <&mmsys CLK_MM_DISP_CCORR0>;
++			mediatek,gce-client-reg = <&gce SUBSYS_1400XXXX 0xf000 0x1000>;
+ 		};
+ 
+ 		aal0: aal@14010000 {
+@@ -1067,6 +1071,7 @@
+ 			interrupts = <GIC_SPI 233 IRQ_TYPE_LEVEL_LOW>;
+ 			power-domains = <&spm MT8183_POWER_DOMAIN_DISP>;
+ 			clocks = <&mmsys CLK_MM_DISP_AAL0>;
++			mediatek,gce-client-reg = <&gce SUBSYS_1401XXXX 0 0x1000>;
+ 		};
+ 
+ 		gamma0: gamma@14011000 {
+@@ -1075,6 +1080,7 @@
+ 			interrupts = <GIC_SPI 234 IRQ_TYPE_LEVEL_LOW>;
+ 			power-domains = <&spm MT8183_POWER_DOMAIN_DISP>;
+ 			clocks = <&mmsys CLK_MM_DISP_GAMMA0>;
++			mediatek,gce-client-reg = <&gce SUBSYS_1401XXXX 0x1000 0x1000>;
+ 		};
+ 
+ 		dither0: dither@14012000 {
+@@ -1083,6 +1089,7 @@
+ 			interrupts = <GIC_SPI 235 IRQ_TYPE_LEVEL_LOW>;
+ 			power-domains = <&spm MT8183_POWER_DOMAIN_DISP>;
+ 			clocks = <&mmsys CLK_MM_DISP_DITHER0>;
++			mediatek,gce-client-reg = <&gce SUBSYS_1401XXXX 0x2000 0x1000>;
+ 		};
+ 
+ 		dsi0: dsi@14014000 {
+diff --git a/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi b/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
+index 63fd70086bb85..9f27e7ed5e225 100644
+--- a/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
++++ b/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
+@@ -56,7 +56,7 @@
+ 	tca6416: gpio@20 {
+ 		compatible = "ti,tca6416";
+ 		reg = <0x20>;
+-		reset-gpios = <&pio 65 GPIO_ACTIVE_HIGH>;
++		reset-gpios = <&pio 65 GPIO_ACTIVE_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&tca6416_pins>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
+index 07c8b2c926c0c..b8f7cf5cbdab1 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor.dtsi
+@@ -22,9 +22,11 @@
+ 			thermal-sensors = <&pm6150_adc_tm 1>;
+ 
+ 			trips {
+-				temperature = <125000>;
+-				hysteresis = <1000>;
+-				type = "critical";
++				charger-crit {
++					temperature = <125000>;
++					hysteresis = <1000>;
++					type = "critical";
++				};
+ 			};
+ 		};
+ 	};
+@@ -768,17 +770,17 @@ hp_i2c: &i2c9 {
+ };
+ 
+ &spi0 {
+-	pinctrl-0 = <&qup_spi0_cs_gpio>;
++	pinctrl-0 = <&qup_spi0_cs_gpio_init_high>, <&qup_spi0_cs_gpio>;
+ 	cs-gpios = <&tlmm 37 GPIO_ACTIVE_LOW>;
+ };
+ 
+ &spi6 {
+-	pinctrl-0 = <&qup_spi6_cs_gpio>;
++	pinctrl-0 = <&qup_spi6_cs_gpio_init_high>, <&qup_spi6_cs_gpio>;
+ 	cs-gpios = <&tlmm 62 GPIO_ACTIVE_LOW>;
+ };
+ 
+ ap_spi_fp: &spi10 {
+-	pinctrl-0 = <&qup_spi10_cs_gpio>;
++	pinctrl-0 = <&qup_spi10_cs_gpio_init_high>, <&qup_spi10_cs_gpio>;
+ 	cs-gpios = <&tlmm 89 GPIO_ACTIVE_LOW>;
+ 
+ 	cros_ec_fp: ec@0 {
+@@ -1339,6 +1341,27 @@ ap_spi_fp: &spi10 {
+ 		};
+ 	};
+ 
++	qup_spi0_cs_gpio_init_high: qup-spi0-cs-gpio-init-high {
++		pinconf {
++			pins = "gpio37";
++			output-high;
++		};
++	};
++
++	qup_spi6_cs_gpio_init_high: qup-spi6-cs-gpio-init-high {
++		pinconf {
++			pins = "gpio62";
++			output-high;
++		};
++	};
++
++	qup_spi10_cs_gpio_init_high: qup-spi10-cs-gpio-init-high {
++		pinconf {
++			pins = "gpio89";
++			output-high;
++		};
++	};
++
+ 	qup_uart3_sleep: qup-uart3-sleep {
+ 		pinmux {
+ 			pins = "gpio38", "gpio39",
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+index c4ac6f5dc008d..96d36b38f2696 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+@@ -1015,7 +1015,7 @@
+ 		left_spkr: wsa8810-left{
+ 			compatible = "sdw10217201000";
+ 			reg = <0 1>;
+-			powerdown-gpios = <&wcdgpio 2 GPIO_ACTIVE_HIGH>;
++			powerdown-gpios = <&wcdgpio 1 GPIO_ACTIVE_HIGH>;
+ 			#thermal-sensor-cells = <0>;
+ 			sound-name-prefix = "SpkrLeft";
+ 			#sound-dai-cells = <0>;
+@@ -1023,7 +1023,7 @@
+ 
+ 		right_spkr: wsa8810-right{
+ 			compatible = "sdw10217201000";
+-			powerdown-gpios = <&wcdgpio 2 GPIO_ACTIVE_HIGH>;
++			powerdown-gpios = <&wcdgpio 1 GPIO_ACTIVE_HIGH>;
+ 			reg = <0 2>;
+ 			#thermal-sensor-cells = <0>;
+ 			sound-name-prefix = "SpkrRight";
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 454f794af5479..6a2ed02d383d0 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -2382,7 +2382,7 @@
+ 			#gpio-cells = <2>;
+ 			interrupt-controller;
+ 			#interrupt-cells = <2>;
+-			gpio-ranges = <&tlmm 0 0 150>;
++			gpio-ranges = <&tlmm 0 0 151>;
+ 			wakeup-parent = <&pdc_intc>;
+ 
+ 			cci0_default: cci0-default {
+diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+index e5bb17bc2f46b..778613d3410b1 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+@@ -914,7 +914,7 @@
+ 			      <0x0 0x03D00000 0x0 0x300000>;
+ 			reg-names = "west", "east", "north", "south";
+ 			interrupts = <GIC_SPI 208 IRQ_TYPE_LEVEL_HIGH>;
+-			gpio-ranges = <&tlmm 0 0 175>;
++			gpio-ranges = <&tlmm 0 0 176>;
+ 			gpio-controller;
+ 			#gpio-cells = <2>;
+ 			interrupt-controller;
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 947e1accae3ab..46a6c18cea91e 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -279,7 +279,7 @@
+ 
+ 	pmu {
+ 		compatible = "arm,armv8-pmuv3";
+-		interrupts = <GIC_PPI 7 IRQ_TYPE_LEVEL_HIGH>;
++		interrupts = <GIC_PPI 7 IRQ_TYPE_LEVEL_LOW>;
+ 	};
+ 
+ 	psci {
+@@ -2327,10 +2327,9 @@
+ 			reg = <0 0x0ae00000 0 0x1000>;
+ 			reg-names = "mdss";
+ 
+-			interconnects = <&gem_noc MASTER_AMPSS_M0 &config_noc SLAVE_DISPLAY_CFG>,
+-					<&mmss_noc MASTER_MDP_PORT0 &mc_virt SLAVE_EBI_CH0>,
++			interconnects = <&mmss_noc MASTER_MDP_PORT0 &mc_virt SLAVE_EBI_CH0>,
+ 					<&mmss_noc MASTER_MDP_PORT1 &mc_virt SLAVE_EBI_CH0>;
+-			interconnect-names = "notused", "mdp0-mem", "mdp1-mem";
++			interconnect-names = "mdp0-mem", "mdp1-mem";
+ 
+ 			power-domains = <&dispcc MDSS_GDSC>;
+ 
+@@ -2580,7 +2579,7 @@
+ 
+ 		dispcc: clock-controller@af00000 {
+ 			compatible = "qcom,sm8250-dispcc";
+-			reg = <0 0x0af00000 0 0x20000>;
++			reg = <0 0x0af00000 0 0x10000>;
+ 			mmcx-supply = <&mmcx_reg>;
+ 			clocks = <&rpmhcc RPMH_CXO_CLK>,
+ 				 <&dsi0_phy 0>,
+@@ -2588,28 +2587,14 @@
+ 				 <&dsi1_phy 0>,
+ 				 <&dsi1_phy 1>,
+ 				 <0>,
+-				 <0>,
+-				 <0>,
+-				 <0>,
+-				 <0>,
+-				 <0>,
+-				 <0>,
+-				 <0>,
+-				 <&sleep_clk>;
++				 <0>;
+ 			clock-names = "bi_tcxo",
+ 				      "dsi0_phy_pll_out_byteclk",
+ 				      "dsi0_phy_pll_out_dsiclk",
+ 				      "dsi1_phy_pll_out_byteclk",
+ 				      "dsi1_phy_pll_out_dsiclk",
+-				      "dp_link_clk_divsel_ten",
+-				      "dp_vco_divided_clk_src_mux",
+-				      "dptx1_phy_pll_link_clk",
+-				      "dptx1_phy_pll_vco_div_clk",
+-				      "dptx2_phy_pll_link_clk",
+-				      "dptx2_phy_pll_vco_div_clk",
+-				      "edp_phy_pll_link_clk",
+-				      "edp_phy_pll_vco_div_clk",
+-				      "sleep_clk";
++				      "dp_phy_pll_link_clk",
++				      "dp_phy_pll_vco_div_clk";
+ 			#clock-cells = <1>;
+ 			#reset-cells = <1>;
+ 			#power-domain-cells = <1>;
+@@ -2689,7 +2674,7 @@
+ 			#gpio-cells = <2>;
+ 			interrupt-controller;
+ 			#interrupt-cells = <2>;
+-			gpio-ranges = <&tlmm 0 0 180>;
++			gpio-ranges = <&tlmm 0 0 181>;
+ 			wakeup-parent = <&pdc>;
+ 
+ 			pri_mi2s_active: pri-mi2s-active {
+@@ -3754,7 +3739,7 @@
+ 				(GIC_CPU_MASK_SIMPLE(8) | IRQ_TYPE_LEVEL_LOW)>,
+ 			     <GIC_PPI 11
+ 				(GIC_CPU_MASK_SIMPLE(8) | IRQ_TYPE_LEVEL_LOW)>,
+-			     <GIC_PPI 12
++			     <GIC_PPI 10
+ 				(GIC_CPU_MASK_SIMPLE(8) | IRQ_TYPE_LEVEL_LOW)>;
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 5ef460458f5c3..e2fca420e5183 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -153,7 +153,7 @@
+ 
+ 	pmu {
+ 		compatible = "arm,armv8-pmuv3";
+-		interrupts = <GIC_PPI 7 IRQ_TYPE_LEVEL_HIGH>;
++		interrupts = <GIC_PPI 7 IRQ_TYPE_LEVEL_LOW>;
+ 	};
+ 
+ 	psci {
+@@ -382,7 +382,7 @@
+ 			#gpio-cells = <2>;
+ 			interrupt-controller;
+ 			#interrupt-cells = <2>;
+-			gpio-ranges = <&tlmm 0 0 203>;
++			gpio-ranges = <&tlmm 0 0 204>;
+ 
+ 			qup_uart3_default_state: qup-uart3-default-state {
+ 				rx {
+diff --git a/arch/arm64/boot/dts/renesas/hihope-common.dtsi b/arch/arm64/boot/dts/renesas/hihope-common.dtsi
+index 7a3da9b06f677..0c7e6f7905902 100644
+--- a/arch/arm64/boot/dts/renesas/hihope-common.dtsi
++++ b/arch/arm64/boot/dts/renesas/hihope-common.dtsi
+@@ -12,6 +12,9 @@
+ 	aliases {
+ 		serial0 = &scif2;
+ 		serial1 = &hscif0;
++		mmc0 = &sdhi3;
++		mmc1 = &sdhi0;
++		mmc2 = &sdhi2;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/renesas/r8a774a1-beacon-rzg2m-kit.dts b/arch/arm64/boot/dts/renesas/r8a774a1-beacon-rzg2m-kit.dts
+index 501cb05da228d..3cf2e076940f3 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774a1-beacon-rzg2m-kit.dts
++++ b/arch/arm64/boot/dts/renesas/r8a774a1-beacon-rzg2m-kit.dts
+@@ -21,6 +21,9 @@
+ 		serial4 = &hscif2;
+ 		serial5 = &scif5;
+ 		ethernet0 = &avb;
++		mmc0 = &sdhi3;
++		mmc1 = &sdhi0;
++		mmc2 = &sdhi2;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/renesas/r8a774b1-beacon-rzg2n-kit.dts b/arch/arm64/boot/dts/renesas/r8a774b1-beacon-rzg2n-kit.dts
+index 71763f4402a7c..3c0d59def8ee5 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774b1-beacon-rzg2n-kit.dts
++++ b/arch/arm64/boot/dts/renesas/r8a774b1-beacon-rzg2n-kit.dts
+@@ -22,6 +22,9 @@
+ 		serial5 = &scif5;
+ 		serial6 = &scif4;
+ 		ethernet0 = &avb;
++		mmc0 = &sdhi3;
++		mmc1 = &sdhi0;
++		mmc2 = &sdhi2;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/renesas/r8a774c0-cat874.dts b/arch/arm64/boot/dts/renesas/r8a774c0-cat874.dts
+index ea87cb5a459c8..33257c6440b2c 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774c0-cat874.dts
++++ b/arch/arm64/boot/dts/renesas/r8a774c0-cat874.dts
+@@ -17,6 +17,8 @@
+ 	aliases {
+ 		serial0 = &scif2;
+ 		serial1 = &hscif2;
++		mmc0 = &sdhi0;
++		mmc1 = &sdhi3;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/renesas/r8a774e1-beacon-rzg2h-kit.dts b/arch/arm64/boot/dts/renesas/r8a774e1-beacon-rzg2h-kit.dts
+index 273f062f29093..7b6649a3ded02 100644
+--- a/arch/arm64/boot/dts/renesas/r8a774e1-beacon-rzg2h-kit.dts
++++ b/arch/arm64/boot/dts/renesas/r8a774e1-beacon-rzg2h-kit.dts
+@@ -22,6 +22,9 @@
+ 		serial5 = &scif5;
+ 		serial6 = &scif4;
+ 		ethernet0 = &avb;
++		mmc0 = &sdhi3;
++		mmc1 = &sdhi0;
++		mmc2 = &sdhi2;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/renesas/r8a77980.dtsi b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
+index ec7ca72399ec4..1ffa4a995a7ab 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77980.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
+@@ -992,8 +992,8 @@
+ 
+ 					reg = <1>;
+ 
+-					vin4csi41: endpoint@2 {
+-						reg = <2>;
++					vin4csi41: endpoint@3 {
++						reg = <3>;
+ 						remote-endpoint = <&csi41vin4>;
+ 					};
+ 				};
+@@ -1020,8 +1020,8 @@
+ 
+ 					reg = <1>;
+ 
+-					vin5csi41: endpoint@2 {
+-						reg = <2>;
++					vin5csi41: endpoint@3 {
++						reg = <3>;
+ 						remote-endpoint = <&csi41vin5>;
+ 					};
+ 				};
+@@ -1048,8 +1048,8 @@
+ 
+ 					reg = <1>;
+ 
+-					vin6csi41: endpoint@2 {
+-						reg = <2>;
++					vin6csi41: endpoint@3 {
++						reg = <3>;
+ 						remote-endpoint = <&csi41vin6>;
+ 					};
+ 				};
+@@ -1076,8 +1076,8 @@
+ 
+ 					reg = <1>;
+ 
+-					vin7csi41: endpoint@2 {
+-						reg = <2>;
++					vin7csi41: endpoint@3 {
++						reg = <3>;
+ 						remote-endpoint = <&csi41vin7>;
+ 					};
+ 				};
+diff --git a/arch/arm64/boot/dts/renesas/r8a77990-ebisu.dts b/arch/arm64/boot/dts/renesas/r8a77990-ebisu.dts
+index f74f8b9993f1d..6d6cdc4c324ba 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77990-ebisu.dts
++++ b/arch/arm64/boot/dts/renesas/r8a77990-ebisu.dts
+@@ -16,6 +16,9 @@
+ 	aliases {
+ 		serial0 = &scif2;
+ 		ethernet0 = &avb;
++		mmc0 = &sdhi3;
++		mmc1 = &sdhi0;
++		mmc2 = &sdhi1;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
+index dfd6ae8b564fb..86ac48e2c8491 100644
+--- a/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a779a0.dtsi
+@@ -60,10 +60,7 @@
+ 
+ 	pmu_a76 {
+ 		compatible = "arm,cortex-a76-pmu";
+-		interrupts-extended = <&gic GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>,
+-				      <&gic GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>,
+-				      <&gic GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
+-				      <&gic GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>;
++		interrupts-extended = <&gic GIC_PPI 7 IRQ_TYPE_LEVEL_LOW>;
+ 	};
+ 
+ 	/* External SCIF clock - to be overridden by boards that provide it */
+diff --git a/arch/arm64/boot/dts/renesas/salvator-common.dtsi b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+index c22bb38994e80..15bb1eeb66010 100644
+--- a/arch/arm64/boot/dts/renesas/salvator-common.dtsi
++++ b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+@@ -36,6 +36,9 @@
+ 		serial0 = &scif2;
+ 		serial1 = &hscif1;
+ 		ethernet0 = &avb;
++		mmc0 = &sdhi2;
++		mmc1 = &sdhi0;
++		mmc2 = &sdhi3;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
+index e9ed2597f1c20..61bd4df09df0d 100644
+--- a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
++++ b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
+@@ -16,6 +16,7 @@
+ 	aliases {
+ 		serial1 = &hscif0;
+ 		serial2 = &scif1;
++		mmc2 = &sdhi3;
+ 	};
+ 
+ 	clksndsel: clksndsel {
+diff --git a/arch/arm64/boot/dts/renesas/ulcb.dtsi b/arch/arm64/boot/dts/renesas/ulcb.dtsi
+index a04eae55dd6c4..3d88e95c65a53 100644
+--- a/arch/arm64/boot/dts/renesas/ulcb.dtsi
++++ b/arch/arm64/boot/dts/renesas/ulcb.dtsi
+@@ -23,6 +23,8 @@
+ 	aliases {
+ 		serial0 = &scif2;
+ 		ethernet0 = &avb;
++		mmc0 = &sdhi2;
++		mmc1 = &sdhi0;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi b/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
+index a87b8a6787196..8f2c1c1e2c64e 100644
+--- a/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
++++ b/arch/arm64/boot/dts/socionext/uniphier-ld20.dtsi
+@@ -734,7 +734,7 @@
+ 			clocks = <&sys_clk 6>;
+ 			reset-names = "ether";
+ 			resets = <&sys_rst 6>;
+-			phy-mode = "rgmii";
++			phy-mode = "rgmii-id";
+ 			local-mac-address = [00 00 00 00 00 00];
+ 			socionext,syscon-phy-mode = <&soc_glue 0>;
+ 
+diff --git a/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi b/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
+index 0e52dadf54b3a..be97da1322580 100644
+--- a/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
++++ b/arch/arm64/boot/dts/socionext/uniphier-pxs3.dtsi
+@@ -564,7 +564,7 @@
+ 			clocks = <&sys_clk 6>;
+ 			reset-names = "ether";
+ 			resets = <&sys_rst 6>;
+-			phy-mode = "rgmii";
++			phy-mode = "rgmii-id";
+ 			local-mac-address = [00 00 00 00 00 00];
+ 			socionext,syscon-phy-mode = <&soc_glue 0>;
+ 
+@@ -585,7 +585,7 @@
+ 			clocks = <&sys_clk 7>;
+ 			reset-names = "ether";
+ 			resets = <&sys_rst 7>;
+-			phy-mode = "rgmii";
++			phy-mode = "rgmii-id";
+ 			local-mac-address = [00 00 00 00 00 00];
+ 			socionext,syscon-phy-mode = <&soc_glue 1>;
+ 
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+index 8c84dafb7125c..f1e7da3dfa276 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
+@@ -1042,13 +1042,16 @@
+ 		assigned-clocks = <&k3_clks 91 1>;
+ 		assigned-clock-parents = <&k3_clks 91 2>;
+ 		bus-width = <8>;
+-		mmc-hs400-1_8v;
++		mmc-hs200-1_8v;
+ 		mmc-ddr-1_8v;
+ 		ti,otap-del-sel-legacy = <0xf>;
+ 		ti,otap-del-sel-mmc-hs = <0xf>;
+ 		ti,otap-del-sel-ddr52 = <0x5>;
+ 		ti,otap-del-sel-hs200 = <0x6>;
+ 		ti,otap-del-sel-hs400 = <0x0>;
++		ti,itap-del-sel-legacy = <0x10>;
++		ti,itap-del-sel-mmc-hs = <0xa>;
++		ti,itap-del-sel-ddr52 = <0x3>;
+ 		ti,trm-icp = <0x8>;
+ 		ti,strobe-sel = <0x77>;
+ 		dma-coherent;
+@@ -1069,9 +1072,15 @@
+ 		ti,otap-del-sel-sdr25 = <0xf>;
+ 		ti,otap-del-sel-sdr50 = <0xc>;
+ 		ti,otap-del-sel-ddr50 = <0xc>;
++		ti,itap-del-sel-legacy = <0x0>;
++		ti,itap-del-sel-sd-hs = <0x0>;
++		ti,itap-del-sel-sdr12 = <0x0>;
++		ti,itap-del-sel-sdr25 = <0x0>;
++		ti,itap-del-sel-ddr50 = <0x2>;
+ 		ti,trm-icp = <0x8>;
+ 		ti,clkbuf-sel = <0x7>;
+ 		dma-coherent;
++		sdhci-caps-mask = <0x2 0x0>;
+ 	};
+ 
+ 	main_sdhci2: mmc@4f98000 {
+@@ -1089,9 +1098,15 @@
+ 		ti,otap-del-sel-sdr25 = <0xf>;
+ 		ti,otap-del-sel-sdr50 = <0xc>;
+ 		ti,otap-del-sel-ddr50 = <0xc>;
++		ti,itap-del-sel-legacy = <0x0>;
++		ti,itap-del-sel-sd-hs = <0x0>;
++		ti,itap-del-sel-sdr12 = <0x0>;
++		ti,itap-del-sel-sdr25 = <0x0>;
++		ti,itap-del-sel-ddr50 = <0x2>;
+ 		ti,trm-icp = <0x8>;
+ 		ti,clkbuf-sel = <0x7>;
+ 		dma-coherent;
++		sdhci-caps-mask = <0x2 0x0>;
+ 	};
+ 
+ 	usbss0: cdns-usb@4104000 {
+diff --git a/arch/arm64/crypto/aes-modes.S b/arch/arm64/crypto/aes-modes.S
+index bbdb54702aa7a..247011356d110 100644
+--- a/arch/arm64/crypto/aes-modes.S
++++ b/arch/arm64/crypto/aes-modes.S
+@@ -359,6 +359,7 @@ ST5(	mov		v4.16b, vctr.16b		)
+ 	ins		vctr.d[0], x8
+ 
+ 	/* apply carry to N counter blocks for N := x12 */
++	cbz		x12, 2f
+ 	adr		x16, 1f
+ 	sub		x16, x16, x12, lsl #3
+ 	br		x16
+diff --git a/arch/arm64/crypto/poly1305-glue.c b/arch/arm64/crypto/poly1305-glue.c
+index 683de671741a7..9c3d86e397bf3 100644
+--- a/arch/arm64/crypto/poly1305-glue.c
++++ b/arch/arm64/crypto/poly1305-glue.c
+@@ -25,7 +25,7 @@ asmlinkage void poly1305_emit(void *state, u8 *digest, const u32 *nonce);
+ 
+ static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
+ 
+-void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 *key)
++void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 key[POLY1305_KEY_SIZE])
+ {
+ 	poly1305_init_arm64(&dctx->h, key);
+ 	dctx->s[0] = get_unaligned_le32(key + 16);
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 3d10e6527f7de..858c2fcfc043b 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -713,6 +713,7 @@ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+ static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
+ 
+ void kvm_arm_init_debug(void);
++void kvm_arm_vcpu_init_debug(struct kvm_vcpu *vcpu);
+ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
+ void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
+ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 7f06ba76698d8..84b5f79c9eab4 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -580,6 +580,8 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
+ 
+ 	vcpu->arch.has_run_once = true;
+ 
++	kvm_arm_vcpu_init_debug(vcpu);
++
+ 	if (likely(irqchip_in_kernel(kvm))) {
+ 		/*
+ 		 * Map the VGIC hardware resources before running a vcpu the
+@@ -1808,8 +1810,10 @@ static int init_hyp_mode(void)
+ 	if (is_protected_kvm_enabled()) {
+ 		init_cpu_logical_map();
+ 
+-		if (!init_psci_relay())
++		if (!init_psci_relay()) {
++			err = -ENODEV;
+ 			goto out_err;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
+index dbc8905116311..2484b2cca74bc 100644
+--- a/arch/arm64/kvm/debug.c
++++ b/arch/arm64/kvm/debug.c
+@@ -68,6 +68,64 @@ void kvm_arm_init_debug(void)
+ 	__this_cpu_write(mdcr_el2, kvm_call_hyp_ret(__kvm_get_mdcr_el2));
+ }
+ 
++/**
++ * kvm_arm_setup_mdcr_el2 - configure vcpu mdcr_el2 value
++ *
++ * @vcpu:	the vcpu pointer
++ *
++ * This ensures we will trap access to:
++ *  - Performance monitors (MDCR_EL2_TPM/MDCR_EL2_TPMCR)
++ *  - Debug ROM Address (MDCR_EL2_TDRA)
++ *  - OS related registers (MDCR_EL2_TDOSA)
++ *  - Statistical profiler (MDCR_EL2_TPMS/MDCR_EL2_E2PB)
++ *  - Self-hosted Trace Filter controls (MDCR_EL2_TTRF)
++ */
++static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
++{
++	/*
++	 * This also clears MDCR_EL2_E2PB_MASK to disable guest access
++	 * to the profiling buffer.
++	 */
++	vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
++	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
++				MDCR_EL2_TPMS |
++				MDCR_EL2_TTRF |
++				MDCR_EL2_TPMCR |
++				MDCR_EL2_TDRA |
++				MDCR_EL2_TDOSA);
++
++	/* Is the VM being debugged by userspace? */
++	if (vcpu->guest_debug)
++		/* Route all software debug exceptions to EL2 */
++		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE;
++
++	/*
++	 * Trap debug register access when one of the following is true:
++	 *  - Userspace is using the hardware to debug the guest
++	 *  (KVM_GUESTDBG_USE_HW is set).
++	 *  - The guest is not using debug (KVM_ARM64_DEBUG_DIRTY is clear).
++	 */
++	if ((vcpu->guest_debug & KVM_GUESTDBG_USE_HW) ||
++	    !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
++		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
++
++	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2);
++}
++
++/**
++ * kvm_arm_vcpu_init_debug - setup vcpu debug traps
++ *
++ * @vcpu:	the vcpu pointer
++ *
++ * Set vcpu initial mdcr_el2 value.
++ */
++void kvm_arm_vcpu_init_debug(struct kvm_vcpu *vcpu)
++{
++	preempt_disable();
++	kvm_arm_setup_mdcr_el2(vcpu);
++	preempt_enable();
++}
++
+ /**
+  * kvm_arm_reset_debug_ptr - reset the debug ptr to point to the vcpu state
+  */
+@@ -83,13 +141,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
+  * @vcpu:	the vcpu pointer
+  *
+  * This is called before each entry into the hypervisor to setup any
+- * debug related registers. Currently this just ensures we will trap
+- * access to:
+- *  - Performance monitors (MDCR_EL2_TPM/MDCR_EL2_TPMCR)
+- *  - Debug ROM Address (MDCR_EL2_TDRA)
+- *  - OS related registers (MDCR_EL2_TDOSA)
+- *  - Statistical profiler (MDCR_EL2_TPMS/MDCR_EL2_E2PB)
+- *  - Self-hosted Trace Filter controls (MDCR_EL2_TTRF)
++ * debug related registers.
+  *
+  * Additionally, KVM only traps guest accesses to the debug registers if
+  * the guest is not actively using them (see the KVM_ARM64_DEBUG_DIRTY
+@@ -101,28 +153,14 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
+ 
+ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
+ {
+-	bool trap_debug = !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY);
+ 	unsigned long mdscr, orig_mdcr_el2 = vcpu->arch.mdcr_el2;
+ 
+ 	trace_kvm_arm_setup_debug(vcpu, vcpu->guest_debug);
+ 
+-	/*
+-	 * This also clears MDCR_EL2_E2PB_MASK to disable guest access
+-	 * to the profiling buffer.
+-	 */
+-	vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
+-	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
+-				MDCR_EL2_TPMS |
+-				MDCR_EL2_TTRF |
+-				MDCR_EL2_TPMCR |
+-				MDCR_EL2_TDRA |
+-				MDCR_EL2_TDOSA);
++	kvm_arm_setup_mdcr_el2(vcpu);
+ 
+ 	/* Is Guest debugging in effect? */
+ 	if (vcpu->guest_debug) {
+-		/* Route all software debug exceptions to EL2 */
+-		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE;
+-
+ 		/* Save guest debug state */
+ 		save_guest_debug_regs(vcpu);
+ 
+@@ -176,7 +214,6 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
+ 
+ 			vcpu->arch.debug_ptr = &vcpu->arch.external_debug_state;
+ 			vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+-			trap_debug = true;
+ 
+ 			trace_kvm_arm_set_regset("BKPTS", get_num_brps(),
+ 						&vcpu->arch.debug_ptr->dbg_bcr[0],
+@@ -191,10 +228,6 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
+ 	BUG_ON(!vcpu->guest_debug &&
+ 		vcpu->arch.debug_ptr != &vcpu->arch.vcpu_debug_state);
+ 
+-	/* Trap debug register access */
+-	if (trap_debug)
+-		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
+-
+ 	/* If KDE or MDE are set, perform a full save/restore cycle. */
+ 	if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE))
+ 		vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;
+@@ -203,7 +236,6 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
+ 	if (has_vhe() && orig_mdcr_el2 != vcpu->arch.mdcr_el2)
+ 		write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+ 
+-	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2);
+ 	trace_kvm_arm_set_dreg32("MDSCR_EL1", vcpu_read_sys_reg(vcpu, MDSCR_EL1));
+ }
+ 
+diff --git a/arch/arm64/kvm/hyp/nvhe/gen-hyprel.c b/arch/arm64/kvm/hyp/nvhe/gen-hyprel.c
+index ead02c6a76289..6bc88a756cb7a 100644
+--- a/arch/arm64/kvm/hyp/nvhe/gen-hyprel.c
++++ b/arch/arm64/kvm/hyp/nvhe/gen-hyprel.c
+@@ -50,6 +50,18 @@
+ #ifndef R_AARCH64_ABS64
+ #define R_AARCH64_ABS64			257
+ #endif
++#ifndef R_AARCH64_PREL64
++#define R_AARCH64_PREL64		260
++#endif
++#ifndef R_AARCH64_PREL32
++#define R_AARCH64_PREL32		261
++#endif
++#ifndef R_AARCH64_PREL16
++#define R_AARCH64_PREL16		262
++#endif
++#ifndef R_AARCH64_PLT32
++#define R_AARCH64_PLT32			314
++#endif
+ #ifndef R_AARCH64_LD_PREL_LO19
+ #define R_AARCH64_LD_PREL_LO19		273
+ #endif
+@@ -371,6 +383,12 @@ static void emit_rela_section(Elf64_Shdr *sh_rela)
+ 		case R_AARCH64_ABS64:
+ 			emit_rela_abs64(rela, sh_orig_name);
+ 			break;
++		/* Allow position-relative data relocations. */
++		case R_AARCH64_PREL64:
++		case R_AARCH64_PREL32:
++		case R_AARCH64_PREL16:
++		case R_AARCH64_PLT32:
++			break;
+ 		/* Allow relocations to generate PC-relative addressing. */
+ 		case R_AARCH64_LD_PREL_LO19:
+ 		case R_AARCH64_ADR_PREL_LO21:
+diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
+index bd354cd45d286..4b5acd84b8c87 100644
+--- a/arch/arm64/kvm/reset.c
++++ b/arch/arm64/kvm/reset.c
+@@ -242,6 +242,11 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ 
+ 	/* Reset core registers */
+ 	memset(vcpu_gp_regs(vcpu), 0, sizeof(*vcpu_gp_regs(vcpu)));
++	memset(&vcpu->arch.ctxt.fp_regs, 0, sizeof(vcpu->arch.ctxt.fp_regs));
++	vcpu->arch.ctxt.spsr_abt = 0;
++	vcpu->arch.ctxt.spsr_und = 0;
++	vcpu->arch.ctxt.spsr_irq = 0;
++	vcpu->arch.ctxt.spsr_fiq = 0;
+ 	vcpu_gp_regs(vcpu)->pstate = pstate;
+ 
+ 	/* Reset system registers */
+diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
+index 44419679f91ad..7740995de982e 100644
+--- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
++++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
+@@ -87,8 +87,8 @@ int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write)
+ 			r = vgic_v3_set_redist_base(kvm, 0, *addr, 0);
+ 			goto out;
+ 		}
+-		rdreg = list_first_entry(&vgic->rd_regions,
+-					 struct vgic_redist_region, list);
++		rdreg = list_first_entry_or_null(&vgic->rd_regions,
++						 struct vgic_redist_region, list);
+ 		if (!rdreg)
+ 			addr_ptr = &undef_value;
+ 		else
+@@ -226,6 +226,9 @@ static int vgic_get_common_attr(struct kvm_device *dev,
+ 		u64 addr;
+ 		unsigned long type = (unsigned long)attr->attr;
+ 
++		if (copy_from_user(&addr, uaddr, sizeof(addr)))
++			return -EFAULT;
++
+ 		r = kvm_vgic_addr(dev->kvm, type, &addr, false);
+ 		if (r)
+ 			return (r == -ENODEV) ? -ENXIO : r;
+diff --git a/arch/ia64/kernel/acpi.c b/arch/ia64/kernel/acpi.c
+index a5636524af769..e2af6b172200e 100644
+--- a/arch/ia64/kernel/acpi.c
++++ b/arch/ia64/kernel/acpi.c
+@@ -446,7 +446,8 @@ void __init acpi_numa_fixup(void)
+ 	if (srat_num_cpus == 0) {
+ 		node_set_online(0);
+ 		node_cpuid[0].phys_id = hard_smp_processor_id();
+-		return;
++		slit_distance(0, 0) = LOCAL_DISTANCE;
++		goto out;
+ 	}
+ 
+ 	/*
+@@ -489,7 +490,7 @@ void __init acpi_numa_fixup(void)
+ 			for (j = 0; j < MAX_NUMNODES; j++)
+ 				slit_distance(i, j) = i == j ?
+ 					LOCAL_DISTANCE : REMOTE_DISTANCE;
+-		return;
++		goto out;
+ 	}
+ 
+ 	memset(numa_slit, -1, sizeof(numa_slit));
+@@ -514,6 +515,8 @@ void __init acpi_numa_fixup(void)
+ 		printk("\n");
+ 	}
+ #endif
++out:
++	node_possible_map = node_online_map;
+ }
+ #endif				/* CONFIG_ACPI_NUMA */
+ 
+diff --git a/arch/ia64/kernel/efi.c b/arch/ia64/kernel/efi.c
+index c5fe21de46a81..31149e41f9be0 100644
+--- a/arch/ia64/kernel/efi.c
++++ b/arch/ia64/kernel/efi.c
+@@ -415,10 +415,10 @@ efi_get_pal_addr (void)
+ 		mask  = ~((1 << IA64_GRANULE_SHIFT) - 1);
+ 
+ 		printk(KERN_INFO "CPU %d: mapping PAL code "
+-                       "[0x%lx-0x%lx) into [0x%lx-0x%lx)\n",
+-                       smp_processor_id(), md->phys_addr,
+-                       md->phys_addr + efi_md_size(md),
+-                       vaddr & mask, (vaddr & mask) + IA64_GRANULE_SIZE);
++			"[0x%llx-0x%llx) into [0x%llx-0x%llx)\n",
++			smp_processor_id(), md->phys_addr,
++			md->phys_addr + efi_md_size(md),
++			vaddr & mask, (vaddr & mask) + IA64_GRANULE_SIZE);
+ #endif
+ 		return __va(md->phys_addr);
+ 	}
+@@ -560,6 +560,7 @@ efi_init (void)
+ 	{
+ 		efi_memory_desc_t *md;
+ 		void *p;
++		unsigned int i;
+ 
+ 		for (i = 0, p = efi_map_start; p < efi_map_end;
+ 		     ++i, p += efi_desc_size)
+@@ -586,7 +587,7 @@ efi_init (void)
+ 			}
+ 
+ 			printk("mem%02d: %s "
+-			       "range=[0x%016lx-0x%016lx) (%4lu%s)\n",
++			       "range=[0x%016llx-0x%016llx) (%4lu%s)\n",
+ 			       i, efi_md_typeattr_format(buf, sizeof(buf), md),
+ 			       md->phys_addr,
+ 			       md->phys_addr + efi_md_size(md), size, unit);
+diff --git a/arch/m68k/include/asm/mvme147hw.h b/arch/m68k/include/asm/mvme147hw.h
+index 257b29184af91..e28eb1c0e0bfb 100644
+--- a/arch/m68k/include/asm/mvme147hw.h
++++ b/arch/m68k/include/asm/mvme147hw.h
+@@ -66,6 +66,9 @@ struct pcc_regs {
+ #define PCC_INT_ENAB		0x08
+ 
+ #define PCC_TIMER_INT_CLR	0x80
++
++#define PCC_TIMER_TIC_EN	0x01
++#define PCC_TIMER_COC_EN	0x02
+ #define PCC_TIMER_CLR_OVF	0x04
+ 
+ #define PCC_LEVEL_ABORT		0x07
+diff --git a/arch/m68k/kernel/sys_m68k.c b/arch/m68k/kernel/sys_m68k.c
+index 1c235d8f53f36..f55bdcb8e4f15 100644
+--- a/arch/m68k/kernel/sys_m68k.c
++++ b/arch/m68k/kernel/sys_m68k.c
+@@ -388,6 +388,8 @@ sys_cacheflush (unsigned long addr, int scope, int cache, unsigned long len)
+ 		ret = -EPERM;
+ 		if (!capable(CAP_SYS_ADMIN))
+ 			goto out;
++
++		mmap_read_lock(current->mm);
+ 	} else {
+ 		struct vm_area_struct *vma;
+ 
+diff --git a/arch/m68k/mvme147/config.c b/arch/m68k/mvme147/config.c
+index cfdc7f912e14e..e1e90c49a4962 100644
+--- a/arch/m68k/mvme147/config.c
++++ b/arch/m68k/mvme147/config.c
+@@ -114,8 +114,10 @@ static irqreturn_t mvme147_timer_int (int irq, void *dev_id)
+ 	unsigned long flags;
+ 
+ 	local_irq_save(flags);
+-	m147_pcc->t1_int_cntrl = PCC_TIMER_INT_CLR;
+-	m147_pcc->t1_cntrl = PCC_TIMER_CLR_OVF;
++	m147_pcc->t1_cntrl = PCC_TIMER_CLR_OVF | PCC_TIMER_COC_EN |
++			     PCC_TIMER_TIC_EN;
++	m147_pcc->t1_int_cntrl = PCC_INT_ENAB | PCC_TIMER_INT_CLR |
++				 PCC_LEVEL_TIMER1;
+ 	clk_total += PCC_TIMER_CYCLES;
+ 	legacy_timer_tick(1);
+ 	local_irq_restore(flags);
+@@ -133,10 +135,10 @@ void mvme147_sched_init (void)
+ 	/* Init the clock with a value */
+ 	/* The clock counter increments until 0xFFFF then reloads */
+ 	m147_pcc->t1_preload = PCC_TIMER_PRELOAD;
+-	m147_pcc->t1_cntrl = 0x0;	/* clear timer */
+-	m147_pcc->t1_cntrl = 0x3;	/* start timer */
+-	m147_pcc->t1_int_cntrl = PCC_TIMER_INT_CLR;  /* clear pending ints */
+-	m147_pcc->t1_int_cntrl = PCC_INT_ENAB|PCC_LEVEL_TIMER1;
++	m147_pcc->t1_cntrl = PCC_TIMER_CLR_OVF | PCC_TIMER_COC_EN |
++			     PCC_TIMER_TIC_EN;
++	m147_pcc->t1_int_cntrl = PCC_INT_ENAB | PCC_TIMER_INT_CLR |
++				 PCC_LEVEL_TIMER1;
+ 
+ 	clocksource_register_hz(&mvme147_clk, PCC_TIMER_CLOCK_FREQ);
+ }
+diff --git a/arch/m68k/mvme16x/config.c b/arch/m68k/mvme16x/config.c
+index 30357fe4ba6c8..b59593c7cfb9d 100644
+--- a/arch/m68k/mvme16x/config.c
++++ b/arch/m68k/mvme16x/config.c
+@@ -366,6 +366,7 @@ static u32 clk_total;
+ #define PCCTOVR1_COC_EN      0x02
+ #define PCCTOVR1_OVR_CLR     0x04
+ 
++#define PCCTIC1_INT_LEVEL    6
+ #define PCCTIC1_INT_CLR      0x08
+ #define PCCTIC1_INT_EN       0x10
+ 
+@@ -374,8 +375,8 @@ static irqreturn_t mvme16x_timer_int (int irq, void *dev_id)
+ 	unsigned long flags;
+ 
+ 	local_irq_save(flags);
+-	out_8(PCCTIC1, in_8(PCCTIC1) | PCCTIC1_INT_CLR);
+-	out_8(PCCTOVR1, PCCTOVR1_OVR_CLR);
++	out_8(PCCTOVR1, PCCTOVR1_OVR_CLR | PCCTOVR1_TIC_EN | PCCTOVR1_COC_EN);
++	out_8(PCCTIC1, PCCTIC1_INT_EN | PCCTIC1_INT_CLR | PCCTIC1_INT_LEVEL);
+ 	clk_total += PCC_TIMER_CYCLES;
+ 	legacy_timer_tick(1);
+ 	local_irq_restore(flags);
+@@ -389,14 +390,15 @@ void mvme16x_sched_init(void)
+     int irq;
+ 
+     /* Using PCCchip2 or MC2 chip tick timer 1 */
+-    out_be32(PCCTCNT1, 0);
+-    out_be32(PCCTCMP1, PCC_TIMER_CYCLES);
+-    out_8(PCCTOVR1, in_8(PCCTOVR1) | PCCTOVR1_TIC_EN | PCCTOVR1_COC_EN);
+-    out_8(PCCTIC1, PCCTIC1_INT_EN | 6);
+     if (request_irq(MVME16x_IRQ_TIMER, mvme16x_timer_int, IRQF_TIMER, "timer",
+                     NULL))
+ 	panic ("Couldn't register timer int");
+ 
++    out_be32(PCCTCNT1, 0);
++    out_be32(PCCTCMP1, PCC_TIMER_CYCLES);
++    out_8(PCCTOVR1, PCCTOVR1_OVR_CLR | PCCTOVR1_TIC_EN | PCCTOVR1_COC_EN);
++    out_8(PCCTIC1, PCCTIC1_INT_EN | PCCTIC1_INT_CLR | PCCTIC1_INT_LEVEL);
++
+     clocksource_register_hz(&mvme16x_clk, PCC_TIMER_CLOCK_FREQ);
+ 
+     if (brdno == 0x0162 || brdno == 0x172)
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index d89efba3d8a44..e89d63cd92d10 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -6,6 +6,7 @@ config MIPS
+ 	select ARCH_BINFMT_ELF_STATE if MIPS_FP_SUPPORT
+ 	select ARCH_HAS_FORTIFY_SOURCE
+ 	select ARCH_HAS_KCOV
++	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE if !EVA
+ 	select ARCH_HAS_PTE_SPECIAL if !(32BIT && CPU_HAS_RIXI)
+ 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+ 	select ARCH_HAS_UBSAN_SANITIZE_ALL
+diff --git a/arch/mips/boot/dts/brcm/bcm3368.dtsi b/arch/mips/boot/dts/brcm/bcm3368.dtsi
+index 69cbef4723775..d4b2b430dad01 100644
+--- a/arch/mips/boot/dts/brcm/bcm3368.dtsi
++++ b/arch/mips/boot/dts/brcm/bcm3368.dtsi
+@@ -59,7 +59,7 @@
+ 
+ 		periph_cntl: syscon@fff8c008 {
+ 			compatible = "syscon";
+-			reg = <0xfff8c000 0x4>;
++			reg = <0xfff8c008 0x4>;
+ 			native-endian;
+ 		};
+ 
+diff --git a/arch/mips/boot/dts/brcm/bcm63268.dtsi b/arch/mips/boot/dts/brcm/bcm63268.dtsi
+index e0021ff9f144d..9405944368726 100644
+--- a/arch/mips/boot/dts/brcm/bcm63268.dtsi
++++ b/arch/mips/boot/dts/brcm/bcm63268.dtsi
+@@ -59,7 +59,7 @@
+ 
+ 		periph_cntl: syscon@10000008 {
+ 			compatible = "syscon";
+-			reg = <0x10000000 0xc>;
++			reg = <0x10000008 0x4>;
+ 			native-endian;
+ 		};
+ 
+diff --git a/arch/mips/boot/dts/brcm/bcm6358.dtsi b/arch/mips/boot/dts/brcm/bcm6358.dtsi
+index 9d93e7f5e6fc7..d79c88c2fc9ca 100644
+--- a/arch/mips/boot/dts/brcm/bcm6358.dtsi
++++ b/arch/mips/boot/dts/brcm/bcm6358.dtsi
+@@ -59,7 +59,7 @@
+ 
+ 		periph_cntl: syscon@fffe0008 {
+ 			compatible = "syscon";
+-			reg = <0xfffe0000 0x4>;
++			reg = <0xfffe0008 0x4>;
+ 			native-endian;
+ 		};
+ 
+diff --git a/arch/mips/boot/dts/brcm/bcm6362.dtsi b/arch/mips/boot/dts/brcm/bcm6362.dtsi
+index eb10341b75bae..8a21cb761ffd4 100644
+--- a/arch/mips/boot/dts/brcm/bcm6362.dtsi
++++ b/arch/mips/boot/dts/brcm/bcm6362.dtsi
+@@ -59,7 +59,7 @@
+ 
+ 		periph_cntl: syscon@10000008 {
+ 			compatible = "syscon";
+-			reg = <0x10000000 0xc>;
++			reg = <0x10000008 0x4>;
+ 			native-endian;
+ 		};
+ 
+diff --git a/arch/mips/boot/dts/brcm/bcm6368.dtsi b/arch/mips/boot/dts/brcm/bcm6368.dtsi
+index 52c19f40b9cca..8e87867ebc04a 100644
+--- a/arch/mips/boot/dts/brcm/bcm6368.dtsi
++++ b/arch/mips/boot/dts/brcm/bcm6368.dtsi
+@@ -59,7 +59,7 @@
+ 
+ 		periph_cntl: syscon@100000008 {
+ 			compatible = "syscon";
+-			reg = <0x10000000 0xc>;
++			reg = <0x10000008 0x4>;
+ 			native-endian;
+ 		};
+ 
+diff --git a/arch/mips/crypto/poly1305-glue.c b/arch/mips/crypto/poly1305-glue.c
+index fc881b46d9111..bc6110fb98e0a 100644
+--- a/arch/mips/crypto/poly1305-glue.c
++++ b/arch/mips/crypto/poly1305-glue.c
+@@ -17,7 +17,7 @@ asmlinkage void poly1305_init_mips(void *state, const u8 *key);
+ asmlinkage void poly1305_blocks_mips(void *state, const u8 *src, u32 len, u32 hibit);
+ asmlinkage void poly1305_emit_mips(void *state, u8 *digest, const u32 *nonce);
+ 
+-void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 *key)
++void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 key[POLY1305_KEY_SIZE])
+ {
+ 	poly1305_init_mips(&dctx->h, key);
+ 	dctx->s[0] = get_unaligned_le32(key + 16);
+diff --git a/arch/mips/generic/board-boston.its.S b/arch/mips/generic/board-boston.its.S
+index a7f51f97b9102..c45ad27594218 100644
+--- a/arch/mips/generic/board-boston.its.S
++++ b/arch/mips/generic/board-boston.its.S
+@@ -1,22 +1,22 @@
+ / {
+ 	images {
+-		fdt@boston {
++		fdt-boston {
+ 			description = "img,boston Device Tree";
+ 			data = /incbin/("boot/dts/img/boston.dtb");
+ 			type = "flat_dt";
+ 			arch = "mips";
+ 			compression = "none";
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+ 	};
+ 
+ 	configurations {
+-		conf@boston {
++		conf-boston {
+ 			description = "Boston Linux kernel";
+-			kernel = "kernel@0";
+-			fdt = "fdt@boston";
++			kernel = "kernel";
++			fdt = "fdt-boston";
+ 		};
+ 	};
+ };
+diff --git a/arch/mips/generic/board-jaguar2.its.S b/arch/mips/generic/board-jaguar2.its.S
+index fb0e589eeff71..c2b8d479b26cd 100644
+--- a/arch/mips/generic/board-jaguar2.its.S
++++ b/arch/mips/generic/board-jaguar2.its.S
+@@ -1,23 +1,23 @@
+ /* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
+ / {
+ 	images {
+-		fdt@jaguar2_pcb110 {
++		fdt-jaguar2_pcb110 {
+ 			description = "MSCC Jaguar2 PCB110 Device Tree";
+ 			data = /incbin/("boot/dts/mscc/jaguar2_pcb110.dtb");
+ 			type = "flat_dt";
+ 			arch = "mips";
+ 			compression = "none";
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+-		fdt@jaguar2_pcb111 {
++		fdt-jaguar2_pcb111 {
+ 			description = "MSCC Jaguar2 PCB111 Device Tree";
+ 			data = /incbin/("boot/dts/mscc/jaguar2_pcb111.dtb");
+ 			type = "flat_dt";
+ 			arch = "mips";
+ 			compression = "none";
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+@@ -26,14 +26,14 @@
+ 	configurations {
+ 		pcb110 {
+ 			description = "Jaguar2 Linux kernel";
+-			kernel = "kernel@0";
+-			fdt = "fdt@jaguar2_pcb110";
++			kernel = "kernel";
++			fdt = "fdt-jaguar2_pcb110";
+ 			ramdisk = "ramdisk";
+ 		};
+ 		pcb111 {
+ 			description = "Jaguar2 Linux kernel";
+-			kernel = "kernel@0";
+-			fdt = "fdt@jaguar2_pcb111";
++			kernel = "kernel";
++			fdt = "fdt-jaguar2_pcb111";
+ 			ramdisk = "ramdisk";
+ 		};
+ 	};
+diff --git a/arch/mips/generic/board-luton.its.S b/arch/mips/generic/board-luton.its.S
+index 39a543f62f258..bd9837c9af976 100644
+--- a/arch/mips/generic/board-luton.its.S
++++ b/arch/mips/generic/board-luton.its.S
+@@ -1,13 +1,13 @@
+ /* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
+ / {
+ 	images {
+-		fdt@luton_pcb091 {
++		fdt-luton_pcb091 {
+ 			description = "MSCC Luton PCB091 Device Tree";
+ 			data = /incbin/("boot/dts/mscc/luton_pcb091.dtb");
+ 			type = "flat_dt";
+ 			arch = "mips";
+ 			compression = "none";
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+@@ -16,8 +16,8 @@
+ 	configurations {
+ 		pcb091 {
+ 			description = "Luton Linux kernel";
+-			kernel = "kernel@0";
+-			fdt = "fdt@luton_pcb091";
++			kernel = "kernel";
++			fdt = "fdt-luton_pcb091";
+ 		};
+ 	};
+ };
+diff --git a/arch/mips/generic/board-ni169445.its.S b/arch/mips/generic/board-ni169445.its.S
+index e4cb4f95a8cc1..0a2e8f7a8526f 100644
+--- a/arch/mips/generic/board-ni169445.its.S
++++ b/arch/mips/generic/board-ni169445.its.S
+@@ -1,22 +1,22 @@
+ / {
+ 	images {
+-		fdt@ni169445 {
++		fdt-ni169445 {
+ 			description = "NI 169445 device tree";
+ 			data = /incbin/("boot/dts/ni/169445.dtb");
+ 			type = "flat_dt";
+ 			arch = "mips";
+ 			compression = "none";
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+ 	};
+ 
+ 	configurations {
+-		conf@ni169445 {
++		conf-ni169445 {
+ 			description = "NI 169445 Linux Kernel";
+-			kernel = "kernel@0";
+-			fdt = "fdt@ni169445";
++			kernel = "kernel";
++			fdt = "fdt-ni169445";
+ 		};
+ 	};
+ };
+diff --git a/arch/mips/generic/board-ocelot.its.S b/arch/mips/generic/board-ocelot.its.S
+index 3da23988149a6..8c7e3a1b68d3d 100644
+--- a/arch/mips/generic/board-ocelot.its.S
++++ b/arch/mips/generic/board-ocelot.its.S
+@@ -1,40 +1,40 @@
+ /* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
+ / {
+ 	images {
+-		fdt@ocelot_pcb123 {
++		fdt-ocelot_pcb123 {
+ 			description = "MSCC Ocelot PCB123 Device Tree";
+ 			data = /incbin/("boot/dts/mscc/ocelot_pcb123.dtb");
+ 			type = "flat_dt";
+ 			arch = "mips";
+ 			compression = "none";
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+ 
+-		fdt@ocelot_pcb120 {
++		fdt-ocelot_pcb120 {
+ 			description = "MSCC Ocelot PCB120 Device Tree";
+ 			data = /incbin/("boot/dts/mscc/ocelot_pcb120.dtb");
+ 			type = "flat_dt";
+ 			arch = "mips";
+ 			compression = "none";
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+ 	};
+ 
+ 	configurations {
+-		conf@ocelot_pcb123 {
++		conf-ocelot_pcb123 {
+ 			description = "Ocelot Linux kernel";
+-			kernel = "kernel@0";
+-			fdt = "fdt@ocelot_pcb123";
++			kernel = "kernel";
++			fdt = "fdt-ocelot_pcb123";
+ 		};
+ 
+-		conf@ocelot_pcb120 {
++		conf-ocelot_pcb120 {
+ 			description = "Ocelot Linux kernel";
+-			kernel = "kernel@0";
+-			fdt = "fdt@ocelot_pcb120";
++			kernel = "kernel";
++			fdt = "fdt-ocelot_pcb120";
+ 		};
+ 	};
+ };
+diff --git a/arch/mips/generic/board-serval.its.S b/arch/mips/generic/board-serval.its.S
+index 4ea4fc9d757f3..dde833efe980a 100644
+--- a/arch/mips/generic/board-serval.its.S
++++ b/arch/mips/generic/board-serval.its.S
+@@ -1,13 +1,13 @@
+ /* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
+ / {
+ 	images {
+-		fdt@serval_pcb105 {
++		fdt-serval_pcb105 {
+ 			description = "MSCC Serval PCB105 Device Tree";
+ 			data = /incbin/("boot/dts/mscc/serval_pcb105.dtb");
+ 			type = "flat_dt";
+ 			arch = "mips";
+ 			compression = "none";
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+@@ -16,8 +16,8 @@
+ 	configurations {
+ 		pcb105 {
+ 			description = "Serval Linux kernel";
+-			kernel = "kernel@0";
+-			fdt = "fdt@serval_pcb105";
++			kernel = "kernel";
++			fdt = "fdt-serval_pcb105";
+ 			ramdisk = "ramdisk";
+ 		};
+ 	};
+diff --git a/arch/mips/generic/board-xilfpga.its.S b/arch/mips/generic/board-xilfpga.its.S
+index a2e773d3f14f4..08c1e900eb4ed 100644
+--- a/arch/mips/generic/board-xilfpga.its.S
++++ b/arch/mips/generic/board-xilfpga.its.S
+@@ -1,22 +1,22 @@
+ / {
+ 	images {
+-		fdt@xilfpga {
++		fdt-xilfpga {
+ 			description = "MIPSfpga (xilfpga) Device Tree";
+ 			data = /incbin/("boot/dts/xilfpga/nexys4ddr.dtb");
+ 			type = "flat_dt";
+ 			arch = "mips";
+ 			compression = "none";
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+ 	};
+ 
+ 	configurations {
+-		conf@xilfpga {
++		conf-xilfpga {
+ 			description = "MIPSfpga Linux kernel";
+-			kernel = "kernel@0";
+-			fdt = "fdt@xilfpga";
++			kernel = "kernel";
++			fdt = "fdt-xilfpga";
+ 		};
+ 	};
+ };
+diff --git a/arch/mips/generic/vmlinux.its.S b/arch/mips/generic/vmlinux.its.S
+index 1a08438fd8930..3e254676540f4 100644
+--- a/arch/mips/generic/vmlinux.its.S
++++ b/arch/mips/generic/vmlinux.its.S
+@@ -6,7 +6,7 @@
+ 	#address-cells = <ADDR_CELLS>;
+ 
+ 	images {
+-		kernel@0 {
++		kernel {
+ 			description = KERNEL_NAME;
+ 			data = /incbin/(VMLINUX_BINARY);
+ 			type = "kernel";
+@@ -15,18 +15,18 @@
+ 			compression = VMLINUX_COMPRESSION;
+ 			load = /bits/ ADDR_BITS <VMLINUX_LOAD_ADDRESS>;
+ 			entry = /bits/ ADDR_BITS <VMLINUX_ENTRY_ADDRESS>;
+-			hash@0 {
++			hash {
+ 				algo = "sha1";
+ 			};
+ 		};
+ 	};
+ 
+ 	configurations {
+-		default = "conf@default";
++		default = "conf-default";
+ 
+-		conf@default {
++		conf-default {
+ 			description = "Generic Linux kernel";
+-			kernel = "kernel@0";
++			kernel = "kernel";
+ 		};
+ 	};
+ };
+diff --git a/arch/mips/include/asm/asmmacro.h b/arch/mips/include/asm/asmmacro.h
+index 86f2323ebe6bc..ca83ada7015f5 100644
+--- a/arch/mips/include/asm/asmmacro.h
++++ b/arch/mips/include/asm/asmmacro.h
+@@ -44,8 +44,7 @@
+ 	.endm
+ #endif
+ 
+-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR5) || \
+-    defined(CONFIG_CPU_MIPSR6)
++#ifdef CONFIG_CPU_HAS_DIEI
+ 	.macro	local_irq_enable reg=t0
+ 	ei
+ 	irq_enable_hazard
+diff --git a/arch/mips/loongson64/init.c b/arch/mips/loongson64/init.c
+index cfa788bca8719..1c664b23c0f90 100644
+--- a/arch/mips/loongson64/init.c
++++ b/arch/mips/loongson64/init.c
+@@ -126,7 +126,7 @@ static int __init add_legacy_isa_io(struct fwnode_handle *fwnode, resource_size_
+ 		return -ENOMEM;
+ 
+ 	range->fwnode = fwnode;
+-	range->size = size;
++	range->size = size = round_up(size, PAGE_SIZE);
+ 	range->hw_start = hw_start;
+ 	range->flags = LOGIC_PIO_CPU_MMIO;
+ 
+diff --git a/arch/mips/pci/pci-legacy.c b/arch/mips/pci/pci-legacy.c
+index 39052de915f34..3a909194284a6 100644
+--- a/arch/mips/pci/pci-legacy.c
++++ b/arch/mips/pci/pci-legacy.c
+@@ -166,8 +166,13 @@ void pci_load_of_ranges(struct pci_controller *hose, struct device_node *node)
+ 			res = hose->mem_resource;
+ 			break;
+ 		}
+-		if (res != NULL)
+-			of_pci_range_to_resource(&range, node, res);
++		if (res != NULL) {
++			res->name = node->full_name;
++			res->flags = range.flags;
++			res->start = range.cpu_addr;
++			res->end = range.cpu_addr + range.size - 1;
++			res->parent = res->child = res->sibling = NULL;
++		}
+ 	}
+ }
+ 
+diff --git a/arch/mips/pci/pci-mt7620.c b/arch/mips/pci/pci-mt7620.c
+index d360616037525..e032932348d6f 100644
+--- a/arch/mips/pci/pci-mt7620.c
++++ b/arch/mips/pci/pci-mt7620.c
+@@ -30,6 +30,7 @@
+ #define RALINK_GPIOMODE			0x60
+ 
+ #define PPLL_CFG1			0x9c
++#define PPLL_LD				BIT(23)
+ 
+ #define PPLL_DRV			0xa0
+ #define PDRV_SW_SET			BIT(31)
+@@ -239,8 +240,8 @@ static int mt7620_pci_hw_init(struct platform_device *pdev)
+ 	rt_sysc_m32(0, RALINK_PCIE0_CLK_EN, RALINK_CLKCFG1);
+ 	mdelay(100);
+ 
+-	if (!(rt_sysc_r32(PPLL_CFG1) & PDRV_SW_SET)) {
+-		dev_err(&pdev->dev, "MT7620 PPLL unlock\n");
++	if (!(rt_sysc_r32(PPLL_CFG1) & PPLL_LD)) {
++		dev_err(&pdev->dev, "pcie PLL not locked, aborting init\n");
+ 		reset_control_assert(rstpcie0);
+ 		rt_sysc_m32(RALINK_PCIE0_CLK_EN, 0, RALINK_CLKCFG1);
+ 		return -1;
+diff --git a/arch/mips/pci/pci-rt2880.c b/arch/mips/pci/pci-rt2880.c
+index e1f12e3981363..f1538d2be89e5 100644
+--- a/arch/mips/pci/pci-rt2880.c
++++ b/arch/mips/pci/pci-rt2880.c
+@@ -180,7 +180,6 @@ static inline void rt2880_pci_write_u32(unsigned long reg, u32 val)
+ 
+ int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+ {
+-	u16 cmd;
+ 	int irq = -1;
+ 
+ 	if (dev->bus->number != 0)
+@@ -188,8 +187,6 @@ int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+ 
+ 	switch (PCI_SLOT(dev->devfn)) {
+ 	case 0x00:
+-		rt2880_pci_write_u32(PCI_BASE_ADDRESS_0, 0x08000000);
+-		(void) rt2880_pci_read_u32(PCI_BASE_ADDRESS_0);
+ 		break;
+ 	case 0x11:
+ 		irq = RT288X_CPU_IRQ_PCI;
+@@ -201,16 +198,6 @@ int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+ 		break;
+ 	}
+ 
+-	pci_write_config_byte((struct pci_dev *) dev,
+-		PCI_CACHE_LINE_SIZE, 0x14);
+-	pci_write_config_byte((struct pci_dev *) dev, PCI_LATENCY_TIMER, 0xFF);
+-	pci_read_config_word((struct pci_dev *) dev, PCI_COMMAND, &cmd);
+-	cmd |= PCI_COMMAND_MASTER | PCI_COMMAND_IO | PCI_COMMAND_MEMORY |
+-		PCI_COMMAND_INVALIDATE | PCI_COMMAND_FAST_BACK |
+-		PCI_COMMAND_SERR | PCI_COMMAND_WAIT | PCI_COMMAND_PARITY;
+-	pci_write_config_word((struct pci_dev *) dev, PCI_COMMAND, cmd);
+-	pci_write_config_byte((struct pci_dev *) dev, PCI_INTERRUPT_LINE,
+-			      dev->irq);
+ 	return irq;
+ }
+ 
+@@ -251,6 +238,30 @@ static int rt288x_pci_probe(struct platform_device *pdev)
+ 
+ int pcibios_plat_dev_init(struct pci_dev *dev)
+ {
++	static bool slot0_init;
++
++	/*
++	 * Nobody seems to initialize slot 0, but this platform requires it, so
++	 * do it once when some other slot is being enabled. The PCI subsystem
++	 * should configure other slots properly, so no need to do anything
++	 * special for those.
++	 */
++	if (!slot0_init && dev->bus->number == 0) {
++		u16 cmd;
++		u32 bar0;
++
++		slot0_init = true;
++
++		pci_bus_write_config_dword(dev->bus, 0, PCI_BASE_ADDRESS_0,
++					   0x08000000);
++		pci_bus_read_config_dword(dev->bus, 0, PCI_BASE_ADDRESS_0,
++					  &bar0);
++
++		pci_bus_read_config_word(dev->bus, 0, PCI_COMMAND, &cmd);
++		cmd |= PCI_COMMAND_MASTER | PCI_COMMAND_IO | PCI_COMMAND_MEMORY;
++		pci_bus_write_config_word(dev->bus, 0, PCI_COMMAND, cmd);
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 386ae12d8523b..57c0ab71d51e7 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -224,7 +224,7 @@ config PPC
+ 	select HAVE_LIVEPATCH			if HAVE_DYNAMIC_FTRACE_WITH_REGS
+ 	select HAVE_MOD_ARCH_SPECIFIC
+ 	select HAVE_NMI				if PERF_EVENTS || (PPC64 && PPC_BOOK3S)
+-	select HAVE_HARDLOCKUP_DETECTOR_ARCH	if (PPC64 && PPC_BOOK3S)
++	select HAVE_HARDLOCKUP_DETECTOR_ARCH	if PPC64 && PPC_BOOK3S && SMP
+ 	select HAVE_OPTPROBES			if PPC64
+ 	select HAVE_PERF_EVENTS
+ 	select HAVE_PERF_EVENTS_NMI		if PPC64
+diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
+index ae084357994e8..6342f9da45455 100644
+--- a/arch/powerpc/Kconfig.debug
++++ b/arch/powerpc/Kconfig.debug
+@@ -353,6 +353,7 @@ config PPC_EARLY_DEBUG_CPM_ADDR
+ config FAIL_IOMMU
+ 	bool "Fault-injection capability for IOMMU"
+ 	depends on FAULT_INJECTION
++	depends on PCI || IBMVIO
+ 	help
+ 	  Provide fault-injection capability for IOMMU. Each device can
+ 	  be selectively enabled via the fail_iommu property.
+diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
+index 058601efbc8a3..b703330459b80 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
+@@ -7,6 +7,7 @@
+ #ifndef __ASSEMBLY__
+ #include <linux/mmdebug.h>
+ #include <linux/bug.h>
++#include <linux/sizes.h>
+ #endif
+ 
+ /*
+@@ -323,7 +324,8 @@ extern unsigned long pci_io_base;
+ #define  PHB_IO_END	(KERN_IO_START + FULL_IO_SIZE)
+ #define IOREMAP_BASE	(PHB_IO_END)
+ #define IOREMAP_START	(ioremap_bot)
+-#define IOREMAP_END	(KERN_IO_END)
++#define IOREMAP_END	(KERN_IO_END - FIXADDR_SIZE)
++#define FIXADDR_SIZE	SZ_32M
+ 
+ /* Advertise special mapping type for AGP */
+ #define HAVE_PAGE_AGP
+diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
+index c7813dc628fc9..59cab558e2f05 100644
+--- a/arch/powerpc/include/asm/book3s/64/radix.h
++++ b/arch/powerpc/include/asm/book3s/64/radix.h
+@@ -222,8 +222,10 @@ static inline void radix__set_pte_at(struct mm_struct *mm, unsigned long addr,
+ 	 * from ptesync, it should probably go into update_mmu_cache, rather
+ 	 * than set_pte_at (which is used to set ptes unrelated to faults).
+ 	 *
+-	 * Spurious faults to vmalloc region are not tolerated, so there is
+-	 * a ptesync in flush_cache_vmap.
++	 * Spurious faults from the kernel memory are not tolerated, so there
++	 * is a ptesync in flush_cache_vmap, and __map_kernel_page() follows
++	 * the pte update sequence from ISA Book III 6.10 Translation Table
++	 * Update Synchronization Requirements.
+ 	 */
+ }
+ 
+diff --git a/arch/powerpc/include/asm/fixmap.h b/arch/powerpc/include/asm/fixmap.h
+index 8d03c16a36635..947b5b9c44241 100644
+--- a/arch/powerpc/include/asm/fixmap.h
++++ b/arch/powerpc/include/asm/fixmap.h
+@@ -23,12 +23,17 @@
+ #include <asm/kmap_size.h>
+ #endif
+ 
++#ifdef CONFIG_PPC64
++#define FIXADDR_TOP	(IOREMAP_END + FIXADDR_SIZE)
++#else
++#define FIXADDR_SIZE	0
+ #ifdef CONFIG_KASAN
+ #include <asm/kasan.h>
+ #define FIXADDR_TOP	(KASAN_SHADOW_START - PAGE_SIZE)
+ #else
+ #define FIXADDR_TOP	((unsigned long)(-PAGE_SIZE))
+ #endif
++#endif
+ 
+ /*
+  * Here we define all the compile-time 'special' virtual
+@@ -50,6 +55,7 @@
+  */
+ enum fixed_addresses {
+ 	FIX_HOLE,
++#ifdef CONFIG_PPC32
+ 	/* reserve the top 128K for early debugging purposes */
+ 	FIX_EARLY_DEBUG_TOP = FIX_HOLE,
+ 	FIX_EARLY_DEBUG_BASE = FIX_EARLY_DEBUG_TOP+(ALIGN(SZ_128K, PAGE_SIZE)/PAGE_SIZE)-1,
+@@ -72,6 +78,7 @@ enum fixed_addresses {
+ 		       FIX_IMMR_SIZE,
+ #endif
+ 	/* FIX_PCIE_MCFG, */
++#endif /* CONFIG_PPC32 */
+ 	__end_of_permanent_fixed_addresses,
+ 
+ #define NR_FIX_BTMAPS		(SZ_256K / PAGE_SIZE)
+@@ -98,6 +105,8 @@ enum fixed_addresses {
+ static inline void __set_fixmap(enum fixed_addresses idx,
+ 				phys_addr_t phys, pgprot_t flags)
+ {
++	BUILD_BUG_ON(IS_ENABLED(CONFIG_PPC64) && __FIXADDR_SIZE > FIXADDR_SIZE);
++
+ 	if (__builtin_constant_p(idx))
+ 		BUILD_BUG_ON(idx >= __end_of_fixed_addresses);
+ 	else if (WARN_ON(idx >= __end_of_fixed_addresses))
+diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
+index 6cb8aa3571917..57cd3892bfe05 100644
+--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
+@@ -6,6 +6,8 @@
+  * the ppc64 non-hashed page table.
+  */
+ 
++#include <linux/sizes.h>
++
+ #include <asm/nohash/64/pgtable-4k.h>
+ #include <asm/barrier.h>
+ #include <asm/asm-const.h>
+@@ -54,7 +56,8 @@
+ #define  PHB_IO_END	(KERN_IO_START + FULL_IO_SIZE)
+ #define IOREMAP_BASE	(PHB_IO_END)
+ #define IOREMAP_START	(ioremap_bot)
+-#define IOREMAP_END	(KERN_VIRT_START + KERN_VIRT_SIZE)
++#define IOREMAP_END	(KERN_VIRT_START + KERN_VIRT_SIZE - FIXADDR_SIZE)
++#define FIXADDR_SIZE	SZ_32M
+ 
+ 
+ /*
+diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h
+index 7a13bc20f0a0c..47081a9e13ca4 100644
+--- a/arch/powerpc/include/asm/smp.h
++++ b/arch/powerpc/include/asm/smp.h
+@@ -121,6 +121,11 @@ static inline struct cpumask *cpu_sibling_mask(int cpu)
+ 	return per_cpu(cpu_sibling_map, cpu);
+ }
+ 
++static inline struct cpumask *cpu_core_mask(int cpu)
++{
++	return per_cpu(cpu_core_map, cpu);
++}
++
+ static inline struct cpumask *cpu_l2_cache_mask(int cpu)
+ {
+ 	return per_cpu(cpu_l2_cache_map, cpu);
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index 8482739d42f38..eddf362caedce 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -292,7 +292,7 @@ static void fadump_show_config(void)
+  * that is required for a kernel to boot successfully.
+  *
+  */
+-static inline u64 fadump_calculate_reserve_size(void)
++static __init u64 fadump_calculate_reserve_size(void)
+ {
+ 	u64 base, size, bootmem_min;
+ 	int ret;
+diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
+index c475a229a42ac..352346e14a084 100644
+--- a/arch/powerpc/kernel/interrupt.c
++++ b/arch/powerpc/kernel/interrupt.c
+@@ -34,11 +34,11 @@ notrace long system_call_exception(long r3, long r4, long r5,
+ 	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
+ 		BUG_ON(irq_soft_mask_return() != IRQS_ALL_DISABLED);
+ 
++	trace_hardirqs_off(); /* finish reconciling */
++
+ 	CT_WARN_ON(ct_state() == CONTEXT_KERNEL);
+ 	user_exit_irqoff();
+ 
+-	trace_hardirqs_off(); /* finish reconciling */
+-
+ 	if (!IS_ENABLED(CONFIG_BOOKE) && !IS_ENABLED(CONFIG_40x))
+ 		BUG_ON(!(regs->msr & MSR_RI));
+ 	BUG_ON(!(regs->msr & MSR_PR));
+diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
+index 9a4797d1d40d5..a8b2d6bfc1ca7 100644
+--- a/arch/powerpc/kernel/prom.c
++++ b/arch/powerpc/kernel/prom.c
+@@ -267,7 +267,7 @@ static struct feature_property {
+ };
+ 
+ #if defined(CONFIG_44x) && defined(CONFIG_PPC_FPU)
+-static inline void identical_pvr_fixup(unsigned long node)
++static __init void identical_pvr_fixup(unsigned long node)
+ {
+ 	unsigned int pvr;
+ 	const char *model = of_get_flat_dt_prop(node, "model", NULL);
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 5a4d59a1070d5..5c7ce1d506312 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -1057,17 +1057,12 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
+ 				local_memory_node(numa_cpu_lookup_table[cpu]));
+ 		}
+ #endif
+-		/*
+-		 * cpu_core_map is now more updated and exists only since
+-		 * its been exported for long. It only will have a snapshot
+-		 * of cpu_cpu_mask.
+-		 */
+-		cpumask_copy(per_cpu(cpu_core_map, cpu), cpu_cpu_mask(cpu));
+ 	}
+ 
+ 	/* Init the cpumasks so the boot CPU is related to itself */
+ 	cpumask_set_cpu(boot_cpuid, cpu_sibling_mask(boot_cpuid));
+ 	cpumask_set_cpu(boot_cpuid, cpu_l2_cache_mask(boot_cpuid));
++	cpumask_set_cpu(boot_cpuid, cpu_core_mask(boot_cpuid));
+ 
+ 	if (has_coregroup_support())
+ 		cpumask_set_cpu(boot_cpuid, cpu_coregroup_mask(boot_cpuid));
+@@ -1408,6 +1403,9 @@ static void remove_cpu_from_masks(int cpu)
+ 			set_cpus_unrelated(cpu, i, cpu_smallcore_mask);
+ 	}
+ 
++	for_each_cpu(i, cpu_core_mask(cpu))
++		set_cpus_unrelated(cpu, i, cpu_core_mask);
++
+ 	if (has_coregroup_support()) {
+ 		for_each_cpu(i, cpu_coregroup_mask(cpu))
+ 			set_cpus_unrelated(cpu, i, cpu_coregroup_mask);
+@@ -1468,8 +1466,11 @@ static void update_coregroup_mask(int cpu, cpumask_var_t *mask)
+ 
+ static void add_cpu_to_masks(int cpu)
+ {
++	struct cpumask *(*submask_fn)(int) = cpu_sibling_mask;
+ 	int first_thread = cpu_first_thread_sibling(cpu);
++	int chip_id = cpu_to_chip_id(cpu);
+ 	cpumask_var_t mask;
++	bool ret;
+ 	int i;
+ 
+ 	/*
+@@ -1485,12 +1486,36 @@ static void add_cpu_to_masks(int cpu)
+ 	add_cpu_to_smallcore_masks(cpu);
+ 
+ 	/* In CPU-hotplug path, hence use GFP_ATOMIC */
+-	alloc_cpumask_var_node(&mask, GFP_ATOMIC, cpu_to_node(cpu));
++	ret = alloc_cpumask_var_node(&mask, GFP_ATOMIC, cpu_to_node(cpu));
+ 	update_mask_by_l2(cpu, &mask);
+ 
+ 	if (has_coregroup_support())
+ 		update_coregroup_mask(cpu, &mask);
+ 
++	if (chip_id == -1 || !ret) {
++		cpumask_copy(per_cpu(cpu_core_map, cpu), cpu_cpu_mask(cpu));
++		goto out;
++	}
++
++	if (shared_caches)
++		submask_fn = cpu_l2_cache_mask;
++
++	/* Update core_mask with all the CPUs that are part of submask */
++	or_cpumasks_related(cpu, cpu, submask_fn, cpu_core_mask);
++
++	/* Skip all CPUs already part of current CPU core mask */
++	cpumask_andnot(mask, cpu_online_mask, cpu_core_mask(cpu));
++
++	for_each_cpu(i, mask) {
++		if (chip_id == cpu_to_chip_id(i)) {
++			or_cpumasks_related(cpu, i, submask_fn, cpu_core_mask);
++			cpumask_andnot(mask, mask, submask_fn(i));
++		} else {
++			cpumask_andnot(mask, mask, cpu_core_mask(i));
++		}
++	}
++
++out:
+ 	free_cpumask_var(mask);
+ }
+ 
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 13bad6bf4c958..208a053c9adfd 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -3728,7 +3728,10 @@ static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
+ 	vcpu->arch.dec_expires = dec + tb;
+ 	vcpu->cpu = -1;
+ 	vcpu->arch.thread_cpu = -1;
++	/* Save guest CTRL register, set runlatch to 1 */
+ 	vcpu->arch.ctrl = mfspr(SPRN_CTRLF);
++	if (!(vcpu->arch.ctrl & 1))
++		mtspr(SPRN_CTRLT, vcpu->arch.ctrl | 1);
+ 
+ 	vcpu->arch.iamr = mfspr(SPRN_IAMR);
+ 	vcpu->arch.pspb = mfspr(SPRN_PSPB);
+diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
+index 567e0c6b3978e..03819c259f0ab 100644
+--- a/arch/powerpc/mm/book3s64/hash_pgtable.c
++++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
+@@ -428,12 +428,14 @@ static bool hash__change_memory_range(unsigned long start, unsigned long end,
+ 
+ void hash__mark_rodata_ro(void)
+ {
+-	unsigned long start, end;
++	unsigned long start, end, pp;
+ 
+ 	start = (unsigned long)_stext;
+ 	end = (unsigned long)__init_begin;
+ 
+-	WARN_ON(!hash__change_memory_range(start, end, PP_RXXX));
++	pp = htab_convert_pte_flags(pgprot_val(PAGE_KERNEL_ROX), HPTE_USE_KERNEL_KEY);
++
++	WARN_ON(!hash__change_memory_range(start, end, pp));
+ }
+ 
+ void hash__mark_initmem_nx(void)
+diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
+index 581b20a2feaf6..7719995323c3f 100644
+--- a/arch/powerpc/mm/book3s64/hash_utils.c
++++ b/arch/powerpc/mm/book3s64/hash_utils.c
+@@ -1545,10 +1545,10 @@ DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
+ 	if (user_mode(regs) || (region_id == USER_REGION_ID))
+ 		access &= ~_PAGE_PRIVILEGED;
+ 
+-	if (regs->trap == 0x400)
++	if (TRAP(regs) == 0x400)
+ 		access |= _PAGE_EXEC;
+ 
+-	err = hash_page_mm(mm, ea, access, regs->trap, flags);
++	err = hash_page_mm(mm, ea, access, TRAP(regs), flags);
+ 	if (unlikely(err < 0)) {
+ 		// failed to instert a hash PTE due to an hypervisor error
+ 		if (user_mode(regs)) {
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index 98f0b243c1ab2..39d488a212a04 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -108,7 +108,7 @@ static int early_map_kernel_page(unsigned long ea, unsigned long pa,
+ 
+ set_the_pte:
+ 	set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags));
+-	smp_wmb();
++	asm volatile("ptesync": : :"memory");
+ 	return 0;
+ }
+ 
+@@ -168,7 +168,7 @@ static int __map_kernel_page(unsigned long ea, unsigned long pa,
+ 
+ set_the_pte:
+ 	set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags));
+-	smp_wmb();
++	asm volatile("ptesync": : :"memory");
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
+index 4e8ce6d852323..7a59a5c9aa5dc 100644
+--- a/arch/powerpc/mm/mem.c
++++ b/arch/powerpc/mm/mem.c
+@@ -54,7 +54,6 @@
+ 
+ #include <mm/mmu_decl.h>
+ 
+-static DEFINE_MUTEX(linear_mapping_mutex);
+ unsigned long long memory_limit;
+ bool init_mem_is_free;
+ 
+@@ -72,6 +71,7 @@ pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
+ EXPORT_SYMBOL(phys_mem_access_prot);
+ 
+ #ifdef CONFIG_MEMORY_HOTPLUG
++static DEFINE_MUTEX(linear_mapping_mutex);
+ 
+ #ifdef CONFIG_NUMA
+ int memory_add_physaddr_to_nid(u64 start)
+diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
+index e4f577da33d8b..8b5eeb6fb2fb3 100644
+--- a/arch/powerpc/perf/isa207-common.c
++++ b/arch/powerpc/perf/isa207-common.c
+@@ -447,8 +447,8 @@ ebb_bhrb:
+ 	 * EBB events are pinned & exclusive, so this should never actually
+ 	 * hit, but we leave it as a fallback in case.
+ 	 */
+-	mask  |= CNST_EBB_VAL(ebb);
+-	value |= CNST_EBB_MASK;
++	mask  |= CNST_EBB_MASK;
++	value |= CNST_EBB_VAL(ebb);
+ 
+ 	*maskp = mask;
+ 	*valp = value;
+diff --git a/arch/powerpc/perf/power10-events-list.h b/arch/powerpc/perf/power10-events-list.h
+index e45dafe818ed4..93be7197d2502 100644
+--- a/arch/powerpc/perf/power10-events-list.h
++++ b/arch/powerpc/perf/power10-events-list.h
+@@ -75,5 +75,5 @@ EVENT(PM_RUN_INST_CMPL_ALT,			0x00002);
+  *     thresh end (TE)
+  */
+ 
+-EVENT(MEM_LOADS,				0x34340401e0);
+-EVENT(MEM_STORES,				0x343c0401e0);
++EVENT(MEM_LOADS,				0x35340401e0);
++EVENT(MEM_STORES,				0x353c0401e0);
+diff --git a/arch/powerpc/platforms/52xx/lite5200_sleep.S b/arch/powerpc/platforms/52xx/lite5200_sleep.S
+index 11475c58ea431..afee8b1515a8e 100644
+--- a/arch/powerpc/platforms/52xx/lite5200_sleep.S
++++ b/arch/powerpc/platforms/52xx/lite5200_sleep.S
+@@ -181,7 +181,7 @@ sram_code:
+   udelay: /* r11 - tb_ticks_per_usec, r12 - usecs, overwrites r13 */
+ 	mullw	r12, r12, r11
+ 	mftb	r13	/* start */
+-	addi	r12, r13, r12 /* end */
++	add	r12, r13, r12 /* end */
+     1:
+ 	mftb	r13	/* current */
+ 	cmp	cr0, r13, r12
+diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
+index 9fc5217f0c8e5..836cbbe0ecc56 100644
+--- a/arch/powerpc/platforms/pseries/iommu.c
++++ b/arch/powerpc/platforms/pseries/iommu.c
+@@ -1229,7 +1229,7 @@ static u64 enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+ 	if (pmem_present) {
+ 		if (query.largest_available_block >=
+ 		    (1ULL << (MAX_PHYSMEM_BITS - page_shift)))
+-			len = MAX_PHYSMEM_BITS - page_shift;
++			len = MAX_PHYSMEM_BITS;
+ 		else
+ 			dev_info(&dev->dev, "Skipping ibm,pmemory");
+ 	}
+diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
+index 3805519a64697..cd38bd421f381 100644
+--- a/arch/powerpc/platforms/pseries/lpar.c
++++ b/arch/powerpc/platforms/pseries/lpar.c
+@@ -977,11 +977,13 @@ static void pSeries_lpar_hpte_updateboltedpp(unsigned long newpp,
+ 	slot = pSeries_lpar_hpte_find(vpn, psize, ssize);
+ 	BUG_ON(slot == -1);
+ 
+-	flags = newpp & 7;
++	flags = newpp & (HPTE_R_PP | HPTE_R_N);
+ 	if (mmu_has_feature(MMU_FTR_KERNEL_RO))
+ 		/* Move pp0 into bit 8 (IBM 55) */
+ 		flags |= (newpp & HPTE_R_PP0) >> 55;
+ 
++	flags |= ((newpp & HPTE_R_KEY_HI) >> 48) | (newpp & HPTE_R_KEY_LO);
++
+ 	lpar_rc = plpar_pte_protect(flags, slot, 0);
+ 
+ 	BUG_ON(lpar_rc != H_SUCCESS);
+diff --git a/arch/powerpc/platforms/pseries/pci_dlpar.c b/arch/powerpc/platforms/pseries/pci_dlpar.c
+index f9ae17e8a0f46..a8f9140a24fa3 100644
+--- a/arch/powerpc/platforms/pseries/pci_dlpar.c
++++ b/arch/powerpc/platforms/pseries/pci_dlpar.c
+@@ -50,6 +50,7 @@ EXPORT_SYMBOL_GPL(init_phb_dynamic);
+ int remove_phb_dynamic(struct pci_controller *phb)
+ {
+ 	struct pci_bus *b = phb->bus;
++	struct pci_host_bridge *host_bridge = to_pci_host_bridge(b->bridge);
+ 	struct resource *res;
+ 	int rc, i;
+ 
+@@ -76,7 +77,8 @@ int remove_phb_dynamic(struct pci_controller *phb)
+ 	/* Remove the PCI bus and unregister the bridge device from sysfs */
+ 	phb->bus = NULL;
+ 	pci_remove_bus(b);
+-	device_unregister(b->bridge);
++	host_bridge->bus = NULL;
++	device_unregister(&host_bridge->dev);
+ 
+ 	/* Now release the IO resource */
+ 	if (res->flags & IORESOURCE_IO)
+diff --git a/arch/powerpc/platforms/pseries/vio.c b/arch/powerpc/platforms/pseries/vio.c
+index 9cb4fc839fd5d..429053d0402ad 100644
+--- a/arch/powerpc/platforms/pseries/vio.c
++++ b/arch/powerpc/platforms/pseries/vio.c
+@@ -1285,6 +1285,10 @@ static int vio_bus_remove(struct device *dev)
+ int __vio_register_driver(struct vio_driver *viodrv, struct module *owner,
+ 			  const char *mod_name)
+ {
++	// vio_bus_type is only initialised for pseries
++	if (!machine_is(pseries))
++		return -ENODEV;
++
+ 	pr_debug("%s: driver %s registering\n", __func__, viodrv->name);
+ 
+ 	/* fill in 'struct driver' fields */
+diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
+index 595310e056f4d..5cacb632eb37a 100644
+--- a/arch/powerpc/sysdev/xive/common.c
++++ b/arch/powerpc/sysdev/xive/common.c
+@@ -253,17 +253,20 @@ notrace void xmon_xive_do_dump(int cpu)
+ 	xmon_printf("\n");
+ }
+ 
++static struct irq_data *xive_get_irq_data(u32 hw_irq)
++{
++	unsigned int irq = irq_find_mapping(xive_irq_domain, hw_irq);
++
++	return irq ? irq_get_irq_data(irq) : NULL;
++}
++
+ int xmon_xive_get_irq_config(u32 hw_irq, struct irq_data *d)
+ {
+-	struct irq_chip *chip = irq_data_get_irq_chip(d);
+ 	int rc;
+ 	u32 target;
+ 	u8 prio;
+ 	u32 lirq;
+ 
+-	if (!is_xive_irq(chip))
+-		return -EINVAL;
+-
+ 	rc = xive_ops->get_irq_config(hw_irq, &target, &prio, &lirq);
+ 	if (rc) {
+ 		xmon_printf("IRQ 0x%08x : no config rc=%d\n", hw_irq, rc);
+@@ -273,6 +276,9 @@ int xmon_xive_get_irq_config(u32 hw_irq, struct irq_data *d)
+ 	xmon_printf("IRQ 0x%08x : target=0x%x prio=%02x lirq=0x%x ",
+ 		    hw_irq, target, prio, lirq);
+ 
++	if (!d)
++		d = xive_get_irq_data(hw_irq);
++
+ 	if (d) {
+ 		struct xive_irq_data *xd = irq_data_get_irq_handler_data(d);
+ 		u64 val = xive_esb_read(xd, XIVE_ESB_GET);
+@@ -1599,6 +1605,8 @@ static void xive_debug_show_irq(struct seq_file *m, u32 hw_irq, struct irq_data
+ 	u32 target;
+ 	u8 prio;
+ 	u32 lirq;
++	struct xive_irq_data *xd;
++	u64 val;
+ 
+ 	if (!is_xive_irq(chip))
+ 		return;
+@@ -1612,17 +1620,14 @@ static void xive_debug_show_irq(struct seq_file *m, u32 hw_irq, struct irq_data
+ 	seq_printf(m, "IRQ 0x%08x : target=0x%x prio=%02x lirq=0x%x ",
+ 		   hw_irq, target, prio, lirq);
+ 
+-	if (d) {
+-		struct xive_irq_data *xd = irq_data_get_irq_handler_data(d);
+-		u64 val = xive_esb_read(xd, XIVE_ESB_GET);
+-
+-		seq_printf(m, "flags=%c%c%c PQ=%c%c",
+-			   xd->flags & XIVE_IRQ_FLAG_STORE_EOI ? 'S' : ' ',
+-			   xd->flags & XIVE_IRQ_FLAG_LSI ? 'L' : ' ',
+-			   xd->flags & XIVE_IRQ_FLAG_H_INT_ESB ? 'H' : ' ',
+-			   val & XIVE_ESB_VAL_P ? 'P' : '-',
+-			   val & XIVE_ESB_VAL_Q ? 'Q' : '-');
+-	}
++	xd = irq_data_get_irq_handler_data(d);
++	val = xive_esb_read(xd, XIVE_ESB_GET);
++	seq_printf(m, "flags=%c%c%c PQ=%c%c",
++		   xd->flags & XIVE_IRQ_FLAG_STORE_EOI ? 'S' : ' ',
++		   xd->flags & XIVE_IRQ_FLAG_LSI ? 'L' : ' ',
++		   xd->flags & XIVE_IRQ_FLAG_H_INT_ESB ? 'H' : ' ',
++		   val & XIVE_ESB_VAL_P ? 'P' : '-',
++		   val & XIVE_ESB_VAL_Q ? 'Q' : '-');
+ 	seq_puts(m, "\n");
+ }
+ 
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 72134f9f6ff52..5aab59ad56881 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -937,9 +937,9 @@ static int __init setup_hwcaps(void)
+ 	if (MACHINE_HAS_VX) {
+ 		elf_hwcap |= HWCAP_S390_VXRS;
+ 		if (test_facility(134))
+-			elf_hwcap |= HWCAP_S390_VXRS_EXT;
+-		if (test_facility(135))
+ 			elf_hwcap |= HWCAP_S390_VXRS_BCD;
++		if (test_facility(135))
++			elf_hwcap |= HWCAP_S390_VXRS_EXT;
+ 		if (test_facility(148))
+ 			elf_hwcap |= HWCAP_S390_VXRS_EXT2;
+ 		if (test_facility(152))
+diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
+index 6d6b57059493e..b9f85b2dc053f 100644
+--- a/arch/s390/kvm/gaccess.c
++++ b/arch/s390/kvm/gaccess.c
+@@ -976,7 +976,9 @@ int kvm_s390_check_low_addr_prot_real(struct kvm_vcpu *vcpu, unsigned long gra)
+  * kvm_s390_shadow_tables - walk the guest page table and create shadow tables
+  * @sg: pointer to the shadow guest address space structure
+  * @saddr: faulting address in the shadow gmap
+- * @pgt: pointer to the page table address result
++ * @pgt: pointer to the beginning of the page table for the given address if
++ *	 successful (return value 0), or to the first invalid DAT entry in
++ *	 case of exceptions (return value > 0)
+  * @fake: pgt references contiguous guest memory block, not a pgtable
+  */
+ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
+@@ -1034,6 +1036,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
+ 			rfte.val = ptr;
+ 			goto shadow_r2t;
+ 		}
++		*pgt = ptr + vaddr.rfx * 8;
+ 		rc = gmap_read_table(parent, ptr + vaddr.rfx * 8, &rfte.val);
+ 		if (rc)
+ 			return rc;
+@@ -1060,6 +1063,7 @@ shadow_r2t:
+ 			rste.val = ptr;
+ 			goto shadow_r3t;
+ 		}
++		*pgt = ptr + vaddr.rsx * 8;
+ 		rc = gmap_read_table(parent, ptr + vaddr.rsx * 8, &rste.val);
+ 		if (rc)
+ 			return rc;
+@@ -1087,6 +1091,7 @@ shadow_r3t:
+ 			rtte.val = ptr;
+ 			goto shadow_sgt;
+ 		}
++		*pgt = ptr + vaddr.rtx * 8;
+ 		rc = gmap_read_table(parent, ptr + vaddr.rtx * 8, &rtte.val);
+ 		if (rc)
+ 			return rc;
+@@ -1123,6 +1128,7 @@ shadow_sgt:
+ 			ste.val = ptr;
+ 			goto shadow_pgt;
+ 		}
++		*pgt = ptr + vaddr.sx * 8;
+ 		rc = gmap_read_table(parent, ptr + vaddr.sx * 8, &ste.val);
+ 		if (rc)
+ 			return rc;
+@@ -1157,6 +1163,8 @@ shadow_pgt:
+  * @vcpu: virtual cpu
+  * @sg: pointer to the shadow guest address space structure
+  * @saddr: faulting address in the shadow gmap
++ * @datptr: will contain the address of the faulting DAT table entry, or of
++ *	    the valid leaf, plus some flags
+  *
+  * Returns: - 0 if the shadow fault was successfully resolved
+  *	    - > 0 (pgm exception code) on exceptions while faulting
+@@ -1165,11 +1173,11 @@ shadow_pgt:
+  *	    - -ENOMEM if out of memory
+  */
+ int kvm_s390_shadow_fault(struct kvm_vcpu *vcpu, struct gmap *sg,
+-			  unsigned long saddr)
++			  unsigned long saddr, unsigned long *datptr)
+ {
+ 	union vaddress vaddr;
+ 	union page_table_entry pte;
+-	unsigned long pgt;
++	unsigned long pgt = 0;
+ 	int dat_protection, fake;
+ 	int rc;
+ 
+@@ -1191,8 +1199,20 @@ int kvm_s390_shadow_fault(struct kvm_vcpu *vcpu, struct gmap *sg,
+ 		pte.val = pgt + vaddr.px * PAGE_SIZE;
+ 		goto shadow_page;
+ 	}
+-	if (!rc)
+-		rc = gmap_read_table(sg->parent, pgt + vaddr.px * 8, &pte.val);
++
++	switch (rc) {
++	case PGM_SEGMENT_TRANSLATION:
++	case PGM_REGION_THIRD_TRANS:
++	case PGM_REGION_SECOND_TRANS:
++	case PGM_REGION_FIRST_TRANS:
++		pgt |= PEI_NOT_PTE;
++		break;
++	case 0:
++		pgt += vaddr.px * 8;
++		rc = gmap_read_table(sg->parent, pgt, &pte.val);
++	}
++	if (datptr)
++		*datptr = pgt | dat_protection * PEI_DAT_PROT;
+ 	if (!rc && pte.i)
+ 		rc = PGM_PAGE_TRANSLATION;
+ 	if (!rc && pte.z)
+diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
+index f4c51756c4623..7c72a5e3449f8 100644
+--- a/arch/s390/kvm/gaccess.h
++++ b/arch/s390/kvm/gaccess.h
+@@ -18,17 +18,14 @@
+ 
+ /**
+  * kvm_s390_real_to_abs - convert guest real address to guest absolute address
+- * @vcpu - guest virtual cpu
++ * @prefix - guest prefix
+  * @gra - guest real address
+  *
+  * Returns the guest absolute address that corresponds to the passed guest real
+- * address @gra of a virtual guest cpu by applying its prefix.
++ * address @gra of by applying the given prefix.
+  */
+-static inline unsigned long kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
+-						 unsigned long gra)
++static inline unsigned long _kvm_s390_real_to_abs(u32 prefix, unsigned long gra)
+ {
+-	unsigned long prefix  = kvm_s390_get_prefix(vcpu);
+-
+ 	if (gra < 2 * PAGE_SIZE)
+ 		gra += prefix;
+ 	else if (gra >= prefix && gra < prefix + 2 * PAGE_SIZE)
+@@ -36,6 +33,43 @@ static inline unsigned long kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
+ 	return gra;
+ }
+ 
++/**
++ * kvm_s390_real_to_abs - convert guest real address to guest absolute address
++ * @vcpu - guest virtual cpu
++ * @gra - guest real address
++ *
++ * Returns the guest absolute address that corresponds to the passed guest real
++ * address @gra of a virtual guest cpu by applying its prefix.
++ */
++static inline unsigned long kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
++						 unsigned long gra)
++{
++	return _kvm_s390_real_to_abs(kvm_s390_get_prefix(vcpu), gra);
++}
++
++/**
++ * _kvm_s390_logical_to_effective - convert guest logical to effective address
++ * @psw: psw of the guest
++ * @ga: guest logical address
++ *
++ * Convert a guest logical address to an effective address by applying the
++ * rules of the addressing mode defined by bits 31 and 32 of the given PSW
++ * (extendended/basic addressing mode).
++ *
++ * Depending on the addressing mode, the upper 40 bits (24 bit addressing
++ * mode), 33 bits (31 bit addressing mode) or no bits (64 bit addressing
++ * mode) of @ga will be zeroed and the remaining bits will be returned.
++ */
++static inline unsigned long _kvm_s390_logical_to_effective(psw_t *psw,
++							   unsigned long ga)
++{
++	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_64BIT)
++		return ga;
++	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_31BIT)
++		return ga & ((1UL << 31) - 1);
++	return ga & ((1UL << 24) - 1);
++}
++
+ /**
+  * kvm_s390_logical_to_effective - convert guest logical to effective address
+  * @vcpu: guest virtual cpu
+@@ -52,13 +86,7 @@ static inline unsigned long kvm_s390_real_to_abs(struct kvm_vcpu *vcpu,
+ static inline unsigned long kvm_s390_logical_to_effective(struct kvm_vcpu *vcpu,
+ 							  unsigned long ga)
+ {
+-	psw_t *psw = &vcpu->arch.sie_block->gpsw;
+-
+-	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_64BIT)
+-		return ga;
+-	if (psw_bits(*psw).eaba == PSW_BITS_AMODE_31BIT)
+-		return ga & ((1UL << 31) - 1);
+-	return ga & ((1UL << 24) - 1);
++	return _kvm_s390_logical_to_effective(&vcpu->arch.sie_block->gpsw, ga);
+ }
+ 
+ /*
+@@ -359,7 +387,11 @@ void ipte_unlock(struct kvm_vcpu *vcpu);
+ int ipte_lock_held(struct kvm_vcpu *vcpu);
+ int kvm_s390_check_low_addr_prot_real(struct kvm_vcpu *vcpu, unsigned long gra);
+ 
++/* MVPG PEI indication bits */
++#define PEI_DAT_PROT 2
++#define PEI_NOT_PTE 4
++
+ int kvm_s390_shadow_fault(struct kvm_vcpu *vcpu, struct gmap *shadow,
+-			  unsigned long saddr);
++			  unsigned long saddr, unsigned long *datptr);
+ 
+ #endif /* __KVM_S390_GACCESS_H */
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 2f09e9d7dc95a..24ad447e648c1 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -4307,16 +4307,16 @@ static void store_regs_fmt2(struct kvm_vcpu *vcpu)
+ 	kvm_run->s.regs.bpbc = (vcpu->arch.sie_block->fpf & FPF_BPBC) == FPF_BPBC;
+ 	kvm_run->s.regs.diag318 = vcpu->arch.diag318_info.val;
+ 	if (MACHINE_HAS_GS) {
++		preempt_disable();
+ 		__ctl_set_bit(2, 4);
+ 		if (vcpu->arch.gs_enabled)
+ 			save_gs_cb(current->thread.gs_cb);
+-		preempt_disable();
+ 		current->thread.gs_cb = vcpu->arch.host_gscb;
+ 		restore_gs_cb(vcpu->arch.host_gscb);
+-		preempt_enable();
+ 		if (!vcpu->arch.host_gscb)
+ 			__ctl_clear_bit(2, 4);
+ 		vcpu->arch.host_gscb = NULL;
++		preempt_enable();
+ 	}
+ 	/* SIE will save etoken directly into SDNX and therefore kvm_run */
+ }
+diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
+index bd803e0919187..4002a24bc43a6 100644
+--- a/arch/s390/kvm/vsie.c
++++ b/arch/s390/kvm/vsie.c
+@@ -417,11 +417,6 @@ static void unshadow_scb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+ 		memcpy((void *)((u64)scb_o + 0xc0),
+ 		       (void *)((u64)scb_s + 0xc0), 0xf0 - 0xc0);
+ 		break;
+-	case ICPT_PARTEXEC:
+-		/* MVPG only */
+-		memcpy((void *)((u64)scb_o + 0xc0),
+-		       (void *)((u64)scb_s + 0xc0), 0xd0 - 0xc0);
+-		break;
+ 	}
+ 
+ 	if (scb_s->ihcpu != 0xffffU)
+@@ -620,10 +615,10 @@ static int map_prefix(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+ 	/* with mso/msl, the prefix lies at offset *mso* */
+ 	prefix += scb_s->mso;
+ 
+-	rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, prefix);
++	rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, prefix, NULL);
+ 	if (!rc && (scb_s->ecb & ECB_TE))
+ 		rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap,
+-					   prefix + PAGE_SIZE);
++					   prefix + PAGE_SIZE, NULL);
+ 	/*
+ 	 * We don't have to mprotect, we will be called for all unshadows.
+ 	 * SIE will detect if protection applies and trigger a validity.
+@@ -914,7 +909,7 @@ static int handle_fault(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+ 				    current->thread.gmap_addr, 1);
+ 
+ 	rc = kvm_s390_shadow_fault(vcpu, vsie_page->gmap,
+-				   current->thread.gmap_addr);
++				   current->thread.gmap_addr, NULL);
+ 	if (rc > 0) {
+ 		rc = inject_fault(vcpu, rc,
+ 				  current->thread.gmap_addr,
+@@ -936,7 +931,7 @@ static void handle_last_fault(struct kvm_vcpu *vcpu,
+ {
+ 	if (vsie_page->fault_addr)
+ 		kvm_s390_shadow_fault(vcpu, vsie_page->gmap,
+-				      vsie_page->fault_addr);
++				      vsie_page->fault_addr, NULL);
+ 	vsie_page->fault_addr = 0;
+ }
+ 
+@@ -983,6 +978,98 @@ static int handle_stfle(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+ 	return 0;
+ }
+ 
++/*
++ * Get a register for a nested guest.
++ * @vcpu the vcpu of the guest
++ * @vsie_page the vsie_page for the nested guest
++ * @reg the register number, the upper 4 bits are ignored.
++ * returns: the value of the register.
++ */
++static u64 vsie_get_register(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page, u8 reg)
++{
++	/* no need to validate the parameter and/or perform error handling */
++	reg &= 0xf;
++	switch (reg) {
++	case 15:
++		return vsie_page->scb_s.gg15;
++	case 14:
++		return vsie_page->scb_s.gg14;
++	default:
++		return vcpu->run->s.regs.gprs[reg];
++	}
++}
++
++static int vsie_handle_mvpg(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
++{
++	struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
++	unsigned long pei_dest, pei_src, src, dest, mask, prefix;
++	u64 *pei_block = &vsie_page->scb_o->mcic;
++	int edat, rc_dest, rc_src;
++	union ctlreg0 cr0;
++
++	cr0.val = vcpu->arch.sie_block->gcr[0];
++	edat = cr0.edat && test_kvm_facility(vcpu->kvm, 8);
++	mask = _kvm_s390_logical_to_effective(&scb_s->gpsw, PAGE_MASK);
++	prefix = scb_s->prefix << GUEST_PREFIX_SHIFT;
++
++	dest = vsie_get_register(vcpu, vsie_page, scb_s->ipb >> 20) & mask;
++	dest = _kvm_s390_real_to_abs(prefix, dest) + scb_s->mso;
++	src = vsie_get_register(vcpu, vsie_page, scb_s->ipb >> 16) & mask;
++	src = _kvm_s390_real_to_abs(prefix, src) + scb_s->mso;
++
++	rc_dest = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, dest, &pei_dest);
++	rc_src = kvm_s390_shadow_fault(vcpu, vsie_page->gmap, src, &pei_src);
++	/*
++	 * Either everything went well, or something non-critical went wrong
++	 * e.g. because of a race. In either case, simply retry.
++	 */
++	if (rc_dest == -EAGAIN || rc_src == -EAGAIN || (!rc_dest && !rc_src)) {
++		retry_vsie_icpt(vsie_page);
++		return -EAGAIN;
++	}
++	/* Something more serious went wrong, propagate the error */
++	if (rc_dest < 0)
++		return rc_dest;
++	if (rc_src < 0)
++		return rc_src;
++
++	/* The only possible suppressing exception: just deliver it */
++	if (rc_dest == PGM_TRANSLATION_SPEC || rc_src == PGM_TRANSLATION_SPEC) {
++		clear_vsie_icpt(vsie_page);
++		rc_dest = kvm_s390_inject_program_int(vcpu, PGM_TRANSLATION_SPEC);
++		WARN_ON_ONCE(rc_dest);
++		return 1;
++	}
++
++	/*
++	 * Forward the PEI intercept to the guest if it was a page fault, or
++	 * also for segment and region table faults if EDAT applies.
++	 */
++	if (edat) {
++		rc_dest = rc_dest == PGM_ASCE_TYPE ? rc_dest : 0;
++		rc_src = rc_src == PGM_ASCE_TYPE ? rc_src : 0;
++	} else {
++		rc_dest = rc_dest != PGM_PAGE_TRANSLATION ? rc_dest : 0;
++		rc_src = rc_src != PGM_PAGE_TRANSLATION ? rc_src : 0;
++	}
++	if (!rc_dest && !rc_src) {
++		pei_block[0] = pei_dest;
++		pei_block[1] = pei_src;
++		return 1;
++	}
++
++	retry_vsie_icpt(vsie_page);
++
++	/*
++	 * The host has edat, and the guest does not, or it was an ASCE type
++	 * exception. The host needs to inject the appropriate DAT interrupts
++	 * into the guest.
++	 */
++	if (rc_dest)
++		return inject_fault(vcpu, rc_dest, dest, 1);
++	return inject_fault(vcpu, rc_src, src, 0);
++}
++
+ /*
+  * Run the vsie on a shadow scb and a shadow gmap, without any further
+  * sanity checks, handling SIE faults.
+@@ -1071,6 +1158,10 @@ static int do_vsie_run(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+ 		if ((scb_s->ipa & 0xf000) != 0xf000)
+ 			scb_s->ipa += 0x1000;
+ 		break;
++	case ICPT_PARTEXEC:
++		if (scb_s->ipa == 0xb254)
++			rc = vsie_handle_mvpg(vcpu, vsie_page);
++		break;
+ 	}
+ 	return rc;
+ }
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 268b7d5c98354..861b1b794697b 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -571,6 +571,7 @@ config X86_UV
+ 	depends on X86_EXTENDED_PLATFORM
+ 	depends on NUMA
+ 	depends on EFI
++	depends on KEXEC_CORE
+ 	depends on X86_X2APIC
+ 	depends on PCI
+ 	help
+diff --git a/arch/x86/crypto/poly1305_glue.c b/arch/x86/crypto/poly1305_glue.c
+index 646da46e8d104..1dfb8af48a3ca 100644
+--- a/arch/x86/crypto/poly1305_glue.c
++++ b/arch/x86/crypto/poly1305_glue.c
+@@ -16,7 +16,7 @@
+ #include <asm/simd.h>
+ 
+ asmlinkage void poly1305_init_x86_64(void *ctx,
+-				     const u8 key[POLY1305_KEY_SIZE]);
++				     const u8 key[POLY1305_BLOCK_SIZE]);
+ asmlinkage void poly1305_blocks_x86_64(void *ctx, const u8 *inp,
+ 				       const size_t len, const u32 padbit);
+ asmlinkage void poly1305_emit_x86_64(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],
+@@ -81,7 +81,7 @@ static void convert_to_base2_64(void *ctx)
+ 	state->is_base2_26 = 0;
+ }
+ 
+-static void poly1305_simd_init(void *ctx, const u8 key[POLY1305_KEY_SIZE])
++static void poly1305_simd_init(void *ctx, const u8 key[POLY1305_BLOCK_SIZE])
+ {
+ 	poly1305_init_x86_64(ctx, key);
+ }
+@@ -129,7 +129,7 @@ static void poly1305_simd_emit(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],
+ 		poly1305_emit_avx(ctx, mac, nonce);
+ }
+ 
+-void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 *key)
++void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 key[POLY1305_KEY_SIZE])
+ {
+ 	poly1305_simd_init(&dctx->h, key);
+ 	dctx->s[0] = get_unaligned_le32(&key[16]);
+diff --git a/arch/x86/entry/vdso/vdso2c.h b/arch/x86/entry/vdso/vdso2c.h
+index 1c7cfac7e64ac..5264daa8859f5 100644
+--- a/arch/x86/entry/vdso/vdso2c.h
++++ b/arch/x86/entry/vdso/vdso2c.h
+@@ -35,7 +35,7 @@ static void BITSFUNC(extract)(const unsigned char *data, size_t data_len,
+ 	if (offset + len > data_len)
+ 		fail("section to extract overruns input data");
+ 
+-	fprintf(outfile, "static const unsigned char %s[%lu] = {", name, len);
++	fprintf(outfile, "static const unsigned char %s[%zu] = {", name, len);
+ 	BITSFUNC(copy)(outfile, data + offset, len);
+ 	fprintf(outfile, "\n};\n\n");
+ }
+diff --git a/arch/x86/events/amd/iommu.c b/arch/x86/events/amd/iommu.c
+index be50ef8572cce..6a98a76516214 100644
+--- a/arch/x86/events/amd/iommu.c
++++ b/arch/x86/events/amd/iommu.c
+@@ -81,12 +81,12 @@ static struct attribute_group amd_iommu_events_group = {
+ };
+ 
+ struct amd_iommu_event_desc {
+-	struct kobj_attribute attr;
++	struct device_attribute attr;
+ 	const char *event;
+ };
+ 
+-static ssize_t _iommu_event_show(struct kobject *kobj,
+-				struct kobj_attribute *attr, char *buf)
++static ssize_t _iommu_event_show(struct device *dev,
++				struct device_attribute *attr, char *buf)
+ {
+ 	struct amd_iommu_event_desc *event =
+ 		container_of(attr, struct amd_iommu_event_desc, attr);
+diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
+index 7f014d450bc28..582c0ffb5e983 100644
+--- a/arch/x86/events/amd/uncore.c
++++ b/arch/x86/events/amd/uncore.c
+@@ -275,14 +275,14 @@ static struct attribute_group amd_uncore_attr_group = {
+ };
+ 
+ #define DEFINE_UNCORE_FORMAT_ATTR(_var, _name, _format)			\
+-static ssize_t __uncore_##_var##_show(struct kobject *kobj,		\
+-				struct kobj_attribute *attr,		\
++static ssize_t __uncore_##_var##_show(struct device *dev,		\
++				struct device_attribute *attr,		\
+ 				char *page)				\
+ {									\
+ 	BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE);			\
+ 	return sprintf(page, _format "\n");				\
+ }									\
+-static struct kobj_attribute format_attr_##_var =			\
++static struct device_attribute format_attr_##_var =			\
+ 	__ATTR(_name, 0444, __uncore_##_var##_show, NULL)
+ 
+ DEFINE_UNCORE_FORMAT_ATTR(event12,	event,		"config:0-7,32-35");
+diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
+index 52bc217ca8c32..c9ddd233e32ff 100644
+--- a/arch/x86/kernel/apic/x2apic_uv_x.c
++++ b/arch/x86/kernel/apic/x2apic_uv_x.c
+@@ -1671,6 +1671,9 @@ static __init int uv_system_init_hubless(void)
+ 	if (rc < 0)
+ 		return rc;
+ 
++	/* Set section block size for current node memory */
++	set_block_size();
++
+ 	/* Create user access node */
+ 	if (rc >= 0)
+ 		uv_setup_proc_files(1);
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index b935e1b5f115e..6a6318e9590c8 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -629,16 +629,16 @@ static ssize_t reload_store(struct device *dev,
+ 	if (val != 1)
+ 		return size;
+ 
+-	tmp_ret = microcode_ops->request_microcode_fw(bsp, &microcode_pdev->dev, true);
+-	if (tmp_ret != UCODE_NEW)
+-		return size;
+-
+ 	get_online_cpus();
+ 
+ 	ret = check_online_cpus();
+ 	if (ret)
+ 		goto put;
+ 
++	tmp_ret = microcode_ops->request_microcode_fw(bsp, &microcode_pdev->dev, true);
++	if (tmp_ret != UCODE_NEW)
++		goto put;
++
+ 	mutex_lock(&microcode_mutex);
+ 	ret = microcode_reload_late();
+ 	mutex_unlock(&microcode_mutex);
+diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
+index 22aad412f965e..629c4994f1654 100644
+--- a/arch/x86/kernel/e820.c
++++ b/arch/x86/kernel/e820.c
+@@ -31,8 +31,8 @@
+  *       - inform the user about the firmware's notion of memory layout
+  *         via /sys/firmware/memmap
+  *
+- *       - the hibernation code uses it to generate a kernel-independent MD5
+- *         fingerprint of the physical memory layout of a system.
++ *       - the hibernation code uses it to generate a kernel-independent CRC32
++ *         checksum of the physical memory layout of a system.
+  *
+  * - 'e820_table_kexec': a slightly modified (by the kernel) firmware version
+  *   passed to us by the bootloader - the major difference between
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index df776cdca327d..0bb9fe021bbed 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -139,6 +139,8 @@ NOKPROBE_SYMBOL(synthesize_relcall);
+ int can_boost(struct insn *insn, void *addr)
+ {
+ 	kprobe_opcode_t opcode;
++	insn_byte_t prefix;
++	int i;
+ 
+ 	if (search_exception_tables((unsigned long)addr))
+ 		return 0;	/* Page fault may occur on this address. */
+@@ -151,9 +153,14 @@ int can_boost(struct insn *insn, void *addr)
+ 	if (insn->opcode.nbytes != 1)
+ 		return 0;
+ 
+-	/* Can't boost Address-size override prefix */
+-	if (unlikely(inat_is_address_size_prefix(insn->attr)))
+-		return 0;
++	for_each_insn_prefix(insn, i, prefix) {
++		insn_attr_t attr;
++
++		attr = inat_get_opcode_attribute(prefix);
++		/* Can't boost Address-size override prefix and CS override prefix */
++		if (prefix == 0x2e || inat_is_address_size_prefix(attr))
++			return 0;
++	}
+ 
+ 	opcode = insn->opcode.bytes[0];
+ 
+@@ -178,8 +185,8 @@ int can_boost(struct insn *insn, void *addr)
+ 		/* clear and set flags are boostable */
+ 		return (opcode == 0xf5 || (0xf7 < opcode && opcode < 0xfe));
+ 	default:
+-		/* CS override prefix and call are not boostable */
+-		return (opcode != 0x2e && opcode != 0x9a);
++		/* call is not boostable */
++		return opcode != 0x9a;
+ 	}
+ }
+ 
+@@ -448,7 +455,11 @@ static void set_resume_flags(struct kprobe *p, struct insn *insn)
+ 		break;
+ #endif
+ 	case 0xff:
+-		opcode = insn->opcode.bytes[1];
++		/*
++		 * Since the 0xff is an extended group opcode, the instruction
++		 * is determined by the MOD/RM byte.
++		 */
++		opcode = insn->modrm.bytes[0];
+ 		if ((opcode & 0x30) == 0x10) {
+ 			/*
+ 			 * call absolute, indirect
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 16703c35a944f..6b08d1eb173fd 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -458,29 +458,52 @@ static bool match_smt(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+ 	return false;
+ }
+ 
++static bool match_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
++{
++	if (c->phys_proc_id == o->phys_proc_id &&
++	    c->cpu_die_id == o->cpu_die_id)
++		return true;
++	return false;
++}
++
+ /*
+- * Define snc_cpu[] for SNC (Sub-NUMA Cluster) CPUs.
++ * Unlike the other levels, we do not enforce keeping a
++ * multicore group inside a NUMA node.  If this happens, we will
++ * discard the MC level of the topology later.
++ */
++static bool match_pkg(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
++{
++	if (c->phys_proc_id == o->phys_proc_id)
++		return true;
++	return false;
++}
++
++/*
++ * Define intel_cod_cpu[] for Intel COD (Cluster-on-Die) CPUs.
+  *
+- * These are Intel CPUs that enumerate an LLC that is shared by
+- * multiple NUMA nodes. The LLC on these systems is shared for
+- * off-package data access but private to the NUMA node (half
+- * of the package) for on-package access.
++ * Any Intel CPU that has multiple nodes per package and does not
++ * match intel_cod_cpu[] has the SNC (Sub-NUMA Cluster) topology.
+  *
+- * CPUID (the source of the information about the LLC) can only
+- * enumerate the cache as being shared *or* unshared, but not
+- * this particular configuration. The CPU in this case enumerates
+- * the cache to be shared across the entire package (spanning both
+- * NUMA nodes).
++ * When in SNC mode, these CPUs enumerate an LLC that is shared
++ * by multiple NUMA nodes. The LLC is shared for off-package data
++ * access but private to the NUMA node (half of the package) for
++ * on-package access. CPUID (the source of the information about
++ * the LLC) can only enumerate the cache as shared or unshared,
++ * but not this particular configuration.
+  */
+ 
+-static const struct x86_cpu_id snc_cpu[] = {
+-	X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_X, NULL),
++static const struct x86_cpu_id intel_cod_cpu[] = {
++	X86_MATCH_INTEL_FAM6_MODEL(HASWELL_X, 0),	/* COD */
++	X86_MATCH_INTEL_FAM6_MODEL(BROADWELL_X, 0),	/* COD */
++	X86_MATCH_INTEL_FAM6_MODEL(ANY, 1),		/* SNC */
+ 	{}
+ };
+ 
+ static bool match_llc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+ {
++	const struct x86_cpu_id *id = x86_match_cpu(intel_cod_cpu);
+ 	int cpu1 = c->cpu_index, cpu2 = o->cpu_index;
++	bool intel_snc = id && id->driver_data;
+ 
+ 	/* Do not match if we do not have a valid APICID for cpu: */
+ 	if (per_cpu(cpu_llc_id, cpu1) == BAD_APICID)
+@@ -495,32 +518,12 @@ static bool match_llc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+ 	 * means 'c' does not share the LLC of 'o'. This will be
+ 	 * reflected to userspace.
+ 	 */
+-	if (!topology_same_node(c, o) && x86_match_cpu(snc_cpu))
++	if (match_pkg(c, o) && !topology_same_node(c, o) && intel_snc)
+ 		return false;
+ 
+ 	return topology_sane(c, o, "llc");
+ }
+ 
+-/*
+- * Unlike the other levels, we do not enforce keeping a
+- * multicore group inside a NUMA node.  If this happens, we will
+- * discard the MC level of the topology later.
+- */
+-static bool match_pkg(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+-{
+-	if (c->phys_proc_id == o->phys_proc_id)
+-		return true;
+-	return false;
+-}
+-
+-static bool match_die(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
+-{
+-	if ((c->phys_proc_id == o->phys_proc_id) &&
+-		(c->cpu_die_id == o->cpu_die_id))
+-		return true;
+-	return false;
+-}
+-
+ 
+ #if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_MC)
+ static inline int x86_sched_itmt_flags(void)
+@@ -592,14 +595,23 @@ void set_cpu_sibling_map(int cpu)
+ 	for_each_cpu(i, cpu_sibling_setup_mask) {
+ 		o = &cpu_data(i);
+ 
++		if (match_pkg(c, o) && !topology_same_node(c, o))
++			x86_has_numa_in_package = true;
++
+ 		if ((i == cpu) || (has_smt && match_smt(c, o)))
+ 			link_mask(topology_sibling_cpumask, cpu, i);
+ 
+ 		if ((i == cpu) || (has_mp && match_llc(c, o)))
+ 			link_mask(cpu_llc_shared_mask, cpu, i);
+ 
++		if ((i == cpu) || (has_mp && match_die(c, o)))
++			link_mask(topology_die_cpumask, cpu, i);
+ 	}
+ 
++	threads = cpumask_weight(topology_sibling_cpumask(cpu));
++	if (threads > __max_smt_threads)
++		__max_smt_threads = threads;
++
+ 	/*
+ 	 * This needs a separate iteration over the cpus because we rely on all
+ 	 * topology_sibling_cpumask links to be set-up.
+@@ -613,8 +625,7 @@ void set_cpu_sibling_map(int cpu)
+ 			/*
+ 			 *  Does this new cpu bringup a new core?
+ 			 */
+-			if (cpumask_weight(
+-			    topology_sibling_cpumask(cpu)) == 1) {
++			if (threads == 1) {
+ 				/*
+ 				 * for each core in package, increment
+ 				 * the booted_cores for this new cpu
+@@ -631,16 +642,7 @@ void set_cpu_sibling_map(int cpu)
+ 			} else if (i != cpu && !c->booted_cores)
+ 				c->booted_cores = cpu_data(i).booted_cores;
+ 		}
+-		if (match_pkg(c, o) && !topology_same_node(c, o))
+-			x86_has_numa_in_package = true;
+-
+-		if ((i == cpu) || (has_mp && match_die(c, o)))
+-			link_mask(topology_die_cpumask, cpu, i);
+ 	}
+-
+-	threads = cpumask_weight(topology_sibling_cpumask(cpu));
+-	if (threads > __max_smt_threads)
+-		__max_smt_threads = threads;
+ }
+ 
+ /* maps the cpu to the sched domain representing multi-core */
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index f7970ba6219fc..abd9a4db11a88 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -4220,7 +4220,7 @@ static bool valid_cr(int nr)
+ 	}
+ }
+ 
+-static int check_cr_read(struct x86_emulate_ctxt *ctxt)
++static int check_cr_access(struct x86_emulate_ctxt *ctxt)
+ {
+ 	if (!valid_cr(ctxt->modrm_reg))
+ 		return emulate_ud(ctxt);
+@@ -4228,80 +4228,6 @@ static int check_cr_read(struct x86_emulate_ctxt *ctxt)
+ 	return X86EMUL_CONTINUE;
+ }
+ 
+-static int check_cr_write(struct x86_emulate_ctxt *ctxt)
+-{
+-	u64 new_val = ctxt->src.val64;
+-	int cr = ctxt->modrm_reg;
+-	u64 efer = 0;
+-
+-	static u64 cr_reserved_bits[] = {
+-		0xffffffff00000000ULL,
+-		0, 0, 0, /* CR3 checked later */
+-		CR4_RESERVED_BITS,
+-		0, 0, 0,
+-		CR8_RESERVED_BITS,
+-	};
+-
+-	if (!valid_cr(cr))
+-		return emulate_ud(ctxt);
+-
+-	if (new_val & cr_reserved_bits[cr])
+-		return emulate_gp(ctxt, 0);
+-
+-	switch (cr) {
+-	case 0: {
+-		u64 cr4;
+-		if (((new_val & X86_CR0_PG) && !(new_val & X86_CR0_PE)) ||
+-		    ((new_val & X86_CR0_NW) && !(new_val & X86_CR0_CD)))
+-			return emulate_gp(ctxt, 0);
+-
+-		cr4 = ctxt->ops->get_cr(ctxt, 4);
+-		ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
+-
+-		if ((new_val & X86_CR0_PG) && (efer & EFER_LME) &&
+-		    !(cr4 & X86_CR4_PAE))
+-			return emulate_gp(ctxt, 0);
+-
+-		break;
+-		}
+-	case 3: {
+-		u64 rsvd = 0;
+-
+-		ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
+-		if (efer & EFER_LMA) {
+-			u64 maxphyaddr;
+-			u32 eax, ebx, ecx, edx;
+-
+-			eax = 0x80000008;
+-			ecx = 0;
+-			if (ctxt->ops->get_cpuid(ctxt, &eax, &ebx, &ecx,
+-						 &edx, true))
+-				maxphyaddr = eax & 0xff;
+-			else
+-				maxphyaddr = 36;
+-			rsvd = rsvd_bits(maxphyaddr, 63);
+-			if (ctxt->ops->get_cr(ctxt, 4) & X86_CR4_PCIDE)
+-				rsvd &= ~X86_CR3_PCID_NOFLUSH;
+-		}
+-
+-		if (new_val & rsvd)
+-			return emulate_gp(ctxt, 0);
+-
+-		break;
+-		}
+-	case 4: {
+-		ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
+-
+-		if ((efer & EFER_LMA) && !(new_val & X86_CR4_PAE))
+-			return emulate_gp(ctxt, 0);
+-
+-		break;
+-		}
+-	}
+-
+-	return X86EMUL_CONTINUE;
+-}
+-
+ static int check_dr7_gd(struct x86_emulate_ctxt *ctxt)
+ {
+ 	unsigned long dr7;
+@@ -4841,10 +4767,10 @@ static const struct opcode twobyte_table[256] = {
+ 	D(ImplicitOps | ModRM | SrcMem | NoAccess), /* 8 * reserved NOP */
+ 	D(ImplicitOps | ModRM | SrcMem | NoAccess), /* NOP + 7 * reserved NOP */
+ 	/* 0x20 - 0x2F */
+-	DIP(ModRM | DstMem | Priv | Op3264 | NoMod, cr_read, check_cr_read),
++	DIP(ModRM | DstMem | Priv | Op3264 | NoMod, cr_read, check_cr_access),
+ 	DIP(ModRM | DstMem | Priv | Op3264 | NoMod, dr_read, check_dr_read),
+ 	IIP(ModRM | SrcMem | Priv | Op3264 | NoMod, em_cr_write, cr_write,
+-						check_cr_write),
++						check_cr_access),
+ 	IIP(ModRM | SrcMem | Priv | Op3264 | NoMod, em_dr_write, dr_write,
+ 						check_dr_write),
+ 	N, N, N, N,
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index cc369b9ad8f11..49a839d0567a5 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -296,6 +296,10 @@ static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val)
+ 
+ 		atomic_set_release(&apic->vcpu->kvm->arch.apic_map_dirty, DIRTY);
+ 	}
++
++	/* Check if there are APF page ready requests pending */
++	if (enabled)
++		kvm_make_request(KVM_REQ_APF_READY, apic->vcpu);
+ }
+ 
+ static inline void kvm_apic_set_xapic_id(struct kvm_lapic *apic, u8 id)
+@@ -2261,6 +2265,8 @@ void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value)
+ 		if (value & MSR_IA32_APICBASE_ENABLE) {
+ 			kvm_apic_set_xapic_id(apic, vcpu->vcpu_id);
+ 			static_branch_slow_dec_deferred(&apic_hw_disabled);
++			/* Check if there are APF page ready requests pending */
++			kvm_make_request(KVM_REQ_APF_READY, vcpu);
+ 		} else {
+ 			static_branch_inc(&apic_hw_disabled.key);
+ 			atomic_set_release(&apic->vcpu->kvm->arch.apic_map_dirty, DIRTY);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 951dae4e71751..cd0faa1876743 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -3193,14 +3193,14 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+ 		if (mmu->shadow_root_level >= PT64_ROOT_4LEVEL &&
+ 		    (mmu->root_level >= PT64_ROOT_4LEVEL || mmu->direct_map)) {
+ 			mmu_free_root_page(kvm, &mmu->root_hpa, &invalid_list);
+-		} else {
++		} else if (mmu->pae_root) {
+ 			for (i = 0; i < 4; ++i)
+ 				if (mmu->pae_root[i] != 0)
+ 					mmu_free_root_page(kvm,
+ 							   &mmu->pae_root[i],
+ 							   &invalid_list);
+-			mmu->root_hpa = INVALID_PAGE;
+ 		}
++		mmu->root_hpa = INVALID_PAGE;
+ 		mmu->root_pgd = 0;
+ 	}
+ 
+@@ -3312,9 +3312,23 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
+ 	 * the shadow page table may be a PAE or a long mode page table.
+ 	 */
+ 	pm_mask = PT_PRESENT_MASK;
+-	if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL)
++	if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
+ 		pm_mask |= PT_ACCESSED_MASK | PT_WRITABLE_MASK | PT_USER_MASK;
+ 
++		/*
++		 * Allocate the page for the PDPTEs when shadowing 32-bit NPT
++		 * with 64-bit only when needed.  Unlike 32-bit NPT, it doesn't
++		 * need to be in low mem.  See also lm_root below.
++		 */
++		if (!vcpu->arch.mmu->pae_root) {
++			WARN_ON_ONCE(!tdp_enabled);
++
++			vcpu->arch.mmu->pae_root = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
++			if (!vcpu->arch.mmu->pae_root)
++				return -ENOMEM;
++		}
++	}
++
+ 	for (i = 0; i < 4; ++i) {
+ 		MMU_WARN_ON(VALID_PAGE(vcpu->arch.mmu->pae_root[i]));
+ 		if (vcpu->arch.mmu->root_level == PT32E_ROOT_LEVEL) {
+@@ -3337,21 +3351,19 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
+ 	vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->pae_root);
+ 
+ 	/*
+-	 * If we shadow a 32 bit page table with a long mode page
+-	 * table we enter this path.
++	 * When shadowing 32-bit or PAE NPT with 64-bit NPT, the PML4 and PDP
++	 * tables are allocated and initialized at MMU creation as there is no
++	 * equivalent level in the guest's NPT to shadow.  Allocate the tables
++	 * on demand, as running a 32-bit L1 VMM is very rare.  The PDP is
++	 * handled above (to share logic with PAE), deal with the PML4 here.
+ 	 */
+ 	if (vcpu->arch.mmu->shadow_root_level == PT64_ROOT_4LEVEL) {
+ 		if (vcpu->arch.mmu->lm_root == NULL) {
+-			/*
+-			 * The additional page necessary for this is only
+-			 * allocated on demand.
+-			 */
+-
+ 			u64 *lm_root;
+ 
+ 			lm_root = (void*)get_zeroed_page(GFP_KERNEL_ACCOUNT);
+-			if (lm_root == NULL)
+-				return 1;
++			if (!lm_root)
++				return -ENOMEM;
+ 
+ 			lm_root[0] = __pa(vcpu->arch.mmu->pae_root) | pm_mask;
+ 
+@@ -3653,6 +3665,14 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
+ 	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+ 	bool async;
+ 
++	/*
++	 * Retry the page fault if the gfn hit a memslot that is being deleted
++	 * or moved.  This ensures any existing SPTEs for the old memslot will
++	 * be zapped before KVM inserts a new MMIO SPTE for the gfn.
++	 */
++	if (slot && (slot->flags & KVM_MEMSLOT_INVALID))
++		return true;
++
+ 	/* Don't expose private memslots to L2. */
+ 	if (is_guest_mode(vcpu) && !kvm_is_visible_memslot(slot)) {
+ 		*pfn = KVM_PFN_NOSLOT;
+@@ -4615,12 +4635,17 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer,
+ 	struct kvm_mmu *context = &vcpu->arch.guest_mmu;
+ 	union kvm_mmu_role new_role = kvm_calc_shadow_npt_root_page_role(vcpu);
+ 
+-	context->shadow_root_level = new_role.base.level;
+-
+ 	__kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base, false, false);
+ 
+-	if (new_role.as_u64 != context->mmu_role.as_u64)
++	if (new_role.as_u64 != context->mmu_role.as_u64) {
+ 		shadow_mmu_init_context(vcpu, context, cr0, cr4, efer, new_role);
++
++		/*
++		 * Override the level set by the common init helper, nested TDP
++		 * always uses the host's TDP configuration.
++		 */
++		context->shadow_root_level = new_role.base.level;
++	}
+ }
+ EXPORT_SYMBOL_GPL(kvm_init_shadow_npt_mmu);
+ 
+@@ -5240,9 +5265,11 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
+ 	 * while the PDP table is a per-vCPU construct that's allocated at MMU
+ 	 * creation.  When emulating 32-bit mode, cr3 is only 32 bits even on
+ 	 * x86_64.  Therefore we need to allocate the PDP table in the first
+-	 * 4GB of memory, which happens to fit the DMA32 zone.  Except for
+-	 * SVM's 32-bit NPT support, TDP paging doesn't use PAE paging and can
+-	 * skip allocating the PDP table.
++	 * 4GB of memory, which happens to fit the DMA32 zone.  TDP paging
++	 * generally doesn't use PAE paging and can skip allocating the PDP
++	 * table.  The main exception, handled here, is SVM's 32-bit NPT.  The
++	 * other exception is for shadowing L1's 32-bit or PAE NPT on 64-bit
++	 * KVM; that horror is handled on-demand by mmu_alloc_shadow_roots().
+ 	 */
+ 	if (tdp_enabled && kvm_mmu_get_tdp_level(vcpu) > PT32E_ROOT_LEVEL)
+ 		return 0;
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 874ea309279f5..019130011d0fc 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -87,7 +87,7 @@ static bool __sev_recycle_asids(int min_asid, int max_asid)
+ 	return true;
+ }
+ 
+-static int sev_asid_new(struct kvm_sev_info *sev)
++static int sev_asid_new(bool es_active)
+ {
+ 	int pos, min_asid, max_asid;
+ 	bool retry = true;
+@@ -98,8 +98,8 @@ static int sev_asid_new(struct kvm_sev_info *sev)
+ 	 * SEV-enabled guests must use asid from min_sev_asid to max_sev_asid.
+ 	 * SEV-ES-enabled guest can use from 1 to min_sev_asid - 1.
+ 	 */
+-	min_asid = sev->es_active ? 0 : min_sev_asid - 1;
+-	max_asid = sev->es_active ? min_sev_asid - 1 : max_sev_asid;
++	min_asid = es_active ? 0 : min_sev_asid - 1;
++	max_asid = es_active ? min_sev_asid - 1 : max_sev_asid;
+ again:
+ 	pos = find_next_zero_bit(sev_asid_bitmap, max_sev_asid, min_asid);
+ 	if (pos >= max_asid) {
+@@ -179,13 +179,17 @@ static void sev_unbind_asid(struct kvm *kvm, unsigned int handle)
+ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ {
+ 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
++	bool es_active = argp->id == KVM_SEV_ES_INIT;
+ 	int asid, ret;
+ 
++	if (kvm->created_vcpus)
++		return -EINVAL;
++
+ 	ret = -EBUSY;
+ 	if (unlikely(sev->active))
+ 		return ret;
+ 
+-	asid = sev_asid_new(sev);
++	asid = sev_asid_new(es_active);
+ 	if (asid < 0)
+ 		return ret;
+ 
+@@ -194,6 +198,7 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ 		goto e_free;
+ 
+ 	sev->active = true;
++	sev->es_active = es_active;
+ 	sev->asid = asid;
+ 	INIT_LIST_HEAD(&sev->regions_list);
+ 
+@@ -204,16 +209,6 @@ e_free:
+ 	return ret;
+ }
+ 
+-static int sev_es_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
+-{
+-	if (!sev_es)
+-		return -ENOTTY;
+-
+-	to_kvm_svm(kvm)->sev_info.es_active = true;
+-
+-	return sev_guest_init(kvm, argp);
+-}
+-
+ static int sev_bind_asid(struct kvm *kvm, unsigned int handle, int *error)
+ {
+ 	struct sev_data_activate *data;
+@@ -564,6 +559,7 @@ static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ {
+ 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+ 	struct sev_data_launch_update_vmsa *vmsa;
++	struct kvm_vcpu *vcpu;
+ 	int i, ret;
+ 
+ 	if (!sev_es_guest(kvm))
+@@ -573,8 +569,8 @@ static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ 	if (!vmsa)
+ 		return -ENOMEM;
+ 
+-	for (i = 0; i < kvm->created_vcpus; i++) {
+-		struct vcpu_svm *svm = to_svm(kvm->vcpus[i]);
++	kvm_for_each_vcpu(i, vcpu, kvm) {
++		struct vcpu_svm *svm = to_svm(vcpu);
+ 
+ 		/* Perform some pre-encryption checks against the VMSA */
+ 		ret = sev_es_sync_vmsa(svm);
+@@ -1127,12 +1123,15 @@ int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
+ 	mutex_lock(&kvm->lock);
+ 
+ 	switch (sev_cmd.id) {
++	case KVM_SEV_ES_INIT:
++		if (!sev_es) {
++			r = -ENOTTY;
++			goto out;
++		}
++		fallthrough;
+ 	case KVM_SEV_INIT:
+ 		r = sev_guest_init(kvm, &sev_cmd);
+ 		break;
+-	case KVM_SEV_ES_INIT:
+-		r = sev_es_guest_init(kvm, &sev_cmd);
+-		break;
+ 	case KVM_SEV_LAUNCH_START:
+ 		r = sev_launch_start(kvm, &sev_cmd);
+ 		break;
+@@ -1349,8 +1348,11 @@ void __init sev_hardware_setup(void)
+ 		goto out;
+ 
+ 	sev_reclaim_asid_bitmap = bitmap_zalloc(max_sev_asid, GFP_KERNEL);
+-	if (!sev_reclaim_asid_bitmap)
++	if (!sev_reclaim_asid_bitmap) {
++		bitmap_free(sev_asid_bitmap);
++		sev_asid_bitmap = NULL;
+ 		goto out;
++	}
+ 
+ 	pr_info("SEV supported: %u ASIDs\n", max_sev_asid - min_sev_asid + 1);
+ 	sev_supported = true;
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 58a45bb139f88..309725151313d 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -564,9 +564,8 @@ static int svm_cpu_init(int cpu)
+ 	clear_page(page_address(sd->save_area));
+ 
+ 	if (svm_sev_enabled()) {
+-		sd->sev_vmcbs = kmalloc_array(max_sev_asid + 1,
+-					      sizeof(void *),
+-					      GFP_KERNEL);
++		sd->sev_vmcbs = kcalloc(max_sev_asid + 1, sizeof(void *),
++					GFP_KERNEL);
+ 		if (!sd->sev_vmcbs)
+ 			goto free_save_area;
+ 	}
+@@ -969,21 +968,6 @@ static __init int svm_hardware_setup(void)
+ 		kvm_enable_efer_bits(EFER_SVME | EFER_LMSLE);
+ 	}
+ 
+-	if (IS_ENABLED(CONFIG_KVM_AMD_SEV) && sev) {
+-		sev_hardware_setup();
+-	} else {
+-		sev = false;
+-		sev_es = false;
+-	}
+-
+-	svm_adjust_mmio_mask();
+-
+-	for_each_possible_cpu(cpu) {
+-		r = svm_cpu_init(cpu);
+-		if (r)
+-			goto err;
+-	}
+-
+ 	/*
+ 	 * KVM's MMU doesn't support using 2-level paging for itself, and thus
+ 	 * NPT isn't supported if the host is using 2-level paging since host
+@@ -998,6 +982,21 @@ static __init int svm_hardware_setup(void)
+ 	kvm_configure_mmu(npt_enabled, get_max_npt_level(), PG_LEVEL_1G);
+ 	pr_info("kvm: Nested Paging %sabled\n", npt_enabled ? "en" : "dis");
+ 
++	if (IS_ENABLED(CONFIG_KVM_AMD_SEV) && sev && npt_enabled) {
++		sev_hardware_setup();
++	} else {
++		sev = false;
++		sev_es = false;
++	}
++
++	svm_adjust_mmio_mask();
++
++	for_each_possible_cpu(cpu) {
++		r = svm_cpu_init(cpu);
++		if (r)
++			goto err;
++	}
++
+ 	if (nrips) {
+ 		if (!boot_cpu_has(X86_FEATURE_NRIPS))
+ 			nrips = false;
+@@ -1898,7 +1897,7 @@ static void svm_set_dr7(struct kvm_vcpu *vcpu, unsigned long value)
+ 
+ static int pf_interception(struct vcpu_svm *svm)
+ {
+-	u64 fault_address = __sme_clr(svm->vmcb->control.exit_info_2);
++	u64 fault_address = svm->vmcb->control.exit_info_2;
+ 	u64 error_code = svm->vmcb->control.exit_info_1;
+ 
+ 	return kvm_handle_page_fault(&svm->vcpu, error_code, fault_address,
+@@ -2738,6 +2737,9 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case MSR_TSC_AUX:
+ 		if (!boot_cpu_has(X86_FEATURE_RDTSCP))
+ 			return 1;
++		if (!msr_info->host_initiated &&
++		    !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
++			return 1;
+ 		msr_info->data = svm->tsc_aux;
+ 		break;
+ 	/*
+@@ -2946,6 +2948,10 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ 		if (!boot_cpu_has(X86_FEATURE_RDTSCP))
+ 			return 1;
+ 
++		if (!msr->host_initiated &&
++		    !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
++			return 1;
++
+ 		/*
+ 		 * This is rare, so we update the MSR here instead of using
+ 		 * direct_access_msrs.  Doing that would require a rdmsr in
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index bcca0b80e0d04..1727057c53130 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -619,6 +619,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
+ 	}
+ 
+ 	/* KVM unconditionally exposes the FS/GS base MSRs to L1. */
++#ifdef CONFIG_X86_64
+ 	nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
+ 					     MSR_FS_BASE, MSR_TYPE_RW);
+ 
+@@ -627,6 +628,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
+ 
+ 	nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
+ 					     MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
++#endif
+ 
+ 	/*
+ 	 * Checking the L0->L1 bitmap is trying to verify two things:
+@@ -4601,9 +4603,9 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
+ 	else if (addr_size == 0)
+ 		off = (gva_t)sign_extend64(off, 15);
+ 	if (base_is_valid)
+-		off += kvm_register_read(vcpu, base_reg);
++		off += kvm_register_readl(vcpu, base_reg);
+ 	if (index_is_valid)
+-		off += kvm_register_read(vcpu, index_reg) << scaling;
++		off += kvm_register_readl(vcpu, index_reg) << scaling;
+ 	vmx_get_segment(vcpu, &s, seg_reg);
+ 
+ 	/*
+@@ -5479,16 +5481,11 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
+ 		if (!nested_vmx_check_eptp(vcpu, new_eptp))
+ 			return 1;
+ 
+-		kvm_mmu_unload(vcpu);
+ 		mmu->ept_ad = accessed_dirty;
+ 		mmu->mmu_role.base.ad_disabled = !accessed_dirty;
+ 		vmcs12->ept_pointer = new_eptp;
+-		/*
+-		 * TODO: Check what's the correct approach in case
+-		 * mmu reload fails. Currently, we just let the next
+-		 * reload potentially fail
+-		 */
+-		kvm_mmu_reload(vcpu);
++
++		kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
+ 	}
+ 
+ 	return 0;
+@@ -5717,7 +5714,7 @@ static bool nested_vmx_exit_handled_vmcs_access(struct kvm_vcpu *vcpu,
+ 
+ 	/* Decode instruction info and find the field to access */
+ 	vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
+-	field = kvm_register_read(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
++	field = kvm_register_readl(vcpu, (((vmx_instruction_info) >> 28) & 0xf));
+ 
+ 	/* Out-of-range fields always cause a VM exit from L2 to L1 */
+ 	if (field >> 15)
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 29b40e092d13d..f705e0d9f1618 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -156,9 +156,11 @@ static u32 vmx_possible_passthrough_msrs[MAX_POSSIBLE_PASSTHROUGH_MSRS] = {
+ 	MSR_IA32_SPEC_CTRL,
+ 	MSR_IA32_PRED_CMD,
+ 	MSR_IA32_TSC,
++#ifdef CONFIG_X86_64
+ 	MSR_FS_BASE,
+ 	MSR_GS_BASE,
+ 	MSR_KERNEL_GS_BASE,
++#endif
+ 	MSR_IA32_SYSENTER_CS,
+ 	MSR_IA32_SYSENTER_ESP,
+ 	MSR_IA32_SYSENTER_EIP,
+@@ -5062,12 +5064,12 @@ static int handle_cr(struct kvm_vcpu *vcpu)
+ 		case 3:
+ 			WARN_ON_ONCE(enable_unrestricted_guest);
+ 			val = kvm_read_cr3(vcpu);
+-			kvm_register_write(vcpu, reg, val);
++			kvm_register_writel(vcpu, reg, val);
+ 			trace_kvm_cr_read(cr, val);
+ 			return kvm_skip_emulated_instruction(vcpu);
+ 		case 8:
+ 			val = kvm_get_cr8(vcpu);
+-			kvm_register_write(vcpu, reg, val);
++			kvm_register_writel(vcpu, reg, val);
+ 			trace_kvm_cr_read(cr, val);
+ 			return kvm_skip_emulated_instruction(vcpu);
+ 		}
+@@ -5140,7 +5142,7 @@ static int handle_dr(struct kvm_vcpu *vcpu)
+ 		unsigned long val;
+ 
+ 		kvm_get_dr(vcpu, dr, &val);
+-		kvm_register_write(vcpu, reg, val);
++		kvm_register_writel(vcpu, reg, val);
+ 		err = 0;
+ 	} else {
+ 		err = kvm_set_dr(vcpu, dr, kvm_register_readl(vcpu, reg));
+@@ -5792,7 +5794,6 @@ void dump_vmcs(void)
+ 	u32 vmentry_ctl, vmexit_ctl;
+ 	u32 cpu_based_exec_ctrl, pin_based_exec_ctrl, secondary_exec_control;
+ 	unsigned long cr4;
+-	u64 efer;
+ 
+ 	if (!dump_invalid_vmcs) {
+ 		pr_warn_ratelimited("set kvm_intel.dump_invalid_vmcs=1 to dump internal KVM state.\n");
+@@ -5804,7 +5805,6 @@ void dump_vmcs(void)
+ 	cpu_based_exec_ctrl = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
+ 	pin_based_exec_ctrl = vmcs_read32(PIN_BASED_VM_EXEC_CONTROL);
+ 	cr4 = vmcs_readl(GUEST_CR4);
+-	efer = vmcs_read64(GUEST_IA32_EFER);
+ 	secondary_exec_control = 0;
+ 	if (cpu_has_secondary_exec_ctrls())
+ 		secondary_exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
+@@ -5816,9 +5816,7 @@ void dump_vmcs(void)
+ 	pr_err("CR4: actual=0x%016lx, shadow=0x%016lx, gh_mask=%016lx\n",
+ 	       cr4, vmcs_readl(CR4_READ_SHADOW), vmcs_readl(CR4_GUEST_HOST_MASK));
+ 	pr_err("CR3 = 0x%016lx\n", vmcs_readl(GUEST_CR3));
+-	if ((secondary_exec_control & SECONDARY_EXEC_ENABLE_EPT) &&
+-	    (cr4 & X86_CR4_PAE) && !(efer & EFER_LMA))
+-	{
++	if (cpu_has_vmx_ept()) {
+ 		pr_err("PDPTR0 = 0x%016llx  PDPTR1 = 0x%016llx\n",
+ 		       vmcs_read64(GUEST_PDPTR0), vmcs_read64(GUEST_PDPTR1));
+ 		pr_err("PDPTR2 = 0x%016llx  PDPTR3 = 0x%016llx\n",
+@@ -5844,7 +5842,8 @@ void dump_vmcs(void)
+ 	if ((vmexit_ctl & (VM_EXIT_SAVE_IA32_PAT | VM_EXIT_SAVE_IA32_EFER)) ||
+ 	    (vmentry_ctl & (VM_ENTRY_LOAD_IA32_PAT | VM_ENTRY_LOAD_IA32_EFER)))
+ 		pr_err("EFER =     0x%016llx  PAT = 0x%016llx\n",
+-		       efer, vmcs_read64(GUEST_IA32_PAT));
++		       vmcs_read64(GUEST_IA32_EFER),
++		       vmcs_read64(GUEST_IA32_PAT));
+ 	pr_err("DebugCtl = 0x%016llx  DebugExceptions = 0x%016lx\n",
+ 	       vmcs_read64(GUEST_IA32_DEBUGCTL),
+ 	       vmcs_readl(GUEST_PENDING_DBG_EXCEPTIONS));
+@@ -6938,9 +6937,11 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
+ 	bitmap_fill(vmx->shadow_msr_intercept.write, MAX_POSSIBLE_PASSTHROUGH_MSRS);
+ 
+ 	vmx_disable_intercept_for_msr(vcpu, MSR_IA32_TSC, MSR_TYPE_R);
++#ifdef CONFIG_X86_64
+ 	vmx_disable_intercept_for_msr(vcpu, MSR_FS_BASE, MSR_TYPE_RW);
+ 	vmx_disable_intercept_for_msr(vcpu, MSR_GS_BASE, MSR_TYPE_RW);
+ 	vmx_disable_intercept_for_msr(vcpu, MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
++#endif
+ 	vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_CS, MSR_TYPE_RW);
+ 	vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
+ 	vmx_disable_intercept_for_msr(vcpu, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index ee0dc58ac3a51..3406ff421c1a3 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1072,10 +1072,15 @@ int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
+ 		return 0;
+ 	}
+ 
+-	if (is_long_mode(vcpu) && kvm_vcpu_is_illegal_gpa(vcpu, cr3))
++	/*
++	 * Do not condition the GPA check on long mode, this helper is used to
++	 * stuff CR3, e.g. for RSM emulation, and there is no guarantee that
++	 * the current vCPU mode is accurate.
++	 */
++	if (kvm_vcpu_is_illegal_gpa(vcpu, cr3))
+ 		return 1;
+-	else if (is_pae_paging(vcpu) &&
+-		 !load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3))
++
++	if (is_pae_paging(vcpu) && !load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3))
+ 		return 1;
+ 
+ 	kvm_mmu_new_pgd(vcpu, cr3, skip_tlb_flush, skip_tlb_flush);
+@@ -11020,6 +11025,9 @@ bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
+ 
+ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
+ {
++	if (vcpu->arch.guest_state_protected)
++		return true;
++
+ 	return vcpu->arch.preempted_in_kernel;
+ }
+ 
+@@ -11290,7 +11298,7 @@ bool kvm_arch_can_dequeue_async_page_present(struct kvm_vcpu *vcpu)
+ 	if (!kvm_pv_async_pf_enabled(vcpu))
+ 		return true;
+ 	else
+-		return apf_pageready_slot_free(vcpu);
++		return kvm_lapic_enabled(vcpu) && apf_pageready_slot_free(vcpu);
+ }
+ 
+ void kvm_arch_start_assignment(struct kvm *kvm)
+@@ -11539,7 +11547,7 @@ int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva)
+ 
+ 		fallthrough;
+ 	case INVPCID_TYPE_ALL_INCL_GLOBAL:
+-		kvm_mmu_unload(vcpu);
++		kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
+ 		return kvm_skip_emulated_instruction(vcpu);
+ 
+ 	default:
+diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
+index ae17250e1efe0..7f27bb65a5721 100644
+--- a/arch/x86/kvm/xen.c
++++ b/arch/x86/kvm/xen.c
+@@ -673,7 +673,7 @@ int kvm_xen_hypercall(struct kvm_vcpu *vcpu)
+ 	bool longmode;
+ 	u64 input, params[6];
+ 
+-	input = (u64)kvm_register_read(vcpu, VCPU_REGS_RAX);
++	input = (u64)kvm_register_readl(vcpu, VCPU_REGS_RAX);
+ 
+ 	/* Hyper-V hypercalls get bit 31 set in EAX */
+ 	if ((input & 0x80000000) &&
+diff --git a/arch/x86/power/hibernate.c b/arch/x86/power/hibernate.c
+index cd3914fc9f3d4..e94e0050a583a 100644
+--- a/arch/x86/power/hibernate.c
++++ b/arch/x86/power/hibernate.c
+@@ -13,8 +13,8 @@
+ #include <linux/kdebug.h>
+ #include <linux/cpu.h>
+ #include <linux/pgtable.h>
+-
+-#include <crypto/hash.h>
++#include <linux/types.h>
++#include <linux/crc32.h>
+ 
+ #include <asm/e820/api.h>
+ #include <asm/init.h>
+@@ -54,95 +54,33 @@ int pfn_is_nosave(unsigned long pfn)
+ 	return pfn >= nosave_begin_pfn && pfn < nosave_end_pfn;
+ }
+ 
+-
+-#define MD5_DIGEST_SIZE 16
+-
+ struct restore_data_record {
+ 	unsigned long jump_address;
+ 	unsigned long jump_address_phys;
+ 	unsigned long cr3;
+ 	unsigned long magic;
+-	u8 e820_digest[MD5_DIGEST_SIZE];
++	unsigned long e820_checksum;
+ };
+ 
+-#if IS_BUILTIN(CONFIG_CRYPTO_MD5)
+ /**
+- * get_e820_md5 - calculate md5 according to given e820 table
++ * compute_e820_crc32 - calculate crc32 of a given e820 table
+  *
+  * @table: the e820 table to be calculated
+- * @buf: the md5 result to be stored to
++ *
++ * Return: the resulting checksum
+  */
+-static int get_e820_md5(struct e820_table *table, void *buf)
++static inline u32 compute_e820_crc32(struct e820_table *table)
+ {
+-	struct crypto_shash *tfm;
+-	struct shash_desc *desc;
+-	int size;
+-	int ret = 0;
+-
+-	tfm = crypto_alloc_shash("md5", 0, 0);
+-	if (IS_ERR(tfm))
+-		return -ENOMEM;
+-
+-	desc = kmalloc(sizeof(struct shash_desc) + crypto_shash_descsize(tfm),
+-		       GFP_KERNEL);
+-	if (!desc) {
+-		ret = -ENOMEM;
+-		goto free_tfm;
+-	}
+-
+-	desc->tfm = tfm;
+-
+-	size = offsetof(struct e820_table, entries) +
++	int size = offsetof(struct e820_table, entries) +
+ 		sizeof(struct e820_entry) * table->nr_entries;
+ 
+-	if (crypto_shash_digest(desc, (u8 *)table, size, buf))
+-		ret = -EINVAL;
+-
+-	kfree_sensitive(desc);
+-
+-free_tfm:
+-	crypto_free_shash(tfm);
+-	return ret;
+-}
+-
+-static int hibernation_e820_save(void *buf)
+-{
+-	return get_e820_md5(e820_table_firmware, buf);
+-}
+-
+-static bool hibernation_e820_mismatch(void *buf)
+-{
+-	int ret;
+-	u8 result[MD5_DIGEST_SIZE];
+-
+-	memset(result, 0, MD5_DIGEST_SIZE);
+-	/* If there is no digest in suspend kernel, let it go. */
+-	if (!memcmp(result, buf, MD5_DIGEST_SIZE))
+-		return false;
+-
+-	ret = get_e820_md5(e820_table_firmware, result);
+-	if (ret)
+-		return true;
+-
+-	return memcmp(result, buf, MD5_DIGEST_SIZE) ? true : false;
+-}
+-#else
+-static int hibernation_e820_save(void *buf)
+-{
+-	return 0;
+-}
+-
+-static bool hibernation_e820_mismatch(void *buf)
+-{
+-	/* If md5 is not builtin for restore kernel, let it go. */
+-	return false;
++	return ~crc32_le(~0, (unsigned char const *)table, size);
+ }
+-#endif
+ 
+ #ifdef CONFIG_X86_64
+-#define RESTORE_MAGIC	0x23456789ABCDEF01UL
++#define RESTORE_MAGIC	0x23456789ABCDEF02UL
+ #else
+-#define RESTORE_MAGIC	0x12345678UL
++#define RESTORE_MAGIC	0x12345679UL
+ #endif
+ 
+ /**
+@@ -179,7 +117,8 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)
+ 	 */
+ 	rdr->cr3 = restore_cr3 & ~CR3_PCID_MASK;
+ 
+-	return hibernation_e820_save(rdr->e820_digest);
++	rdr->e820_checksum = compute_e820_crc32(e820_table_firmware);
++	return 0;
+ }
+ 
+ /**
+@@ -200,7 +139,7 @@ int arch_hibernation_header_restore(void *addr)
+ 	jump_address_phys = rdr->jump_address_phys;
+ 	restore_cr3 = rdr->cr3;
+ 
+-	if (hibernation_e820_mismatch(rdr->e820_digest)) {
++	if (rdr->e820_checksum != compute_e820_crc32(e820_table_firmware)) {
+ 		pr_crit("Hibernate inconsistent memory map detected!\n");
+ 		return -ENODEV;
+ 	}
+diff --git a/crypto/async_tx/async_xor.c b/crypto/async_tx/async_xor.c
+index a057ecb1288d2..6cd7f7025df47 100644
+--- a/crypto/async_tx/async_xor.c
++++ b/crypto/async_tx/async_xor.c
+@@ -233,6 +233,7 @@ async_xor_offs(struct page *dest, unsigned int offset,
+ 		if (submit->flags & ASYNC_TX_XOR_DROP_DST) {
+ 			src_cnt--;
+ 			src_list++;
++			src_offs++;
+ 		}
+ 
+ 		/* wait for any prerequisite operations */
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 69057fcd2c047..a5e6fd0bafa1b 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -119,23 +119,15 @@ static DEFINE_PER_CPU(struct cpc_desc *, cpc_desc_ptr);
+  */
+ #define NUM_RETRIES 500ULL
+ 
+-struct cppc_attr {
+-	struct attribute attr;
+-	ssize_t (*show)(struct kobject *kobj,
+-			struct attribute *attr, char *buf);
+-	ssize_t (*store)(struct kobject *kobj,
+-			struct attribute *attr, const char *c, ssize_t count);
+-};
+-
+ #define define_one_cppc_ro(_name)		\
+-static struct cppc_attr _name =			\
++static struct kobj_attribute _name =		\
+ __ATTR(_name, 0444, show_##_name, NULL)
+ 
+ #define to_cpc_desc(a) container_of(a, struct cpc_desc, kobj)
+ 
+ #define show_cppc_data(access_fn, struct_name, member_name)		\
+ 	static ssize_t show_##member_name(struct kobject *kobj,		\
+-					struct attribute *attr,	char *buf) \
++				struct kobj_attribute *attr, char *buf)	\
+ 	{								\
+ 		struct cpc_desc *cpc_ptr = to_cpc_desc(kobj);		\
+ 		struct struct_name st_name = {0};			\
+@@ -161,7 +153,7 @@ show_cppc_data(cppc_get_perf_ctrs, cppc_perf_fb_ctrs, reference_perf);
+ show_cppc_data(cppc_get_perf_ctrs, cppc_perf_fb_ctrs, wraparound_time);
+ 
+ static ssize_t show_feedback_ctrs(struct kobject *kobj,
+-		struct attribute *attr, char *buf)
++		struct kobj_attribute *attr, char *buf)
+ {
+ 	struct cpc_desc *cpc_ptr = to_cpc_desc(kobj);
+ 	struct cppc_perf_fb_ctrs fb_ctrs = {0};
+diff --git a/drivers/ata/libahci_platform.c b/drivers/ata/libahci_platform.c
+index de638dafce21e..b2f5520882918 100644
+--- a/drivers/ata/libahci_platform.c
++++ b/drivers/ata/libahci_platform.c
+@@ -582,11 +582,13 @@ int ahci_platform_init_host(struct platform_device *pdev,
+ 	int i, irq, n_ports, rc;
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq <= 0) {
++	if (irq < 0) {
+ 		if (irq != -EPROBE_DEFER)
+ 			dev_err(dev, "no irq\n");
+ 		return irq;
+ 	}
++	if (!irq)
++		return -EINVAL;
+ 
+ 	hpriv->irq = irq;
+ 
+diff --git a/drivers/ata/pata_arasan_cf.c b/drivers/ata/pata_arasan_cf.c
+index e9cf31f384506..63f39440a9b42 100644
+--- a/drivers/ata/pata_arasan_cf.c
++++ b/drivers/ata/pata_arasan_cf.c
+@@ -818,12 +818,19 @@ static int arasan_cf_probe(struct platform_device *pdev)
+ 	else
+ 		quirk = CF_BROKEN_UDMA; /* as it is on spear1340 */
+ 
+-	/* if irq is 0, support only PIO */
+-	acdev->irq = platform_get_irq(pdev, 0);
+-	if (acdev->irq)
++	/*
++	 * If there's an error getting IRQ (or we do get IRQ0),
++	 * support only PIO
++	 */
++	ret = platform_get_irq(pdev, 0);
++	if (ret > 0) {
++		acdev->irq = ret;
+ 		irq_handler = arasan_cf_interrupt;
+-	else
++	} else	if (ret == -EPROBE_DEFER) {
++		return ret;
++	} else	{
+ 		quirk |= CF_BROKEN_MWDMA | CF_BROKEN_UDMA;
++	}
+ 
+ 	acdev->pbase = res->start;
+ 	acdev->vbase = devm_ioremap(&pdev->dev, res->start,
+diff --git a/drivers/ata/pata_ixp4xx_cf.c b/drivers/ata/pata_ixp4xx_cf.c
+index d1644a8ef9fa6..abc0e87ca1a8b 100644
+--- a/drivers/ata/pata_ixp4xx_cf.c
++++ b/drivers/ata/pata_ixp4xx_cf.c
+@@ -165,8 +165,12 @@ static int ixp4xx_pata_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (irq)
++	if (irq > 0)
+ 		irq_set_irq_type(irq, IRQ_TYPE_EDGE_RISING);
++	else if (irq < 0)
++		return irq;
++	else
++		return -EINVAL;
+ 
+ 	/* Setup expansion bus chip selects */
+ 	*data->cs0_cfg = data->cs0_bits;
+diff --git a/drivers/ata/sata_mv.c b/drivers/ata/sata_mv.c
+index 664ef658a955f..b62446ea5f408 100644
+--- a/drivers/ata/sata_mv.c
++++ b/drivers/ata/sata_mv.c
+@@ -4097,6 +4097,10 @@ static int mv_platform_probe(struct platform_device *pdev)
+ 		n_ports = mv_platform_data->n_ports;
+ 		irq = platform_get_irq(pdev, 0);
+ 	}
++	if (irq < 0)
++		return irq;
++	if (!irq)
++		return -EINVAL;
+ 
+ 	host = ata_host_alloc_pinfo(&pdev->dev, ppi, n_ports);
+ 	hpriv = devm_kzalloc(&pdev->dev, sizeof(*hpriv), GFP_KERNEL);
+diff --git a/drivers/base/devtmpfs.c b/drivers/base/devtmpfs.c
+index 653c8c6ac7a73..aedeb2dc1a182 100644
+--- a/drivers/base/devtmpfs.c
++++ b/drivers/base/devtmpfs.c
+@@ -419,7 +419,6 @@ static int __init devtmpfs_setup(void *p)
+ 	init_chroot(".");
+ out:
+ 	*(int *)p = err;
+-	complete(&setup_done);
+ 	return err;
+ }
+ 
+@@ -432,6 +431,7 @@ static int __ref devtmpfsd(void *p)
+ {
+ 	int err = devtmpfs_setup(p);
+ 
++	complete(&setup_done);
+ 	if (err)
+ 		return err;
+ 	devtmpfs_work_loop();
+diff --git a/drivers/base/node.c b/drivers/base/node.c
+index f449dbb2c7466..2c36f61d30bcb 100644
+--- a/drivers/base/node.c
++++ b/drivers/base/node.c
+@@ -268,21 +268,20 @@ static void node_init_cache_dev(struct node *node)
+ 	if (!dev)
+ 		return;
+ 
++	device_initialize(dev);
+ 	dev->parent = &node->dev;
+ 	dev->release = node_cache_release;
+ 	if (dev_set_name(dev, "memory_side_cache"))
+-		goto free_dev;
++		goto put_device;
+ 
+-	if (device_register(dev))
+-		goto free_name;
++	if (device_add(dev))
++		goto put_device;
+ 
+ 	pm_runtime_no_callbacks(dev);
+ 	node->cache_dev = dev;
+ 	return;
+-free_name:
+-	kfree_const(dev->kobj.name);
+-free_dev:
+-	kfree(dev);
++put_device:
++	put_device(dev);
+ }
+ 
+ /**
+@@ -319,25 +318,24 @@ void node_add_cache(unsigned int nid, struct node_cache_attrs *cache_attrs)
+ 		return;
+ 
+ 	dev = &info->dev;
++	device_initialize(dev);
+ 	dev->parent = node->cache_dev;
+ 	dev->release = node_cacheinfo_release;
+ 	dev->groups = cache_groups;
+ 	if (dev_set_name(dev, "index%d", cache_attrs->level))
+-		goto free_cache;
++		goto put_device;
+ 
+ 	info->cache_attrs = *cache_attrs;
+-	if (device_register(dev)) {
++	if (device_add(dev)) {
+ 		dev_warn(&node->dev, "failed to add cache level:%d\n",
+ 			 cache_attrs->level);
+-		goto free_name;
++		goto put_device;
+ 	}
+ 	pm_runtime_no_callbacks(dev);
+ 	list_add_tail(&info->node, &node->cache_attrs);
+ 	return;
+-free_name:
+-	kfree_const(dev->kobj.name);
+-free_cache:
+-	kfree(info);
++put_device:
++	put_device(dev);
+ }
+ 
+ static void node_remove_caches(struct node *node)
+diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
+index ff2ee87987c7e..211a335a608d7 100644
+--- a/drivers/base/regmap/regmap-debugfs.c
++++ b/drivers/base/regmap/regmap-debugfs.c
+@@ -660,6 +660,7 @@ void regmap_debugfs_exit(struct regmap *map)
+ 		regmap_debugfs_free_dump_cache(map);
+ 		mutex_unlock(&map->cache_lock);
+ 		kfree(map->debugfs_name);
++		map->debugfs_name = NULL;
+ 	} else {
+ 		struct regmap_debugfs_node *node, *tmp;
+ 
+diff --git a/drivers/base/swnode.c b/drivers/base/swnode.c
+index fa3719ef80e4d..88310ac9ce906 100644
+--- a/drivers/base/swnode.c
++++ b/drivers/base/swnode.c
+@@ -1032,6 +1032,7 @@ int device_add_software_node(struct device *dev, const struct software_node *nod
+ 	}
+ 
+ 	set_secondary_fwnode(dev, &swnode->fwnode);
++	software_node_notify(dev, KOBJ_ADD);
+ 
+ 	return 0;
+ }
+@@ -1105,8 +1106,8 @@ int software_node_notify(struct device *dev, unsigned long action)
+ 
+ 	switch (action) {
+ 	case KOBJ_ADD:
+-		ret = sysfs_create_link(&dev->kobj, &swnode->kobj,
+-					"software_node");
++		ret = sysfs_create_link_nowarn(&dev->kobj, &swnode->kobj,
++					       "software_node");
+ 		if (ret)
+ 			break;
+ 
+diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
+index 104b713f4055a..d601e49f80e07 100644
+--- a/drivers/block/ataflop.c
++++ b/drivers/block/ataflop.c
+@@ -729,8 +729,12 @@ static int do_format(int drive, int type, struct atari_format_descr *desc)
+ 	unsigned long	flags;
+ 	int ret;
+ 
+-	if (type)
++	if (type) {
+ 		type--;
++		if (type >= NUM_DISK_MINORS ||
++		    minor2disktype[type].drive_types > DriveType)
++			return -EINVAL;
++	}
+ 
+ 	q = unit[drive].disk[type]->queue;
+ 	blk_mq_freeze_queue(q);
+@@ -742,11 +746,6 @@ static int do_format(int drive, int type, struct atari_format_descr *desc)
+ 	local_irq_restore(flags);
+ 
+ 	if (type) {
+-		if (type >= NUM_DISK_MINORS ||
+-		    minor2disktype[type].drive_types > DriveType) {
+-			ret = -EINVAL;
+-			goto out;
+-		}
+ 		type = minor2disktype[type].index;
+ 		UDT = &atari_disk_type[type];
+ 	}
+@@ -2002,7 +2001,10 @@ static void ataflop_probe(dev_t dev)
+ 	int drive = MINOR(dev) & 3;
+ 	int type  = MINOR(dev) >> 2;
+ 
+-	if (drive >= FD_MAX_UNITS || type > NUM_DISK_MINORS)
++	if (type)
++		type--;
++
++	if (drive >= FD_MAX_UNITS || type >= NUM_DISK_MINORS)
+ 		return;
+ 	mutex_lock(&ataflop_probe_lock);
+ 	if (!unit[drive].disk[type]) {
+diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
+index bfcab1c782b53..dae54dd1aeac3 100644
+--- a/drivers/block/null_blk/zoned.c
++++ b/drivers/block/null_blk/zoned.c
+@@ -180,6 +180,7 @@ int null_register_zoned_dev(struct nullb *nullb)
+ void null_free_zoned_dev(struct nullb_device *dev)
+ {
+ 	kvfree(dev->zones);
++	dev->zones = NULL;
+ }
+ 
+ int null_report_zones(struct gendisk *disk, sector_t sector,
+diff --git a/drivers/block/rnbd/rnbd-clt-sysfs.c b/drivers/block/rnbd/rnbd-clt-sysfs.c
+index 526c77cd7a506..49ad400a52255 100644
+--- a/drivers/block/rnbd/rnbd-clt-sysfs.c
++++ b/drivers/block/rnbd/rnbd-clt-sysfs.c
+@@ -483,11 +483,7 @@ static int rnbd_clt_get_path_name(struct rnbd_clt_dev *dev, char *buf,
+ 	while ((s = strchr(pathname, '/')))
+ 		s[0] = '!';
+ 
+-	ret = snprintf(buf, len, "%s", pathname);
+-	if (ret >= len)
+-		return -ENAMETOOLONG;
+-
+-	ret = snprintf(buf, len, "%s@%s", buf, dev->sess->sessname);
++	ret = snprintf(buf, len, "%s@%s", pathname, dev->sess->sessname);
+ 	if (ret >= len)
+ 		return -ENAMETOOLONG;
+ 
+diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
+index b0c71d3a81a02..bda5c815e4415 100644
+--- a/drivers/block/xen-blkback/common.h
++++ b/drivers/block/xen-blkback/common.h
+@@ -313,6 +313,7 @@ struct xen_blkif {
+ 
+ 	struct work_struct	free_work;
+ 	unsigned int 		nr_ring_pages;
++	bool			multi_ref;
+ 	/* All rings for this device. */
+ 	struct xen_blkif_ring	*rings;
+ 	unsigned int		nr_rings;
+diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
+index c2aaf690352c7..125b22205d383 100644
+--- a/drivers/block/xen-blkback/xenbus.c
++++ b/drivers/block/xen-blkback/xenbus.c
+@@ -998,14 +998,17 @@ static int read_per_ring_refs(struct xen_blkif_ring *ring, const char *dir)
+ 	for (i = 0; i < nr_grefs; i++) {
+ 		char ring_ref_name[RINGREF_NAME_LEN];
+ 
+-		snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref%u", i);
++		if (blkif->multi_ref)
++			snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref%u", i);
++		else {
++			WARN_ON(i != 0);
++			snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref");
++		}
++
+ 		err = xenbus_scanf(XBT_NIL, dir, ring_ref_name,
+ 				   "%u", &ring_ref[i]);
+ 
+ 		if (err != 1) {
+-			if (nr_grefs == 1)
+-				break;
+-
+ 			err = -EINVAL;
+ 			xenbus_dev_fatal(dev, err, "reading %s/%s",
+ 					 dir, ring_ref_name);
+@@ -1013,18 +1016,6 @@ static int read_per_ring_refs(struct xen_blkif_ring *ring, const char *dir)
+ 		}
+ 	}
+ 
+-	if (err != 1) {
+-		WARN_ON(nr_grefs != 1);
+-
+-		err = xenbus_scanf(XBT_NIL, dir, "ring-ref", "%u",
+-				   &ring_ref[0]);
+-		if (err != 1) {
+-			err = -EINVAL;
+-			xenbus_dev_fatal(dev, err, "reading %s/ring-ref", dir);
+-			return err;
+-		}
+-	}
+-
+ 	err = -ENOMEM;
+ 	for (i = 0; i < nr_grefs * XEN_BLKIF_REQS_PER_PAGE; i++) {
+ 		req = kzalloc(sizeof(*req), GFP_KERNEL);
+@@ -1129,10 +1120,15 @@ static int connect_ring(struct backend_info *be)
+ 		 blkif->nr_rings, blkif->blk_protocol, protocol,
+ 		 blkif->vbd.feature_gnt_persistent ? "persistent grants" : "");
+ 
+-	ring_page_order = xenbus_read_unsigned(dev->otherend,
+-					       "ring-page-order", 0);
+-
+-	if (ring_page_order > xen_blkif_max_ring_order) {
++	err = xenbus_scanf(XBT_NIL, dev->otherend, "ring-page-order", "%u",
++			   &ring_page_order);
++	if (err != 1) {
++		blkif->nr_ring_pages = 1;
++		blkif->multi_ref = false;
++	} else if (ring_page_order <= xen_blkif_max_ring_order) {
++		blkif->nr_ring_pages = 1 << ring_page_order;
++		blkif->multi_ref = true;
++	} else {
+ 		err = -EINVAL;
+ 		xenbus_dev_fatal(dev, err,
+ 				 "requested ring page order %d exceed max:%d",
+@@ -1141,8 +1137,6 @@ static int connect_ring(struct backend_info *be)
+ 		return err;
+ 	}
+ 
+-	blkif->nr_ring_pages = 1 << ring_page_order;
+-
+ 	if (blkif->nr_rings == 1)
+ 		return read_per_ring_refs(&blkif->rings[0], dev->otherend);
+ 	else {
+diff --git a/drivers/bus/qcom-ebi2.c b/drivers/bus/qcom-ebi2.c
+index 03ddcf426887b..0b8f53a688b8a 100644
+--- a/drivers/bus/qcom-ebi2.c
++++ b/drivers/bus/qcom-ebi2.c
+@@ -353,8 +353,10 @@ static int qcom_ebi2_probe(struct platform_device *pdev)
+ 
+ 		/* Figure out the chipselect */
+ 		ret = of_property_read_u32(child, "reg", &csindex);
+-		if (ret)
++		if (ret) {
++			of_node_put(child);
+ 			return ret;
++		}
+ 
+ 		if (csindex > 5) {
+ 			dev_err(dev,
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 9e535336689fd..68145e326eb90 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -901,9 +901,6 @@ static int sysc_map_and_check_registers(struct sysc *ddata)
+ 	struct device_node *np = ddata->dev->of_node;
+ 	int error;
+ 
+-	if (!of_get_property(np, "reg", NULL))
+-		return 0;
+-
+ 	error = sysc_parse_and_check_child_range(ddata);
+ 	if (error)
+ 		return error;
+@@ -914,6 +911,9 @@ static int sysc_map_and_check_registers(struct sysc *ddata)
+ 
+ 	sysc_check_children(ddata);
+ 
++	if (!of_get_property(np, "reg", NULL))
++		return 0;
++
+ 	error = sysc_parse_registers(ddata);
+ 	if (error)
+ 		return error;
+diff --git a/drivers/char/tpm/tpm_tis_i2c_cr50.c b/drivers/char/tpm/tpm_tis_i2c_cr50.c
+index ec9a65e7887dd..f19c227d20f48 100644
+--- a/drivers/char/tpm/tpm_tis_i2c_cr50.c
++++ b/drivers/char/tpm/tpm_tis_i2c_cr50.c
+@@ -483,6 +483,7 @@ static int tpm_cr50_i2c_tis_recv(struct tpm_chip *chip, u8 *buf, size_t buf_len)
+ 	expected = be32_to_cpup((__be32 *)(buf + 2));
+ 	if (expected > buf_len) {
+ 		dev_err(&chip->dev, "Buffer too small to receive i2c data\n");
++		rc = -E2BIG;
+ 		goto out_err;
+ 	}
+ 
+diff --git a/drivers/char/ttyprintk.c b/drivers/char/ttyprintk.c
+index 6a0059e508e38..93f5d11c830b7 100644
+--- a/drivers/char/ttyprintk.c
++++ b/drivers/char/ttyprintk.c
+@@ -158,12 +158,23 @@ static int tpk_ioctl(struct tty_struct *tty,
+ 	return 0;
+ }
+ 
++/*
++ * TTY operations hangup function.
++ */
++static void tpk_hangup(struct tty_struct *tty)
++{
++	struct ttyprintk_port *tpkp = tty->driver_data;
++
++	tty_port_hangup(&tpkp->port);
++}
++
+ static const struct tty_operations ttyprintk_ops = {
+ 	.open = tpk_open,
+ 	.close = tpk_close,
+ 	.write = tpk_write,
+ 	.write_room = tpk_write_room,
+ 	.ioctl = tpk_ioctl,
++	.hangup = tpk_hangup,
+ };
+ 
+ static const struct tty_port_operations null_ops = { };
+diff --git a/drivers/clk/clk-ast2600.c b/drivers/clk/clk-ast2600.c
+index a55b37fc2c8bd..bc3be5f3eae15 100644
+--- a/drivers/clk/clk-ast2600.c
++++ b/drivers/clk/clk-ast2600.c
+@@ -61,10 +61,10 @@ static void __iomem *scu_g6_base;
+ static const struct aspeed_gate_data aspeed_g6_gates[] = {
+ 	/*				    clk rst  name		parent	 flags */
+ 	[ASPEED_CLK_GATE_MCLK]		= {  0, -1, "mclk-gate",	"mpll",	 CLK_IS_CRITICAL }, /* SDRAM */
+-	[ASPEED_CLK_GATE_ECLK]		= {  1, -1, "eclk-gate",	"eclk",	 0 },	/* Video Engine */
++	[ASPEED_CLK_GATE_ECLK]		= {  1,  6, "eclk-gate",	"eclk",	 0 },	/* Video Engine */
+ 	[ASPEED_CLK_GATE_GCLK]		= {  2,  7, "gclk-gate",	NULL,	 0 },	/* 2D engine */
+ 	/* vclk parent - dclk/d1clk/hclk/mclk */
+-	[ASPEED_CLK_GATE_VCLK]		= {  3,  6, "vclk-gate",	NULL,	 0 },	/* Video Capture */
++	[ASPEED_CLK_GATE_VCLK]		= {  3, -1, "vclk-gate",	NULL,	 0 },	/* Video Capture */
+ 	[ASPEED_CLK_GATE_BCLK]		= {  4,  8, "bclk-gate",	"bclk",	 0 }, /* PCIe/PCI */
+ 	/* From dpll */
+ 	[ASPEED_CLK_GATE_DCLK]		= {  5, -1, "dclk-gate",	NULL,	 CLK_IS_CRITICAL }, /* DAC */
+diff --git a/drivers/clk/imx/clk-imx25.c b/drivers/clk/imx/clk-imx25.c
+index a66cabfbf94f1..66192fe0a898c 100644
+--- a/drivers/clk/imx/clk-imx25.c
++++ b/drivers/clk/imx/clk-imx25.c
+@@ -73,16 +73,6 @@ enum mx25_clks {
+ 
+ static struct clk *clk[clk_max];
+ 
+-static struct clk ** const uart_clks[] __initconst = {
+-	&clk[uart_ipg_per],
+-	&clk[uart1_ipg],
+-	&clk[uart2_ipg],
+-	&clk[uart3_ipg],
+-	&clk[uart4_ipg],
+-	&clk[uart5_ipg],
+-	NULL
+-};
+-
+ static int __init __mx25_clocks_init(void __iomem *ccm_base)
+ {
+ 	BUG_ON(!ccm_base);
+@@ -228,7 +218,7 @@ static int __init __mx25_clocks_init(void __iomem *ccm_base)
+ 	 */
+ 	clk_set_parent(clk[cko_sel], clk[ipg]);
+ 
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(6);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/clk/imx/clk-imx27.c b/drivers/clk/imx/clk-imx27.c
+index 5585ded8b8c6f..56a5fc402b10c 100644
+--- a/drivers/clk/imx/clk-imx27.c
++++ b/drivers/clk/imx/clk-imx27.c
+@@ -49,17 +49,6 @@ static const char *ssi_sel_clks[] = { "spll_gate", "mpll", };
+ static struct clk *clk[IMX27_CLK_MAX];
+ static struct clk_onecell_data clk_data;
+ 
+-static struct clk ** const uart_clks[] __initconst = {
+-	&clk[IMX27_CLK_PER1_GATE],
+-	&clk[IMX27_CLK_UART1_IPG_GATE],
+-	&clk[IMX27_CLK_UART2_IPG_GATE],
+-	&clk[IMX27_CLK_UART3_IPG_GATE],
+-	&clk[IMX27_CLK_UART4_IPG_GATE],
+-	&clk[IMX27_CLK_UART5_IPG_GATE],
+-	&clk[IMX27_CLK_UART6_IPG_GATE],
+-	NULL
+-};
+-
+ static void __init _mx27_clocks_init(unsigned long fref)
+ {
+ 	BUG_ON(!ccm);
+@@ -176,7 +165,7 @@ static void __init _mx27_clocks_init(unsigned long fref)
+ 
+ 	clk_prepare_enable(clk[IMX27_CLK_EMI_AHB_GATE]);
+ 
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(7);
+ 
+ 	imx_print_silicon_rev("i.MX27", mx27_revision());
+ }
+diff --git a/drivers/clk/imx/clk-imx35.c b/drivers/clk/imx/clk-imx35.c
+index c1df03665c09a..0fe5ac2101566 100644
+--- a/drivers/clk/imx/clk-imx35.c
++++ b/drivers/clk/imx/clk-imx35.c
+@@ -82,14 +82,6 @@ enum mx35_clks {
+ 
+ static struct clk *clk[clk_max];
+ 
+-static struct clk ** const uart_clks[] __initconst = {
+-	&clk[ipg],
+-	&clk[uart1_gate],
+-	&clk[uart2_gate],
+-	&clk[uart3_gate],
+-	NULL
+-};
+-
+ static void __init _mx35_clocks_init(void)
+ {
+ 	void __iomem *base;
+@@ -243,7 +235,7 @@ static void __init _mx35_clocks_init(void)
+ 	 */
+ 	clk_prepare_enable(clk[scc_gate]);
+ 
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(4);
+ 
+ 	imx_print_silicon_rev("i.MX35", mx35_revision());
+ }
+diff --git a/drivers/clk/imx/clk-imx5.c b/drivers/clk/imx/clk-imx5.c
+index 01e079b810261..e4493846454dd 100644
+--- a/drivers/clk/imx/clk-imx5.c
++++ b/drivers/clk/imx/clk-imx5.c
+@@ -128,30 +128,6 @@ static const char *ieee1588_sels[] = { "pll3_sw", "pll4_sw", "dummy" /* usbphy2_
+ static struct clk *clk[IMX5_CLK_END];
+ static struct clk_onecell_data clk_data;
+ 
+-static struct clk ** const uart_clks_mx51[] __initconst = {
+-	&clk[IMX5_CLK_UART1_IPG_GATE],
+-	&clk[IMX5_CLK_UART1_PER_GATE],
+-	&clk[IMX5_CLK_UART2_IPG_GATE],
+-	&clk[IMX5_CLK_UART2_PER_GATE],
+-	&clk[IMX5_CLK_UART3_IPG_GATE],
+-	&clk[IMX5_CLK_UART3_PER_GATE],
+-	NULL
+-};
+-
+-static struct clk ** const uart_clks_mx50_mx53[] __initconst = {
+-	&clk[IMX5_CLK_UART1_IPG_GATE],
+-	&clk[IMX5_CLK_UART1_PER_GATE],
+-	&clk[IMX5_CLK_UART2_IPG_GATE],
+-	&clk[IMX5_CLK_UART2_PER_GATE],
+-	&clk[IMX5_CLK_UART3_IPG_GATE],
+-	&clk[IMX5_CLK_UART3_PER_GATE],
+-	&clk[IMX5_CLK_UART4_IPG_GATE],
+-	&clk[IMX5_CLK_UART4_PER_GATE],
+-	&clk[IMX5_CLK_UART5_IPG_GATE],
+-	&clk[IMX5_CLK_UART5_PER_GATE],
+-	NULL
+-};
+-
+ static void __init mx5_clocks_common_init(void __iomem *ccm_base)
+ {
+ 	clk[IMX5_CLK_DUMMY]		= imx_clk_fixed("dummy", 0);
+@@ -382,7 +358,7 @@ static void __init mx50_clocks_init(struct device_node *np)
+ 	r = clk_round_rate(clk[IMX5_CLK_USBOH3_PER_GATE], 54000000);
+ 	clk_set_rate(clk[IMX5_CLK_USBOH3_PER_GATE], r);
+ 
+-	imx_register_uart_clocks(uart_clks_mx50_mx53);
++	imx_register_uart_clocks(5);
+ }
+ CLK_OF_DECLARE(imx50_ccm, "fsl,imx50-ccm", mx50_clocks_init);
+ 
+@@ -488,7 +464,7 @@ static void __init mx51_clocks_init(struct device_node *np)
+ 	val |= 1 << 23;
+ 	writel(val, MXC_CCM_CLPCR);
+ 
+-	imx_register_uart_clocks(uart_clks_mx51);
++	imx_register_uart_clocks(3);
+ }
+ CLK_OF_DECLARE(imx51_ccm, "fsl,imx51-ccm", mx51_clocks_init);
+ 
+@@ -633,6 +609,6 @@ static void __init mx53_clocks_init(struct device_node *np)
+ 	r = clk_round_rate(clk[IMX5_CLK_USBOH3_PER_GATE], 54000000);
+ 	clk_set_rate(clk[IMX5_CLK_USBOH3_PER_GATE], r);
+ 
+-	imx_register_uart_clocks(uart_clks_mx50_mx53);
++	imx_register_uart_clocks(5);
+ }
+ CLK_OF_DECLARE(imx53_ccm, "fsl,imx53-ccm", mx53_clocks_init);
+diff --git a/drivers/clk/imx/clk-imx6q.c b/drivers/clk/imx/clk-imx6q.c
+index 521d6136d22c5..496900de0b0bb 100644
+--- a/drivers/clk/imx/clk-imx6q.c
++++ b/drivers/clk/imx/clk-imx6q.c
+@@ -140,13 +140,6 @@ static inline int clk_on_imx6dl(void)
+ 	return of_machine_is_compatible("fsl,imx6dl");
+ }
+ 
+-static const int uart_clk_ids[] __initconst = {
+-	IMX6QDL_CLK_UART_IPG,
+-	IMX6QDL_CLK_UART_SERIAL,
+-};
+-
+-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1] __initdata;
+-
+ static int ldb_di_sel_by_clock_id(int clock_id)
+ {
+ 	switch (clock_id) {
+@@ -440,7 +433,6 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
+ 	struct device_node *np;
+ 	void __iomem *anatop_base, *base;
+ 	int ret;
+-	int i;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX6QDL_CLK_END), GFP_KERNEL);
+@@ -982,12 +974,6 @@ static void __init imx6q_clocks_init(struct device_node *ccm_node)
+ 			       hws[IMX6QDL_CLK_PLL3_USB_OTG]->clk);
+ 	}
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(1);
+ }
+ CLK_OF_DECLARE(imx6q, "fsl,imx6q-ccm", imx6q_clocks_init);
+diff --git a/drivers/clk/imx/clk-imx6sl.c b/drivers/clk/imx/clk-imx6sl.c
+index 29eab05c90689..2773659703202 100644
+--- a/drivers/clk/imx/clk-imx6sl.c
++++ b/drivers/clk/imx/clk-imx6sl.c
+@@ -179,19 +179,11 @@ void imx6sl_set_wait_clk(bool enter)
+ 		imx6sl_enable_pll_arm(false);
+ }
+ 
+-static const int uart_clk_ids[] __initconst = {
+-	IMX6SL_CLK_UART,
+-	IMX6SL_CLK_UART_SERIAL,
+-};
+-
+-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1] __initdata;
+-
+ static void __init imx6sl_clocks_init(struct device_node *ccm_node)
+ {
+ 	struct device_node *np;
+ 	void __iomem *base;
+ 	int ret;
+-	int i;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX6SL_CLK_END), GFP_KERNEL);
+@@ -448,12 +440,6 @@ static void __init imx6sl_clocks_init(struct device_node *ccm_node)
+ 	clk_set_parent(hws[IMX6SL_CLK_LCDIF_AXI_SEL]->clk,
+ 		       hws[IMX6SL_CLK_PLL2_PFD2]->clk);
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(2);
+ }
+ CLK_OF_DECLARE(imx6sl, "fsl,imx6sl-ccm", imx6sl_clocks_init);
+diff --git a/drivers/clk/imx/clk-imx6sll.c b/drivers/clk/imx/clk-imx6sll.c
+index 8e8288bda4d0b..31d777f300395 100644
+--- a/drivers/clk/imx/clk-imx6sll.c
++++ b/drivers/clk/imx/clk-imx6sll.c
+@@ -76,26 +76,10 @@ static u32 share_count_ssi1;
+ static u32 share_count_ssi2;
+ static u32 share_count_ssi3;
+ 
+-static const int uart_clk_ids[] __initconst = {
+-	IMX6SLL_CLK_UART1_IPG,
+-	IMX6SLL_CLK_UART1_SERIAL,
+-	IMX6SLL_CLK_UART2_IPG,
+-	IMX6SLL_CLK_UART2_SERIAL,
+-	IMX6SLL_CLK_UART3_IPG,
+-	IMX6SLL_CLK_UART3_SERIAL,
+-	IMX6SLL_CLK_UART4_IPG,
+-	IMX6SLL_CLK_UART4_SERIAL,
+-	IMX6SLL_CLK_UART5_IPG,
+-	IMX6SLL_CLK_UART5_SERIAL,
+-};
+-
+-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1] __initdata;
+-
+ static void __init imx6sll_clocks_init(struct device_node *ccm_node)
+ {
+ 	struct device_node *np;
+ 	void __iomem *base;
+-	int i;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX6SLL_CLK_END), GFP_KERNEL);
+@@ -356,13 +340,7 @@ static void __init imx6sll_clocks_init(struct device_node *ccm_node)
+ 
+ 	of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data);
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(5);
+ 
+ 	/* Lower the AHB clock rate before changing the clock source. */
+ 	clk_set_rate(hws[IMX6SLL_CLK_AHB]->clk, 99000000);
+diff --git a/drivers/clk/imx/clk-imx6sx.c b/drivers/clk/imx/clk-imx6sx.c
+index 20dcce526d072..fc1bd23d45834 100644
+--- a/drivers/clk/imx/clk-imx6sx.c
++++ b/drivers/clk/imx/clk-imx6sx.c
+@@ -117,18 +117,10 @@ static u32 share_count_ssi3;
+ static u32 share_count_sai1;
+ static u32 share_count_sai2;
+ 
+-static const int uart_clk_ids[] __initconst = {
+-	IMX6SX_CLK_UART_IPG,
+-	IMX6SX_CLK_UART_SERIAL,
+-};
+-
+-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1] __initdata;
+-
+ static void __init imx6sx_clocks_init(struct device_node *ccm_node)
+ {
+ 	struct device_node *np;
+ 	void __iomem *base;
+-	int i;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX6SX_CLK_CLK_END), GFP_KERNEL);
+@@ -556,12 +548,6 @@ static void __init imx6sx_clocks_init(struct device_node *ccm_node)
+ 	clk_set_parent(hws[IMX6SX_CLK_QSPI1_SEL]->clk, hws[IMX6SX_CLK_PLL2_BUS]->clk);
+ 	clk_set_parent(hws[IMX6SX_CLK_QSPI2_SEL]->clk, hws[IMX6SX_CLK_PLL2_BUS]->clk);
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(2);
+ }
+ CLK_OF_DECLARE(imx6sx, "fsl,imx6sx-ccm", imx6sx_clocks_init);
+diff --git a/drivers/clk/imx/clk-imx7d.c b/drivers/clk/imx/clk-imx7d.c
+index 22d24a6a05e70..c4e0f1c07192f 100644
+--- a/drivers/clk/imx/clk-imx7d.c
++++ b/drivers/clk/imx/clk-imx7d.c
+@@ -377,23 +377,10 @@ static const char *pll_video_bypass_sel[] = { "pll_video_main", "pll_video_main_
+ static struct clk_hw **hws;
+ static struct clk_hw_onecell_data *clk_hw_data;
+ 
+-static const int uart_clk_ids[] __initconst = {
+-	IMX7D_UART1_ROOT_CLK,
+-	IMX7D_UART2_ROOT_CLK,
+-	IMX7D_UART3_ROOT_CLK,
+-	IMX7D_UART4_ROOT_CLK,
+-	IMX7D_UART5_ROOT_CLK,
+-	IMX7D_UART6_ROOT_CLK,
+-	IMX7D_UART7_ROOT_CLK,
+-};
+-
+-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1] __initdata;
+-
+ static void __init imx7d_clocks_init(struct device_node *ccm_node)
+ {
+ 	struct device_node *np;
+ 	void __iomem *base;
+-	int i;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX7D_CLK_END), GFP_KERNEL);
+@@ -897,14 +884,7 @@ static void __init imx7d_clocks_init(struct device_node *ccm_node)
+ 	hws[IMX7D_USB1_MAIN_480M_CLK] = imx_clk_hw_fixed_factor("pll_usb1_main_clk", "osc", 20, 1);
+ 	hws[IMX7D_USB_MAIN_480M_CLK] = imx_clk_hw_fixed_factor("pll_usb_main_clk", "osc", 20, 1);
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(7);
+ 
+ }
+ CLK_OF_DECLARE(imx7d, "fsl,imx7d-ccm", imx7d_clocks_init);
+diff --git a/drivers/clk/imx/clk-imx7ulp.c b/drivers/clk/imx/clk-imx7ulp.c
+index 634c0b6636b0e..779e09105da7d 100644
+--- a/drivers/clk/imx/clk-imx7ulp.c
++++ b/drivers/clk/imx/clk-imx7ulp.c
+@@ -43,19 +43,6 @@ static const struct clk_div_table ulp_div_table[] = {
+ 	{ /* sentinel */ },
+ };
+ 
+-static const int pcc2_uart_clk_ids[] __initconst = {
+-	IMX7ULP_CLK_LPUART4,
+-	IMX7ULP_CLK_LPUART5,
+-};
+-
+-static const int pcc3_uart_clk_ids[] __initconst = {
+-	IMX7ULP_CLK_LPUART6,
+-	IMX7ULP_CLK_LPUART7,
+-};
+-
+-static struct clk **pcc2_uart_clks[ARRAY_SIZE(pcc2_uart_clk_ids) + 1] __initdata;
+-static struct clk **pcc3_uart_clks[ARRAY_SIZE(pcc3_uart_clk_ids) + 1] __initdata;
+-
+ static void __init imx7ulp_clk_scg1_init(struct device_node *np)
+ {
+ 	struct clk_hw_onecell_data *clk_data;
+@@ -150,7 +137,6 @@ static void __init imx7ulp_clk_pcc2_init(struct device_node *np)
+ 	struct clk_hw_onecell_data *clk_data;
+ 	struct clk_hw **hws;
+ 	void __iomem *base;
+-	int i;
+ 
+ 	clk_data = kzalloc(struct_size(clk_data, hws, IMX7ULP_CLK_PCC2_END),
+ 			   GFP_KERNEL);
+@@ -190,13 +176,7 @@ static void __init imx7ulp_clk_pcc2_init(struct device_node *np)
+ 
+ 	of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_data);
+ 
+-	for (i = 0; i < ARRAY_SIZE(pcc2_uart_clk_ids); i++) {
+-		int index = pcc2_uart_clk_ids[i];
+-
+-		pcc2_uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(pcc2_uart_clks);
++	imx_register_uart_clocks(2);
+ }
+ CLK_OF_DECLARE(imx7ulp_clk_pcc2, "fsl,imx7ulp-pcc2", imx7ulp_clk_pcc2_init);
+ 
+@@ -205,7 +185,6 @@ static void __init imx7ulp_clk_pcc3_init(struct device_node *np)
+ 	struct clk_hw_onecell_data *clk_data;
+ 	struct clk_hw **hws;
+ 	void __iomem *base;
+-	int i;
+ 
+ 	clk_data = kzalloc(struct_size(clk_data, hws, IMX7ULP_CLK_PCC3_END),
+ 			   GFP_KERNEL);
+@@ -244,13 +223,7 @@ static void __init imx7ulp_clk_pcc3_init(struct device_node *np)
+ 
+ 	of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_data);
+ 
+-	for (i = 0; i < ARRAY_SIZE(pcc3_uart_clk_ids); i++) {
+-		int index = pcc3_uart_clk_ids[i];
+-
+-		pcc3_uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(pcc3_uart_clks);
++	imx_register_uart_clocks(7);
+ }
+ CLK_OF_DECLARE(imx7ulp_clk_pcc3, "fsl,imx7ulp-pcc3", imx7ulp_clk_pcc3_init);
+ 
+diff --git a/drivers/clk/imx/clk-imx8mm.c b/drivers/clk/imx/clk-imx8mm.c
+index 6a01eec36dd04..f1919fafb1247 100644
+--- a/drivers/clk/imx/clk-imx8mm.c
++++ b/drivers/clk/imx/clk-imx8mm.c
+@@ -296,20 +296,12 @@ static const char * const clkout_sels[] = {"audio_pll1_out", "audio_pll2_out", "
+ static struct clk_hw_onecell_data *clk_hw_data;
+ static struct clk_hw **hws;
+ 
+-static const int uart_clk_ids[] = {
+-	IMX8MM_CLK_UART1_ROOT,
+-	IMX8MM_CLK_UART2_ROOT,
+-	IMX8MM_CLK_UART3_ROOT,
+-	IMX8MM_CLK_UART4_ROOT,
+-};
+-static struct clk **uart_hws[ARRAY_SIZE(uart_clk_ids) + 1];
+-
+ static int imx8mm_clocks_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *np = dev->of_node;
+ 	void __iomem *base;
+-	int ret, i;
++	int ret;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX8MM_CLK_END), GFP_KERNEL);
+@@ -634,13 +626,7 @@ static int imx8mm_clocks_probe(struct platform_device *pdev)
+ 		goto unregister_hws;
+ 	}
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_hws[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_hws);
++	imx_register_uart_clocks(4);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/clk/imx/clk-imx8mn.c b/drivers/clk/imx/clk-imx8mn.c
+index 324c5fd0aa04f..88f6630cd472f 100644
+--- a/drivers/clk/imx/clk-imx8mn.c
++++ b/drivers/clk/imx/clk-imx8mn.c
+@@ -289,20 +289,12 @@ static const char * const clkout_sels[] = {"audio_pll1_out", "audio_pll2_out", "
+ static struct clk_hw_onecell_data *clk_hw_data;
+ static struct clk_hw **hws;
+ 
+-static const int uart_clk_ids[] = {
+-	IMX8MN_CLK_UART1_ROOT,
+-	IMX8MN_CLK_UART2_ROOT,
+-	IMX8MN_CLK_UART3_ROOT,
+-	IMX8MN_CLK_UART4_ROOT,
+-};
+-static struct clk **uart_hws[ARRAY_SIZE(uart_clk_ids) + 1];
+-
+ static int imx8mn_clocks_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *np = dev->of_node;
+ 	void __iomem *base;
+-	int ret, i;
++	int ret;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX8MN_CLK_END), GFP_KERNEL);
+@@ -585,13 +577,7 @@ static int imx8mn_clocks_probe(struct platform_device *pdev)
+ 		goto unregister_hws;
+ 	}
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_hws[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_hws);
++	imx_register_uart_clocks(4);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
+index 2f4e1d674e1c1..3e6557e7d559b 100644
+--- a/drivers/clk/imx/clk-imx8mp.c
++++ b/drivers/clk/imx/clk-imx8mp.c
+@@ -414,20 +414,11 @@ static const char * const imx8mp_dram_core_sels[] = {"dram_pll_out", "dram_alt_r
+ static struct clk_hw **hws;
+ static struct clk_hw_onecell_data *clk_hw_data;
+ 
+-static const int uart_clk_ids[] = {
+-	IMX8MP_CLK_UART1_ROOT,
+-	IMX8MP_CLK_UART2_ROOT,
+-	IMX8MP_CLK_UART3_ROOT,
+-	IMX8MP_CLK_UART4_ROOT,
+-};
+-static struct clk **uart_clks[ARRAY_SIZE(uart_clk_ids) + 1];
+-
+ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *np;
+ 	void __iomem *anatop_base, *ccm_base;
+-	int i;
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx8mp-anatop");
+ 	anatop_base = of_iomap(np, 0);
+@@ -737,13 +728,7 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 
+ 	of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data);
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_clks[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_clks);
++	imx_register_uart_clocks(4);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/clk/imx/clk-imx8mq.c b/drivers/clk/imx/clk-imx8mq.c
+index 4dd4ae9d022ba..3e1a10d3f55cc 100644
+--- a/drivers/clk/imx/clk-imx8mq.c
++++ b/drivers/clk/imx/clk-imx8mq.c
+@@ -281,20 +281,12 @@ static const char * const pllout_monitor_sels[] = {"osc_25m", "osc_27m", "dummy"
+ static struct clk_hw_onecell_data *clk_hw_data;
+ static struct clk_hw **hws;
+ 
+-static const int uart_clk_ids[] = {
+-	IMX8MQ_CLK_UART1_ROOT,
+-	IMX8MQ_CLK_UART2_ROOT,
+-	IMX8MQ_CLK_UART3_ROOT,
+-	IMX8MQ_CLK_UART4_ROOT,
+-};
+-static struct clk **uart_hws[ARRAY_SIZE(uart_clk_ids) + 1];
+-
+ static int imx8mq_clocks_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *np = dev->of_node;
+ 	void __iomem *base;
+-	int err, i;
++	int err;
+ 
+ 	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
+ 					  IMX8MQ_CLK_END), GFP_KERNEL);
+@@ -629,13 +621,7 @@ static int imx8mq_clocks_probe(struct platform_device *pdev)
+ 		goto unregister_hws;
+ 	}
+ 
+-	for (i = 0; i < ARRAY_SIZE(uart_clk_ids); i++) {
+-		int index = uart_clk_ids[i];
+-
+-		uart_hws[i] = &hws[index]->clk;
+-	}
+-
+-	imx_register_uart_clocks(uart_hws);
++	imx_register_uart_clocks(4);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/clk/imx/clk.c b/drivers/clk/imx/clk.c
+index 47882c51cb853..7cc669934253a 100644
+--- a/drivers/clk/imx/clk.c
++++ b/drivers/clk/imx/clk.c
+@@ -147,8 +147,10 @@ void imx_cscmr1_fixup(u32 *val)
+ }
+ 
+ #ifndef MODULE
+-static int imx_keep_uart_clocks;
+-static struct clk ** const *imx_uart_clocks;
++
++static bool imx_keep_uart_clocks;
++static int imx_enabled_uart_clocks;
++static struct clk **imx_uart_clocks;
+ 
+ static int __init imx_keep_uart_clocks_param(char *str)
+ {
+@@ -161,24 +163,45 @@ __setup_param("earlycon", imx_keep_uart_earlycon,
+ __setup_param("earlyprintk", imx_keep_uart_earlyprintk,
+ 	      imx_keep_uart_clocks_param, 0);
+ 
+-void imx_register_uart_clocks(struct clk ** const clks[])
++void imx_register_uart_clocks(unsigned int clk_count)
+ {
++	imx_enabled_uart_clocks = 0;
++
++/* i.MX boards use device trees now.  For build tests without CONFIG_OF, do nothing */
++#ifdef CONFIG_OF
+ 	if (imx_keep_uart_clocks) {
+ 		int i;
+ 
+-		imx_uart_clocks = clks;
+-		for (i = 0; imx_uart_clocks[i]; i++)
+-			clk_prepare_enable(*imx_uart_clocks[i]);
++		imx_uart_clocks = kcalloc(clk_count, sizeof(struct clk *), GFP_KERNEL);
++
++		if (!of_stdout)
++			return;
++
++		for (i = 0; i < clk_count; i++) {
++			imx_uart_clocks[imx_enabled_uart_clocks] = of_clk_get(of_stdout, i);
++
++			/* Stop if there are no more of_stdout references */
++			if (IS_ERR(imx_uart_clocks[imx_enabled_uart_clocks]))
++				return;
++
++			/* Only enable the clock if it's not NULL */
++			if (imx_uart_clocks[imx_enabled_uart_clocks])
++				clk_prepare_enable(imx_uart_clocks[imx_enabled_uart_clocks++]);
++		}
+ 	}
++#endif
+ }
+ 
+ static int __init imx_clk_disable_uart(void)
+ {
+-	if (imx_keep_uart_clocks && imx_uart_clocks) {
++	if (imx_keep_uart_clocks && imx_enabled_uart_clocks) {
+ 		int i;
+ 
+-		for (i = 0; imx_uart_clocks[i]; i++)
+-			clk_disable_unprepare(*imx_uart_clocks[i]);
++		for (i = 0; i < imx_enabled_uart_clocks; i++) {
++			clk_disable_unprepare(imx_uart_clocks[i]);
++			clk_put(imx_uart_clocks[i]);
++		}
++		kfree(imx_uart_clocks);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/clk/imx/clk.h b/drivers/clk/imx/clk.h
+index 4f04c8287286f..7571603bee23b 100644
+--- a/drivers/clk/imx/clk.h
++++ b/drivers/clk/imx/clk.h
+@@ -11,9 +11,9 @@ extern spinlock_t imx_ccm_lock;
+ void imx_check_clocks(struct clk *clks[], unsigned int count);
+ void imx_check_clk_hws(struct clk_hw *clks[], unsigned int count);
+ #ifndef MODULE
+-void imx_register_uart_clocks(struct clk ** const clks[]);
++void imx_register_uart_clocks(unsigned int clk_count);
+ #else
+-static inline void imx_register_uart_clocks(struct clk ** const clks[])
++static inline void imx_register_uart_clocks(unsigned int clk_count)
+ {
+ }
+ #endif
+diff --git a/drivers/clk/mvebu/armada-37xx-periph.c b/drivers/clk/mvebu/armada-37xx-periph.c
+index f5746f9ea929f..32ac6b6b75306 100644
+--- a/drivers/clk/mvebu/armada-37xx-periph.c
++++ b/drivers/clk/mvebu/armada-37xx-periph.c
+@@ -84,6 +84,7 @@ struct clk_pm_cpu {
+ 	void __iomem *reg_div;
+ 	u8 shift_div;
+ 	struct regmap *nb_pm_base;
++	unsigned long l1_expiration;
+ };
+ 
+ #define to_clk_double_div(_hw) container_of(_hw, struct clk_double_div, hw)
+@@ -440,33 +441,6 @@ static u8 clk_pm_cpu_get_parent(struct clk_hw *hw)
+ 	return val;
+ }
+ 
+-static int clk_pm_cpu_set_parent(struct clk_hw *hw, u8 index)
+-{
+-	struct clk_pm_cpu *pm_cpu = to_clk_pm_cpu(hw);
+-	struct regmap *base = pm_cpu->nb_pm_base;
+-	int load_level;
+-
+-	/*
+-	 * We set the clock parent only if the DVFS is available but
+-	 * not enabled.
+-	 */
+-	if (IS_ERR(base) || armada_3700_pm_dvfs_is_enabled(base))
+-		return -EINVAL;
+-
+-	/* Set the parent clock for all the load level */
+-	for (load_level = 0; load_level < LOAD_LEVEL_NR; load_level++) {
+-		unsigned int reg, mask,  val,
+-			offset = ARMADA_37XX_NB_TBG_SEL_OFF;
+-
+-		armada_3700_pm_dvfs_update_regs(load_level, &reg, &offset);
+-
+-		val = index << offset;
+-		mask = ARMADA_37XX_NB_TBG_SEL_MASK << offset;
+-		regmap_update_bits(base, reg, mask, val);
+-	}
+-	return 0;
+-}
+-
+ static unsigned long clk_pm_cpu_recalc_rate(struct clk_hw *hw,
+ 					    unsigned long parent_rate)
+ {
+@@ -514,8 +488,10 @@ static long clk_pm_cpu_round_rate(struct clk_hw *hw, unsigned long rate,
+ }
+ 
+ /*
+- * Switching the CPU from the L2 or L3 frequencies (300 and 200 Mhz
+- * respectively) to L0 frequency (1.2 Ghz) requires a significant
++ * Workaround when base CPU frequency is 1000 or 1200 MHz
++ *
++ * Switching the CPU from the L2 or L3 frequencies (250/300 or 200 MHz
++ * respectively) to L0 frequency (1/1.2 GHz) requires a significant
+  * amount of time to let VDD stabilize to the appropriate
+  * voltage. This amount of time is large enough that it cannot be
+  * covered by the hardware countdown register. Due to this, the CPU
+@@ -525,26 +501,56 @@ static long clk_pm_cpu_round_rate(struct clk_hw *hw, unsigned long rate,
+  * To work around this problem, we prevent switching directly from the
+  * L2/L3 frequencies to the L0 frequency, and instead switch to the L1
+  * frequency in-between. The sequence therefore becomes:
+- * 1. First switch from L2/L3(200/300MHz) to L1(600MHZ)
++ * 1. First switch from L2/L3 (200/250/300 MHz) to L1 (500/600 MHz)
+  * 2. Sleep 20ms for stabling VDD voltage
+- * 3. Then switch from L1(600MHZ) to L0(1200Mhz).
++ * 3. Then switch from L1 (500/600 MHz) to L0 (1000/1200 MHz).
+  */
+-static void clk_pm_cpu_set_rate_wa(unsigned long rate, struct regmap *base)
++static void clk_pm_cpu_set_rate_wa(struct clk_pm_cpu *pm_cpu,
++				   unsigned int new_level, unsigned long rate,
++				   struct regmap *base)
+ {
+ 	unsigned int cur_level;
+ 
+-	if (rate != 1200 * 1000 * 1000)
+-		return;
+-
+ 	regmap_read(base, ARMADA_37XX_NB_CPU_LOAD, &cur_level);
+ 	cur_level &= ARMADA_37XX_NB_CPU_LOAD_MASK;
+-	if (cur_level <= ARMADA_37XX_DVFS_LOAD_1)
++
++	if (cur_level == new_level)
++		return;
++
++	/*
++	 * System wants to go to L1 on its own. If we are going from L2/L3,
++	 * remember when 20ms will expire. If from L0, set the value so that
++	 * next switch to L0 won't have to wait.
++	 */
++	if (new_level == ARMADA_37XX_DVFS_LOAD_1) {
++		if (cur_level == ARMADA_37XX_DVFS_LOAD_0)
++			pm_cpu->l1_expiration = jiffies;
++		else
++			pm_cpu->l1_expiration = jiffies + msecs_to_jiffies(20);
+ 		return;
++	}
++
++	/*
++	 * If we are setting to L2/L3, just invalidate L1 expiration time,
++	 * sleeping is not needed.
++	 */
++	if (rate < 1000*1000*1000)
++		goto invalidate_l1_exp;
++
++	/*
++	 * We are going to L0 with rate >= 1GHz. Check whether we have been at
++	 * L1 for long enough time. If not, go to L1 for 20ms.
++	 */
++	if (pm_cpu->l1_expiration && jiffies >= pm_cpu->l1_expiration)
++		goto invalidate_l1_exp;
+ 
+ 	regmap_update_bits(base, ARMADA_37XX_NB_CPU_LOAD,
+ 			   ARMADA_37XX_NB_CPU_LOAD_MASK,
+ 			   ARMADA_37XX_DVFS_LOAD_1);
+ 	msleep(20);
++
++invalidate_l1_exp:
++	pm_cpu->l1_expiration = 0;
+ }
+ 
+ static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate,
+@@ -578,7 +584,9 @@ static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate,
+ 			reg = ARMADA_37XX_NB_CPU_LOAD;
+ 			mask = ARMADA_37XX_NB_CPU_LOAD_MASK;
+ 
+-			clk_pm_cpu_set_rate_wa(rate, base);
++			/* Apply workaround when base CPU frequency is 1000 or 1200 MHz */
++			if (parent_rate >= 1000*1000*1000)
++				clk_pm_cpu_set_rate_wa(pm_cpu, load_level, rate, base);
+ 
+ 			regmap_update_bits(base, reg, mask, load_level);
+ 
+@@ -592,7 +600,6 @@ static int clk_pm_cpu_set_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ static const struct clk_ops clk_pm_cpu_ops = {
+ 	.get_parent = clk_pm_cpu_get_parent,
+-	.set_parent = clk_pm_cpu_set_parent,
+ 	.round_rate = clk_pm_cpu_round_rate,
+ 	.set_rate = clk_pm_cpu_set_rate,
+ 	.recalc_rate = clk_pm_cpu_recalc_rate,
+diff --git a/drivers/clk/qcom/a53-pll.c b/drivers/clk/qcom/a53-pll.c
+index 45cfc57bff924..af6ac17c7daeb 100644
+--- a/drivers/clk/qcom/a53-pll.c
++++ b/drivers/clk/qcom/a53-pll.c
+@@ -93,6 +93,7 @@ static const struct of_device_id qcom_a53pll_match_table[] = {
+ 	{ .compatible = "qcom,msm8916-a53pll" },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, qcom_a53pll_match_table);
+ 
+ static struct platform_driver qcom_a53pll_driver = {
+ 	.probe = qcom_a53pll_probe,
+diff --git a/drivers/clk/qcom/a7-pll.c b/drivers/clk/qcom/a7-pll.c
+index e171d3caf2cf3..c4a53e5db229f 100644
+--- a/drivers/clk/qcom/a7-pll.c
++++ b/drivers/clk/qcom/a7-pll.c
+@@ -86,6 +86,7 @@ static const struct of_device_id qcom_a7pll_match_table[] = {
+ 	{ .compatible = "qcom,sdx55-a7pll" },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, qcom_a7pll_match_table);
+ 
+ static struct platform_driver qcom_a7pll_driver = {
+ 	.probe = qcom_a7pll_probe,
+diff --git a/drivers/clk/qcom/apss-ipq-pll.c b/drivers/clk/qcom/apss-ipq-pll.c
+index 30be87fb222aa..bef7899ad0d66 100644
+--- a/drivers/clk/qcom/apss-ipq-pll.c
++++ b/drivers/clk/qcom/apss-ipq-pll.c
+@@ -81,6 +81,7 @@ static const struct of_device_id apss_ipq_pll_match_table[] = {
+ 	{ .compatible = "qcom,ipq6018-a53pll" },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, apss_ipq_pll_match_table);
+ 
+ static struct platform_driver apss_ipq_pll_driver = {
+ 	.probe = apss_ipq_pll_probe,
+diff --git a/drivers/clk/uniphier/clk-uniphier-mux.c b/drivers/clk/uniphier/clk-uniphier-mux.c
+index 462c84321b2d2..1998e9d4cfc02 100644
+--- a/drivers/clk/uniphier/clk-uniphier-mux.c
++++ b/drivers/clk/uniphier/clk-uniphier-mux.c
+@@ -31,10 +31,10 @@ static int uniphier_clk_mux_set_parent(struct clk_hw *hw, u8 index)
+ static u8 uniphier_clk_mux_get_parent(struct clk_hw *hw)
+ {
+ 	struct uniphier_clk_mux *mux = to_uniphier_clk_mux(hw);
+-	int num_parents = clk_hw_get_num_parents(hw);
++	unsigned int num_parents = clk_hw_get_num_parents(hw);
+ 	int ret;
+ 	unsigned int val;
+-	u8 i;
++	unsigned int i;
+ 
+ 	ret = regmap_read(mux->regmap, mux->reg, &val);
+ 	if (ret)
+diff --git a/drivers/clk/zynqmp/pll.c b/drivers/clk/zynqmp/pll.c
+index 92f449ed38e51..abe6afbf3407b 100644
+--- a/drivers/clk/zynqmp/pll.c
++++ b/drivers/clk/zynqmp/pll.c
+@@ -14,10 +14,12 @@
+  * struct zynqmp_pll - PLL clock
+  * @hw:		Handle between common and hardware-specific interfaces
+  * @clk_id:	PLL clock ID
++ * @set_pll_mode:	Whether an IOCTL_SET_PLL_FRAC_MODE request has been sent to ATF
+  */
+ struct zynqmp_pll {
+ 	struct clk_hw hw;
+ 	u32 clk_id;
++	bool set_pll_mode;
+ };
+ 
+ #define to_zynqmp_pll(_hw)	container_of(_hw, struct zynqmp_pll, hw)
+@@ -81,6 +83,8 @@ static inline void zynqmp_pll_set_mode(struct clk_hw *hw, bool on)
+ 	if (ret)
+ 		pr_warn_once("%s() PLL set frac mode failed for %s, ret = %d\n",
+ 			     __func__, clk_name, ret);
++	else
++		clk->set_pll_mode = true;
+ }
+ 
+ /**
+@@ -100,9 +104,7 @@ static long zynqmp_pll_round_rate(struct clk_hw *hw, unsigned long rate,
+ 	/* Enable the fractional mode if needed */
+ 	rate_div = (rate * FRAC_DIV) / *prate;
+ 	f = rate_div % FRAC_DIV;
+-	zynqmp_pll_set_mode(hw, !!f);
+-
+-	if (zynqmp_pll_get_mode(hw) == PLL_MODE_FRAC) {
++	if (f) {
+ 		if (rate > PS_PLL_VCO_MAX) {
+ 			fbdiv = rate / PS_PLL_VCO_MAX;
+ 			rate = rate / (fbdiv + 1);
+@@ -173,10 +175,12 @@ static int zynqmp_pll_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	long rate_div, frac, m, f;
+ 	int ret;
+ 
+-	if (zynqmp_pll_get_mode(hw) == PLL_MODE_FRAC) {
+-		rate_div = (rate * FRAC_DIV) / parent_rate;
++	rate_div = (rate * FRAC_DIV) / parent_rate;
++	f = rate_div % FRAC_DIV;
++	zynqmp_pll_set_mode(hw, !!f);
++
++	if (f) {
+ 		m = rate_div / FRAC_DIV;
+-		f = rate_div % FRAC_DIV;
+ 		m = clamp_t(u32, m, (PLL_FBDIV_MIN), (PLL_FBDIV_MAX));
+ 		rate = parent_rate * m;
+ 		frac = (parent_rate * f) / FRAC_DIV;
+@@ -240,9 +244,15 @@ static int zynqmp_pll_enable(struct clk_hw *hw)
+ 	u32 clk_id = clk->clk_id;
+ 	int ret;
+ 
+-	if (zynqmp_pll_is_enabled(hw))
++	/*
++	 * Don't skip enabling clock if there is an IOCTL_SET_PLL_FRAC_MODE request
++	 * that has been sent to ATF.
++	 */
++	if (zynqmp_pll_is_enabled(hw) && (!clk->set_pll_mode))
+ 		return 0;
+ 
++	clk->set_pll_mode = false;
++
+ 	ret = zynqmp_pm_clock_enable(clk_id);
+ 	if (ret)
+ 		pr_warn_once("%s() clock enable failed for %s, ret = %d\n",
+diff --git a/drivers/clocksource/ingenic-ost.c b/drivers/clocksource/ingenic-ost.c
+index 029efc2731b49..6af2470136bd2 100644
+--- a/drivers/clocksource/ingenic-ost.c
++++ b/drivers/clocksource/ingenic-ost.c
+@@ -88,9 +88,9 @@ static int __init ingenic_ost_probe(struct platform_device *pdev)
+ 		return PTR_ERR(ost->regs);
+ 
+ 	map = device_node_to_regmap(dev->parent->of_node);
+-	if (!map) {
++	if (IS_ERR(map)) {
+ 		dev_err(dev, "regmap not found");
+-		return -EINVAL;
++		return PTR_ERR(map);
+ 	}
+ 
+ 	ost->clk = devm_clk_get(dev, "ost");
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index 33b3e8aa2cc50..3fae9ebb58b83 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -449,13 +449,13 @@ static int dmtimer_set_next_event(unsigned long cycles,
+ 	struct dmtimer_systimer *t = &clkevt->t;
+ 	void __iomem *pend = t->base + t->pend;
+ 
+-	writel_relaxed(0xffffffff - cycles, t->base + t->counter);
+ 	while (readl_relaxed(pend) & WP_TCRR)
+ 		cpu_relax();
++	writel_relaxed(0xffffffff - cycles, t->base + t->counter);
+ 
+-	writel_relaxed(OMAP_TIMER_CTRL_ST, t->base + t->ctrl);
+ 	while (readl_relaxed(pend) & WP_TCLR)
+ 		cpu_relax();
++	writel_relaxed(OMAP_TIMER_CTRL_ST, t->base + t->ctrl);
+ 
+ 	return 0;
+ }
+@@ -490,18 +490,18 @@ static int dmtimer_set_periodic(struct clock_event_device *evt)
+ 	dmtimer_clockevent_shutdown(evt);
+ 
+ 	/* Looks like we need to first set the load value separately */
+-	writel_relaxed(clkevt->period, t->base + t->load);
+ 	while (readl_relaxed(pend) & WP_TLDR)
+ 		cpu_relax();
++	writel_relaxed(clkevt->period, t->base + t->load);
+ 
+-	writel_relaxed(clkevt->period, t->base + t->counter);
+ 	while (readl_relaxed(pend) & WP_TCRR)
+ 		cpu_relax();
++	writel_relaxed(clkevt->period, t->base + t->counter);
+ 
+-	writel_relaxed(OMAP_TIMER_CTRL_AR | OMAP_TIMER_CTRL_ST,
+-		       t->base + t->ctrl);
+ 	while (readl_relaxed(pend) & WP_TCLR)
+ 		cpu_relax();
++	writel_relaxed(OMAP_TIMER_CTRL_AR | OMAP_TIMER_CTRL_ST,
++		       t->base + t->ctrl);
+ 
+ 	return 0;
+ }
+@@ -554,6 +554,7 @@ static int __init dmtimer_clockevent_init(struct device_node *np)
+ 	dev->set_state_shutdown = dmtimer_clockevent_shutdown;
+ 	dev->set_state_periodic = dmtimer_set_periodic;
+ 	dev->set_state_oneshot = dmtimer_clockevent_shutdown;
++	dev->set_state_oneshot_stopped = dmtimer_clockevent_shutdown;
+ 	dev->tick_resume = dmtimer_clockevent_shutdown;
+ 	dev->cpumask = cpu_possible_mask;
+ 
+diff --git a/drivers/cpufreq/armada-37xx-cpufreq.c b/drivers/cpufreq/armada-37xx-cpufreq.c
+index b4af4094309b0..e4782f562e7a9 100644
+--- a/drivers/cpufreq/armada-37xx-cpufreq.c
++++ b/drivers/cpufreq/armada-37xx-cpufreq.c
+@@ -25,6 +25,10 @@
+ 
+ #include "cpufreq-dt.h"
+ 
++/* Clk register set */
++#define ARMADA_37XX_CLK_TBG_SEL		0
++#define ARMADA_37XX_CLK_TBG_SEL_CPU_OFF	22
++
+ /* Power management in North Bridge register set */
+ #define ARMADA_37XX_NB_L0L1	0x18
+ #define ARMADA_37XX_NB_L2L3	0x1C
+@@ -69,6 +73,8 @@
+ #define LOAD_LEVEL_NR	4
+ 
+ #define MIN_VOLT_MV 1000
++#define MIN_VOLT_MV_FOR_L1_1000MHZ 1108
++#define MIN_VOLT_MV_FOR_L1_1200MHZ 1155
+ 
+ /*  AVS value for the corresponding voltage (in mV) */
+ static int avs_map[] = {
+@@ -120,10 +126,15 @@ static struct armada_37xx_dvfs *armada_37xx_cpu_freq_info_get(u32 freq)
+  * will be configured then the DVFS will be enabled.
+  */
+ static void __init armada37xx_cpufreq_dvfs_setup(struct regmap *base,
+-						 struct clk *clk, u8 *divider)
++						 struct regmap *clk_base, u8 *divider)
+ {
++	u32 cpu_tbg_sel;
+ 	int load_lvl;
+-	struct clk *parent;
++
++	/* Determine which TBG clock the CPU is connected to */
++	regmap_read(clk_base, ARMADA_37XX_CLK_TBG_SEL, &cpu_tbg_sel);
++	cpu_tbg_sel >>= ARMADA_37XX_CLK_TBG_SEL_CPU_OFF;
++	cpu_tbg_sel &= ARMADA_37XX_NB_TBG_SEL_MASK;
+ 
+ 	for (load_lvl = 0; load_lvl < LOAD_LEVEL_NR; load_lvl++) {
+ 		unsigned int reg, mask, val, offset = 0;
+@@ -142,6 +153,11 @@ static void __init armada37xx_cpufreq_dvfs_setup(struct regmap *base,
+ 		mask = (ARMADA_37XX_NB_CLK_SEL_MASK
+ 			<< ARMADA_37XX_NB_CLK_SEL_OFF);
+ 
++		/* Set TBG index, for all levels we use the same TBG */
++		val = cpu_tbg_sel << ARMADA_37XX_NB_TBG_SEL_OFF;
++		mask = (ARMADA_37XX_NB_TBG_SEL_MASK
++			<< ARMADA_37XX_NB_TBG_SEL_OFF);
++
+ 		/*
+ 		 * Set cpu divider based on the pre-computed array in
+ 		 * order to have balanced step.
+@@ -160,14 +176,6 @@ static void __init armada37xx_cpufreq_dvfs_setup(struct regmap *base,
+ 
+ 		regmap_update_bits(base, reg, mask, val);
+ 	}
+-
+-	/*
+-	 * Set cpu clock source, for all the level we keep the same
+-	 * clock source that the one already configured. For this one
+-	 * we need to use the clock framework
+-	 */
+-	parent = clk_get_parent(clk);
+-	clk_set_parent(clk, parent);
+ }
+ 
+ /*
+@@ -202,6 +210,8 @@ static u32 armada_37xx_avs_val_match(int target_vm)
+  * - L2 & L3 voltage should be about 150mv smaller than L0 voltage.
+  * This function calculates L1 & L2 & L3 AVS values dynamically based
+  * on L0 voltage and fill all AVS values to the AVS value table.
++ * When base CPU frequency is 1000 or 1200 MHz, there is an additional
++ * minimal avs value for load L1.
+  */
+ static void __init armada37xx_cpufreq_avs_configure(struct regmap *base,
+ 						struct armada_37xx_dvfs *dvfs)
+@@ -233,6 +243,19 @@ static void __init armada37xx_cpufreq_avs_configure(struct regmap *base,
+ 		for (load_level = 1; load_level < LOAD_LEVEL_NR; load_level++)
+ 			dvfs->avs[load_level] = avs_min;
+ 
++		/*
++		 * Set the avs values for load L0 and L1 when base CPU frequency
++		 * is 1000/1200 MHz to its typical initial values according to
++		 * the Armada 3700 Hardware Specifications.
++		 */
++		if (dvfs->cpu_freq_max >= 1000*1000*1000) {
++			if (dvfs->cpu_freq_max >= 1200*1000*1000)
++				avs_min = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1200MHZ);
++			else
++				avs_min = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1000MHZ);
++			dvfs->avs[0] = dvfs->avs[1] = avs_min;
++		}
++
+ 		return;
+ 	}
+ 
+@@ -252,6 +275,26 @@ static void __init armada37xx_cpufreq_avs_configure(struct regmap *base,
+ 	target_vm = avs_map[l0_vdd_min] - 150;
+ 	target_vm = target_vm > MIN_VOLT_MV ? target_vm : MIN_VOLT_MV;
+ 	dvfs->avs[2] = dvfs->avs[3] = armada_37xx_avs_val_match(target_vm);
++
++	/*
++	 * Fix the avs value for load L1 when base CPU frequency is 1000/1200 MHz,
++	 * otherwise the CPU gets stuck when switching from load L1 to load L0.
++	 * Also ensure that avs value for load L1 is not higher than for L0.
++	 */
++	if (dvfs->cpu_freq_max >= 1000*1000*1000) {
++		u32 avs_min_l1;
++
++		if (dvfs->cpu_freq_max >= 1200*1000*1000)
++			avs_min_l1 = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1200MHZ);
++		else
++			avs_min_l1 = armada_37xx_avs_val_match(MIN_VOLT_MV_FOR_L1_1000MHZ);
++
++		if (avs_min_l1 > dvfs->avs[0])
++			avs_min_l1 = dvfs->avs[0];
++
++		if (dvfs->avs[1] < avs_min_l1)
++			dvfs->avs[1] = avs_min_l1;
++	}
+ }
+ 
+ static void __init armada37xx_cpufreq_avs_setup(struct regmap *base,
+@@ -358,11 +401,16 @@ static int __init armada37xx_cpufreq_driver_init(void)
+ 	struct platform_device *pdev;
+ 	unsigned long freq;
+ 	unsigned int cur_frequency, base_frequency;
+-	struct regmap *nb_pm_base, *avs_base;
++	struct regmap *nb_clk_base, *nb_pm_base, *avs_base;
+ 	struct device *cpu_dev;
+ 	int load_lvl, ret;
+ 	struct clk *clk, *parent;
+ 
++	nb_clk_base =
++		syscon_regmap_lookup_by_compatible("marvell,armada-3700-periph-clock-nb");
++	if (IS_ERR(nb_clk_base))
++		return -ENODEV;
++
+ 	nb_pm_base =
+ 		syscon_regmap_lookup_by_compatible("marvell,armada-3700-nb-pm");
+ 
+@@ -421,7 +469,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
+ 		return -EINVAL;
+ 	}
+ 
+-	dvfs = armada_37xx_cpu_freq_info_get(cur_frequency);
++	dvfs = armada_37xx_cpu_freq_info_get(base_frequency);
+ 	if (!dvfs) {
+ 		clk_put(clk);
+ 		return -EINVAL;
+@@ -439,7 +487,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
+ 	armada37xx_cpufreq_avs_configure(avs_base, dvfs);
+ 	armada37xx_cpufreq_avs_setup(avs_base, dvfs);
+ 
+-	armada37xx_cpufreq_dvfs_setup(nb_pm_base, clk, dvfs->divider);
++	armada37xx_cpufreq_dvfs_setup(nb_pm_base, nb_clk_base, dvfs->divider);
+ 	clk_put(clk);
+ 
+ 	for (load_lvl = ARMADA_37XX_DVFS_LOAD_0; load_lvl < LOAD_LEVEL_NR;
+@@ -473,7 +521,7 @@ disable_dvfs:
+ remove_opp:
+ 	/* clean-up the already added opp before leaving */
+ 	while (load_lvl-- > ARMADA_37XX_DVFS_LOAD_0) {
+-		freq = cur_frequency / dvfs->divider[load_lvl];
++		freq = base_frequency / dvfs->divider[load_lvl];
+ 		dev_pm_opp_remove(cpu_dev, freq);
+ 	}
+ 
+diff --git a/drivers/cpuidle/Kconfig.arm b/drivers/cpuidle/Kconfig.arm
+index 0844fadc4be85..334f83e56120c 100644
+--- a/drivers/cpuidle/Kconfig.arm
++++ b/drivers/cpuidle/Kconfig.arm
+@@ -107,7 +107,7 @@ config ARM_TEGRA_CPUIDLE
+ 
+ config ARM_QCOM_SPM_CPUIDLE
+ 	bool "CPU Idle Driver for Qualcomm Subsystem Power Manager (SPM)"
+-	depends on (ARCH_QCOM || COMPILE_TEST) && !ARM64
++	depends on (ARCH_QCOM || COMPILE_TEST) && !ARM64 && MMU
+ 	select ARM_CPU_SUSPEND
+ 	select CPU_IDLE_MULTIPLE_DRIVERS
+ 	select DT_IDLE_STATES
+diff --git a/drivers/crypto/allwinner/Kconfig b/drivers/crypto/allwinner/Kconfig
+index 856fb20456566..b8e75210a0e31 100644
+--- a/drivers/crypto/allwinner/Kconfig
++++ b/drivers/crypto/allwinner/Kconfig
+@@ -71,10 +71,10 @@ config CRYPTO_DEV_SUN8I_CE_DEBUG
+ config CRYPTO_DEV_SUN8I_CE_HASH
+ 	bool "Enable support for hash on sun8i-ce"
+ 	depends on CRYPTO_DEV_SUN8I_CE
+-	select MD5
+-	select SHA1
+-	select SHA256
+-	select SHA512
++	select CRYPTO_MD5
++	select CRYPTO_SHA1
++	select CRYPTO_SHA256
++	select CRYPTO_SHA512
+ 	help
+ 	  Say y to enable support for hash algorithms.
+ 
+@@ -132,8 +132,8 @@ config CRYPTO_DEV_SUN8I_SS_PRNG
+ config CRYPTO_DEV_SUN8I_SS_HASH
+ 	bool "Enable support for hash on sun8i-ss"
+ 	depends on CRYPTO_DEV_SUN8I_SS
+-	select MD5
+-	select SHA1
+-	select SHA256
++	select CRYPTO_MD5
++	select CRYPTO_SHA1
++	select CRYPTO_SHA256
+ 	help
+ 	  Say y to enable support for hash algorithms.
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+index 11cbcbc83a7b6..64446b86c927f 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+@@ -348,8 +348,10 @@ int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq)
+ 	bf = (__le32 *)pad;
+ 
+ 	result = kzalloc(digestsize, GFP_KERNEL | GFP_DMA);
+-	if (!result)
++	if (!result) {
++		kfree(pad);
+ 		return -ENOMEM;
++	}
+ 
+ 	for (i = 0; i < MAX_SG; i++) {
+ 		rctx->t_dst[i].addr = 0;
+@@ -435,11 +437,10 @@ int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq)
+ 	dma_unmap_sg(ss->dev, areq->src, nr_sgs, DMA_TO_DEVICE);
+ 	dma_unmap_single(ss->dev, addr_res, digestsize, DMA_FROM_DEVICE);
+ 
+-	kfree(pad);
+-
+ 	memcpy(areq->result, result, algt->alg.hash.halg.digestsize);
+-	kfree(result);
+ theend:
++	kfree(pad);
++	kfree(result);
+ 	crypto_finalize_hash_request(engine, breq, err);
+ 	return 0;
+ }
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-prng.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-prng.c
+index 08a1473b21457..3191527928e41 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-prng.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-prng.c
+@@ -103,7 +103,8 @@ int sun8i_ss_prng_generate(struct crypto_rng *tfm, const u8 *src,
+ 	dma_iv = dma_map_single(ss->dev, ctx->seed, ctx->slen, DMA_TO_DEVICE);
+ 	if (dma_mapping_error(ss->dev, dma_iv)) {
+ 		dev_err(ss->dev, "Cannot DMA MAP IV\n");
+-		return -EFAULT;
++		err = -EFAULT;
++		goto err_free;
+ 	}
+ 
+ 	dma_dst = dma_map_single(ss->dev, d, todo, DMA_FROM_DEVICE);
+@@ -167,6 +168,7 @@ err_iv:
+ 		memcpy(ctx->seed, d + dlen, ctx->slen);
+ 	}
+ 	memzero_explicit(d, todo);
++err_free:
+ 	kfree(d);
+ 
+ 	return err;
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index cb9b4c4e371ed..8fd43c1acac13 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -150,6 +150,9 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+ 
+ 	sev = psp->sev_data;
+ 
++	if (data && WARN_ON_ONCE(!virt_addr_valid(data)))
++		return -EINVAL;
++
+ 	/* Get the physical address of the command buffer */
+ 	phys_lsb = data ? lower_32_bits(__psp_pa(data)) : 0;
+ 	phys_msb = data ? upper_32_bits(__psp_pa(data)) : 0;
+diff --git a/drivers/crypto/ccp/tee-dev.c b/drivers/crypto/ccp/tee-dev.c
+index 5e697a90ea7f4..bcb81fef42118 100644
+--- a/drivers/crypto/ccp/tee-dev.c
++++ b/drivers/crypto/ccp/tee-dev.c
+@@ -36,6 +36,7 @@ static int tee_alloc_ring(struct psp_tee_device *tee, int ring_size)
+ 	if (!start_addr)
+ 		return -ENOMEM;
+ 
++	memset(start_addr, 0x0, ring_size);
+ 	rb_mgr->ring_start = start_addr;
+ 	rb_mgr->ring_size = ring_size;
+ 	rb_mgr->ring_pa = __psp_pa(start_addr);
+@@ -244,41 +245,54 @@ static int tee_submit_cmd(struct psp_tee_device *tee, enum tee_cmd_id cmd_id,
+ 			  void *buf, size_t len, struct tee_ring_cmd **resp)
+ {
+ 	struct tee_ring_cmd *cmd;
+-	u32 rptr, wptr;
+ 	int nloop = 1000, ret = 0;
++	u32 rptr;
+ 
+ 	*resp = NULL;
+ 
+ 	mutex_lock(&tee->rb_mgr.mutex);
+ 
+-	wptr = tee->rb_mgr.wptr;
+-
+-	/* Check if ring buffer is full */
++	/* Loop until empty entry found in ring buffer */
+ 	do {
++		/* Get pointer to ring buffer command entry */
++		cmd = (struct tee_ring_cmd *)
++			(tee->rb_mgr.ring_start + tee->rb_mgr.wptr);
++
+ 		rptr = ioread32(tee->io_regs + tee->vdata->ring_rptr_reg);
+ 
+-		if (!(wptr + sizeof(struct tee_ring_cmd) == rptr))
++		/* Check if ring buffer is full or command entry is waiting
++		 * for response from TEE
++		 */
++		if (!(tee->rb_mgr.wptr + sizeof(struct tee_ring_cmd) == rptr ||
++		      cmd->flag == CMD_WAITING_FOR_RESPONSE))
+ 			break;
+ 
+-		dev_info(tee->dev, "tee: ring buffer full. rptr = %u wptr = %u\n",
+-			 rptr, wptr);
++		dev_dbg(tee->dev, "tee: ring buffer full. rptr = %u wptr = %u\n",
++			rptr, tee->rb_mgr.wptr);
+ 
+-		/* Wait if ring buffer is full */
++		/* Wait if ring buffer is full or TEE is processing data */
+ 		mutex_unlock(&tee->rb_mgr.mutex);
+ 		schedule_timeout_interruptible(msecs_to_jiffies(10));
+ 		mutex_lock(&tee->rb_mgr.mutex);
+ 
+ 	} while (--nloop);
+ 
+-	if (!nloop && (wptr + sizeof(struct tee_ring_cmd) == rptr)) {
+-		dev_err(tee->dev, "tee: ring buffer full. rptr = %u wptr = %u\n",
+-			rptr, wptr);
++	if (!nloop &&
++	    (tee->rb_mgr.wptr + sizeof(struct tee_ring_cmd) == rptr ||
++	     cmd->flag == CMD_WAITING_FOR_RESPONSE)) {
++		dev_err(tee->dev, "tee: ring buffer full. rptr = %u wptr = %u response flag %u\n",
++			rptr, tee->rb_mgr.wptr, cmd->flag);
+ 		ret = -EBUSY;
+ 		goto unlock;
+ 	}
+ 
+-	/* Pointer to empty data entry in ring buffer */
+-	cmd = (struct tee_ring_cmd *)(tee->rb_mgr.ring_start + wptr);
++	/* Do not submit command if PSP got disabled while processing any
++	 * command in another thread
++	 */
++	if (psp_dead) {
++		ret = -EBUSY;
++		goto unlock;
++	}
+ 
+ 	/* Write command data into ring buffer */
+ 	cmd->cmd_id = cmd_id;
+@@ -286,6 +300,9 @@ static int tee_submit_cmd(struct psp_tee_device *tee, enum tee_cmd_id cmd_id,
+ 	memset(&cmd->buf[0], 0, sizeof(cmd->buf));
+ 	memcpy(&cmd->buf[0], buf, len);
+ 
++	/* Indicate driver is waiting for response */
++	cmd->flag = CMD_WAITING_FOR_RESPONSE;
++
+ 	/* Update local copy of write pointer */
+ 	tee->rb_mgr.wptr += sizeof(struct tee_ring_cmd);
+ 	if (tee->rb_mgr.wptr >= tee->rb_mgr.ring_size)
+@@ -353,12 +370,16 @@ int psp_tee_process_cmd(enum tee_cmd_id cmd_id, void *buf, size_t len,
+ 		return ret;
+ 
+ 	ret = tee_wait_cmd_completion(tee, resp, TEE_DEFAULT_TIMEOUT);
+-	if (ret)
++	if (ret) {
++		resp->flag = CMD_RESPONSE_TIMEDOUT;
+ 		return ret;
++	}
+ 
+ 	memcpy(buf, &resp->buf[0], len);
+ 	*status = resp->status;
+ 
++	resp->flag = CMD_RESPONSE_COPIED;
++
+ 	return 0;
+ }
+ EXPORT_SYMBOL(psp_tee_process_cmd);
+diff --git a/drivers/crypto/ccp/tee-dev.h b/drivers/crypto/ccp/tee-dev.h
+index f099601121150..49d26158b71e3 100644
+--- a/drivers/crypto/ccp/tee-dev.h
++++ b/drivers/crypto/ccp/tee-dev.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: MIT */
+ /*
+- * Copyright 2019 Advanced Micro Devices, Inc.
++ * Copyright (C) 2019,2021 Advanced Micro Devices, Inc.
+  *
+  * Author: Rijo Thomas <Rijo-john.Thomas@amd.com>
+  * Author: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
+@@ -18,7 +18,7 @@
+ #include <linux/mutex.h>
+ 
+ #define TEE_DEFAULT_TIMEOUT		10
+-#define MAX_BUFFER_SIZE			992
++#define MAX_BUFFER_SIZE			988
+ 
+ /**
+  * enum tee_ring_cmd_id - TEE interface commands for ring buffer configuration
+@@ -81,6 +81,20 @@ enum tee_cmd_state {
+ 	TEE_CMD_STATE_COMPLETED,
+ };
+ 
++/**
++ * enum cmd_resp_state - TEE command's response status maintained by driver
++ * @CMD_RESPONSE_INVALID:      initial state when no command is written to ring
++ * @CMD_WAITING_FOR_RESPONSE:  driver waiting for response from TEE
++ * @CMD_RESPONSE_TIMEDOUT:     failed to get response from TEE
++ * @CMD_RESPONSE_COPIED:       driver has copied response from TEE
++ */
++enum cmd_resp_state {
++	CMD_RESPONSE_INVALID,
++	CMD_WAITING_FOR_RESPONSE,
++	CMD_RESPONSE_TIMEDOUT,
++	CMD_RESPONSE_COPIED,
++};
++
+ /**
+  * struct tee_ring_cmd - Structure of the command buffer in TEE ring
+  * @cmd_id:      refers to &enum tee_cmd_id. Command id for the ring buffer
+@@ -91,6 +105,7 @@ enum tee_cmd_state {
+  * @pdata:       private data (currently unused)
+  * @res1:        reserved region
+  * @buf:         TEE command specific buffer
++ * @flag:	 refers to &enum cmd_resp_state
+  */
+ struct tee_ring_cmd {
+ 	u32 cmd_id;
+@@ -100,6 +115,7 @@ struct tee_ring_cmd {
+ 	u64 pdata;
+ 	u32 res1[2];
+ 	u8 buf[MAX_BUFFER_SIZE];
++	u32 flag;
+ 
+ 	/* Total size: 1024 bytes */
+ } __packed;
+diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
+index f5a336634daa6..405ff957b8370 100644
+--- a/drivers/crypto/chelsio/chcr_algo.c
++++ b/drivers/crypto/chelsio/chcr_algo.c
+@@ -769,13 +769,14 @@ static inline void create_wreq(struct chcr_context *ctx,
+ 	struct uld_ctx *u_ctx = ULD_CTX(ctx);
+ 	unsigned int tx_channel_id, rx_channel_id;
+ 	unsigned int txqidx = 0, rxqidx = 0;
+-	unsigned int qid, fid;
++	unsigned int qid, fid, portno;
+ 
+ 	get_qidxs(req, &txqidx, &rxqidx);
+ 	qid = u_ctx->lldi.rxq_ids[rxqidx];
+ 	fid = u_ctx->lldi.rxq_ids[0];
++	portno = rxqidx / ctx->rxq_perchan;
+ 	tx_channel_id = txqidx / ctx->txq_perchan;
+-	rx_channel_id = rxqidx / ctx->rxq_perchan;
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[portno]);
+ 
+ 
+ 	chcr_req->wreq.op_to_cctx_size = FILL_WR_OP_CCTX_SIZE;
+@@ -806,6 +807,7 @@ static struct sk_buff *create_cipher_wr(struct cipher_wr_param *wrparam)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(wrparam->req);
+ 	struct chcr_context *ctx = c_ctx(tfm);
++	struct uld_ctx *u_ctx = ULD_CTX(ctx);
+ 	struct ablk_ctx *ablkctx = ABLK_CTX(ctx);
+ 	struct sk_buff *skb = NULL;
+ 	struct chcr_wr *chcr_req;
+@@ -822,6 +824,7 @@ static struct sk_buff *create_cipher_wr(struct cipher_wr_param *wrparam)
+ 	struct adapter *adap = padap(ctx->dev);
+ 	unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ 
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
+ 	nents = sg_nents_xlen(reqctx->dstsg,  wrparam->bytes, CHCR_DST_SG_SIZE,
+ 			      reqctx->dst_ofst);
+ 	dst_size = get_space_for_phys_dsgl(nents);
+@@ -1580,6 +1583,7 @@ static struct sk_buff *create_hash_wr(struct ahash_request *req,
+ 	int error = 0;
+ 	unsigned int rx_channel_id = req_ctx->rxqidx / ctx->rxq_perchan;
+ 
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
+ 	transhdr_len = HASH_TRANSHDR_SIZE(param->kctx_len);
+ 	req_ctx->hctx_wr.imm = (transhdr_len + param->bfr_len +
+ 				param->sg_len) <= SGE_MAX_WR_LEN;
+@@ -2438,6 +2442,7 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
+ {
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct chcr_context *ctx = a_ctx(tfm);
++	struct uld_ctx *u_ctx = ULD_CTX(ctx);
+ 	struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx);
+ 	struct chcr_authenc_ctx *actx = AUTHENC_CTX(aeadctx);
+ 	struct chcr_aead_reqctx *reqctx = aead_request_ctx(req);
+@@ -2457,6 +2462,7 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
+ 	struct adapter *adap = padap(ctx->dev);
+ 	unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ 
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
+ 	if (req->cryptlen == 0)
+ 		return NULL;
+ 
+@@ -2710,9 +2716,11 @@ void chcr_add_aead_dst_ent(struct aead_request *req,
+ 	struct dsgl_walk dsgl_walk;
+ 	unsigned int authsize = crypto_aead_authsize(tfm);
+ 	struct chcr_context *ctx = a_ctx(tfm);
++	struct uld_ctx *u_ctx = ULD_CTX(ctx);
+ 	u32 temp;
+ 	unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ 
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
+ 	dsgl_walk_init(&dsgl_walk, phys_cpl);
+ 	dsgl_walk_add_page(&dsgl_walk, IV + reqctx->b0_len, reqctx->iv_dma);
+ 	temp = req->assoclen + req->cryptlen +
+@@ -2752,9 +2760,11 @@ void chcr_add_cipher_dst_ent(struct skcipher_request *req,
+ 	struct chcr_skcipher_req_ctx *reqctx = skcipher_request_ctx(req);
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(wrparam->req);
+ 	struct chcr_context *ctx = c_ctx(tfm);
++	struct uld_ctx *u_ctx = ULD_CTX(ctx);
+ 	struct dsgl_walk dsgl_walk;
+ 	unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ 
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
+ 	dsgl_walk_init(&dsgl_walk, phys_cpl);
+ 	dsgl_walk_add_sg(&dsgl_walk, reqctx->dstsg, wrparam->bytes,
+ 			 reqctx->dst_ofst);
+@@ -2958,6 +2968,7 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu *sec_cpl,
+ {
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct chcr_context *ctx = a_ctx(tfm);
++	struct uld_ctx *u_ctx = ULD_CTX(ctx);
+ 	struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx);
+ 	struct chcr_aead_reqctx *reqctx = aead_request_ctx(req);
+ 	unsigned int cipher_mode = CHCR_SCMD_CIPHER_MODE_AES_CCM;
+@@ -2967,6 +2978,8 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu *sec_cpl,
+ 	unsigned int tag_offset = 0, auth_offset = 0;
+ 	unsigned int assoclen;
+ 
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
++
+ 	if (get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4309)
+ 		assoclen = req->assoclen - 8;
+ 	else
+@@ -3127,6 +3140,7 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req,
+ {
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct chcr_context *ctx = a_ctx(tfm);
++	struct uld_ctx *u_ctx = ULD_CTX(ctx);
+ 	struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx);
+ 	struct chcr_aead_reqctx  *reqctx = aead_request_ctx(req);
+ 	struct sk_buff *skb = NULL;
+@@ -3143,6 +3157,7 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req,
+ 	struct adapter *adap = padap(ctx->dev);
+ 	unsigned int rx_channel_id = reqctx->rxqidx / ctx->rxq_perchan;
+ 
++	rx_channel_id = cxgb4_port_e2cchan(u_ctx->lldi.ports[rx_channel_id]);
+ 	if (get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106)
+ 		assoclen = req->assoclen - 8;
+ 
+diff --git a/drivers/crypto/keembay/keembay-ocs-aes-core.c b/drivers/crypto/keembay/keembay-ocs-aes-core.c
+index b6b25d994af38..2ef312866338f 100644
+--- a/drivers/crypto/keembay/keembay-ocs-aes-core.c
++++ b/drivers/crypto/keembay/keembay-ocs-aes-core.c
+@@ -1649,8 +1649,10 @@ static int kmb_ocs_aes_probe(struct platform_device *pdev)
+ 
+ 	/* Initialize crypto engine */
+ 	aes_dev->engine = crypto_engine_alloc_init(dev, true);
+-	if (!aes_dev->engine)
++	if (!aes_dev->engine) {
++		rc = -ENOMEM;
+ 		goto list_del;
++	}
+ 
+ 	rc = crypto_engine_start(aes_dev->engine);
+ 	if (rc) {
+diff --git a/drivers/crypto/keembay/keembay-ocs-hcu-core.c b/drivers/crypto/keembay/keembay-ocs-hcu-core.c
+index c4b97b4160e9b..322c51a6936f3 100644
+--- a/drivers/crypto/keembay/keembay-ocs-hcu-core.c
++++ b/drivers/crypto/keembay/keembay-ocs-hcu-core.c
+@@ -1220,8 +1220,10 @@ static int kmb_ocs_hcu_probe(struct platform_device *pdev)
+ 
+ 	/* Initialize crypto engine */
+ 	hcu_dev->engine = crypto_engine_alloc_init(dev, 1);
+-	if (!hcu_dev->engine)
++	if (!hcu_dev->engine) {
++		rc = -ENOMEM;
+ 		goto list_del;
++	}
+ 
+ 	rc = crypto_engine_start(hcu_dev->engine);
+ 	if (rc) {
+diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
+index 1d1532e8fb6d9..067ca5e17d387 100644
+--- a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
+@@ -184,12 +184,12 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (ret)
+ 		goto out_err_free_reg;
+ 
+-	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+-
+ 	ret = adf_dev_init(accel_dev);
+ 	if (ret)
+ 		goto out_err_dev_shutdown;
+ 
++	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
++
+ 	ret = adf_dev_start(accel_dev);
+ 	if (ret)
+ 		goto out_err_dev_stop;
+diff --git a/drivers/crypto/qat/qat_c62xvf/adf_drv.c b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
+index 04742a6d91cae..51ea88c0b17d7 100644
+--- a/drivers/crypto/qat/qat_c62xvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
+@@ -184,12 +184,12 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (ret)
+ 		goto out_err_free_reg;
+ 
+-	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+-
+ 	ret = adf_dev_init(accel_dev);
+ 	if (ret)
+ 		goto out_err_dev_shutdown;
+ 
++	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
++
+ 	ret = adf_dev_start(accel_dev);
+ 	if (ret)
+ 		goto out_err_dev_stop;
+diff --git a/drivers/crypto/qat/qat_common/adf_isr.c b/drivers/crypto/qat/qat_common/adf_isr.c
+index c458534635306..e3ad5587be49e 100644
+--- a/drivers/crypto/qat/qat_common/adf_isr.c
++++ b/drivers/crypto/qat/qat_common/adf_isr.c
+@@ -291,19 +291,32 @@ int adf_isr_resource_alloc(struct adf_accel_dev *accel_dev)
+ 
+ 	ret = adf_isr_alloc_msix_entry_table(accel_dev);
+ 	if (ret)
+-		return ret;
+-	if (adf_enable_msix(accel_dev))
+ 		goto err_out;
+ 
+-	if (adf_setup_bh(accel_dev))
+-		goto err_out;
++	ret = adf_enable_msix(accel_dev);
++	if (ret)
++		goto err_free_msix_table;
+ 
+-	if (adf_request_irqs(accel_dev))
+-		goto err_out;
++	ret = adf_setup_bh(accel_dev);
++	if (ret)
++		goto err_disable_msix;
++
++	ret = adf_request_irqs(accel_dev);
++	if (ret)
++		goto err_cleanup_bh;
+ 
+ 	return 0;
++
++err_cleanup_bh:
++	adf_cleanup_bh(accel_dev);
++
++err_disable_msix:
++	adf_disable_msix(&accel_dev->accel_pci_dev);
++
++err_free_msix_table:
++	adf_isr_free_msix_entry_table(accel_dev);
++
+ err_out:
+-	adf_isr_resource_free(accel_dev);
+-	return -EFAULT;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(adf_isr_resource_alloc);
+diff --git a/drivers/crypto/qat/qat_common/adf_transport.c b/drivers/crypto/qat/qat_common/adf_transport.c
+index 888c1e0472952..8ba28409fb74b 100644
+--- a/drivers/crypto/qat/qat_common/adf_transport.c
++++ b/drivers/crypto/qat/qat_common/adf_transport.c
+@@ -172,6 +172,7 @@ static int adf_init_ring(struct adf_etr_ring_data *ring)
+ 		dev_err(&GET_DEV(accel_dev), "Ring address not aligned\n");
+ 		dma_free_coherent(&GET_DEV(accel_dev), ring_size_bytes,
+ 				  ring->base_addr, ring->dma_addr);
++		ring->base_addr = NULL;
+ 		return -EFAULT;
+ 	}
+ 
+diff --git a/drivers/crypto/qat/qat_common/adf_vf_isr.c b/drivers/crypto/qat/qat_common/adf_vf_isr.c
+index 38d316a42ba6f..888388acb6bd3 100644
+--- a/drivers/crypto/qat/qat_common/adf_vf_isr.c
++++ b/drivers/crypto/qat/qat_common/adf_vf_isr.c
+@@ -261,17 +261,26 @@ int adf_vf_isr_resource_alloc(struct adf_accel_dev *accel_dev)
+ 		goto err_out;
+ 
+ 	if (adf_setup_pf2vf_bh(accel_dev))
+-		goto err_out;
++		goto err_disable_msi;
+ 
+ 	if (adf_setup_bh(accel_dev))
+-		goto err_out;
++		goto err_cleanup_pf2vf_bh;
+ 
+ 	if (adf_request_msi_irq(accel_dev))
+-		goto err_out;
++		goto err_cleanup_bh;
+ 
+ 	return 0;
++
++err_cleanup_bh:
++	adf_cleanup_bh(accel_dev);
++
++err_cleanup_pf2vf_bh:
++	adf_cleanup_pf2vf_bh(accel_dev);
++
++err_disable_msi:
++	adf_disable_msi(accel_dev);
++
+ err_out:
+-	adf_vf_isr_resource_free(accel_dev);
+ 	return -EFAULT;
+ }
+ EXPORT_SYMBOL_GPL(adf_vf_isr_resource_alloc);
+diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
+index c972554a755e7..29999da716cc9 100644
+--- a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
++++ b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
+@@ -184,12 +184,12 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (ret)
+ 		goto out_err_free_reg;
+ 
+-	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+-
+ 	ret = adf_dev_init(accel_dev);
+ 	if (ret)
+ 		goto out_err_dev_shutdown;
+ 
++	set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
++
+ 	ret = adf_dev_start(accel_dev);
+ 	if (ret)
+ 		goto out_err_dev_stop;
+diff --git a/drivers/crypto/sa2ul.c b/drivers/crypto/sa2ul.c
+index d7b1628fb4848..b0f0502a5bb0f 100644
+--- a/drivers/crypto/sa2ul.c
++++ b/drivers/crypto/sa2ul.c
+@@ -1146,8 +1146,10 @@ static int sa_run(struct sa_req *req)
+ 		mapped_sg->sgt.sgl = src;
+ 		mapped_sg->sgt.orig_nents = src_nents;
+ 		ret = dma_map_sgtable(ddev, &mapped_sg->sgt, dir_src, 0);
+-		if (ret)
++		if (ret) {
++			kfree(rxd);
+ 			return ret;
++		}
+ 
+ 		mapped_sg->dir = dir_src;
+ 		mapped_sg->mapped = true;
+@@ -1155,8 +1157,10 @@ static int sa_run(struct sa_req *req)
+ 		mapped_sg->sgt.sgl = req->src;
+ 		mapped_sg->sgt.orig_nents = sg_nents;
+ 		ret = dma_map_sgtable(ddev, &mapped_sg->sgt, dir_src, 0);
+-		if (ret)
++		if (ret) {
++			kfree(rxd);
+ 			return ret;
++		}
+ 
+ 		mapped_sg->dir = dir_src;
+ 		mapped_sg->mapped = true;
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index bf3047896e41a..59ba59bea0f54 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -387,7 +387,7 @@ static int devfreq_set_target(struct devfreq *devfreq, unsigned long new_freq,
+ 	devfreq->previous_freq = new_freq;
+ 
+ 	if (devfreq->suspend_freq)
+-		devfreq->resume_freq = cur_freq;
++		devfreq->resume_freq = new_freq;
+ 
+ 	return err;
+ }
+@@ -821,7 +821,8 @@ struct devfreq *devfreq_add_device(struct device *dev,
+ 
+ 	if (devfreq->profile->timer < 0
+ 		|| devfreq->profile->timer >= DEVFREQ_TIMER_NUM) {
+-		goto err_out;
++		mutex_unlock(&devfreq->lock);
++		goto err_dev;
+ 	}
+ 
+ 	if (!devfreq->profile->max_state && !devfreq->profile->freq_table) {
+diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
+index 3f14dffb96696..5dd19dbd67a3b 100644
+--- a/drivers/firmware/Kconfig
++++ b/drivers/firmware/Kconfig
+@@ -237,6 +237,7 @@ config INTEL_STRATIX10_RSU
+ config QCOM_SCM
+ 	bool
+ 	depends on ARM || ARM64
++	depends on HAVE_ARM_SMCCC
+ 	select RESET_CONTROLLER
+ 
+ config QCOM_SCM_DOWNLOAD_MODE_DEFAULT
+diff --git a/drivers/firmware/qcom_scm-smc.c b/drivers/firmware/qcom_scm-smc.c
+index 497c13ba98d67..d111833364ba4 100644
+--- a/drivers/firmware/qcom_scm-smc.c
++++ b/drivers/firmware/qcom_scm-smc.c
+@@ -77,8 +77,10 @@ static void __scm_smc_do(const struct arm_smccc_args *smc,
+ 	}  while (res->a0 == QCOM_SCM_V2_EBUSY);
+ }
+ 
+-int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
+-		 struct qcom_scm_res *res, bool atomic)
++
++int __scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
++		   enum qcom_scm_convention qcom_convention,
++		   struct qcom_scm_res *res, bool atomic)
+ {
+ 	int arglen = desc->arginfo & 0xf;
+ 	int i;
+@@ -87,9 +89,8 @@ int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
+ 	size_t alloc_len;
+ 	gfp_t flag = atomic ? GFP_ATOMIC : GFP_KERNEL;
+ 	u32 smccc_call_type = atomic ? ARM_SMCCC_FAST_CALL : ARM_SMCCC_STD_CALL;
+-	u32 qcom_smccc_convention =
+-			(qcom_scm_convention == SMC_CONVENTION_ARM_32) ?
+-			ARM_SMCCC_SMC_32 : ARM_SMCCC_SMC_64;
++	u32 qcom_smccc_convention = (qcom_convention == SMC_CONVENTION_ARM_32) ?
++				    ARM_SMCCC_SMC_32 : ARM_SMCCC_SMC_64;
+ 	struct arm_smccc_res smc_res;
+ 	struct arm_smccc_args smc = {0};
+ 
+@@ -148,4 +149,5 @@ int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
+ 	}
+ 
+ 	return (long)smc_res.a0 ? qcom_scm_remap_error(smc_res.a0) : 0;
++
+ }
+diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
+index f57779fc7ee93..9ac84b5d6ce00 100644
+--- a/drivers/firmware/qcom_scm.c
++++ b/drivers/firmware/qcom_scm.c
+@@ -113,14 +113,10 @@ static void qcom_scm_clk_disable(void)
+ 	clk_disable_unprepare(__scm->bus_clk);
+ }
+ 
+-static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
+-					u32 cmd_id);
++enum qcom_scm_convention qcom_scm_convention = SMC_CONVENTION_UNKNOWN;
++static DEFINE_SPINLOCK(scm_query_lock);
+ 
+-enum qcom_scm_convention qcom_scm_convention;
+-static bool has_queried __read_mostly;
+-static DEFINE_SPINLOCK(query_lock);
+-
+-static void __query_convention(void)
++static enum qcom_scm_convention __get_convention(void)
+ {
+ 	unsigned long flags;
+ 	struct qcom_scm_desc desc = {
+@@ -133,36 +129,50 @@ static void __query_convention(void)
+ 		.owner = ARM_SMCCC_OWNER_SIP,
+ 	};
+ 	struct qcom_scm_res res;
++	enum qcom_scm_convention probed_convention;
+ 	int ret;
++	bool forced = false;
+ 
+-	spin_lock_irqsave(&query_lock, flags);
+-	if (has_queried)
+-		goto out;
++	if (likely(qcom_scm_convention != SMC_CONVENTION_UNKNOWN))
++		return qcom_scm_convention;
+ 
+-	qcom_scm_convention = SMC_CONVENTION_ARM_64;
+-	// Device isn't required as there is only one argument - no device
+-	// needed to dma_map_single to secure world
+-	ret = scm_smc_call(NULL, &desc, &res, true);
++	/*
++	 * Device isn't required as there is only one argument - no device
++	 * needed to dma_map_single to secure world
++	 */
++	probed_convention = SMC_CONVENTION_ARM_64;
++	ret = __scm_smc_call(NULL, &desc, probed_convention, &res, true);
+ 	if (!ret && res.result[0] == 1)
+-		goto out;
++		goto found;
++
++	/*
++	 * Some SC7180 firmwares didn't implement the
++	 * QCOM_SCM_INFO_IS_CALL_AVAIL call, so we fallback to forcing ARM_64
++	 * calling conventions on these firmwares. Luckily we don't make any
++	 * early calls into the firmware on these SoCs so the device pointer
++	 * will be valid here to check if the compatible matches.
++	 */
++	if (of_device_is_compatible(__scm ? __scm->dev->of_node : NULL, "qcom,scm-sc7180")) {
++		forced = true;
++		goto found;
++	}
+ 
+-	qcom_scm_convention = SMC_CONVENTION_ARM_32;
+-	ret = scm_smc_call(NULL, &desc, &res, true);
++	probed_convention = SMC_CONVENTION_ARM_32;
++	ret = __scm_smc_call(NULL, &desc, probed_convention, &res, true);
+ 	if (!ret && res.result[0] == 1)
+-		goto out;
+-
+-	qcom_scm_convention = SMC_CONVENTION_LEGACY;
+-out:
+-	has_queried = true;
+-	spin_unlock_irqrestore(&query_lock, flags);
+-	pr_info("qcom_scm: convention: %s\n",
+-		qcom_scm_convention_names[qcom_scm_convention]);
+-}
++		goto found;
++
++	probed_convention = SMC_CONVENTION_LEGACY;
++found:
++	spin_lock_irqsave(&scm_query_lock, flags);
++	if (probed_convention != qcom_scm_convention) {
++		qcom_scm_convention = probed_convention;
++		pr_info("qcom_scm: convention: %s%s\n",
++			qcom_scm_convention_names[qcom_scm_convention],
++			forced ? " (forced)" : "");
++	}
++	spin_unlock_irqrestore(&scm_query_lock, flags);
+ 
+-static inline enum qcom_scm_convention __get_convention(void)
+-{
+-	if (unlikely(!has_queried))
+-		__query_convention();
+ 	return qcom_scm_convention;
+ }
+ 
+@@ -219,8 +229,8 @@ static int qcom_scm_call_atomic(struct device *dev,
+ 	}
+ }
+ 
+-static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
+-					u32 cmd_id)
++static bool __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
++					 u32 cmd_id)
+ {
+ 	int ret;
+ 	struct qcom_scm_desc desc = {
+@@ -247,7 +257,7 @@ static int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
+ 
+ 	ret = qcom_scm_call(dev, &desc, &res);
+ 
+-	return ret ? : res.result[0];
++	return ret ? false : !!res.result[0];
+ }
+ 
+ /**
+@@ -585,9 +595,8 @@ bool qcom_scm_pas_supported(u32 peripheral)
+ 	};
+ 	struct qcom_scm_res res;
+ 
+-	ret = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_PIL,
+-					   QCOM_SCM_PIL_PAS_IS_SUPPORTED);
+-	if (ret <= 0)
++	if (!__qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_PIL,
++					  QCOM_SCM_PIL_PAS_IS_SUPPORTED))
+ 		return false;
+ 
+ 	ret = qcom_scm_call(__scm->dev, &desc, &res);
+@@ -1060,17 +1069,18 @@ EXPORT_SYMBOL(qcom_scm_ice_set_key);
+  */
+ bool qcom_scm_hdcp_available(void)
+ {
++	bool avail;
+ 	int ret = qcom_scm_clk_enable();
+ 
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_HDCP,
++	avail = __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_HDCP,
+ 						QCOM_SCM_HDCP_INVOKE);
+ 
+ 	qcom_scm_clk_disable();
+ 
+-	return ret > 0;
++	return avail;
+ }
+ EXPORT_SYMBOL(qcom_scm_hdcp_available);
+ 
+@@ -1242,7 +1252,7 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ 	__scm = scm;
+ 	__scm->dev = &pdev->dev;
+ 
+-	__query_convention();
++	__get_convention();
+ 
+ 	/*
+ 	 * If requested enable "download mode", from this point on warmboot
+diff --git a/drivers/firmware/qcom_scm.h b/drivers/firmware/qcom_scm.h
+index 95cd1ac30ab0b..632fe31424621 100644
+--- a/drivers/firmware/qcom_scm.h
++++ b/drivers/firmware/qcom_scm.h
+@@ -61,8 +61,11 @@ struct qcom_scm_res {
+ };
+ 
+ #define SCM_SMC_FNID(s, c)	((((s) & 0xFF) << 8) | ((c) & 0xFF))
+-extern int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
+-			struct qcom_scm_res *res, bool atomic);
++extern int __scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
++			  enum qcom_scm_convention qcom_convention,
++			  struct qcom_scm_res *res, bool atomic);
++#define scm_smc_call(dev, desc, res, atomic) \
++	__scm_smc_call((dev), (desc), qcom_scm_convention, (res), (atomic))
+ 
+ #define SCM_LEGACY_FNID(s, c)	(((s) << 10) | ((c) & 0x3ff))
+ extern int scm_legacy_call_atomic(struct device *dev,
+diff --git a/drivers/firmware/xilinx/zynqmp.c b/drivers/firmware/xilinx/zynqmp.c
+index 7eb9958662ddd..83082e2f2e441 100644
+--- a/drivers/firmware/xilinx/zynqmp.c
++++ b/drivers/firmware/xilinx/zynqmp.c
+@@ -2,7 +2,7 @@
+ /*
+  * Xilinx Zynq MPSoC Firmware layer
+  *
+- *  Copyright (C) 2014-2020 Xilinx, Inc.
++ *  Copyright (C) 2014-2021 Xilinx, Inc.
+  *
+  *  Michal Simek <michal.simek@xilinx.com>
+  *  Davorin Mista <davorin.mista@aggios.com>
+@@ -1280,12 +1280,13 @@ static int zynqmp_firmware_probe(struct platform_device *pdev)
+ static int zynqmp_firmware_remove(struct platform_device *pdev)
+ {
+ 	struct pm_api_feature_data *feature_data;
++	struct hlist_node *tmp;
+ 	int i;
+ 
+ 	mfd_remove_devices(&pdev->dev);
+ 	zynqmp_pm_api_debugfs_exit();
+ 
+-	hash_for_each(pm_api_features_map, i, feature_data, hentry) {
++	hash_for_each_safe(pm_api_features_map, i, tmp, feature_data, hentry) {
+ 		hash_del(&feature_data->hentry);
+ 		kfree(feature_data);
+ 	}
+diff --git a/drivers/fpga/xilinx-spi.c b/drivers/fpga/xilinx-spi.c
+index 27defa98092dd..fee4d0abf6bfe 100644
+--- a/drivers/fpga/xilinx-spi.c
++++ b/drivers/fpga/xilinx-spi.c
+@@ -233,25 +233,19 @@ static int xilinx_spi_probe(struct spi_device *spi)
+ 
+ 	/* PROGRAM_B is active low */
+ 	conf->prog_b = devm_gpiod_get(&spi->dev, "prog_b", GPIOD_OUT_LOW);
+-	if (IS_ERR(conf->prog_b)) {
+-		dev_err(&spi->dev, "Failed to get PROGRAM_B gpio: %ld\n",
+-			PTR_ERR(conf->prog_b));
+-		return PTR_ERR(conf->prog_b);
+-	}
++	if (IS_ERR(conf->prog_b))
++		return dev_err_probe(&spi->dev, PTR_ERR(conf->prog_b),
++				     "Failed to get PROGRAM_B gpio\n");
+ 
+ 	conf->init_b = devm_gpiod_get_optional(&spi->dev, "init-b", GPIOD_IN);
+-	if (IS_ERR(conf->init_b)) {
+-		dev_err(&spi->dev, "Failed to get INIT_B gpio: %ld\n",
+-			PTR_ERR(conf->init_b));
+-		return PTR_ERR(conf->init_b);
+-	}
++	if (IS_ERR(conf->init_b))
++		return dev_err_probe(&spi->dev, PTR_ERR(conf->init_b),
++				     "Failed to get INIT_B gpio\n");
+ 
+ 	conf->done = devm_gpiod_get(&spi->dev, "done", GPIOD_IN);
+-	if (IS_ERR(conf->done)) {
+-		dev_err(&spi->dev, "Failed to get DONE gpio: %ld\n",
+-			PTR_ERR(conf->done));
+-		return PTR_ERR(conf->done);
+-	}
++	if (IS_ERR(conf->done))
++		return dev_err_probe(&spi->dev, PTR_ERR(conf->done),
++				     "Failed to get DONE gpio\n");
+ 
+ 	mgr = devm_fpga_mgr_create(&spi->dev,
+ 				   "Xilinx Slave Serial FPGA Manager",
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+index f753e04fee99a..a2ac44cc2a6da 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+@@ -1355,7 +1355,7 @@ int amdgpu_display_suspend_helper(struct amdgpu_device *adev)
+ 			}
+ 		}
+ 	}
+-	return r;
++	return 0;
+ }
+ 
+ int amdgpu_display_resume_helper(struct amdgpu_device *adev)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+index 94b069630db36..b4971e90b98cf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+@@ -215,7 +215,11 @@ static int amdgpu_vmid_grab_idle(struct amdgpu_vm *vm,
+ 	/* Check if we have an idle VMID */
+ 	i = 0;
+ 	list_for_each_entry((*idle), &id_mgr->ids_lru, list) {
+-		fences[i] = amdgpu_sync_peek_fence(&(*idle)->active, ring);
++		/* Don't use per engine and per process VMID at the same time */
++		struct amdgpu_ring *r = adev->vm_manager.concurrent_flush ?
++			NULL : ring;
++
++		fences[i] = amdgpu_sync_peek_fence(&(*idle)->active, r);
+ 		if (!fences[i])
+ 			break;
+ 		++i;
+@@ -281,7 +285,7 @@ static int amdgpu_vmid_grab_reserved(struct amdgpu_vm *vm,
+ 	if (updates && (*id)->flushed_updates &&
+ 	    updates->context == (*id)->flushed_updates->context &&
+ 	    !dma_fence_is_later(updates, (*id)->flushed_updates))
+-	    updates = NULL;
++		updates = NULL;
+ 
+ 	if ((*id)->owner != vm->immediate.fence_context ||
+ 	    job->vm_pd_addr != (*id)->pd_gpu_addr ||
+@@ -290,6 +294,10 @@ static int amdgpu_vmid_grab_reserved(struct amdgpu_vm *vm,
+ 	     !dma_fence_is_signaled((*id)->last_flush))) {
+ 		struct dma_fence *tmp;
+ 
++		/* Don't use per engine and per process VMID at the same time */
++		if (adev->vm_manager.concurrent_flush)
++			ring = NULL;
++
+ 		/* to prevent one context starved by another context */
+ 		(*id)->pd_gpu_addr = 0;
+ 		tmp = amdgpu_sync_peek_fence(&(*id)->active, ring);
+@@ -365,12 +373,7 @@ static int amdgpu_vmid_grab_used(struct amdgpu_vm *vm,
+ 		if (updates && (!flushed || dma_fence_is_later(updates, flushed)))
+ 			needs_flush = true;
+ 
+-		/* Concurrent flushes are only possible starting with Vega10 and
+-		 * are broken on Navi10 and Navi14.
+-		 */
+-		if (needs_flush && (adev->asic_type < CHIP_VEGA10 ||
+-				    adev->asic_type == CHIP_NAVI10 ||
+-				    adev->asic_type == CHIP_NAVI14))
++		if (needs_flush && !adev->vm_manager.concurrent_flush)
+ 			continue;
+ 
+ 		/* Good, we can use this VMID. Remember this submission as
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pmu.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pmu.c
+index 19c0a3655228f..82e9ecf843523 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pmu.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pmu.c
+@@ -519,8 +519,10 @@ static int init_pmu_entry_by_type_and_add(struct amdgpu_pmu_entry *pmu_entry,
+ 	pmu_entry->pmu.attr_groups = kmemdup(attr_groups, sizeof(attr_groups),
+ 								GFP_KERNEL);
+ 
+-	if (!pmu_entry->pmu.attr_groups)
++	if (!pmu_entry->pmu.attr_groups) {
++		ret = -ENOMEM;
+ 		goto err_attr_group;
++	}
+ 
+ 	snprintf(pmu_name, PMU_NAME_SIZE, "%s_%d", pmu_entry->pmu_file_prefix,
+ 				adev_to_drm(pmu_entry->adev)->primary->index);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 326dae31b675d..a566bbe26bdd8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -92,13 +92,13 @@ struct amdgpu_prt_cb {
+ static inline void amdgpu_vm_eviction_lock(struct amdgpu_vm *vm)
+ {
+ 	mutex_lock(&vm->eviction_lock);
+-	vm->saved_flags = memalloc_nofs_save();
++	vm->saved_flags = memalloc_noreclaim_save();
+ }
+ 
+ static inline int amdgpu_vm_eviction_trylock(struct amdgpu_vm *vm)
+ {
+ 	if (mutex_trylock(&vm->eviction_lock)) {
+-		vm->saved_flags = memalloc_nofs_save();
++		vm->saved_flags = memalloc_noreclaim_save();
+ 		return 1;
+ 	}
+ 	return 0;
+@@ -106,7 +106,7 @@ static inline int amdgpu_vm_eviction_trylock(struct amdgpu_vm *vm)
+ 
+ static inline void amdgpu_vm_eviction_unlock(struct amdgpu_vm *vm)
+ {
+-	memalloc_nofs_restore(vm->saved_flags);
++	memalloc_noreclaim_restore(vm->saved_flags);
+ 	mutex_unlock(&vm->eviction_lock);
+ }
+ 
+@@ -3147,6 +3147,12 @@ void amdgpu_vm_manager_init(struct amdgpu_device *adev)
+ {
+ 	unsigned i;
+ 
++	/* Concurrent flushes are only possible starting with Vega10 and
++	 * are broken on Navi10 and Navi14.
++	 */
++	adev->vm_manager.concurrent_flush = !(adev->asic_type < CHIP_VEGA10 ||
++					      adev->asic_type == CHIP_NAVI10 ||
++					      adev->asic_type == CHIP_NAVI14);
+ 	amdgpu_vmid_mgr_init(adev);
+ 
+ 	adev->vm_manager.fence_context =
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+index 976a12e5a8b92..4e140288159cd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+@@ -331,6 +331,7 @@ struct amdgpu_vm_manager {
+ 	/* Handling of VMIDs */
+ 	struct amdgpu_vmid_mgr			id_mgr[AMDGPU_MAX_VMHUBS];
+ 	unsigned int				first_kfd_vmid;
++	bool					concurrent_flush;
+ 
+ 	/* Handling of VM fences */
+ 	u64					fence_context;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+index 2d832fc231191..421d6069c5096 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+@@ -59,6 +59,7 @@ MODULE_FIRMWARE("amdgpu/tonga_mc.bin");
+ MODULE_FIRMWARE("amdgpu/polaris11_mc.bin");
+ MODULE_FIRMWARE("amdgpu/polaris10_mc.bin");
+ MODULE_FIRMWARE("amdgpu/polaris12_mc.bin");
++MODULE_FIRMWARE("amdgpu/polaris12_32_mc.bin");
+ MODULE_FIRMWARE("amdgpu/polaris11_k_mc.bin");
+ MODULE_FIRMWARE("amdgpu/polaris10_k_mc.bin");
+ MODULE_FIRMWARE("amdgpu/polaris12_k_mc.bin");
+@@ -243,10 +244,16 @@ static int gmc_v8_0_init_microcode(struct amdgpu_device *adev)
+ 			chip_name = "polaris10";
+ 		break;
+ 	case CHIP_POLARIS12:
+-		if (ASICID_IS_P23(adev->pdev->device, adev->pdev->revision))
++		if (ASICID_IS_P23(adev->pdev->device, adev->pdev->revision)) {
+ 			chip_name = "polaris12_k";
+-		else
+-			chip_name = "polaris12";
++		} else {
++			WREG32(mmMC_SEQ_IO_DEBUG_INDEX, ixMC_IO_DEBUG_UP_159);
++			/* Polaris12 32bit ASIC needs a special MC firmware */
++			if (RREG32(mmMC_SEQ_IO_DEBUG_DATA) == 0x05b4dc40)
++				chip_name = "polaris12_32";
++			else
++				chip_name = "polaris12";
++		}
+ 		break;
+ 	case CHIP_FIJI:
+ 	case CHIP_CARRIZO:
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+index def583916294d..9b844e9fb16ff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+@@ -584,6 +584,10 @@ static void vcn_v3_0_mc_resume_dpg_mode(struct amdgpu_device *adev, int inst_idx
+ 	WREG32_SOC15_DPG_MODE(inst_idx, SOC15_DPG_MODE_OFFSET(
+ 			VCN, inst_idx, mmUVD_VCPU_NONCACHE_SIZE0),
+ 			AMDGPU_GPU_PAGE_ALIGN(sizeof(struct amdgpu_fw_shared)), 0, indirect);
++
++	/* VCN global tiling registers */
++	WREG32_SOC15_DPG_MODE(0, SOC15_DPG_MODE_OFFSET(
++		UVD, 0, mmUVD_GFX10_ADDR_CONFIG), adev->gfx.config.gb_addr_config, 0, indirect);
+ }
+ 
+ static void vcn_v3_0_disable_static_power_gating(struct amdgpu_device *adev, int inst)
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
+index 66bbca61e3ef5..9318936aa8054 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
+@@ -20,6 +20,10 @@
+  * OTHER DEALINGS IN THE SOFTWARE.
+  */
+ 
++#include <linux/kconfig.h>
++
++#if IS_REACHABLE(CONFIG_AMD_IOMMU_V2)
++
+ #include <linux/printk.h>
+ #include <linux/device.h>
+ #include <linux/slab.h>
+@@ -355,3 +359,5 @@ int kfd_iommu_add_perf_counters(struct kfd_topology_device *kdev)
+ 
+ 	return 0;
+ }
++
++#endif
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.h b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.h
+index dd23d9fdf6a82..afd420b01a0c2 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_iommu.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_iommu.h
+@@ -23,7 +23,9 @@
+ #ifndef __KFD_IOMMU_H__
+ #define __KFD_IOMMU_H__
+ 
+-#if defined(CONFIG_AMD_IOMMU_V2_MODULE) || defined(CONFIG_AMD_IOMMU_V2)
++#include <linux/kconfig.h>
++
++#if IS_REACHABLE(CONFIG_AMD_IOMMU_V2)
+ 
+ #define KFD_SUPPORT_IOMMU_V2
+ 
+@@ -46,6 +48,9 @@ static inline int kfd_iommu_check_device(struct kfd_dev *kfd)
+ }
+ static inline int kfd_iommu_device_init(struct kfd_dev *kfd)
+ {
++#if IS_MODULE(CONFIG_AMD_IOMMU_V2)
++	WARN_ONCE(1, "iommu_v2 module is not usable by built-in KFD");
++#endif
+ 	return 0;
+ }
+ 
+@@ -73,6 +78,6 @@ static inline int kfd_iommu_add_perf_counters(struct kfd_topology_device *kdev)
+ 	return 0;
+ }
+ 
+-#endif /* defined(CONFIG_AMD_IOMMU_V2) */
++#endif /* IS_REACHABLE(CONFIG_AMD_IOMMU_V2) */
+ 
+ #endif /* __KFD_IOMMU_H__ */
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 29ca1708458c1..71e07ebc8f88a 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3850,6 +3850,23 @@ static int fill_dc_scaling_info(const struct drm_plane_state *state,
+ 	scaling_info->src_rect.x = state->src_x >> 16;
+ 	scaling_info->src_rect.y = state->src_y >> 16;
+ 
++	/*
++	 * For reasons we don't (yet) fully understand a non-zero
++	 * src_y coordinate into an NV12 buffer can cause a
++	 * system hang. To avoid hangs (and maybe be overly cautious)
++	 * let's reject both non-zero src_x and src_y.
++	 *
++	 * We currently know of only one use-case to reproduce a
++	 * scenario with non-zero src_x and src_y for NV12, which
++	 * is to gesture the YouTube Android app into full screen
++	 * on ChromeOS.
++	 */
++	if (state->fb &&
++	    state->fb->format->format == DRM_FORMAT_NV12 &&
++	    (scaling_info->src_rect.x != 0 ||
++	     scaling_info->src_rect.y != 0))
++		return -EINVAL;
++
+ 	scaling_info->src_rect.width = state->src_w >> 16;
+ 	if (scaling_info->src_rect.width == 0)
+ 		return -EINVAL;
+@@ -9278,7 +9295,8 @@ static int dm_check_crtc_cursor(struct drm_atomic_state *state,
+ 
+ 	new_cursor_state = drm_atomic_get_new_plane_state(state, crtc->cursor);
+ 	new_primary_state = drm_atomic_get_new_plane_state(state, crtc->primary);
+-	if (!new_cursor_state || !new_primary_state || !new_cursor_state->fb) {
++	if (!new_cursor_state || !new_primary_state ||
++	    !new_cursor_state->fb || !new_primary_state->fb) {
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index bd0101013ec89..440bf0a0e12ae 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -1603,6 +1603,7 @@ static bool dc_link_construct(struct dc_link *link,
+ 	link->psr_settings.psr_version = DC_PSR_VERSION_UNSUPPORTED;
+ 
+ 	DC_LOG_DC("BIOS object table - %s finished successfully.\n", __func__);
++	kfree(info);
+ 	return true;
+ device_tag_fail:
+ 	link->link_enc->funcs->destroy(&link->link_enc);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c b/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c
+index 4e87e70237e3d..874b132fe1d78 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_abm.c
+@@ -283,7 +283,7 @@ struct abm *dce_abm_create(
+ 	const struct dce_abm_shift *abm_shift,
+ 	const struct dce_abm_mask *abm_mask)
+ {
+-	struct dce_abm *abm_dce = kzalloc(sizeof(*abm_dce), GFP_KERNEL);
++	struct dce_abm *abm_dce = kzalloc(sizeof(*abm_dce), GFP_ATOMIC);
+ 
+ 	if (abm_dce == NULL) {
+ 		BREAK_TO_DEBUGGER();
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
+index ddc789daf3b13..09d4cb5c97b67 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c
+@@ -1049,7 +1049,7 @@ struct dmcu *dcn10_dmcu_create(
+ 	const struct dce_dmcu_shift *dmcu_shift,
+ 	const struct dce_dmcu_mask *dmcu_mask)
+ {
+-	struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_KERNEL);
++	struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_ATOMIC);
+ 
+ 	if (dmcu_dce == NULL) {
+ 		BREAK_TO_DEBUGGER();
+@@ -1070,7 +1070,7 @@ struct dmcu *dcn20_dmcu_create(
+ 	const struct dce_dmcu_shift *dmcu_shift,
+ 	const struct dce_dmcu_mask *dmcu_mask)
+ {
+-	struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_KERNEL);
++	struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_ATOMIC);
+ 
+ 	if (dmcu_dce == NULL) {
+ 		BREAK_TO_DEBUGGER();
+@@ -1091,7 +1091,7 @@ struct dmcu *dcn21_dmcu_create(
+ 	const struct dce_dmcu_shift *dmcu_shift,
+ 	const struct dce_dmcu_mask *dmcu_mask)
+ {
+-	struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_KERNEL);
++	struct dce_dmcu *dmcu_dce = kzalloc(sizeof(*dmcu_dce), GFP_ATOMIC);
+ 
+ 	if (dmcu_dce == NULL) {
+ 		BREAK_TO_DEBUGGER();
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.c
+index 62cc2651e00c1..8774406120fc1 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dccg.c
+@@ -112,7 +112,7 @@ struct dccg *dccg2_create(
+ 	const struct dccg_shift *dccg_shift,
+ 	const struct dccg_mask *dccg_mask)
+ {
+-	struct dcn_dccg *dccg_dcn = kzalloc(sizeof(*dccg_dcn), GFP_KERNEL);
++	struct dcn_dccg *dccg_dcn = kzalloc(sizeof(*dccg_dcn), GFP_ATOMIC);
+ 	struct dccg *base;
+ 
+ 	if (dccg_dcn == NULL) {
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index 2c2dbfcd89571..bfbc23b76cd5a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -1104,7 +1104,7 @@ struct dpp *dcn20_dpp_create(
+ 	uint32_t inst)
+ {
+ 	struct dcn20_dpp *dpp =
+-		kzalloc(sizeof(struct dcn20_dpp), GFP_KERNEL);
++		kzalloc(sizeof(struct dcn20_dpp), GFP_ATOMIC);
+ 
+ 	if (!dpp)
+ 		return NULL;
+@@ -1122,7 +1122,7 @@ struct input_pixel_processor *dcn20_ipp_create(
+ 	struct dc_context *ctx, uint32_t inst)
+ {
+ 	struct dcn10_ipp *ipp =
+-		kzalloc(sizeof(struct dcn10_ipp), GFP_KERNEL);
++		kzalloc(sizeof(struct dcn10_ipp), GFP_ATOMIC);
+ 
+ 	if (!ipp) {
+ 		BREAK_TO_DEBUGGER();
+@@ -1139,7 +1139,7 @@ struct output_pixel_processor *dcn20_opp_create(
+ 	struct dc_context *ctx, uint32_t inst)
+ {
+ 	struct dcn20_opp *opp =
+-		kzalloc(sizeof(struct dcn20_opp), GFP_KERNEL);
++		kzalloc(sizeof(struct dcn20_opp), GFP_ATOMIC);
+ 
+ 	if (!opp) {
+ 		BREAK_TO_DEBUGGER();
+@@ -1156,7 +1156,7 @@ struct dce_aux *dcn20_aux_engine_create(
+ 	uint32_t inst)
+ {
+ 	struct aux_engine_dce110 *aux_engine =
+-		kzalloc(sizeof(struct aux_engine_dce110), GFP_KERNEL);
++		kzalloc(sizeof(struct aux_engine_dce110), GFP_ATOMIC);
+ 
+ 	if (!aux_engine)
+ 		return NULL;
+@@ -1194,7 +1194,7 @@ struct dce_i2c_hw *dcn20_i2c_hw_create(
+ 	uint32_t inst)
+ {
+ 	struct dce_i2c_hw *dce_i2c_hw =
+-		kzalloc(sizeof(struct dce_i2c_hw), GFP_KERNEL);
++		kzalloc(sizeof(struct dce_i2c_hw), GFP_ATOMIC);
+ 
+ 	if (!dce_i2c_hw)
+ 		return NULL;
+@@ -1207,7 +1207,7 @@ struct dce_i2c_hw *dcn20_i2c_hw_create(
+ struct mpc *dcn20_mpc_create(struct dc_context *ctx)
+ {
+ 	struct dcn20_mpc *mpc20 = kzalloc(sizeof(struct dcn20_mpc),
+-					  GFP_KERNEL);
++					  GFP_ATOMIC);
+ 
+ 	if (!mpc20)
+ 		return NULL;
+@@ -1225,7 +1225,7 @@ struct hubbub *dcn20_hubbub_create(struct dc_context *ctx)
+ {
+ 	int i;
+ 	struct dcn20_hubbub *hubbub = kzalloc(sizeof(struct dcn20_hubbub),
+-					  GFP_KERNEL);
++					  GFP_ATOMIC);
+ 
+ 	if (!hubbub)
+ 		return NULL;
+@@ -1253,7 +1253,7 @@ struct timing_generator *dcn20_timing_generator_create(
+ 		uint32_t instance)
+ {
+ 	struct optc *tgn10 =
+-		kzalloc(sizeof(struct optc), GFP_KERNEL);
++		kzalloc(sizeof(struct optc), GFP_ATOMIC);
+ 
+ 	if (!tgn10)
+ 		return NULL;
+@@ -1332,7 +1332,7 @@ static struct clock_source *dcn20_clock_source_create(
+ 	bool dp_clk_src)
+ {
+ 	struct dce110_clk_src *clk_src =
+-		kzalloc(sizeof(struct dce110_clk_src), GFP_KERNEL);
++		kzalloc(sizeof(struct dce110_clk_src), GFP_ATOMIC);
+ 
+ 	if (!clk_src)
+ 		return NULL;
+@@ -1438,7 +1438,7 @@ struct display_stream_compressor *dcn20_dsc_create(
+ 	struct dc_context *ctx, uint32_t inst)
+ {
+ 	struct dcn20_dsc *dsc =
+-		kzalloc(sizeof(struct dcn20_dsc), GFP_KERNEL);
++		kzalloc(sizeof(struct dcn20_dsc), GFP_ATOMIC);
+ 
+ 	if (!dsc) {
+ 		BREAK_TO_DEBUGGER();
+@@ -1572,7 +1572,7 @@ struct hubp *dcn20_hubp_create(
+ 	uint32_t inst)
+ {
+ 	struct dcn20_hubp *hubp2 =
+-		kzalloc(sizeof(struct dcn20_hubp), GFP_KERNEL);
++		kzalloc(sizeof(struct dcn20_hubp), GFP_ATOMIC);
+ 
+ 	if (!hubp2)
+ 		return NULL;
+@@ -3390,7 +3390,7 @@ bool dcn20_mmhubbub_create(struct dc_context *ctx, struct resource_pool *pool)
+ 
+ static struct pp_smu_funcs *dcn20_pp_smu_create(struct dc_context *ctx)
+ {
+-	struct pp_smu_funcs *pp_smu = kzalloc(sizeof(*pp_smu), GFP_KERNEL);
++	struct pp_smu_funcs *pp_smu = kzalloc(sizeof(*pp_smu), GFP_ATOMIC);
+ 
+ 	if (!pp_smu)
+ 		return pp_smu;
+@@ -4034,7 +4034,7 @@ struct resource_pool *dcn20_create_resource_pool(
+ 		struct dc *dc)
+ {
+ 	struct dcn20_resource_pool *pool =
+-		kzalloc(sizeof(struct dcn20_resource_pool), GFP_KERNEL);
++		kzalloc(sizeof(struct dcn20_resource_pool), GFP_ATOMIC);
+ 
+ 	if (!pool)
+ 		return NULL;
+diff --git a/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c b/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
+index 5e384a8a83dc2..51855a2624cf4 100644
+--- a/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
++++ b/drivers/gpu/drm/amd/display/dc/hdcp/hdcp_msg.c
+@@ -39,7 +39,7 @@
+ #define HDCP14_KSV_SIZE 5
+ #define HDCP14_MAX_KSV_FIFO_SIZE 127*HDCP14_KSV_SIZE
+ 
+-static const bool hdcp_cmd_is_read[] = {
++static const bool hdcp_cmd_is_read[HDCP_MESSAGE_ID_MAX] = {
+ 	[HDCP_MESSAGE_ID_READ_BKSV] = true,
+ 	[HDCP_MESSAGE_ID_READ_RI_R0] = true,
+ 	[HDCP_MESSAGE_ID_READ_PJ] = true,
+@@ -75,7 +75,7 @@ static const bool hdcp_cmd_is_read[] = {
+ 	[HDCP_MESSAGE_ID_WRITE_CONTENT_STREAM_TYPE] = false
+ };
+ 
+-static const uint8_t hdcp_i2c_offsets[] = {
++static const uint8_t hdcp_i2c_offsets[HDCP_MESSAGE_ID_MAX] = {
+ 	[HDCP_MESSAGE_ID_READ_BKSV] = 0x0,
+ 	[HDCP_MESSAGE_ID_READ_RI_R0] = 0x8,
+ 	[HDCP_MESSAGE_ID_READ_PJ] = 0xA,
+@@ -106,7 +106,8 @@ static const uint8_t hdcp_i2c_offsets[] = {
+ 	[HDCP_MESSAGE_ID_WRITE_REPEATER_AUTH_SEND_ACK] = 0x60,
+ 	[HDCP_MESSAGE_ID_WRITE_REPEATER_AUTH_STREAM_MANAGE] = 0x60,
+ 	[HDCP_MESSAGE_ID_READ_REPEATER_AUTH_STREAM_READY] = 0x80,
+-	[HDCP_MESSAGE_ID_READ_RXSTATUS] = 0x70
++	[HDCP_MESSAGE_ID_READ_RXSTATUS] = 0x70,
++	[HDCP_MESSAGE_ID_WRITE_CONTENT_STREAM_TYPE] = 0x0,
+ };
+ 
+ struct protection_properties {
+@@ -184,7 +185,7 @@ static const struct protection_properties hdmi_14_protection = {
+ 	.process_transaction = hdmi_14_process_transaction
+ };
+ 
+-static const uint32_t hdcp_dpcd_addrs[] = {
++static const uint32_t hdcp_dpcd_addrs[HDCP_MESSAGE_ID_MAX] = {
+ 	[HDCP_MESSAGE_ID_READ_BKSV] = 0x68000,
+ 	[HDCP_MESSAGE_ID_READ_RI_R0] = 0x68005,
+ 	[HDCP_MESSAGE_ID_READ_PJ] = 0xFFFFFFFF,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 42c4dbe3e3624..ec0037a213314 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -2086,6 +2086,7 @@ int smu_set_power_limit(struct smu_context *smu, uint32_t limit)
+ 		dev_err(smu->adev->dev,
+ 			"New power limit (%d) is over the max allowed %d\n",
+ 			limit, smu->max_power_limit);
++		ret = -EINVAL;
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/bridge/Kconfig b/drivers/gpu/drm/bridge/Kconfig
+index e4110d6ca7b3c..bc60fc4728d70 100644
+--- a/drivers/gpu/drm/bridge/Kconfig
++++ b/drivers/gpu/drm/bridge/Kconfig
+@@ -67,6 +67,7 @@ config DRM_LONTIUM_LT9611UXC
+ 	depends on OF
+ 	select DRM_PANEL_BRIDGE
+ 	select DRM_KMS_HELPER
++	select DRM_MIPI_DSI
+ 	select REGMAP_I2C
+ 	help
+ 	  Driver for Lontium LT9611UXC DSI to HDMI bridge
+@@ -151,6 +152,7 @@ config DRM_SII902X
+ 	tristate "Silicon Image sii902x RGB/HDMI bridge"
+ 	depends on OF
+ 	select DRM_KMS_HELPER
++	select DRM_MIPI_DSI
+ 	select REGMAP_I2C
+ 	select I2C_MUX
+ 	select SND_SOC_HDMI_CODEC if SND_SOC
+@@ -200,6 +202,7 @@ config DRM_TOSHIBA_TC358767
+ 	tristate "Toshiba TC358767 eDP bridge"
+ 	depends on OF
+ 	select DRM_KMS_HELPER
++	select DRM_MIPI_DSI
+ 	select REGMAP_I2C
+ 	select DRM_PANEL
+ 	help
+diff --git a/drivers/gpu/drm/bridge/analogix/Kconfig b/drivers/gpu/drm/bridge/analogix/Kconfig
+index 024ea2a570e74..9160fd80dd704 100644
+--- a/drivers/gpu/drm/bridge/analogix/Kconfig
++++ b/drivers/gpu/drm/bridge/analogix/Kconfig
+@@ -30,6 +30,7 @@ config DRM_ANALOGIX_ANX7625
+ 	tristate "Analogix Anx7625 MIPI to DP interface support"
+ 	depends on DRM
+ 	depends on OF
++	select DRM_MIPI_DSI
+ 	help
+ 	  ANX7625 is an ultra-low power 4K mobile HD transmitter
+ 	  designed for portable devices. It converts MIPI/DPI to
+diff --git a/drivers/gpu/drm/bridge/panel.c b/drivers/gpu/drm/bridge/panel.c
+index 0ddc37551194e..c916f4b8907ef 100644
+--- a/drivers/gpu/drm/bridge/panel.c
++++ b/drivers/gpu/drm/bridge/panel.c
+@@ -87,6 +87,18 @@ static int panel_bridge_attach(struct drm_bridge *bridge,
+ 
+ static void panel_bridge_detach(struct drm_bridge *bridge)
+ {
++	struct panel_bridge *panel_bridge = drm_bridge_to_panel_bridge(bridge);
++	struct drm_connector *connector = &panel_bridge->connector;
++
++	/*
++	 * Cleanup the connector if we know it was initialized.
++	 *
++	 * FIXME: This wouldn't be needed if the panel_bridge structure was
++	 * allocated with drmm_kzalloc(). This might be tricky since the
++	 * drm_device pointer can only be retrieved when the bridge is attached.
++	 */
++	if (connector->dev)
++		drm_connector_cleanup(connector);
+ }
+ 
+ static void panel_bridge_pre_enable(struct drm_bridge *bridge)
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 309afe61afdd5..9c75c88150569 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -1154,6 +1154,7 @@ static void build_clear_payload_id_table(struct drm_dp_sideband_msg_tx *msg)
+ 
+ 	req.req_type = DP_CLEAR_PAYLOAD_ID_TABLE;
+ 	drm_dp_encode_sideband_req(&req, msg);
++	msg->path_msg = true;
+ }
+ 
+ static int build_enum_path_resources(struct drm_dp_sideband_msg_tx *msg,
+@@ -2824,15 +2825,21 @@ static int set_hdr_from_dst_qlock(struct drm_dp_sideband_msg_hdr *hdr,
+ 
+ 	req_type = txmsg->msg[0] & 0x7f;
+ 	if (req_type == DP_CONNECTION_STATUS_NOTIFY ||
+-		req_type == DP_RESOURCE_STATUS_NOTIFY)
++		req_type == DP_RESOURCE_STATUS_NOTIFY ||
++		req_type == DP_CLEAR_PAYLOAD_ID_TABLE)
+ 		hdr->broadcast = 1;
+ 	else
+ 		hdr->broadcast = 0;
+ 	hdr->path_msg = txmsg->path_msg;
+-	hdr->lct = mstb->lct;
+-	hdr->lcr = mstb->lct - 1;
+-	if (mstb->lct > 1)
+-		memcpy(hdr->rad, mstb->rad, mstb->lct / 2);
++	if (hdr->broadcast) {
++		hdr->lct = 1;
++		hdr->lcr = 6;
++	} else {
++		hdr->lct = mstb->lct;
++		hdr->lcr = mstb->lct - 1;
++	}
++
++	memcpy(hdr->rad, mstb->rad, hdr->lct / 2);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
+index ad59a51eab6d8..e7e1ee2aa3529 100644
+--- a/drivers/gpu/drm/drm_probe_helper.c
++++ b/drivers/gpu/drm/drm_probe_helper.c
+@@ -624,6 +624,7 @@ static void output_poll_execute(struct work_struct *work)
+ 	struct drm_connector_list_iter conn_iter;
+ 	enum drm_connector_status old_status;
+ 	bool repoll = false, changed;
++	u64 old_epoch_counter;
+ 
+ 	if (!dev->mode_config.poll_enabled)
+ 		return;
+@@ -660,8 +661,9 @@ static void output_poll_execute(struct work_struct *work)
+ 
+ 		repoll = true;
+ 
++		old_epoch_counter = connector->epoch_counter;
+ 		connector->status = drm_helper_probe_detect(connector, NULL, false);
+-		if (old_status != connector->status) {
++		if (old_epoch_counter != connector->epoch_counter) {
+ 			const char *old, *new;
+ 
+ 			/*
+@@ -690,6 +692,9 @@ static void output_poll_execute(struct work_struct *work)
+ 				      connector->base.id,
+ 				      connector->name,
+ 				      old, new);
++			DRM_DEBUG_KMS("[CONNECTOR:%d:%s] epoch counter %llu -> %llu\n",
++				      connector->base.id, connector->name,
++				      old_epoch_counter, connector->epoch_counter);
+ 
+ 			changed = true;
+ 		}
+diff --git a/drivers/gpu/drm/i915/gvt/gvt.c b/drivers/gpu/drm/i915/gvt/gvt.c
+index d1d8ee4a5f16a..57578bf28d774 100644
+--- a/drivers/gpu/drm/i915/gvt/gvt.c
++++ b/drivers/gpu/drm/i915/gvt/gvt.c
+@@ -126,7 +126,7 @@ static bool intel_get_gvt_attrs(struct attribute_group ***intel_vgpu_type_groups
+ 	return true;
+ }
+ 
+-static bool intel_gvt_init_vgpu_type_groups(struct intel_gvt *gvt)
++static int intel_gvt_init_vgpu_type_groups(struct intel_gvt *gvt)
+ {
+ 	int i, j;
+ 	struct intel_vgpu_type *type;
+@@ -144,7 +144,7 @@ static bool intel_gvt_init_vgpu_type_groups(struct intel_gvt *gvt)
+ 		gvt_vgpu_type_groups[i] = group;
+ 	}
+ 
+-	return true;
++	return 0;
+ 
+ unwind:
+ 	for (j = 0; j < i; j++) {
+@@ -152,7 +152,7 @@ unwind:
+ 		kfree(group);
+ 	}
+ 
+-	return false;
++	return -ENOMEM;
+ }
+ 
+ static void intel_gvt_cleanup_vgpu_type_groups(struct intel_gvt *gvt)
+@@ -360,7 +360,7 @@ int intel_gvt_init_device(struct drm_i915_private *i915)
+ 		goto out_clean_thread;
+ 
+ 	ret = intel_gvt_init_vgpu_type_groups(gvt);
+-	if (ret == false) {
++	if (ret) {
+ 		gvt_err("failed to init vgpu type groups: %d\n", ret);
+ 		goto out_clean_types;
+ 	}
+diff --git a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+index 7bb31fbee29da..fd8870edde0ed 100644
+--- a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
++++ b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+@@ -554,7 +554,7 @@ static void ingenic_drm_plane_atomic_update(struct drm_plane *plane,
+ 		height = state->src_h >> 16;
+ 		cpp = state->fb->format->cpp[0];
+ 
+-		if (priv->soc_info->has_osd && plane->type == DRM_PLANE_TYPE_OVERLAY)
++		if (!priv->soc_info->has_osd || plane->type == DRM_PLANE_TYPE_OVERLAY)
+ 			hwdesc = &priv->dma_hwdescs->hwdesc_f0;
+ 		else
+ 			hwdesc = &priv->dma_hwdescs->hwdesc_f1;
+@@ -826,6 +826,7 @@ static int ingenic_drm_bind(struct device *dev, bool has_components)
+ 	const struct jz_soc_info *soc_info;
+ 	struct ingenic_drm *priv;
+ 	struct clk *parent_clk;
++	struct drm_plane *primary;
+ 	struct drm_bridge *bridge;
+ 	struct drm_panel *panel;
+ 	struct drm_encoder *encoder;
+@@ -940,9 +941,11 @@ static int ingenic_drm_bind(struct device *dev, bool has_components)
+ 	if (soc_info->has_osd)
+ 		priv->ipu_plane = drm_plane_from_index(drm, 0);
+ 
+-	drm_plane_helper_add(&priv->f1, &ingenic_drm_plane_helper_funcs);
++	primary = priv->soc_info->has_osd ? &priv->f1 : &priv->f0;
+ 
+-	ret = drm_universal_plane_init(drm, &priv->f1, 1,
++	drm_plane_helper_add(primary, &ingenic_drm_plane_helper_funcs);
++
++	ret = drm_universal_plane_init(drm, primary, 1,
+ 				       &ingenic_drm_primary_plane_funcs,
+ 				       priv->soc_info->formats_f1,
+ 				       priv->soc_info->num_formats_f1,
+@@ -954,7 +957,7 @@ static int ingenic_drm_bind(struct device *dev, bool has_components)
+ 
+ 	drm_crtc_helper_add(&priv->crtc, &ingenic_drm_crtc_helper_funcs);
+ 
+-	ret = drm_crtc_init_with_planes(drm, &priv->crtc, &priv->f1,
++	ret = drm_crtc_init_with_planes(drm, &priv->crtc, primary,
+ 					NULL, &ingenic_drm_crtc_funcs, NULL);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to init CRTC: %i\n", ret);
+diff --git a/drivers/gpu/drm/mcde/mcde_dsi.c b/drivers/gpu/drm/mcde/mcde_dsi.c
+index 2314c81229920..b3fd3501c4127 100644
+--- a/drivers/gpu/drm/mcde/mcde_dsi.c
++++ b/drivers/gpu/drm/mcde/mcde_dsi.c
+@@ -760,7 +760,7 @@ static void mcde_dsi_start(struct mcde_dsi *d)
+ 		DSI_MCTL_MAIN_DATA_CTL_BTA_EN |
+ 		DSI_MCTL_MAIN_DATA_CTL_READ_EN |
+ 		DSI_MCTL_MAIN_DATA_CTL_REG_TE_EN;
+-	if (d->mdsi->mode_flags & MIPI_DSI_MODE_EOT_PACKET)
++	if (!(d->mdsi->mode_flags & MIPI_DSI_MODE_EOT_PACKET))
+ 		val |= DSI_MCTL_MAIN_DATA_CTL_HOST_EOT_GEN;
+ 	writel(val, d->regs + DSI_MCTL_MAIN_DATA_CTL);
+ 
+diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi.c b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+index 8ee55f9e29541..7fb358167f8d1 100644
+--- a/drivers/gpu/drm/mediatek/mtk_hdmi.c
++++ b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+@@ -153,7 +153,7 @@ struct mtk_hdmi_conf {
+ struct mtk_hdmi {
+ 	struct drm_bridge bridge;
+ 	struct drm_bridge *next_bridge;
+-	struct drm_connector conn;
++	struct drm_connector *curr_conn;/* current connector (only valid when 'enabled') */
+ 	struct device *dev;
+ 	const struct mtk_hdmi_conf *conf;
+ 	struct phy *phy;
+@@ -186,11 +186,6 @@ static inline struct mtk_hdmi *hdmi_ctx_from_bridge(struct drm_bridge *b)
+ 	return container_of(b, struct mtk_hdmi, bridge);
+ }
+ 
+-static inline struct mtk_hdmi *hdmi_ctx_from_conn(struct drm_connector *c)
+-{
+-	return container_of(c, struct mtk_hdmi, conn);
+-}
+-
+ static u32 mtk_hdmi_read(struct mtk_hdmi *hdmi, u32 offset)
+ {
+ 	return readl(hdmi->regs + offset);
+@@ -974,7 +969,7 @@ static int mtk_hdmi_setup_avi_infoframe(struct mtk_hdmi *hdmi,
+ 	ssize_t err;
+ 
+ 	err = drm_hdmi_avi_infoframe_from_display_mode(&frame,
+-						       &hdmi->conn, mode);
++						       hdmi->curr_conn, mode);
+ 	if (err < 0) {
+ 		dev_err(hdmi->dev,
+ 			"Failed to get AVI infoframe from mode: %zd\n", err);
+@@ -1054,7 +1049,7 @@ static int mtk_hdmi_setup_vendor_specific_infoframe(struct mtk_hdmi *hdmi,
+ 	ssize_t err;
+ 
+ 	err = drm_hdmi_vendor_infoframe_from_display_mode(&frame,
+-							  &hdmi->conn, mode);
++							  hdmi->curr_conn, mode);
+ 	if (err) {
+ 		dev_err(hdmi->dev,
+ 			"Failed to get vendor infoframe from mode: %zd\n", err);
+@@ -1201,48 +1196,16 @@ mtk_hdmi_update_plugged_status(struct mtk_hdmi *hdmi)
+ 	       connector_status_connected : connector_status_disconnected;
+ }
+ 
+-static enum drm_connector_status hdmi_conn_detect(struct drm_connector *conn,
+-						  bool force)
++static enum drm_connector_status mtk_hdmi_detect(struct mtk_hdmi *hdmi)
+ {
+-	struct mtk_hdmi *hdmi = hdmi_ctx_from_conn(conn);
+ 	return mtk_hdmi_update_plugged_status(hdmi);
+ }
+ 
+-static void hdmi_conn_destroy(struct drm_connector *conn)
+-{
+-	struct mtk_hdmi *hdmi = hdmi_ctx_from_conn(conn);
+-
+-	mtk_cec_set_hpd_event(hdmi->cec_dev, NULL, NULL);
+-
+-	drm_connector_cleanup(conn);
+-}
+-
+-static int mtk_hdmi_conn_get_modes(struct drm_connector *conn)
+-{
+-	struct mtk_hdmi *hdmi = hdmi_ctx_from_conn(conn);
+-	struct edid *edid;
+-	int ret;
+-
+-	if (!hdmi->ddc_adpt)
+-		return -ENODEV;
+-
+-	edid = drm_get_edid(conn, hdmi->ddc_adpt);
+-	if (!edid)
+-		return -ENODEV;
+-
+-	hdmi->dvi_mode = !drm_detect_monitor_audio(edid);
+-
+-	drm_connector_update_edid_property(conn, edid);
+-
+-	ret = drm_add_edid_modes(conn, edid);
+-	kfree(edid);
+-	return ret;
+-}
+-
+-static int mtk_hdmi_conn_mode_valid(struct drm_connector *conn,
+-				    struct drm_display_mode *mode)
++static int mtk_hdmi_bridge_mode_valid(struct drm_bridge *bridge,
++				      const struct drm_display_info *info,
++				      const struct drm_display_mode *mode)
+ {
+-	struct mtk_hdmi *hdmi = hdmi_ctx_from_conn(conn);
++	struct mtk_hdmi *hdmi = hdmi_ctx_from_bridge(bridge);
+ 	struct drm_bridge *next_bridge;
+ 
+ 	dev_dbg(hdmi->dev, "xres=%d, yres=%d, refresh=%d, intl=%d clock=%d\n",
+@@ -1267,74 +1230,57 @@ static int mtk_hdmi_conn_mode_valid(struct drm_connector *conn,
+ 	return drm_mode_validate_size(mode, 0x1fff, 0x1fff);
+ }
+ 
+-static struct drm_encoder *mtk_hdmi_conn_best_enc(struct drm_connector *conn)
+-{
+-	struct mtk_hdmi *hdmi = hdmi_ctx_from_conn(conn);
+-
+-	return hdmi->bridge.encoder;
+-}
+-
+-static const struct drm_connector_funcs mtk_hdmi_connector_funcs = {
+-	.detect = hdmi_conn_detect,
+-	.fill_modes = drm_helper_probe_single_connector_modes,
+-	.destroy = hdmi_conn_destroy,
+-	.reset = drm_atomic_helper_connector_reset,
+-	.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
+-	.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+-};
+-
+-static const struct drm_connector_helper_funcs
+-		mtk_hdmi_connector_helper_funcs = {
+-	.get_modes = mtk_hdmi_conn_get_modes,
+-	.mode_valid = mtk_hdmi_conn_mode_valid,
+-	.best_encoder = mtk_hdmi_conn_best_enc,
+-};
+-
+ static void mtk_hdmi_hpd_event(bool hpd, struct device *dev)
+ {
+ 	struct mtk_hdmi *hdmi = dev_get_drvdata(dev);
+ 
+-	if (hdmi && hdmi->bridge.encoder && hdmi->bridge.encoder->dev)
++	if (hdmi && hdmi->bridge.encoder && hdmi->bridge.encoder->dev) {
++		static enum drm_connector_status status;
++
++		status = mtk_hdmi_detect(hdmi);
+ 		drm_helper_hpd_irq_event(hdmi->bridge.encoder->dev);
++		drm_bridge_hpd_notify(&hdmi->bridge, status);
++	}
+ }
+ 
+ /*
+  * Bridge callbacks
+  */
+ 
++static enum drm_connector_status mtk_hdmi_bridge_detect(struct drm_bridge *bridge)
++{
++	struct mtk_hdmi *hdmi = hdmi_ctx_from_bridge(bridge);
++
++	return mtk_hdmi_detect(hdmi);
++}
++
++static struct edid *mtk_hdmi_bridge_get_edid(struct drm_bridge *bridge,
++					     struct drm_connector *connector)
++{
++	struct mtk_hdmi *hdmi = hdmi_ctx_from_bridge(bridge);
++	struct edid *edid;
++
++	if (!hdmi->ddc_adpt)
++		return NULL;
++	edid = drm_get_edid(connector, hdmi->ddc_adpt);
++	if (!edid)
++		return NULL;
++	hdmi->dvi_mode = !drm_detect_monitor_audio(edid);
++	return edid;
++}
++
+ static int mtk_hdmi_bridge_attach(struct drm_bridge *bridge,
+ 				  enum drm_bridge_attach_flags flags)
+ {
+ 	struct mtk_hdmi *hdmi = hdmi_ctx_from_bridge(bridge);
+ 	int ret;
+ 
+-	if (flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR) {
+-		DRM_ERROR("Fix bridge driver to make connector optional!");
++	if (!(flags & DRM_BRIDGE_ATTACH_NO_CONNECTOR)) {
++		DRM_ERROR("%s: The flag DRM_BRIDGE_ATTACH_NO_CONNECTOR must be supplied\n",
++			  __func__);
+ 		return -EINVAL;
+ 	}
+ 
+-	ret = drm_connector_init_with_ddc(bridge->encoder->dev, &hdmi->conn,
+-					  &mtk_hdmi_connector_funcs,
+-					  DRM_MODE_CONNECTOR_HDMIA,
+-					  hdmi->ddc_adpt);
+-	if (ret) {
+-		dev_err(hdmi->dev, "Failed to initialize connector: %d\n", ret);
+-		return ret;
+-	}
+-	drm_connector_helper_add(&hdmi->conn, &mtk_hdmi_connector_helper_funcs);
+-
+-	hdmi->conn.polled = DRM_CONNECTOR_POLL_HPD;
+-	hdmi->conn.interlace_allowed = true;
+-	hdmi->conn.doublescan_allowed = false;
+-
+-	ret = drm_connector_attach_encoder(&hdmi->conn,
+-						bridge->encoder);
+-	if (ret) {
+-		dev_err(hdmi->dev,
+-			"Failed to attach connector to encoder: %d\n", ret);
+-		return ret;
+-	}
+-
+ 	if (hdmi->next_bridge) {
+ 		ret = drm_bridge_attach(bridge->encoder, hdmi->next_bridge,
+ 					bridge, flags);
+@@ -1357,7 +1303,8 @@ static bool mtk_hdmi_bridge_mode_fixup(struct drm_bridge *bridge,
+ 	return true;
+ }
+ 
+-static void mtk_hdmi_bridge_disable(struct drm_bridge *bridge)
++static void mtk_hdmi_bridge_atomic_disable(struct drm_bridge *bridge,
++					   struct drm_bridge_state *old_bridge_state)
+ {
+ 	struct mtk_hdmi *hdmi = hdmi_ctx_from_bridge(bridge);
+ 
+@@ -1368,10 +1315,13 @@ static void mtk_hdmi_bridge_disable(struct drm_bridge *bridge)
+ 	clk_disable_unprepare(hdmi->clk[MTK_HDMI_CLK_HDMI_PIXEL]);
+ 	clk_disable_unprepare(hdmi->clk[MTK_HDMI_CLK_HDMI_PLL]);
+ 
++	hdmi->curr_conn = NULL;
++
+ 	hdmi->enabled = false;
+ }
+ 
+-static void mtk_hdmi_bridge_post_disable(struct drm_bridge *bridge)
++static void mtk_hdmi_bridge_atomic_post_disable(struct drm_bridge *bridge,
++						struct drm_bridge_state *old_state)
+ {
+ 	struct mtk_hdmi *hdmi = hdmi_ctx_from_bridge(bridge);
+ 
+@@ -1406,7 +1356,8 @@ static void mtk_hdmi_bridge_mode_set(struct drm_bridge *bridge,
+ 	drm_mode_copy(&hdmi->mode, adjusted_mode);
+ }
+ 
+-static void mtk_hdmi_bridge_pre_enable(struct drm_bridge *bridge)
++static void mtk_hdmi_bridge_atomic_pre_enable(struct drm_bridge *bridge,
++					      struct drm_bridge_state *old_state)
+ {
+ 	struct mtk_hdmi *hdmi = hdmi_ctx_from_bridge(bridge);
+ 
+@@ -1426,10 +1377,16 @@ static void mtk_hdmi_send_infoframe(struct mtk_hdmi *hdmi,
+ 		mtk_hdmi_setup_vendor_specific_infoframe(hdmi, mode);
+ }
+ 
+-static void mtk_hdmi_bridge_enable(struct drm_bridge *bridge)
++static void mtk_hdmi_bridge_atomic_enable(struct drm_bridge *bridge,
++					  struct drm_bridge_state *old_state)
+ {
++	struct drm_atomic_state *state = old_state->base.state;
+ 	struct mtk_hdmi *hdmi = hdmi_ctx_from_bridge(bridge);
+ 
++	/* Retrieve the connector through the atomic state. */
++	hdmi->curr_conn = drm_atomic_get_new_connector_for_encoder(state,
++								   bridge->encoder);
++
+ 	mtk_hdmi_output_set_display_mode(hdmi, &hdmi->mode);
+ 	clk_prepare_enable(hdmi->clk[MTK_HDMI_CLK_HDMI_PLL]);
+ 	clk_prepare_enable(hdmi->clk[MTK_HDMI_CLK_HDMI_PIXEL]);
+@@ -1440,13 +1397,19 @@ static void mtk_hdmi_bridge_enable(struct drm_bridge *bridge)
+ }
+ 
+ static const struct drm_bridge_funcs mtk_hdmi_bridge_funcs = {
++	.mode_valid = mtk_hdmi_bridge_mode_valid,
++	.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
++	.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
++	.atomic_reset = drm_atomic_helper_bridge_reset,
+ 	.attach = mtk_hdmi_bridge_attach,
+ 	.mode_fixup = mtk_hdmi_bridge_mode_fixup,
+-	.disable = mtk_hdmi_bridge_disable,
+-	.post_disable = mtk_hdmi_bridge_post_disable,
++	.atomic_disable = mtk_hdmi_bridge_atomic_disable,
++	.atomic_post_disable = mtk_hdmi_bridge_atomic_post_disable,
+ 	.mode_set = mtk_hdmi_bridge_mode_set,
+-	.pre_enable = mtk_hdmi_bridge_pre_enable,
+-	.enable = mtk_hdmi_bridge_enable,
++	.atomic_pre_enable = mtk_hdmi_bridge_atomic_pre_enable,
++	.atomic_enable = mtk_hdmi_bridge_atomic_enable,
++	.detect = mtk_hdmi_bridge_detect,
++	.get_edid = mtk_hdmi_bridge_get_edid,
+ };
+ 
+ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
+@@ -1662,8 +1625,10 @@ static int mtk_hdmi_audio_get_eld(struct device *dev, void *data, uint8_t *buf,
+ {
+ 	struct mtk_hdmi *hdmi = dev_get_drvdata(dev);
+ 
+-	memcpy(buf, hdmi->conn.eld, min(sizeof(hdmi->conn.eld), len));
+-
++	if (hdmi->enabled)
++		memcpy(buf, hdmi->curr_conn->eld, min(sizeof(hdmi->curr_conn->eld), len));
++	else
++		memset(buf, 0, len);
+ 	return 0;
+ }
+ 
+@@ -1755,6 +1720,9 @@ static int mtk_drm_hdmi_probe(struct platform_device *pdev)
+ 
+ 	hdmi->bridge.funcs = &mtk_hdmi_bridge_funcs;
+ 	hdmi->bridge.of_node = pdev->dev.of_node;
++	hdmi->bridge.ops = DRM_BRIDGE_OP_DETECT | DRM_BRIDGE_OP_EDID
++			 | DRM_BRIDGE_OP_HPD;
++	hdmi->bridge.type = DRM_MODE_CONNECTOR_HDMIA;
+ 	drm_bridge_add(&hdmi->bridge);
+ 
+ 	ret = mtk_hdmi_clk_enable_audio(hdmi);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+index 189f3533525c5..e4444452759c9 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+@@ -22,7 +22,7 @@
+ 	(VIG_MASK | BIT(DPU_SSPP_QOS_8LVL) | BIT(DPU_SSPP_SCALER_QSEED4))
+ 
+ #define VIG_SM8250_MASK \
+-	(VIG_MASK | BIT(DPU_SSPP_SCALER_QSEED3LITE))
++	(VIG_MASK | BIT(DPU_SSPP_QOS_8LVL) | BIT(DPU_SSPP_SCALER_QSEED3LITE))
+ 
+ #define DMA_SDM845_MASK \
+ 	(BIT(DPU_SSPP_SRC) | BIT(DPU_SSPP_QOS) | BIT(DPU_SSPP_QOS_8LVL) |\
+diff --git a/drivers/gpu/drm/msm/msm_debugfs.c b/drivers/gpu/drm/msm/msm_debugfs.c
+index 85ad0babc3260..d611cc8e54a45 100644
+--- a/drivers/gpu/drm/msm/msm_debugfs.c
++++ b/drivers/gpu/drm/msm/msm_debugfs.c
+@@ -111,23 +111,15 @@ static const struct file_operations msm_gpu_fops = {
+ static int msm_gem_show(struct drm_device *dev, struct seq_file *m)
+ {
+ 	struct msm_drm_private *priv = dev->dev_private;
+-	struct msm_gpu *gpu = priv->gpu;
+ 	int ret;
+ 
+-	ret = mutex_lock_interruptible(&priv->mm_lock);
++	ret = mutex_lock_interruptible(&priv->obj_lock);
+ 	if (ret)
+ 		return ret;
+ 
+-	if (gpu) {
+-		seq_printf(m, "Active Objects (%s):\n", gpu->name);
+-		msm_gem_describe_objects(&gpu->active_list, m);
+-	}
+-
+-	seq_printf(m, "Inactive Objects:\n");
+-	msm_gem_describe_objects(&priv->inactive_dontneed, m);
+-	msm_gem_describe_objects(&priv->inactive_willneed, m);
++	msm_gem_describe_objects(&priv->objects, m);
+ 
+-	mutex_unlock(&priv->mm_lock);
++	mutex_unlock(&priv->obj_lock);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index 196907689c82e..18ea1c66de718 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -446,6 +446,9 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
+ 
+ 	priv->wq = alloc_ordered_workqueue("msm", 0);
+ 
++	INIT_LIST_HEAD(&priv->objects);
++	mutex_init(&priv->obj_lock);
++
+ 	INIT_LIST_HEAD(&priv->inactive_willneed);
+ 	INIT_LIST_HEAD(&priv->inactive_dontneed);
+ 	mutex_init(&priv->mm_lock);
+diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
+index 591c47a654e83..6b58e49754cbc 100644
+--- a/drivers/gpu/drm/msm/msm_drv.h
++++ b/drivers/gpu/drm/msm/msm_drv.h
+@@ -174,7 +174,14 @@ struct msm_drm_private {
+ 	struct msm_rd_state *hangrd;   /* debugfs to dump hanging submits */
+ 	struct msm_perf_state *perf;
+ 
+-	/*
++	/**
++	 * List of all GEM objects (mainly for debugfs, protected by obj_lock
++	 * (acquire before per GEM object lock)
++	 */
++	struct list_head objects;
++	struct mutex obj_lock;
++
++	/**
+ 	 * Lists of inactive GEM objects.  Every bo is either in one of the
+ 	 * inactive lists (depending on whether or not it is shrinkable) or
+ 	 * gpu->active_list (for the gpu it is active on[1])
+diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
+index f091c1e164fa8..aeba3eb8ce468 100644
+--- a/drivers/gpu/drm/msm/msm_gem.c
++++ b/drivers/gpu/drm/msm/msm_gem.c
+@@ -951,7 +951,7 @@ void msm_gem_describe_objects(struct list_head *list, struct seq_file *m)
+ 	size_t size = 0;
+ 
+ 	seq_puts(m, "   flags       id ref  offset   kaddr            size     madv      name\n");
+-	list_for_each_entry(msm_obj, list, mm_list) {
++	list_for_each_entry(msm_obj, list, node) {
+ 		struct drm_gem_object *obj = &msm_obj->base;
+ 		seq_puts(m, "   ");
+ 		msm_gem_describe(obj, m);
+@@ -970,6 +970,10 @@ void msm_gem_free_object(struct drm_gem_object *obj)
+ 	struct drm_device *dev = obj->dev;
+ 	struct msm_drm_private *priv = dev->dev_private;
+ 
++	mutex_lock(&priv->obj_lock);
++	list_del(&msm_obj->node);
++	mutex_unlock(&priv->obj_lock);
++
+ 	mutex_lock(&priv->mm_lock);
+ 	list_del(&msm_obj->mm_list);
+ 	mutex_unlock(&priv->mm_lock);
+@@ -1157,6 +1161,10 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
+ 	list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed);
+ 	mutex_unlock(&priv->mm_lock);
+ 
++	mutex_lock(&priv->obj_lock);
++	list_add_tail(&msm_obj->node, &priv->objects);
++	mutex_unlock(&priv->obj_lock);
++
+ 	return obj;
+ 
+ fail:
+@@ -1227,6 +1235,10 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
+ 	list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed);
+ 	mutex_unlock(&priv->mm_lock);
+ 
++	mutex_lock(&priv->obj_lock);
++	list_add_tail(&msm_obj->node, &priv->objects);
++	mutex_unlock(&priv->obj_lock);
++
+ 	return obj;
+ 
+ fail:
+diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
+index b3a0a880cbabe..99d4c0e9465ec 100644
+--- a/drivers/gpu/drm/msm/msm_gem.h
++++ b/drivers/gpu/drm/msm/msm_gem.h
+@@ -55,8 +55,16 @@ struct msm_gem_object {
+ 	 */
+ 	uint8_t vmap_count;
+ 
+-	/* And object is either:
+-	 *  inactive - on priv->inactive_list
++	/**
++	 * Node in list of all objects (mainly for debugfs, protected by
++	 * priv->obj_lock
++	 */
++	struct list_head node;
++
++	/**
++	 * An object is either:
++	 *  inactive - on priv->inactive_dontneed or priv->inactive_willneed
++	 *     (depending on purgability status)
+ 	 *  active   - on one one of the gpu's active_list..  well, at
+ 	 *     least for now we don't have (I don't think) hw sync between
+ 	 *     2d and 3d one devices which have both, meaning we need to
+diff --git a/drivers/gpu/drm/omapdrm/dss/dsi.c b/drivers/gpu/drm/omapdrm/dss/dsi.c
+index b31d750c425ac..5f1722b040f46 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dsi.c
++++ b/drivers/gpu/drm/omapdrm/dss/dsi.c
+@@ -4327,7 +4327,8 @@ static int omap_dsi_register_te_irq(struct dsi_data *dsi,
+ 	irq_set_status_flags(te_irq, IRQ_NOAUTOEN);
+ 
+ 	err = request_threaded_irq(te_irq, NULL, omap_dsi_te_irq_handler,
+-				   IRQF_TRIGGER_RISING, "TE", dsi);
++				   IRQF_TRIGGER_RISING | IRQF_ONESHOT,
++				   "TE", dsi);
+ 	if (err) {
+ 		dev_err(dsi->dev, "request irq failed with %d\n", err);
+ 		gpiod_put(dsi->te_gpio);
+diff --git a/drivers/gpu/drm/panel/panel-novatek-nt35510.c b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+index b9a0e56f33e24..ef70140c5b09d 100644
+--- a/drivers/gpu/drm/panel/panel-novatek-nt35510.c
++++ b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+@@ -898,8 +898,7 @@ static int nt35510_probe(struct mipi_dsi_device *dsi)
+ 	 */
+ 	dsi->hs_rate = 349440000;
+ 	dsi->lp_rate = 9600000;
+-	dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS |
+-		MIPI_DSI_MODE_EOT_PACKET;
++	dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS;
+ 
+ 	/*
+ 	 * Every new incarnation of this display must have a unique
+diff --git a/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c b/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
+index 4aac0d1573dd0..70560cac53a99 100644
+--- a/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
++++ b/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
+@@ -184,9 +184,7 @@ static int s6d16d0_probe(struct mipi_dsi_device *dsi)
+ 	 * As we only send commands we do not need to be continuously
+ 	 * clocked.
+ 	 */
+-	dsi->mode_flags =
+-		MIPI_DSI_CLOCK_NON_CONTINUOUS |
+-		MIPI_DSI_MODE_EOT_PACKET;
++	dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS;
+ 
+ 	s6->supply = devm_regulator_get(dev, "vdd1");
+ 	if (IS_ERR(s6->supply))
+diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c b/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c
+index eec74c10dddaf..9c3563c61e8cc 100644
+--- a/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c
++++ b/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c
+@@ -97,7 +97,6 @@ static int s6e63m0_dsi_probe(struct mipi_dsi_device *dsi)
+ 	dsi->hs_rate = 349440000;
+ 	dsi->lp_rate = 9600000;
+ 	dsi->mode_flags = MIPI_DSI_MODE_VIDEO |
+-		MIPI_DSI_MODE_EOT_PACKET |
+ 		MIPI_DSI_MODE_VIDEO_BURST;
+ 
+ 	ret = s6e63m0_probe(dev, s6e63m0_dsi_dcs_read, s6e63m0_dsi_dcs_write,
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 4e2dad314c795..e8b1a0e873eaa 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -406,7 +406,7 @@ static int panel_simple_prepare(struct drm_panel *panel)
+ 		if (IS_ERR(p->hpd_gpio)) {
+ 			err = panel_simple_get_hpd_gpio(panel->dev, p, false);
+ 			if (err)
+-				return err;
++				goto error;
+ 		}
+ 
+ 		err = readx_poll_timeout(gpiod_get_value_cansleep, p->hpd_gpio,
+@@ -418,13 +418,20 @@ static int panel_simple_prepare(struct drm_panel *panel)
+ 		if (err) {
+ 			dev_err(panel->dev,
+ 				"error waiting for hpd GPIO: %d\n", err);
+-			return err;
++			goto error;
+ 		}
+ 	}
+ 
+ 	p->prepared_time = ktime_get();
+ 
+ 	return 0;
++
++error:
++	gpiod_set_value_cansleep(p->enable_gpio, 0);
++	regulator_disable(p->supply);
++	p->unprepared_time = ktime_get();
++
++	return err;
+ }
+ 
+ static int panel_simple_enable(struct drm_panel *panel)
+diff --git a/drivers/gpu/drm/panel/panel-sony-acx424akp.c b/drivers/gpu/drm/panel/panel-sony-acx424akp.c
+index 065efae213f5b..95659a4d15e97 100644
+--- a/drivers/gpu/drm/panel/panel-sony-acx424akp.c
++++ b/drivers/gpu/drm/panel/panel-sony-acx424akp.c
+@@ -449,8 +449,7 @@ static int acx424akp_probe(struct mipi_dsi_device *dsi)
+ 			MIPI_DSI_MODE_VIDEO_BURST;
+ 	else
+ 		dsi->mode_flags =
+-			MIPI_DSI_CLOCK_NON_CONTINUOUS |
+-			MIPI_DSI_MODE_EOT_PACKET;
++			MIPI_DSI_CLOCK_NON_CONTINUOUS;
+ 
+ 	acx->supply = devm_regulator_get(dev, "vddi");
+ 	if (IS_ERR(acx->supply))
+diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+index 7c1b3481b7850..21e552d1ac71a 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
+@@ -488,8 +488,14 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
+ 		}
+ 		bo->base.pages = pages;
+ 		bo->base.pages_use_count = 1;
+-	} else
++	} else {
+ 		pages = bo->base.pages;
++		if (pages[page_offset]) {
++			/* Pages are already mapped, bail out. */
++			mutex_unlock(&bo->base.pages_lock);
++			goto out;
++		}
++	}
+ 
+ 	mapping = bo->base.base.filp->f_mapping;
+ 	mapping_set_unevictable(mapping);
+@@ -522,6 +528,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
+ 
+ 	dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
+ 
++out:
+ 	panfrost_gem_mapping_put(bomapping);
+ 
+ 	return 0;
+@@ -593,6 +600,8 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
+ 		access_type = (fault_status >> 8) & 0x3;
+ 		source_id = (fault_status >> 16);
+ 
++		mmu_write(pfdev, MMU_INT_CLEAR, mask);
++
+ 		/* Page fault only */
+ 		ret = -1;
+ 		if ((status & mask) == BIT(i) && (exception_type & 0xF8) == 0xC0)
+@@ -616,8 +625,6 @@ static irqreturn_t panfrost_mmu_irq_handler_thread(int irq, void *data)
+ 				access_type, access_type_name(pfdev, fault_status),
+ 				source_id);
+ 
+-		mmu_write(pfdev, MMU_INT_CLEAR, mask);
+-
+ 		status &= ~mask;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c
+index 54e3c3a974407..741cc983daf1c 100644
+--- a/drivers/gpu/drm/qxl/qxl_cmd.c
++++ b/drivers/gpu/drm/qxl/qxl_cmd.c
+@@ -268,7 +268,7 @@ int qxl_alloc_bo_reserved(struct qxl_device *qdev,
+ 	int ret;
+ 
+ 	ret = qxl_bo_create(qdev, size, false /* not kernel - device */,
+-			    false, QXL_GEM_DOMAIN_VRAM, NULL, &bo);
++			    false, QXL_GEM_DOMAIN_VRAM, 0, NULL, &bo);
+ 	if (ret) {
+ 		DRM_ERROR("failed to allocate VRAM BO\n");
+ 		return ret;
+diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
+index 56e0c6c625e9a..3f432ec8e771c 100644
+--- a/drivers/gpu/drm/qxl/qxl_display.c
++++ b/drivers/gpu/drm/qxl/qxl_display.c
+@@ -798,8 +798,8 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,
+ 				qdev->dumb_shadow_bo = NULL;
+ 			}
+ 			qxl_bo_create(qdev, surf.height * surf.stride,
+-				      true, true, QXL_GEM_DOMAIN_SURFACE, &surf,
+-				      &qdev->dumb_shadow_bo);
++				      true, true, QXL_GEM_DOMAIN_SURFACE, 0,
++				      &surf, &qdev->dumb_shadow_bo);
+ 		}
+ 		if (user_bo->shadow != qdev->dumb_shadow_bo) {
+ 			if (user_bo->shadow) {
+diff --git a/drivers/gpu/drm/qxl/qxl_drv.c b/drivers/gpu/drm/qxl/qxl_drv.c
+index 1b09bbe980551..1864467f10639 100644
+--- a/drivers/gpu/drm/qxl/qxl_drv.c
++++ b/drivers/gpu/drm/qxl/qxl_drv.c
+@@ -144,8 +144,6 @@ static void qxl_drm_release(struct drm_device *dev)
+ 	 * reordering qxl_modeset_fini() + qxl_device_fini() calls is
+ 	 * non-trivial though.
+ 	 */
+-	if (!dev->registered)
+-		return;
+ 	qxl_modeset_fini(qdev);
+ 	qxl_device_fini(qdev);
+ }
+diff --git a/drivers/gpu/drm/qxl/qxl_gem.c b/drivers/gpu/drm/qxl/qxl_gem.c
+index 48e096285b4c6..a08da0bd9098b 100644
+--- a/drivers/gpu/drm/qxl/qxl_gem.c
++++ b/drivers/gpu/drm/qxl/qxl_gem.c
+@@ -55,7 +55,7 @@ int qxl_gem_object_create(struct qxl_device *qdev, int size,
+ 	/* At least align on page size */
+ 	if (alignment < PAGE_SIZE)
+ 		alignment = PAGE_SIZE;
+-	r = qxl_bo_create(qdev, size, kernel, false, initial_domain, surf, &qbo);
++	r = qxl_bo_create(qdev, size, kernel, false, initial_domain, 0, surf, &qbo);
+ 	if (r) {
+ 		if (r != -ERESTARTSYS)
+ 			DRM_ERROR(
+diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
+index ceebc5881f68d..a5806667697aa 100644
+--- a/drivers/gpu/drm/qxl/qxl_object.c
++++ b/drivers/gpu/drm/qxl/qxl_object.c
+@@ -103,8 +103,8 @@ static const struct drm_gem_object_funcs qxl_object_funcs = {
+ 	.print_info = drm_gem_ttm_print_info,
+ };
+ 
+-int qxl_bo_create(struct qxl_device *qdev,
+-		  unsigned long size, bool kernel, bool pinned, u32 domain,
++int qxl_bo_create(struct qxl_device *qdev, unsigned long size,
++		  bool kernel, bool pinned, u32 domain, u32 priority,
+ 		  struct qxl_surface *surf,
+ 		  struct qxl_bo **bo_ptr)
+ {
+@@ -137,6 +137,7 @@ int qxl_bo_create(struct qxl_device *qdev,
+ 
+ 	qxl_ttm_placement_from_domain(bo, domain);
+ 
++	bo->tbo.priority = priority;
+ 	r = ttm_bo_init_reserved(&qdev->mman.bdev, &bo->tbo, size, type,
+ 				 &bo->placement, 0, &ctx, size,
+ 				 NULL, NULL, &qxl_ttm_bo_destroy);
+diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
+index e60a8f88e2262..dc1659e717f14 100644
+--- a/drivers/gpu/drm/qxl/qxl_object.h
++++ b/drivers/gpu/drm/qxl/qxl_object.h
+@@ -61,6 +61,7 @@ static inline u64 qxl_bo_mmap_offset(struct qxl_bo *bo)
+ extern int qxl_bo_create(struct qxl_device *qdev,
+ 			 unsigned long size,
+ 			 bool kernel, bool pinned, u32 domain,
++			 u32 priority,
+ 			 struct qxl_surface *surf,
+ 			 struct qxl_bo **bo_ptr);
+ extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
+diff --git a/drivers/gpu/drm/qxl/qxl_release.c b/drivers/gpu/drm/qxl/qxl_release.c
+index b372455e27298..801ce77b1dac8 100644
+--- a/drivers/gpu/drm/qxl/qxl_release.c
++++ b/drivers/gpu/drm/qxl/qxl_release.c
+@@ -199,11 +199,12 @@ qxl_release_free(struct qxl_device *qdev,
+ }
+ 
+ static int qxl_release_bo_alloc(struct qxl_device *qdev,
+-				struct qxl_bo **bo)
++				struct qxl_bo **bo,
++				u32 priority)
+ {
+ 	/* pin releases bo's they are too messy to evict */
+ 	return qxl_bo_create(qdev, PAGE_SIZE, false, true,
+-			     QXL_GEM_DOMAIN_VRAM, NULL, bo);
++			     QXL_GEM_DOMAIN_VRAM, priority, NULL, bo);
+ }
+ 
+ int qxl_release_list_add(struct qxl_release *release, struct qxl_bo *bo)
+@@ -326,13 +327,18 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, unsigned long size,
+ 	int ret = 0;
+ 	union qxl_release_info *info;
+ 	int cur_idx;
++	u32 priority;
+ 
+-	if (type == QXL_RELEASE_DRAWABLE)
++	if (type == QXL_RELEASE_DRAWABLE) {
+ 		cur_idx = 0;
+-	else if (type == QXL_RELEASE_SURFACE_CMD)
++		priority = 0;
++	} else if (type == QXL_RELEASE_SURFACE_CMD) {
+ 		cur_idx = 1;
+-	else if (type == QXL_RELEASE_CURSOR_CMD)
++		priority = 1;
++	} else if (type == QXL_RELEASE_CURSOR_CMD) {
+ 		cur_idx = 2;
++		priority = 1;
++	}
+ 	else {
+ 		DRM_ERROR("got illegal type: %d\n", type);
+ 		return -EINVAL;
+@@ -352,7 +358,7 @@ int qxl_alloc_release_reserved(struct qxl_device *qdev, unsigned long size,
+ 		qdev->current_release_bo[cur_idx] = NULL;
+ 	}
+ 	if (!qdev->current_release_bo[cur_idx]) {
+-		ret = qxl_release_bo_alloc(qdev, &qdev->current_release_bo[cur_idx]);
++		ret = qxl_release_bo_alloc(qdev, &qdev->current_release_bo[cur_idx], priority);
+ 		if (ret) {
+ 			mutex_unlock(&qdev->release_mutex);
+ 			if (free_bo) {
+diff --git a/drivers/gpu/drm/radeon/radeon_dp_mst.c b/drivers/gpu/drm/radeon/radeon_dp_mst.c
+index 2c32186c4acd9..4e4c937c36c62 100644
+--- a/drivers/gpu/drm/radeon/radeon_dp_mst.c
++++ b/drivers/gpu/drm/radeon/radeon_dp_mst.c
+@@ -242,6 +242,9 @@ radeon_dp_mst_detect(struct drm_connector *connector,
+ 		to_radeon_connector(connector);
+ 	struct radeon_connector *master = radeon_connector->mst_port;
+ 
++	if (drm_connector_is_unregistered(connector))
++		return connector_status_disconnected;
++
+ 	return drm_dp_mst_detect_port(connector, ctx, &master->mst_mgr,
+ 				      radeon_connector->port);
+ }
+diff --git a/drivers/gpu/drm/radeon/radeon_kms.c b/drivers/gpu/drm/radeon/radeon_kms.c
+index 2479d6ab7a368..58876bb4ef2a1 100644
+--- a/drivers/gpu/drm/radeon/radeon_kms.c
++++ b/drivers/gpu/drm/radeon/radeon_kms.c
+@@ -518,6 +518,7 @@ int radeon_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ 			*value = rdev->config.si.backend_enable_mask;
+ 		} else {
+ 			DRM_DEBUG_KMS("BACKEND_ENABLED_MASK is si+ only!\n");
++			return -EINVAL;
+ 		}
+ 		break;
+ 	case RADEON_INFO_MAX_SCLK:
+diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
+index 7812094f93d6b..6f3b523e16e8c 100644
+--- a/drivers/gpu/drm/stm/ltdc.c
++++ b/drivers/gpu/drm/stm/ltdc.c
+@@ -525,13 +525,42 @@ static void ltdc_crtc_mode_set_nofb(struct drm_crtc *crtc)
+ {
+ 	struct ltdc_device *ldev = crtc_to_ltdc(crtc);
+ 	struct drm_device *ddev = crtc->dev;
++	struct drm_connector_list_iter iter;
++	struct drm_connector *connector = NULL;
++	struct drm_encoder *encoder = NULL;
++	struct drm_bridge *bridge = NULL;
+ 	struct drm_display_mode *mode = &crtc->state->adjusted_mode;
+ 	struct videomode vm;
+ 	u32 hsync, vsync, accum_hbp, accum_vbp, accum_act_w, accum_act_h;
+ 	u32 total_width, total_height;
++	u32 bus_flags = 0;
+ 	u32 val;
+ 	int ret;
+ 
++	/* get encoder from crtc */
++	drm_for_each_encoder(encoder, ddev)
++		if (encoder->crtc == crtc)
++			break;
++
++	if (encoder) {
++		/* get bridge from encoder */
++		list_for_each_entry(bridge, &encoder->bridge_chain, chain_node)
++			if (bridge->encoder == encoder)
++				break;
++
++		/* Get the connector from encoder */
++		drm_connector_list_iter_begin(ddev, &iter);
++		drm_for_each_connector_iter(connector, &iter)
++			if (connector->encoder == encoder)
++				break;
++		drm_connector_list_iter_end(&iter);
++	}
++
++	if (bridge && bridge->timings)
++		bus_flags = bridge->timings->input_bus_flags;
++	else if (connector)
++		bus_flags = connector->display_info.bus_flags;
++
+ 	if (!pm_runtime_active(ddev->dev)) {
+ 		ret = pm_runtime_get_sync(ddev->dev);
+ 		if (ret) {
+@@ -567,10 +596,10 @@ static void ltdc_crtc_mode_set_nofb(struct drm_crtc *crtc)
+ 	if (vm.flags & DISPLAY_FLAGS_VSYNC_HIGH)
+ 		val |= GCR_VSPOL;
+ 
+-	if (vm.flags & DISPLAY_FLAGS_DE_LOW)
++	if (bus_flags & DRM_BUS_FLAG_DE_LOW)
+ 		val |= GCR_DEPOL;
+ 
+-	if (vm.flags & DISPLAY_FLAGS_PIXDATA_NEGEDGE)
++	if (bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE)
+ 		val |= GCR_PCPOL;
+ 
+ 	reg_update_bits(ldev->regs, LTDC_GCR,
+diff --git a/drivers/gpu/drm/tilcdc/tilcdc_crtc.c b/drivers/gpu/drm/tilcdc/tilcdc_crtc.c
+index 30213708fc990..d99afd19ca083 100644
+--- a/drivers/gpu/drm/tilcdc/tilcdc_crtc.c
++++ b/drivers/gpu/drm/tilcdc/tilcdc_crtc.c
+@@ -515,6 +515,15 @@ static void tilcdc_crtc_off(struct drm_crtc *crtc, bool shutdown)
+ 
+ 	drm_crtc_vblank_off(crtc);
+ 
++	spin_lock_irq(&crtc->dev->event_lock);
++
++	if (crtc->state->event) {
++		drm_crtc_send_vblank_event(crtc, crtc->state->event);
++		crtc->state->event = NULL;
++	}
++
++	spin_unlock_irq(&crtc->dev->event_lock);
++
+ 	tilcdc_crtc_disable_irqs(dev);
+ 
+ 	pm_runtime_put_sync(dev->dev);
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_dp.c b/drivers/gpu/drm/xlnx/zynqmp_dp.c
+index 99158ee67d02b..59d1fb017da01 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_dp.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_dp.c
+@@ -866,7 +866,7 @@ static int zynqmp_dp_train(struct zynqmp_dp *dp)
+ 		return ret;
+ 
+ 	zynqmp_dp_write(dp, ZYNQMP_DP_SCRAMBLING_DISABLE, 1);
+-	memset(dp->train_set, 0, 4);
++	memset(dp->train_set, 0, sizeof(dp->train_set));
+ 	ret = zynqmp_dp_link_train_cr(dp);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 67fd8a2f5aba3..ba338973e968d 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -946,6 +946,7 @@
+ #define USB_DEVICE_ID_ORTEK_IHOME_IMAC_A210S	0x8003
+ 
+ #define USB_VENDOR_ID_PLANTRONICS	0x047f
++#define USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3220_SERIES	0xc056
+ 
+ #define USB_VENDOR_ID_PANASONIC		0x04da
+ #define USB_DEVICE_ID_PANABOARD_UBT780	0x1044
+diff --git a/drivers/hid/hid-lenovo.c b/drivers/hid/hid-lenovo.c
+index c6c8e20f3e8d5..0ff03fed97709 100644
+--- a/drivers/hid/hid-lenovo.c
++++ b/drivers/hid/hid-lenovo.c
+@@ -33,6 +33,9 @@
+ 
+ #include "hid-ids.h"
+ 
++/* Userspace expects F20 for mic-mute KEY_MICMUTE does not work */
++#define LENOVO_KEY_MICMUTE KEY_F20
++
+ struct lenovo_drvdata {
+ 	u8 led_report[3]; /* Must be first for proper alignment */
+ 	int led_state;
+@@ -62,8 +65,8 @@ struct lenovo_drvdata {
+ #define TP10UBKBD_LED_OFF		1
+ #define TP10UBKBD_LED_ON		2
+ 
+-static void lenovo_led_set_tp10ubkbd(struct hid_device *hdev, u8 led_code,
+-				     enum led_brightness value)
++static int lenovo_led_set_tp10ubkbd(struct hid_device *hdev, u8 led_code,
++				    enum led_brightness value)
+ {
+ 	struct lenovo_drvdata *data = hid_get_drvdata(hdev);
+ 	int ret;
+@@ -75,10 +78,18 @@ static void lenovo_led_set_tp10ubkbd(struct hid_device *hdev, u8 led_code,
+ 	data->led_report[2] = value ? TP10UBKBD_LED_ON : TP10UBKBD_LED_OFF;
+ 	ret = hid_hw_raw_request(hdev, data->led_report[0], data->led_report, 3,
+ 				 HID_OUTPUT_REPORT, HID_REQ_SET_REPORT);
+-	if (ret)
+-		hid_err(hdev, "Set LED output report error: %d\n", ret);
++	if (ret != 3) {
++		if (ret != -ENODEV)
++			hid_err(hdev, "Set LED output report error: %d\n", ret);
++
++		ret = ret < 0 ? ret : -EIO;
++	} else {
++		ret = 0;
++	}
+ 
+ 	mutex_unlock(&data->led_report_mutex);
++
++	return ret;
+ }
+ 
+ static void lenovo_tp10ubkbd_sync_fn_lock(struct work_struct *work)
+@@ -126,7 +137,7 @@ static int lenovo_input_mapping_tpkbd(struct hid_device *hdev,
+ 	if (usage->hid == (HID_UP_BUTTON | 0x0010)) {
+ 		/* This sub-device contains trackpoint, mark it */
+ 		hid_set_drvdata(hdev, (void *)1);
+-		map_key_clear(KEY_MICMUTE);
++		map_key_clear(LENOVO_KEY_MICMUTE);
+ 		return 1;
+ 	}
+ 	return 0;
+@@ -141,7 +152,7 @@ static int lenovo_input_mapping_cptkbd(struct hid_device *hdev,
+ 	    (usage->hid & HID_USAGE_PAGE) == HID_UP_LNVENDOR) {
+ 		switch (usage->hid & HID_USAGE) {
+ 		case 0x00f1: /* Fn-F4: Mic mute */
+-			map_key_clear(KEY_MICMUTE);
++			map_key_clear(LENOVO_KEY_MICMUTE);
+ 			return 1;
+ 		case 0x00f2: /* Fn-F5: Brightness down */
+ 			map_key_clear(KEY_BRIGHTNESSDOWN);
+@@ -231,7 +242,7 @@ static int lenovo_input_mapping_tp10_ultrabook_kbd(struct hid_device *hdev,
+ 			map_key_clear(KEY_FN_ESC);
+ 			return 1;
+ 		case 9: /* Fn-F4: Mic mute */
+-			map_key_clear(KEY_MICMUTE);
++			map_key_clear(LENOVO_KEY_MICMUTE);
+ 			return 1;
+ 		case 10: /* Fn-F7: Control panel */
+ 			map_key_clear(KEY_CONFIG);
+@@ -349,7 +360,7 @@ static ssize_t attr_fn_lock_store(struct device *dev,
+ {
+ 	struct hid_device *hdev = to_hid_device(dev);
+ 	struct lenovo_drvdata *data = hid_get_drvdata(hdev);
+-	int value;
++	int value, ret;
+ 
+ 	if (kstrtoint(buf, 10, &value))
+ 		return -EINVAL;
+@@ -364,7 +375,9 @@ static ssize_t attr_fn_lock_store(struct device *dev,
+ 		lenovo_features_set_cptkbd(hdev);
+ 		break;
+ 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
+-		lenovo_led_set_tp10ubkbd(hdev, TP10UBKBD_FN_LOCK_LED, value);
++		ret = lenovo_led_set_tp10ubkbd(hdev, TP10UBKBD_FN_LOCK_LED, value);
++		if (ret)
++			return ret;
+ 		break;
+ 	}
+ 
+@@ -498,6 +511,9 @@ static int lenovo_event_cptkbd(struct hid_device *hdev,
+ static int lenovo_event(struct hid_device *hdev, struct hid_field *field,
+ 		struct hid_usage *usage, __s32 value)
+ {
++	if (!hid_get_drvdata(hdev))
++		return 0;
++
+ 	switch (hdev->product) {
+ 	case USB_DEVICE_ID_LENOVO_CUSBKBD:
+ 	case USB_DEVICE_ID_LENOVO_CBTKBD:
+@@ -777,7 +793,7 @@ static enum led_brightness lenovo_led_brightness_get(
+ 				: LED_OFF;
+ }
+ 
+-static void lenovo_led_brightness_set(struct led_classdev *led_cdev,
++static int lenovo_led_brightness_set(struct led_classdev *led_cdev,
+ 			enum led_brightness value)
+ {
+ 	struct device *dev = led_cdev->dev->parent;
+@@ -785,6 +801,7 @@ static void lenovo_led_brightness_set(struct led_classdev *led_cdev,
+ 	struct lenovo_drvdata *data_pointer = hid_get_drvdata(hdev);
+ 	u8 tp10ubkbd_led[] = { TP10UBKBD_MUTE_LED, TP10UBKBD_MICMUTE_LED };
+ 	int led_nr = 0;
++	int ret = 0;
+ 
+ 	if (led_cdev == &data_pointer->led_micmute)
+ 		led_nr = 1;
+@@ -799,9 +816,11 @@ static void lenovo_led_brightness_set(struct led_classdev *led_cdev,
+ 		lenovo_led_set_tpkbd(hdev);
+ 		break;
+ 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
+-		lenovo_led_set_tp10ubkbd(hdev, tp10ubkbd_led[led_nr], value);
++		ret = lenovo_led_set_tp10ubkbd(hdev, tp10ubkbd_led[led_nr], value);
+ 		break;
+ 	}
++
++	return ret;
+ }
+ 
+ static int lenovo_register_leds(struct hid_device *hdev)
+@@ -822,7 +841,8 @@ static int lenovo_register_leds(struct hid_device *hdev)
+ 
+ 	data->led_mute.name = name_mute;
+ 	data->led_mute.brightness_get = lenovo_led_brightness_get;
+-	data->led_mute.brightness_set = lenovo_led_brightness_set;
++	data->led_mute.brightness_set_blocking = lenovo_led_brightness_set;
++	data->led_mute.flags = LED_HW_PLUGGABLE;
+ 	data->led_mute.dev = &hdev->dev;
+ 	ret = led_classdev_register(&hdev->dev, &data->led_mute);
+ 	if (ret < 0)
+@@ -830,7 +850,8 @@ static int lenovo_register_leds(struct hid_device *hdev)
+ 
+ 	data->led_micmute.name = name_micm;
+ 	data->led_micmute.brightness_get = lenovo_led_brightness_get;
+-	data->led_micmute.brightness_set = lenovo_led_brightness_set;
++	data->led_micmute.brightness_set_blocking = lenovo_led_brightness_set;
++	data->led_micmute.flags = LED_HW_PLUGGABLE;
+ 	data->led_micmute.dev = &hdev->dev;
+ 	ret = led_classdev_register(&hdev->dev, &data->led_micmute);
+ 	if (ret < 0) {
+diff --git a/drivers/hid/hid-plantronics.c b/drivers/hid/hid-plantronics.c
+index 85b685efc12f3..e81b7cec2d124 100644
+--- a/drivers/hid/hid-plantronics.c
++++ b/drivers/hid/hid-plantronics.c
+@@ -13,6 +13,7 @@
+ 
+ #include <linux/hid.h>
+ #include <linux/module.h>
++#include <linux/jiffies.h>
+ 
+ #define PLT_HID_1_0_PAGE	0xffa00000
+ #define PLT_HID_2_0_PAGE	0xffa20000
+@@ -36,6 +37,16 @@
+ #define PLT_ALLOW_CONSUMER (field->application == HID_CP_CONSUMERCONTROL && \
+ 			    (usage->hid & HID_USAGE_PAGE) == HID_UP_CONSUMER)
+ 
++#define PLT_QUIRK_DOUBLE_VOLUME_KEYS BIT(0)
++
++#define PLT_DOUBLE_KEY_TIMEOUT 5 /* ms */
++
++struct plt_drv_data {
++	unsigned long device_type;
++	unsigned long last_volume_key_ts;
++	u32 quirks;
++};
++
+ static int plantronics_input_mapping(struct hid_device *hdev,
+ 				     struct hid_input *hi,
+ 				     struct hid_field *field,
+@@ -43,7 +54,8 @@ static int plantronics_input_mapping(struct hid_device *hdev,
+ 				     unsigned long **bit, int *max)
+ {
+ 	unsigned short mapped_key;
+-	unsigned long plt_type = (unsigned long)hid_get_drvdata(hdev);
++	struct plt_drv_data *drv_data = hid_get_drvdata(hdev);
++	unsigned long plt_type = drv_data->device_type;
+ 
+ 	/* special case for PTT products */
+ 	if (field->application == HID_GD_JOYSTICK)
+@@ -105,6 +117,30 @@ mapped:
+ 	return 1;
+ }
+ 
++static int plantronics_event(struct hid_device *hdev, struct hid_field *field,
++			     struct hid_usage *usage, __s32 value)
++{
++	struct plt_drv_data *drv_data = hid_get_drvdata(hdev);
++
++	if (drv_data->quirks & PLT_QUIRK_DOUBLE_VOLUME_KEYS) {
++		unsigned long prev_ts, cur_ts;
++
++		/* Usages are filtered in plantronics_usages. */
++
++		if (!value) /* Handle key presses only. */
++			return 0;
++
++		prev_ts = drv_data->last_volume_key_ts;
++		cur_ts = jiffies;
++		if (jiffies_to_msecs(cur_ts - prev_ts) <= PLT_DOUBLE_KEY_TIMEOUT)
++			return 1; /* Ignore the repeated key. */
++
++		drv_data->last_volume_key_ts = cur_ts;
++	}
++
++	return 0;
++}
++
+ static unsigned long plantronics_device_type(struct hid_device *hdev)
+ {
+ 	unsigned i, col_page;
+@@ -133,15 +169,24 @@ exit:
+ static int plantronics_probe(struct hid_device *hdev,
+ 			     const struct hid_device_id *id)
+ {
++	struct plt_drv_data *drv_data;
+ 	int ret;
+ 
++	drv_data = devm_kzalloc(&hdev->dev, sizeof(*drv_data), GFP_KERNEL);
++	if (!drv_data)
++		return -ENOMEM;
++
+ 	ret = hid_parse(hdev);
+ 	if (ret) {
+ 		hid_err(hdev, "parse failed\n");
+ 		goto err;
+ 	}
+ 
+-	hid_set_drvdata(hdev, (void *)plantronics_device_type(hdev));
++	drv_data->device_type = plantronics_device_type(hdev);
++	drv_data->quirks = id->driver_data;
++	drv_data->last_volume_key_ts = jiffies - msecs_to_jiffies(PLT_DOUBLE_KEY_TIMEOUT);
++
++	hid_set_drvdata(hdev, drv_data);
+ 
+ 	ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT |
+ 		HID_CONNECT_HIDINPUT_FORCE | HID_CONNECT_HIDDEV_FORCE);
+@@ -153,15 +198,26 @@ err:
+ }
+ 
+ static const struct hid_device_id plantronics_devices[] = {
++	{ HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
++					 USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3220_SERIES),
++		.driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS, HID_ANY_ID) },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(hid, plantronics_devices);
+ 
++static const struct hid_usage_id plantronics_usages[] = {
++	{ HID_CP_VOLUMEUP, EV_KEY, HID_ANY_ID },
++	{ HID_CP_VOLUMEDOWN, EV_KEY, HID_ANY_ID },
++	{ HID_TERMINATOR, HID_TERMINATOR, HID_TERMINATOR }
++};
++
+ static struct hid_driver plantronics_driver = {
+ 	.name = "plantronics",
+ 	.id_table = plantronics_devices,
++	.usage_table = plantronics_usages,
+ 	.input_mapping = plantronics_input_mapping,
++	.event = plantronics_event,
+ 	.probe = plantronics_probe,
+ };
+ module_hid_driver(plantronics_driver);
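
The hid-plantronics change above filters a duplicated volume key: the quirk remembers the jiffies timestamp of the last accepted press and ignores a second press that lands within PLT_DOUBLE_KEY_TIMEOUT. Below is a minimal userspace C sketch of the same timestamp-debounce idea, assuming a CLOCK_MONOTONIC millisecond clock in place of jiffies; the function names and the toy main() are invented for illustration only.

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define DOUBLE_KEY_TIMEOUT_MS 5	/* mirrors PLT_DOUBLE_KEY_TIMEOUT */

static long long now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* Returns true when the event should be dropped as a duplicate press. */
static bool debounce_key(long long *last_ts, int value)
{
	long long cur = now_ms();

	if (!value)		/* handle key presses only, pass releases */
		return false;
	if (cur - *last_ts <= DOUBLE_KEY_TIMEOUT_MS)
		return true;	/* repeated key within the window */

	*last_ts = cur;		/* only accepted presses update the stamp */
	return false;
}

int main(void)
{
	long long last = now_ms() - 1000;	/* pretend last key was 1 s ago */

	printf("first press ignored: %d\n", debounce_key(&last, 1));		/* 0 */
	printf("immediate repeat ignored: %d\n", debounce_key(&last, 1));	/* 1 */
	return 0;
}

As in the driver, the timestamp is refreshed only when a press is accepted, so a burst of duplicates keeps being measured against the first real key.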
+diff --git a/drivers/hsi/hsi_core.c b/drivers/hsi/hsi_core.c
+index c3fb5beb846e2..ec90713564e32 100644
+--- a/drivers/hsi/hsi_core.c
++++ b/drivers/hsi/hsi_core.c
+@@ -210,8 +210,6 @@ static void hsi_add_client_from_dt(struct hsi_port *port,
+ 	if (err)
+ 		goto err;
+ 
+-	dev_set_name(&cl->device, "%s", name);
+-
+ 	err = hsi_of_property_parse_mode(client, "hsi-mode", &mode);
+ 	if (err) {
+ 		err = hsi_of_property_parse_mode(client, "hsi-rx-mode",
+@@ -293,6 +291,7 @@ static void hsi_add_client_from_dt(struct hsi_port *port,
+ 	cl->device.release = hsi_client_release;
+ 	cl->device.of_node = client;
+ 
++	dev_set_name(&cl->device, "%s", name);
+ 	if (device_register(&cl->device) < 0) {
+ 		pr_err("hsi: failed to register client: %s\n", name);
+ 		put_device(&cl->device);
+diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
+index 0bd202de79600..945e41f5e3a82 100644
+--- a/drivers/hv/channel.c
++++ b/drivers/hv/channel.c
+@@ -653,7 +653,7 @@ static int __vmbus_open(struct vmbus_channel *newchannel,
+ 
+ 	if (newchannel->rescind) {
+ 		err = -ENODEV;
+-		goto error_free_info;
++		goto error_clean_msglist;
+ 	}
+ 
+ 	err = vmbus_post_msg(open_msg,
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index f0ed730e2e4e4..ecebf1235fd50 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -756,6 +756,12 @@ static void init_vp_index(struct vmbus_channel *channel)
+ 	free_cpumask_var(available_mask);
+ }
+ 
++#define UNLOAD_DELAY_UNIT_MS	10		/* 10 milliseconds */
++#define UNLOAD_WAIT_MS		(100*1000)	/* 100 seconds */
++#define UNLOAD_WAIT_LOOPS	(UNLOAD_WAIT_MS/UNLOAD_DELAY_UNIT_MS)
++#define UNLOAD_MSG_MS		(5*1000)	/* Every 5 seconds */
++#define UNLOAD_MSG_LOOPS	(UNLOAD_MSG_MS/UNLOAD_DELAY_UNIT_MS)
++
+ static void vmbus_wait_for_unload(void)
+ {
+ 	int cpu;
+@@ -773,12 +779,17 @@ static void vmbus_wait_for_unload(void)
+ 	 * vmbus_connection.unload_event. If not, the last thing we can do is
+ 	 * read message pages for all CPUs directly.
+ 	 *
+-	 * Wait no more than 10 seconds so that the panic path can't get
+-	 * hung forever in case the response message isn't seen.
++	 * Wait up to 100 seconds since an Azure host must writeback any dirty
++	 * data in its disk cache before the VMbus UNLOAD request will
++	 * complete. This flushing has been empirically observed to take up
++	 * to 50 seconds in cases with a lot of dirty data, so allow additional
++	 * leeway and for inaccuracies in mdelay(). But eventually time out so
++	 * that the panic path can't get hung forever in case the response
++	 * message isn't seen.
+ 	 */
+-	for (i = 0; i < 1000; i++) {
++	for (i = 1; i <= UNLOAD_WAIT_LOOPS; i++) {
+ 		if (completion_done(&vmbus_connection.unload_event))
+-			break;
++			goto completed;
+ 
+ 		for_each_online_cpu(cpu) {
+ 			struct hv_per_cpu_context *hv_cpu
+@@ -801,9 +812,18 @@ static void vmbus_wait_for_unload(void)
+ 			vmbus_signal_eom(msg, message_type);
+ 		}
+ 
+-		mdelay(10);
++		/*
++		 * Give a notice periodically so someone watching the
++		 * serial output won't think it is completely hung.
++		 */
++		if (!(i % UNLOAD_MSG_LOOPS))
++			pr_notice("Waiting for VMBus UNLOAD to complete\n");
++
++		mdelay(UNLOAD_DELAY_UNIT_MS);
+ 	}
++	pr_err("Continuing even though VMBus UNLOAD did not complete\n");
+ 
++completed:
+ 	/*
+ 	 * We're crashing and already got the UNLOAD_RESPONSE, cleanup all
+ 	 * maybe-pending messages on all CPUs to be able to receive new
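
The vmbus_wait_for_unload() hunk above lengthens the bounded wait to 100 seconds, names the loop constants, and prints a notice every 5 seconds so a serial-console watcher can see the panic path is still alive. A rough userspace C sketch of that bounded-poll-with-progress shape follows; the work_done() stub and its timing are invented, and the constants only mirror the patch loosely.

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define DELAY_UNIT_MS	10			/* one polling step */
#define WAIT_MS		(100 * 1000)		/* give up after 100 seconds */
#define WAIT_LOOPS	(WAIT_MS / DELAY_UNIT_MS)
#define MSG_MS		(5 * 1000)		/* progress notice every 5 seconds */
#define MSG_LOOPS	(MSG_MS / DELAY_UNIT_MS)

/* Stand-in for completion_done(); this toy condition "completes" after ~6 s. */
static bool work_done(int step)
{
	return step > 600;
}

static void bounded_wait(void)
{
	int i;

	for (i = 1; i <= WAIT_LOOPS; i++) {
		if (work_done(i))
			goto completed;

		/* Periodic notice so the wait does not look hung. */
		if (!(i % MSG_LOOPS))
			printf("still waiting...\n");

		usleep(DELAY_UNIT_MS * 1000);
	}
	printf("timed out, continuing anyway\n");

completed:
	printf("proceeding with cleanup\n");
}

int main(void)
{
	bounded_wait();
	return 0;
}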
+diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
+index 35833d4d1a1dc..ecd82ebfd5bc4 100644
+--- a/drivers/hv/ring_buffer.c
++++ b/drivers/hv/ring_buffer.c
+@@ -313,7 +313,6 @@ int hv_ringbuffer_write(struct vmbus_channel *channel,
+ 		rqst_id = vmbus_next_request_id(&channel->requestor, requestid);
+ 		if (rqst_id == VMBUS_RQST_ERROR) {
+ 			spin_unlock_irqrestore(&outring_info->ring_lock, flags);
+-			pr_err("No request id available\n");
+ 			return -EAGAIN;
+ 		}
+ 	}
+diff --git a/drivers/hwmon/pmbus/pxe1610.c b/drivers/hwmon/pmbus/pxe1610.c
+index da27ce34ee3fd..eb4a06003b7f9 100644
+--- a/drivers/hwmon/pmbus/pxe1610.c
++++ b/drivers/hwmon/pmbus/pxe1610.c
+@@ -41,6 +41,15 @@ static int pxe1610_identify(struct i2c_client *client,
+ 				info->vrm_version[i] = vr13;
+ 				break;
+ 			default:
++				/*
++				 * If prior pages are available limit operation
++				 * to them
++				 */
++				if (i != 0) {
++					info->pages = i;
++					return 0;
++				}
++
+ 				return -ENODEV;
+ 			}
+ 		}
+diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
+index 0f603b4094f22..a706ba11b93e6 100644
+--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
++++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
+@@ -52,7 +52,7 @@ static ssize_t format_attr_contextid_show(struct device *dev,
+ {
+ 	int pid_fmt = ETM_OPT_CTXTID;
+ 
+-#if defined(CONFIG_CORESIGHT_SOURCE_ETM4X)
++#if IS_ENABLED(CONFIG_CORESIGHT_SOURCE_ETM4X)
+ 	pid_fmt = is_kernel_in_hyp_mode() ? ETM_OPT_CTXTID2 : ETM_OPT_CTXTID;
+ #endif
+ 	return sprintf(page, "config:%d\n", pid_fmt);
+diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c
+index e4b7f2a951ad5..c1bbc4caeb5c9 100644
+--- a/drivers/i2c/busses/i2c-cadence.c
++++ b/drivers/i2c/busses/i2c-cadence.c
+@@ -789,7 +789,7 @@ static int cdns_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ 	bool change_role = false;
+ #endif
+ 
+-	ret = pm_runtime_get_sync(id->dev);
++	ret = pm_runtime_resume_and_get(id->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -911,7 +911,7 @@ static int cdns_reg_slave(struct i2c_client *slave)
+ 	if (slave->flags & I2C_CLIENT_TEN)
+ 		return -EAFNOSUPPORT;
+ 
+-	ret = pm_runtime_get_sync(id->dev);
++	ret = pm_runtime_resume_and_get(id->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -1200,7 +1200,10 @@ static int cdns_i2c_probe(struct platform_device *pdev)
+ 	if (IS_ERR(id->membase))
+ 		return PTR_ERR(id->membase);
+ 
+-	id->irq = platform_get_irq(pdev, 0);
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		return ret;
++	id->irq = ret;
+ 
+ 	id->adap.owner = THIS_MODULE;
+ 	id->adap.dev.of_node = pdev->dev.of_node;
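
The i2c-cadence hunk above, like many of the I2C hunks that follow, swaps pm_runtime_get_sync() for pm_runtime_resume_and_get(). The practical difference is the failure path: pm_runtime_get_sync() raises the device usage count even when the resume fails, so a caller that simply returns on error leaks a reference, whereas pm_runtime_resume_and_get() drops the count again before returning the error. The toy C mock below only demonstrates that refcount difference with invented names; it is not the kernel implementation.

#include <stdio.h>

/* Toy device model: a usage counter plus a flag that makes resume fail. */
struct toy_dev {
	int usage_count;
	int resume_fails;
};

/* Mimics pm_runtime_get_sync(): always takes a reference. */
static int toy_get_sync(struct toy_dev *d)
{
	d->usage_count++;
	return d->resume_fails ? -5 /* pretend -EIO */ : 0;
}

/* Mimics pm_runtime_resume_and_get(): drops the reference on failure. */
static int toy_resume_and_get(struct toy_dev *d)
{
	int ret = toy_get_sync(d);

	if (ret < 0) {
		d->usage_count--;	/* like a put on the error path */
		return ret;
	}
	return 0;
}

int main(void)
{
	struct toy_dev d = { .usage_count = 0, .resume_fails = 1 };

	/* Old pattern: "if (ret < 0) return ret;" leaves the count at 1. */
	toy_get_sync(&d);
	printf("after get_sync failure: usage_count=%d (leaked)\n", d.usage_count);

	d.usage_count = 0;
	/* New pattern: the failed get is undone internally, count stays 0. */
	toy_resume_and_get(&d);
	printf("after resume_and_get failure: usage_count=%d\n", d.usage_count);
	return 0;
}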
+diff --git a/drivers/i2c/busses/i2c-emev2.c b/drivers/i2c/busses/i2c-emev2.c
+index a08554c1a5704..bdff0e6345d9a 100644
+--- a/drivers/i2c/busses/i2c-emev2.c
++++ b/drivers/i2c/busses/i2c-emev2.c
+@@ -395,7 +395,10 @@ static int em_i2c_probe(struct platform_device *pdev)
+ 
+ 	em_i2c_reset(&priv->adap);
+ 
+-	priv->irq = platform_get_irq(pdev, 0);
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		goto err_clk;
++	priv->irq = ret;
+ 	ret = devm_request_irq(&pdev->dev, priv->irq, em_i2c_irq_handler, 0,
+ 				"em_i2c", priv);
+ 	if (ret)
+diff --git a/drivers/i2c/busses/i2c-img-scb.c b/drivers/i2c/busses/i2c-img-scb.c
+index 98a89301ed2a6..8e987945ed450 100644
+--- a/drivers/i2c/busses/i2c-img-scb.c
++++ b/drivers/i2c/busses/i2c-img-scb.c
+@@ -1057,7 +1057,7 @@ static int img_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ 			atomic = true;
+ 	}
+ 
+-	ret = pm_runtime_get_sync(adap->dev.parent);
++	ret = pm_runtime_resume_and_get(adap->dev.parent);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -1158,7 +1158,7 @@ static int img_i2c_init(struct img_i2c *i2c)
+ 	u32 rev;
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(i2c->adap.dev.parent);
++	ret = pm_runtime_resume_and_get(i2c->adap.dev.parent);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/i2c/busses/i2c-imx-lpi2c.c b/drivers/i2c/busses/i2c-imx-lpi2c.c
+index 9db6ccded5e9e..8b9ba055c4186 100644
+--- a/drivers/i2c/busses/i2c-imx-lpi2c.c
++++ b/drivers/i2c/busses/i2c-imx-lpi2c.c
+@@ -259,7 +259,7 @@ static int lpi2c_imx_master_enable(struct lpi2c_imx_struct *lpi2c_imx)
+ 	unsigned int temp;
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(lpi2c_imx->adapter.dev.parent);
++	ret = pm_runtime_resume_and_get(lpi2c_imx->adapter.dev.parent);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index b80fdc1f0092f..dc9c4b4cc25a1 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -1253,7 +1253,7 @@ static int i2c_imx_xfer(struct i2c_adapter *adapter,
+ 	struct imx_i2c_struct *i2c_imx = i2c_get_adapdata(adapter);
+ 	int result;
+ 
+-	result = pm_runtime_get_sync(i2c_imx->adapter.dev.parent);
++	result = pm_runtime_resume_and_get(i2c_imx->adapter.dev.parent);
+ 	if (result < 0)
+ 		return result;
+ 
+@@ -1496,7 +1496,7 @@ static int i2c_imx_remove(struct platform_device *pdev)
+ 	struct imx_i2c_struct *i2c_imx = platform_get_drvdata(pdev);
+ 	int irq, ret;
+ 
+-	ret = pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/i2c/busses/i2c-jz4780.c b/drivers/i2c/busses/i2c-jz4780.c
+index 55177eb21d7b1..baa7319eee539 100644
+--- a/drivers/i2c/busses/i2c-jz4780.c
++++ b/drivers/i2c/busses/i2c-jz4780.c
+@@ -825,7 +825,10 @@ static int jz4780_i2c_probe(struct platform_device *pdev)
+ 
+ 	jz4780_i2c_writew(i2c, JZ4780_I2C_INTM, 0x0);
+ 
+-	i2c->irq = platform_get_irq(pdev, 0);
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		goto err;
++	i2c->irq = ret;
+ 	ret = devm_request_irq(&pdev->dev, i2c->irq, jz4780_i2c_irq, 0,
+ 			       dev_name(&pdev->dev), i2c);
+ 	if (ret)
+diff --git a/drivers/i2c/busses/i2c-mlxbf.c b/drivers/i2c/busses/i2c-mlxbf.c
+index 2fb0532d8a161..ab261d762dea3 100644
+--- a/drivers/i2c/busses/i2c-mlxbf.c
++++ b/drivers/i2c/busses/i2c-mlxbf.c
+@@ -2376,6 +2376,8 @@ static int mlxbf_i2c_probe(struct platform_device *pdev)
+ 	mlxbf_i2c_init_slave(pdev, priv);
+ 
+ 	irq = platform_get_irq(pdev, 0);
++	if (irq < 0)
++		return irq;
+ 	ret = devm_request_irq(dev, irq, mlxbf_smbus_irq,
+ 			       IRQF_ONESHOT | IRQF_SHARED | IRQF_PROBE_SHARED,
+ 			       dev_name(dev), priv);
+diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
+index 2ffd2f354d0ae..86f70c7513192 100644
+--- a/drivers/i2c/busses/i2c-mt65xx.c
++++ b/drivers/i2c/busses/i2c-mt65xx.c
+@@ -479,7 +479,7 @@ static void mtk_i2c_init_hw(struct mtk_i2c *i2c)
+ {
+ 	u16 control_reg;
+ 
+-	if (i2c->dev_comp->dma_sync) {
++	if (i2c->dev_comp->apdma_sync) {
+ 		writel(I2C_DMA_WARM_RST, i2c->pdmabase + OFFSET_RST);
+ 		udelay(10);
+ 		writel(I2C_DMA_CLR_FLAG, i2c->pdmabase + OFFSET_RST);
+diff --git a/drivers/i2c/busses/i2c-omap.c b/drivers/i2c/busses/i2c-omap.c
+index 12ac4212aded8..d4f6c6d60683a 100644
+--- a/drivers/i2c/busses/i2c-omap.c
++++ b/drivers/i2c/busses/i2c-omap.c
+@@ -1404,9 +1404,9 @@ omap_i2c_probe(struct platform_device *pdev)
+ 	pm_runtime_set_autosuspend_delay(omap->dev, OMAP_I2C_PM_TIMEOUT);
+ 	pm_runtime_use_autosuspend(omap->dev);
+ 
+-	r = pm_runtime_get_sync(omap->dev);
++	r = pm_runtime_resume_and_get(omap->dev);
+ 	if (r < 0)
+-		goto err_free_mem;
++		goto err_disable_pm;
+ 
+ 	/*
+ 	 * Read the Rev hi bit-[15:14] ie scheme this is 1 indicates ver2.
+@@ -1513,8 +1513,8 @@ err_unuse_clocks:
+ 	omap_i2c_write_reg(omap, OMAP_I2C_CON_REG, 0);
+ 	pm_runtime_dont_use_autosuspend(omap->dev);
+ 	pm_runtime_put_sync(omap->dev);
++err_disable_pm:
+ 	pm_runtime_disable(&pdev->dev);
+-err_free_mem:
+ 
+ 	return r;
+ }
+@@ -1525,7 +1525,7 @@ static int omap_i2c_remove(struct platform_device *pdev)
+ 	int ret;
+ 
+ 	i2c_del_adapter(&omap->adapter);
+-	ret = pm_runtime_get_sync(&pdev->dev);
++	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 12f6d452c0f70..8722ca23f889b 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -1027,7 +1027,10 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ 	if (of_property_read_bool(dev->of_node, "smbus"))
+ 		priv->flags |= ID_P_HOST_NOTIFY;
+ 
+-	priv->irq = platform_get_irq(pdev, 0);
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		goto out_pm_disable;
++	priv->irq = ret;
+ 	ret = devm_request_irq(dev, priv->irq, irqhandler, irqflags, dev_name(dev), priv);
+ 	if (ret < 0) {
+ 		dev_err(dev, "cannot get irq %d\n", priv->irq);
+diff --git a/drivers/i2c/busses/i2c-sh7760.c b/drivers/i2c/busses/i2c-sh7760.c
+index c2005c789d2b0..319d1fa617c88 100644
+--- a/drivers/i2c/busses/i2c-sh7760.c
++++ b/drivers/i2c/busses/i2c-sh7760.c
+@@ -471,7 +471,10 @@ static int sh7760_i2c_probe(struct platform_device *pdev)
+ 		goto out2;
+ 	}
+ 
+-	id->irq = platform_get_irq(pdev, 0);
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		goto out3;
++	id->irq = ret;
+ 
+ 	id->adap.nr = pdev->id;
+ 	id->adap.algo = &sh7760_i2c_algo;
+diff --git a/drivers/i2c/busses/i2c-sprd.c b/drivers/i2c/busses/i2c-sprd.c
+index 2917fecf6c80d..8ead7e021008c 100644
+--- a/drivers/i2c/busses/i2c-sprd.c
++++ b/drivers/i2c/busses/i2c-sprd.c
+@@ -290,7 +290,7 @@ static int sprd_i2c_master_xfer(struct i2c_adapter *i2c_adap,
+ 	struct sprd_i2c *i2c_dev = i2c_adap->algo_data;
+ 	int im, ret;
+ 
+-	ret = pm_runtime_get_sync(i2c_dev->dev);
++	ret = pm_runtime_resume_and_get(i2c_dev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -576,7 +576,7 @@ static int sprd_i2c_remove(struct platform_device *pdev)
+ 	struct sprd_i2c *i2c_dev = platform_get_drvdata(pdev);
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(i2c_dev->dev);
++	ret = pm_runtime_resume_and_get(i2c_dev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c
+index c62c815b88eb6..318abfa7926b2 100644
+--- a/drivers/i2c/busses/i2c-stm32f7.c
++++ b/drivers/i2c/busses/i2c-stm32f7.c
+@@ -1652,7 +1652,7 @@ static int stm32f7_i2c_xfer(struct i2c_adapter *i2c_adap,
+ 	i2c_dev->msg_id = 0;
+ 	f7_msg->smbus = false;
+ 
+-	ret = pm_runtime_get_sync(i2c_dev->dev);
++	ret = pm_runtime_resume_and_get(i2c_dev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -1698,7 +1698,7 @@ static int stm32f7_i2c_smbus_xfer(struct i2c_adapter *adapter, u16 addr,
+ 	f7_msg->read_write = read_write;
+ 	f7_msg->smbus = true;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -1799,7 +1799,7 @@ static int stm32f7_i2c_reg_slave(struct i2c_client *slave)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = pm_runtime_get_sync(dev);
++	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -1880,7 +1880,7 @@ static int stm32f7_i2c_unreg_slave(struct i2c_client *slave)
+ 
+ 	WARN_ON(!i2c_dev->slave[id]);
+ 
+-	ret = pm_runtime_get_sync(i2c_dev->dev);
++	ret = pm_runtime_resume_and_get(i2c_dev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -2273,7 +2273,7 @@ static int stm32f7_i2c_regs_backup(struct stm32f7_i2c_dev *i2c_dev)
+ 	int ret;
+ 	struct stm32f7_i2c_regs *backup_regs = &i2c_dev->backup_regs;
+ 
+-	ret = pm_runtime_get_sync(i2c_dev->dev);
++	ret = pm_runtime_resume_and_get(i2c_dev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -2295,7 +2295,7 @@ static int stm32f7_i2c_regs_restore(struct stm32f7_i2c_dev *i2c_dev)
+ 	int ret;
+ 	struct stm32f7_i2c_regs *backup_regs = &i2c_dev->backup_regs;
+ 
+-	ret = pm_runtime_get_sync(i2c_dev->dev);
++	ret = pm_runtime_resume_and_get(i2c_dev->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index 087b2951942eb..2a8568b97c14d 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -706,7 +706,7 @@ static int xiic_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
+ 	dev_dbg(adap->dev.parent, "%s entry SR: 0x%x\n", __func__,
+ 		xiic_getreg8(i2c, XIIC_SR_REG_OFFSET));
+ 
+-	err = pm_runtime_get_sync(i2c->dev);
++	err = pm_runtime_resume_and_get(i2c->dev);
+ 	if (err < 0)
+ 		return err;
+ 
+@@ -873,7 +873,7 @@ static int xiic_i2c_remove(struct platform_device *pdev)
+ 	/* remove adapter & data */
+ 	i2c_del_adapter(&i2c->adap);
+ 
+-	ret = pm_runtime_get_sync(i2c->dev);
++	ret = pm_runtime_resume_and_get(i2c->dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index f8e9b7305c133..e2e12a5585e51 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -2535,7 +2535,7 @@ int i3c_master_register(struct i3c_master_controller *master,
+ 
+ 	ret = i3c_master_bus_init(master);
+ 	if (ret)
+-		goto err_destroy_wq;
++		goto err_put_dev;
+ 
+ 	ret = device_add(&master->dev);
+ 	if (ret)
+@@ -2566,9 +2566,6 @@ err_del_dev:
+ err_cleanup_bus:
+ 	i3c_master_bus_cleanup(master);
+ 
+-err_destroy_wq:
+-	destroy_workqueue(master->wq);
+-
+ err_put_dev:
+ 	put_device(&master->dev);
+ 
+diff --git a/drivers/iio/accel/adis16201.c b/drivers/iio/accel/adis16201.c
+index 3633a4e302c68..fe225990de24b 100644
+--- a/drivers/iio/accel/adis16201.c
++++ b/drivers/iio/accel/adis16201.c
+@@ -215,7 +215,7 @@ static const struct iio_chan_spec adis16201_channels[] = {
+ 	ADIS_AUX_ADC_CHAN(ADIS16201_AUX_ADC_REG, ADIS16201_SCAN_AUX_ADC, 0, 12),
+ 	ADIS_INCLI_CHAN(X, ADIS16201_XINCL_OUT_REG, ADIS16201_SCAN_INCLI_X,
+ 			BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 14),
+-	ADIS_INCLI_CHAN(X, ADIS16201_YINCL_OUT_REG, ADIS16201_SCAN_INCLI_Y,
++	ADIS_INCLI_CHAN(Y, ADIS16201_YINCL_OUT_REG, ADIS16201_SCAN_INCLI_Y,
+ 			BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 14),
+ 	IIO_CHAN_SOFT_TIMESTAMP(7)
+ };
+diff --git a/drivers/iio/adc/Kconfig b/drivers/iio/adc/Kconfig
+index e0667c4b3c08a..dda0f1e37ec16 100644
+--- a/drivers/iio/adc/Kconfig
++++ b/drivers/iio/adc/Kconfig
+@@ -249,7 +249,7 @@ config AD799X
+ config AD9467
+ 	tristate "Analog Devices AD9467 High Speed ADC driver"
+ 	depends on SPI
+-	select ADI_AXI_ADC
++	depends on ADI_AXI_ADC
+ 	help
+ 	  Say yes here to build support for Analog Devices:
+ 	  * AD9467 16-Bit, 200 MSPS/250 MSPS Analog-to-Digital Converter
+diff --git a/drivers/iio/adc/ad7476.c b/drivers/iio/adc/ad7476.c
+index 17402714b3876..9e9ff07cf972b 100644
+--- a/drivers/iio/adc/ad7476.c
++++ b/drivers/iio/adc/ad7476.c
+@@ -321,25 +321,15 @@ static int ad7476_probe(struct spi_device *spi)
+ 	spi_message_init(&st->msg);
+ 	spi_message_add_tail(&st->xfer, &st->msg);
+ 
+-	ret = iio_triggered_buffer_setup(indio_dev, NULL,
+-			&ad7476_trigger_handler, NULL);
++	ret = devm_iio_triggered_buffer_setup(&spi->dev, indio_dev, NULL,
++					      &ad7476_trigger_handler, NULL);
+ 	if (ret)
+-		goto error_disable_reg;
++		return ret;
+ 
+ 	if (st->chip_info->reset)
+ 		st->chip_info->reset(st);
+ 
+-	ret = iio_device_register(indio_dev);
+-	if (ret)
+-		goto error_ring_unregister;
+-	return 0;
+-
+-error_ring_unregister:
+-	iio_triggered_buffer_cleanup(indio_dev);
+-error_disable_reg:
+-	regulator_disable(st->reg);
+-
+-	return ret;
++	return devm_iio_device_register(&spi->dev, indio_dev);
+ }
+ 
+ static const struct spi_device_id ad7476_id[] = {
+diff --git a/drivers/iio/imu/adis16480.c b/drivers/iio/imu/adis16480.c
+index dfe86c5893254..c41b8ef1e2509 100644
+--- a/drivers/iio/imu/adis16480.c
++++ b/drivers/iio/imu/adis16480.c
+@@ -10,6 +10,7 @@
+ #include <linux/of_irq.h>
+ #include <linux/interrupt.h>
+ #include <linux/delay.h>
++#include <linux/math.h>
+ #include <linux/mutex.h>
+ #include <linux/device.h>
+ #include <linux/kernel.h>
+@@ -17,6 +18,7 @@
+ #include <linux/slab.h>
+ #include <linux/sysfs.h>
+ #include <linux/module.h>
++#include <linux/lcm.h>
+ 
+ #include <linux/iio/iio.h>
+ #include <linux/iio/sysfs.h>
+@@ -170,6 +172,11 @@ static const char * const adis16480_int_pin_names[4] = {
+ 	[ADIS16480_PIN_DIO4] = "DIO4",
+ };
+ 
++static bool low_rate_allow;
++module_param(low_rate_allow, bool, 0444);
++MODULE_PARM_DESC(low_rate_allow,
++		 "Allow IMU rates below the minimum advisable when external clk is used in PPS mode (default: N)");
++
+ #ifdef CONFIG_DEBUG_FS
+ 
+ static ssize_t adis16480_show_firmware_revision(struct file *file,
+@@ -312,7 +319,8 @@ static int adis16480_debugfs_init(struct iio_dev *indio_dev)
+ static int adis16480_set_freq(struct iio_dev *indio_dev, int val, int val2)
+ {
+ 	struct adis16480 *st = iio_priv(indio_dev);
+-	unsigned int t, reg;
++	unsigned int t, sample_rate = st->clk_freq;
++	int ret;
+ 
+ 	if (val < 0 || val2 < 0)
+ 		return -EINVAL;
+@@ -321,28 +329,65 @@ static int adis16480_set_freq(struct iio_dev *indio_dev, int val, int val2)
+ 	if (t == 0)
+ 		return -EINVAL;
+ 
++	mutex_lock(&st->adis.state_lock);
+ 	/*
+-	 * When using PPS mode, the rate of data collection is equal to the
+-	 * product of the external clock frequency and the scale factor in the
+-	 * SYNC_SCALE register.
+-	 * When using sync mode, or internal clock, the output data rate is
+-	 * equal with  the clock frequency divided by DEC_RATE + 1.
++	 * When using PPS mode, the input clock needs to be scaled so that we have an IMU
++	 * sample rate between (optimally) 4000 and 4250. After this, we can use the
++	 * decimation filter to lower the sampling rate in order to get what the user wants.
++	 * Optimally, the user sample rate is a multiple of both the IMU sample rate and
++	 * the input clock. Hence, calculating the sync_scale dynamically gives us better
++	 * chances of achieving a perfect/integer value for DEC_RATE. The math here is:
++	 *	1. lcm of the input clock and the desired output rate.
++	 *	2. get the highest multiple of the previous result lower than the adis max rate.
++	 *	3. The last result becomes the IMU sample rate. Use that to calculate SYNC_SCALE
++	 *	   and DEC_RATE (to get the user output rate)
+ 	 */
+ 	if (st->clk_mode == ADIS16480_CLK_PPS) {
+-		t = t / st->clk_freq;
+-		reg = ADIS16495_REG_SYNC_SCALE;
+-	} else {
+-		t = st->clk_freq / t;
+-		reg = ADIS16480_REG_DEC_RATE;
++		unsigned long scaled_rate = lcm(st->clk_freq, t);
++		int sync_scale;
++
++		/*
++		 * If lcm is bigger than the IMU maximum sampling rate there's no perfect
++		 * solution. In this case, we get the highest multiple of the input clock
++		 * lower than the IMU max sample rate.
++		 */
++		if (scaled_rate > st->chip_info->int_clk)
++			scaled_rate = st->chip_info->int_clk / st->clk_freq * st->clk_freq;
++		else
++			scaled_rate = st->chip_info->int_clk / scaled_rate * scaled_rate;
++
++		/*
++		 * This is not an hard requirement but it's not advised to run the IMU
++		 * with a sample rate lower than 4000Hz due to possible undersampling
++		 * issues. However, there are users that might really want to take the risk.
++		 * Hence, we provide a module parameter for them. If set, we allow sample
++		 * rates lower than 4KHz. By default, we won't allow this and we just roundup
++		 * the rate to the next multiple of the input clock bigger than 4KHz. This
++		 * is done like this as in some cases (when DEC_RATE is 0) might give
++		 * us the closest value to the one desired by the user...
++		 */
++		if (scaled_rate < 4000000 && !low_rate_allow)
++			scaled_rate = roundup(4000000, st->clk_freq);
++
++		sync_scale = scaled_rate / st->clk_freq;
++		ret = __adis_write_reg_16(&st->adis, ADIS16495_REG_SYNC_SCALE, sync_scale);
++		if (ret)
++			goto error;
++
++		sample_rate = scaled_rate;
+ 	}
+ 
++	t = DIV_ROUND_CLOSEST(sample_rate, t);
++	if (t)
++		t--;
++
+ 	if (t > st->chip_info->max_dec_rate)
+ 		t = st->chip_info->max_dec_rate;
+ 
+-	if ((t != 0) && (st->clk_mode != ADIS16480_CLK_PPS))
+-		t--;
+-
+-	return adis_write_reg_16(&st->adis, reg, t);
++	ret = __adis_write_reg_16(&st->adis, ADIS16480_REG_DEC_RATE, t);
++error:
++	mutex_unlock(&st->adis.state_lock);
++	return ret;
+ }
+ 
+ static int adis16480_get_freq(struct iio_dev *indio_dev, int *val, int *val2)
+@@ -350,34 +395,35 @@ static int adis16480_get_freq(struct iio_dev *indio_dev, int *val, int *val2)
+ 	struct adis16480 *st = iio_priv(indio_dev);
+ 	uint16_t t;
+ 	int ret;
+-	unsigned int freq;
+-	unsigned int reg;
++	unsigned int freq, sample_rate = st->clk_freq;
+ 
+-	if (st->clk_mode == ADIS16480_CLK_PPS)
+-		reg = ADIS16495_REG_SYNC_SCALE;
+-	else
+-		reg = ADIS16480_REG_DEC_RATE;
++	mutex_lock(&st->adis.state_lock);
++
++	if (st->clk_mode == ADIS16480_CLK_PPS) {
++		u16 sync_scale;
++
++		ret = __adis_read_reg_16(&st->adis, ADIS16495_REG_SYNC_SCALE, &sync_scale);
++		if (ret)
++			goto error;
+ 
+-	ret = adis_read_reg_16(&st->adis, reg, &t);
++		sample_rate = st->clk_freq * sync_scale;
++	}
++
++	ret = __adis_read_reg_16(&st->adis, ADIS16480_REG_DEC_RATE, &t);
+ 	if (ret)
+-		return ret;
++		goto error;
+ 
+-	/*
+-	 * When using PPS mode, the rate of data collection is equal to the
+-	 * product of the external clock frequency and the scale factor in the
+-	 * SYNC_SCALE register.
+-	 * When using sync mode, or internal clock, the output data rate is
+-	 * equal with  the clock frequency divided by DEC_RATE + 1.
+-	 */
+-	if (st->clk_mode == ADIS16480_CLK_PPS)
+-		freq = st->clk_freq * t;
+-	else
+-		freq = st->clk_freq / (t + 1);
++	mutex_unlock(&st->adis.state_lock);
++
++	freq = DIV_ROUND_CLOSEST(sample_rate, (t + 1));
+ 
+ 	*val = freq / 1000;
+ 	*val2 = (freq % 1000) * 1000;
+ 
+ 	return IIO_VAL_INT_PLUS_MICRO;
++error:
++	mutex_unlock(&st->adis.state_lock);
++	return ret;
+ }
+ 
+ enum {
+@@ -1278,6 +1324,20 @@ static int adis16480_probe(struct spi_device *spi)
+ 
+ 		st->clk_freq = clk_get_rate(st->ext_clk);
+ 		st->clk_freq *= 1000; /* micro */
++		if (st->clk_mode == ADIS16480_CLK_PPS) {
++			u16 sync_scale;
++
++			/*
++			 * In PPS mode, the IMU sample rate is the clk_freq * sync_scale. Hence,
++			 * default the IMU sample rate to the highest multiple of the input clock
++			 * lower than the IMU max sample rate. The internal sample rate is the
++			 * max...
++			 */
++			sync_scale = st->chip_info->int_clk / st->clk_freq;
++			ret = __adis_write_reg_16(&st->adis, ADIS16495_REG_SYNC_SCALE, sync_scale);
++			if (ret)
++				return ret;
++		}
+ 	} else {
+ 		st->clk_freq = st->chip_info->int_clk;
+ 	}
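
The long comment in the adis16480 hunk above spells out the PPS-mode math: take the lcm of the external clock and the requested output rate, scale up to the largest multiple of that lcm that still fits under the IMU's maximum internal rate (falling back to a multiple of the clock when the lcm itself is already too large), keep the result above the 4 kHz advisable floor, then derive SYNC_SCALE and DEC_RATE from it. The standalone C sketch below walks through that arithmetic with made-up example frequencies in plain Hz (the driver itself works in finer-grained units); the 4250 Hz ceiling is borrowed from the ADIS16495 family as an assumption.

#include <stdio.h>

/* Illustrative limits: assumed 4250 Hz maximum internal rate and a
 * 4000 Hz advisable minimum, with low_rate_allow treated as false. */
#define IMU_MAX_RATE	4250
#define IMU_MIN_RATE	4000

static unsigned long gcd(unsigned long a, unsigned long b)
{
	while (b) {
		unsigned long t = a % b;
		a = b;
		b = t;
	}
	return a;
}

static unsigned long lcm(unsigned long a, unsigned long b)
{
	return a / gcd(a, b) * b;
}

static unsigned long roundup_to(unsigned long x, unsigned long mult)
{
	return (x + mult - 1) / mult * mult;
}

/*
 * Given the external PPS clock and the requested output rate, compute the
 * IMU sample rate (SYNC_SCALE * clk) and the decimation register value,
 * following the same steps as the patched driver.
 */
static void pick_rates(unsigned long clk, unsigned long target)
{
	unsigned long scaled = lcm(clk, target);
	unsigned long sync_scale, dec;

	if (scaled > IMU_MAX_RATE)
		scaled = IMU_MAX_RATE / clk * clk;
	else
		scaled = IMU_MAX_RATE / scaled * scaled;

	if (scaled < IMU_MIN_RATE)
		scaled = roundup_to(IMU_MIN_RATE, clk);

	sync_scale = scaled / clk;
	dec = (scaled + target / 2) / target;	/* DIV_ROUND_CLOSEST */
	if (dec)
		dec--;

	printf("clk=%lu Hz target=%lu Hz -> imu=%lu Hz, SYNC_SCALE=%lu, DEC_RATE=%lu\n",
	       clk, target, scaled, sync_scale, dec);
}

int main(void)
{
	pick_rates(100, 500);	/* lcm=500 -> imu 4000, scale 40, dec 7 */
	pick_rates(125, 1000);	/* lcm=1000 -> imu 4000, scale 32, dec 3 */
	return 0;
}

With inputs like these the decimated output rate divides the IMU rate exactly, which is the point of involving the lcm: DEC_RATE comes out as an integer and the user-visible rate carries no rounding error.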
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+index 453c51c796555..69ab94ab72975 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+@@ -731,12 +731,16 @@ inv_mpu6050_read_raw(struct iio_dev *indio_dev,
+ 	}
+ }
+ 
+-static int inv_mpu6050_write_gyro_scale(struct inv_mpu6050_state *st, int val)
++static int inv_mpu6050_write_gyro_scale(struct inv_mpu6050_state *st, int val,
++					int val2)
+ {
+ 	int result, i;
+ 
++	if (val != 0)
++		return -EINVAL;
++
+ 	for (i = 0; i < ARRAY_SIZE(gyro_scale_6050); ++i) {
+-		if (gyro_scale_6050[i] == val) {
++		if (gyro_scale_6050[i] == val2) {
+ 			result = inv_mpu6050_set_gyro_fsr(st, i);
+ 			if (result)
+ 				return result;
+@@ -767,13 +771,17 @@ static int inv_write_raw_get_fmt(struct iio_dev *indio_dev,
+ 	return -EINVAL;
+ }
+ 
+-static int inv_mpu6050_write_accel_scale(struct inv_mpu6050_state *st, int val)
++static int inv_mpu6050_write_accel_scale(struct inv_mpu6050_state *st, int val,
++					 int val2)
+ {
+ 	int result, i;
+ 	u8 d;
+ 
++	if (val != 0)
++		return -EINVAL;
++
+ 	for (i = 0; i < ARRAY_SIZE(accel_scale); ++i) {
+-		if (accel_scale[i] == val) {
++		if (accel_scale[i] == val2) {
+ 			d = (i << INV_MPU6050_ACCL_CONFIG_FSR_SHIFT);
+ 			result = regmap_write(st->map, st->reg->accl_config, d);
+ 			if (result)
+@@ -814,10 +822,10 @@ static int inv_mpu6050_write_raw(struct iio_dev *indio_dev,
+ 	case IIO_CHAN_INFO_SCALE:
+ 		switch (chan->type) {
+ 		case IIO_ANGL_VEL:
+-			result = inv_mpu6050_write_gyro_scale(st, val2);
++			result = inv_mpu6050_write_gyro_scale(st, val, val2);
+ 			break;
+ 		case IIO_ACCEL:
+-			result = inv_mpu6050_write_accel_scale(st, val2);
++			result = inv_mpu6050_write_accel_scale(st, val, val2);
+ 			break;
+ 		default:
+ 			result = -EINVAL;
+diff --git a/drivers/iio/magnetometer/yamaha-yas530.c b/drivers/iio/magnetometer/yamaha-yas530.c
+index d46f23d82b3da..2f2f8cb3c26cd 100644
+--- a/drivers/iio/magnetometer/yamaha-yas530.c
++++ b/drivers/iio/magnetometer/yamaha-yas530.c
+@@ -32,13 +32,14 @@
+ #include <linux/regmap.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/random.h>
+-#include <linux/unaligned/be_byteshift.h>
+ 
+ #include <linux/iio/buffer.h>
+ #include <linux/iio/iio.h>
+ #include <linux/iio/trigger_consumer.h>
+ #include <linux/iio/triggered_buffer.h>
+ 
++#include <asm/unaligned.h>
++
+ /* This register map covers YAS530 and YAS532 but differs in YAS 537 and YAS539 */
+ #define YAS5XX_DEVICE_ID		0x80
+ #define YAS5XX_ACTUATE_INIT_COIL	0x81
+@@ -887,6 +888,7 @@ static int yas5xx_probe(struct i2c_client *i2c,
+ 		strncpy(yas5xx->name, "yas532", sizeof(yas5xx->name));
+ 		break;
+ 	default:
++		ret = -ENODEV;
+ 		dev_err(dev, "unhandled device ID %02x\n", yas5xx->devid);
+ 		goto assert_reset;
+ 	}
+diff --git a/drivers/iio/orientation/hid-sensor-rotation.c b/drivers/iio/orientation/hid-sensor-rotation.c
+index 18e4ef0600963..c087d8f72a546 100644
+--- a/drivers/iio/orientation/hid-sensor-rotation.c
++++ b/drivers/iio/orientation/hid-sensor-rotation.c
+@@ -21,7 +21,7 @@ struct dev_rot_state {
+ 	struct hid_sensor_common common_attributes;
+ 	struct hid_sensor_hub_attribute_info quaternion;
+ 	struct {
+-		u32 sampled_vals[4] __aligned(16);
++		s32 sampled_vals[4] __aligned(16);
+ 		u64 timestamp __aligned(8);
+ 	} scan;
+ 	int scale_pre_decml;
+@@ -170,8 +170,15 @@ static int dev_rot_capture_sample(struct hid_sensor_hub_device *hsdev,
+ 	struct dev_rot_state *rot_state = iio_priv(indio_dev);
+ 
+ 	if (usage_id == HID_USAGE_SENSOR_ORIENT_QUATERNION) {
+-		memcpy(&rot_state->scan.sampled_vals, raw_data,
+-		       sizeof(rot_state->scan.sampled_vals));
++		if (raw_len / 4 == sizeof(s16)) {
++			rot_state->scan.sampled_vals[0] = ((s16 *)raw_data)[0];
++			rot_state->scan.sampled_vals[1] = ((s16 *)raw_data)[1];
++			rot_state->scan.sampled_vals[2] = ((s16 *)raw_data)[2];
++			rot_state->scan.sampled_vals[3] = ((s16 *)raw_data)[3];
++		} else {
++			memcpy(&rot_state->scan.sampled_vals, raw_data,
++			       sizeof(rot_state->scan.sampled_vals));
++		}
+ 
+ 		dev_dbg(&indio_dev->dev, "Recd Quat len:%zu::%zu\n", raw_len,
+ 			sizeof(rot_state->scan.sampled_vals));
+diff --git a/drivers/iio/proximity/sx9310.c b/drivers/iio/proximity/sx9310.c
+index 37fd0b65a0140..ea82cfaf7f427 100644
+--- a/drivers/iio/proximity/sx9310.c
++++ b/drivers/iio/proximity/sx9310.c
+@@ -763,7 +763,11 @@ static int sx9310_write_far_debounce(struct sx9310_data *data, int val)
+ 	int ret;
+ 	unsigned int regval;
+ 
+-	val = ilog2(val);
++	if (val > 0)
++		val = ilog2(val);
++	if (!FIELD_FIT(SX9310_REG_PROX_CTRL10_FAR_DEBOUNCE_MASK, val))
++		return -EINVAL;
++
+ 	regval = FIELD_PREP(SX9310_REG_PROX_CTRL10_FAR_DEBOUNCE_MASK, val);
+ 
+ 	mutex_lock(&data->mutex);
+@@ -780,7 +784,11 @@ static int sx9310_write_close_debounce(struct sx9310_data *data, int val)
+ 	int ret;
+ 	unsigned int regval;
+ 
+-	val = ilog2(val);
++	if (val > 0)
++		val = ilog2(val);
++	if (!FIELD_FIT(SX9310_REG_PROX_CTRL10_CLOSE_DEBOUNCE_MASK, val))
++		return -EINVAL;
++
+ 	regval = FIELD_PREP(SX9310_REG_PROX_CTRL10_CLOSE_DEBOUNCE_MASK, val);
+ 
+ 	mutex_lock(&data->mutex);
+@@ -1213,17 +1221,17 @@ static int sx9310_init_compensation(struct iio_dev *indio_dev)
+ }
+ 
+ static const struct sx9310_reg_default *
+-sx9310_get_default_reg(struct sx9310_data *data, int i,
++sx9310_get_default_reg(struct sx9310_data *data, int idx,
+ 		       struct sx9310_reg_default *reg_def)
+ {
+-	int ret;
+ 	const struct device_node *np = data->client->dev.of_node;
+-	u32 combined[SX9310_NUM_CHANNELS] = { 4, 4, 4, 4 };
++	u32 combined[SX9310_NUM_CHANNELS];
++	u32 start = 0, raw = 0, pos = 0;
+ 	unsigned long comb_mask = 0;
++	int ret, i, count;
+ 	const char *res;
+-	u32 start = 0, raw = 0, pos = 0;
+ 
+-	memcpy(reg_def, &sx9310_default_regs[i], sizeof(*reg_def));
++	memcpy(reg_def, &sx9310_default_regs[idx], sizeof(*reg_def));
+ 	if (!np)
+ 		return reg_def;
+ 
+@@ -1234,15 +1242,31 @@ sx9310_get_default_reg(struct sx9310_data *data, int i,
+ 			reg_def->def |= SX9310_REG_PROX_CTRL2_SHIELDEN_GROUND;
+ 		}
+ 
+-		reg_def->def &= ~SX9310_REG_PROX_CTRL2_COMBMODE_MASK;
+-		of_property_read_u32_array(np, "semtech,combined-sensors",
+-					   combined, ARRAY_SIZE(combined));
+-		for (i = 0; i < ARRAY_SIZE(combined); i++) {
+-			if (combined[i] <= SX9310_NUM_CHANNELS)
+-				comb_mask |= BIT(combined[i]);
++		count = of_property_count_elems_of_size(np, "semtech,combined-sensors",
++							sizeof(u32));
++		if (count > 0 && count <= ARRAY_SIZE(combined)) {
++			ret = of_property_read_u32_array(np, "semtech,combined-sensors",
++							 combined, count);
++			if (ret)
++				break;
++		} else {
++			/*
++			 * Either the property does not exist in the DT or the
++			 * number of entries is incorrect.
++			 */
++			break;
+ 		}
++		for (i = 0; i < count; i++) {
++			if (combined[i] >= SX9310_NUM_CHANNELS) {
++				/* Invalid sensor (invalid DT). */
++				break;
++			}
++			comb_mask |= BIT(combined[i]);
++		}
++		if (i < count)
++			break;
+ 
+-		comb_mask &= 0xf;
++		reg_def->def &= ~SX9310_REG_PROX_CTRL2_COMBMODE_MASK;
+ 		if (comb_mask == (BIT(3) | BIT(2) | BIT(1) | BIT(0)))
+ 			reg_def->def |= SX9310_REG_PROX_CTRL2_COMBMODE_CS0_CS1_CS2_CS3;
+ 		else if (comb_mask == (BIT(1) | BIT(2)))
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 3d194bb608405..6adbaea358aeb 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -2138,7 +2138,8 @@ static int cm_req_handler(struct cm_work *work)
+ 		goto destroy;
+ 	}
+ 
+-	cm_process_routed_req(req_msg, work->mad_recv_wc->wc);
++	if (cm_id_priv->av.ah_attr.type != RDMA_AH_ATTR_TYPE_ROCE)
++		cm_process_routed_req(req_msg, work->mad_recv_wc->wc);
+ 
+ 	memset(&work->path[0], 0, sizeof(work->path[0]));
+ 	if (cm_req_has_alt_path(req_msg))
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 94096511599f0..6ac07911a17bd 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -463,7 +463,6 @@ static void _cma_attach_to_dev(struct rdma_id_private *id_priv,
+ 	id_priv->id.route.addr.dev_addr.transport =
+ 		rdma_node_get_transport(cma_dev->device->node_type);
+ 	list_add_tail(&id_priv->list, &cma_dev->id_list);
+-	rdma_restrack_add(&id_priv->res);
+ 
+ 	trace_cm_id_attach(id_priv, cma_dev->device);
+ }
+@@ -700,6 +699,7 @@ static int cma_ib_acquire_dev(struct rdma_id_private *id_priv,
+ 	mutex_lock(&lock);
+ 	cma_attach_to_dev(id_priv, listen_id_priv->cma_dev);
+ 	mutex_unlock(&lock);
++	rdma_restrack_add(&id_priv->res);
+ 	return 0;
+ }
+ 
+@@ -754,8 +754,10 @@ static int cma_iw_acquire_dev(struct rdma_id_private *id_priv,
+ 	}
+ 
+ out:
+-	if (!ret)
++	if (!ret) {
+ 		cma_attach_to_dev(id_priv, cma_dev);
++		rdma_restrack_add(&id_priv->res);
++	}
+ 
+ 	mutex_unlock(&lock);
+ 	return ret;
+@@ -816,6 +818,7 @@ static int cma_resolve_ib_dev(struct rdma_id_private *id_priv)
+ 
+ found:
+ 	cma_attach_to_dev(id_priv, cma_dev);
++	rdma_restrack_add(&id_priv->res);
+ 	mutex_unlock(&lock);
+ 	addr = (struct sockaddr_ib *)cma_src_addr(id_priv);
+ 	memcpy(&addr->sib_addr, &sgid, sizeof(sgid));
+@@ -2529,6 +2532,7 @@ static int cma_listen_on_dev(struct rdma_id_private *id_priv,
+ 	       rdma_addr_size(cma_src_addr(id_priv)));
+ 
+ 	_cma_attach_to_dev(dev_id_priv, cma_dev);
++	rdma_restrack_add(&dev_id_priv->res);
+ 	cma_id_get(id_priv);
+ 	dev_id_priv->internal_id = 1;
+ 	dev_id_priv->afonly = id_priv->afonly;
+@@ -3169,6 +3173,7 @@ port_found:
+ 	ib_addr_set_pkey(&id_priv->id.route.addr.dev_addr, pkey);
+ 	id_priv->id.port_num = p;
+ 	cma_attach_to_dev(id_priv, cma_dev);
++	rdma_restrack_add(&id_priv->res);
+ 	cma_set_loopback(cma_src_addr(id_priv));
+ out:
+ 	mutex_unlock(&lock);
+@@ -3201,6 +3206,7 @@ static void addr_handler(int status, struct sockaddr *src_addr,
+ 		if (status)
+ 			pr_debug_ratelimited("RDMA CM: ADDR_ERROR: failed to acquire device. status %d\n",
+ 					     status);
++		rdma_restrack_add(&id_priv->res);
+ 	} else if (status) {
+ 		pr_debug_ratelimited("RDMA CM: ADDR_ERROR: failed to resolve IP. status %d\n", status);
+ 	}
+@@ -3812,6 +3818,8 @@ int rdma_bind_addr(struct rdma_cm_id *id, struct sockaddr *addr)
+ 	if (ret)
+ 		goto err2;
+ 
++	if (!cma_any_addr(addr))
++		rdma_restrack_add(&id_priv->res);
+ 	return 0;
+ err2:
+ 	if (id_priv->cma_dev)
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 995d4633b0a1c..d4d4959c2434c 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -2784,6 +2784,7 @@ do_rq:
+ 		dev_err(&cq->hwq.pdev->dev,
+ 			"FP: CQ Processed terminal reported rq_cons_idx 0x%x exceeds max 0x%x\n",
+ 			cqe_cons, rq->max_wqe);
++		rc = -EINVAL;
+ 		goto done;
+ 	}
+ 
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+index fa7878336100a..3ca47004b7527 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+@@ -854,6 +854,7 @@ static int bnxt_qplib_alloc_dpi_tbl(struct bnxt_qplib_res     *res,
+ 
+ unmap_io:
+ 	pci_iounmap(res->pdev, dpit->dbr_bar_reg_iomem);
++	dpit->dbr_bar_reg_iomem = NULL;
+ 	return -ENOMEM;
+ }
+ 
+diff --git a/drivers/infiniband/hw/cxgb4/resource.c b/drivers/infiniband/hw/cxgb4/resource.c
+index 5c95c789f302d..e800e8e8bed5a 100644
+--- a/drivers/infiniband/hw/cxgb4/resource.c
++++ b/drivers/infiniband/hw/cxgb4/resource.c
+@@ -216,7 +216,7 @@ u32 c4iw_get_qpid(struct c4iw_rdev *rdev, struct c4iw_dev_ucontext *uctx)
+ 			goto out;
+ 		entry->qid = qid;
+ 		list_add_tail(&entry->entry, &uctx->cqids);
+-		for (i = qid; i & rdev->qpmask; i++) {
++		for (i = qid + 1; i & rdev->qpmask; i++) {
+ 			entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+ 			if (!entry)
+ 				goto out;
+diff --git a/drivers/infiniband/hw/hfi1/firmware.c b/drivers/infiniband/hw/hfi1/firmware.c
+index 0e83d4b61e463..2cf102b5abd44 100644
+--- a/drivers/infiniband/hw/hfi1/firmware.c
++++ b/drivers/infiniband/hw/hfi1/firmware.c
+@@ -1916,6 +1916,7 @@ int parse_platform_config(struct hfi1_devdata *dd)
+ 			dd_dev_err(dd, "%s: Failed CRC check at offset %ld\n",
+ 				   __func__, (ptr -
+ 				   (u32 *)dd->platform_config.data));
++			ret = -EINVAL;
+ 			goto bail;
+ 		}
+ 		/* Jump the CRC DWORD */
+diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c
+index f3fb28e3d5d74..d213f65d4cdd0 100644
+--- a/drivers/infiniband/hw/hfi1/mmu_rb.c
++++ b/drivers/infiniband/hw/hfi1/mmu_rb.c
+@@ -89,7 +89,7 @@ int hfi1_mmu_rb_register(void *ops_arg,
+ 	struct mmu_rb_handler *h;
+ 	int ret;
+ 
+-	h = kmalloc(sizeof(*h), GFP_KERNEL);
++	h = kzalloc(sizeof(*h), GFP_KERNEL);
+ 	if (!h)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index ce26f97b2ca26..ad3cee54140e1 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -5068,6 +5068,7 @@ done:
+ 	qp_attr->cur_qp_state = qp_attr->qp_state;
+ 	qp_attr->cap.max_recv_wr = hr_qp->rq.wqe_cnt;
+ 	qp_attr->cap.max_recv_sge = hr_qp->rq.max_gs - hr_qp->rq.rsv_sge;
++	qp_attr->cap.max_inline_data = hr_qp->max_inline_data;
+ 
+ 	if (!ibqp->uobject) {
+ 		qp_attr->cap.max_send_wr = hr_qp->sq.wqe_cnt;
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_pble.c b/drivers/infiniband/hw/i40iw/i40iw_pble.c
+index 53e5cd1a2bd6e..146a4148219ba 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_pble.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_pble.c
+@@ -393,12 +393,9 @@ static enum i40iw_status_code add_pble_pool(struct i40iw_sc_dev *dev,
+ 	i40iw_debug(dev, I40IW_DEBUG_PBLE, "next_fpm_addr = %llx chunk_size[%u] = 0x%x\n",
+ 		    pble_rsrc->next_fpm_addr, chunk->size, chunk->size);
+ 	pble_rsrc->unallocated_pble -= (chunk->size >> 3);
+-	list_add(&chunk->list, &pble_rsrc->pinfo.clist);
+ 	sd_reg_val = (sd_entry_type == I40IW_SD_TYPE_PAGED) ?
+ 			sd_entry->u.pd_table.pd_page_addr.pa : sd_entry->u.bp.addr.pa;
+-	if (sd_entry->valid)
+-		return 0;
+-	if (dev->is_pf) {
++	if (dev->is_pf && !sd_entry->valid) {
+ 		ret_code = i40iw_hmc_sd_one(dev, hmc_info->hmc_fn_id,
+ 					    sd_reg_val, idx->sd_idx,
+ 					    sd_entry->entry_type, true);
+@@ -409,6 +406,7 @@ static enum i40iw_status_code add_pble_pool(struct i40iw_sc_dev *dev,
+ 	}
+ 
+ 	sd_entry->valid = true;
++	list_add(&chunk->list, &pble_rsrc->pinfo.clist);
+ 	return 0;
+  error:
+ 	kfree(chunk);
+diff --git a/drivers/infiniband/hw/mlx5/fs.c b/drivers/infiniband/hw/mlx5/fs.c
+index 25da0b05b4e2f..f0af3f1ae0398 100644
+--- a/drivers/infiniband/hw/mlx5/fs.c
++++ b/drivers/infiniband/hw/mlx5/fs.c
+@@ -1528,8 +1528,8 @@ static struct mlx5_ib_flow_handler *raw_fs_rule_add(
+ 		dst_num++;
+ 	}
+ 
+-	handler = _create_raw_flow_rule(dev, ft_prio, dst, fs_matcher,
+-					flow_context, flow_act,
++	handler = _create_raw_flow_rule(dev, ft_prio, dst_num ? dst : NULL,
++					fs_matcher, flow_context, flow_act,
+ 					cmd_in, inlen, dst_num);
+ 
+ 	if (IS_ERR(handler)) {
+@@ -1885,8 +1885,9 @@ static int get_dests(struct uverbs_attr_bundle *attrs,
+ 		else
+ 			*dest_id = mqp->raw_packet_qp.rq.tirn;
+ 		*dest_type = MLX5_FLOW_DESTINATION_TYPE_TIR;
+-	} else if (fs_matcher->ns_type == MLX5_FLOW_NAMESPACE_EGRESS ||
+-		   fs_matcher->ns_type == MLX5_FLOW_NAMESPACE_RDMA_TX) {
++	} else if ((fs_matcher->ns_type == MLX5_FLOW_NAMESPACE_EGRESS ||
++		    fs_matcher->ns_type == MLX5_FLOW_NAMESPACE_RDMA_TX) &&
++		   !(*flags & MLX5_IB_ATTR_CREATE_FLOW_FLAGS_DROP)) {
+ 		*dest_type = MLX5_FLOW_DESTINATION_TYPE_PORT;
+ 	}
+ 
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 0d69a697d75f3..4be7bccefaa40 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -499,7 +499,7 @@ static int mlx5_query_port_roce(struct ib_device *device, u8 port_num,
+ 	translate_eth_proto_oper(eth_prot_oper, &props->active_speed,
+ 				 &props->active_width, ext);
+ 
+-	if (!dev->is_rep && mlx5_is_roce_enabled(mdev)) {
++	if (!dev->is_rep && dev->mdev->roce.roce_en) {
+ 		u16 qkey_viol_cntr;
+ 
+ 		props->port_cap_flags |= IB_PORT_CM_SUP;
+@@ -4174,7 +4174,7 @@ static int mlx5_ib_roce_init(struct mlx5_ib_dev *dev)
+ 
+ 		/* Register only for native ports */
+ 		err = mlx5_add_netdev_notifier(dev, port_num);
+-		if (err || dev->is_rep || !mlx5_is_roce_enabled(mdev))
++		if (err || dev->is_rep || !mlx5_is_roce_init_enabled(mdev))
+ 			/*
+ 			 * We don't enable ETH interface for
+ 			 * 1. IB representors
+@@ -4711,7 +4711,7 @@ static int mlx5r_probe(struct auxiliary_device *adev,
+ 	dev->mdev = mdev;
+ 	dev->num_ports = num_ports;
+ 
+-	if (ll == IB_LINK_LAYER_ETHERNET && !mlx5_is_roce_enabled(mdev))
++	if (ll == IB_LINK_LAYER_ETHERNET && !mlx5_is_roce_init_enabled(mdev))
+ 		profile = &raw_eth_profile;
+ 	else
+ 		profile = &pf_profile;
+diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+index 88cc26e008fce..b085c02b53d0e 100644
+--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
++++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+@@ -547,11 +547,6 @@ static inline const struct mlx5_umr_wr *umr_wr(const struct ib_send_wr *wr)
+ 	return container_of(wr, struct mlx5_umr_wr, wr);
+ }
+ 
+-struct mlx5_shared_mr_info {
+-	int mr_id;
+-	struct ib_umem		*umem;
+-};
+-
+ enum mlx5_ib_cq_pr_flags {
+ 	MLX5_IB_CQ_PR_FLAGS_CQE_128_PAD	= 1 << 0,
+ };
+@@ -654,47 +649,69 @@ struct mlx5_ib_dm {
+ 	atomic64_add(value, &((mr)->odp_stats.counter_name))
+ 
+ struct mlx5_ib_mr {
+-	struct ib_mr		ibmr;
+-	void			*descs;
+-	dma_addr_t		desc_map;
+-	int			ndescs;
+-	int			data_length;
+-	int			meta_ndescs;
+-	int			meta_length;
+-	int			max_descs;
+-	int			desc_size;
+-	int			access_mode;
+-	unsigned int		page_shift;
+-	struct mlx5_core_mkey	mmkey;
+-	struct ib_umem	       *umem;
+-	struct mlx5_shared_mr_info	*smr_info;
+-	struct list_head	list;
+-	struct mlx5_cache_ent  *cache_ent;
+-	u32 out[MLX5_ST_SZ_DW(create_mkey_out)];
+-	struct mlx5_core_sig_ctx    *sig;
+-	void			*descs_alloc;
+-	int			access_flags; /* Needed for rereg MR */
+-
+-	struct mlx5_ib_mr      *parent;
+-	/* Needed for IB_MR_TYPE_INTEGRITY */
+-	struct mlx5_ib_mr      *pi_mr;
+-	struct mlx5_ib_mr      *klm_mr;
+-	struct mlx5_ib_mr      *mtt_mr;
+-	u64			data_iova;
+-	u64			pi_iova;
+-
+-	/* For ODP and implicit */
+-	struct xarray		implicit_children;
+-	union {
+-		struct list_head elm;
+-		struct work_struct work;
+-	} odp_destroy;
+-	struct ib_odp_counters	odp_stats;
+-	bool			is_odp_implicit;
++	struct ib_mr ibmr;
++	struct mlx5_core_mkey mmkey;
+ 
+-	struct mlx5_async_work  cb_work;
++	/* User MR data */
++	struct mlx5_cache_ent *cache_ent;
++	struct ib_umem *umem;
++
++	/* This is zero'd when the MR is allocated */
++	struct {
++		/* Used only while the MR is in the cache */
++		struct {
++			u32 out[MLX5_ST_SZ_DW(create_mkey_out)];
++			struct mlx5_async_work cb_work;
++			/* Cache list element */
++			struct list_head list;
++		};
++
++		/* Used only by kernel MRs (umem == NULL) */
++		struct {
++			void *descs;
++			void *descs_alloc;
++			dma_addr_t desc_map;
++			int max_descs;
++			int ndescs;
++			int desc_size;
++			int access_mode;
++
++			/* For Kernel IB_MR_TYPE_INTEGRITY */
++			struct mlx5_core_sig_ctx *sig;
++			struct mlx5_ib_mr *pi_mr;
++			struct mlx5_ib_mr *klm_mr;
++			struct mlx5_ib_mr *mtt_mr;
++			u64 data_iova;
++			u64 pi_iova;
++			int meta_ndescs;
++			int meta_length;
++			int data_length;
++		};
++
++		/* Used only by User MRs (umem != NULL) */
++		struct {
++			unsigned int page_shift;
++			/* Current access_flags */
++			int access_flags;
++
++			/* For User ODP */
++			struct mlx5_ib_mr *parent;
++			struct xarray implicit_children;
++			union {
++				struct work_struct work;
++			} odp_destroy;
++			struct ib_odp_counters odp_stats;
++			bool is_odp_implicit;
++		};
++	};
+ };
+ 
++/* Zero the fields in the mr that are variant depending on usage */
++static inline void mlx5_clear_mr(struct mlx5_ib_mr *mr)
++{
++	memset(mr->out, 0, sizeof(*mr) - offsetof(struct mlx5_ib_mr, out));
++}
++
+ static inline bool is_odp_mr(struct mlx5_ib_mr *mr)
+ {
+ 	return IS_ENABLED(CONFIG_INFINIBAND_ON_DEMAND_PAGING) && mr->umem &&
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index db05b0e0a8d7a..ea8f068a6da3e 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -590,6 +590,8 @@ struct mlx5_ib_mr *mlx5_mr_cache_alloc(struct mlx5_ib_dev *dev,
+ 		ent->available_mrs--;
+ 		queue_adjust_cache_locked(ent);
+ 		spin_unlock_irq(&ent->lock);
++
++		mlx5_clear_mr(mr);
+ 	}
+ 	mr->access_flags = access_flags;
+ 	return mr;
+@@ -615,16 +617,14 @@ static struct mlx5_ib_mr *get_cache_mr(struct mlx5_cache_ent *req_ent)
+ 			ent->available_mrs--;
+ 			queue_adjust_cache_locked(ent);
+ 			spin_unlock_irq(&ent->lock);
+-			break;
++			mlx5_clear_mr(mr);
++			return mr;
+ 		}
+ 		queue_adjust_cache_locked(ent);
+ 		spin_unlock_irq(&ent->lock);
+ 	}
+-
+-	if (!mr)
+-		req_ent->miss++;
+-
+-	return mr;
++	req_ent->miss++;
++	return NULL;
+ }
+ 
+ static void detach_mr_from_cache(struct mlx5_ib_mr *mr)
+@@ -993,8 +993,6 @@ static struct mlx5_ib_mr *alloc_cacheable_mr(struct ib_pd *pd,
+ 
+ 	mr->ibmr.pd = pd;
+ 	mr->umem = umem;
+-	mr->access_flags = access_flags;
+-	mr->desc_size = sizeof(struct mlx5_mtt);
+ 	mr->mmkey.iova = iova;
+ 	mr->mmkey.size = umem->length;
+ 	mr->mmkey.pd = to_mpd(pd)->pdn;
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index b103555b1f5d4..d98755e78362f 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -227,7 +227,6 @@ static void free_implicit_child_mr(struct mlx5_ib_mr *mr, bool need_imr_xlt)
+ 
+ 	dma_fence_odp_mr(mr);
+ 
+-	mr->parent = NULL;
+ 	mlx5_mr_cache_free(mr_to_mdev(mr), mr);
+ 	ib_umem_odp_release(odp);
+ }
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index f5a52a6fae437..843f9e7fe96ff 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -3146,6 +3146,19 @@ enum {
+ 	MLX5_PATH_FLAG_COUNTER	= 1 << 2,
+ };
+ 
++static int mlx5_to_ib_rate_map(u8 rate)
++{
++	static const int rates[] = { IB_RATE_PORT_CURRENT, IB_RATE_56_GBPS,
++				     IB_RATE_25_GBPS,	   IB_RATE_100_GBPS,
++				     IB_RATE_200_GBPS,	   IB_RATE_50_GBPS,
++				     IB_RATE_400_GBPS };
++
++	if (rate < ARRAY_SIZE(rates))
++		return rates[rate];
++
++	return rate - MLX5_STAT_RATE_OFFSET;
++}
++
+ static int ib_to_mlx5_rate_map(u8 rate)
+ {
+ 	switch (rate) {
+@@ -4485,7 +4498,7 @@ static void to_rdma_ah_attr(struct mlx5_ib_dev *ibdev,
+ 	rdma_ah_set_path_bits(ah_attr, MLX5_GET(ads, path, mlid));
+ 
+ 	static_rate = MLX5_GET(ads, path, stat_rate);
+-	rdma_ah_set_static_rate(ah_attr, static_rate ? static_rate - 5 : 0);
++	rdma_ah_set_static_rate(ah_attr, mlx5_to_ib_rate_map(static_rate));
+ 	if (MLX5_GET(ads, path, grh) ||
+ 	    ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE) {
+ 		rdma_ah_set_grh(ah_attr, NULL, MLX5_GET(ads, path, flow_label),
+diff --git a/drivers/infiniband/hw/qedr/qedr_iw_cm.c b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
+index c4bc58736e489..1715fbe0719d8 100644
+--- a/drivers/infiniband/hw/qedr/qedr_iw_cm.c
++++ b/drivers/infiniband/hw/qedr/qedr_iw_cm.c
+@@ -636,8 +636,10 @@ int qedr_iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
+ 	memcpy(in_params.local_mac_addr, dev->ndev->dev_addr, ETH_ALEN);
+ 
+ 	if (test_and_set_bit(QEDR_IWARP_CM_WAIT_FOR_CONNECT,
+-			     &qp->iwarp_cm_flags))
++			     &qp->iwarp_cm_flags)) {
++		rc = -ENODEV;
+ 		goto err; /* QP already being destroyed */
++	}
+ 
+ 	rc = dev->ops->iwarp_connect(dev->rdma_ctx, &in_params, &out_params);
+ 	if (rc) {
+diff --git a/drivers/infiniband/sw/rxe/rxe_av.c b/drivers/infiniband/sw/rxe/rxe_av.c
+index df0d173d6acba..da2e867a1ed93 100644
+--- a/drivers/infiniband/sw/rxe/rxe_av.c
++++ b/drivers/infiniband/sw/rxe/rxe_av.c
+@@ -88,7 +88,7 @@ void rxe_av_fill_ip_info(struct rxe_av *av, struct rdma_ah_attr *attr)
+ 		type = RXE_NETWORK_TYPE_IPV4;
+ 		break;
+ 	case RDMA_NETWORK_IPV6:
+-		type = RXE_NETWORK_TYPE_IPV4;
++		type = RXE_NETWORK_TYPE_IPV6;
+ 		break;
+ 	default:
+ 		/* not reached - checked in rxe_av_chk_attr */
+diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
+index 34a910cf0edbd..61c17db70d658 100644
+--- a/drivers/infiniband/sw/siw/siw_mem.c
++++ b/drivers/infiniband/sw/siw/siw_mem.c
+@@ -106,8 +106,6 @@ int siw_mr_add_mem(struct siw_mr *mr, struct ib_pd *pd, void *mem_obj,
+ 	mem->perms = rights & IWARP_ACCESS_MASK;
+ 	kref_init(&mem->ref);
+ 
+-	mr->mem = mem;
+-
+ 	get_random_bytes(&next, 4);
+ 	next &= 0x00ffffff;
+ 
+@@ -116,6 +114,8 @@ int siw_mr_add_mem(struct siw_mr *mr, struct ib_pd *pd, void *mem_obj,
+ 		kfree(mem);
+ 		return -ENOMEM;
+ 	}
++
++	mr->mem = mem;
+ 	/* Set the STag index part */
+ 	mem->stag = id << 8;
+ 	mr->base_mr.lkey = mr->base_mr.rkey = mem->stag;
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index 7305ed8976c24..18266f07c58d9 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -438,23 +438,23 @@ isert_connect_request(struct rdma_cm_id *cma_id, struct rdma_cm_event *event)
+ 	isert_init_conn(isert_conn);
+ 	isert_conn->cm_id = cma_id;
+ 
+-	ret = isert_alloc_login_buf(isert_conn, cma_id->device);
+-	if (ret)
+-		goto out;
+-
+ 	device = isert_device_get(cma_id);
+ 	if (IS_ERR(device)) {
+ 		ret = PTR_ERR(device);
+-		goto out_rsp_dma_map;
++		goto out;
+ 	}
+ 	isert_conn->device = device;
+ 
++	ret = isert_alloc_login_buf(isert_conn, cma_id->device);
++	if (ret)
++		goto out_conn_dev;
++
+ 	isert_set_nego_params(isert_conn, &event->param.conn);
+ 
+ 	isert_conn->qp = isert_create_qp(isert_conn, cma_id);
+ 	if (IS_ERR(isert_conn->qp)) {
+ 		ret = PTR_ERR(isert_conn->qp);
+-		goto out_conn_dev;
++		goto out_rsp_dma_map;
+ 	}
+ 
+ 	ret = isert_login_post_recv(isert_conn);
+@@ -473,10 +473,10 @@ isert_connect_request(struct rdma_cm_id *cma_id, struct rdma_cm_event *event)
+ 
+ out_destroy_qp:
+ 	isert_destroy_qp(isert_conn);
+-out_conn_dev:
+-	isert_device_put(device);
+ out_rsp_dma_map:
+ 	isert_free_login_buf(isert_conn);
++out_conn_dev:
++	isert_device_put(device);
+ out:
+ 	kfree(isert_conn);
+ 	rdma_reject(cma_id, NULL, 0, IB_CM_REJ_CONSUMER_DEFINED);
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+index 6734329cca332..959ba0462ef07 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+@@ -2784,8 +2784,8 @@ int rtrs_clt_remove_path_from_sysfs(struct rtrs_clt_sess *sess,
+ 	} while (!changed && old_state != RTRS_CLT_DEAD);
+ 
+ 	if (likely(changed)) {
+-		rtrs_clt_destroy_sess_files(sess, sysfs_self);
+ 		rtrs_clt_remove_path_from_arr(sess);
++		rtrs_clt_destroy_sess_files(sess, sysfs_self);
+ 		kobject_put(&sess->kobj);
+ 	}
+ 
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index 6be60aa5ffe21..7f0420ad90575 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -2378,6 +2378,7 @@ static int srpt_cm_req_recv(struct srpt_device *const sdev,
+ 		pr_info("rejected SRP_LOGIN_REQ because target %s_%d is not enabled\n",
+ 			dev_name(&sdev->device->dev), port_num);
+ 		mutex_unlock(&sport->mutex);
++		ret = -EINVAL;
+ 		goto reject;
+ 	}
+ 
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 321f5906e6ed3..f7e31018cd0b9 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -1837,7 +1837,7 @@ static void __init late_iommu_features_init(struct amd_iommu *iommu)
+ 	 * IVHD and MMIO conflict.
+ 	 */
+ 	if (features != iommu->features)
+-		pr_warn(FW_WARN "EFR mismatch. Use IVHD EFR (%#llx : %#llx\n).",
++		pr_warn(FW_WARN "EFR mismatch. Use IVHD EFR (%#llx : %#llx).\n",
+ 			features, iommu->features);
+ }
+ 
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+index f985817c967a2..230b6f6b39016 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+@@ -115,7 +115,7 @@
+ #define GERROR_PRIQ_ABT_ERR		(1 << 3)
+ #define GERROR_EVTQ_ABT_ERR		(1 << 2)
+ #define GERROR_CMDQ_ERR			(1 << 0)
+-#define GERROR_ERR_MASK			0xfd
++#define GERROR_ERR_MASK			0x1fd
+ 
+ #define ARM_SMMU_GERRORN		0x64
+ 
+diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
+index af765c813cc84..fdd095e1fa521 100644
+--- a/drivers/iommu/dma-iommu.c
++++ b/drivers/iommu/dma-iommu.c
+@@ -52,6 +52,17 @@ struct iommu_dma_cookie {
+ };
+ 
+ static DEFINE_STATIC_KEY_FALSE(iommu_deferred_attach_enabled);
++bool iommu_dma_forcedac __read_mostly;
++
++static int __init iommu_dma_forcedac_setup(char *str)
++{
++	int ret = kstrtobool(str, &iommu_dma_forcedac);
++
++	if (!ret && iommu_dma_forcedac)
++		pr_info("Forcing DAC for PCI devices\n");
++	return ret;
++}
++early_param("iommu.forcedac", iommu_dma_forcedac_setup);
+ 
+ void iommu_dma_free_cpu_cached_iovas(unsigned int cpu,
+ 		struct iommu_domain *domain)
+@@ -444,7 +455,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
+ 		dma_limit = min(dma_limit, (u64)domain->geometry.aperture_end);
+ 
+ 	/* Try to get PCI devices a SAC address */
+-	if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
++	if (dma_limit > DMA_BIT_MASK(32) && !iommu_dma_forcedac && dev_is_pci(dev))
+ 		iova = alloc_iova_fast(iovad, iova_len,
+ 				       DMA_BIT_MASK(32) >> shift, false);
+ 
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 881c9f2a5c7d9..7e551da6c1fbd 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -360,7 +360,6 @@ int intel_iommu_enabled = 0;
+ EXPORT_SYMBOL_GPL(intel_iommu_enabled);
+ 
+ static int dmar_map_gfx = 1;
+-static int dmar_forcedac;
+ static int intel_iommu_strict;
+ static int intel_iommu_superpage = 1;
+ static int iommu_identity_mapping;
+@@ -451,8 +450,8 @@ static int __init intel_iommu_setup(char *str)
+ 			dmar_map_gfx = 0;
+ 			pr_info("Disable GFX device mapping\n");
+ 		} else if (!strncmp(str, "forcedac", 8)) {
+-			pr_info("Forcing DAC for PCI devices\n");
+-			dmar_forcedac = 1;
++			pr_warn("intel_iommu=forcedac deprecated; use iommu.forcedac instead\n");
++			iommu_dma_forcedac = true;
+ 		} else if (!strncmp(str, "strict", 6)) {
+ 			pr_info("Disable batched IOTLB flush\n");
+ 			intel_iommu_strict = 1;
+@@ -658,7 +657,14 @@ static int domain_update_iommu_snooping(struct intel_iommu *skip)
+ 	rcu_read_lock();
+ 	for_each_active_iommu(iommu, drhd) {
+ 		if (iommu != skip) {
+-			if (!ecap_sc_support(iommu->ecap)) {
++			/*
++			 * If the hardware is operating in the scalable mode,
++			 * the snooping control is always supported since we
++			 * always set PASID-table-entry.PGSNP bit if the domain
++			 * is managed outside (UNMANAGED).
++			 */
++			if (!sm_supported(iommu) &&
++			    !ecap_sc_support(iommu->ecap)) {
+ 				ret = 0;
+ 				break;
+ 			}
+@@ -1340,6 +1346,11 @@ static void iommu_set_root_entry(struct intel_iommu *iommu)
+ 		      readl, (sts & DMA_GSTS_RTPS), sts);
+ 
+ 	raw_spin_unlock_irqrestore(&iommu->register_lock, flag);
++
++	iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
++	if (sm_supported(iommu))
++		qi_flush_pasid_cache(iommu, 0, QI_PC_GLOBAL, 0);
++	iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+ }
+ 
+ void iommu_flush_write_buffer(struct intel_iommu *iommu)
+@@ -2340,8 +2351,9 @@ __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
+ 		return -EINVAL;
+ 
+ 	attr = prot & (DMA_PTE_READ | DMA_PTE_WRITE | DMA_PTE_SNP);
++	attr |= DMA_FL_PTE_PRESENT;
+ 	if (domain_use_first_level(domain)) {
+-		attr |= DMA_FL_PTE_PRESENT | DMA_FL_PTE_XD | DMA_FL_PTE_US;
++		attr |= DMA_FL_PTE_XD | DMA_FL_PTE_US;
+ 
+ 		if (domain->domain.type == IOMMU_DOMAIN_DMA) {
+ 			attr |= DMA_FL_PTE_ACCESS;
+@@ -2446,6 +2458,10 @@ static void domain_context_clear_one(struct intel_iommu *iommu, u8 bus, u8 devfn
+ 				   (((u16)bus) << 8) | devfn,
+ 				   DMA_CCMD_MASK_NOBIT,
+ 				   DMA_CCMD_DEVICE_INVL);
++
++	if (sm_supported(iommu))
++		qi_flush_pasid_cache(iommu, did_old, QI_PC_ALL_PASIDS, 0);
++
+ 	iommu->flush.flush_iotlb(iommu,
+ 				 did_old,
+ 				 0,
+@@ -2529,6 +2545,9 @@ static int domain_setup_first_level(struct intel_iommu *iommu,
+ 
+ 	flags |= (level == 5) ? PASID_FLAG_FL5LP : 0;
+ 
++	if (domain->domain.type == IOMMU_DOMAIN_UNMANAGED)
++		flags |= PASID_FLAG_PAGE_SNOOP;
++
+ 	return intel_pasid_setup_first_level(iommu, dev, (pgd_t *)pgd, pasid,
+ 					     domain->iommu_did[iommu->seq_id],
+ 					     flags);
+@@ -3291,8 +3310,6 @@ static int __init init_dmars(void)
+ 		register_pasid_allocator(iommu);
+ #endif
+ 		iommu_set_root_entry(iommu);
+-		iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
+-		iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+ 	}
+ 
+ #ifdef CONFIG_INTEL_IOMMU_BROKEN_GFX_WA
+@@ -3482,12 +3499,7 @@ static int init_iommu_hw(void)
+ 		}
+ 
+ 		iommu_flush_write_buffer(iommu);
+-
+ 		iommu_set_root_entry(iommu);
+-
+-		iommu->flush.flush_context(iommu, 0, 0, 0,
+-					   DMA_CCMD_GLOBAL_INVL);
+-		iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+ 		iommu_enable_translation(iommu);
+ 		iommu_disable_protect_mem_regions(iommu);
+ 	}
+@@ -3870,8 +3882,6 @@ static int intel_iommu_add(struct dmar_drhd_unit *dmaru)
+ 		goto disable_iommu;
+ 
+ 	iommu_set_root_entry(iommu);
+-	iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
+-	iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+ 	iommu_enable_translation(iommu);
+ 
+ 	iommu_disable_protect_mem_regions(iommu);
+diff --git a/drivers/iommu/intel/irq_remapping.c b/drivers/iommu/intel/irq_remapping.c
+index 611ef5243cb63..5c16ebe037a14 100644
+--- a/drivers/iommu/intel/irq_remapping.c
++++ b/drivers/iommu/intel/irq_remapping.c
+@@ -736,7 +736,7 @@ static int __init intel_prepare_irq_remapping(void)
+ 		return -ENODEV;
+ 
+ 	if (intel_cap_audit(CAP_AUDIT_STATIC_IRQR, NULL))
+-		goto error;
++		return -ENODEV;
+ 
+ 	if (!dmar_ir_support())
+ 		return -ENODEV;
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index f26cb6195b2c4..5093d317ff1a2 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -411,6 +411,16 @@ static inline void pasid_set_page_snoop(struct pasid_entry *pe, bool value)
+ 	pasid_set_bits(&pe->val[1], 1 << 23, value << 23);
+ }
+ 
++/*
++ * Setup the Page Snoop (PGSNP) field (Bit 88) of a scalable mode
++ * PASID entry.
++ */
++static inline void
++pasid_set_pgsnp(struct pasid_entry *pe)
++{
++	pasid_set_bits(&pe->val[1], 1ULL << 24, 1ULL << 24);
++}
++
+ /*
+  * Setup the First Level Page table Pointer field (Bit 140~191)
+  * of a scalable mode PASID entry.
+@@ -565,6 +575,9 @@ int intel_pasid_setup_first_level(struct intel_iommu *iommu,
+ 		}
+ 	}
+ 
++	if (flags & PASID_FLAG_PAGE_SNOOP)
++		pasid_set_pgsnp(pte);
++
+ 	pasid_set_domain_id(pte, did);
+ 	pasid_set_address_width(pte, iommu->agaw);
+ 	pasid_set_page_snoop(pte, !!ecap_smpwc(iommu->ecap));
+@@ -643,6 +656,9 @@ int intel_pasid_setup_second_level(struct intel_iommu *iommu,
+ 	pasid_set_fault_enable(pte);
+ 	pasid_set_page_snoop(pte, !!ecap_smpwc(iommu->ecap));
+ 
++	if (domain->domain.type == IOMMU_DOMAIN_UNMANAGED)
++		pasid_set_pgsnp(pte);
++
+ 	/*
+ 	 * Since it is a second level only translation setup, we should
+ 	 * set SRE bit as well (addresses are expected to be GPAs).
+diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
+index 444c0bec221a4..086ebd6973199 100644
+--- a/drivers/iommu/intel/pasid.h
++++ b/drivers/iommu/intel/pasid.h
+@@ -48,6 +48,7 @@
+  */
+ #define PASID_FLAG_SUPERVISOR_MODE	BIT(0)
+ #define PASID_FLAG_NESTED		BIT(1)
++#define PASID_FLAG_PAGE_SNOOP		BIT(2)
+ 
+ /*
+  * The PASID_FLAG_FL5LP flag Indicates using 5-level paging for first-
+diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
+index 574a7e657a9af..ecb6314fdd5cd 100644
+--- a/drivers/iommu/intel/svm.c
++++ b/drivers/iommu/intel/svm.c
+@@ -862,7 +862,7 @@ intel_svm_prq_report(struct device *dev, struct page_req_dsc *desc)
+ 	/* Fill in event data for device specific processing */
+ 	memset(&event, 0, sizeof(struct iommu_fault_event));
+ 	event.fault.type = IOMMU_FAULT_PAGE_REQ;
+-	event.fault.prm.addr = desc->addr;
++	event.fault.prm.addr = (u64)desc->addr << VTD_PAGE_SHIFT;
+ 	event.fault.prm.pasid = desc->pasid;
+ 	event.fault.prm.grpid = desc->prg_index;
+ 	event.fault.prm.perm = prq_to_iommu_prot(desc);
+@@ -920,7 +920,17 @@ static irqreturn_t prq_event_thread(int irq, void *d)
+ 			       ((unsigned long long *)req)[1]);
+ 			goto no_pasid;
+ 		}
+-
++		/* We shall not receive page request for supervisor SVM */
++		if (req->pm_req && (req->rd_req | req->wr_req)) {
++			pr_err("Unexpected page request in Privilege Mode");
++			/* No need to find the matching sdev as for bad_req */
++			goto no_pasid;
++		}
++		/* DMA read with exec requeset is not supported. */
++		if (req->exe_req && req->rd_req) {
++			pr_err("Execution request not supported\n");
++			goto no_pasid;
++		}
+ 		if (!svm || svm->pasid != req->pasid) {
+ 			rcu_read_lock();
+ 			svm = ioasid_find(NULL, req->pasid, NULL);
+@@ -1021,12 +1031,12 @@ no_pasid:
+ 				QI_PGRP_RESP_TYPE;
+ 			resp.qw1 = QI_PGRP_IDX(req->prg_index) |
+ 				QI_PGRP_LPIG(req->lpig);
++			resp.qw2 = 0;
++			resp.qw3 = 0;
+ 
+ 			if (req->priv_data_present)
+ 				memcpy(&resp.qw2, req->priv_data,
+ 				       sizeof(req->priv_data));
+-			resp.qw2 = 0;
+-			resp.qw3 = 0;
+ 			qi_submit_sync(iommu, &resp, 1, 0);
+ 		}
+ prq_advance:
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index d0b0a15dba841..e10cfa99057ce 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -2878,10 +2878,12 @@ EXPORT_SYMBOL_GPL(iommu_fwspec_add_ids);
+  */
+ int iommu_dev_enable_feature(struct device *dev, enum iommu_dev_features feat)
+ {
+-	const struct iommu_ops *ops = dev->bus->iommu_ops;
++	if (dev->iommu && dev->iommu->iommu_dev) {
++		const struct iommu_ops *ops = dev->iommu->iommu_dev->ops;
+ 
+-	if (ops && ops->dev_enable_feat)
+-		return ops->dev_enable_feat(dev, feat);
++		if (ops->dev_enable_feat)
++			return ops->dev_enable_feat(dev, feat);
++	}
+ 
+ 	return -ENODEV;
+ }
+@@ -2894,10 +2896,12 @@ EXPORT_SYMBOL_GPL(iommu_dev_enable_feature);
+  */
+ int iommu_dev_disable_feature(struct device *dev, enum iommu_dev_features feat)
+ {
+-	const struct iommu_ops *ops = dev->bus->iommu_ops;
++	if (dev->iommu && dev->iommu->iommu_dev) {
++		const struct iommu_ops *ops = dev->iommu->iommu_dev->ops;
+ 
+-	if (ops && ops->dev_disable_feat)
+-		return ops->dev_disable_feat(dev, feat);
++		if (ops->dev_disable_feat)
++			return ops->dev_disable_feat(dev, feat);
++	}
+ 
+ 	return -EBUSY;
+ }
+@@ -2905,10 +2909,12 @@ EXPORT_SYMBOL_GPL(iommu_dev_disable_feature);
+ 
+ bool iommu_dev_feature_enabled(struct device *dev, enum iommu_dev_features feat)
+ {
+-	const struct iommu_ops *ops = dev->bus->iommu_ops;
++	if (dev->iommu && dev->iommu->iommu_dev) {
++		const struct iommu_ops *ops = dev->iommu->iommu_dev->ops;
+ 
+-	if (ops && ops->dev_feat_enabled)
+-		return ops->dev_feat_enabled(dev, feat);
++		if (ops->dev_feat_enabled)
++			return ops->dev_feat_enabled(dev, feat);
++	}
+ 
+ 	return false;
+ }
+diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
+index 6ecc007f07cd5..e168a682806a9 100644
+--- a/drivers/iommu/mtk_iommu.c
++++ b/drivers/iommu/mtk_iommu.c
+@@ -688,13 +688,6 @@ static const struct iommu_ops mtk_iommu_ops = {
+ static int mtk_iommu_hw_init(const struct mtk_iommu_data *data)
+ {
+ 	u32 regval;
+-	int ret;
+-
+-	ret = clk_prepare_enable(data->bclk);
+-	if (ret) {
+-		dev_err(data->dev, "Failed to enable iommu bclk(%d)\n", ret);
+-		return ret;
+-	}
+ 
+ 	if (data->plat_data->m4u_plat == M4U_MT8173) {
+ 		regval = F_MMU_PREFETCH_RT_REPLACE_MOD |
+@@ -760,7 +753,6 @@ static int mtk_iommu_hw_init(const struct mtk_iommu_data *data)
+ 	if (devm_request_irq(data->dev, data->irq, mtk_iommu_isr, 0,
+ 			     dev_name(data->dev), (void *)data)) {
+ 		writel_relaxed(0, data->base + REG_MMU_PT_BASE_ADDR);
+-		clk_disable_unprepare(data->bclk);
+ 		dev_err(data->dev, "Failed @ IRQ-%d Request\n", data->irq);
+ 		return -ENODEV;
+ 	}
+@@ -977,14 +969,19 @@ static int __maybe_unused mtk_iommu_runtime_resume(struct device *dev)
+ 	void __iomem *base = data->base;
+ 	int ret;
+ 
+-	/* Avoid first resume to affect the default value of registers below. */
+-	if (!m4u_dom)
+-		return 0;
+ 	ret = clk_prepare_enable(data->bclk);
+ 	if (ret) {
+ 		dev_err(data->dev, "Failed to enable clk(%d) in resume\n", ret);
+ 		return ret;
+ 	}
++
++	/*
++	 * Uppon first resume, only enable the clk and return, since the values of the
++	 * registers are not yet set.
++	 */
++	if (!m4u_dom)
++		return 0;
++
+ 	writel_relaxed(reg->wr_len_ctrl, base + REG_MMU_WR_LEN_CTRL);
+ 	writel_relaxed(reg->misc_ctrl, base + REG_MMU_MISC_CTRL);
+ 	writel_relaxed(reg->dcm_dis, base + REG_MMU_DCM_DIS);
+diff --git a/drivers/irqchip/irq-gic-v3-mbi.c b/drivers/irqchip/irq-gic-v3-mbi.c
+index 563a9b3662941..e81e89a81cb5b 100644
+--- a/drivers/irqchip/irq-gic-v3-mbi.c
++++ b/drivers/irqchip/irq-gic-v3-mbi.c
+@@ -303,7 +303,7 @@ int __init mbi_init(struct fwnode_handle *fwnode, struct irq_domain *parent)
+ 	reg = of_get_property(np, "mbi-alias", NULL);
+ 	if (reg) {
+ 		mbi_phys_base = of_translate_address(np, reg);
+-		if (mbi_phys_base == OF_BAD_ADDR) {
++		if (mbi_phys_base == (phys_addr_t)OF_BAD_ADDR) {
+ 			ret = -ENXIO;
+ 			goto err_free_mbi;
+ 		}
+diff --git a/drivers/mailbox/sprd-mailbox.c b/drivers/mailbox/sprd-mailbox.c
+index 4c325301a2fe8..94d9067dc8d09 100644
+--- a/drivers/mailbox/sprd-mailbox.c
++++ b/drivers/mailbox/sprd-mailbox.c
+@@ -60,6 +60,8 @@ struct sprd_mbox_priv {
+ 	struct clk		*clk;
+ 	u32			outbox_fifo_depth;
+ 
++	struct mutex		lock;
++	u32			refcnt;
+ 	struct mbox_chan	chan[SPRD_MBOX_CHAN_MAX];
+ };
+ 
+@@ -115,7 +117,11 @@ static irqreturn_t sprd_mbox_outbox_isr(int irq, void *data)
+ 		id = readl(priv->outbox_base + SPRD_MBOX_ID);
+ 
+ 		chan = &priv->chan[id];
+-		mbox_chan_received_data(chan, (void *)msg);
++		if (chan->cl)
++			mbox_chan_received_data(chan, (void *)msg);
++		else
++			dev_warn_ratelimited(priv->dev,
++				    "message's been dropped at ch[%d]\n", id);
+ 
+ 		/* Trigger to update outbox FIFO pointer */
+ 		writel(0x1, priv->outbox_base + SPRD_MBOX_TRIGGER);
+@@ -215,18 +221,22 @@ static int sprd_mbox_startup(struct mbox_chan *chan)
+ 	struct sprd_mbox_priv *priv = to_sprd_mbox_priv(chan->mbox);
+ 	u32 val;
+ 
+-	/* Select outbox FIFO mode and reset the outbox FIFO status */
+-	writel(0x0, priv->outbox_base + SPRD_MBOX_FIFO_RST);
++	mutex_lock(&priv->lock);
++	if (priv->refcnt++ == 0) {
++		/* Select outbox FIFO mode and reset the outbox FIFO status */
++		writel(0x0, priv->outbox_base + SPRD_MBOX_FIFO_RST);
+ 
+-	/* Enable inbox FIFO overflow and delivery interrupt */
+-	val = readl(priv->inbox_base + SPRD_MBOX_IRQ_MSK);
+-	val &= ~(SPRD_INBOX_FIFO_OVERFLOW_IRQ | SPRD_INBOX_FIFO_DELIVER_IRQ);
+-	writel(val, priv->inbox_base + SPRD_MBOX_IRQ_MSK);
++		/* Enable inbox FIFO overflow and delivery interrupt */
++		val = readl(priv->inbox_base + SPRD_MBOX_IRQ_MSK);
++		val &= ~(SPRD_INBOX_FIFO_OVERFLOW_IRQ | SPRD_INBOX_FIFO_DELIVER_IRQ);
++		writel(val, priv->inbox_base + SPRD_MBOX_IRQ_MSK);
+ 
+-	/* Enable outbox FIFO not empty interrupt */
+-	val = readl(priv->outbox_base + SPRD_MBOX_IRQ_MSK);
+-	val &= ~SPRD_OUTBOX_FIFO_NOT_EMPTY_IRQ;
+-	writel(val, priv->outbox_base + SPRD_MBOX_IRQ_MSK);
++		/* Enable outbox FIFO not empty interrupt */
++		val = readl(priv->outbox_base + SPRD_MBOX_IRQ_MSK);
++		val &= ~SPRD_OUTBOX_FIFO_NOT_EMPTY_IRQ;
++		writel(val, priv->outbox_base + SPRD_MBOX_IRQ_MSK);
++	}
++	mutex_unlock(&priv->lock);
+ 
+ 	return 0;
+ }
+@@ -235,9 +245,13 @@ static void sprd_mbox_shutdown(struct mbox_chan *chan)
+ {
+ 	struct sprd_mbox_priv *priv = to_sprd_mbox_priv(chan->mbox);
+ 
+-	/* Disable inbox & outbox interrupt */
+-	writel(SPRD_INBOX_FIFO_IRQ_MASK, priv->inbox_base + SPRD_MBOX_IRQ_MSK);
+-	writel(SPRD_OUTBOX_FIFO_IRQ_MASK, priv->outbox_base + SPRD_MBOX_IRQ_MSK);
++	mutex_lock(&priv->lock);
++	if (--priv->refcnt == 0) {
++		/* Disable inbox & outbox interrupt */
++		writel(SPRD_INBOX_FIFO_IRQ_MASK, priv->inbox_base + SPRD_MBOX_IRQ_MSK);
++		writel(SPRD_OUTBOX_FIFO_IRQ_MASK, priv->outbox_base + SPRD_MBOX_IRQ_MSK);
++	}
++	mutex_unlock(&priv->lock);
+ }
+ 
+ static const struct mbox_chan_ops sprd_mbox_ops = {
+@@ -266,6 +280,7 @@ static int sprd_mbox_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	priv->dev = dev;
++	mutex_init(&priv->lock);
+ 
+ 	/*
+ 	 * The Spreadtrum mailbox uses an inbox to send messages to the target
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 82d4e0880a994..4fb635c0baa07 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -110,13 +110,13 @@ static void __update_writeback_rate(struct cached_dev *dc)
+ 		int64_t fps;
+ 
+ 		if (c->gc_stats.in_use <= BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID) {
+-			fp_term = dc->writeback_rate_fp_term_low *
++			fp_term = (int64_t)dc->writeback_rate_fp_term_low *
+ 			(c->gc_stats.in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW);
+ 		} else if (c->gc_stats.in_use <= BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH) {
+-			fp_term = dc->writeback_rate_fp_term_mid *
++			fp_term = (int64_t)dc->writeback_rate_fp_term_mid *
+ 			(c->gc_stats.in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID);
+ 		} else {
+-			fp_term = dc->writeback_rate_fp_term_high *
++			fp_term = (int64_t)dc->writeback_rate_fp_term_high *
+ 			(c->gc_stats.in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH);
+ 		}
+ 		fps = div_s64(dirty, dirty_buckets) * fp_term;
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 200c5d0f08bf5..ea3130e116801 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -1722,6 +1722,8 @@ void md_bitmap_flush(struct mddev *mddev)
+ 	md_bitmap_daemon_work(mddev);
+ 	bitmap->daemon_lastrun -= sleep;
+ 	md_bitmap_daemon_work(mddev);
++	if (mddev->bitmap_info.external)
++		md_super_wait(mddev);
+ 	md_bitmap_update_sb(bitmap);
+ }
+ 
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 21da0c48f6c21..2a9553efc2d1b 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -734,7 +734,34 @@ void mddev_init(struct mddev *mddev)
+ }
+ EXPORT_SYMBOL_GPL(mddev_init);
+ 
++static struct mddev *mddev_find_locked(dev_t unit)
++{
++	struct mddev *mddev;
++
++	list_for_each_entry(mddev, &all_mddevs, all_mddevs)
++		if (mddev->unit == unit)
++			return mddev;
++
++	return NULL;
++}
++
+ static struct mddev *mddev_find(dev_t unit)
++{
++	struct mddev *mddev;
++
++	if (MAJOR(unit) != MD_MAJOR)
++		unit &= ~((1 << MdpMinorShift) - 1);
++
++	spin_lock(&all_mddevs_lock);
++	mddev = mddev_find_locked(unit);
++	if (mddev)
++		mddev_get(mddev);
++	spin_unlock(&all_mddevs_lock);
++
++	return mddev;
++}
++
++static struct mddev *mddev_find_or_alloc(dev_t unit)
+ {
+ 	struct mddev *mddev, *new = NULL;
+ 
+@@ -745,13 +772,13 @@ static struct mddev *mddev_find(dev_t unit)
+ 	spin_lock(&all_mddevs_lock);
+ 
+ 	if (unit) {
+-		list_for_each_entry(mddev, &all_mddevs, all_mddevs)
+-			if (mddev->unit == unit) {
+-				mddev_get(mddev);
+-				spin_unlock(&all_mddevs_lock);
+-				kfree(new);
+-				return mddev;
+-			}
++		mddev = mddev_find_locked(unit);
++		if (mddev) {
++			mddev_get(mddev);
++			spin_unlock(&all_mddevs_lock);
++			kfree(new);
++			return mddev;
++		}
+ 
+ 		if (new) {
+ 			list_add(&new->all_mddevs, &all_mddevs);
+@@ -777,12 +804,7 @@ static struct mddev *mddev_find(dev_t unit)
+ 				return NULL;
+ 			}
+ 
+-			is_free = 1;
+-			list_for_each_entry(mddev, &all_mddevs, all_mddevs)
+-				if (mddev->unit == dev) {
+-					is_free = 0;
+-					break;
+-				}
++			is_free = !mddev_find_locked(dev);
+ 		}
+ 		new->unit = dev;
+ 		new->md_minor = MINOR(dev);
+@@ -5644,7 +5666,7 @@ static int md_alloc(dev_t dev, char *name)
+ 	 * writing to /sys/module/md_mod/parameters/new_array.
+ 	 */
+ 	static DEFINE_MUTEX(disks_mutex);
+-	struct mddev *mddev = mddev_find(dev);
++	struct mddev *mddev = mddev_find_or_alloc(dev);
+ 	struct gendisk *disk;
+ 	int partitioned;
+ 	int shift;
+@@ -6524,11 +6546,9 @@ static void autorun_devices(int part)
+ 
+ 		md_probe(dev);
+ 		mddev = mddev_find(dev);
+-		if (!mddev || !mddev->gendisk) {
+-			if (mddev)
+-				mddev_put(mddev);
++		if (!mddev)
+ 			break;
+-		}
++
+ 		if (mddev_lock(mddev))
+ 			pr_warn("md: %s locked, cannot run\n", mdname(mddev));
+ 		else if (mddev->raid_disks || mddev->major_version
+@@ -7821,8 +7841,7 @@ static int md_open(struct block_device *bdev, fmode_t mode)
+ 		/* Wait until bdev->bd_disk is definitely gone */
+ 		if (work_pending(&mddev->del_work))
+ 			flush_workqueue(md_misc_wq);
+-		/* Then retry the open from the top */
+-		return -ERESTARTSYS;
++		return -EBUSY;
+ 	}
+ 	BUG_ON(mddev != bdev->bd_disk->private_data);
+ 
+@@ -8153,7 +8172,11 @@ static void *md_seq_start(struct seq_file *seq, loff_t *pos)
+ 	loff_t l = *pos;
+ 	struct mddev *mddev;
+ 
+-	if (l >= 0x10000)
++	if (l == 0x10000) {
++		++*pos;
++		return (void *)2;
++	}
++	if (l > 0x10000)
+ 		return NULL;
+ 	if (!l--)
+ 		/* header */
+@@ -9251,11 +9274,11 @@ void md_check_recovery(struct mddev *mddev)
+ 		}
+ 
+ 		if (mddev_is_clustered(mddev)) {
+-			struct md_rdev *rdev;
++			struct md_rdev *rdev, *tmp;
+ 			/* kick the device if another node issued a
+ 			 * remove disk.
+ 			 */
+-			rdev_for_each(rdev, mddev) {
++			rdev_for_each_safe(rdev, tmp, mddev) {
+ 				if (test_and_clear_bit(ClusterRemove, &rdev->flags) &&
+ 						rdev->raid_disk < 0)
+ 					md_kick_rdev_from_array(rdev);
+@@ -9569,7 +9592,7 @@ err_wq:
+ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
+ {
+ 	struct mdp_superblock_1 *sb = page_address(rdev->sb_page);
+-	struct md_rdev *rdev2;
++	struct md_rdev *rdev2, *tmp;
+ 	int role, ret;
+ 	char b[BDEVNAME_SIZE];
+ 
+@@ -9586,7 +9609,7 @@ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
+ 	}
+ 
+ 	/* Check for change of roles in the active devices */
+-	rdev_for_each(rdev2, mddev) {
++	rdev_for_each_safe(rdev2, tmp, mddev) {
+ 		if (test_bit(Faulty, &rdev2->flags))
+ 			continue;
+ 
+diff --git a/drivers/media/common/saa7146/saa7146_core.c b/drivers/media/common/saa7146/saa7146_core.c
+index f2d13b71416cb..e50fa0ff7c5d5 100644
+--- a/drivers/media/common/saa7146/saa7146_core.c
++++ b/drivers/media/common/saa7146/saa7146_core.c
+@@ -253,7 +253,7 @@ int saa7146_pgtable_build_single(struct pci_dev *pci, struct saa7146_pgtable *pt
+ 			 i, sg_dma_address(list), sg_dma_len(list),
+ 			 list->offset);
+ */
+-		for (p = 0; p * 4096 < list->length; p++, ptr++) {
++		for (p = 0; p * 4096 < sg_dma_len(list); p++, ptr++) {
+ 			*ptr = cpu_to_le32(sg_dma_address(list) + p * 4096);
+ 			nr_pages++;
+ 		}
+diff --git a/drivers/media/common/saa7146/saa7146_video.c b/drivers/media/common/saa7146/saa7146_video.c
+index 7b8795eca5893..66215d9106a42 100644
+--- a/drivers/media/common/saa7146/saa7146_video.c
++++ b/drivers/media/common/saa7146/saa7146_video.c
+@@ -247,9 +247,8 @@ static int saa7146_pgtable_build(struct saa7146_dev *dev, struct saa7146_buf *bu
+ 
+ 		/* walk all pages, copy all page addresses to ptr1 */
+ 		for (i = 0; i < length; i++, list++) {
+-			for (p = 0; p * 4096 < list->length; p++, ptr1++) {
++			for (p = 0; p * 4096 < sg_dma_len(list); p++, ptr1++)
+ 				*ptr1 = cpu_to_le32(sg_dma_address(list) - list->offset);
+-			}
+ 		}
+ /*
+ 		ptr1 = pt1->cpu;
+diff --git a/drivers/media/dvb-frontends/m88ds3103.c b/drivers/media/dvb-frontends/m88ds3103.c
+index cfa4cdde99d8a..02e8aa11e36e7 100644
+--- a/drivers/media/dvb-frontends/m88ds3103.c
++++ b/drivers/media/dvb-frontends/m88ds3103.c
+@@ -1904,8 +1904,8 @@ static int m88ds3103_probe(struct i2c_client *client,
+ 
+ 		dev->dt_client = i2c_new_dummy_device(client->adapter,
+ 						      dev->dt_addr);
+-		if (!dev->dt_client) {
+-			ret = -ENODEV;
++		if (IS_ERR(dev->dt_client)) {
++			ret = PTR_ERR(dev->dt_client);
+ 			goto err_kfree;
+ 		}
+ 	}
+diff --git a/drivers/media/i2c/ccs/ccs-core.c b/drivers/media/i2c/ccs/ccs-core.c
+index 15afbb4f5b317..4505594996bd8 100644
+--- a/drivers/media/i2c/ccs/ccs-core.c
++++ b/drivers/media/i2c/ccs/ccs-core.c
+@@ -3522,11 +3522,11 @@ static int ccs_probe(struct i2c_client *client)
+ 	sensor->pll.scale_n = CCS_LIM(sensor, SCALER_N_MIN);
+ 
+ 	ccs_create_subdev(sensor, sensor->scaler, " scaler", 2,
+-			  MEDIA_ENT_F_CAM_SENSOR);
++			  MEDIA_ENT_F_PROC_VIDEO_SCALER);
+ 	ccs_create_subdev(sensor, sensor->binner, " binner", 2,
+ 			  MEDIA_ENT_F_PROC_VIDEO_SCALER);
+ 	ccs_create_subdev(sensor, sensor->pixel_array, " pixel_array", 1,
+-			  MEDIA_ENT_F_PROC_VIDEO_SCALER);
++			  MEDIA_ENT_F_CAM_SENSOR);
+ 
+ 	rval = ccs_init_controls(sensor);
+ 	if (rval < 0)
+diff --git a/drivers/media/i2c/imx219.c b/drivers/media/i2c/imx219.c
+index 6e3382b85a90d..49ba39418360a 100644
+--- a/drivers/media/i2c/imx219.c
++++ b/drivers/media/i2c/imx219.c
+@@ -1035,29 +1035,47 @@ static int imx219_start_streaming(struct imx219 *imx219)
+ 	const struct imx219_reg_list *reg_list;
+ 	int ret;
+ 
++	ret = pm_runtime_get_sync(&client->dev);
++	if (ret < 0) {
++		pm_runtime_put_noidle(&client->dev);
++		return ret;
++	}
++
+ 	/* Apply default values of current mode */
+ 	reg_list = &imx219->mode->reg_list;
+ 	ret = imx219_write_regs(imx219, reg_list->regs, reg_list->num_of_regs);
+ 	if (ret) {
+ 		dev_err(&client->dev, "%s failed to set mode\n", __func__);
+-		return ret;
++		goto err_rpm_put;
+ 	}
+ 
+ 	ret = imx219_set_framefmt(imx219);
+ 	if (ret) {
+ 		dev_err(&client->dev, "%s failed to set frame format: %d\n",
+ 			__func__, ret);
+-		return ret;
++		goto err_rpm_put;
+ 	}
+ 
+ 	/* Apply customized values from user */
+ 	ret =  __v4l2_ctrl_handler_setup(imx219->sd.ctrl_handler);
+ 	if (ret)
+-		return ret;
++		goto err_rpm_put;
+ 
+ 	/* set stream on register */
+-	return imx219_write_reg(imx219, IMX219_REG_MODE_SELECT,
+-				IMX219_REG_VALUE_08BIT, IMX219_MODE_STREAMING);
++	ret = imx219_write_reg(imx219, IMX219_REG_MODE_SELECT,
++			       IMX219_REG_VALUE_08BIT, IMX219_MODE_STREAMING);
++	if (ret)
++		goto err_rpm_put;
++
++	/* vflip and hflip cannot change during streaming */
++	__v4l2_ctrl_grab(imx219->vflip, true);
++	__v4l2_ctrl_grab(imx219->hflip, true);
++
++	return 0;
++
++err_rpm_put:
++	pm_runtime_put(&client->dev);
++	return ret;
+ }
+ 
+ static void imx219_stop_streaming(struct imx219 *imx219)
+@@ -1070,12 +1088,16 @@ static void imx219_stop_streaming(struct imx219 *imx219)
+ 			       IMX219_REG_VALUE_08BIT, IMX219_MODE_STANDBY);
+ 	if (ret)
+ 		dev_err(&client->dev, "%s failed to set stream\n", __func__);
++
++	__v4l2_ctrl_grab(imx219->vflip, false);
++	__v4l2_ctrl_grab(imx219->hflip, false);
++
++	pm_runtime_put(&client->dev);
+ }
+ 
+ static int imx219_set_stream(struct v4l2_subdev *sd, int enable)
+ {
+ 	struct imx219 *imx219 = to_imx219(sd);
+-	struct i2c_client *client = v4l2_get_subdevdata(sd);
+ 	int ret = 0;
+ 
+ 	mutex_lock(&imx219->mutex);
+@@ -1085,36 +1107,23 @@ static int imx219_set_stream(struct v4l2_subdev *sd, int enable)
+ 	}
+ 
+ 	if (enable) {
+-		ret = pm_runtime_get_sync(&client->dev);
+-		if (ret < 0) {
+-			pm_runtime_put_noidle(&client->dev);
+-			goto err_unlock;
+-		}
+-
+ 		/*
+ 		 * Apply default & customized values
+ 		 * and then start streaming.
+ 		 */
+ 		ret = imx219_start_streaming(imx219);
+ 		if (ret)
+-			goto err_rpm_put;
++			goto err_unlock;
+ 	} else {
+ 		imx219_stop_streaming(imx219);
+-		pm_runtime_put(&client->dev);
+ 	}
+ 
+ 	imx219->streaming = enable;
+ 
+-	/* vflip and hflip cannot change during streaming */
+-	__v4l2_ctrl_grab(imx219->vflip, enable);
+-	__v4l2_ctrl_grab(imx219->hflip, enable);
+-
+ 	mutex_unlock(&imx219->mutex);
+ 
+ 	return ret;
+ 
+-err_rpm_put:
+-	pm_runtime_put(&client->dev);
+ err_unlock:
+ 	mutex_unlock(&imx219->mutex);
+ 
+diff --git a/drivers/media/i2c/rdacm21.c b/drivers/media/i2c/rdacm21.c
+index dcc21515e5a43..179d107f494ca 100644
+--- a/drivers/media/i2c/rdacm21.c
++++ b/drivers/media/i2c/rdacm21.c
+@@ -345,7 +345,7 @@ static int ov10640_initialize(struct rdacm21_device *dev)
+ 	/* Read OV10640 ID to test communications. */
+ 	ov490_write_reg(dev, OV490_SCCB_SLAVE0_DIR, OV490_SCCB_SLAVE_READ);
+ 	ov490_write_reg(dev, OV490_SCCB_SLAVE0_ADDR_HIGH, OV10640_CHIP_ID >> 8);
+-	ov490_write_reg(dev, OV490_SCCB_SLAVE0_ADDR_LOW, (u8)OV10640_CHIP_ID);
++	ov490_write_reg(dev, OV490_SCCB_SLAVE0_ADDR_LOW, OV10640_CHIP_ID & 0xff);
+ 
+ 	/* Trigger SCCB slave transaction and give it some time to complete. */
+ 	ov490_write_reg(dev, OV490_HOST_CMD, OV490_HOST_CMD_TRIGGER);
+diff --git a/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c b/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
+index 6e8c0c230e111..fecef85bd62eb 100644
+--- a/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
++++ b/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
+@@ -302,7 +302,7 @@ static int cio2_csi2_calc_timing(struct cio2_device *cio2, struct cio2_queue *q,
+ 	if (!q->sensor)
+ 		return -ENODEV;
+ 
+-	freq = v4l2_get_link_freq(q->sensor->ctrl_handler, bpp, lanes);
++	freq = v4l2_get_link_freq(q->sensor->ctrl_handler, bpp, lanes * 2);
+ 	if (freq < 0) {
+ 		dev_err(dev, "error %lld, invalid link_freq\n", freq);
+ 		return freq;
+diff --git a/drivers/media/pci/saa7134/saa7134-core.c b/drivers/media/pci/saa7134/saa7134-core.c
+index 391572a6ec76a..efb757d5168a6 100644
+--- a/drivers/media/pci/saa7134/saa7134-core.c
++++ b/drivers/media/pci/saa7134/saa7134-core.c
+@@ -243,7 +243,7 @@ int saa7134_pgtable_build(struct pci_dev *pci, struct saa7134_pgtable *pt,
+ 
+ 	ptr = pt->cpu + startpage;
+ 	for (i = 0; i < length; i++, list = sg_next(list)) {
+-		for (p = 0; p * 4096 < list->length; p++, ptr++)
++		for (p = 0; p * 4096 < sg_dma_len(list); p++, ptr++)
+ 			*ptr = cpu_to_le32(sg_dma_address(list) +
+ 						list->offset + p * 4096);
+ 	}
+diff --git a/drivers/media/platform/Kconfig b/drivers/media/platform/Kconfig
+index fd1831e97b22f..1ddb5d6354cf3 100644
+--- a/drivers/media/platform/Kconfig
++++ b/drivers/media/platform/Kconfig
+@@ -244,6 +244,7 @@ config VIDEO_MEDIATEK_JPEG
+ 	depends on MTK_IOMMU_V1 || MTK_IOMMU || COMPILE_TEST
+ 	depends on VIDEO_DEV && VIDEO_V4L2
+ 	depends on ARCH_MEDIATEK || COMPILE_TEST
++	depends on MTK_SMI || (COMPILE_TEST && MTK_SMI=n)
+ 	select VIDEOBUF2_DMA_CONTIG
+ 	select V4L2_MEM2MEM_DEV
+ 	help
+@@ -271,6 +272,7 @@ config VIDEO_MEDIATEK_MDP
+ 	depends on MTK_IOMMU || COMPILE_TEST
+ 	depends on VIDEO_DEV && VIDEO_V4L2
+ 	depends on ARCH_MEDIATEK || COMPILE_TEST
++	depends on MTK_SMI || (COMPILE_TEST && MTK_SMI=n)
+ 	select VIDEOBUF2_DMA_CONTIG
+ 	select V4L2_MEM2MEM_DEV
+ 	select VIDEO_MEDIATEK_VPU
+@@ -291,6 +293,7 @@ config VIDEO_MEDIATEK_VCODEC
+ 	# our dependencies, to avoid missing symbols during link.
+ 	depends on VIDEO_MEDIATEK_VPU || !VIDEO_MEDIATEK_VPU
+ 	depends on MTK_SCP || !MTK_SCP
++	depends on MTK_SMI || (COMPILE_TEST && MTK_SMI=n)
+ 	select VIDEOBUF2_DMA_CONTIG
+ 	select V4L2_MEM2MEM_DEV
+ 	select VIDEO_MEDIATEK_VCODEC_VPU if VIDEO_MEDIATEK_VPU
+diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c
+index f2c4dadd6a0eb..7bb6babdcade0 100644
+--- a/drivers/media/platform/aspeed-video.c
++++ b/drivers/media/platform/aspeed-video.c
+@@ -514,8 +514,8 @@ static void aspeed_video_off(struct aspeed_video *video)
+ 	aspeed_video_write(video, VE_INTERRUPT_STATUS, 0xffffffff);
+ 
+ 	/* Turn off the relevant clocks */
+-	clk_disable(video->vclk);
+ 	clk_disable(video->eclk);
++	clk_disable(video->vclk);
+ 
+ 	clear_bit(VIDEO_CLOCKS_ON, &video->flags);
+ }
+@@ -526,8 +526,8 @@ static void aspeed_video_on(struct aspeed_video *video)
+ 		return;
+ 
+ 	/* Turn on the relevant clocks */
+-	clk_enable(video->eclk);
+ 	clk_enable(video->vclk);
++	clk_enable(video->eclk);
+ 
+ 	set_bit(VIDEO_CLOCKS_ON, &video->flags);
+ }
+@@ -1719,8 +1719,11 @@ static int aspeed_video_probe(struct platform_device *pdev)
+ 		return rc;
+ 
+ 	rc = aspeed_video_setup_video(video);
+-	if (rc)
++	if (rc) {
++		clk_unprepare(video->vclk);
++		clk_unprepare(video->eclk);
+ 		return rc;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/platform/meson/ge2d/ge2d.c b/drivers/media/platform/meson/ge2d/ge2d.c
+index 153612ca96fc1..a1393fefa8aea 100644
+--- a/drivers/media/platform/meson/ge2d/ge2d.c
++++ b/drivers/media/platform/meson/ge2d/ge2d.c
+@@ -757,7 +757,7 @@ static int ge2d_s_ctrl(struct v4l2_ctrl *ctrl)
+ 
+ 		if (ctrl->val == 90) {
+ 			ctx->hflip = 0;
+-			ctx->vflip = 0;
++			ctx->vflip = 1;
+ 			ctx->xy_swap = 1;
+ 		} else if (ctrl->val == 180) {
+ 			ctx->hflip = 1;
+@@ -765,7 +765,7 @@ static int ge2d_s_ctrl(struct v4l2_ctrl *ctrl)
+ 			ctx->xy_swap = 0;
+ 		} else if (ctrl->val == 270) {
+ 			ctx->hflip = 1;
+-			ctx->vflip = 1;
++			ctx->vflip = 0;
+ 			ctx->xy_swap = 1;
+ 		} else {
+ 			ctx->hflip = 0;
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index d2842f496b474..ae374bb2a48f0 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -224,11 +224,11 @@ static int venus_probe(struct platform_device *pdev)
+ 	if (IS_ERR(core->base))
+ 		return PTR_ERR(core->base);
+ 
+-	core->video_path = of_icc_get(dev, "video-mem");
++	core->video_path = devm_of_icc_get(dev, "video-mem");
+ 	if (IS_ERR(core->video_path))
+ 		return PTR_ERR(core->video_path);
+ 
+-	core->cpucfg_path = of_icc_get(dev, "cpu-cfg");
++	core->cpucfg_path = devm_of_icc_get(dev, "cpu-cfg");
+ 	if (IS_ERR(core->cpucfg_path))
+ 		return PTR_ERR(core->cpucfg_path);
+ 
+@@ -367,9 +367,6 @@ static int venus_remove(struct platform_device *pdev)
+ 
+ 	hfi_destroy(core);
+ 
+-	icc_put(core->video_path);
+-	icc_put(core->cpucfg_path);
+-
+ 	v4l2_device_unregister(&core->v4l2_dev);
+ 
+ 	mutex_destroy(&core->pm_lock);
+diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-resizer.c b/drivers/media/platform/rockchip/rkisp1/rkisp1-resizer.c
+index 813670ed9577b..79deed8adceab 100644
+--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-resizer.c
++++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-resizer.c
+@@ -520,14 +520,15 @@ static void rkisp1_rsz_set_src_fmt(struct rkisp1_resizer *rsz,
+ 				   struct v4l2_mbus_framefmt *format,
+ 				   unsigned int which)
+ {
+-	const struct rkisp1_isp_mbus_info *mbus_info;
+-	struct v4l2_mbus_framefmt *src_fmt;
++	const struct rkisp1_isp_mbus_info *sink_mbus_info;
++	struct v4l2_mbus_framefmt *src_fmt, *sink_fmt;
+ 
++	sink_fmt = rkisp1_rsz_get_pad_fmt(rsz, cfg, RKISP1_RSZ_PAD_SINK, which);
+ 	src_fmt = rkisp1_rsz_get_pad_fmt(rsz, cfg, RKISP1_RSZ_PAD_SRC, which);
+-	mbus_info = rkisp1_isp_mbus_info_get(src_fmt->code);
++	sink_mbus_info = rkisp1_isp_mbus_info_get(sink_fmt->code);
+ 
+ 	/* for YUV formats, userspace can change the mbus code on the src pad if it is supported */
+-	if (mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV &&
++	if (sink_mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV &&
+ 	    rkisp1_rsz_get_yuv_mbus_info(format->code))
+ 		src_fmt->code = format->code;
+ 
+diff --git a/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c b/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c
+index b55de9ab64d8b..3181d0781b613 100644
+--- a/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c
++++ b/drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c
+@@ -151,8 +151,10 @@ static int sun6i_video_start_streaming(struct vb2_queue *vq, unsigned int count)
+ 	}
+ 
+ 	subdev = sun6i_video_remote_subdev(video, NULL);
+-	if (!subdev)
++	if (!subdev) {
++		ret = -EINVAL;
+ 		goto stop_media_pipeline;
++	}
+ 
+ 	config.pixelformat = video->fmt.fmt.pix.pixelformat;
+ 	config.code = video->mbus_code;
+diff --git a/drivers/media/test-drivers/vivid/vivid-vid-out.c b/drivers/media/test-drivers/vivid/vivid-vid-out.c
+index ac1e981e83420..9f731f085179e 100644
+--- a/drivers/media/test-drivers/vivid/vivid-vid-out.c
++++ b/drivers/media/test-drivers/vivid/vivid-vid-out.c
+@@ -1021,7 +1021,7 @@ int vivid_vid_out_s_fbuf(struct file *file, void *fh,
+ 		return -EINVAL;
+ 	}
+ 	dev->fbuf_out_flags &= ~(chroma_flags | alpha_flags);
+-	dev->fbuf_out_flags = a->flags & (chroma_flags | alpha_flags);
++	dev->fbuf_out_flags |= a->flags & (chroma_flags | alpha_flags);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/tuners/m88rs6000t.c b/drivers/media/tuners/m88rs6000t.c
+index b3505f4024764..8647c50b66e50 100644
+--- a/drivers/media/tuners/m88rs6000t.c
++++ b/drivers/media/tuners/m88rs6000t.c
+@@ -525,7 +525,7 @@ static int m88rs6000t_get_rf_strength(struct dvb_frontend *fe, u16 *strength)
+ 	PGA2_cri = PGA2_GC >> 2;
+ 	PGA2_crf = PGA2_GC & 0x03;
+ 
+-	for (i = 0; i <= RF_GC; i++)
++	for (i = 0; i <= RF_GC && i < ARRAY_SIZE(RFGS); i++)
+ 		RFG += RFGS[i];
+ 
+ 	if (RF_GC == 0)
+@@ -537,12 +537,12 @@ static int m88rs6000t_get_rf_strength(struct dvb_frontend *fe, u16 *strength)
+ 	if (RF_GC == 3)
+ 		RFG += 100;
+ 
+-	for (i = 0; i <= IF_GC; i++)
++	for (i = 0; i <= IF_GC && i < ARRAY_SIZE(IFGS); i++)
+ 		IFG += IFGS[i];
+ 
+ 	TIAG = TIA_GC * TIA_GS;
+ 
+-	for (i = 0; i <= BB_GC; i++)
++	for (i = 0; i <= BB_GC && i < ARRAY_SIZE(BBGS); i++)
+ 		BBG += BBGS[i];
+ 
+ 	PGA2G = PGA2_cri * PGA2_cri_GS + PGA2_crf * PGA2_crf_GS;
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
+index 4f0209695f132..6219c81857827 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls.c
+@@ -2552,7 +2552,15 @@ void v4l2_ctrl_handler_free(struct v4l2_ctrl_handler *hdl)
+ 	if (hdl == NULL || hdl->buckets == NULL)
+ 		return;
+ 
+-	if (!hdl->req_obj.req && !list_empty(&hdl->requests)) {
++	/*
++	 * If the main handler is freed and it is used by handler objects in
++	 * outstanding requests, then unbind and put those objects before
++	 * freeing the main handler.
++	 *
++	 * The main handler can be identified by having a NULL ops pointer in
++	 * the request object.
++	 */
++	if (!hdl->req_obj.ops && !list_empty(&hdl->requests)) {
+ 		struct v4l2_ctrl_handler *req, *next_req;
+ 
+ 		list_for_each_entry_safe(req, next_req, &hdl->requests, requests) {
+@@ -3595,8 +3603,8 @@ static void v4l2_ctrl_request_unbind(struct media_request_object *obj)
+ 		container_of(obj, struct v4l2_ctrl_handler, req_obj);
+ 	struct v4l2_ctrl_handler *main_hdl = obj->priv;
+ 
+-	list_del_init(&hdl->requests);
+ 	mutex_lock(main_hdl->lock);
++	list_del_init(&hdl->requests);
+ 	if (hdl->request_is_queued) {
+ 		list_del_init(&hdl->requests_queued);
+ 		hdl->request_is_queued = false;
+@@ -3655,8 +3663,11 @@ static int v4l2_ctrl_request_bind(struct media_request *req,
+ 	if (!ret) {
+ 		ret = media_request_object_bind(req, &req_ops,
+ 						from, false, &hdl->req_obj);
+-		if (!ret)
++		if (!ret) {
++			mutex_lock(from->lock);
+ 			list_add_tail(&hdl->requests, &from->requests);
++			mutex_unlock(from->lock);
++		}
+ 	}
+ 	return ret;
+ }
+diff --git a/drivers/memory/omap-gpmc.c b/drivers/memory/omap-gpmc.c
+index cfa730cfd1453..f80c2ea39ca4c 100644
+--- a/drivers/memory/omap-gpmc.c
++++ b/drivers/memory/omap-gpmc.c
+@@ -1009,8 +1009,8 @@ EXPORT_SYMBOL(gpmc_cs_request);
+ 
+ void gpmc_cs_free(int cs)
+ {
+-	struct gpmc_cs_data *gpmc = &gpmc_cs[cs];
+-	struct resource *res = &gpmc->mem;
++	struct gpmc_cs_data *gpmc;
++	struct resource *res;
+ 
+ 	spin_lock(&gpmc_mem_lock);
+ 	if (cs >= gpmc_cs_num || cs < 0 || !gpmc_cs_reserved(cs)) {
+@@ -1018,6 +1018,9 @@ void gpmc_cs_free(int cs)
+ 		spin_unlock(&gpmc_mem_lock);
+ 		return;
+ 	}
++	gpmc = &gpmc_cs[cs];
++	res = &gpmc->mem;
++
+ 	gpmc_cs_disable_mem(cs);
+ 	if (res->flags)
+ 		release_resource(res);
+diff --git a/drivers/memory/pl353-smc.c b/drivers/memory/pl353-smc.c
+index 3b5b1045edd90..9c0a284167773 100644
+--- a/drivers/memory/pl353-smc.c
++++ b/drivers/memory/pl353-smc.c
+@@ -63,7 +63,7 @@
+ /* ECC memory config register specific constants */
+ #define PL353_SMC_ECC_MEMCFG_MODE_MASK	0xC
+ #define PL353_SMC_ECC_MEMCFG_MODE_SHIFT	2
+-#define PL353_SMC_ECC_MEMCFG_PGSIZE_MASK	0xC
++#define PL353_SMC_ECC_MEMCFG_PGSIZE_MASK	0x3
+ 
+ #define PL353_SMC_DC_UPT_NAND_REGS	((4 << 23) |	/* CS: NAND chip */ \
+ 				 (2 << 21))	/* UpdateRegs operation */
+diff --git a/drivers/memory/renesas-rpc-if.c b/drivers/memory/renesas-rpc-if.c
+index 8d36e221def14..45eed659b0c6d 100644
+--- a/drivers/memory/renesas-rpc-if.c
++++ b/drivers/memory/renesas-rpc-if.c
+@@ -192,10 +192,10 @@ int rpcif_sw_init(struct rpcif *rpc, struct device *dev)
+ 	}
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dirmap");
+-	rpc->size = resource_size(res);
+ 	rpc->dirmap = devm_ioremap_resource(&pdev->dev, res);
+ 	if (IS_ERR(rpc->dirmap))
+ 		rpc->dirmap = NULL;
++	rpc->size = resource_size(res);
+ 
+ 	rpc->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+ 
+diff --git a/drivers/memory/samsung/exynos5422-dmc.c b/drivers/memory/samsung/exynos5422-dmc.c
+index 1dabb509dec3a..dee503640e125 100644
+--- a/drivers/memory/samsung/exynos5422-dmc.c
++++ b/drivers/memory/samsung/exynos5422-dmc.c
+@@ -1298,7 +1298,9 @@ static int exynos5_dmc_init_clks(struct exynos5_dmc *dmc)
+ 
+ 	dmc->curr_volt = target_volt;
+ 
+-	clk_set_parent(dmc->mout_mx_mspll_ccore, dmc->mout_spll);
++	ret = clk_set_parent(dmc->mout_mx_mspll_ccore, dmc->mout_spll);
++	if (ret)
++		return ret;
+ 
+ 	clk_prepare_enable(dmc->fout_bpll);
+ 	clk_prepare_enable(dmc->mout_bpll);
+diff --git a/drivers/mfd/intel_pmt.c b/drivers/mfd/intel_pmt.c
+index 744b230cdccaa..65da2b17a2040 100644
+--- a/drivers/mfd/intel_pmt.c
++++ b/drivers/mfd/intel_pmt.c
+@@ -79,19 +79,18 @@ static int pmt_add_dev(struct pci_dev *pdev, struct intel_dvsec_header *header,
+ 	case DVSEC_INTEL_ID_WATCHER:
+ 		if (quirks & PMT_QUIRK_NO_WATCHER) {
+ 			dev_info(dev, "Watcher not supported\n");
+-			return 0;
++			return -EINVAL;
+ 		}
+ 		name = "pmt_watcher";
+ 		break;
+ 	case DVSEC_INTEL_ID_CRASHLOG:
+ 		if (quirks & PMT_QUIRK_NO_CRASHLOG) {
+ 			dev_info(dev, "Crashlog not supported\n");
+-			return 0;
++			return -EINVAL;
+ 		}
+ 		name = "pmt_crashlog";
+ 		break;
+ 	default:
+-		dev_err(dev, "Unrecognized PMT capability: %d\n", id);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -174,12 +173,8 @@ static int pmt_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		header.offset = INTEL_DVSEC_TABLE_OFFSET(table);
+ 
+ 		ret = pmt_add_dev(pdev, &header, quirks);
+-		if (ret) {
+-			dev_warn(&pdev->dev,
+-				 "Failed to add device for DVSEC id %d\n",
+-				 header.id);
++		if (ret)
+ 			continue;
+-		}
+ 
+ 		found_devices = true;
+ 	} while (true);
+diff --git a/drivers/mfd/stm32-timers.c b/drivers/mfd/stm32-timers.c
+index add6033591242..44ed2fce03196 100644
+--- a/drivers/mfd/stm32-timers.c
++++ b/drivers/mfd/stm32-timers.c
+@@ -158,13 +158,18 @@ static const struct regmap_config stm32_timers_regmap_cfg = {
+ 
+ static void stm32_timers_get_arr_size(struct stm32_timers *ddata)
+ {
++	u32 arr;
++
++	/* Backup ARR to restore it after getting the maximum value */
++	regmap_read(ddata->regmap, TIM_ARR, &arr);
++
+ 	/*
+ 	 * Only the available bits will be written so when readback
+ 	 * we get the maximum value of auto reload register
+ 	 */
+ 	regmap_write(ddata->regmap, TIM_ARR, ~0L);
+ 	regmap_read(ddata->regmap, TIM_ARR, &ddata->max_arr);
+-	regmap_write(ddata->regmap, TIM_ARR, 0x0);
++	regmap_write(ddata->regmap, TIM_ARR, arr);
+ }
+ 
+ static int stm32_timers_dma_probe(struct device *dev,
+diff --git a/drivers/mfd/stmpe.c b/drivers/mfd/stmpe.c
+index 90f3292230c9c..1dd39483e7c14 100644
+--- a/drivers/mfd/stmpe.c
++++ b/drivers/mfd/stmpe.c
+@@ -312,7 +312,7 @@ EXPORT_SYMBOL_GPL(stmpe_set_altfunc);
+  * GPIO (all variants)
+  */
+ 
+-static const struct resource stmpe_gpio_resources[] = {
++static struct resource stmpe_gpio_resources[] = {
+ 	/* Start and end filled dynamically */
+ 	{
+ 		.flags	= IORESOURCE_IRQ,
+@@ -336,7 +336,8 @@ static const struct mfd_cell stmpe_gpio_cell_noirq = {
+  * Keypad (1601, 2401, 2403)
+  */
+ 
+-static const struct resource stmpe_keypad_resources[] = {
++static struct resource stmpe_keypad_resources[] = {
++	/* Start and end filled dynamically */
+ 	{
+ 		.name	= "KEYPAD",
+ 		.flags	= IORESOURCE_IRQ,
+@@ -357,7 +358,8 @@ static const struct mfd_cell stmpe_keypad_cell = {
+ /*
+  * PWM (1601, 2401, 2403)
+  */
+-static const struct resource stmpe_pwm_resources[] = {
++static struct resource stmpe_pwm_resources[] = {
++	/* Start and end filled dynamically */
+ 	{
+ 		.name	= "PWM0",
+ 		.flags	= IORESOURCE_IRQ,
+@@ -445,7 +447,8 @@ static struct stmpe_variant_info stmpe801_noirq = {
+  * Touchscreen (STMPE811 or STMPE610)
+  */
+ 
+-static const struct resource stmpe_ts_resources[] = {
++static struct resource stmpe_ts_resources[] = {
++	/* Start and end filled dynamically */
+ 	{
+ 		.name	= "TOUCH_DET",
+ 		.flags	= IORESOURCE_IRQ,
+@@ -467,7 +470,8 @@ static const struct mfd_cell stmpe_ts_cell = {
+  * ADC (STMPE811)
+  */
+ 
+-static const struct resource stmpe_adc_resources[] = {
++static struct resource stmpe_adc_resources[] = {
++	/* Start and end filled dynamically */
+ 	{
+ 		.name	= "STMPE_TEMP_SENS",
+ 		.flags	= IORESOURCE_IRQ,
+diff --git a/drivers/misc/lis3lv02d/lis3lv02d.c b/drivers/misc/lis3lv02d/lis3lv02d.c
+index dd65cedf3b125..9d14bf444481b 100644
+--- a/drivers/misc/lis3lv02d/lis3lv02d.c
++++ b/drivers/misc/lis3lv02d/lis3lv02d.c
+@@ -208,7 +208,7 @@ static int lis3_3dc_rates[16] = {0, 1, 10, 25, 50, 100, 200, 400, 1600, 5000};
+ static int lis3_3dlh_rates[4] = {50, 100, 400, 1000};
+ 
+ /* ODR is Output Data Rate */
+-static int lis3lv02d_get_odr(struct lis3lv02d *lis3)
++static int lis3lv02d_get_odr_index(struct lis3lv02d *lis3)
+ {
+ 	u8 ctrl;
+ 	int shift;
+@@ -216,15 +216,23 @@ static int lis3lv02d_get_odr(struct lis3lv02d *lis3)
+ 	lis3->read(lis3, CTRL_REG1, &ctrl);
+ 	ctrl &= lis3->odr_mask;
+ 	shift = ffs(lis3->odr_mask) - 1;
+-	return lis3->odrs[(ctrl >> shift)];
++	return (ctrl >> shift);
+ }
+ 
+ static int lis3lv02d_get_pwron_wait(struct lis3lv02d *lis3)
+ {
+-	int div = lis3lv02d_get_odr(lis3);
++	int odr_idx = lis3lv02d_get_odr_index(lis3);
++	int div = lis3->odrs[odr_idx];
+ 
+-	if (WARN_ONCE(div == 0, "device returned spurious data"))
++	if (div == 0) {
++		if (odr_idx == 0) {
++			/* Power-down mode, not sampling, no need to sleep */
++			return 0;
++		}
++
++		dev_err(&lis3->pdev->dev, "Error unknown odrs-index: %d\n", odr_idx);
+ 		return -ENXIO;
++	}
+ 
+ 	/* LIS3 power on delay is quite long */
+ 	msleep(lis3->pwron_delay / div);
+@@ -816,9 +824,12 @@ static ssize_t lis3lv02d_rate_show(struct device *dev,
+ 			struct device_attribute *attr, char *buf)
+ {
+ 	struct lis3lv02d *lis3 = dev_get_drvdata(dev);
++	int odr_idx;
+ 
+ 	lis3lv02d_sysfs_poweron(lis3);
+-	return sprintf(buf, "%d\n", lis3lv02d_get_odr(lis3));
++
++	odr_idx = lis3lv02d_get_odr_index(lis3);
++	return sprintf(buf, "%d\n", lis3->odrs[odr_idx]);
+ }
+ 
+ static ssize_t lis3lv02d_rate_set(struct device *dev,
+diff --git a/drivers/misc/vmw_vmci/vmci_doorbell.c b/drivers/misc/vmw_vmci/vmci_doorbell.c
+index 345addd9306de..fa8a7fce4481b 100644
+--- a/drivers/misc/vmw_vmci/vmci_doorbell.c
++++ b/drivers/misc/vmw_vmci/vmci_doorbell.c
+@@ -326,7 +326,7 @@ int vmci_dbell_host_context_notify(u32 src_cid, struct vmci_handle handle)
+ bool vmci_dbell_register_notification_bitmap(u64 bitmap_ppn)
+ {
+ 	int result;
+-	struct vmci_notify_bm_set_msg bitmap_set_msg;
++	struct vmci_notify_bm_set_msg bitmap_set_msg = { };
+ 
+ 	bitmap_set_msg.hdr.dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+ 						  VMCI_SET_NOTIFY_BITMAP);
+diff --git a/drivers/misc/vmw_vmci/vmci_guest.c b/drivers/misc/vmw_vmci/vmci_guest.c
+index cc8eeb361fcdb..1018dc77269d4 100644
+--- a/drivers/misc/vmw_vmci/vmci_guest.c
++++ b/drivers/misc/vmw_vmci/vmci_guest.c
+@@ -168,7 +168,7 @@ static int vmci_check_host_caps(struct pci_dev *pdev)
+ 				VMCI_UTIL_NUM_RESOURCES * sizeof(u32);
+ 	struct vmci_datagram *check_msg;
+ 
+-	check_msg = kmalloc(msg_size, GFP_KERNEL);
++	check_msg = kzalloc(msg_size, GFP_KERNEL);
+ 	if (!check_msg) {
+ 		dev_err(&pdev->dev, "%s: Insufficient memory\n", __func__);
+ 		return -ENOMEM;
+diff --git a/drivers/mtd/maps/physmap-core.c b/drivers/mtd/maps/physmap-core.c
+index 001ed5deb622a..4f63b8430c710 100644
+--- a/drivers/mtd/maps/physmap-core.c
++++ b/drivers/mtd/maps/physmap-core.c
+@@ -69,8 +69,10 @@ static int physmap_flash_remove(struct platform_device *dev)
+ 	int i, err = 0;
+ 
+ 	info = platform_get_drvdata(dev);
+-	if (!info)
++	if (!info) {
++		err = -EINVAL;
+ 		goto out;
++	}
+ 
+ 	if (info->cmtd) {
+ 		err = mtd_device_unregister(info->cmtd);
+diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
+index 323035d4f2d01..688de663cabf6 100644
+--- a/drivers/mtd/mtdchar.c
++++ b/drivers/mtd/mtdchar.c
+@@ -651,16 +651,12 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
+ 	case MEMGETINFO:
+ 	case MEMREADOOB:
+ 	case MEMREADOOB64:
+-	case MEMLOCK:
+-	case MEMUNLOCK:
+ 	case MEMISLOCKED:
+ 	case MEMGETOOBSEL:
+ 	case MEMGETBADBLOCK:
+-	case MEMSETBADBLOCK:
+ 	case OTPSELECT:
+ 	case OTPGETREGIONCOUNT:
+ 	case OTPGETREGIONINFO:
+-	case OTPLOCK:
+ 	case ECCGETLAYOUT:
+ 	case ECCGETSTATS:
+ 	case MTDFILEMODE:
+@@ -671,9 +667,13 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
+ 	/* "dangerous" commands */
+ 	case MEMERASE:
+ 	case MEMERASE64:
++	case MEMLOCK:
++	case MEMUNLOCK:
++	case MEMSETBADBLOCK:
+ 	case MEMWRITEOOB:
+ 	case MEMWRITEOOB64:
+ 	case MEMWRITE:
++	case OTPLOCK:
+ 		if (!(file->f_mode & FMODE_WRITE))
+ 			return -EPERM;
+ 		break;
+diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
+index 2d6423d89a175..d97ddc65b5d43 100644
+--- a/drivers/mtd/mtdcore.c
++++ b/drivers/mtd/mtdcore.c
+@@ -820,6 +820,9 @@ int mtd_device_parse_register(struct mtd_info *mtd, const char * const *types,
+ 
+ 	/* Prefer parsed partitions over driver-provided fallback */
+ 	ret = parse_mtd_partitions(mtd, types, parser_data);
++	if (ret == -EPROBE_DEFER)
++		goto out;
++
+ 	if (ret > 0)
+ 		ret = 0;
+ 	else if (nr_parts)
+diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c
+index 12ca4f19cb14a..665fd9020b764 100644
+--- a/drivers/mtd/mtdpart.c
++++ b/drivers/mtd/mtdpart.c
+@@ -331,7 +331,7 @@ static int __del_mtd_partitions(struct mtd_info *mtd)
+ 
+ 	list_for_each_entry_safe(child, next, &mtd->partitions, part.node) {
+ 		if (mtd_has_partitions(child))
+-			del_mtd_partitions(child);
++			__del_mtd_partitions(child);
+ 
+ 		pr_info("Deleting %s MTD partition\n", child->name);
+ 		ret = del_mtd_device(child);
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index 659eaa6f0980c..5ff4291380c53 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -2688,6 +2688,12 @@ static int brcmnand_attach_chip(struct nand_chip *chip)
+ 
+ 	ret = brcmstb_choose_ecc_layout(host);
+ 
++	/* If OOB is written with ECC enabled it will cause ECC errors */
++	if (is_hamming_ecc(host->ctrl, &host->hwcfg)) {
++		chip->ecc.write_oob = brcmnand_write_oob_raw;
++		chip->ecc.read_oob = brcmnand_read_oob_raw;
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/mtd/nand/raw/fsmc_nand.c b/drivers/mtd/nand/raw/fsmc_nand.c
+index 0101c0fab50ad..a24e2f57fa68a 100644
+--- a/drivers/mtd/nand/raw/fsmc_nand.c
++++ b/drivers/mtd/nand/raw/fsmc_nand.c
+@@ -1077,11 +1077,13 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
+ 		host->read_dma_chan = dma_request_channel(mask, filter, NULL);
+ 		if (!host->read_dma_chan) {
+ 			dev_err(&pdev->dev, "Unable to get read dma channel\n");
++			ret = -ENODEV;
+ 			goto disable_clk;
+ 		}
+ 		host->write_dma_chan = dma_request_channel(mask, filter, NULL);
+ 		if (!host->write_dma_chan) {
+ 			dev_err(&pdev->dev, "Unable to get write dma channel\n");
++			ret = -ENODEV;
+ 			goto release_dma_read_chan;
+ 		}
+ 	}
+diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+index 3fa8c22d3f36a..4d08e4ab5c1b6 100644
+--- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
++++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
+@@ -2449,7 +2449,7 @@ static int gpmi_nand_init(struct gpmi_nand_data *this)
+ 	this->bch_geometry.auxiliary_size = 128;
+ 	ret = gpmi_alloc_dma_buffer(this);
+ 	if (ret)
+-		goto err_out;
++		return ret;
+ 
+ 	nand_controller_init(&this->base);
+ 	this->base.ops = &gpmi_nand_controller_ops;
+diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
+index fd4c318b520f3..87c23bb320bf5 100644
+--- a/drivers/mtd/nand/raw/qcom_nandc.c
++++ b/drivers/mtd/nand/raw/qcom_nandc.c
+@@ -2898,7 +2898,7 @@ static int qcom_probe_nand_devices(struct qcom_nand_controller *nandc)
+ 	struct device *dev = nandc->dev;
+ 	struct device_node *dn = dev->of_node, *child;
+ 	struct qcom_nand_host *host;
+-	int ret;
++	int ret = -ENODEV;
+ 
+ 	for_each_available_child_of_node(dn, child) {
+ 		host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL);
+@@ -2916,10 +2916,7 @@ static int qcom_probe_nand_devices(struct qcom_nand_controller *nandc)
+ 		list_add_tail(&host->node, &nandc->host_list);
+ 	}
+ 
+-	if (list_empty(&nandc->host_list))
+-		return -ENODEV;
+-
+-	return 0;
++	return ret;
+ }
+ 
+ /* parse custom DT properties here */
+diff --git a/drivers/mtd/parsers/qcomsmempart.c b/drivers/mtd/parsers/qcomsmempart.c
+index 808cb33d71f8e..d9083308f6ba6 100644
+--- a/drivers/mtd/parsers/qcomsmempart.c
++++ b/drivers/mtd/parsers/qcomsmempart.c
+@@ -65,6 +65,13 @@ static int parse_qcomsmem_part(struct mtd_info *mtd,
+ 	int ret, i, numparts;
+ 	char *name, *c;
+ 
++	if (IS_ENABLED(CONFIG_MTD_SPI_NOR_USE_4K_SECTORS)
++			&& mtd->type == MTD_NORFLASH) {
++		pr_err("%s: SMEM partition parser is incompatible with 4K sectors\n",
++				mtd->name);
++		return -EINVAL;
++	}
++
+ 	pr_debug("Parsing partition table info from SMEM\n");
+ 	ptable = qcom_smem_get(SMEM_APPS, SMEM_AARM_PARTITION_TABLE, &len);
+ 	if (IS_ERR(ptable)) {
+@@ -104,7 +111,7 @@ static int parse_qcomsmem_part(struct mtd_info *mtd,
+ 	 * complete partition table
+ 	 */
+ 	ptable = qcom_smem_get(SMEM_APPS, SMEM_AARM_PARTITION_TABLE, &len);
+-	if (IS_ERR_OR_NULL(ptable)) {
++	if (IS_ERR(ptable)) {
+ 		pr_err("Error reading partition table\n");
+ 		return PTR_ERR(ptable);
+ 	}
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index ba5d546d06aa3..9c86cacc4a727 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -32,6 +32,36 @@
+ #include "b53/b53_priv.h"
+ #include "b53/b53_regs.h"
+ 
++static u16 bcm_sf2_reg_rgmii_cntrl(struct bcm_sf2_priv *priv, int port)
++{
++	switch (priv->type) {
++	case BCM4908_DEVICE_ID:
++		switch (port) {
++		case 7:
++			return REG_RGMII_11_CNTRL;
++		default:
++			break;
++		}
++		break;
++	default:
++		switch (port) {
++		case 0:
++			return REG_RGMII_0_CNTRL;
++		case 1:
++			return REG_RGMII_1_CNTRL;
++		case 2:
++			return REG_RGMII_2_CNTRL;
++		default:
++			break;
++		}
++	}
++
++	WARN_ONCE(1, "Unsupported port %d\n", port);
++
++	/* RO fallback reg */
++	return REG_SWITCH_STATUS;
++}
++
+ /* Return the number of active ports, not counting the IMP (CPU) port */
+ static unsigned int bcm_sf2_num_active_ports(struct dsa_switch *ds)
+ {
+@@ -647,6 +677,7 @@ static void bcm_sf2_sw_mac_config(struct dsa_switch *ds, int port,
+ {
+ 	struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
+ 	u32 id_mode_dis = 0, port_mode;
++	u32 reg_rgmii_ctrl;
+ 	u32 reg;
+ 
+ 	if (port == core_readl(priv, CORE_IMP0_PRT_ID))
+@@ -670,10 +701,12 @@ static void bcm_sf2_sw_mac_config(struct dsa_switch *ds, int port,
+ 		return;
+ 	}
+ 
++	reg_rgmii_ctrl = bcm_sf2_reg_rgmii_cntrl(priv, port);
++
+ 	/* Clear id_mode_dis bit, and the existing port mode, let
+ 	 * RGMII_MODE_EN bet set by mac_link_{up,down}
+ 	 */
+-	reg = reg_readl(priv, REG_RGMII_CNTRL_P(port));
++	reg = reg_readl(priv, reg_rgmii_ctrl);
+ 	reg &= ~ID_MODE_DIS;
+ 	reg &= ~(PORT_MODE_MASK << PORT_MODE_SHIFT);
+ 
+@@ -681,13 +714,14 @@ static void bcm_sf2_sw_mac_config(struct dsa_switch *ds, int port,
+ 	if (id_mode_dis)
+ 		reg |= ID_MODE_DIS;
+ 
+-	reg_writel(priv, reg, REG_RGMII_CNTRL_P(port));
++	reg_writel(priv, reg, reg_rgmii_ctrl);
+ }
+ 
+ static void bcm_sf2_sw_mac_link_set(struct dsa_switch *ds, int port,
+ 				    phy_interface_t interface, bool link)
+ {
+ 	struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
++	u32 reg_rgmii_ctrl;
+ 	u32 reg;
+ 
+ 	if (!phy_interface_mode_is_rgmii(interface) &&
+@@ -695,13 +729,15 @@ static void bcm_sf2_sw_mac_link_set(struct dsa_switch *ds, int port,
+ 	    interface != PHY_INTERFACE_MODE_REVMII)
+ 		return;
+ 
++	reg_rgmii_ctrl = bcm_sf2_reg_rgmii_cntrl(priv, port);
++
+ 	/* If the link is down, just disable the interface to conserve power */
+-	reg = reg_readl(priv, REG_RGMII_CNTRL_P(port));
++	reg = reg_readl(priv, reg_rgmii_ctrl);
+ 	if (link)
+ 		reg |= RGMII_MODE_EN;
+ 	else
+ 		reg &= ~RGMII_MODE_EN;
+-	reg_writel(priv, reg, REG_RGMII_CNTRL_P(port));
++	reg_writel(priv, reg, reg_rgmii_ctrl);
+ }
+ 
+ static void bcm_sf2_sw_mac_link_down(struct dsa_switch *ds, int port,
+@@ -735,11 +771,15 @@ static void bcm_sf2_sw_mac_link_up(struct dsa_switch *ds, int port,
+ {
+ 	struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
+ 	struct ethtool_eee *p = &priv->dev->ports[port].eee;
+-	u32 reg, offset;
+ 
+ 	bcm_sf2_sw_mac_link_set(ds, port, interface, true);
+ 
+ 	if (port != core_readl(priv, CORE_IMP0_PRT_ID)) {
++		u32 reg_rgmii_ctrl;
++		u32 reg, offset;
++
++		reg_rgmii_ctrl = bcm_sf2_reg_rgmii_cntrl(priv, port);
++
+ 		if (priv->type == BCM4908_DEVICE_ID ||
+ 		    priv->type == BCM7445_DEVICE_ID)
+ 			offset = CORE_STS_OVERRIDE_GMIIP_PORT(port);
+@@ -750,7 +790,7 @@ static void bcm_sf2_sw_mac_link_up(struct dsa_switch *ds, int port,
+ 		    interface == PHY_INTERFACE_MODE_RGMII_TXID ||
+ 		    interface == PHY_INTERFACE_MODE_MII ||
+ 		    interface == PHY_INTERFACE_MODE_REVMII) {
+-			reg = reg_readl(priv, REG_RGMII_CNTRL_P(port));
++			reg = reg_readl(priv, reg_rgmii_ctrl);
+ 			reg &= ~(RX_PAUSE_EN | TX_PAUSE_EN);
+ 
+ 			if (tx_pause)
+@@ -758,7 +798,7 @@ static void bcm_sf2_sw_mac_link_up(struct dsa_switch *ds, int port,
+ 			if (rx_pause)
+ 				reg |= RX_PAUSE_EN;
+ 
+-			reg_writel(priv, reg, REG_RGMII_CNTRL_P(port));
++			reg_writel(priv, reg, reg_rgmii_ctrl);
+ 		}
+ 
+ 		reg = SW_OVERRIDE | LINK_STS;
+@@ -1144,9 +1184,7 @@ static const u16 bcm_sf2_4908_reg_offsets[] = {
+ 	[REG_PHY_REVISION]	= 0x14,
+ 	[REG_SPHY_CNTRL]	= 0x24,
+ 	[REG_CROSSBAR]		= 0xc8,
+-	[REG_RGMII_0_CNTRL]	= 0xe0,
+-	[REG_RGMII_1_CNTRL]	= 0xec,
+-	[REG_RGMII_2_CNTRL]	= 0xf8,
++	[REG_RGMII_11_CNTRL]	= 0x014c,
+ 	[REG_LED_0_CNTRL]	= 0x40,
+ 	[REG_LED_1_CNTRL]	= 0x4c,
+ 	[REG_LED_2_CNTRL]	= 0x58,
+diff --git a/drivers/net/dsa/bcm_sf2_regs.h b/drivers/net/dsa/bcm_sf2_regs.h
+index 1d2d55c9f8aad..9e141d1a0b07c 100644
+--- a/drivers/net/dsa/bcm_sf2_regs.h
++++ b/drivers/net/dsa/bcm_sf2_regs.h
+@@ -21,6 +21,7 @@ enum bcm_sf2_reg_offs {
+ 	REG_RGMII_0_CNTRL,
+ 	REG_RGMII_1_CNTRL,
+ 	REG_RGMII_2_CNTRL,
++	REG_RGMII_11_CNTRL,
+ 	REG_LED_0_CNTRL,
+ 	REG_LED_1_CNTRL,
+ 	REG_LED_2_CNTRL,
+@@ -48,8 +49,6 @@ enum bcm_sf2_reg_offs {
+ #define  PHY_PHYAD_SHIFT		8
+ #define  PHY_PHYAD_MASK			0x1F
+ 
+-#define REG_RGMII_CNTRL_P(x)		(REG_RGMII_0_CNTRL + (x))
+-
+ /* Relative to REG_RGMII_CNTRL */
+ #define  RGMII_MODE_EN			(1 << 0)
+ #define  ID_MODE_DIS			(1 << 1)
+diff --git a/drivers/net/dsa/mv88e6xxx/devlink.c b/drivers/net/dsa/mv88e6xxx/devlink.c
+index 21953d6d484c5..ada7a38d4d313 100644
+--- a/drivers/net/dsa/mv88e6xxx/devlink.c
++++ b/drivers/net/dsa/mv88e6xxx/devlink.c
+@@ -678,7 +678,7 @@ static int mv88e6xxx_setup_devlink_regions_global(struct dsa_switch *ds,
+ 				sizeof(struct mv88e6xxx_devlink_atu_entry);
+ 			break;
+ 		case MV88E6XXX_REGION_VTU:
+-			size = mv88e6xxx_max_vid(chip) *
++			size = (mv88e6xxx_max_vid(chip) + 1) *
+ 				sizeof(struct mv88e6xxx_devlink_vtu_entry);
+ 			break;
+ 		}
+diff --git a/drivers/net/dsa/mv88e6xxx/serdes.c b/drivers/net/dsa/mv88e6xxx/serdes.c
+index 3195936dc5be7..2ce04fef698de 100644
+--- a/drivers/net/dsa/mv88e6xxx/serdes.c
++++ b/drivers/net/dsa/mv88e6xxx/serdes.c
+@@ -443,15 +443,15 @@ int mv88e6185_serdes_power(struct mv88e6xxx_chip *chip, int port, u8 lane,
+ u8 mv88e6185_serdes_get_lane(struct mv88e6xxx_chip *chip, int port)
+ {
+ 	/* There are no configurable serdes lanes on this switch chip but we
+-	 * need to return non-zero so that callers of
++	 * need to return a non-negative lane number so that callers of
+ 	 * mv88e6xxx_serdes_get_lane() know this is a serdes port.
+ 	 */
+ 	switch (chip->ports[port].cmode) {
+ 	case MV88E6185_PORT_STS_CMODE_SERDES:
+ 	case MV88E6185_PORT_STS_CMODE_1000BASE_X:
+-		return 0xff;
+-	default:
+ 		return 0;
++	default:
++		return -ENODEV;
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index b53a0d87371a2..73239d3eaca15 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1732,14 +1732,16 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 
+ 	cons = rxcmp->rx_cmp_opaque;
+ 	if (unlikely(cons != rxr->rx_next_cons)) {
+-		int rc1 = bnxt_discard_rx(bp, cpr, raw_cons, rxcmp);
++		int rc1 = bnxt_discard_rx(bp, cpr, &tmp_raw_cons, rxcmp);
+ 
+ 		/* 0xffff is forced error, don't print it */
+ 		if (rxr->rx_next_cons != 0xffff)
+ 			netdev_warn(bp->dev, "RX cons %x != expected cons %x\n",
+ 				    cons, rxr->rx_next_cons);
+ 		bnxt_sched_reset(bp, rxr);
+-		return rc1;
++		if (rc1)
++			return rc1;
++		goto next_rx_no_prod_no_len;
+ 	}
+ 	rx_buf = &rxr->rx_buf_ring[cons];
+ 	data = rx_buf->data;
+@@ -9736,7 +9738,9 @@ static ssize_t bnxt_show_temp(struct device *dev,
+ 	if (!rc)
+ 		len = sprintf(buf, "%u\n", resp->temp * 1000); /* display millidegree */
+ 	mutex_unlock(&bp->hwrm_cmd_lock);
+-	return rc ?: len;
++	if (rc)
++		return rc;
++	return len;
+ }
+ static SENSOR_DEVICE_ATTR(temp1_input, 0444, bnxt_show_temp, NULL, 0);
+ 
+diff --git a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_regs.h b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_regs.h
+index e6d4ad99cc387..3f1c189646f4e 100644
+--- a/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_regs.h
++++ b/drivers/net/ethernet/cavium/liquidio/cn23xx_pf_regs.h
+@@ -521,7 +521,7 @@
+ #define    CN23XX_BAR1_INDEX_OFFSET                3
+ 
+ #define    CN23XX_PEM_BAR1_INDEX_REG(port, idx)		\
+-		(CN23XX_PEM_BAR1_INDEX_START + ((port) << CN23XX_PEM_OFFSET) + \
++		(CN23XX_PEM_BAR1_INDEX_START + (((u64)port) << CN23XX_PEM_OFFSET) + \
+ 		 ((idx) << CN23XX_BAR1_INDEX_OFFSET))
+ 
+ /*############################ DPI #########################*/
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
+index f782e6af45e93..50bbe79fb93df 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
+@@ -776,7 +776,7 @@ static void nicvf_rcv_queue_config(struct nicvf *nic, struct queue_set *qs,
+ 	mbx.rq.msg = NIC_MBOX_MSG_RQ_CFG;
+ 	mbx.rq.qs_num = qs->vnic_id;
+ 	mbx.rq.rq_num = qidx;
+-	mbx.rq.cfg = (rq->caching << 26) | (rq->cq_qs << 19) |
++	mbx.rq.cfg = ((u64)rq->caching << 26) | (rq->cq_qs << 19) |
+ 			  (rq->cq_idx << 16) | (rq->cont_rbdr_qs << 9) |
+ 			  (rq->cont_qs_rbdr_idx << 8) |
+ 			  (rq->start_rbdr_qs << 1) | (rq->start_qs_rbdr_idx);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+index 83b46440408ba..bde8494215c41 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+@@ -174,31 +174,31 @@ static void set_nat_params(struct adapter *adap, struct filter_entry *f,
+ 				      WORD_MASK, f->fs.nat_lip[15] |
+ 				      f->fs.nat_lip[14] << 8 |
+ 				      f->fs.nat_lip[13] << 16 |
+-				      f->fs.nat_lip[12] << 24, 1);
++				      (u64)f->fs.nat_lip[12] << 24, 1);
+ 
+ 			set_tcb_field(adap, f, tid, TCB_SND_UNA_RAW_W + 1,
+ 				      WORD_MASK, f->fs.nat_lip[11] |
+ 				      f->fs.nat_lip[10] << 8 |
+ 				      f->fs.nat_lip[9] << 16 |
+-				      f->fs.nat_lip[8] << 24, 1);
++				      (u64)f->fs.nat_lip[8] << 24, 1);
+ 
+ 			set_tcb_field(adap, f, tid, TCB_SND_UNA_RAW_W + 2,
+ 				      WORD_MASK, f->fs.nat_lip[7] |
+ 				      f->fs.nat_lip[6] << 8 |
+ 				      f->fs.nat_lip[5] << 16 |
+-				      f->fs.nat_lip[4] << 24, 1);
++				      (u64)f->fs.nat_lip[4] << 24, 1);
+ 
+ 			set_tcb_field(adap, f, tid, TCB_SND_UNA_RAW_W + 3,
+ 				      WORD_MASK, f->fs.nat_lip[3] |
+ 				      f->fs.nat_lip[2] << 8 |
+ 				      f->fs.nat_lip[1] << 16 |
+-				      f->fs.nat_lip[0] << 24, 1);
++				      (u64)f->fs.nat_lip[0] << 24, 1);
+ 		} else {
+ 			set_tcb_field(adap, f, tid, TCB_RX_FRAG3_LEN_RAW_W,
+ 				      WORD_MASK, f->fs.nat_lip[3] |
+ 				      f->fs.nat_lip[2] << 8 |
+ 				      f->fs.nat_lip[1] << 16 |
+-				      f->fs.nat_lip[0] << 24, 1);
++				      (u64)f->fs.nat_lip[0] << 24, 1);
+ 		}
+ 	}
+ 
+@@ -208,25 +208,25 @@ static void set_nat_params(struct adapter *adap, struct filter_entry *f,
+ 				      WORD_MASK, f->fs.nat_fip[15] |
+ 				      f->fs.nat_fip[14] << 8 |
+ 				      f->fs.nat_fip[13] << 16 |
+-				      f->fs.nat_fip[12] << 24, 1);
++				      (u64)f->fs.nat_fip[12] << 24, 1);
+ 
+ 			set_tcb_field(adap, f, tid, TCB_RX_FRAG2_PTR_RAW_W + 1,
+ 				      WORD_MASK, f->fs.nat_fip[11] |
+ 				      f->fs.nat_fip[10] << 8 |
+ 				      f->fs.nat_fip[9] << 16 |
+-				      f->fs.nat_fip[8] << 24, 1);
++				      (u64)f->fs.nat_fip[8] << 24, 1);
+ 
+ 			set_tcb_field(adap, f, tid, TCB_RX_FRAG2_PTR_RAW_W + 2,
+ 				      WORD_MASK, f->fs.nat_fip[7] |
+ 				      f->fs.nat_fip[6] << 8 |
+ 				      f->fs.nat_fip[5] << 16 |
+-				      f->fs.nat_fip[4] << 24, 1);
++				      (u64)f->fs.nat_fip[4] << 24, 1);
+ 
+ 			set_tcb_field(adap, f, tid, TCB_RX_FRAG2_PTR_RAW_W + 3,
+ 				      WORD_MASK, f->fs.nat_fip[3] |
+ 				      f->fs.nat_fip[2] << 8 |
+ 				      f->fs.nat_fip[1] << 16 |
+-				      f->fs.nat_fip[0] << 24, 1);
++				      (u64)f->fs.nat_fip[0] << 24, 1);
+ 
+ 		} else {
+ 			set_tcb_field(adap, f, tid,
+@@ -234,13 +234,13 @@ static void set_nat_params(struct adapter *adap, struct filter_entry *f,
+ 				      WORD_MASK, f->fs.nat_fip[3] |
+ 				      f->fs.nat_fip[2] << 8 |
+ 				      f->fs.nat_fip[1] << 16 |
+-				      f->fs.nat_fip[0] << 24, 1);
++				      (u64)f->fs.nat_fip[0] << 24, 1);
+ 		}
+ 	}
+ 
+ 	set_tcb_field(adap, f, tid, TCB_PDU_HDR_LEN_W, WORD_MASK,
+ 		      (dp ? (nat_lp[1] | nat_lp[0] << 8) : 0) |
+-		      (sp ? (nat_fp[1] << 16 | nat_fp[0] << 24) : 0),
++		      (sp ? (nat_fp[1] << 16 | (u64)nat_fp[0] << 24) : 0),
+ 		      1);
+ }
+ 
+diff --git a/drivers/net/ethernet/freescale/Makefile b/drivers/net/ethernet/freescale/Makefile
+index 67c436400352f..de7b318422330 100644
+--- a/drivers/net/ethernet/freescale/Makefile
++++ b/drivers/net/ethernet/freescale/Makefile
+@@ -24,6 +24,4 @@ obj-$(CONFIG_FSL_DPAA_ETH) += dpaa/
+ 
+ obj-$(CONFIG_FSL_DPAA2_ETH) += dpaa2/
+ 
+-obj-$(CONFIG_FSL_ENETC) += enetc/
+-obj-$(CONFIG_FSL_ENETC_MDIO) += enetc/
+-obj-$(CONFIG_FSL_ENETC_VF) += enetc/
++obj-y += enetc/
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index bf4302a5cf956..65752f363f431 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -3704,7 +3704,6 @@ static void hns3_nic_set_cpumask(struct hns3_nic_priv *priv)
+ 
+ static int hns3_nic_init_vector_data(struct hns3_nic_priv *priv)
+ {
+-	struct hnae3_ring_chain_node vector_ring_chain;
+ 	struct hnae3_handle *h = priv->ae_handle;
+ 	struct hns3_enet_tqp_vector *tqp_vector;
+ 	int ret;
+@@ -3736,6 +3735,8 @@ static int hns3_nic_init_vector_data(struct hns3_nic_priv *priv)
+ 	}
+ 
+ 	for (i = 0; i < priv->vector_num; i++) {
++		struct hnae3_ring_chain_node vector_ring_chain;
++
+ 		tqp_vector = &priv->tqp_vector[i];
+ 
+ 		tqp_vector->rx_group.total_bytes = 0;
+diff --git a/drivers/net/ethernet/marvell/prestera/prestera_main.c b/drivers/net/ethernet/marvell/prestera/prestera_main.c
+index 25dd903a3e92c..d849b0f65de2d 100644
+--- a/drivers/net/ethernet/marvell/prestera/prestera_main.c
++++ b/drivers/net/ethernet/marvell/prestera/prestera_main.c
+@@ -431,7 +431,8 @@ static void prestera_port_handle_event(struct prestera_switch *sw,
+ 			netif_carrier_on(port->dev);
+ 			if (!delayed_work_pending(caching_dw))
+ 				queue_delayed_work(prestera_wq, caching_dw, 0);
+-		} else {
++		} else if (netif_running(port->dev) &&
++			   netif_carrier_ok(port->dev)) {
+ 			netif_carrier_off(port->dev);
+ 			if (delayed_work_pending(caching_dw))
+ 				cancel_delayed_work(caching_dw);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
+index 22bee49902327..bb61f52d782d9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
+@@ -850,7 +850,7 @@ mlx5_fpga_ipsec_release_sa_ctx(struct mlx5_fpga_ipsec_sa_ctx *sa_ctx)
+ 		return;
+ 	}
+ 
+-	if (sa_ctx->fpga_xfrm->accel_xfrm.attrs.action &
++	if (sa_ctx->fpga_xfrm->accel_xfrm.attrs.action ==
+ 	    MLX5_ACCEL_ESP_ACTION_DECRYPT)
+ 		ida_simple_remove(&fipsec->halloc, sa_ctx->sa_handle);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c
+index 9143ec326ebfb..f146c618a78e7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c
+@@ -1532,6 +1532,7 @@ static void dr_ste_v1_build_src_gvmi_qpn_bit_mask(struct mlx5dr_match_param *val
+ 
+ 	DR_STE_SET_ONES(src_gvmi_qp_v1, bit_mask, source_gvmi, misc_mask, source_port);
+ 	DR_STE_SET_ONES(src_gvmi_qp_v1, bit_mask, source_qp, misc_mask, source_sqn);
++	misc_mask->source_eswitch_owner_vhca_id = 0;
+ }
+ 
+ static int dr_ste_v1_build_src_gvmi_qpn_tag(struct mlx5dr_match_param *value,
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+index 713ee3041d491..bea978df77138 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+@@ -364,6 +364,7 @@ int nfp_devlink_port_register(struct nfp_app *app, struct nfp_port *port)
+ 
+ 	attrs.split = eth_port.is_split;
+ 	attrs.splittable = !attrs.split;
++	attrs.lanes = eth_port.port_lanes;
+ 	attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
+ 	attrs.phys.port_number = eth_port.label_port;
+ 	attrs.phys.split_subport_number = eth_port.label_subport;
+diff --git a/drivers/net/ethernet/qualcomm/emac/emac-mac.c b/drivers/net/ethernet/qualcomm/emac/emac-mac.c
+index 117188e3c7de2..87b8c032195d0 100644
+--- a/drivers/net/ethernet/qualcomm/emac/emac-mac.c
++++ b/drivers/net/ethernet/qualcomm/emac/emac-mac.c
+@@ -1437,6 +1437,7 @@ netdev_tx_t emac_mac_tx_buf_send(struct emac_adapter *adpt,
+ {
+ 	struct emac_tpd tpd;
+ 	u32 prod_idx;
++	int len;
+ 
+ 	memset(&tpd, 0, sizeof(tpd));
+ 
+@@ -1456,9 +1457,10 @@ netdev_tx_t emac_mac_tx_buf_send(struct emac_adapter *adpt,
+ 	if (skb_network_offset(skb) != ETH_HLEN)
+ 		TPD_TYP_SET(&tpd, 1);
+ 
++	len = skb->len;
+ 	emac_tx_fill_tpd(adpt, tx_q, skb, &tpd);
+ 
+-	netdev_sent_queue(adpt->netdev, skb->len);
++	netdev_sent_queue(adpt->netdev, len);
+ 
+ 	/* Make sure the are enough free descriptors to hold one
+ 	 * maximum-sized SKB.  We need one desc for each fragment,
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index eb0c03bdb12d5..cad57d58d764b 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -911,31 +911,20 @@ static int ravb_poll(struct napi_struct *napi, int budget)
+ 	int q = napi - priv->napi;
+ 	int mask = BIT(q);
+ 	int quota = budget;
+-	u32 ris0, tis;
+ 
+-	for (;;) {
+-		tis = ravb_read(ndev, TIS);
+-		ris0 = ravb_read(ndev, RIS0);
+-		if (!((ris0 & mask) || (tis & mask)))
+-			break;
++	/* Processing RX Descriptor Ring */
++	/* Clear RX interrupt */
++	ravb_write(ndev, ~(mask | RIS0_RESERVED), RIS0);
++	if (ravb_rx(ndev, &quota, q))
++		goto out;
+ 
+-		/* Processing RX Descriptor Ring */
+-		if (ris0 & mask) {
+-			/* Clear RX interrupt */
+-			ravb_write(ndev, ~(mask | RIS0_RESERVED), RIS0);
+-			if (ravb_rx(ndev, &quota, q))
+-				goto out;
+-		}
+-		/* Processing TX Descriptor Ring */
+-		if (tis & mask) {
+-			spin_lock_irqsave(&priv->lock, flags);
+-			/* Clear TX interrupt */
+-			ravb_write(ndev, ~(mask | TIS_RESERVED), TIS);
+-			ravb_tx_free(ndev, q, true);
+-			netif_wake_subqueue(ndev, q);
+-			spin_unlock_irqrestore(&priv->lock, flags);
+-		}
+-	}
++	/* Processing TX Descriptor Ring */
++	spin_lock_irqsave(&priv->lock, flags);
++	/* Clear TX interrupt */
++	ravb_write(ndev, ~(mask | TIS_RESERVED), TIS);
++	ravb_tx_free(ndev, q, true);
++	netif_wake_subqueue(ndev, q);
++	spin_unlock_irqrestore(&priv->lock, flags);
+ 
+ 	napi_complete(napi);
+ 
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index da6886dcac37c..4fa72b573c172 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -2928,8 +2928,7 @@ efx_ef10_handle_tx_event(struct efx_channel *channel, efx_qword_t *event)
+ 
+ 	/* Get the transmit queue */
+ 	tx_ev_q_label = EFX_QWORD_FIELD(*event, ESF_DZ_TX_QLABEL);
+-	tx_queue = efx_channel_get_tx_queue(channel,
+-					    tx_ev_q_label % EFX_MAX_TXQ_PER_CHANNEL);
++	tx_queue = channel->tx_queue + (tx_ev_q_label % EFX_MAX_TXQ_PER_CHANNEL);
+ 
+ 	if (!tx_queue->timestamping) {
+ 		/* Transmit completion */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 4749bd0af1607..c6f24abf64328 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2757,8 +2757,15 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
+ 
+ 	/* Enable TSO */
+ 	if (priv->tso) {
+-		for (chan = 0; chan < tx_cnt; chan++)
++		for (chan = 0; chan < tx_cnt; chan++) {
++			struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan];
++
++			/* TSO and TBS cannot co-exist */
++			if (tx_q->tbs & STMMAC_TBS_AVAIL)
++				continue;
++
+ 			stmmac_enable_tso(priv, priv->ioaddr, 1, chan);
++		}
+ 	}
+ 
+ 	/* Enable Split Header */
+@@ -2850,9 +2857,8 @@ static int stmmac_open(struct net_device *dev)
+ 		struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan];
+ 		int tbs_en = priv->plat->tx_queues_cfg[chan].tbs_en;
+ 
++		/* Setup per-TXQ tbs flag before TX descriptor alloc */
+ 		tx_q->tbs |= tbs_en ? STMMAC_TBS_AVAIL : 0;
+-		if (stmmac_enable_tbs(priv, priv->ioaddr, tbs_en, chan))
+-			tx_q->tbs &= ~STMMAC_TBS_AVAIL;
+ 	}
+ 
+ 	ret = alloc_dma_desc_resources(priv);
+diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c
+index c7031e1960d4a..03055c96f0760 100644
+--- a/drivers/net/ethernet/ti/davinci_emac.c
++++ b/drivers/net/ethernet/ti/davinci_emac.c
+@@ -169,11 +169,11 @@ static const char emac_version_string[] = "TI DaVinci EMAC Linux v6.1";
+ /* EMAC mac_status register */
+ #define EMAC_MACSTATUS_TXERRCODE_MASK	(0xF00000)
+ #define EMAC_MACSTATUS_TXERRCODE_SHIFT	(20)
+-#define EMAC_MACSTATUS_TXERRCH_MASK	(0x7)
++#define EMAC_MACSTATUS_TXERRCH_MASK	(0x70000)
+ #define EMAC_MACSTATUS_TXERRCH_SHIFT	(16)
+ #define EMAC_MACSTATUS_RXERRCODE_MASK	(0xF000)
+ #define EMAC_MACSTATUS_RXERRCODE_SHIFT	(12)
+-#define EMAC_MACSTATUS_RXERRCH_MASK	(0x7)
++#define EMAC_MACSTATUS_RXERRCH_MASK	(0x700)
+ #define EMAC_MACSTATUS_RXERRCH_SHIFT	(8)
+ 
+ /* EMAC RX register masks */
+diff --git a/drivers/net/ethernet/xilinx/Kconfig b/drivers/net/ethernet/xilinx/Kconfig
+index c6eb7f2368aa3..911b5ef9e6803 100644
+--- a/drivers/net/ethernet/xilinx/Kconfig
++++ b/drivers/net/ethernet/xilinx/Kconfig
+@@ -18,12 +18,14 @@ if NET_VENDOR_XILINX
+ 
+ config XILINX_EMACLITE
+ 	tristate "Xilinx 10/100 Ethernet Lite support"
++	depends on HAS_IOMEM
+ 	select PHYLIB
+ 	help
+ 	  This driver supports the 10/100 Ethernet Lite from Xilinx.
+ 
+ config XILINX_AXI_EMAC
+ 	tristate "Xilinx 10/100/1000 AXI Ethernet support"
++	depends on HAS_IOMEM
+ 	select PHYLINK
+ 	help
+ 	  This driver supports the 10/100/1000 Ethernet from Xilinx for the
+@@ -31,6 +33,7 @@ config XILINX_AXI_EMAC
+ 
+ config XILINX_LL_TEMAC
+ 	tristate "Xilinx LL TEMAC (LocalLink Tri-mode Ethernet MAC) driver"
++	depends on HAS_IOMEM
+ 	select PHYLIB
+ 	help
+ 	  This driver supports the Xilinx 10/100/1000 LocalLink TEMAC
+diff --git a/drivers/net/ethernet/xscale/ixp4xx_eth.c b/drivers/net/ethernet/xscale/ixp4xx_eth.c
+index 0152f1e707834..9defaa21a1a9e 100644
+--- a/drivers/net/ethernet/xscale/ixp4xx_eth.c
++++ b/drivers/net/ethernet/xscale/ixp4xx_eth.c
+@@ -1085,7 +1085,7 @@ static int init_queues(struct port *port)
+ 	int i;
+ 
+ 	if (!ports_open) {
+-		dma_pool = dma_pool_create(DRV_NAME, port->netdev->dev.parent,
++		dma_pool = dma_pool_create(DRV_NAME, &port->netdev->dev,
+ 					   POOL_ALLOC_SIZE, 32, 0);
+ 		if (!dma_pool)
+ 			return -ENOMEM;
+@@ -1435,6 +1435,9 @@ static int ixp4xx_eth_probe(struct platform_device *pdev)
+ 	ndev->netdev_ops = &ixp4xx_netdev_ops;
+ 	ndev->ethtool_ops = &ixp4xx_ethtool_ops;
+ 	ndev->tx_queue_len = 100;
++	/* Inherit the DMA masks from the platform device */
++	ndev->dev.dma_mask = dev->dma_mask;
++	ndev->dev.coherent_dma_mask = dev->coherent_dma_mask;
+ 
+ 	netif_napi_add(ndev, &port->napi, eth_poll, NAPI_WEIGHT);
+ 
+diff --git a/drivers/net/fddi/Kconfig b/drivers/net/fddi/Kconfig
+index f722079dfb6ae..f99c1048c97e3 100644
+--- a/drivers/net/fddi/Kconfig
++++ b/drivers/net/fddi/Kconfig
+@@ -40,17 +40,20 @@ config DEFXX
+ 
+ config DEFXX_MMIO
+ 	bool
+-	prompt "Use MMIO instead of PIO" if PCI || EISA
++	prompt "Use MMIO instead of IOP" if PCI || EISA
+ 	depends on DEFXX
+-	default n if PCI || EISA
++	default n if EISA
+ 	default y
+ 	help
+ 	  This instructs the driver to use EISA or PCI memory-mapped I/O
+-	  (MMIO) as appropriate instead of programmed I/O ports (PIO).
++	  (MMIO) as appropriate instead of programmed I/O ports (IOP).
+ 	  Enabling this gives an improvement in processing time in parts
+-	  of the driver, but it may cause problems with EISA (DEFEA)
+-	  adapters.  TURBOchannel does not have the concept of I/O ports,
+-	  so MMIO is always used for these (DEFTA) adapters.
++	  of the driver, but it requires a memory window to be configured
++	  for EISA (DEFEA) adapters that may not always be available.
++	  Conversely some PCIe host bridges do not support IOP, so MMIO
++	  may be required to access PCI (DEFPA) adapters on downstream PCI
++	  buses with some systems.  TURBOchannel does not have the concept
++	  of I/O ports, so MMIO is always used for these (DEFTA) adapters.
+ 
+ 	  If unsure, say N.
+ 
+diff --git a/drivers/net/fddi/defxx.c b/drivers/net/fddi/defxx.c
+index 077c68498f048..c7ce6d5491afc 100644
+--- a/drivers/net/fddi/defxx.c
++++ b/drivers/net/fddi/defxx.c
+@@ -495,6 +495,25 @@ static const struct net_device_ops dfx_netdev_ops = {
+ 	.ndo_set_mac_address	= dfx_ctl_set_mac_address,
+ };
+ 
++static void dfx_register_res_alloc_err(const char *print_name, bool mmio,
++				       bool eisa)
++{
++	pr_err("%s: Cannot use %s, no address set, aborting\n",
++	       print_name, mmio ? "MMIO" : "I/O");
++	pr_err("%s: Recompile driver with \"CONFIG_DEFXX_MMIO=%c\"\n",
++	       print_name, mmio ? 'n' : 'y');
++	if (eisa && mmio)
++		pr_err("%s: Or run ECU and set adapter's MMIO location\n",
++		       print_name);
++}
++
++static void dfx_register_res_err(const char *print_name, bool mmio,
++				 unsigned long start, unsigned long len)
++{
++	pr_err("%s: Cannot reserve %s resource 0x%lx @ 0x%lx, aborting\n",
++	       print_name, mmio ? "MMIO" : "I/O", len, start);
++}
++
+ /*
+  * ================
+  * = dfx_register =
+@@ -568,15 +587,12 @@ static int dfx_register(struct device *bdev)
+ 	dev_set_drvdata(bdev, dev);
+ 
+ 	dfx_get_bars(bdev, bar_start, bar_len);
+-	if (dfx_bus_eisa && dfx_use_mmio && bar_start[0] == 0) {
+-		pr_err("%s: Cannot use MMIO, no address set, aborting\n",
+-		       print_name);
+-		pr_err("%s: Run ECU and set adapter's MMIO location\n",
+-		       print_name);
+-		pr_err("%s: Or recompile driver with \"CONFIG_DEFXX_MMIO=n\""
+-		       "\n", print_name);
++	if (bar_len[0] == 0 ||
++	    (dfx_bus_eisa && dfx_use_mmio && bar_start[0] == 0)) {
++		dfx_register_res_alloc_err(print_name, dfx_use_mmio,
++					   dfx_bus_eisa);
+ 		err = -ENXIO;
+-		goto err_out;
++		goto err_out_disable;
+ 	}
+ 
+ 	if (dfx_use_mmio)
+@@ -585,18 +601,16 @@ static int dfx_register(struct device *bdev)
+ 	else
+ 		region = request_region(bar_start[0], bar_len[0], print_name);
+ 	if (!region) {
+-		pr_err("%s: Cannot reserve %s resource 0x%lx @ 0x%lx, "
+-		       "aborting\n", dfx_use_mmio ? "MMIO" : "I/O", print_name,
+-		       (long)bar_len[0], (long)bar_start[0]);
++		dfx_register_res_err(print_name, dfx_use_mmio,
++				     bar_start[0], bar_len[0]);
+ 		err = -EBUSY;
+ 		goto err_out_disable;
+ 	}
+ 	if (bar_start[1] != 0) {
+ 		region = request_region(bar_start[1], bar_len[1], print_name);
+ 		if (!region) {
+-			pr_err("%s: Cannot reserve I/O resource "
+-			       "0x%lx @ 0x%lx, aborting\n", print_name,
+-			       (long)bar_len[1], (long)bar_start[1]);
++			dfx_register_res_err(print_name, 0,
++					     bar_start[1], bar_len[1]);
+ 			err = -EBUSY;
+ 			goto err_out_csr_region;
+ 		}
+@@ -604,9 +618,8 @@ static int dfx_register(struct device *bdev)
+ 	if (bar_start[2] != 0) {
+ 		region = request_region(bar_start[2], bar_len[2], print_name);
+ 		if (!region) {
+-			pr_err("%s: Cannot reserve I/O resource "
+-			       "0x%lx @ 0x%lx, aborting\n", print_name,
+-			       (long)bar_len[2], (long)bar_start[2]);
++			dfx_register_res_err(print_name, 0,
++					     bar_start[2], bar_len[2]);
+ 			err = -EBUSY;
+ 			goto err_out_bh_region;
+ 		}
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 42f31c6818462..61cd3dd4deab6 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -891,7 +891,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 	__be16 sport;
+ 	int err;
+ 
+-	if (!pskb_network_may_pull(skb, sizeof(struct iphdr)))
++	if (!pskb_inet_may_pull(skb))
+ 		return -EINVAL;
+ 
+ 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+@@ -988,7 +988,7 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ 	__be16 sport;
+ 	int err;
+ 
+-	if (!pskb_network_may_pull(skb, sizeof(struct ipv6hdr)))
++	if (!pskb_inet_may_pull(skb))
+ 		return -EINVAL;
+ 
+ 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+diff --git a/drivers/net/phy/intel-xway.c b/drivers/net/phy/intel-xway.c
+index 6eac50d4b42fc..d453ec0161688 100644
+--- a/drivers/net/phy/intel-xway.c
++++ b/drivers/net/phy/intel-xway.c
+@@ -11,6 +11,18 @@
+ 
+ #define XWAY_MDIO_IMASK			0x19	/* interrupt mask */
+ #define XWAY_MDIO_ISTAT			0x1A	/* interrupt status */
++#define XWAY_MDIO_LED			0x1B	/* led control */
++
++/* bit 15:12 are reserved */
++#define XWAY_MDIO_LED_LED3_EN		BIT(11)	/* Enable the integrated function of LED3 */
++#define XWAY_MDIO_LED_LED2_EN		BIT(10)	/* Enable the integrated function of LED2 */
++#define XWAY_MDIO_LED_LED1_EN		BIT(9)	/* Enable the integrated function of LED1 */
++#define XWAY_MDIO_LED_LED0_EN		BIT(8)	/* Enable the integrated function of LED0 */
++/* bit 7:4 are reserved */
++#define XWAY_MDIO_LED_LED3_DA		BIT(3)	/* Direct Access to LED3 */
++#define XWAY_MDIO_LED_LED2_DA		BIT(2)	/* Direct Access to LED2 */
++#define XWAY_MDIO_LED_LED1_DA		BIT(1)	/* Direct Access to LED1 */
++#define XWAY_MDIO_LED_LED0_DA		BIT(0)	/* Direct Access to LED0 */
+ 
+ #define XWAY_MDIO_INIT_WOL		BIT(15)	/* Wake-On-LAN */
+ #define XWAY_MDIO_INIT_MSRE		BIT(14)
+@@ -159,6 +171,15 @@ static int xway_gphy_config_init(struct phy_device *phydev)
+ 	/* Clear all pending interrupts */
+ 	phy_read(phydev, XWAY_MDIO_ISTAT);
+ 
++	/* Ensure that integrated led function is enabled for all leds */
++	err = phy_write(phydev, XWAY_MDIO_LED,
++			XWAY_MDIO_LED_LED0_EN |
++			XWAY_MDIO_LED_LED1_EN |
++			XWAY_MDIO_LED_LED2_EN |
++			XWAY_MDIO_LED_LED3_EN);
++	if (err)
++		return err;
++
+ 	phy_write_mmd(phydev, MDIO_MMD_VEND2, XWAY_MMD_LEDCH,
+ 		      XWAY_MMD_LEDCH_NACS_NONE |
+ 		      XWAY_MMD_LEDCH_SBF_F02HZ |
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index 8018ddf7f3162..f86c9ddc609ea 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -967,22 +967,28 @@ static int m88e1111_get_downshift(struct phy_device *phydev, u8 *data)
+ 
+ static int m88e1111_set_downshift(struct phy_device *phydev, u8 cnt)
+ {
+-	int val;
++	int val, err;
+ 
+ 	if (cnt > MII_M1111_PHY_EXT_CR_DOWNSHIFT_MAX)
+ 		return -E2BIG;
+ 
+-	if (!cnt)
+-		return phy_clear_bits(phydev, MII_M1111_PHY_EXT_CR,
+-				      MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN);
++	if (!cnt) {
++		err = phy_clear_bits(phydev, MII_M1111_PHY_EXT_CR,
++				     MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN);
++	} else {
++		val = MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN;
++		val |= FIELD_PREP(MII_M1111_PHY_EXT_CR_DOWNSHIFT_MASK, cnt - 1);
+ 
+-	val = MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN;
+-	val |= FIELD_PREP(MII_M1111_PHY_EXT_CR_DOWNSHIFT_MASK, cnt - 1);
++		err = phy_modify(phydev, MII_M1111_PHY_EXT_CR,
++				 MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN |
++				 MII_M1111_PHY_EXT_CR_DOWNSHIFT_MASK,
++				 val);
++	}
+ 
+-	return phy_modify(phydev, MII_M1111_PHY_EXT_CR,
+-			  MII_M1111_PHY_EXT_CR_DOWNSHIFT_EN |
+-			  MII_M1111_PHY_EXT_CR_DOWNSHIFT_MASK,
+-			  val);
++	if (err < 0)
++		return err;
++
++	return genphy_soft_reset(phydev);
+ }
+ 
+ static int m88e1111_get_tunable(struct phy_device *phydev,
+@@ -1025,22 +1031,28 @@ static int m88e1011_get_downshift(struct phy_device *phydev, u8 *data)
+ 
+ static int m88e1011_set_downshift(struct phy_device *phydev, u8 cnt)
+ {
+-	int val;
++	int val, err;
+ 
+ 	if (cnt > MII_M1011_PHY_SCR_DOWNSHIFT_MAX)
+ 		return -E2BIG;
+ 
+-	if (!cnt)
+-		return phy_clear_bits(phydev, MII_M1011_PHY_SCR,
+-				      MII_M1011_PHY_SCR_DOWNSHIFT_EN);
++	if (!cnt) {
++		err = phy_clear_bits(phydev, MII_M1011_PHY_SCR,
++				     MII_M1011_PHY_SCR_DOWNSHIFT_EN);
++	} else {
++		val = MII_M1011_PHY_SCR_DOWNSHIFT_EN;
++		val |= FIELD_PREP(MII_M1011_PHY_SCR_DOWNSHIFT_MASK, cnt - 1);
+ 
+-	val = MII_M1011_PHY_SCR_DOWNSHIFT_EN;
+-	val |= FIELD_PREP(MII_M1011_PHY_SCR_DOWNSHIFT_MASK, cnt - 1);
++		err = phy_modify(phydev, MII_M1011_PHY_SCR,
++				 MII_M1011_PHY_SCR_DOWNSHIFT_EN |
++				 MII_M1011_PHY_SCR_DOWNSHIFT_MASK,
++				 val);
++	}
+ 
+-	return phy_modify(phydev, MII_M1011_PHY_SCR,
+-			  MII_M1011_PHY_SCR_DOWNSHIFT_EN |
+-			  MII_M1011_PHY_SCR_DOWNSHIFT_MASK,
+-			  val);
++	if (err < 0)
++		return err;
++
++	return genphy_soft_reset(phydev);
+ }
+ 
+ static int m88e1011_get_tunable(struct phy_device *phydev,
+diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c
+index ddb78fb4d6dc3..d8cac02a79b95 100644
+--- a/drivers/net/phy/smsc.c
++++ b/drivers/net/phy/smsc.c
+@@ -185,10 +185,13 @@ static int lan87xx_config_aneg(struct phy_device *phydev)
+ 	return genphy_config_aneg(phydev);
+ }
+ 
+-static int lan87xx_config_aneg_ext(struct phy_device *phydev)
++static int lan95xx_config_aneg_ext(struct phy_device *phydev)
+ {
+ 	int rc;
+ 
++	if (phydev->phy_id != 0x0007c0f0) /* not (LAN9500A or LAN9505A) */
++		return lan87xx_config_aneg(phydev);
++
+ 	/* Extend Manual AutoMDIX timer */
+ 	rc = phy_read(phydev, PHY_EDPD_CONFIG);
+ 	if (rc < 0)
+@@ -441,7 +444,7 @@ static struct phy_driver smsc_phy_driver[] = {
+ 	.read_status	= lan87xx_read_status,
+ 	.config_init	= smsc_phy_config_init,
+ 	.soft_reset	= smsc_phy_reset,
+-	.config_aneg	= lan87xx_config_aneg_ext,
++	.config_aneg	= lan95xx_config_aneg_ext,
+ 
+ 	/* IRQ related */
+ 	.config_intr	= smsc_phy_config_intr,
+diff --git a/drivers/net/wan/hdlc_fr.c b/drivers/net/wan/hdlc_fr.c
+index 4d9dc7d159089..0720f5f92caa7 100644
+--- a/drivers/net/wan/hdlc_fr.c
++++ b/drivers/net/wan/hdlc_fr.c
+@@ -415,7 +415,7 @@ static netdev_tx_t pvc_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 		if (pad > 0) { /* Pad the frame with zeros */
+ 			if (__skb_pad(skb, pad, false))
+-				goto out;
++				goto drop;
+ 			skb_put(skb, pad);
+ 		}
+ 	}
+@@ -448,9 +448,8 @@ static netdev_tx_t pvc_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	return NETDEV_TX_OK;
+ 
+ drop:
+-	kfree_skb(skb);
+-out:
+ 	dev->stats.tx_dropped++;
++	kfree_skb(skb);
+ 	return NETDEV_TX_OK;
+ }
+ 
+diff --git a/drivers/net/wan/lapbether.c b/drivers/net/wan/lapbether.c
+index c3372498f4f15..8fda0446ff71e 100644
+--- a/drivers/net/wan/lapbether.c
++++ b/drivers/net/wan/lapbether.c
+@@ -51,6 +51,8 @@ struct lapbethdev {
+ 	struct list_head	node;
+ 	struct net_device	*ethdev;	/* link to ethernet device */
+ 	struct net_device	*axdev;		/* lapbeth device (lapb#) */
++	bool			up;
++	spinlock_t		up_lock;	/* Protects "up" */
+ };
+ 
+ static LIST_HEAD(lapbeth_devices);
+@@ -101,8 +103,9 @@ static int lapbeth_rcv(struct sk_buff *skb, struct net_device *dev, struct packe
+ 	rcu_read_lock();
+ 	lapbeth = lapbeth_get_x25_dev(dev);
+ 	if (!lapbeth)
+-		goto drop_unlock;
+-	if (!netif_running(lapbeth->axdev))
++		goto drop_unlock_rcu;
++	spin_lock_bh(&lapbeth->up_lock);
++	if (!lapbeth->up)
+ 		goto drop_unlock;
+ 
+ 	len = skb->data[0] + skb->data[1] * 256;
+@@ -117,11 +120,14 @@ static int lapbeth_rcv(struct sk_buff *skb, struct net_device *dev, struct packe
+ 		goto drop_unlock;
+ 	}
+ out:
++	spin_unlock_bh(&lapbeth->up_lock);
+ 	rcu_read_unlock();
+ 	return 0;
+ drop_unlock:
+ 	kfree_skb(skb);
+ 	goto out;
++drop_unlock_rcu:
++	rcu_read_unlock();
+ drop:
+ 	kfree_skb(skb);
+ 	return 0;
+@@ -151,13 +157,11 @@ static int lapbeth_data_indication(struct net_device *dev, struct sk_buff *skb)
+ static netdev_tx_t lapbeth_xmit(struct sk_buff *skb,
+ 				      struct net_device *dev)
+ {
++	struct lapbethdev *lapbeth = netdev_priv(dev);
+ 	int err;
+ 
+-	/*
+-	 * Just to be *really* sure not to send anything if the interface
+-	 * is down, the ethernet device may have gone.
+-	 */
+-	if (!netif_running(dev))
++	spin_lock_bh(&lapbeth->up_lock);
++	if (!lapbeth->up)
+ 		goto drop;
+ 
+ 	/* There should be a pseudo header of 1 byte added by upper layers.
+@@ -194,6 +198,7 @@ static netdev_tx_t lapbeth_xmit(struct sk_buff *skb,
+ 		goto drop;
+ 	}
+ out:
++	spin_unlock_bh(&lapbeth->up_lock);
+ 	return NETDEV_TX_OK;
+ drop:
+ 	kfree_skb(skb);
+@@ -285,6 +290,7 @@ static const struct lapb_register_struct lapbeth_callbacks = {
+  */
+ static int lapbeth_open(struct net_device *dev)
+ {
++	struct lapbethdev *lapbeth = netdev_priv(dev);
+ 	int err;
+ 
+ 	if ((err = lapb_register(dev, &lapbeth_callbacks)) != LAPB_OK) {
+@@ -292,13 +298,22 @@ static int lapbeth_open(struct net_device *dev)
+ 		return -ENODEV;
+ 	}
+ 
++	spin_lock_bh(&lapbeth->up_lock);
++	lapbeth->up = true;
++	spin_unlock_bh(&lapbeth->up_lock);
++
+ 	return 0;
+ }
+ 
+ static int lapbeth_close(struct net_device *dev)
+ {
++	struct lapbethdev *lapbeth = netdev_priv(dev);
+ 	int err;
+ 
++	spin_lock_bh(&lapbeth->up_lock);
++	lapbeth->up = false;
++	spin_unlock_bh(&lapbeth->up_lock);
++
+ 	if ((err = lapb_unregister(dev)) != LAPB_OK)
+ 		pr_err("lapb_unregister error: %d\n", err);
+ 
+@@ -356,6 +371,9 @@ static int lapbeth_new_device(struct net_device *dev)
+ 	dev_hold(dev);
+ 	lapbeth->ethdev = dev;
+ 
++	lapbeth->up = false;
++	spin_lock_init(&lapbeth->up_lock);
++
+ 	rc = -EIO;
+ 	if (register_netdevice(ndev))
+ 		goto fail;
+diff --git a/drivers/net/wireless/ath/ath10k/htc.c b/drivers/net/wireless/ath/ath10k/htc.c
+index 0a37be6a7d33d..fab398046a3f2 100644
+--- a/drivers/net/wireless/ath/ath10k/htc.c
++++ b/drivers/net/wireless/ath/ath10k/htc.c
+@@ -669,7 +669,7 @@ static int ath10k_htc_send_bundle(struct ath10k_htc_ep *ep,
+ 
+ 	ath10k_dbg(ar, ATH10K_DBG_HTC,
+ 		   "bundle tx status %d eid %d req count %d count %d len %d\n",
+-		   ret, ep->eid, skb_queue_len(&ep->tx_req_head), cn, bundle_skb->len);
++		   ret, ep->eid, skb_queue_len(&ep->tx_req_head), cn, skb_len);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+index d97b33f789e44..7efbe03fbca82 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+@@ -592,6 +592,9 @@ static void ath10k_wmi_event_tdls_peer(struct ath10k *ar, struct sk_buff *skb)
+ 					GFP_ATOMIC
+ 					);
+ 		break;
++	default:
++		kfree(tb);
++		return;
+ 	}
+ 
+ exit:
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_init.c b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+index db0c6fa9c9dc4..ff61ae34ecdf0 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_init.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+@@ -246,7 +246,7 @@ static unsigned int ath9k_regread(void *hw_priv, u32 reg_offset)
+ 	if (unlikely(r)) {
+ 		ath_dbg(common, WMI, "REGISTER READ FAILED: (0x%04x, %d)\n",
+ 			reg_offset, r);
+-		return -EIO;
++		return -1;
+ 	}
+ 
+ 	return be32_to_cpu(val);
+diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
+index 5abc2a5526ecf..2ca3b86714a9d 100644
+--- a/drivers/net/wireless/ath/ath9k/hw.c
++++ b/drivers/net/wireless/ath/ath9k/hw.c
+@@ -286,7 +286,7 @@ static bool ath9k_hw_read_revisions(struct ath_hw *ah)
+ 
+ 	srev = REG_READ(ah, AR_SREV);
+ 
+-	if (srev == -EIO) {
++	if (srev == -1) {
+ 		ath_err(ath9k_hw_common(ah),
+ 			"Failed to read SREV register");
+ 		return false;
+diff --git a/drivers/net/wireless/intel/ipw2x00/libipw_wx.c b/drivers/net/wireless/intel/ipw2x00/libipw_wx.c
+index a0cf78c418ac9..903de34028efb 100644
+--- a/drivers/net/wireless/intel/ipw2x00/libipw_wx.c
++++ b/drivers/net/wireless/intel/ipw2x00/libipw_wx.c
+@@ -633,8 +633,10 @@ int libipw_wx_set_encodeext(struct libipw_device *ieee,
+ 	}
+ 
+ 	if (ext->alg != IW_ENCODE_ALG_NONE) {
+-		memcpy(sec.keys[idx], ext->key, ext->key_len);
+-		sec.key_sizes[idx] = ext->key_len;
++		int key_len = clamp_val(ext->key_len, 0, SCM_KEY_LEN);
++
++		memcpy(sec.keys[idx], ext->key, key_len);
++		sec.key_sizes[idx] = key_len;
+ 		sec.flags |= (1 << idx);
+ 		if (ext->alg == IW_ENCODE_ALG_WEP) {
+ 			sec.encode_alg[idx] = SEC_ALG_WEP;
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+index 579bc81cc0ae2..4cd8c39cc3e95 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2018-2020 Intel Corporation
++ * Copyright (C) 2018-2021 Intel Corporation
+  */
+ #include <linux/firmware.h>
+ #include "iwl-drv.h"
+@@ -426,7 +426,8 @@ void iwl_dbg_tlv_load_bin(struct device *dev, struct iwl_trans *trans)
+ 	const struct firmware *fw;
+ 	int res;
+ 
+-	if (!iwlwifi_mod_params.enable_ini)
++	if (!iwlwifi_mod_params.enable_ini ||
++	    trans->trans_cfg->device_family <= IWL_DEVICE_FAMILY_9000)
+ 		return;
+ 
+ 	res = firmware_request_nowarn(&fw, "iwl-debug-yoyo.bin", dev);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
+index 8772b65c9dabb..2d58cb969918f 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+  * Copyright (C) 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018-2020 Intel Corporation
++ * Copyright (C) 2018-2021 Intel Corporation
+  */
+ #include "rs.h"
+ #include "fw-api.h"
+@@ -72,19 +72,15 @@ static u16 rs_fw_get_config_flags(struct iwl_mvm *mvm,
+ 	bool vht_ena = vht_cap->vht_supported;
+ 	u16 flags = 0;
+ 
++	/* get STBC flags */
+ 	if (mvm->cfg->ht_params->stbc &&
+ 	    (num_of_ant(iwl_mvm_get_valid_tx_ant(mvm)) > 1)) {
+-		if (he_cap->has_he) {
+-			if (he_cap->he_cap_elem.phy_cap_info[2] &
+-			    IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ)
+-				flags |= IWL_TLC_MNG_CFG_FLAGS_STBC_MSK;
+-
+-			if (he_cap->he_cap_elem.phy_cap_info[7] &
+-			    IEEE80211_HE_PHY_CAP7_STBC_RX_ABOVE_80MHZ)
+-				flags |= IWL_TLC_MNG_CFG_FLAGS_HE_STBC_160MHZ_MSK;
+-		} else if ((ht_cap->cap & IEEE80211_HT_CAP_RX_STBC) ||
+-			   (vht_ena &&
+-			    (vht_cap->cap & IEEE80211_VHT_CAP_RXSTBC_MASK)))
++		if (he_cap->has_he && he_cap->he_cap_elem.phy_cap_info[2] &
++				      IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ)
++			flags |= IWL_TLC_MNG_CFG_FLAGS_STBC_MSK;
++		else if (vht_cap->cap & IEEE80211_VHT_CAP_RXSTBC_MASK)
++			flags |= IWL_TLC_MNG_CFG_FLAGS_STBC_MSK;
++		else if (ht_cap->cap & IEEE80211_HT_CAP_RX_STBC)
+ 			flags |= IWL_TLC_MNG_CFG_FLAGS_STBC_MSK;
+ 	}
+ 
+diff --git a/drivers/net/wireless/marvell/mwl8k.c b/drivers/net/wireless/marvell/mwl8k.c
+index c9f8c056aa517..84b32a5f01ee8 100644
+--- a/drivers/net/wireless/marvell/mwl8k.c
++++ b/drivers/net/wireless/marvell/mwl8k.c
+@@ -1473,6 +1473,7 @@ static int mwl8k_txq_init(struct ieee80211_hw *hw, int index)
+ 	if (txq->skb == NULL) {
+ 		dma_free_coherent(&priv->pdev->dev, size, txq->txd,
+ 				  txq->txd_dma);
++		txq->txd = NULL;
+ 		return -ENOMEM;
+ 	}
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
+index 2f27c43ad76df..7196fa9047e6d 100644
+--- a/drivers/net/wireless/mediatek/mt76/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/dma.c
+@@ -309,7 +309,7 @@ static int
+ mt76_dma_tx_queue_skb_raw(struct mt76_dev *dev, struct mt76_queue *q,
+ 			  struct sk_buff *skb, u32 tx_info)
+ {
+-	struct mt76_queue_buf buf;
++	struct mt76_queue_buf buf = {};
+ 	dma_addr_t addr;
+ 
+ 	if (q->queued + 1 >= q->ndesc - 1)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+index 59fdd0fc2ad4f..d73841480544a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mac.c
+@@ -690,7 +690,7 @@ mt7615_txp_skb_unmap_fw(struct mt76_dev *dev, struct mt7615_fw_txp *txp)
+ {
+ 	int i;
+ 
+-	for (i = 1; i < txp->nbuf; i++)
++	for (i = 0; i < txp->nbuf; i++)
+ 		dma_unmap_single(dev->dev, le32_to_cpu(txp->buf[i]),
+ 				 le16_to_cpu(txp->len[i]), DMA_TO_DEVICE);
+ }
+@@ -966,6 +966,7 @@ void mt7615_mac_set_rates(struct mt7615_phy *phy, struct mt7615_sta *sta,
+ 	struct mt7615_dev *dev = phy->dev;
+ 	struct mt7615_rate_desc rd;
+ 	u32 w5, w27, addr;
++	u16 idx = sta->vif->mt76.omac_idx;
+ 
+ 	if (!mt76_is_mmio(&dev->mt76)) {
+ 		mt7615_mac_queue_rate_update(phy, sta, probe_rate, rates);
+@@ -1017,7 +1018,10 @@ void mt7615_mac_set_rates(struct mt7615_phy *phy, struct mt7615_sta *sta,
+ 
+ 	mt76_wr(dev, addr + 27 * 4, w27);
+ 
+-	mt76_set(dev, MT_LPON_T0CR, MT_LPON_T0CR_MODE); /* TSF read */
++	idx = idx > HW_BSSID_MAX ? HW_BSSID_0 : idx;
++	addr = idx > 1 ? MT_LPON_TCR2(idx): MT_LPON_TCR0(idx);
++
++	mt76_set(dev, addr, MT_LPON_TCR_MODE); /* TSF read */
+ 	sta->rate_set_tsf = mt76_rr(dev, MT_LPON_UTTR0) & ~BIT(0);
+ 	sta->rate_set_tsf |= rd.rateset;
+ 
+@@ -1821,10 +1825,8 @@ mt7615_mac_update_mib_stats(struct mt7615_phy *phy)
+ 	int i, aggr;
+ 	u32 val, val2;
+ 
+-	memset(mib, 0, sizeof(*mib));
+-
+-	mib->fcs_err_cnt = mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
+-					  MT_MIB_SDR3_FCS_ERR_MASK);
++	mib->fcs_err_cnt += mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
++					   MT_MIB_SDR3_FCS_ERR_MASK);
+ 
+ 	val = mt76_get_field(dev, MT_MIB_SDR14(ext_phy),
+ 			     MT_MIB_AMPDU_MPDU_COUNT);
+@@ -1837,24 +1839,16 @@ mt7615_mac_update_mib_stats(struct mt7615_phy *phy)
+ 	aggr = ext_phy ? ARRAY_SIZE(dev->mt76.aggr_stats) / 2 : 0;
+ 	for (i = 0; i < 4; i++) {
+ 		val = mt76_rr(dev, MT_MIB_MB_SDR1(ext_phy, i));
+-
+-		val2 = FIELD_GET(MT_MIB_ACK_FAIL_COUNT_MASK, val);
+-		if (val2 > mib->ack_fail_cnt)
+-			mib->ack_fail_cnt = val2;
+-
+-		val2 = FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
+-		if (val2 > mib->ba_miss_cnt)
+-			mib->ba_miss_cnt = val2;
++		mib->ba_miss_cnt += FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
++		mib->ack_fail_cnt += FIELD_GET(MT_MIB_ACK_FAIL_COUNT_MASK,
++					       val);
+ 
+ 		val = mt76_rr(dev, MT_MIB_MB_SDR0(ext_phy, i));
+-		val2 = FIELD_GET(MT_MIB_RTS_RETRIES_COUNT_MASK, val);
+-		if (val2 > mib->rts_retries_cnt) {
+-			mib->rts_cnt = FIELD_GET(MT_MIB_RTS_COUNT_MASK, val);
+-			mib->rts_retries_cnt = val2;
+-		}
++		mib->rts_cnt += FIELD_GET(MT_MIB_RTS_COUNT_MASK, val);
++		mib->rts_retries_cnt += FIELD_GET(MT_MIB_RTS_RETRIES_COUNT_MASK,
++						  val);
+ 
+ 		val = mt76_rr(dev, MT_TX_AGG_CNT(ext_phy, i));
+-
+ 		dev->mt76.aggr_stats[aggr++] += val & 0xffff;
+ 		dev->mt76.aggr_stats[aggr++] += val >> 16;
+ 	}
+@@ -1976,15 +1970,17 @@ void mt7615_dma_reset(struct mt7615_dev *dev)
+ 	mt76_clear(dev, MT_WPDMA_GLO_CFG,
+ 		   MT_WPDMA_GLO_CFG_RX_DMA_EN | MT_WPDMA_GLO_CFG_TX_DMA_EN |
+ 		   MT_WPDMA_GLO_CFG_TX_WRITEBACK_DONE);
++
+ 	usleep_range(1000, 2000);
+ 
+-	mt76_queue_tx_cleanup(dev, dev->mt76.q_mcu[MT_MCUQ_WM], true);
+ 	for (i = 0; i < __MT_TXQ_MAX; i++)
+ 		mt76_queue_tx_cleanup(dev, dev->mphy.q_tx[i], true);
+ 
+-	mt76_for_each_q_rx(&dev->mt76, i) {
++	for (i = 0; i < __MT_MCUQ_MAX; i++)
++		mt76_queue_tx_cleanup(dev, dev->mt76.q_mcu[i], true);
++
++	mt76_for_each_q_rx(&dev->mt76, i)
+ 		mt76_queue_rx_reset(dev, i);
+-	}
+ 
+ 	mt76_set(dev, MT_WPDMA_GLO_CFG,
+ 		 MT_WPDMA_GLO_CFG_RX_DMA_EN | MT_WPDMA_GLO_CFG_TX_DMA_EN |
+@@ -2000,8 +1996,12 @@ void mt7615_tx_token_put(struct mt7615_dev *dev)
+ 	spin_lock_bh(&dev->token_lock);
+ 	idr_for_each_entry(&dev->token, txwi, id) {
+ 		mt7615_txp_skb_unmap(&dev->mt76, txwi);
+-		if (txwi->skb)
+-			dev_kfree_skb_any(txwi->skb);
++		if (txwi->skb) {
++			struct ieee80211_hw *hw;
++
++			hw = mt76_tx_status_get_hw(&dev->mt76, txwi->skb);
++			ieee80211_free_txskb(hw, txwi->skb);
++		}
+ 		mt76_put_txwi(&dev->mt76, txwi);
+ 	}
+ 	spin_unlock_bh(&dev->token_lock);
+@@ -2304,8 +2304,10 @@ void mt7615_coredump_work(struct work_struct *work)
+ 			break;
+ 
+ 		skb_pull(skb, sizeof(struct mt7615_mcu_rxd));
+-		if (data + skb->len - dump > MT76_CONNAC_COREDUMP_SZ)
+-			break;
++		if (data + skb->len - dump > MT76_CONNAC_COREDUMP_SZ) {
++			dev_kfree_skb(skb);
++			continue;
++		}
+ 
+ 		memcpy(data, skb->data, skb->len);
+ 		data += skb->len;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+index 25faf486d2795..6107e827b3836 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+@@ -217,8 +217,6 @@ static int mt7615_add_interface(struct ieee80211_hw *hw,
+ 	ret = mt7615_mcu_add_dev_info(phy, vif, true);
+ 	if (ret)
+ 		goto out;
+-
+-	mt7615_mac_set_beacon_filter(phy, vif, true);
+ out:
+ 	mt7615_mutex_release(dev);
+ 
+@@ -244,7 +242,6 @@ static void mt7615_remove_interface(struct ieee80211_hw *hw,
+ 
+ 	mt76_connac_free_pending_tx_skbs(&dev->pm, &msta->wcid);
+ 
+-	mt7615_mac_set_beacon_filter(phy, vif, false);
+ 	mt7615_mcu_add_dev_info(phy, vif, false);
+ 
+ 	rcu_assign_pointer(dev->mt76.wcid[idx], NULL);
+@@ -544,6 +541,9 @@ static void mt7615_bss_info_changed(struct ieee80211_hw *hw,
+ 	if (changed & BSS_CHANGED_ARP_FILTER)
+ 		mt7615_mcu_update_arp_filter(hw, vif, info);
+ 
++	if (changed & BSS_CHANGED_ASSOC)
++		mt7615_mac_set_beacon_filter(phy, vif, info->assoc);
++
+ 	mt7615_mutex_release(dev);
+ }
+ 
+@@ -803,26 +803,38 @@ mt7615_get_stats(struct ieee80211_hw *hw,
+ 	struct mt7615_phy *phy = mt7615_hw_phy(hw);
+ 	struct mib_stats *mib = &phy->mib;
+ 
++	mt7615_mutex_acquire(phy->dev);
++
+ 	stats->dot11RTSSuccessCount = mib->rts_cnt;
+ 	stats->dot11RTSFailureCount = mib->rts_retries_cnt;
+ 	stats->dot11FCSErrorCount = mib->fcs_err_cnt;
+ 	stats->dot11ACKFailureCount = mib->ack_fail_cnt;
+ 
++	memset(mib, 0, sizeof(*mib));
++
++	mt7615_mutex_release(phy->dev);
++
+ 	return 0;
+ }
+ 
+ static u64
+ mt7615_get_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ {
++	struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
+ 	struct mt7615_dev *dev = mt7615_hw_dev(hw);
+ 	union {
+ 		u64 t64;
+ 		u32 t32[2];
+ 	} tsf;
++	u16 idx = mvif->mt76.omac_idx;
++	u32 reg;
++
++	idx = idx > HW_BSSID_MAX ? HW_BSSID_0 : idx;
++	reg = idx > 1 ? MT_LPON_TCR2(idx): MT_LPON_TCR0(idx);
+ 
+ 	mt7615_mutex_acquire(dev);
+ 
+-	mt76_set(dev, MT_LPON_T0CR, MT_LPON_T0CR_MODE); /* TSF read */
++	mt76_set(dev, reg, MT_LPON_TCR_MODE); /* TSF read */
+ 	tsf.t32[0] = mt76_rr(dev, MT_LPON_UTTR0);
+ 	tsf.t32[1] = mt76_rr(dev, MT_LPON_UTTR1);
+ 
+@@ -835,18 +847,24 @@ static void
+ mt7615_set_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 	       u64 timestamp)
+ {
++	struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
+ 	struct mt7615_dev *dev = mt7615_hw_dev(hw);
+ 	union {
+ 		u64 t64;
+ 		u32 t32[2];
+ 	} tsf = { .t64 = timestamp, };
++	u16 idx = mvif->mt76.omac_idx;
++	u32 reg;
++
++	idx = idx > HW_BSSID_MAX ? HW_BSSID_0 : idx;
++	reg = idx > 1 ? MT_LPON_TCR2(idx): MT_LPON_TCR0(idx);
+ 
+ 	mt7615_mutex_acquire(dev);
+ 
+ 	mt76_wr(dev, MT_LPON_UTTR0, tsf.t32[0]);
+ 	mt76_wr(dev, MT_LPON_UTTR1, tsf.t32[1]);
+ 	/* TSF software overwrite */
+-	mt76_set(dev, MT_LPON_T0CR, MT_LPON_T0CR_WRITE);
++	mt76_set(dev, reg, MT_LPON_TCR_WRITE);
+ 
+ 	mt7615_mutex_release(dev);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
+index 491841bc62912..4bc0c379c5793 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
+@@ -133,11 +133,11 @@ struct mt7615_vif {
+ };
+ 
+ struct mib_stats {
+-	u16 ack_fail_cnt;
+-	u16 fcs_err_cnt;
+-	u16 rts_cnt;
+-	u16 rts_retries_cnt;
+-	u16 ba_miss_cnt;
++	u32 ack_fail_cnt;
++	u32 fcs_err_cnt;
++	u32 rts_cnt;
++	u32 rts_retries_cnt;
++	u32 ba_miss_cnt;
+ 	unsigned long aggr_per;
+ };
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c b/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
+index 72395925ddee4..15b417d6d8895 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/pci_init.c
+@@ -163,10 +163,9 @@ void mt7615_unregister_device(struct mt7615_dev *dev)
+ 	mt76_unregister_device(&dev->mt76);
+ 	if (mcu_running)
+ 		mt7615_mcu_exit(dev);
+-	mt7615_dma_cleanup(dev);
+ 
+ 	mt7615_tx_token_put(dev);
+-
++	mt7615_dma_cleanup(dev);
+ 	tasklet_disable(&dev->irq_tasklet);
+ 
+ 	mt76_free_device(&dev->mt76);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/regs.h b/drivers/net/wireless/mediatek/mt76/mt7615/regs.h
+index 6e5db015b32cd..6e4710d3ddd34 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/regs.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/regs.h
+@@ -447,9 +447,10 @@ enum mt7615_reg_base {
+ 
+ #define MT_LPON(_n)			((dev)->reg_map[MT_LPON_BASE] + (_n))
+ 
+-#define MT_LPON_T0CR			MT_LPON(0x010)
+-#define MT_LPON_T0CR_MODE		GENMASK(1, 0)
+-#define MT_LPON_T0CR_WRITE		BIT(0)
++#define MT_LPON_TCR0(_n)		MT_LPON(0x010 + ((_n) * 4))
++#define MT_LPON_TCR2(_n)		MT_LPON(0x0f8 + ((_n) - 2) * 4)
++#define MT_LPON_TCR_MODE		GENMASK(1, 0)
++#define MT_LPON_TCR_WRITE		BIT(0)
+ 
+ #define MT_LPON_UTTR0			MT_LPON(0x018)
+ #define MT_LPON_UTTR1			MT_LPON(0x01c)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c b/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
+index 9fb506f2ace6d..4393dd21ebbbb 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
+@@ -218,12 +218,15 @@ static int mt7663s_tx_run_queue(struct mt76_dev *dev, struct mt76_queue *q)
+ 	int qid, err, nframes = 0, len = 0, pse_sz = 0, ple_sz = 0;
+ 	bool mcu = q == dev->q_mcu[MT_MCUQ_WM];
+ 	struct mt76_sdio *sdio = &dev->sdio;
++	u8 pad;
+ 
+ 	qid = mcu ? ARRAY_SIZE(sdio->xmit_buf) - 1 : q->qid;
+ 	while (q->first != q->head) {
+ 		struct mt76_queue_entry *e = &q->entry[q->first];
+ 		struct sk_buff *iter;
+ 
++		smp_rmb();
++
+ 		if (!test_bit(MT76_STATE_MCU_RUNNING, &dev->phy.state)) {
+ 			__skb_put_zero(e->skb, 4);
+ 			err = __mt7663s_xmit_queue(dev, e->skb->data,
+@@ -234,7 +237,8 @@ static int mt7663s_tx_run_queue(struct mt76_dev *dev, struct mt76_queue *q)
+ 			goto next;
+ 		}
+ 
+-		if (len + e->skb->len + 4 > MT76S_XMIT_BUF_SZ)
++		pad = roundup(e->skb->len, 4) - e->skb->len;
++		if (len + e->skb->len + pad + 4 > MT76S_XMIT_BUF_SZ)
+ 			break;
+ 
+ 		if (mt7663s_tx_pick_quota(sdio, mcu, e->buf_sz, &pse_sz,
+@@ -252,6 +256,11 @@ static int mt7663s_tx_run_queue(struct mt76_dev *dev, struct mt76_queue *q)
+ 			len += iter->len;
+ 			nframes++;
+ 		}
++
++		if (unlikely(pad)) {
++			memset(sdio->xmit_buf[qid] + len, 0, pad);
++			len += pad;
++		}
+ next:
+ 		q->first = (q->first + 1) % q->ndesc;
+ 		e->done = true;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c b/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c
+index 203256862dfdd..f8d3673c2cae8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c
+@@ -67,6 +67,7 @@ static int mt7663_usb_sdio_set_rates(struct mt7615_dev *dev,
+ 	struct mt7615_rate_desc *rate = &wrd->rate;
+ 	struct mt7615_sta *sta = wrd->sta;
+ 	u32 w5, w27, addr, val;
++	u16 idx;
+ 
+ 	lockdep_assert_held(&dev->mt76.mutex);
+ 
+@@ -118,7 +119,11 @@ static int mt7663_usb_sdio_set_rates(struct mt7615_dev *dev,
+ 
+ 	sta->rate_probe = sta->rateset[rate->rateset].probe_rate.idx != -1;
+ 
+-	mt76_set(dev, MT_LPON_T0CR, MT_LPON_T0CR_MODE); /* TSF read */
++	idx = sta->vif->mt76.omac_idx;
++	idx = idx > HW_BSSID_MAX ? HW_BSSID_0 : idx;
++	addr = idx > 1 ? MT_LPON_TCR2(idx): MT_LPON_TCR0(idx);
++
++	mt76_set(dev, addr, MT_LPON_TCR_MODE); /* TSF read */
+ 	val = mt76_rr(dev, MT_LPON_UTTR0);
+ 	sta->rate_set_tsf = (val & ~BIT(0)) | rate->rateset;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index 6cbccfb05f8b5..76a61e8b7fb96 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -946,6 +946,7 @@ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
+ 
+ 	switch (vif->type) {
+ 	case NL80211_IFTYPE_MESH_POINT:
++	case NL80211_IFTYPE_MONITOR:
+ 	case NL80211_IFTYPE_AP:
+ 		basic_req.basic.conn_type = cpu_to_le32(CONNECTION_INFRA_AP);
+ 		break;
+@@ -1195,6 +1196,7 @@ int mt76_connac_mcu_uni_add_bss(struct mt76_phy *phy,
+ 			.center_chan = ieee80211_frequency_to_channel(freq1),
+ 			.center_chan2 = ieee80211_frequency_to_channel(freq2),
+ 			.tx_streams = hweight8(phy->antenna_mask),
++			.ht_op_info = 4, /* set HT 40M allowed */
+ 			.rx_streams = phy->chainmask,
+ 			.short_st = true,
+ 		},
+@@ -1287,6 +1289,7 @@ int mt76_connac_mcu_uni_add_bss(struct mt76_phy *phy,
+ 	case NL80211_CHAN_WIDTH_20:
+ 	default:
+ 		rlm_req.rlm.bw = CMD_CBW_20MHZ;
++		rlm_req.rlm.ht_op_info = 0;
+ 		break;
+ 	}
+ 
+@@ -1306,7 +1309,7 @@ int mt76_connac_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ {
+ 	struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ 	struct cfg80211_scan_request *sreq = &scan_req->req;
+-	int n_ssids = 0, err, i, duration = MT76_CONNAC_SCAN_CHANNEL_TIME;
++	int n_ssids = 0, err, i, duration;
+ 	int ext_channels_num = max_t(int, sreq->n_channels - 32, 0);
+ 	struct ieee80211_channel **scan_list = sreq->channels;
+ 	struct mt76_dev *mdev = phy->dev;
+@@ -1343,6 +1346,7 @@ int mt76_connac_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ 	req->ssid_type_ext = n_ssids ? BIT(0) : 0;
+ 	req->ssids_num = n_ssids;
+ 
++	duration = is_mt7921(phy->dev) ? 0 : MT76_CONNAC_SCAN_CHANNEL_TIME;
+ 	/* increase channel time for passive scan */
+ 	if (!sreq->n_ssids)
+ 		duration *= 2;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+index 77dcd71e49a5e..2f706620686e8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+@@ -124,7 +124,7 @@ mt7915_ampdu_stat_read_phy(struct mt7915_phy *phy,
+ 		range[i] = mt76_rr(dev, MT_MIB_ARNG(ext_phy, i));
+ 
+ 	for (i = 0; i < ARRAY_SIZE(bound); i++)
+-		bound[i] = MT_MIB_ARNCR_RANGE(range[i / 4], i) + 1;
++		bound[i] = MT_MIB_ARNCR_RANGE(range[i / 4], i % 4) + 1;
+ 
+ 	seq_printf(file, "\nPhy %d\n", ext_phy);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index ad4e5b95158bc..894016fdcf070 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -675,9 +675,8 @@ void mt7915_unregister_device(struct mt7915_dev *dev)
+ 	mt7915_unregister_ext_phy(dev);
+ 	mt76_unregister_device(&dev->mt76);
+ 	mt7915_mcu_exit(dev);
+-	mt7915_dma_cleanup(dev);
+-
+ 	mt7915_tx_token_put(dev);
++	mt7915_dma_cleanup(dev);
+ 
+ 	mt76_free_device(&dev->mt76);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index e5a258958ac91..819670767521f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -1091,7 +1091,7 @@ void mt7915_txp_skb_unmap(struct mt76_dev *dev,
+ 	int i;
+ 
+ 	txp = mt7915_txwi_to_txp(dev, t);
+-	for (i = 1; i < txp->nbuf; i++)
++	for (i = 0; i < txp->nbuf; i++)
+ 		dma_unmap_single(dev->dev, le32_to_cpu(txp->buf[i]),
+ 				 le16_to_cpu(txp->len[i]), DMA_TO_DEVICE);
+ }
+@@ -1470,9 +1470,8 @@ mt7915_update_beacons(struct mt7915_dev *dev)
+ }
+ 
+ static void
+-mt7915_dma_reset(struct mt7915_phy *phy)
++mt7915_dma_reset(struct mt7915_dev *dev)
+ {
+-	struct mt7915_dev *dev = phy->dev;
+ 	struct mt76_phy *mphy_ext = dev->mt76.phy2;
+ 	u32 hif1_ofs = MT_WFDMA1_PCIE1_BASE - MT_WFDMA1_BASE;
+ 	int i;
+@@ -1489,18 +1488,20 @@ mt7915_dma_reset(struct mt7915_phy *phy)
+ 			   (MT_WFDMA1_GLO_CFG_TX_DMA_EN |
+ 			    MT_WFDMA1_GLO_CFG_RX_DMA_EN));
+ 	}
++
+ 	usleep_range(1000, 2000);
+ 
+-	mt76_queue_tx_cleanup(dev, dev->mt76.q_mcu[MT_MCUQ_WA], true);
+ 	for (i = 0; i < __MT_TXQ_MAX; i++) {
+-		mt76_queue_tx_cleanup(dev, phy->mt76->q_tx[i], true);
++		mt76_queue_tx_cleanup(dev, dev->mphy.q_tx[i], true);
+ 		if (mphy_ext)
+ 			mt76_queue_tx_cleanup(dev, mphy_ext->q_tx[i], true);
+ 	}
+ 
+-	mt76_for_each_q_rx(&dev->mt76, i) {
++	for (i = 0; i < __MT_MCUQ_MAX; i++)
++		mt76_queue_tx_cleanup(dev, dev->mt76.q_mcu[i], true);
++
++	mt76_for_each_q_rx(&dev->mt76, i)
+ 		mt76_queue_rx_reset(dev, i);
+-	}
+ 
+ 	/* re-init prefetch settings after reset */
+ 	mt7915_dma_prefetch(dev);
+@@ -1584,7 +1585,7 @@ void mt7915_mac_reset_work(struct work_struct *work)
+ 	idr_init(&dev->token);
+ 
+ 	if (mt7915_wait_reset_state(dev, MT_MCU_CMD_RESET_DONE)) {
+-		mt7915_dma_reset(&dev->phy);
++		mt7915_dma_reset(dev);
+ 
+ 		mt76_wr(dev, MT_MCU_INT_EVENT, MT_MCU_INT_EVENT_DMA_INIT);
+ 		mt7915_wait_reset_state(dev, MT_MCU_CMD_RECOVERY_DONE);
+@@ -1633,39 +1634,30 @@ mt7915_mac_update_mib_stats(struct mt7915_phy *phy)
+ 	bool ext_phy = phy != &dev->phy;
+ 	int i, aggr0, aggr1;
+ 
+-	memset(mib, 0, sizeof(*mib));
+-
+-	mib->fcs_err_cnt = mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
+-					  MT_MIB_SDR3_FCS_ERR_MASK);
++	mib->fcs_err_cnt += mt76_get_field(dev, MT_MIB_SDR3(ext_phy),
++					   MT_MIB_SDR3_FCS_ERR_MASK);
+ 
+ 	aggr0 = ext_phy ? ARRAY_SIZE(dev->mt76.aggr_stats) / 2 : 0;
+ 	for (i = 0, aggr1 = aggr0 + 4; i < 4; i++) {
+-		u32 val, val2;
++		u32 val;
+ 
+ 		val = mt76_rr(dev, MT_MIB_MB_SDR1(ext_phy, i));
+-
+-		val2 = FIELD_GET(MT_MIB_ACK_FAIL_COUNT_MASK, val);
+-		if (val2 > mib->ack_fail_cnt)
+-			mib->ack_fail_cnt = val2;
+-
+-		val2 = FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
+-		if (val2 > mib->ba_miss_cnt)
+-			mib->ba_miss_cnt = val2;
++		mib->ba_miss_cnt += FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
++		mib->ack_fail_cnt +=
++			FIELD_GET(MT_MIB_ACK_FAIL_COUNT_MASK, val);
+ 
+ 		val = mt76_rr(dev, MT_MIB_MB_SDR0(ext_phy, i));
+-		val2 = FIELD_GET(MT_MIB_RTS_RETRIES_COUNT_MASK, val);
+-		if (val2 > mib->rts_retries_cnt) {
+-			mib->rts_cnt = FIELD_GET(MT_MIB_RTS_COUNT_MASK, val);
+-			mib->rts_retries_cnt = val2;
+-		}
++		mib->rts_cnt += FIELD_GET(MT_MIB_RTS_COUNT_MASK, val);
++		mib->rts_retries_cnt +=
++			FIELD_GET(MT_MIB_RTS_RETRIES_COUNT_MASK, val);
+ 
+ 		val = mt76_rr(dev, MT_TX_AGG_CNT(ext_phy, i));
+-		val2 = mt76_rr(dev, MT_TX_AGG_CNT2(ext_phy, i));
+-
+ 		dev->mt76.aggr_stats[aggr0++] += val & 0xffff;
+ 		dev->mt76.aggr_stats[aggr0++] += val >> 16;
+-		dev->mt76.aggr_stats[aggr1++] += val2 & 0xffff;
+-		dev->mt76.aggr_stats[aggr1++] += val2 >> 16;
++
++		val = mt76_rr(dev, MT_TX_AGG_CNT2(ext_phy, i));
++		dev->mt76.aggr_stats[aggr1++] += val & 0xffff;
++		dev->mt76.aggr_stats[aggr1++] += val >> 16;
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+index d4969b2e1ffb0..98f4b49642a8c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+@@ -717,13 +717,19 @@ mt7915_get_stats(struct ieee80211_hw *hw,
+ 		 struct ieee80211_low_level_stats *stats)
+ {
+ 	struct mt7915_phy *phy = mt7915_hw_phy(hw);
++	struct mt7915_dev *dev = mt7915_hw_dev(hw);
+ 	struct mib_stats *mib = &phy->mib;
+ 
++	mutex_lock(&dev->mt76.mutex);
+ 	stats->dot11RTSSuccessCount = mib->rts_cnt;
+ 	stats->dot11RTSFailureCount = mib->rts_retries_cnt;
+ 	stats->dot11FCSErrorCount = mib->fcs_err_cnt;
+ 	stats->dot11ACKFailureCount = mib->ack_fail_cnt;
+ 
++	memset(mib, 0, sizeof(*mib));
++
++	mutex_unlock(&dev->mt76.mutex);
++
+ 	return 0;
+ }
+ 
+@@ -833,9 +839,12 @@ static void mt7915_sta_statistics(struct ieee80211_hw *hw,
+ 	struct mt7915_phy *phy = mt7915_hw_phy(hw);
+ 	struct mt7915_sta *msta = (struct mt7915_sta *)sta->drv_priv;
+ 	struct mt7915_sta_stats *stats = &msta->stats;
++	struct rate_info rxrate = {};
+ 
+-	if (mt7915_mcu_get_rx_rate(phy, vif, sta, &sinfo->rxrate) == 0)
++	if (!mt7915_mcu_get_rx_rate(phy, vif, sta, &rxrate)) {
++		sinfo->rxrate = rxrate;
+ 		sinfo->filled |= BIT_ULL(NL80211_STA_INFO_RX_BITRATE);
++	}
+ 
+ 	if (!stats->tx_rate.legacy && !stats->tx_rate.flags)
+ 		return;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 195929242b72f..443cb09ae7cbd 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -351,54 +351,62 @@ mt7915_mcu_rx_radar_detected(struct mt7915_dev *dev, struct sk_buff *skb)
+ 	dev->hw_pattern++;
+ }
+ 
+-static void
++static int
+ mt7915_mcu_tx_rate_parse(struct mt76_phy *mphy, struct mt7915_mcu_ra_info *ra,
+ 			 struct rate_info *rate, u16 r)
+ {
+ 	struct ieee80211_supported_band *sband;
+ 	u16 ru_idx = le16_to_cpu(ra->ru_idx);
+-	u16 flags = 0;
++	bool cck = false;
+ 
+ 	rate->mcs = FIELD_GET(MT_RA_RATE_MCS, r);
+ 	rate->nss = FIELD_GET(MT_RA_RATE_NSS, r) + 1;
+ 
+ 	switch (FIELD_GET(MT_RA_RATE_TX_MODE, r)) {
+ 	case MT_PHY_TYPE_CCK:
++		cck = true;
++		fallthrough;
+ 	case MT_PHY_TYPE_OFDM:
+ 		if (mphy->chandef.chan->band == NL80211_BAND_5GHZ)
+ 			sband = &mphy->sband_5g.sband;
+ 		else
+ 			sband = &mphy->sband_2g.sband;
+ 
++		rate->mcs = mt76_get_rate(mphy->dev, sband, rate->mcs, cck);
+ 		rate->legacy = sband->bitrates[rate->mcs].bitrate;
+ 		break;
+ 	case MT_PHY_TYPE_HT:
+ 	case MT_PHY_TYPE_HT_GF:
+ 		rate->mcs += (rate->nss - 1) * 8;
+-		flags |= RATE_INFO_FLAGS_MCS;
++		if (rate->mcs > 31)
++			return -EINVAL;
+ 
++		rate->flags = RATE_INFO_FLAGS_MCS;
+ 		if (ra->gi)
+-			flags |= RATE_INFO_FLAGS_SHORT_GI;
++			rate->flags |= RATE_INFO_FLAGS_SHORT_GI;
+ 		break;
+ 	case MT_PHY_TYPE_VHT:
+-		flags |= RATE_INFO_FLAGS_VHT_MCS;
++		if (rate->mcs > 9)
++			return -EINVAL;
+ 
++		rate->flags = RATE_INFO_FLAGS_VHT_MCS;
+ 		if (ra->gi)
+-			flags |= RATE_INFO_FLAGS_SHORT_GI;
++			rate->flags |= RATE_INFO_FLAGS_SHORT_GI;
+ 		break;
+ 	case MT_PHY_TYPE_HE_SU:
+ 	case MT_PHY_TYPE_HE_EXT_SU:
+ 	case MT_PHY_TYPE_HE_TB:
+ 	case MT_PHY_TYPE_HE_MU:
++		if (ra->gi > NL80211_RATE_INFO_HE_GI_3_2 || rate->mcs > 11)
++			return -EINVAL;
++
+ 		rate->he_gi = ra->gi;
+ 		rate->he_dcm = FIELD_GET(MT_RA_RATE_DCM_EN, r);
+-
+-		flags |= RATE_INFO_FLAGS_HE_MCS;
++		rate->flags = RATE_INFO_FLAGS_HE_MCS;
+ 		break;
+ 	default:
+-		break;
++		return -EINVAL;
+ 	}
+-	rate->flags = flags;
+ 
+ 	if (ru_idx) {
+ 		switch (ru_idx) {
+@@ -435,6 +443,8 @@ mt7915_mcu_tx_rate_parse(struct mt76_phy *mphy, struct mt7915_mcu_ra_info *ra,
+ 			break;
+ 		}
+ 	}
++
++	return 0;
+ }
+ 
+ static void
+@@ -465,12 +475,12 @@ mt7915_mcu_tx_rate_report(struct mt7915_dev *dev, struct sk_buff *skb)
+ 		mphy = dev->mt76.phy2;
+ 
+ 	/* current rate */
+-	mt7915_mcu_tx_rate_parse(mphy, ra, &rate, curr);
+-	stats->tx_rate = rate;
++	if (!mt7915_mcu_tx_rate_parse(mphy, ra, &rate, curr))
++		stats->tx_rate = rate;
+ 
+ 	/* probing rate */
+-	mt7915_mcu_tx_rate_parse(mphy, ra, &prob_rate, probe);
+-	stats->prob_rate = prob_rate;
++	if (!mt7915_mcu_tx_rate_parse(mphy, ra, &prob_rate, probe))
++		stats->prob_rate = prob_rate;
+ 
+ 	if (attempts) {
+ 		u16 success = le16_to_cpu(ra->success);
+@@ -3501,9 +3511,8 @@ int mt7915_mcu_get_rx_rate(struct mt7915_phy *phy, struct ieee80211_vif *vif,
+ 	struct ieee80211_supported_band *sband;
+ 	struct mt7915_mcu_phy_rx_info *res;
+ 	struct sk_buff *skb;
+-	u16 flags = 0;
+ 	int ret;
+-	int i;
++	bool cck = false;
+ 
+ 	ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_EXT_CMD(PHY_STAT_INFO),
+ 					&req, sizeof(req), true, &skb);
+@@ -3517,48 +3526,53 @@ int mt7915_mcu_get_rx_rate(struct mt7915_phy *phy, struct ieee80211_vif *vif,
+ 
+ 	switch (res->mode) {
+ 	case MT_PHY_TYPE_CCK:
++		cck = true;
++		fallthrough;
+ 	case MT_PHY_TYPE_OFDM:
+ 		if (mphy->chandef.chan->band == NL80211_BAND_5GHZ)
+ 			sband = &mphy->sband_5g.sband;
+ 		else
+ 			sband = &mphy->sband_2g.sband;
+ 
+-		for (i = 0; i < sband->n_bitrates; i++) {
+-			if (rate->mcs != (sband->bitrates[i].hw_value & 0xf))
+-				continue;
+-
+-			rate->legacy = sband->bitrates[i].bitrate;
+-			break;
+-		}
++		rate->mcs = mt76_get_rate(&dev->mt76, sband, rate->mcs, cck);
++		rate->legacy = sband->bitrates[rate->mcs].bitrate;
+ 		break;
+ 	case MT_PHY_TYPE_HT:
+ 	case MT_PHY_TYPE_HT_GF:
+-		if (rate->mcs > 31)
+-			return -EINVAL;
+-
+-		flags |= RATE_INFO_FLAGS_MCS;
++		if (rate->mcs > 31) {
++			ret = -EINVAL;
++			goto out;
++		}
+ 
++		rate->flags = RATE_INFO_FLAGS_MCS;
+ 		if (res->gi)
+-			flags |= RATE_INFO_FLAGS_SHORT_GI;
++			rate->flags |= RATE_INFO_FLAGS_SHORT_GI;
+ 		break;
+ 	case MT_PHY_TYPE_VHT:
+-		flags |= RATE_INFO_FLAGS_VHT_MCS;
++		if (rate->mcs > 9) {
++			ret = -EINVAL;
++			goto out;
++		}
+ 
++		rate->flags = RATE_INFO_FLAGS_VHT_MCS;
+ 		if (res->gi)
+-			flags |= RATE_INFO_FLAGS_SHORT_GI;
++			rate->flags |= RATE_INFO_FLAGS_SHORT_GI;
+ 		break;
+ 	case MT_PHY_TYPE_HE_SU:
+ 	case MT_PHY_TYPE_HE_EXT_SU:
+ 	case MT_PHY_TYPE_HE_TB:
+ 	case MT_PHY_TYPE_HE_MU:
++		if (res->gi > NL80211_RATE_INFO_HE_GI_3_2 || rate->mcs > 11) {
++			ret = -EINVAL;
++			goto out;
++		}
+ 		rate->he_gi = res->gi;
+-
+-		flags |= RATE_INFO_FLAGS_HE_MCS;
++		rate->flags = RATE_INFO_FLAGS_HE_MCS;
+ 		break;
+ 	default:
+-		break;
++		ret = -EINVAL;
++		goto out;
+ 	}
+-	rate->flags = flags;
+ 
+ 	switch (res->bw) {
+ 	case IEEE80211_STA_RX_BW_160:
+@@ -3575,7 +3589,8 @@ int mt7915_mcu_get_rx_rate(struct mt7915_phy *phy, struct ieee80211_vif *vif,
+ 		break;
+ 	}
+ 
++out:
+ 	dev_kfree_skb(skb);
+ 
+-	return 0;
++	return ret;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+index 5c7eefdf2013c..1160d1bf8a7c3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+@@ -108,11 +108,11 @@ struct mt7915_vif {
+ };
+ 
+ struct mib_stats {
+-	u16 ack_fail_cnt;
+-	u16 fcs_err_cnt;
+-	u16 rts_cnt;
+-	u16 rts_retries_cnt;
+-	u16 ba_miss_cnt;
++	u32 ack_fail_cnt;
++	u32 fcs_err_cnt;
++	u32 rts_cnt;
++	u32 rts_retries_cnt;
++	u32 ba_miss_cnt;
+ };
+ 
+ struct mt7915_hif {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
+index 0dc8e25e18e4a..87a7ea12f3b3c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/debugfs.c
+@@ -9,10 +9,13 @@ mt7921_fw_debug_set(void *data, u64 val)
+ {
+ 	struct mt7921_dev *dev = data;
+ 
+-	dev->fw_debug = (u8)val;
++	mt7921_mutex_acquire(dev);
+ 
++	dev->fw_debug = (u8)val;
+ 	mt7921_mcu_fw_log_2_host(dev, dev->fw_debug);
+ 
++	mt7921_mutex_release(dev);
++
+ 	return 0;
+ }
+ 
+@@ -44,14 +47,13 @@ mt7921_ampdu_stat_read_phy(struct mt7921_phy *phy,
+ 		range[i] = mt76_rr(dev, MT_MIB_ARNG(0, i));
+ 
+ 	for (i = 0; i < ARRAY_SIZE(bound); i++)
+-		bound[i] = MT_MIB_ARNCR_RANGE(range[i / 4], i) + 1;
++		bound[i] = MT_MIB_ARNCR_RANGE(range[i / 4], i % 4) + 1;
+ 
+ 	seq_printf(file, "\nPhy0\n");
+ 
+ 	seq_printf(file, "Length: %8d | ", bound[0]);
+ 	for (i = 0; i < ARRAY_SIZE(bound) - 1; i++)
+-		seq_printf(file, "%3d -%3d | ",
+-			   bound[i] + 1, bound[i + 1]);
++		seq_printf(file, "%3d  %3d | ", bound[i] + 1, bound[i + 1]);
+ 
+ 	seq_puts(file, "\nCount:  ");
+ 	for (i = 0; i < ARRAY_SIZE(bound); i++)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+index 3f9097481a5ef..a6d2a25b3495f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+@@ -400,7 +400,9 @@ int mt7921_mac_fill_rx(struct mt7921_dev *dev, struct sk_buff *skb)
+ 
+ 	/* RXD Group 3 - P-RXV */
+ 	if (rxd1 & MT_RXD1_NORMAL_GROUP_3) {
+-		u32 v0, v1, v2;
++		u8 stbc, gi;
++		u32 v0, v1;
++		bool cck;
+ 
+ 		rxv = rxd;
+ 		rxd += 2;
+@@ -409,7 +411,6 @@ int mt7921_mac_fill_rx(struct mt7921_dev *dev, struct sk_buff *skb)
+ 
+ 		v0 = le32_to_cpu(rxv[0]);
+ 		v1 = le32_to_cpu(rxv[1]);
+-		v2 = le32_to_cpu(rxv[2]);
+ 
+ 		if (v0 & MT_PRXV_HT_AD_CODE)
+ 			status->enc_flags |= RX_ENC_FLAG_LDPC;
+@@ -429,87 +430,87 @@ int mt7921_mac_fill_rx(struct mt7921_dev *dev, struct sk_buff *skb)
+ 					     status->chain_signal[i]);
+ 		}
+ 
+-		/* RXD Group 5 - C-RXV */
+-		if (rxd1 & MT_RXD1_NORMAL_GROUP_5) {
+-			u8 stbc = FIELD_GET(MT_CRXV_HT_STBC, v2);
+-			u8 gi = FIELD_GET(MT_CRXV_HT_SHORT_GI, v2);
+-			bool cck = false;
++		stbc = FIELD_GET(MT_PRXV_STBC, v0);
++		gi = FIELD_GET(MT_PRXV_SGI, v0);
++		cck = false;
+ 
+-			rxd += 18;
+-			if ((u8 *)rxd - skb->data >= skb->len)
+-				return -EINVAL;
++		idx = i = FIELD_GET(MT_PRXV_TX_RATE, v0);
++		mode = FIELD_GET(MT_PRXV_TX_MODE, v0);
+ 
+-			idx = i = FIELD_GET(MT_PRXV_TX_RATE, v0);
+-			mode = FIELD_GET(MT_CRXV_TX_MODE, v2);
+-
+-			switch (mode) {
+-			case MT_PHY_TYPE_CCK:
+-				cck = true;
+-				fallthrough;
+-			case MT_PHY_TYPE_OFDM:
+-				i = mt76_get_rate(&dev->mt76, sband, i, cck);
+-				break;
+-			case MT_PHY_TYPE_HT_GF:
+-			case MT_PHY_TYPE_HT:
+-				status->encoding = RX_ENC_HT;
+-				if (i > 31)
+-					return -EINVAL;
+-				break;
+-			case MT_PHY_TYPE_VHT:
+-				status->nss =
+-					FIELD_GET(MT_PRXV_NSTS, v0) + 1;
+-				status->encoding = RX_ENC_VHT;
+-				if (i > 9)
+-					return -EINVAL;
+-				break;
+-			case MT_PHY_TYPE_HE_MU:
+-				status->flag |= RX_FLAG_RADIOTAP_HE_MU;
+-				fallthrough;
+-			case MT_PHY_TYPE_HE_SU:
+-			case MT_PHY_TYPE_HE_EXT_SU:
+-			case MT_PHY_TYPE_HE_TB:
+-				status->nss =
+-					FIELD_GET(MT_PRXV_NSTS, v0) + 1;
+-				status->encoding = RX_ENC_HE;
+-				status->flag |= RX_FLAG_RADIOTAP_HE;
+-				i &= GENMASK(3, 0);
+-
+-				if (gi <= NL80211_RATE_INFO_HE_GI_3_2)
+-					status->he_gi = gi;
+-
+-				status->he_dcm = !!(idx & MT_PRXV_TX_DCM);
+-				break;
+-			default:
++		switch (mode) {
++		case MT_PHY_TYPE_CCK:
++			cck = true;
++			fallthrough;
++		case MT_PHY_TYPE_OFDM:
++			i = mt76_get_rate(&dev->mt76, sband, i, cck);
++			break;
++		case MT_PHY_TYPE_HT_GF:
++		case MT_PHY_TYPE_HT:
++			status->encoding = RX_ENC_HT;
++			if (i > 31)
+ 				return -EINVAL;
+-			}
+-			status->rate_idx = i;
+-
+-			switch (FIELD_GET(MT_CRXV_FRAME_MODE, v2)) {
+-			case IEEE80211_STA_RX_BW_20:
+-				break;
+-			case IEEE80211_STA_RX_BW_40:
+-				if (mode & MT_PHY_TYPE_HE_EXT_SU &&
+-				    (idx & MT_PRXV_TX_ER_SU_106T)) {
+-					status->bw = RATE_INFO_BW_HE_RU;
+-					status->he_ru =
+-						NL80211_RATE_INFO_HE_RU_ALLOC_106;
+-				} else {
+-					status->bw = RATE_INFO_BW_40;
+-				}
+-				break;
+-			case IEEE80211_STA_RX_BW_80:
+-				status->bw = RATE_INFO_BW_80;
+-				break;
+-			case IEEE80211_STA_RX_BW_160:
+-				status->bw = RATE_INFO_BW_160;
+-				break;
+-			default:
++			break;
++		case MT_PHY_TYPE_VHT:
++			status->nss =
++				FIELD_GET(MT_PRXV_NSTS, v0) + 1;
++			status->encoding = RX_ENC_VHT;
++			if (i > 9)
+ 				return -EINVAL;
++			break;
++		case MT_PHY_TYPE_HE_MU:
++			status->flag |= RX_FLAG_RADIOTAP_HE_MU;
++			fallthrough;
++		case MT_PHY_TYPE_HE_SU:
++		case MT_PHY_TYPE_HE_EXT_SU:
++		case MT_PHY_TYPE_HE_TB:
++			status->nss =
++				FIELD_GET(MT_PRXV_NSTS, v0) + 1;
++			status->encoding = RX_ENC_HE;
++			status->flag |= RX_FLAG_RADIOTAP_HE;
++			i &= GENMASK(3, 0);
++
++			if (gi <= NL80211_RATE_INFO_HE_GI_3_2)
++				status->he_gi = gi;
++
++			status->he_dcm = !!(idx & MT_PRXV_TX_DCM);
++			break;
++		default:
++			return -EINVAL;
++		}
++
++		status->rate_idx = i;
++
++		switch (FIELD_GET(MT_PRXV_FRAME_MODE, v0)) {
++		case IEEE80211_STA_RX_BW_20:
++			break;
++		case IEEE80211_STA_RX_BW_40:
++			if (mode & MT_PHY_TYPE_HE_EXT_SU &&
++			    (idx & MT_PRXV_TX_ER_SU_106T)) {
++				status->bw = RATE_INFO_BW_HE_RU;
++				status->he_ru =
++					NL80211_RATE_INFO_HE_RU_ALLOC_106;
++			} else {
++				status->bw = RATE_INFO_BW_40;
+ 			}
++			break;
++		case IEEE80211_STA_RX_BW_80:
++			status->bw = RATE_INFO_BW_80;
++			break;
++		case IEEE80211_STA_RX_BW_160:
++			status->bw = RATE_INFO_BW_160;
++			break;
++		default:
++			return -EINVAL;
++		}
+ 
+-			status->enc_flags |= RX_ENC_FLAG_STBC_MASK * stbc;
+-			if (mode < MT_PHY_TYPE_HE_SU && gi)
+-				status->enc_flags |= RX_ENC_FLAG_SHORT_GI;
++		status->enc_flags |= RX_ENC_FLAG_STBC_MASK * stbc;
++		if (mode < MT_PHY_TYPE_HE_SU && gi)
++			status->enc_flags |= RX_ENC_FLAG_SHORT_GI;
++
++		if (rxd1 & MT_RXD1_NORMAL_GROUP_5) {
++			rxd += 18;
++			if ((u8 *)rxd - skb->data >= skb->len)
++				return -EINVAL;
+ 		}
+ 	}
+ 
+@@ -1317,31 +1318,20 @@ mt7921_mac_update_mib_stats(struct mt7921_phy *phy)
+ 	struct mib_stats *mib = &phy->mib;
+ 	int i, aggr0 = 0, aggr1;
+ 
+-	memset(mib, 0, sizeof(*mib));
+-
+-	mib->fcs_err_cnt = mt76_get_field(dev, MT_MIB_SDR3(0),
+-					  MT_MIB_SDR3_FCS_ERR_MASK);
++	mib->fcs_err_cnt += mt76_get_field(dev, MT_MIB_SDR3(0),
++					   MT_MIB_SDR3_FCS_ERR_MASK);
++	mib->ack_fail_cnt += mt76_get_field(dev, MT_MIB_MB_BSDR3(0),
++					    MT_MIB_ACK_FAIL_COUNT_MASK);
++	mib->ba_miss_cnt += mt76_get_field(dev, MT_MIB_MB_BSDR2(0),
++					   MT_MIB_BA_FAIL_COUNT_MASK);
++	mib->rts_cnt += mt76_get_field(dev, MT_MIB_MB_BSDR0(0),
++				       MT_MIB_RTS_COUNT_MASK);
++	mib->rts_retries_cnt += mt76_get_field(dev, MT_MIB_MB_BSDR1(0),
++					       MT_MIB_RTS_FAIL_COUNT_MASK);
+ 
+ 	for (i = 0, aggr1 = aggr0 + 4; i < 4; i++) {
+ 		u32 val, val2;
+ 
+-		val = mt76_rr(dev, MT_MIB_MB_SDR1(0, i));
+-
+-		val2 = FIELD_GET(MT_MIB_ACK_FAIL_COUNT_MASK, val);
+-		if (val2 > mib->ack_fail_cnt)
+-			mib->ack_fail_cnt = val2;
+-
+-		val2 = FIELD_GET(MT_MIB_BA_MISS_COUNT_MASK, val);
+-		if (val2 > mib->ba_miss_cnt)
+-			mib->ba_miss_cnt = val2;
+-
+-		val = mt76_rr(dev, MT_MIB_MB_SDR0(0, i));
+-		val2 = FIELD_GET(MT_MIB_RTS_RETRIES_COUNT_MASK, val);
+-		if (val2 > mib->rts_retries_cnt) {
+-			mib->rts_cnt = FIELD_GET(MT_MIB_RTS_COUNT_MASK, val);
+-			mib->rts_retries_cnt = val2;
+-		}
+-
+ 		val = mt76_rr(dev, MT_TX_AGG_CNT(0, i));
+ 		val2 = mt76_rr(dev, MT_TX_AGG_CNT2(0, i));
+ 
+@@ -1503,8 +1493,10 @@ void mt7921_coredump_work(struct work_struct *work)
+ 			break;
+ 
+ 		skb_pull(skb, sizeof(struct mt7921_mcu_rxd));
+-		if (data + skb->len - dump > MT76_CONNAC_COREDUMP_SZ)
+-			break;
++		if (data + skb->len - dump > MT76_CONNAC_COREDUMP_SZ) {
++			dev_kfree_skb(skb);
++			continue;
++		}
+ 
+ 		memcpy(data, skb->data, skb->len);
+ 		data += skb->len;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.h b/drivers/net/wireless/mediatek/mt76/mt7921/mac.h
+index a0c1fa0f20e46..109c8849d106a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.h
+@@ -97,18 +97,24 @@ enum rx_pkt_type {
+ #define MT_RXD3_NORMAL_PF_MODE		BIT(29)
+ #define MT_RXD3_NORMAL_PF_STS		GENMASK(31, 30)
+ 
+-/* P-RXV */
++/* P-RXV DW0 */
+ #define MT_PRXV_TX_RATE			GENMASK(6, 0)
+ #define MT_PRXV_TX_DCM			BIT(4)
+ #define MT_PRXV_TX_ER_SU_106T		BIT(5)
+ #define MT_PRXV_NSTS			GENMASK(9, 7)
+ #define MT_PRXV_HT_AD_CODE		BIT(11)
++#define MT_PRXV_FRAME_MODE		GENMASK(14, 12)
++#define MT_PRXV_SGI			GENMASK(16, 15)
++#define MT_PRXV_STBC			GENMASK(23, 22)
++#define MT_PRXV_TX_MODE			GENMASK(27, 24)
+ #define MT_PRXV_HE_RU_ALLOC_L		GENMASK(31, 28)
+-#define MT_PRXV_HE_RU_ALLOC_H		GENMASK(3, 0)
++
++/* P-RXV DW1 */
+ #define MT_PRXV_RCPI3			GENMASK(31, 24)
+ #define MT_PRXV_RCPI2			GENMASK(23, 16)
+ #define MT_PRXV_RCPI1			GENMASK(15, 8)
+ #define MT_PRXV_RCPI0			GENMASK(7, 0)
++#define MT_PRXV_HE_RU_ALLOC_H		GENMASK(3, 0)
+ 
+ /* C-RXV */
+ #define MT_CRXV_HT_STBC			GENMASK(1, 0)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 729f6c42cddee..cd9fd0e24e3e6 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -348,6 +348,7 @@ static void mt7921_remove_interface(struct ieee80211_hw *hw,
+ 	if (vif == phy->monitor_vif)
+ 		phy->monitor_vif = NULL;
+ 
++	mt7921_mutex_acquire(dev);
+ 	mt76_connac_free_pending_tx_skbs(&dev->pm, &msta->wcid);
+ 
+ 	if (dev->pm.enable) {
+@@ -360,7 +361,6 @@ static void mt7921_remove_interface(struct ieee80211_hw *hw,
+ 
+ 	rcu_assign_pointer(dev->mt76.wcid[idx], NULL);
+ 
+-	mt7921_mutex_acquire(dev);
+ 	dev->mt76.vif_mask &= ~BIT(mvif->mt76.idx);
+ 	phy->omac_mask &= ~BIT_ULL(mvif->mt76.omac_idx);
+ 	mt7921_mutex_release(dev);
+@@ -587,6 +587,9 @@ static void mt7921_bss_info_changed(struct ieee80211_hw *hw,
+ 	if (changed & BSS_CHANGED_PS)
+ 		mt7921_mcu_uni_bss_ps(dev, vif);
+ 
++	if (changed & BSS_CHANGED_ARP_FILTER)
++		mt7921_mcu_update_arp_filter(hw, vif, info);
++
+ 	mt7921_mutex_release(dev);
+ }
+ 
+@@ -814,11 +817,17 @@ mt7921_get_stats(struct ieee80211_hw *hw,
+ 	struct mt7921_phy *phy = mt7921_hw_phy(hw);
+ 	struct mib_stats *mib = &phy->mib;
+ 
++	mt7921_mutex_acquire(phy->dev);
++
+ 	stats->dot11RTSSuccessCount = mib->rts_cnt;
+ 	stats->dot11RTSFailureCount = mib->rts_retries_cnt;
+ 	stats->dot11FCSErrorCount = mib->fcs_err_cnt;
+ 	stats->dot11ACKFailureCount = mib->ack_fail_cnt;
+ 
++	memset(mib, 0, sizeof(*mib));
++
++	mt7921_mutex_release(phy->dev);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index b5cc72e7e81c3..62afbad77596b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -1304,3 +1304,47 @@ mt7921_pm_interface_iter(void *priv, u8 *mac, struct ieee80211_vif *vif)
+ 		mt76_clear(dev, MT_WF_RFCR(0), MT_WF_RFCR_DROP_OTHER_BEACON);
+ 	}
+ }
++
++int mt7921_mcu_update_arp_filter(struct ieee80211_hw *hw,
++				 struct ieee80211_vif *vif,
++				 struct ieee80211_bss_conf *info)
++{
++	struct mt7921_vif *mvif = (struct mt7921_vif *)vif->drv_priv;
++	struct mt7921_dev *dev = mt7921_hw_dev(hw);
++	struct sk_buff *skb;
++	int i, len = min_t(int, info->arp_addr_cnt,
++			   IEEE80211_BSS_ARP_ADDR_LIST_LEN);
++	struct {
++		struct {
++			u8 bss_idx;
++			u8 pad[3];
++		} __packed hdr;
++		struct mt76_connac_arpns_tlv arp;
++	} req_hdr = {
++		.hdr = {
++			.bss_idx = mvif->mt76.idx,
++		},
++		.arp = {
++			.tag = cpu_to_le16(UNI_OFFLOAD_OFFLOAD_ARP),
++			.len = cpu_to_le16(sizeof(struct mt76_connac_arpns_tlv)),
++			.ips_num = len,
++			.mode = 2,  /* update */
++			.option = 1,
++		},
++	};
++
++	skb = mt76_mcu_msg_alloc(&dev->mt76, NULL,
++				 sizeof(req_hdr) + len * sizeof(__be32));
++	if (!skb)
++		return -ENOMEM;
++
++	skb_put_data(skb, &req_hdr, sizeof(req_hdr));
++	for (i = 0; i < len; i++) {
++		u8 *addr = (u8 *)skb_put(skb, sizeof(__be32));
++
++		memcpy(addr, &info->arp_addr_list[i], sizeof(__be32));
++	}
++
++	return mt76_mcu_skb_send_msg(&dev->mt76, skb, MCU_UNI_CMD_OFFLOAD,
++				     true);
++}
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+index 46e6aeec35ae3..25a1a6acb6baf 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+@@ -102,11 +102,11 @@ struct mt7921_vif {
+ };
+ 
+ struct mib_stats {
+-	u16 ack_fail_cnt;
+-	u16 fcs_err_cnt;
+-	u16 rts_cnt;
+-	u16 rts_retries_cnt;
+-	u16 ba_miss_cnt;
++	u32 ack_fail_cnt;
++	u32 fcs_err_cnt;
++	u32 rts_cnt;
++	u32 rts_retries_cnt;
++	u32 ba_miss_cnt;
+ };
+ 
+ struct mt7921_phy {
+@@ -339,4 +339,7 @@ int mt7921_mac_set_beacon_filter(struct mt7921_phy *phy,
+ 				 bool enable);
+ void mt7921_pm_interface_iter(void *priv, u8 *mac, struct ieee80211_vif *vif);
+ void mt7921_coredump_work(struct work_struct *work);
++int mt7921_mcu_update_arp_filter(struct ieee80211_hw *hw,
++				 struct ieee80211_vif *vif,
++				 struct ieee80211_bss_conf *info);
+ #endif
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+index 5570b4a505314..80f6f29892a45 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+@@ -137,7 +137,7 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ 
+ 	mt76_wr(dev, MT_WFDMA0_HOST_INT_ENA, 0);
+ 
+-	mt7921_l1_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0xff);
++	mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0xff);
+ 
+ 	ret = devm_request_irq(mdev->dev, pdev->irq, mt7921_irq_handler,
+ 			       IRQF_SHARED, KBUILD_MODNAME, dev);
+@@ -146,10 +146,12 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ 
+ 	ret = mt7921_register_device(dev);
+ 	if (ret)
+-		goto err_free_dev;
++		goto err_free_irq;
+ 
+ 	return 0;
+ 
++err_free_irq:
++	devm_free_irq(&pdev->dev, pdev->irq, dev);
+ err_free_dev:
+ 	mt76_free_device(&dev->mt76);
+ err_free_pci_vec:
+@@ -193,7 +195,6 @@ static int mt7921_pci_suspend(struct pci_dev *pdev, pm_message_t state)
+ 	mt76_for_each_q_rx(mdev, i) {
+ 		napi_disable(&mdev->napi[i]);
+ 	}
+-	tasklet_kill(&dev->irq_tasklet);
+ 
+ 	pci_enable_wake(pdev, pci_choose_state(pdev, state), true);
+ 
+@@ -208,13 +209,16 @@ static int mt7921_pci_suspend(struct pci_dev *pdev, pm_message_t state)
+ 
+ 	/* disable interrupt */
+ 	mt76_wr(dev, MT_WFDMA0_HOST_INT_ENA, 0);
++	mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0x0);
++	synchronize_irq(pdev->irq);
++	tasklet_kill(&dev->irq_tasklet);
+ 
+-	pci_save_state(pdev);
+-	err = pci_set_power_state(pdev, pci_choose_state(pdev, state));
++	err = mt7921_mcu_fw_pmctrl(dev);
+ 	if (err)
+ 		goto restore;
+ 
+-	err = mt7921_mcu_drv_pmctrl(dev);
++	pci_save_state(pdev);
++	err = pci_set_power_state(pdev, pci_choose_state(pdev, state));
+ 	if (err)
+ 		goto restore;
+ 
+@@ -237,18 +241,18 @@ static int mt7921_pci_resume(struct pci_dev *pdev)
+ 	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
+ 	int i, err;
+ 
+-	err = mt7921_mcu_fw_pmctrl(dev);
+-	if (err < 0)
+-		return err;
+-
+ 	err = pci_set_power_state(pdev, PCI_D0);
+ 	if (err)
+ 		return err;
+ 
+ 	pci_restore_state(pdev);
+ 
++	err = mt7921_mcu_drv_pmctrl(dev);
++	if (err < 0)
++		return err;
++
+ 	/* enable interrupt */
+-	mt7921_l1_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0xff);
++	mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0xff);
+ 	mt7921_irq_enable(dev, MT_INT_RX_DONE_ALL | MT_INT_TX_DONE_ALL |
+ 			  MT_INT_MCU_CMD);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/regs.h b/drivers/net/wireless/mediatek/mt76/mt7921/regs.h
+index 6dad7f6ab09df..73878d3e24951 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/regs.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/regs.h
+@@ -96,8 +96,8 @@
+ #define MT_WF_MIB_BASE(_band)		((_band) ? 0xa4800 : 0x24800)
+ #define MT_WF_MIB(_band, ofs)		(MT_WF_MIB_BASE(_band) + (ofs))
+ 
+-#define MT_MIB_SDR3(_band)		MT_WF_MIB(_band, 0x014)
+-#define MT_MIB_SDR3_FCS_ERR_MASK	GENMASK(15, 0)
++#define MT_MIB_SDR3(_band)		MT_WF_MIB(_band, 0x698)
++#define MT_MIB_SDR3_FCS_ERR_MASK	GENMASK(31, 16)
+ 
+ #define MT_MIB_SDR9(_band)		MT_WF_MIB(_band, 0x02c)
+ #define MT_MIB_SDR9_BUSY_MASK		GENMASK(23, 0)
+@@ -121,16 +121,21 @@
+ #define MT_MIB_RTS_RETRIES_COUNT_MASK	GENMASK(31, 16)
+ #define MT_MIB_RTS_COUNT_MASK		GENMASK(15, 0)
+ 
+-#define MT_MIB_MB_SDR1(_band, n)	MT_WF_MIB(_band, 0x104 + ((n) << 4))
+-#define MT_MIB_BA_MISS_COUNT_MASK	GENMASK(15, 0)
+-#define MT_MIB_ACK_FAIL_COUNT_MASK	GENMASK(31, 16)
++#define MT_MIB_MB_BSDR0(_band)		MT_WF_MIB(_band, 0x688)
++#define MT_MIB_RTS_COUNT_MASK		GENMASK(15, 0)
++#define MT_MIB_MB_BSDR1(_band)		MT_WF_MIB(_band, 0x690)
++#define MT_MIB_RTS_FAIL_COUNT_MASK	GENMASK(15, 0)
++#define MT_MIB_MB_BSDR2(_band)		MT_WF_MIB(_band, 0x518)
++#define MT_MIB_BA_FAIL_COUNT_MASK	GENMASK(15, 0)
++#define MT_MIB_MB_BSDR3(_band)		MT_WF_MIB(_band, 0x520)
++#define MT_MIB_ACK_FAIL_COUNT_MASK	GENMASK(15, 0)
+ 
+ #define MT_MIB_MB_SDR2(_band, n)	MT_WF_MIB(_band, 0x108 + ((n) << 4))
+ #define MT_MIB_FRAME_RETRIES_COUNT_MASK	GENMASK(15, 0)
+ 
+-#define MT_TX_AGG_CNT(_band, n)		MT_WF_MIB(_band, 0x0a8 + ((n) << 2))
+-#define MT_TX_AGG_CNT2(_band, n)	MT_WF_MIB(_band, 0x164 + ((n) << 2))
+-#define MT_MIB_ARNG(_band, n)		MT_WF_MIB(_band, 0x4b8 + ((n) << 2))
++#define MT_TX_AGG_CNT(_band, n)		MT_WF_MIB(_band, 0x7dc + ((n) << 2))
++#define MT_TX_AGG_CNT2(_band, n)	MT_WF_MIB(_band, 0x7ec + ((n) << 2))
++#define MT_MIB_ARNG(_band, n)		MT_WF_MIB(_band, 0x0b0 + ((n) << 2))
+ #define MT_MIB_ARNCR_RANGE(val, n)	(((val) >> ((n) << 3)) & GENMASK(7, 0))
+ 
+ #define MT_WTBLON_TOP_BASE		0x34000
+@@ -357,11 +362,11 @@
+ #define MT_INFRA_CFG_BASE		0xfe000
+ #define MT_INFRA(ofs)			(MT_INFRA_CFG_BASE + (ofs))
+ 
+-#define MT_HIF_REMAP_L1			MT_INFRA(0x260)
++#define MT_HIF_REMAP_L1			MT_INFRA(0x24c)
+ #define MT_HIF_REMAP_L1_MASK		GENMASK(15, 0)
+ #define MT_HIF_REMAP_L1_OFFSET		GENMASK(15, 0)
+ #define MT_HIF_REMAP_L1_BASE		GENMASK(31, 16)
+-#define MT_HIF_REMAP_BASE_L1		0xe0000
++#define MT_HIF_REMAP_BASE_L1		0x40000
+ 
+ #define MT_SWDEF_BASE			0x41f200
+ #define MT_SWDEF(ofs)			(MT_SWDEF_BASE + (ofs))
+@@ -384,7 +389,7 @@
+ #define MT_HW_CHIPID			0x70010200
+ #define MT_HW_REV			0x70010204
+ 
+-#define MT_PCIE_MAC_BASE		0x74030000
++#define MT_PCIE_MAC_BASE		0x10000
+ #define MT_PCIE_MAC(ofs)		(MT_PCIE_MAC_BASE + (ofs))
+ #define MT_PCIE_MAC_INT_ENABLE		MT_PCIE_MAC(0x188)
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/sdio.c b/drivers/net/wireless/mediatek/mt76/sdio.c
+index 0b6facb17ff72..a18d2896ee1fb 100644
+--- a/drivers/net/wireless/mediatek/mt76/sdio.c
++++ b/drivers/net/wireless/mediatek/mt76/sdio.c
+@@ -256,6 +256,9 @@ mt76s_tx_queue_skb(struct mt76_dev *dev, struct mt76_queue *q,
+ 
+ 	q->entry[q->head].skb = tx_info.skb;
+ 	q->entry[q->head].buf_sz = len;
++
++	smp_wmb();
++
+ 	q->head = (q->head + 1) % q->ndesc;
+ 	q->queued++;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c
+index b8fe8adc43a31..451ed60c62961 100644
+--- a/drivers/net/wireless/mediatek/mt76/tx.c
++++ b/drivers/net/wireless/mediatek/mt76/tx.c
+@@ -461,11 +461,11 @@ mt76_txq_schedule_list(struct mt76_phy *phy, enum mt76_txq_id qid)
+ 	int ret = 0;
+ 
+ 	while (1) {
++		int n_frames = 0;
++
+ 		if (test_bit(MT76_STATE_PM, &phy->state) ||
+-		    test_bit(MT76_RESET, &phy->state)) {
+-			ret = -EBUSY;
+-			break;
+-		}
++		    test_bit(MT76_RESET, &phy->state))
++			return -EBUSY;
+ 
+ 		if (dev->queue_ops->tx_cleanup &&
+ 		    q->queued + 2 * MT_TXQ_FREE_THR >= q->ndesc) {
+@@ -497,11 +497,16 @@ mt76_txq_schedule_list(struct mt76_phy *phy, enum mt76_txq_id qid)
+ 		}
+ 
+ 		if (!mt76_txq_stopped(q))
+-			ret += mt76_txq_send_burst(phy, q, mtxq);
++			n_frames = mt76_txq_send_burst(phy, q, mtxq);
+ 
+ 		spin_unlock_bh(&q->lock);
+ 
+ 		ieee80211_return_txq(phy->hw, txq, false);
++
++		if (unlikely(n_frames < 0))
++			return n_frames;
++
++		ret += n_frames;
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/net/wireless/mediatek/mt7601u/eeprom.c b/drivers/net/wireless/mediatek/mt7601u/eeprom.c
+index c868582c5d225..aa3b64902cf9b 100644
+--- a/drivers/net/wireless/mediatek/mt7601u/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt7601u/eeprom.c
+@@ -99,7 +99,7 @@ mt7601u_has_tssi(struct mt7601u_dev *dev, u8 *eeprom)
+ {
+ 	u16 nic_conf1 = get_unaligned_le16(eeprom + MT_EE_NIC_CONF_1);
+ 
+-	return ~nic_conf1 && (nic_conf1 & MT_EE_NIC_CONF_1_TX_ALC_EN);
++	return (u16)~nic_conf1 && (nic_conf1 & MT_EE_NIC_CONF_1_TX_ALC_EN);
+ }
+ 
+ static void
+diff --git a/drivers/net/wireless/microchip/wilc1000/sdio.c b/drivers/net/wireless/microchip/wilc1000/sdio.c
+index 351ff909ab1c7..e14b9fc2c67ac 100644
+--- a/drivers/net/wireless/microchip/wilc1000/sdio.c
++++ b/drivers/net/wireless/microchip/wilc1000/sdio.c
+@@ -947,7 +947,7 @@ static int wilc_sdio_sync_ext(struct wilc *wilc, int nint)
+ 			for (i = 0; (i < 3) && (nint > 0); i++, nint--)
+ 				reg |= BIT(i);
+ 
+-			ret = wilc_sdio_read_reg(wilc, WILC_INTR2_ENABLE, &reg);
++			ret = wilc_sdio_write_reg(wilc, WILC_INTR2_ENABLE, reg);
+ 			if (ret) {
+ 				dev_err(&func->dev,
+ 					"Failed write reg (%08x)...\n",
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c
+index 27c8a5d965208..fcaaf664cbec5 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/table.c
+@@ -249,7 +249,7 @@ u32 RTL8821AE_PHY_REG_ARRAY[] = {
+ 	0x824, 0x00030FE0,
+ 	0x828, 0x00000000,
+ 	0x82C, 0x002081DD,
+-	0x830, 0x2AAA8E24,
++	0x830, 0x2AAAEEC8,
+ 	0x834, 0x0037A706,
+ 	0x838, 0x06489B44,
+ 	0x83C, 0x0000095B,
+@@ -324,10 +324,10 @@ u32 RTL8821AE_PHY_REG_ARRAY[] = {
+ 	0x9D8, 0x00000000,
+ 	0x9DC, 0x00000000,
+ 	0x9E0, 0x00005D00,
+-	0x9E4, 0x00000002,
++	0x9E4, 0x00000003,
+ 	0x9E8, 0x00000001,
+ 	0xA00, 0x00D047C8,
+-	0xA04, 0x01FF000C,
++	0xA04, 0x01FF800C,
+ 	0xA08, 0x8C8A8300,
+ 	0xA0C, 0x2E68000F,
+ 	0xA10, 0x9500BB78,
+@@ -1320,7 +1320,11 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x083, 0x00021800,
+ 		0x084, 0x00028000,
+ 		0x085, 0x00048000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
++		0x086, 0x0009483A,
++	0xA0000000,	0x00000000,
+ 		0x086, 0x00094838,
++	0xB0000000,	0x00000000,
+ 		0x087, 0x00044980,
+ 		0x088, 0x00048000,
+ 		0x089, 0x0000D480,
+@@ -1409,36 +1413,32 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x03C, 0x000CA000,
+ 		0x0EF, 0x00000000,
+ 		0x0EF, 0x00001100,
+-	0xFF0F0104, 0xABCD,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0004ADF3,
+ 		0x034, 0x00049DF0,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0004ADF3,
+ 		0x034, 0x00049DF0,
+-	0xFF0F0404, 0xCDEF,
+-		0x034, 0x0004ADF3,
+-		0x034, 0x00049DF0,
+-	0xFF0F0200, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0004ADF5,
+ 		0x034, 0x00049DF2,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x0004A0F3,
++		0x034, 0x000490B1,
++		0x9000040c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0004A0F3,
+ 		0x034, 0x000490B1,
+-	0xCDCDCDCD, 0xCDCD,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x0004ADF5,
++		0x034, 0x00049DF2,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x0004ADF3,
++		0x034, 0x00049DF0,
++	0xA0000000,	0x00000000,
+ 		0x034, 0x0004ADF7,
+ 		0x034, 0x00049DF3,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
+-		0x034, 0x00048DED,
+-		0x034, 0x00047DEA,
+-		0x034, 0x00046DE7,
+-		0x034, 0x00045CE9,
+-		0x034, 0x00044CE6,
+-		0x034, 0x000438C6,
+-		0x034, 0x00042886,
+-		0x034, 0x00041486,
+-		0x034, 0x00040447,
+-	0xFF0F0204, 0xCDEF,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00048DED,
+ 		0x034, 0x00047DEA,
+ 		0x034, 0x00046DE7,
+@@ -1448,7 +1448,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00042886,
+ 		0x034, 0x00041486,
+ 		0x034, 0x00040447,
+-	0xFF0F0404, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00048DED,
+ 		0x034, 0x00047DEA,
+ 		0x034, 0x00046DE7,
+@@ -1458,7 +1458,17 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00042886,
+ 		0x034, 0x00041486,
+ 		0x034, 0x00040447,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x000480AE,
++		0x034, 0x000470AB,
++		0x034, 0x0004608B,
++		0x034, 0x00045069,
++		0x034, 0x00044048,
++		0x034, 0x00043045,
++		0x034, 0x00042026,
++		0x034, 0x00041023,
++		0x034, 0x00040002,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x000480AE,
+ 		0x034, 0x000470AB,
+ 		0x034, 0x0004608B,
+@@ -1468,7 +1478,17 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00042026,
+ 		0x034, 0x00041023,
+ 		0x034, 0x00040002,
+-	0xCDCDCDCD, 0xCDCD,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x00048DED,
++		0x034, 0x00047DEA,
++		0x034, 0x00046DE7,
++		0x034, 0x00045CE9,
++		0x034, 0x00044CE6,
++		0x034, 0x000438C6,
++		0x034, 0x00042886,
++		0x034, 0x00041486,
++		0x034, 0x00040447,
++	0xA0000000,	0x00000000,
+ 		0x034, 0x00048DEF,
+ 		0x034, 0x00047DEC,
+ 		0x034, 0x00046DE9,
+@@ -1478,38 +1498,36 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x0004248A,
+ 		0x034, 0x0004108D,
+ 		0x034, 0x0004008A,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0200, 0xABCD,
++	0xB0000000,	0x00000000,
++	0x80000210,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0002ADF4,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x0002A0F3,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0002A0F3,
+-	0xCDCDCDCD, 0xCDCD,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x0002ADF4,
++	0xA0000000,	0x00000000,
+ 		0x034, 0x0002ADF7,
+-	0xFF0F0200, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
+-		0x034, 0x00029DF4,
+-	0xFF0F0204, 0xCDEF,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00029DF4,
+-	0xFF0F0404, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00029DF4,
+-	0xFF0F0200, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00029DF1,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x000290F0,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x000290F0,
+-	0xCDCDCDCD, 0xCDCD,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x00029DF1,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x00029DF4,
++	0xA0000000,	0x00000000,
+ 		0x034, 0x00029DF2,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
+-		0x034, 0x00028DF1,
+-		0x034, 0x00027DEE,
+-		0x034, 0x00026DEB,
+-		0x034, 0x00025CEC,
+-		0x034, 0x00024CE9,
+-		0x034, 0x000238CA,
+-		0x034, 0x00022889,
+-		0x034, 0x00021489,
+-		0x034, 0x0002044A,
+-	0xFF0F0204, 0xCDEF,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00028DF1,
+ 		0x034, 0x00027DEE,
+ 		0x034, 0x00026DEB,
+@@ -1519,7 +1537,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00022889,
+ 		0x034, 0x00021489,
+ 		0x034, 0x0002044A,
+-	0xFF0F0404, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00028DF1,
+ 		0x034, 0x00027DEE,
+ 		0x034, 0x00026DEB,
+@@ -1529,7 +1547,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00022889,
+ 		0x034, 0x00021489,
+ 		0x034, 0x0002044A,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x000280AF,
+ 		0x034, 0x000270AC,
+ 		0x034, 0x0002608B,
+@@ -1539,7 +1557,27 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00022026,
+ 		0x034, 0x00021023,
+ 		0x034, 0x00020002,
+-	0xCDCDCDCD, 0xCDCD,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x000280AF,
++		0x034, 0x000270AC,
++		0x034, 0x0002608B,
++		0x034, 0x00025069,
++		0x034, 0x00024048,
++		0x034, 0x00023045,
++		0x034, 0x00022026,
++		0x034, 0x00021023,
++		0x034, 0x00020002,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x00028DF1,
++		0x034, 0x00027DEE,
++		0x034, 0x00026DEB,
++		0x034, 0x00025CEC,
++		0x034, 0x00024CE9,
++		0x034, 0x000238CA,
++		0x034, 0x00022889,
++		0x034, 0x00021489,
++		0x034, 0x0002044A,
++	0xA0000000,	0x00000000,
+ 		0x034, 0x00028DEE,
+ 		0x034, 0x00027DEB,
+ 		0x034, 0x00026CCD,
+@@ -1549,27 +1587,24 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00022849,
+ 		0x034, 0x00021449,
+ 		0x034, 0x0002004D,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F02C0, 0xABCD,
++	0xB0000000,	0x00000000,
++	0x8000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x0000A0D7,
++		0x034, 0x000090D3,
++		0x034, 0x000080B1,
++		0x034, 0x000070AE,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0000A0D7,
+ 		0x034, 0x000090D3,
+ 		0x034, 0x000080B1,
+ 		0x034, 0x000070AE,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x034, 0x0000ADF7,
+ 		0x034, 0x00009DF4,
+ 		0x034, 0x00008DF1,
+ 		0x034, 0x00007DEE,
+-	0xFF0F02C0, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
+-		0x034, 0x00006DEB,
+-		0x034, 0x00005CEC,
+-		0x034, 0x00004CE9,
+-		0x034, 0x000038CA,
+-		0x034, 0x00002889,
+-		0x034, 0x00001489,
+-		0x034, 0x0000044A,
+-	0xFF0F0204, 0xCDEF,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00006DEB,
+ 		0x034, 0x00005CEC,
+ 		0x034, 0x00004CE9,
+@@ -1577,7 +1612,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00002889,
+ 		0x034, 0x00001489,
+ 		0x034, 0x0000044A,
+-	0xFF0F0404, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x00006DEB,
+ 		0x034, 0x00005CEC,
+ 		0x034, 0x00004CE9,
+@@ -1585,7 +1620,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00002889,
+ 		0x034, 0x00001489,
+ 		0x034, 0x0000044A,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x034, 0x0000608D,
+ 		0x034, 0x0000506B,
+ 		0x034, 0x0000404A,
+@@ -1593,7 +1628,23 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00002044,
+ 		0x034, 0x00001025,
+ 		0x034, 0x00000004,
+-	0xCDCDCDCD, 0xCDCD,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x0000608D,
++		0x034, 0x0000506B,
++		0x034, 0x0000404A,
++		0x034, 0x00003047,
++		0x034, 0x00002044,
++		0x034, 0x00001025,
++		0x034, 0x00000004,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
++		0x034, 0x00006DEB,
++		0x034, 0x00005CEC,
++		0x034, 0x00004CE9,
++		0x034, 0x000038CA,
++		0x034, 0x00002889,
++		0x034, 0x00001489,
++		0x034, 0x0000044A,
++	0xA0000000,	0x00000000,
+ 		0x034, 0x00006DCD,
+ 		0x034, 0x00005CCD,
+ 		0x034, 0x00004CCA,
+@@ -1601,11 +1652,11 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x034, 0x00002888,
+ 		0x034, 0x00001488,
+ 		0x034, 0x00000486,
+-	0xFF0F0104, 0xDEAD,
++	0xB0000000,	0x00000000,
+ 		0x0EF, 0x00000000,
+ 		0x018, 0x0001712A,
+ 		0x0EF, 0x00000040,
+-	0xFF0F0104, 0xABCD,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x035, 0x00000187,
+ 		0x035, 0x00008187,
+ 		0x035, 0x00010187,
+@@ -1615,7 +1666,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x035, 0x00040188,
+ 		0x035, 0x00048188,
+ 		0x035, 0x00050188,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x035, 0x00000187,
+ 		0x035, 0x00008187,
+ 		0x035, 0x00010187,
+@@ -1625,7 +1676,37 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x035, 0x00040188,
+ 		0x035, 0x00048188,
+ 		0x035, 0x00050188,
+-	0xFF0F0404, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
++		0x035, 0x00000128,
++		0x035, 0x00008128,
++		0x035, 0x00010128,
++		0x035, 0x000201C8,
++		0x035, 0x000281C8,
++		0x035, 0x000301C8,
++		0x035, 0x000401C8,
++		0x035, 0x000481C8,
++		0x035, 0x000501C8,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x035, 0x00000145,
++		0x035, 0x00008145,
++		0x035, 0x00010145,
++		0x035, 0x00020196,
++		0x035, 0x00028196,
++		0x035, 0x00030196,
++		0x035, 0x000401C7,
++		0x035, 0x000481C7,
++		0x035, 0x000501C7,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x035, 0x00000128,
++		0x035, 0x00008128,
++		0x035, 0x00010128,
++		0x035, 0x000201C8,
++		0x035, 0x000281C8,
++		0x035, 0x000301C8,
++		0x035, 0x000401C8,
++		0x035, 0x000481C8,
++		0x035, 0x000501C8,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
+ 		0x035, 0x00000187,
+ 		0x035, 0x00008187,
+ 		0x035, 0x00010187,
+@@ -1635,7 +1716,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x035, 0x00040188,
+ 		0x035, 0x00048188,
+ 		0x035, 0x00050188,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x035, 0x00000145,
+ 		0x035, 0x00008145,
+ 		0x035, 0x00010145,
+@@ -1645,11 +1726,11 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x035, 0x000401C7,
+ 		0x035, 0x000481C7,
+ 		0x035, 0x000501C7,
+-	0xFF0F0104, 0xDEAD,
++	0xB0000000,	0x00000000,
+ 		0x0EF, 0x00000000,
+ 		0x018, 0x0001712A,
+ 		0x0EF, 0x00000010,
+-	0xFF0F0104, 0xABCD,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x036, 0x00085733,
+ 		0x036, 0x0008D733,
+ 		0x036, 0x00095733,
+@@ -1662,7 +1743,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x036, 0x000CE4B4,
+ 		0x036, 0x000D64B4,
+ 		0x036, 0x000DE4B4,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x036, 0x00085733,
+ 		0x036, 0x0008D733,
+ 		0x036, 0x00095733,
+@@ -1675,7 +1756,46 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x036, 0x000CE4B4,
+ 		0x036, 0x000D64B4,
+ 		0x036, 0x000DE4B4,
+-	0xFF0F0404, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
++		0x036, 0x000063B5,
++		0x036, 0x0000E3B5,
++		0x036, 0x000163B5,
++		0x036, 0x0001E3B5,
++		0x036, 0x000263B5,
++		0x036, 0x0002E3B5,
++		0x036, 0x000363B5,
++		0x036, 0x0003E3B5,
++		0x036, 0x000463B5,
++		0x036, 0x0004E3B5,
++		0x036, 0x000563B5,
++		0x036, 0x0005E3B5,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x036, 0x000056B3,
++		0x036, 0x0000D6B3,
++		0x036, 0x000156B3,
++		0x036, 0x0001D6B3,
++		0x036, 0x00026634,
++		0x036, 0x0002E634,
++		0x036, 0x00036634,
++		0x036, 0x0003E634,
++		0x036, 0x000467B4,
++		0x036, 0x0004E7B4,
++		0x036, 0x000567B4,
++		0x036, 0x0005E7B4,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x036, 0x000063B5,
++		0x036, 0x0000E3B5,
++		0x036, 0x000163B5,
++		0x036, 0x0001E3B5,
++		0x036, 0x000263B5,
++		0x036, 0x0002E3B5,
++		0x036, 0x000363B5,
++		0x036, 0x0003E3B5,
++		0x036, 0x000463B5,
++		0x036, 0x0004E3B5,
++		0x036, 0x000563B5,
++		0x036, 0x0005E3B5,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
+ 		0x036, 0x00085733,
+ 		0x036, 0x0008D733,
+ 		0x036, 0x00095733,
+@@ -1688,7 +1808,7 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x036, 0x000CE4B4,
+ 		0x036, 0x000D64B4,
+ 		0x036, 0x000DE4B4,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x036, 0x000056B3,
+ 		0x036, 0x0000D6B3,
+ 		0x036, 0x000156B3,
+@@ -1701,103 +1821,162 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x036, 0x0004E7B4,
+ 		0x036, 0x000567B4,
+ 		0x036, 0x0005E7B4,
+-	0xFF0F0104, 0xDEAD,
++	0xB0000000,	0x00000000,
+ 		0x0EF, 0x00000000,
+ 		0x0EF, 0x00000008,
+-	0xFF0F0104, 0xABCD,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03C, 0x000001C8,
+ 		0x03C, 0x00000492,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03C, 0x000001C8,
+ 		0x03C, 0x00000492,
+-	0xFF0F0404, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
++		0x03C, 0x000001B6,
++		0x03C, 0x00000492,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x03C, 0x0000022A,
++		0x03C, 0x00000594,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x03C, 0x000001B6,
++		0x03C, 0x00000492,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03C, 0x000001C8,
+ 		0x03C, 0x00000492,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x03C, 0x0000022A,
+ 		0x03C, 0x00000594,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03C, 0x00000800,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03C, 0x00000800,
+-	0xFF0F0404, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03C, 0x00000800,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x03C, 0x00000820,
+-	0xCDCDCDCD, 0xCDCD,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x03C, 0x00000820,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x03C, 0x00000800,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
++		0x03C, 0x00000800,
++	0xA0000000,	0x00000000,
+ 		0x03C, 0x00000900,
+-	0xFF0F0104, 0xDEAD,
++	0xB0000000,	0x00000000,
+ 		0x0EF, 0x00000000,
+ 		0x018, 0x0001712A,
+ 		0x0EF, 0x00000002,
+-	0xFF0F0104, 0xABCD,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x008, 0x0004E400,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x008, 0x0004E400,
+-	0xFF0F0404, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
++		0x008, 0x00002000,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x008, 0x00002000,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x008, 0x00002000,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x008, 0x00002000,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
+ 		0x008, 0x0004E400,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x008, 0x00002000,
+-	0xFF0F0104, 0xDEAD,
++	0xB0000000,	0x00000000,
+ 		0x0EF, 0x00000000,
+ 		0x0DF, 0x000000C0,
+-		0x01F, 0x00040064,
+-	0xFF0F0104, 0xABCD,
++		0x01F, 0x00000064,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x058, 0x000A7284,
+ 		0x059, 0x000600EC,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x058, 0x000A7284,
+ 		0x059, 0x000600EC,
+-	0xFF0F0404, 0xCDEF,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x058, 0x00081184,
++		0x059, 0x0006016C,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x058, 0x00081184,
++		0x059, 0x0006016C,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x058, 0x00081184,
++		0x059, 0x0006016C,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
+ 		0x058, 0x000A7284,
+ 		0x059, 0x000600EC,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x058, 0x00081184,
+ 		0x059, 0x0006016C,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x061, 0x000E8D73,
+ 		0x062, 0x00093FC5,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x061, 0x000E8D73,
+ 		0x062, 0x00093FC5,
+-	0xFF0F0404, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
++		0x061, 0x000EFD83,
++		0x062, 0x00093FCC,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x061, 0x000EAD53,
++		0x062, 0x00093BC4,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x061, 0x000EFD83,
++		0x062, 0x00093FCC,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
+ 		0x061, 0x000E8D73,
+ 		0x062, 0x00093FC5,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x061, 0x000EAD53,
+ 		0x062, 0x00093BC4,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
+ 		0x063, 0x000110E9,
+-	0xFF0F0204, 0xCDEF,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
+ 		0x063, 0x000110E9,
+-	0xFF0F0404, 0xCDEF,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
++		0x063, 0x000110EB,
++	0x9000020c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x063, 0x000110E9,
+-	0xFF0F0200, 0xCDEF,
+-		0x063, 0x000710E9,
+-	0xFF0F02C0, 0xCDEF,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x063, 0x000110E9,
+-	0xCDCDCDCD, 0xCDCD,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x063, 0x000110EB,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
++		0x063, 0x000110E9,
++	0xA0000000,	0x00000000,
+ 		0x063, 0x000714E9,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0104, 0xABCD,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
++		0x064, 0x0001C27C,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
++		0x064, 0x0001C27C,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
+ 		0x064, 0x0001C27C,
+-	0xFF0F0204, 0xCDEF,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x064, 0x0001C67C,
++	0x90000200,	0x00000000,	0x40000000,	0x00000000,
+ 		0x064, 0x0001C27C,
+-	0xFF0F0404, 0xCDEF,
++	0x90000410,	0x00000000,	0x40000000,	0x00000000,
+ 		0x064, 0x0001C27C,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x064, 0x0001C67C,
+-	0xFF0F0104, 0xDEAD,
+-	0xFF0F0200, 0xABCD,
++	0xB0000000,	0x00000000,
++	0x80000111,	0x00000000,	0x40000000,	0x00000000,
++		0x065, 0x00091016,
++	0x90000110,	0x00000000,	0x40000000,	0x00000000,
++		0x065, 0x00091016,
++	0x90000210,	0x00000000,	0x40000000,	0x00000000,
+ 		0x065, 0x00093016,
+-	0xFF0F02C0, 0xCDEF,
++		0x9000020c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x065, 0x00093015,
+-	0xCDCDCDCD, 0xCDCD,
++		0x9000040c,	0x00000000,	0x40000000,	0x00000000,
++		0x065, 0x00093015,
++		0x90000200,	0x00000000,	0x40000000,	0x00000000,
++		0x065, 0x00093016,
++		0xA0000000,	0x00000000,
+ 		0x065, 0x00091016,
+-	0xFF0F0200, 0xDEAD,
++		0xB0000000,	0x00000000,
+ 		0x018, 0x00000006,
+ 		0x0EF, 0x00002000,
+ 		0x03B, 0x0003824B,
+@@ -1895,9 +2074,10 @@ u32 RTL8821AE_RADIOA_ARRAY[] = {
+ 		0x0B4, 0x0001214C,
+ 		0x0B7, 0x0003000C,
+ 		0x01C, 0x000539D2,
++		0x0C4, 0x000AFE00,
+ 		0x018, 0x0001F12A,
+-		0x0FE, 0x00000000,
+-		0x0FE, 0x00000000,
++		0xFFE, 0x00000000,
++		0xFFE, 0x00000000,
+ 		0x018, 0x0001712A,
+ 
+ };
+@@ -2017,6 +2197,7 @@ u32 RTL8812AE_MAC_REG_ARRAY[] = {
+ u32 RTL8812AE_MAC_1T_ARRAYLEN = ARRAY_SIZE(RTL8812AE_MAC_REG_ARRAY);
+ 
+ u32 RTL8821AE_MAC_REG_ARRAY[] = {
++		0x421, 0x0000000F,
+ 		0x428, 0x0000000A,
+ 		0x429, 0x00000010,
+ 		0x430, 0x00000000,
+@@ -2485,7 +2666,7 @@ u32 RTL8821AE_AGC_TAB_ARRAY[] = {
+ 		0x81C, 0xA6360001,
+ 		0x81C, 0xA5380001,
+ 		0x81C, 0xA43A0001,
+-		0x81C, 0xA33C0001,
++		0x81C, 0x683C0001,
+ 		0x81C, 0x673E0001,
+ 		0x81C, 0x66400001,
+ 		0x81C, 0x65420001,
+@@ -2519,7 +2700,66 @@ u32 RTL8821AE_AGC_TAB_ARRAY[] = {
+ 		0x81C, 0x017A0001,
+ 		0x81C, 0x017C0001,
+ 		0x81C, 0x017E0001,
+-	0xFF0F02C0, 0xABCD,
++	0x8000020c,	0x00000000,	0x40000000,	0x00000000,
++		0x81C, 0xFB000101,
++		0x81C, 0xFA020101,
++		0x81C, 0xF9040101,
++		0x81C, 0xF8060101,
++		0x81C, 0xF7080101,
++		0x81C, 0xF60A0101,
++		0x81C, 0xF50C0101,
++		0x81C, 0xF40E0101,
++		0x81C, 0xF3100101,
++		0x81C, 0xF2120101,
++		0x81C, 0xF1140101,
++		0x81C, 0xF0160101,
++		0x81C, 0xEF180101,
++		0x81C, 0xEE1A0101,
++		0x81C, 0xED1C0101,
++		0x81C, 0xEC1E0101,
++		0x81C, 0xEB200101,
++		0x81C, 0xEA220101,
++		0x81C, 0xE9240101,
++		0x81C, 0xE8260101,
++		0x81C, 0xE7280101,
++		0x81C, 0xE62A0101,
++		0x81C, 0xE52C0101,
++		0x81C, 0xE42E0101,
++		0x81C, 0xE3300101,
++		0x81C, 0xA5320101,
++		0x81C, 0xA4340101,
++		0x81C, 0xA3360101,
++		0x81C, 0x87380101,
++		0x81C, 0x863A0101,
++		0x81C, 0x853C0101,
++		0x81C, 0x843E0101,
++		0x81C, 0x69400101,
++		0x81C, 0x68420101,
++		0x81C, 0x67440101,
++		0x81C, 0x66460101,
++		0x81C, 0x49480101,
++		0x81C, 0x484A0101,
++		0x81C, 0x474C0101,
++		0x81C, 0x2A4E0101,
++		0x81C, 0x29500101,
++		0x81C, 0x28520101,
++		0x81C, 0x27540101,
++		0x81C, 0x26560101,
++		0x81C, 0x25580101,
++		0x81C, 0x245A0101,
++		0x81C, 0x235C0101,
++		0x81C, 0x055E0101,
++		0x81C, 0x04600101,
++		0x81C, 0x03620101,
++		0x81C, 0x02640101,
++		0x81C, 0x01660101,
++		0x81C, 0x01680101,
++		0x81C, 0x016A0101,
++		0x81C, 0x016C0101,
++		0x81C, 0x016E0101,
++		0x81C, 0x01700101,
++		0x81C, 0x01720101,
++	0x9000040c,	0x00000000,	0x40000000,	0x00000000,
+ 		0x81C, 0xFB000101,
+ 		0x81C, 0xFA020101,
+ 		0x81C, 0xF9040101,
+@@ -2578,7 +2818,7 @@ u32 RTL8821AE_AGC_TAB_ARRAY[] = {
+ 		0x81C, 0x016E0101,
+ 		0x81C, 0x01700101,
+ 		0x81C, 0x01720101,
+-	0xCDCDCDCD, 0xCDCD,
++	0xA0000000,	0x00000000,
+ 		0x81C, 0xFF000101,
+ 		0x81C, 0xFF020101,
+ 		0x81C, 0xFE040101,
+@@ -2637,7 +2877,7 @@ u32 RTL8821AE_AGC_TAB_ARRAY[] = {
+ 		0x81C, 0x046E0101,
+ 		0x81C, 0x03700101,
+ 		0x81C, 0x02720101,
+-	0xFF0F02C0, 0xDEAD,
++	0xB0000000,	0x00000000,
+ 		0x81C, 0x01740101,
+ 		0x81C, 0x01760101,
+ 		0x81C, 0x01780101,
+diff --git a/drivers/net/wireless/realtek/rtw88/debug.c b/drivers/net/wireless/realtek/rtw88/debug.c
+index 948cb79050ea9..e7d51ac9b6891 100644
+--- a/drivers/net/wireless/realtek/rtw88/debug.c
++++ b/drivers/net/wireless/realtek/rtw88/debug.c
+@@ -270,7 +270,7 @@ static ssize_t rtw_debugfs_set_rsvd_page(struct file *filp,
+ 
+ 	if (num != 2) {
+ 		rtw_warn(rtwdev, "invalid arguments\n");
+-		return num;
++		return -EINVAL;
+ 	}
+ 
+ 	debugfs_priv->rsvd_page.page_offset = offset;
+diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c
+index 786a486499463..6b5c885798a4b 100644
+--- a/drivers/net/wireless/realtek/rtw88/pci.c
++++ b/drivers/net/wireless/realtek/rtw88/pci.c
+@@ -581,23 +581,30 @@ static int rtw_pci_start(struct rtw_dev *rtwdev)
+ {
+ 	struct rtw_pci *rtwpci = (struct rtw_pci *)rtwdev->priv;
+ 
++	rtw_pci_napi_start(rtwdev);
++
+ 	spin_lock_bh(&rtwpci->irq_lock);
++	rtwpci->running = true;
+ 	rtw_pci_enable_interrupt(rtwdev, rtwpci, false);
+ 	spin_unlock_bh(&rtwpci->irq_lock);
+ 
+-	rtw_pci_napi_start(rtwdev);
+-
+ 	return 0;
+ }
+ 
+ static void rtw_pci_stop(struct rtw_dev *rtwdev)
+ {
+ 	struct rtw_pci *rtwpci = (struct rtw_pci *)rtwdev->priv;
++	struct pci_dev *pdev = rtwpci->pdev;
+ 
++	spin_lock_bh(&rtwpci->irq_lock);
++	rtwpci->running = false;
++	rtw_pci_disable_interrupt(rtwdev, rtwpci);
++	spin_unlock_bh(&rtwpci->irq_lock);
++
++	synchronize_irq(pdev->irq);
+ 	rtw_pci_napi_stop(rtwdev);
+ 
+ 	spin_lock_bh(&rtwpci->irq_lock);
+-	rtw_pci_disable_interrupt(rtwdev, rtwpci);
+ 	rtw_pci_dma_release(rtwdev, rtwpci);
+ 	spin_unlock_bh(&rtwpci->irq_lock);
+ }
+@@ -1138,7 +1145,8 @@ static irqreturn_t rtw_pci_interrupt_threadfn(int irq, void *dev)
+ 		rtw_fw_c2h_cmd_isr(rtwdev);
+ 
+ 	/* all of the jobs for this interrupt have been done */
+-	rtw_pci_enable_interrupt(rtwdev, rtwpci, rx);
++	if (rtwpci->running)
++		rtw_pci_enable_interrupt(rtwdev, rtwpci, rx);
+ 	spin_unlock_bh(&rtwpci->irq_lock);
+ 
+ 	return IRQ_HANDLED;
+@@ -1558,7 +1566,8 @@ static int rtw_pci_napi_poll(struct napi_struct *napi, int budget)
+ 	if (work_done < budget) {
+ 		napi_complete_done(napi, work_done);
+ 		spin_lock_bh(&rtwpci->irq_lock);
+-		rtw_pci_enable_interrupt(rtwdev, rtwpci, false);
++		if (rtwpci->running)
++			rtw_pci_enable_interrupt(rtwdev, rtwpci, false);
+ 		spin_unlock_bh(&rtwpci->irq_lock);
+ 		/* When ISR happens during polling and before napi_complete
+ 		 * while no further data is received. Data on the dma_ring will
+diff --git a/drivers/net/wireless/realtek/rtw88/pci.h b/drivers/net/wireless/realtek/rtw88/pci.h
+index e76fc549a7883..0ffae887527a2 100644
+--- a/drivers/net/wireless/realtek/rtw88/pci.h
++++ b/drivers/net/wireless/realtek/rtw88/pci.h
+@@ -211,6 +211,7 @@ struct rtw_pci {
+ 	spinlock_t irq_lock;
+ 	u32 irq_mask[4];
+ 	bool irq_enabled;
++	bool running;
+ 
+ 	/* napi structure */
+ 	struct net_device netdev;
+diff --git a/drivers/net/wireless/realtek/rtw88/phy.c b/drivers/net/wireless/realtek/rtw88/phy.c
+index e114ddecac099..0b3da5bef7036 100644
+--- a/drivers/net/wireless/realtek/rtw88/phy.c
++++ b/drivers/net/wireless/realtek/rtw88/phy.c
+@@ -1584,7 +1584,7 @@ void rtw_phy_load_tables(struct rtw_dev *rtwdev)
+ }
+ EXPORT_SYMBOL(rtw_phy_load_tables);
+ 
+-static u8 rtw_get_channel_group(u8 channel)
++static u8 rtw_get_channel_group(u8 channel, u8 rate)
+ {
+ 	switch (channel) {
+ 	default:
+@@ -1628,6 +1628,7 @@ static u8 rtw_get_channel_group(u8 channel)
+ 	case 106:
+ 		return 4;
+ 	case 14:
++		return rate <= DESC_RATE11M ? 5 : 4;
+ 	case 108:
+ 	case 110:
+ 	case 112:
+@@ -1879,7 +1880,7 @@ void rtw_get_tx_power_params(struct rtw_dev *rtwdev, u8 path, u8 rate, u8 bw,
+ 	s8 *remnant = &pwr_param->pwr_remnant;
+ 
+ 	pwr_idx = &rtwdev->efuse.txpwr_idx_table[path];
+-	group = rtw_get_channel_group(ch);
++	group = rtw_get_channel_group(ch, rate);
+ 
+ 	/* base power index for 2.4G/5G */
+ 	if (IS_CH_2G_BAND(ch)) {
+diff --git a/drivers/net/wireless/ti/wlcore/boot.c b/drivers/net/wireless/ti/wlcore/boot.c
+index e14d88e558f04..85abd0a2d1c90 100644
+--- a/drivers/net/wireless/ti/wlcore/boot.c
++++ b/drivers/net/wireless/ti/wlcore/boot.c
+@@ -72,6 +72,7 @@ static int wlcore_validate_fw_ver(struct wl1271 *wl)
+ 	unsigned int *min_ver = (wl->fw_type == WL12XX_FW_TYPE_MULTI) ?
+ 		wl->min_mr_fw_ver : wl->min_sr_fw_ver;
+ 	char min_fw_str[32] = "";
++	int off = 0;
+ 	int i;
+ 
+ 	/* the chip must be exactly equal */
+@@ -105,13 +106,15 @@ static int wlcore_validate_fw_ver(struct wl1271 *wl)
+ 	return 0;
+ 
+ fail:
+-	for (i = 0; i < NUM_FW_VER; i++)
++	for (i = 0; i < NUM_FW_VER && off < sizeof(min_fw_str); i++)
+ 		if (min_ver[i] == WLCORE_FW_VER_IGNORE)
+-			snprintf(min_fw_str, sizeof(min_fw_str),
+-				  "%s*.", min_fw_str);
++			off += snprintf(min_fw_str + off,
++					sizeof(min_fw_str) - off,
++					"*.");
+ 		else
+-			snprintf(min_fw_str, sizeof(min_fw_str),
+-				  "%s%u.", min_fw_str, min_ver[i]);
++			off += snprintf(min_fw_str + off,
++					sizeof(min_fw_str) - off,
++					"%u.", min_ver[i]);
+ 
+ 	wl1271_error("Your WiFi FW version (%u.%u.%u.%u.%u) is invalid.\n"
+ 		     "Please use at least FW %s\n"
+diff --git a/drivers/net/wireless/ti/wlcore/debugfs.h b/drivers/net/wireless/ti/wlcore/debugfs.h
+index b143293e694f9..a9e13e6d65c50 100644
+--- a/drivers/net/wireless/ti/wlcore/debugfs.h
++++ b/drivers/net/wireless/ti/wlcore/debugfs.h
+@@ -78,13 +78,14 @@ static ssize_t sub## _ ##name## _read(struct file *file,		\
+ 	struct wl1271 *wl = file->private_data;				\
+ 	struct struct_type *stats = wl->stats.fw_stats;			\
+ 	char buf[DEBUGFS_FORMAT_BUFFER_SIZE] = "";			\
++	int pos = 0;							\
+ 	int i;								\
+ 									\
+ 	wl1271_debugfs_update_stats(wl);				\
+ 									\
+-	for (i = 0; i < len; i++)					\
+-		snprintf(buf, sizeof(buf), "%s[%d] = %d\n",		\
+-			 buf, i, stats->sub.name[i]);			\
++	for (i = 0; i < len && pos < sizeof(buf); i++)			\
++		pos += snprintf(buf + pos, sizeof(buf) - pos,		\
++			 "[%d] = %d\n", i, stats->sub.name[i]);		\
+ 									\
+ 	return wl1271_format_buffer(userbuf, count, ppos, "%s", buf);	\
+ }									\
+diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c
+index f1469ac8ff425..3fe5b81eda2d6 100644
+--- a/drivers/nfc/pn533/pn533.c
++++ b/drivers/nfc/pn533/pn533.c
+@@ -706,6 +706,9 @@ static bool pn533_target_type_a_is_valid(struct pn533_target_type_a *type_a,
+ 	if (PN533_TYPE_A_SEL_CASCADE(type_a->sel_res) != 0)
+ 		return false;
+ 
++	if (type_a->nfcid_len > NFC_NFCID1_MAXSIZE)
++		return false;
++
+ 	return true;
+ }
+ 
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index a1d476e1ac020..ec1e454848e58 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -668,6 +668,10 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id)
+ 		if (desc.state) {
+ 			/* found the group desc: update */
+ 			nvme_update_ns_ana_state(&desc, ns);
++		} else {
++			/* group desc not found: trigger a re-read */
++			set_bit(NVME_NS_ANA_PENDING, &ns->flags);
++			queue_work(nvme_wq, &ns->ctrl->ana_work);
+ 		}
+ 	} else {
+ 		ns->ana_state = NVME_ANA_OPTIMIZED; 
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 7249ae74f71ff..c92a15c3fbc5e 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -852,7 +852,7 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
+ 				return nvme_setup_prp_simple(dev, req,
+ 							     &cmnd->rw, &bv);
+ 
+-			if (iod->nvmeq->qid &&
++			if (iod->nvmeq->qid && sgl_threshold &&
+ 			    dev->ctrl.sgls & ((1 << 0) | (1 << 1)))
+ 				return nvme_setup_sgl_simple(dev, req,
+ 							     &cmnd->rw, &bv);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index a0f00cb8f9f3c..d7d7c81d07014 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -874,7 +874,7 @@ static void nvme_tcp_state_change(struct sock *sk)
+ {
+ 	struct nvme_tcp_queue *queue;
+ 
+-	read_lock(&sk->sk_callback_lock);
++	read_lock_bh(&sk->sk_callback_lock);
+ 	queue = sk->sk_user_data;
+ 	if (!queue)
+ 		goto done;
+@@ -895,7 +895,7 @@ static void nvme_tcp_state_change(struct sock *sk)
+ 
+ 	queue->state_change(sk);
+ done:
+-	read_unlock(&sk->sk_callback_lock);
++	read_unlock_bh(&sk->sk_callback_lock);
+ }
+ 
+ static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index d658c6e8263af..d958b5da9b88a 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -525,11 +525,36 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
+ 	struct nvmet_tcp_cmd *cmd =
+ 		container_of(req, struct nvmet_tcp_cmd, req);
+ 	struct nvmet_tcp_queue	*queue = cmd->queue;
++	struct nvme_sgl_desc *sgl;
++	u32 len;
++
++	if (unlikely(cmd == queue->cmd)) {
++		sgl = &cmd->req.cmd->common.dptr.sgl;
++		len = le32_to_cpu(sgl->length);
++
++		/*
++		 * Wait for inline data before processing the response.
++		 * Avoid using helpers, this might happen before
++		 * nvmet_req_init is completed.
++		 */
++		if (queue->rcv_state == NVMET_TCP_RECV_PDU &&
++		    len && len < cmd->req.port->inline_data_size &&
++		    nvme_is_write(cmd->req.cmd))
++			return;
++	}
+ 
+ 	llist_add(&cmd->lentry, &queue->resp_list);
+ 	queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &cmd->queue->io_work);
+ }
+ 
++static void nvmet_tcp_execute_request(struct nvmet_tcp_cmd *cmd)
++{
++	if (unlikely(cmd->flags & NVMET_TCP_F_INIT_FAILED))
++		nvmet_tcp_queue_response(&cmd->req);
++	else
++		cmd->req.execute(&cmd->req);
++}
++
+ static int nvmet_try_send_data_pdu(struct nvmet_tcp_cmd *cmd)
+ {
+ 	u8 hdgst = nvmet_tcp_hdgst_len(cmd->queue);
+@@ -961,7 +986,7 @@ static int nvmet_tcp_done_recv_pdu(struct nvmet_tcp_queue *queue)
+ 			le32_to_cpu(req->cmd->common.dptr.sgl.length));
+ 
+ 		nvmet_tcp_handle_req_failure(queue, queue->cmd, req);
+-		return -EAGAIN;
++		return 0;
+ 	}
+ 
+ 	ret = nvmet_tcp_map_data(queue->cmd);
+@@ -1104,10 +1129,8 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
+ 		return 0;
+ 	}
+ 
+-	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+-	    cmd->rbytes_done == cmd->req.transfer_len) {
+-		cmd->req.execute(&cmd->req);
+-	}
++	if (cmd->rbytes_done == cmd->req.transfer_len)
++		nvmet_tcp_execute_request(cmd);
+ 
+ 	nvmet_prepare_receive_pdu(queue);
+ 	return 0;
+@@ -1144,9 +1167,9 @@ static int nvmet_tcp_try_recv_ddgst(struct nvmet_tcp_queue *queue)
+ 		goto out;
+ 	}
+ 
+-	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+-	    cmd->rbytes_done == cmd->req.transfer_len)
+-		cmd->req.execute(&cmd->req);
++	if (cmd->rbytes_done == cmd->req.transfer_len)
++		nvmet_tcp_execute_request(cmd);
++
+ 	ret = 0;
+ out:
+ 	nvmet_prepare_receive_pdu(queue);
+@@ -1434,7 +1457,7 @@ static void nvmet_tcp_state_change(struct sock *sk)
+ {
+ 	struct nvmet_tcp_queue *queue;
+ 
+-	write_lock_bh(&sk->sk_callback_lock);
++	read_lock_bh(&sk->sk_callback_lock);
+ 	queue = sk->sk_user_data;
+ 	if (!queue)
+ 		goto done;
+@@ -1452,7 +1475,7 @@ static void nvmet_tcp_state_change(struct sock *sk)
+ 			queue->idx, sk->sk_state);
+ 	}
+ done:
+-	write_unlock_bh(&sk->sk_callback_lock);
++	read_unlock_bh(&sk->sk_callback_lock);
+ }
+ 
+ static int nvmet_tcp_set_queue_sock(struct nvmet_tcp_queue *queue)
+diff --git a/drivers/nvmem/Kconfig b/drivers/nvmem/Kconfig
+index 75d2594c16e19..267a0d9e99ba0 100644
+--- a/drivers/nvmem/Kconfig
++++ b/drivers/nvmem/Kconfig
+@@ -272,6 +272,7 @@ config SPRD_EFUSE
+ 
+ config NVMEM_RMEM
+ 	tristate "Reserved Memory Based Driver Support"
++	depends on HAS_IOMEM
+ 	help
+ 	  This driver maps reserved memory into an nvmem device. It might be
+ 	  useful to expose information left by firmware in memory.
+diff --git a/drivers/nvmem/qfprom.c b/drivers/nvmem/qfprom.c
+index 6cace24dfbf73..100d69d8f2e1c 100644
+--- a/drivers/nvmem/qfprom.c
++++ b/drivers/nvmem/qfprom.c
+@@ -127,6 +127,16 @@ static void qfprom_disable_fuse_blowing(const struct qfprom_priv *priv,
+ {
+ 	int ret;
+ 
++	/*
++	 * This may be a shared rail and may be able to run at a lower rate
++	 * when we're not blowing fuses.  At the moment, the regulator framework
++	 * applies voltage constraints even on disabled rails, so remove our
++	 * constraints and allow the rail to be adjusted by other users.
++	 */
++	ret = regulator_set_voltage(priv->vcc, 0, INT_MAX);
++	if (ret)
++		dev_warn(priv->dev, "Failed to set 0 voltage (ignoring)\n");
++
+ 	ret = regulator_disable(priv->vcc);
+ 	if (ret)
+ 		dev_warn(priv->dev, "Failed to disable regulator (ignoring)\n");
+@@ -172,6 +182,17 @@ static int qfprom_enable_fuse_blowing(const struct qfprom_priv *priv,
+ 		goto err_clk_prepared;
+ 	}
+ 
++	/*
++	 * Hardware requires 1.8V min for fuse blowing; this may be
++	 * a rail shared do don't specify a max--regulator constraints
++	 * will handle.
++	 */
++	ret = regulator_set_voltage(priv->vcc, 1800000, INT_MAX);
++	if (ret) {
++		dev_err(priv->dev, "Failed to set 1.8 voltage\n");
++		goto err_clk_rate_set;
++	}
++
+ 	ret = regulator_enable(priv->vcc);
+ 	if (ret) {
+ 		dev_err(priv->dev, "Failed to enable regulator\n");
+diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c
+index 23effe5e50ece..2d132949572d6 100644
+--- a/drivers/of/overlay.c
++++ b/drivers/of/overlay.c
+@@ -796,6 +796,7 @@ static int init_overlay_changeset(struct overlay_changeset *ovcs,
+ 		if (!fragment->target) {
+ 			of_node_put(fragment->overlay);
+ 			ret = -EINVAL;
++			of_node_put(node);
+ 			goto err_free_fragments;
+ 		}
+ 
+diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
+index 53aa35cb3a493..a59ecbec601fc 100644
+--- a/drivers/pci/controller/dwc/pci-keystone.c
++++ b/drivers/pci/controller/dwc/pci-keystone.c
+@@ -798,7 +798,8 @@ static int __init ks_pcie_host_init(struct pcie_port *pp)
+ 	int ret;
+ 
+ 	pp->bridge->ops = &ks_pcie_ops;
+-	pp->bridge->child_ops = &ks_child_pcie_ops;
++	if (!ks_pcie->is_am6)
++		pp->bridge->child_ops = &ks_child_pcie_ops;
+ 
+ 	ret = ks_pcie_config_legacy_irq(ks_pcie);
+ 	if (ret)
+diff --git a/drivers/pci/controller/pci-xgene.c b/drivers/pci/controller/pci-xgene.c
+index 2afdc865253e8..7f503dd4ff81d 100644
+--- a/drivers/pci/controller/pci-xgene.c
++++ b/drivers/pci/controller/pci-xgene.c
+@@ -354,7 +354,8 @@ static int xgene_pcie_map_reg(struct xgene_pcie_port *port,
+ 	if (IS_ERR(port->csr_base))
+ 		return PTR_ERR(port->csr_base);
+ 
+-	port->cfg_base = devm_platform_ioremap_resource_byname(pdev, "cfg");
++	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg");
++	port->cfg_base = devm_ioremap_resource(dev, res);
+ 	if (IS_ERR(port->cfg_base))
+ 		return PTR_ERR(port->cfg_base);
+ 	port->cfg_addr = res->start;
+diff --git a/drivers/pci/vpd.c b/drivers/pci/vpd.c
+index 7915d10f9aa10..bd549070c0112 100644
+--- a/drivers/pci/vpd.c
++++ b/drivers/pci/vpd.c
+@@ -570,7 +570,6 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005d, quirk_blacklist_vpd);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005f, quirk_blacklist_vpd);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, PCI_ANY_ID,
+ 		quirk_blacklist_vpd);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_QLOGIC, 0x2261, quirk_blacklist_vpd);
+ /*
+  * The Amazon Annapurna Labs 0x0031 device id is reused for other non Root Port
+  * device types, so the quirk is registered for the PCI_CLASS_BRIDGE_PCI class.
+diff --git a/drivers/phy/cadence/phy-cadence-sierra.c b/drivers/phy/cadence/phy-cadence-sierra.c
+index 26a0badabe38b..19f32ae877b94 100644
+--- a/drivers/phy/cadence/phy-cadence-sierra.c
++++ b/drivers/phy/cadence/phy-cadence-sierra.c
+@@ -319,6 +319,12 @@ static int cdns_sierra_phy_on(struct phy *gphy)
+ 	u32 val;
+ 	int ret;
+ 
++	ret = reset_control_deassert(sp->phy_rst);
++	if (ret) {
++		dev_err(dev, "Failed to take the PHY out of reset\n");
++		return ret;
++	}
++
+ 	/* Take the PHY lane group out of reset */
+ 	ret = reset_control_deassert(ins->lnk_rst);
+ 	if (ret) {
+@@ -616,7 +622,6 @@ static int cdns_sierra_phy_probe(struct platform_device *pdev)
+ 
+ 	pm_runtime_enable(dev);
+ 	phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
+-	reset_control_deassert(sp->phy_rst);
+ 	return PTR_ERR_OR_ZERO(phy_provider);
+ 
+ put_child:
+diff --git a/drivers/phy/ingenic/phy-ingenic-usb.c b/drivers/phy/ingenic/phy-ingenic-usb.c
+index ea127b177f46b..28c28d8164849 100644
+--- a/drivers/phy/ingenic/phy-ingenic-usb.c
++++ b/drivers/phy/ingenic/phy-ingenic-usb.c
+@@ -352,8 +352,8 @@ static int ingenic_usb_phy_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	priv->phy = devm_phy_create(dev, NULL, &ingenic_usb_phy_ops);
+-	if (IS_ERR(priv))
+-		return PTR_ERR(priv);
++	if (IS_ERR(priv->phy))
++		return PTR_ERR(priv->phy);
+ 
+ 	phy_set_drvdata(priv->phy, priv);
+ 
+diff --git a/drivers/phy/marvell/Kconfig b/drivers/phy/marvell/Kconfig
+index 6c96f2bf52665..c8ee23fc3a83d 100644
+--- a/drivers/phy/marvell/Kconfig
++++ b/drivers/phy/marvell/Kconfig
+@@ -3,8 +3,8 @@
+ # Phy drivers for Marvell platforms
+ #
+ config ARMADA375_USBCLUSTER_PHY
+-	def_bool y
+-	depends on MACH_ARMADA_375 || COMPILE_TEST
++	bool "Armada 375 USB cluster PHY support" if COMPILE_TEST
++	default y if MACH_ARMADA_375
+ 	depends on OF && HAS_IOMEM
+ 	select GENERIC_PHY
+ 
+diff --git a/drivers/phy/ralink/phy-mt7621-pci.c b/drivers/phy/ralink/phy-mt7621-pci.c
+index 9a610b414b1fb..753cb5bab9308 100644
+--- a/drivers/phy/ralink/phy-mt7621-pci.c
++++ b/drivers/phy/ralink/phy-mt7621-pci.c
+@@ -62,7 +62,7 @@
+ 
+ #define RG_PE1_FRC_MSTCKDIV			BIT(5)
+ 
+-#define XTAL_MASK				GENMASK(7, 6)
++#define XTAL_MASK				GENMASK(8, 6)
+ 
+ #define MAX_PHYS	2
+ 
+@@ -319,9 +319,9 @@ static int mt7621_pci_phy_probe(struct platform_device *pdev)
+ 		return PTR_ERR(phy->regmap);
+ 
+ 	phy->phy = devm_phy_create(dev, dev->of_node, &mt7621_pci_phy_ops);
+-	if (IS_ERR(phy)) {
++	if (IS_ERR(phy->phy)) {
+ 		dev_err(dev, "failed to create phy\n");
+-		return PTR_ERR(phy);
++		return PTR_ERR(phy->phy);
+ 	}
+ 
+ 	phy_set_drvdata(phy->phy, phy);
+diff --git a/drivers/phy/ti/phy-j721e-wiz.c b/drivers/phy/ti/phy-j721e-wiz.c
+index c9cfafe89cbf1..e28e25f98708c 100644
+--- a/drivers/phy/ti/phy-j721e-wiz.c
++++ b/drivers/phy/ti/phy-j721e-wiz.c
+@@ -615,6 +615,12 @@ static void wiz_clock_cleanup(struct wiz *wiz, struct device_node *node)
+ 		of_clk_del_provider(clk_node);
+ 		of_node_put(clk_node);
+ 	}
++
++	for (i = 0; i < wiz->clk_div_sel_num; i++) {
++		clk_node = of_get_child_by_name(node, clk_div_sel[i].node_name);
++		of_clk_del_provider(clk_node);
++		of_node_put(clk_node);
++	}
+ }
+ 
+ static int wiz_clock_init(struct wiz *wiz, struct device_node *node)
+@@ -947,27 +953,24 @@ static int wiz_probe(struct platform_device *pdev)
+ 		goto err_get_sync;
+ 	}
+ 
++	ret = wiz_init(wiz);
++	if (ret) {
++		dev_err(dev, "WIZ initialization failed\n");
++		goto err_wiz_init;
++	}
++
+ 	serdes_pdev = of_platform_device_create(child_node, NULL, dev);
+ 	if (!serdes_pdev) {
+ 		dev_WARN(dev, "Unable to create SERDES platform device\n");
+ 		ret = -ENOMEM;
+-		goto err_pdev_create;
+-	}
+-	wiz->serdes_pdev = serdes_pdev;
+-
+-	ret = wiz_init(wiz);
+-	if (ret) {
+-		dev_err(dev, "WIZ initialization failed\n");
+ 		goto err_wiz_init;
+ 	}
++	wiz->serdes_pdev = serdes_pdev;
+ 
+ 	of_node_put(child_node);
+ 	return 0;
+ 
+ err_wiz_init:
+-	of_platform_device_destroy(&serdes_pdev->dev, NULL);
+-
+-err_pdev_create:
+ 	wiz_clock_cleanup(wiz, node);
+ 
+ err_get_sync:
+diff --git a/drivers/pinctrl/pinctrl-at91-pio4.c b/drivers/pinctrl/pinctrl-at91-pio4.c
+index e71ebccc479cf..03c32b2c5d303 100644
+--- a/drivers/pinctrl/pinctrl-at91-pio4.c
++++ b/drivers/pinctrl/pinctrl-at91-pio4.c
+@@ -801,6 +801,10 @@ static int atmel_conf_pin_config_group_set(struct pinctrl_dev *pctldev,
+ 
+ 	conf = atmel_pin_config_read(pctldev, pin_id);
+ 
++	/* Keep slew rate enabled by default. */
++	if (atmel_pioctrl->slew_rate_support)
++		conf |= ATMEL_PIO_SR_MASK;
++
+ 	for (i = 0; i < num_configs; i++) {
+ 		unsigned int param = pinconf_to_config_param(configs[i]);
+ 		unsigned int arg = pinconf_to_config_argument(configs[i]);
+@@ -808,10 +812,6 @@ static int atmel_conf_pin_config_group_set(struct pinctrl_dev *pctldev,
+ 		dev_dbg(pctldev->dev, "%s: pin=%u, config=0x%lx\n",
+ 			__func__, pin_id, configs[i]);
+ 
+-		/* Keep slew rate enabled by default. */
+-		if (atmel_pioctrl->slew_rate_support)
+-			conf |= ATMEL_PIO_SR_MASK;
+-
+ 		switch (param) {
+ 		case PIN_CONFIG_BIAS_DISABLE:
+ 			conf &= (~ATMEL_PIO_PUEN_MASK);
+diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
+index 7771316dfffab..10890fde9a751 100644
+--- a/drivers/pinctrl/pinctrl-single.c
++++ b/drivers/pinctrl/pinctrl-single.c
+@@ -270,20 +270,44 @@ static void __maybe_unused pcs_writel(unsigned val, void __iomem *reg)
+ 	writel(val, reg);
+ }
+ 
++static unsigned int pcs_pin_reg_offset_get(struct pcs_device *pcs,
++					   unsigned int pin)
++{
++	unsigned int mux_bytes = pcs->width / BITS_PER_BYTE;
++
++	if (pcs->bits_per_mux) {
++		unsigned int pin_offset_bytes;
++
++		pin_offset_bytes = (pcs->bits_per_pin * pin) / BITS_PER_BYTE;
++		return (pin_offset_bytes / mux_bytes) * mux_bytes;
++	}
++
++	return pin * mux_bytes;
++}
++
++static unsigned int pcs_pin_shift_reg_get(struct pcs_device *pcs,
++					  unsigned int pin)
++{
++	return (pin % (pcs->width / pcs->bits_per_pin)) * pcs->bits_per_pin;
++}
++
+ static void pcs_pin_dbg_show(struct pinctrl_dev *pctldev,
+ 					struct seq_file *s,
+ 					unsigned pin)
+ {
+ 	struct pcs_device *pcs;
+-	unsigned val, mux_bytes;
++	unsigned int val;
+ 	unsigned long offset;
+ 	size_t pa;
+ 
+ 	pcs = pinctrl_dev_get_drvdata(pctldev);
+ 
+-	mux_bytes = pcs->width / BITS_PER_BYTE;
+-	offset = pin * mux_bytes;
++	offset = pcs_pin_reg_offset_get(pcs, pin);
+ 	val = pcs->read(pcs->base + offset);
++
++	if (pcs->bits_per_mux)
++		val &= pcs->fmask << pcs_pin_shift_reg_get(pcs, pin);
++
+ 	pa = pcs->res->start + offset;
+ 
+ 	seq_printf(s, "%zx %08x %s ", pa, val, DRIVER_NAME);
+@@ -384,7 +408,6 @@ static int pcs_request_gpio(struct pinctrl_dev *pctldev,
+ 	struct pcs_device *pcs = pinctrl_dev_get_drvdata(pctldev);
+ 	struct pcs_gpiofunc_range *frange = NULL;
+ 	struct list_head *pos, *tmp;
+-	int mux_bytes = 0;
+ 	unsigned data;
+ 
+ 	/* If function mask is null, return directly. */
+@@ -392,29 +415,27 @@ static int pcs_request_gpio(struct pinctrl_dev *pctldev,
+ 		return -ENOTSUPP;
+ 
+ 	list_for_each_safe(pos, tmp, &pcs->gpiofuncs) {
++		u32 offset;
++
+ 		frange = list_entry(pos, struct pcs_gpiofunc_range, node);
+ 		if (pin >= frange->offset + frange->npins
+ 			|| pin < frange->offset)
+ 			continue;
+-		mux_bytes = pcs->width / BITS_PER_BYTE;
+ 
+-		if (pcs->bits_per_mux) {
+-			int byte_num, offset, pin_shift;
++		offset = pcs_pin_reg_offset_get(pcs, pin);
+ 
+-			byte_num = (pcs->bits_per_pin * pin) / BITS_PER_BYTE;
+-			offset = (byte_num / mux_bytes) * mux_bytes;
+-			pin_shift = pin % (pcs->width / pcs->bits_per_pin) *
+-				    pcs->bits_per_pin;
++		if (pcs->bits_per_mux) {
++			int pin_shift = pcs_pin_shift_reg_get(pcs, pin);
+ 
+ 			data = pcs->read(pcs->base + offset);
+ 			data &= ~(pcs->fmask << pin_shift);
+ 			data |= frange->gpiofunc << pin_shift;
+ 			pcs->write(data, pcs->base + offset);
+ 		} else {
+-			data = pcs->read(pcs->base + pin * mux_bytes);
++			data = pcs->read(pcs->base + offset);
+ 			data &= ~pcs->fmask;
+ 			data |= frange->gpiofunc;
+-			pcs->write(data, pcs->base + pin * mux_bytes);
++			pcs->write(data, pcs->base + offset);
+ 		}
+ 		break;
+ 	}
+@@ -656,10 +677,8 @@ static const struct pinconf_ops pcs_pinconf_ops = {
+  * pcs_add_pin() - add a pin to the static per controller pin array
+  * @pcs: pcs driver instance
+  * @offset: register offset from base
+- * @pin_pos: unused
+  */
+-static int pcs_add_pin(struct pcs_device *pcs, unsigned offset,
+-		unsigned pin_pos)
++static int pcs_add_pin(struct pcs_device *pcs, unsigned int offset)
+ {
+ 	struct pcs_soc_data *pcs_soc = &pcs->socdata;
+ 	struct pinctrl_pin_desc *pin;
+@@ -728,17 +747,9 @@ static int pcs_allocate_pin_table(struct pcs_device *pcs)
+ 	for (i = 0; i < pcs->desc.npins; i++) {
+ 		unsigned offset;
+ 		int res;
+-		int byte_num;
+-		int pin_pos = 0;
+ 
+-		if (pcs->bits_per_mux) {
+-			byte_num = (pcs->bits_per_pin * i) / BITS_PER_BYTE;
+-			offset = (byte_num / mux_bytes) * mux_bytes;
+-			pin_pos = i % num_pins_in_register;
+-		} else {
+-			offset = i * mux_bytes;
+-		}
+-		res = pcs_add_pin(pcs, offset, pin_pos);
++		offset = pcs_pin_reg_offset_get(pcs, i);
++		res = pcs_add_pin(pcs, offset);
+ 		if (res < 0) {
+ 			dev_err(pcs->dev, "error adding pins: %i\n", res);
+ 			return res;
+diff --git a/drivers/platform/surface/aggregator/controller.c b/drivers/platform/surface/aggregator/controller.c
+index 5bcb59ed579db..89761d3e1a476 100644
+--- a/drivers/platform/surface/aggregator/controller.c
++++ b/drivers/platform/surface/aggregator/controller.c
+@@ -1040,7 +1040,7 @@ static int ssam_dsm_load_u32(acpi_handle handle, u64 funcs, u64 func, u32 *ret)
+ 	union acpi_object *obj;
+ 	u64 val;
+ 
+-	if (!(funcs & BIT(func)))
++	if (!(funcs & BIT_ULL(func)))
+ 		return 0; /* Not supported, leave *ret at its default value */
+ 
+ 	obj = acpi_evaluate_dsm_typed(handle, &SSAM_SSH_DSM_GUID,
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
+index 7410ccae650c2..a90ae6ba4a73b 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
+@@ -399,6 +399,7 @@ static int init_bios_attributes(int attr_type, const char *guid)
+ 	union acpi_object *obj = NULL;
+ 	union acpi_object *elements;
+ 	struct kset *tmp_set;
++	int min_elements;
+ 
+ 	/* instance_id needs to be reset for each type GUID
+ 	 * also, instance IDs are unique within GUID but not across
+@@ -409,14 +410,38 @@ static int init_bios_attributes(int attr_type, const char *guid)
+ 	retval = alloc_attributes_data(attr_type);
+ 	if (retval)
+ 		return retval;
++
++	switch (attr_type) {
++	case ENUM:	min_elements = 8;	break;
++	case INT:	min_elements = 9;	break;
++	case STR:	min_elements = 8;	break;
++	case PO:	min_elements = 4;	break;
++	default:
++		pr_err("Error: Unknown attr_type: %d\n", attr_type);
++		return -EINVAL;
++	}
++
+ 	/* need to use specific instance_id and guid combination to get right data */
+ 	obj = get_wmiobj_pointer(instance_id, guid);
+-	if (!obj || obj->type != ACPI_TYPE_PACKAGE)
++	if (!obj)
+ 		return -ENODEV;
+-	elements = obj->package.elements;
+ 
+ 	mutex_lock(&wmi_priv.mutex);
+-	while (elements) {
++	while (obj) {
++		if (obj->type != ACPI_TYPE_PACKAGE) {
++			pr_err("Error: Expected ACPI-package type, got: %d\n", obj->type);
++			retval = -EIO;
++			goto err_attr_init;
++		}
++
++		if (obj->package.count < min_elements) {
++			pr_err("Error: ACPI-package does not have enough elements: %d < %d\n",
++			       obj->package.count, min_elements);
++			goto nextobj;
++		}
++
++		elements = obj->package.elements;
++
+ 		/* sanity checking */
+ 		if (elements[ATTR_NAME].type != ACPI_TYPE_STRING) {
+ 			pr_debug("incorrect element type\n");
+@@ -481,7 +506,6 @@ nextobj:
+ 		kfree(obj);
+ 		instance_id++;
+ 		obj = get_wmiobj_pointer(instance_id, guid);
+-		elements = obj ? obj->package.elements : NULL;
+ 	}
+ 
+ 	mutex_unlock(&wmi_priv.mutex);
+diff --git a/drivers/platform/x86/pmc_atom.c b/drivers/platform/x86/pmc_atom.c
+index ca684ed760d14..a9d2a4b98e570 100644
+--- a/drivers/platform/x86/pmc_atom.c
++++ b/drivers/platform/x86/pmc_atom.c
+@@ -393,34 +393,10 @@ static const struct dmi_system_id critclk_systems[] = {
+ 	},
+ 	{
+ 		/* pmc_plt_clk* - are used for ethernet controllers */
+-		.ident = "Beckhoff CB3163",
++		.ident = "Beckhoff Baytrail",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "Beckhoff Automation"),
+-			DMI_MATCH(DMI_BOARD_NAME, "CB3163"),
+-		},
+-	},
+-	{
+-		/* pmc_plt_clk* - are used for ethernet controllers */
+-		.ident = "Beckhoff CB4063",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Beckhoff Automation"),
+-			DMI_MATCH(DMI_BOARD_NAME, "CB4063"),
+-		},
+-	},
+-	{
+-		/* pmc_plt_clk* - are used for ethernet controllers */
+-		.ident = "Beckhoff CB6263",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Beckhoff Automation"),
+-			DMI_MATCH(DMI_BOARD_NAME, "CB6263"),
+-		},
+-	},
+-	{
+-		/* pmc_plt_clk* - are used for ethernet controllers */
+-		.ident = "Beckhoff CB6363",
+-		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Beckhoff Automation"),
+-			DMI_MATCH(DMI_BOARD_NAME, "CB6363"),
++			DMI_MATCH(DMI_PRODUCT_FAMILY, "CBxx63"),
+ 		},
+ 	},
+ 	{
+diff --git a/drivers/power/supply/bq25980_charger.c b/drivers/power/supply/bq25980_charger.c
+index 530ff4025b31c..0008c229fd9c7 100644
+--- a/drivers/power/supply/bq25980_charger.c
++++ b/drivers/power/supply/bq25980_charger.c
+@@ -606,33 +606,6 @@ static int bq25980_get_state(struct bq25980_device *bq,
+ 	return 0;
+ }
+ 
+-static int bq25980_set_battery_property(struct power_supply *psy,
+-				enum power_supply_property psp,
+-				const union power_supply_propval *val)
+-{
+-	struct bq25980_device *bq = power_supply_get_drvdata(psy);
+-	int ret = 0;
+-
+-	switch (psp) {
+-	case POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT:
+-		ret = bq25980_set_const_charge_curr(bq, val->intval);
+-		if (ret)
+-			return ret;
+-		break;
+-
+-	case POWER_SUPPLY_PROP_CONSTANT_CHARGE_VOLTAGE:
+-		ret = bq25980_set_const_charge_volt(bq, val->intval);
+-		if (ret)
+-			return ret;
+-		break;
+-
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	return ret;
+-}
+-
+ static int bq25980_get_battery_property(struct power_supply *psy,
+ 				enum power_supply_property psp,
+ 				union power_supply_propval *val)
+@@ -701,6 +674,18 @@ static int bq25980_set_charger_property(struct power_supply *psy,
+ 			return ret;
+ 		break;
+ 
++	case POWER_SUPPLY_PROP_CONSTANT_CHARGE_CURRENT:
++		ret = bq25980_set_const_charge_curr(bq, val->intval);
++		if (ret)
++			return ret;
++		break;
++
++	case POWER_SUPPLY_PROP_CONSTANT_CHARGE_VOLTAGE:
++		ret = bq25980_set_const_charge_volt(bq, val->intval);
++		if (ret)
++			return ret;
++		break;
++
+ 	default:
+ 		return -EINVAL;
+ 	}
+@@ -922,7 +907,6 @@ static struct power_supply_desc bq25980_battery_desc = {
+ 	.name			= "bq25980-battery",
+ 	.type			= POWER_SUPPLY_TYPE_BATTERY,
+ 	.get_property		= bq25980_get_battery_property,
+-	.set_property		= bq25980_set_battery_property,
+ 	.properties		= bq25980_battery_props,
+ 	.num_properties		= ARRAY_SIZE(bq25980_battery_props),
+ 	.property_is_writeable	= bq25980_property_is_writeable,
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index 0262109ac285e..20e1dc8a87cf2 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -1804,7 +1804,7 @@ static int bq27xxx_battery_current(struct bq27xxx_device_info *di,
+ 		val->intval = curr * BQ27XXX_CURRENT_CONSTANT / BQ27XXX_RS;
+ 	} else {
+ 		/* Other gauges return signed value */
+-		val->intval = -(int)((s16)curr) * 1000;
++		val->intval = (int)((s16)curr) * 1000;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/regulator/bd9576-regulator.c b/drivers/regulator/bd9576-regulator.c
+index a8b5832a5a1bb..204a2da054f53 100644
+--- a/drivers/regulator/bd9576-regulator.c
++++ b/drivers/regulator/bd9576-regulator.c
+@@ -206,7 +206,7 @@ static int bd957x_probe(struct platform_device *pdev)
+ {
+ 	struct regmap *regmap;
+ 	struct regulator_config config = { 0 };
+-	int i, err;
++	int i;
+ 	bool vout_mode, ddr_sel;
+ 	const struct bd957x_regulator_data *reg_data = &bd9576_regulators[0];
+ 	unsigned int num_reg_data = ARRAY_SIZE(bd9576_regulators);
+@@ -279,8 +279,7 @@ static int bd957x_probe(struct platform_device *pdev)
+ 		break;
+ 	default:
+ 		dev_err(&pdev->dev, "Unsupported chip type\n");
+-		err = -EINVAL;
+-		goto err;
++		return -EINVAL;
+ 	}
+ 
+ 	config.dev = pdev->dev.parent;
+@@ -300,8 +299,7 @@ static int bd957x_probe(struct platform_device *pdev)
+ 			dev_err(&pdev->dev,
+ 				"failed to register %s regulator\n",
+ 				desc->name);
+-			err = PTR_ERR(rdev);
+-			goto err;
++			return PTR_ERR(rdev);
+ 		}
+ 		/*
+ 		 * Clear the VOUT1 GPIO setting - rest of the regulators do not
+@@ -310,8 +308,7 @@ static int bd957x_probe(struct platform_device *pdev)
+ 		config.ena_gpiod = NULL;
+ 	}
+ 
+-err:
+-	return err;
++	return 0;
+ }
+ 
+ static const struct platform_device_id bd957x_pmic_id[] = {
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+index 7451377c4cb68..3e359ac752fd2 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+@@ -1646,7 +1646,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 		idx = i * HISI_SAS_PHY_INT_NR;
+ 		for (j = 0; j < HISI_SAS_PHY_INT_NR; j++, idx++) {
+ 			irq = platform_get_irq(pdev, idx);
+-			if (!irq) {
++			if (irq < 0) {
+ 				dev_err(dev, "irq init: fail map phy interrupt %d\n",
+ 					idx);
+ 				return -ENOENT;
+@@ -1665,7 +1665,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 	idx = hisi_hba->n_phy * HISI_SAS_PHY_INT_NR;
+ 	for (i = 0; i < hisi_hba->queue_count; i++, idx++) {
+ 		irq = platform_get_irq(pdev, idx);
+-		if (!irq) {
++		if (irq < 0) {
+ 			dev_err(dev, "irq init: could not map cq interrupt %d\n",
+ 				idx);
+ 			return -ENOENT;
+@@ -1683,7 +1683,7 @@ static int interrupt_init_v1_hw(struct hisi_hba *hisi_hba)
+ 	idx = (hisi_hba->n_phy * HISI_SAS_PHY_INT_NR) + hisi_hba->queue_count;
+ 	for (i = 0; i < HISI_SAS_FATAL_INT_NR; i++, idx++) {
+ 		irq = platform_get_irq(pdev, idx);
+-		if (!irq) {
++		if (irq < 0) {
+ 			dev_err(dev, "irq init: could not map fatal interrupt %d\n",
+ 				idx);
+ 			return -ENOENT;
+diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
+index 61831f2fdb309..d6675a25719d5 100644
+--- a/drivers/scsi/ibmvscsi/ibmvfc.c
++++ b/drivers/scsi/ibmvscsi/ibmvfc.c
+@@ -603,8 +603,17 @@ static void ibmvfc_set_host_action(struct ibmvfc_host *vhost,
+ 		if (vhost->action == IBMVFC_HOST_ACTION_ALLOC_TGTS)
+ 			vhost->action = action;
+ 		break;
++	case IBMVFC_HOST_ACTION_REENABLE:
++	case IBMVFC_HOST_ACTION_RESET:
++		vhost->action = action;
++		break;
+ 	case IBMVFC_HOST_ACTION_INIT:
+ 	case IBMVFC_HOST_ACTION_TGT_DEL:
++	case IBMVFC_HOST_ACTION_LOGO:
++	case IBMVFC_HOST_ACTION_QUERY_TGTS:
++	case IBMVFC_HOST_ACTION_TGT_DEL_FAILED:
++	case IBMVFC_HOST_ACTION_NONE:
++	default:
+ 		switch (vhost->action) {
+ 		case IBMVFC_HOST_ACTION_RESET:
+ 		case IBMVFC_HOST_ACTION_REENABLE:
+@@ -614,15 +623,6 @@ static void ibmvfc_set_host_action(struct ibmvfc_host *vhost,
+ 			break;
+ 		}
+ 		break;
+-	case IBMVFC_HOST_ACTION_LOGO:
+-	case IBMVFC_HOST_ACTION_QUERY_TGTS:
+-	case IBMVFC_HOST_ACTION_TGT_DEL_FAILED:
+-	case IBMVFC_HOST_ACTION_NONE:
+-	case IBMVFC_HOST_ACTION_RESET:
+-	case IBMVFC_HOST_ACTION_REENABLE:
+-	default:
+-		vhost->action = action;
+-		break;
+ 	}
+ }
+ 
+@@ -5373,30 +5373,49 @@ static void ibmvfc_do_work(struct ibmvfc_host *vhost)
+ 	case IBMVFC_HOST_ACTION_INIT_WAIT:
+ 		break;
+ 	case IBMVFC_HOST_ACTION_RESET:
+-		vhost->action = IBMVFC_HOST_ACTION_TGT_DEL;
+ 		list_splice_init(&vhost->purge, &purge);
+ 		spin_unlock_irqrestore(vhost->host->host_lock, flags);
+ 		ibmvfc_complete_purge(&purge);
+ 		rc = ibmvfc_reset_crq(vhost);
++
+ 		spin_lock_irqsave(vhost->host->host_lock, flags);
+-		if (rc == H_CLOSED)
++		if (!rc || rc == H_CLOSED)
+ 			vio_enable_interrupts(to_vio_dev(vhost->dev));
+-		if (rc || (rc = ibmvfc_send_crq_init(vhost)) ||
+-		    (rc = vio_enable_interrupts(to_vio_dev(vhost->dev)))) {
+-			ibmvfc_link_down(vhost, IBMVFC_LINK_DEAD);
+-			dev_err(vhost->dev, "Error after reset (rc=%d)\n", rc);
++		if (vhost->action == IBMVFC_HOST_ACTION_RESET) {
++			/*
++			 * The only action we could have changed to would have
++			 * been reenable, in which case, we skip the rest of
++			 * this path and wait until we've done the re-enable
++			 * before sending the crq init.
++			 */
++			vhost->action = IBMVFC_HOST_ACTION_TGT_DEL;
++
++			if (rc || (rc = ibmvfc_send_crq_init(vhost)) ||
++			    (rc = vio_enable_interrupts(to_vio_dev(vhost->dev)))) {
++				ibmvfc_link_down(vhost, IBMVFC_LINK_DEAD);
++				dev_err(vhost->dev, "Error after reset (rc=%d)\n", rc);
++			}
+ 		}
+ 		break;
+ 	case IBMVFC_HOST_ACTION_REENABLE:
+-		vhost->action = IBMVFC_HOST_ACTION_TGT_DEL;
+ 		list_splice_init(&vhost->purge, &purge);
+ 		spin_unlock_irqrestore(vhost->host->host_lock, flags);
+ 		ibmvfc_complete_purge(&purge);
+ 		rc = ibmvfc_reenable_crq_queue(vhost);
++
+ 		spin_lock_irqsave(vhost->host->host_lock, flags);
+-		if (rc || (rc = ibmvfc_send_crq_init(vhost))) {
+-			ibmvfc_link_down(vhost, IBMVFC_LINK_DEAD);
+-			dev_err(vhost->dev, "Error after enable (rc=%d)\n", rc);
++		if (vhost->action == IBMVFC_HOST_ACTION_REENABLE) {
++			/*
++			 * The only action we could have changed to would have
++			 * been reset, in which case, we skip the rest of this
++			 * path and wait until we've done the reset before
++			 * sending the crq init.
++			 */
++			vhost->action = IBMVFC_HOST_ACTION_TGT_DEL;
++			if (rc || (rc = ibmvfc_send_crq_init(vhost))) {
++				ibmvfc_link_down(vhost, IBMVFC_LINK_DEAD);
++				dev_err(vhost->dev, "Error after enable (rc=%d)\n", rc);
++			}
+ 		}
+ 		break;
+ 	case IBMVFC_HOST_ACTION_LOGO:
+diff --git a/drivers/scsi/jazz_esp.c b/drivers/scsi/jazz_esp.c
+index f0ed6863cc700..60a88a95a8e23 100644
+--- a/drivers/scsi/jazz_esp.c
++++ b/drivers/scsi/jazz_esp.c
+@@ -143,7 +143,9 @@ static int esp_jazz_probe(struct platform_device *dev)
+ 	if (!esp->command_block)
+ 		goto fail_unmap_regs;
+ 
+-	host->irq = platform_get_irq(dev, 0);
++	host->irq = err = platform_get_irq(dev, 0);
++	if (err < 0)
++		goto fail_unmap_command_block;
+ 	err = request_irq(host->irq, scsi_esp_intr, IRQF_SHARED, "ESP", esp);
+ 	if (err < 0)
+ 		goto fail_unmap_command_block;
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index fd18ac2acc135..3dd22da3153ff 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -1,7 +1,7 @@
+ /*******************************************************************
+  * This file is part of the Emulex Linux Device Driver for         *
+  * Fibre Channel Host Bus Adapters.                                *
+- * Copyright (C) 2017-2020 Broadcom. All Rights Reserved. The term *
++ * Copyright (C) 2017-2021 Broadcom. All Rights Reserved. The term *
+  * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.     *
+  * Copyright (C) 2004-2016 Emulex.  All rights reserved.           *
+  * EMULEX and SLI are trademarks of Emulex.                        *
+@@ -2044,13 +2044,12 @@ out_freeiocb:
+  * This routine issues a Port Login (PLOGI) command to a remote N_Port
+  * (with the @did) for a @vport. Before issuing a PLOGI to a remote N_Port,
+  * the ndlp with the remote N_Port DID must exist on the @vport's ndlp list.
+- * This routine constructs the proper feilds of the PLOGI IOCB and invokes
++ * This routine constructs the proper fields of the PLOGI IOCB and invokes
+  * the lpfc_sli_issue_iocb() routine to send out PLOGI ELS command.
+  *
+- * Note that, in lpfc_prep_els_iocb() routine, the reference count of ndlp
+- * will be incremented by 1 for holding the ndlp and the reference to ndlp
+- * will be stored into the context1 field of the IOCB for the completion
+- * callback function to the PLOGI ELS command.
++ * Note that the ndlp reference count will be incremented by 1 for holding
++ * the ndlp and the reference to ndlp will be stored into the context1 field
++ * of the IOCB for the completion callback function to the PLOGI ELS command.
+  *
+  * Return code
+  *   0 - Successfully issued a plogi for @vport
+@@ -2068,29 +2067,28 @@ lpfc_issue_els_plogi(struct lpfc_vport *vport, uint32_t did, uint8_t retry)
+ 	int ret;
+ 
+ 	ndlp = lpfc_findnode_did(vport, did);
++	if (!ndlp)
++		return 1;
+ 
+-	if (ndlp) {
+-		/* Defer the processing of the issue PLOGI until after the
+-		 * outstanding UNREG_RPI mbox command completes, unless we
+-		 * are going offline. This logic does not apply for Fabric DIDs
+-		 */
+-		if ((ndlp->nlp_flag & NLP_UNREG_INP) &&
+-		    ((ndlp->nlp_DID & Fabric_DID_MASK) != Fabric_DID_MASK) &&
+-		    !(vport->fc_flag & FC_OFFLINE_MODE)) {
+-			lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+-					 "4110 Issue PLOGI x%x deferred "
+-					 "on NPort x%x rpi x%x Data: x%px\n",
+-					 ndlp->nlp_defer_did, ndlp->nlp_DID,
+-					 ndlp->nlp_rpi, ndlp);
+-
+-			/* We can only defer 1st PLOGI */
+-			if (ndlp->nlp_defer_did == NLP_EVT_NOTHING_PENDING)
+-				ndlp->nlp_defer_did = did;
+-			return 0;
+-		}
++	/* Defer the processing of the issue PLOGI until after the
++	 * outstanding UNREG_RPI mbox command completes, unless we
++	 * are going offline. This logic does not apply for Fabric DIDs
++	 */
++	if ((ndlp->nlp_flag & NLP_UNREG_INP) &&
++	    ((ndlp->nlp_DID & Fabric_DID_MASK) != Fabric_DID_MASK) &&
++	    !(vport->fc_flag & FC_OFFLINE_MODE)) {
++		lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
++				 "4110 Issue PLOGI x%x deferred "
++				 "on NPort x%x rpi x%x Data: x%px\n",
++				 ndlp->nlp_defer_did, ndlp->nlp_DID,
++				 ndlp->nlp_rpi, ndlp);
++
++		/* We can only defer 1st PLOGI */
++		if (ndlp->nlp_defer_did == NLP_EVT_NOTHING_PENDING)
++			ndlp->nlp_defer_did = did;
++		return 0;
+ 	}
+ 
+-	/* If ndlp is not NULL, we will bump the reference count on it */
+ 	cmdsize = (sizeof(uint32_t) + sizeof(struct serv_parm));
+ 	elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp, did,
+ 				     ELS_CMD_PLOGI);
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index 31e5455d280cb..1b1a57f46989a 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -643,7 +643,7 @@ static void init_pci_device_addresses(struct pm8001_hba_info *pm8001_ha)
+  */
+ static int pm8001_chip_init(struct pm8001_hba_info *pm8001_ha)
+ {
+-	u8 i = 0;
++	u32 i = 0;
+ 	u16 deviceid;
+ 	pci_read_config_word(pm8001_ha->pdev, PCI_DEVICE_ID, &deviceid);
+ 	/* 8081 controllers need BAR shift to access MPI space
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 84315560e8e1a..c6b0834e38061 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -1502,9 +1502,9 @@ static int mpi_uninit_check(struct pm8001_hba_info *pm8001_ha)
+ 
+ 	/* wait until Inbound DoorBell Clear Register toggled */
+ 	if (IS_SPCV_12G(pm8001_ha->pdev)) {
+-		max_wait_count = 4 * 1000 * 1000;/* 4 sec */
++		max_wait_count = 30 * 1000 * 1000; /* 30 sec */
+ 	} else {
+-		max_wait_count = 2 * 1000 * 1000;/* 2 sec */
++		max_wait_count = 15 * 1000 * 1000; /* 15 sec */
+ 	}
+ 	do {
+ 		udelay(1);
+diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
+index d021e51344f57..aef2f7cc89d36 100644
+--- a/drivers/scsi/qla2xxx/qla_bsg.c
++++ b/drivers/scsi/qla2xxx/qla_bsg.c
+@@ -2584,6 +2584,10 @@ qla2x00_get_host_stats(struct bsg_job *bsg_job)
+ 	}
+ 
+ 	data = kzalloc(response_len, GFP_KERNEL);
++	if (!data) {
++		kfree(req_data);
++		return -ENOMEM;
++	}
+ 
+ 	ret = qla2xxx_get_ini_stats(fc_bsg_to_shost(bsg_job), req_data->stat_type,
+ 				    data, response_len);
+diff --git a/drivers/scsi/sni_53c710.c b/drivers/scsi/sni_53c710.c
+index 9e2e196bc2026..97c6f81b1d2a6 100644
+--- a/drivers/scsi/sni_53c710.c
++++ b/drivers/scsi/sni_53c710.c
+@@ -58,6 +58,7 @@ static int snirm710_probe(struct platform_device *dev)
+ 	struct NCR_700_Host_Parameters *hostdata;
+ 	struct Scsi_Host *host;
+ 	struct  resource *res;
++	int rc;
+ 
+ 	res = platform_get_resource(dev, IORESOURCE_MEM, 0);
+ 	if (!res)
+@@ -83,7 +84,9 @@ static int snirm710_probe(struct platform_device *dev)
+ 		goto out_kfree;
+ 	host->this_id = 7;
+ 	host->base = base;
+-	host->irq = platform_get_irq(dev, 0);
++	host->irq = rc = platform_get_irq(dev, 0);
++	if (rc < 0)
++		goto out_put_host;
+ 	if(request_irq(host->irq, NCR_700_intr, IRQF_SHARED, "snirm710", host)) {
+ 		printk(KERN_ERR "snirm710: request_irq failed!\n");
+ 		goto out_put_host;
+diff --git a/drivers/scsi/sun3x_esp.c b/drivers/scsi/sun3x_esp.c
+index 7de82f2c97579..d3489ac7ab28b 100644
+--- a/drivers/scsi/sun3x_esp.c
++++ b/drivers/scsi/sun3x_esp.c
+@@ -206,7 +206,9 @@ static int esp_sun3x_probe(struct platform_device *dev)
+ 	if (!esp->command_block)
+ 		goto fail_unmap_regs_dma;
+ 
+-	host->irq = platform_get_irq(dev, 0);
++	host->irq = err = platform_get_irq(dev, 0);
++	if (err < 0)
++		goto fail_unmap_command_block;
+ 	err = request_irq(host->irq, scsi_esp_intr, IRQF_SHARED,
+ 			  "SUN3X ESP", esp);
+ 	if (err < 0)
+diff --git a/drivers/scsi/ufs/ufshcd-pltfrm.c b/drivers/scsi/ufs/ufshcd-pltfrm.c
+index 1a69949a4ea1c..b56d9b4e5f033 100644
+--- a/drivers/scsi/ufs/ufshcd-pltfrm.c
++++ b/drivers/scsi/ufs/ufshcd-pltfrm.c
+@@ -377,7 +377,7 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
+ 
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0) {
+-		err = -ENODEV;
++		err = irq;
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/soc/aspeed/aspeed-lpc-snoop.c b/drivers/soc/aspeed/aspeed-lpc-snoop.c
+index 20acac6342eff..5828f94b8a7df 100644
+--- a/drivers/soc/aspeed/aspeed-lpc-snoop.c
++++ b/drivers/soc/aspeed/aspeed-lpc-snoop.c
+@@ -95,8 +95,10 @@ static ssize_t snoop_file_read(struct file *file, char __user *buffer,
+ 			return -EINTR;
+ 	}
+ 	ret = kfifo_to_user(&chan->fifo, buffer, count, &copied);
++	if (ret)
++		return ret;
+ 
+-	return ret ? ret : copied;
++	return copied;
+ }
+ 
+ static __poll_t snoop_file_poll(struct file *file,
+diff --git a/drivers/soc/mediatek/mtk-pm-domains.c b/drivers/soc/mediatek/mtk-pm-domains.c
+index b7f697666bdd7..06aaf03b194c0 100644
+--- a/drivers/soc/mediatek/mtk-pm-domains.c
++++ b/drivers/soc/mediatek/mtk-pm-domains.c
+@@ -487,8 +487,9 @@ static int scpsys_add_subdomain(struct scpsys *scpsys, struct device_node *paren
+ 
+ 		child_pd = scpsys_add_one_domain(scpsys, child);
+ 		if (IS_ERR(child_pd)) {
+-			dev_err_probe(scpsys->dev, PTR_ERR(child_pd),
+-				      "%pOF: failed to get child domain id\n", child);
++			ret = PTR_ERR(child_pd);
++			dev_err_probe(scpsys->dev, ret, "%pOF: failed to get child domain id\n",
++				      child);
+ 			goto err_put_node;
+ 		}
+ 
+diff --git a/drivers/soc/qcom/mdt_loader.c b/drivers/soc/qcom/mdt_loader.c
+index 24cd193dec550..eba7f76f9d61a 100644
+--- a/drivers/soc/qcom/mdt_loader.c
++++ b/drivers/soc/qcom/mdt_loader.c
+@@ -230,6 +230,14 @@ static int __qcom_mdt_load(struct device *dev, const struct firmware *fw,
+ 			break;
+ 		}
+ 
++		if (phdr->p_filesz > phdr->p_memsz) {
++			dev_err(dev,
++				"refusing to load segment %d with p_filesz > p_memsz\n",
++				i);
++			ret = -EINVAL;
++			break;
++		}
++
+ 		ptr = mem_region + offset;
+ 
+ 		if (phdr->p_filesz && phdr->p_offset < fw->size) {
+@@ -253,6 +261,15 @@ static int __qcom_mdt_load(struct device *dev, const struct firmware *fw,
+ 				break;
+ 			}
+ 
++			if (seg_fw->size != phdr->p_filesz) {
++				dev_err(dev,
++					"failed to load segment %d from truncated file %s\n",
++					i, fw_name);
++				release_firmware(seg_fw);
++				ret = -EINVAL;
++				break;
++			}
++
+ 			release_firmware(seg_fw);
+ 		}
+ 
+diff --git a/drivers/soc/qcom/pdr_interface.c b/drivers/soc/qcom/pdr_interface.c
+index 209dcdca923f9..915d5bc3d46e6 100644
+--- a/drivers/soc/qcom/pdr_interface.c
++++ b/drivers/soc/qcom/pdr_interface.c
+@@ -153,7 +153,7 @@ static int pdr_register_listener(struct pdr_handle *pdr,
+ 	if (resp.resp.result != QMI_RESULT_SUCCESS_V01) {
+ 		pr_err("PDR: %s register listener failed: 0x%x\n",
+ 		       pds->service_path, resp.resp.error);
+-		return ret;
++		return -EREMOTEIO;
+ 	}
+ 
+ 	pds->state = resp.curr_state;
+diff --git a/drivers/soc/tegra/regulators-tegra30.c b/drivers/soc/tegra/regulators-tegra30.c
+index 7f21f31de09d6..0e776b20f6252 100644
+--- a/drivers/soc/tegra/regulators-tegra30.c
++++ b/drivers/soc/tegra/regulators-tegra30.c
+@@ -178,7 +178,7 @@ static int tegra30_voltage_update(struct tegra_regulator_coupler *tegra,
+ 	 * survive the voltage drop if it's running on a higher frequency.
+ 	 */
+ 	if (!cpu_min_uV_consumers)
+-		cpu_min_uV = cpu_uV;
++		cpu_min_uV = max(cpu_uV, cpu_min_uV);
+ 
+ 	/*
+ 	 * Bootloader shall set up voltages correctly, but if it
+diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c
+index 46885429928ab..4ec29338ce9a1 100644
+--- a/drivers/soundwire/bus.c
++++ b/drivers/soundwire/bus.c
+@@ -705,7 +705,7 @@ static int sdw_program_device_num(struct sdw_bus *bus)
+ 	struct sdw_slave *slave, *_s;
+ 	struct sdw_slave_id id;
+ 	struct sdw_msg msg;
+-	bool found = false;
++	bool found;
+ 	int count = 0, ret;
+ 	u64 addr;
+ 
+@@ -737,6 +737,7 @@ static int sdw_program_device_num(struct sdw_bus *bus)
+ 
+ 		sdw_extract_slave_id(bus, addr, &id);
+ 
++		found = false;
+ 		/* Now compare with entries */
+ 		list_for_each_entry_safe(slave, _s, &bus->slaves, node) {
+ 			if (sdw_compare_devid(slave, id) == 0) {
+diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
+index 1099b5d1262be..a418c3c7001c0 100644
+--- a/drivers/soundwire/stream.c
++++ b/drivers/soundwire/stream.c
+@@ -1375,8 +1375,16 @@ int sdw_stream_add_slave(struct sdw_slave *slave,
+ 	}
+ 
+ 	ret = sdw_config_stream(&slave->dev, stream, stream_config, true);
+-	if (ret)
++	if (ret) {
++		/*
++		 * sdw_release_master_stream will release s_rt in slave_rt_list in
++		 * stream_error case, but s_rt is only added to slave_rt_list
++		 * when sdw_config_stream is successful, so free s_rt explicitly
++		 * when sdw_config_stream is failed.
++		 * when sdw_config_stream fails.
++		kfree(s_rt);
+ 		goto stream_error;
++	}
+ 
+ 	list_add_tail(&s_rt->m_rt_node, &m_rt->slave_rt_list);
+ 
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index a2886ee44e4cb..5d98611dd999d 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -200,7 +200,7 @@ static int lpspi_prepare_xfer_hardware(struct spi_controller *controller)
+ 				spi_controller_get_devdata(controller);
+ 	int ret;
+ 
+-	ret = pm_runtime_get_sync(fsl_lpspi->dev);
++	ret = pm_runtime_resume_and_get(fsl_lpspi->dev);
+ 	if (ret < 0) {
+ 		dev_err(fsl_lpspi->dev, "failed to enable clock\n");
+ 		return ret;
+diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c
+index e4a8d203f9408..d0e5aa18b7bad 100644
+--- a/drivers/spi/spi-fsl-spi.c
++++ b/drivers/spi/spi-fsl-spi.c
+@@ -707,6 +707,11 @@ static int of_fsl_spi_probe(struct platform_device *ofdev)
+ 	struct resource mem;
+ 	int irq, type;
+ 	int ret;
++	bool spisel_boot = false;
++#if IS_ENABLED(CONFIG_FSL_SOC)
++	struct mpc8xxx_spi_probe_info *pinfo = NULL;
++#endif
++
+ 
+ 	ret = of_mpc8xxx_spi_probe(ofdev);
+ 	if (ret)
+@@ -715,9 +720,8 @@ static int of_fsl_spi_probe(struct platform_device *ofdev)
+ 	type = fsl_spi_get_type(&ofdev->dev);
+ 	if (type == TYPE_FSL) {
+ 		struct fsl_spi_platform_data *pdata = dev_get_platdata(dev);
+-		bool spisel_boot = false;
+ #if IS_ENABLED(CONFIG_FSL_SOC)
+-		struct mpc8xxx_spi_probe_info *pinfo = to_of_pinfo(pdata);
++		pinfo = to_of_pinfo(pdata);
+ 
+ 		spisel_boot = of_property_read_bool(np, "fsl,spisel_boot");
+ 		if (spisel_boot) {
+@@ -746,15 +750,24 @@ static int of_fsl_spi_probe(struct platform_device *ofdev)
+ 
+ 	ret = of_address_to_resource(np, 0, &mem);
+ 	if (ret)
+-		return ret;
++		goto unmap_out;
+ 
+ 	irq = platform_get_irq(ofdev, 0);
+-	if (irq < 0)
+-		return irq;
++	if (irq < 0) {
++		ret = irq;
++		goto unmap_out;
++	}
+ 
+ 	master = fsl_spi_probe(dev, &mem, irq);
+ 
+ 	return PTR_ERR_OR_ZERO(master);
++
++unmap_out:
++#if IS_ENABLED(CONFIG_FSL_SOC)
++	if (spisel_boot)
++		iounmap(pinfo->immr_spi_cs);
++#endif
++	return ret;
+ }
+ 
+ static int of_fsl_spi_remove(struct platform_device *ofdev)
+diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
+index 936ef54e09037..0d75080da6480 100644
+--- a/drivers/spi/spi-rockchip.c
++++ b/drivers/spi/spi-rockchip.c
+@@ -476,7 +476,7 @@ static int rockchip_spi_prepare_dma(struct rockchip_spi *rs,
+ 	return 1;
+ }
+ 
+-static void rockchip_spi_config(struct rockchip_spi *rs,
++static int rockchip_spi_config(struct rockchip_spi *rs,
+ 		struct spi_device *spi, struct spi_transfer *xfer,
+ 		bool use_dma, bool slave_mode)
+ {
+@@ -521,7 +521,9 @@ static void rockchip_spi_config(struct rockchip_spi *rs,
+ 		 * ctlr->bits_per_word_mask, so this shouldn't
+ 		 * happen
+ 		 */
+-		unreachable();
++		dev_err(rs->dev, "unknown bits per word: %d\n",
++			xfer->bits_per_word);
++		return -EINVAL;
+ 	}
+ 
+ 	if (use_dma) {
+@@ -554,6 +556,8 @@ static void rockchip_spi_config(struct rockchip_spi *rs,
+ 	 */
+ 	writel_relaxed(2 * DIV_ROUND_UP(rs->freq, 2 * xfer->speed_hz),
+ 			rs->regs + ROCKCHIP_SPI_BAUDR);
++
++	return 0;
+ }
+ 
+ static size_t rockchip_spi_max_transfer_size(struct spi_device *spi)
+@@ -577,6 +581,7 @@ static int rockchip_spi_transfer_one(
+ 		struct spi_transfer *xfer)
+ {
+ 	struct rockchip_spi *rs = spi_controller_get_devdata(ctlr);
++	int ret;
+ 	bool use_dma;
+ 
+ 	WARN_ON(readl_relaxed(rs->regs + ROCKCHIP_SPI_SSIENR) &&
+@@ -596,7 +601,9 @@ static int rockchip_spi_transfer_one(
+ 
+ 	use_dma = ctlr->can_dma ? ctlr->can_dma(ctlr, spi, xfer) : false;
+ 
+-	rockchip_spi_config(rs, spi, xfer, use_dma, ctlr->slave);
++	ret = rockchip_spi_config(rs, spi, xfer, use_dma, ctlr->slave);
++	if (ret)
++		return ret;
+ 
+ 	if (use_dma)
+ 		return rockchip_spi_prepare_dma(rs, ctlr, xfer);
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 25c0764610119..7f0244a246e92 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -1803,7 +1803,7 @@ static int stm32_spi_probe(struct platform_device *pdev)
+ 	struct reset_control *rst;
+ 	int ret;
+ 
+-	master = spi_alloc_master(&pdev->dev, sizeof(struct stm32_spi));
++	master = devm_spi_alloc_master(&pdev->dev, sizeof(struct stm32_spi));
+ 	if (!master) {
+ 		dev_err(&pdev->dev, "spi master allocation failed\n");
+ 		return -ENOMEM;
+@@ -1821,18 +1821,16 @@ static int stm32_spi_probe(struct platform_device *pdev)
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	spi->base = devm_ioremap_resource(&pdev->dev, res);
+-	if (IS_ERR(spi->base)) {
+-		ret = PTR_ERR(spi->base);
+-		goto err_master_put;
+-	}
++	if (IS_ERR(spi->base))
++		return PTR_ERR(spi->base);
+ 
+ 	spi->phys_addr = (dma_addr_t)res->start;
+ 
+ 	spi->irq = platform_get_irq(pdev, 0);
+-	if (spi->irq <= 0) {
+-		ret = dev_err_probe(&pdev->dev, spi->irq, "failed to get irq\n");
+-		goto err_master_put;
+-	}
++	if (spi->irq <= 0)
++		return dev_err_probe(&pdev->dev, spi->irq,
++				     "failed to get irq\n");
++
+ 	ret = devm_request_threaded_irq(&pdev->dev, spi->irq,
+ 					spi->cfg->irq_handler_event,
+ 					spi->cfg->irq_handler_thread,
+@@ -1840,20 +1838,20 @@ static int stm32_spi_probe(struct platform_device *pdev)
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "irq%d request failed: %d\n", spi->irq,
+ 			ret);
+-		goto err_master_put;
++		return ret;
+ 	}
+ 
+ 	spi->clk = devm_clk_get(&pdev->dev, NULL);
+ 	if (IS_ERR(spi->clk)) {
+ 		ret = PTR_ERR(spi->clk);
+ 		dev_err(&pdev->dev, "clk get failed: %d\n", ret);
+-		goto err_master_put;
++		return ret;
+ 	}
+ 
+ 	ret = clk_prepare_enable(spi->clk);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "clk enable failed: %d\n", ret);
+-		goto err_master_put;
++		return ret;
+ 	}
+ 	spi->clk_rate = clk_get_rate(spi->clk);
+ 	if (!spi->clk_rate) {
+@@ -1929,7 +1927,7 @@ static int stm32_spi_probe(struct platform_device *pdev)
+ 	pm_runtime_set_active(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
+ 
+-	ret = devm_spi_register_master(&pdev->dev, master);
++	ret = spi_register_master(master);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "spi master registration failed: %d\n",
+ 			ret);
+@@ -1949,8 +1947,6 @@ err_dma_release:
+ 		dma_release_channel(spi->dma_rx);
+ err_clk_disable:
+ 	clk_disable_unprepare(spi->clk);
+-err_master_put:
+-	spi_master_put(master);
+ 
+ 	return ret;
+ }
+@@ -1960,6 +1956,7 @@ static int stm32_spi_remove(struct platform_device *pdev)
+ 	struct spi_master *master = platform_get_drvdata(pdev);
+ 	struct stm32_spi *spi = spi_master_get_devdata(master);
+ 
++	spi_unregister_master(master);
+ 	spi->cfg->disable(spi);
+ 
+ 	if (master->dma_tx)
+diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
+index c8fa6ee18ae77..7162387b9f96a 100644
+--- a/drivers/spi/spi-zynqmp-gqspi.c
++++ b/drivers/spi/spi-zynqmp-gqspi.c
+@@ -157,6 +157,7 @@ enum mode_type {GQSPI_MODE_IO, GQSPI_MODE_DMA};
+  * @data_completion:	completion structure
+  */
+ struct zynqmp_qspi {
++	struct spi_controller *ctlr;
+ 	void __iomem *regs;
+ 	struct clk *refclk;
+ 	struct clk *pclk;
+@@ -173,6 +174,7 @@ struct zynqmp_qspi {
+ 	u32 genfifoentry;
+ 	enum mode_type mode;
+ 	struct completion data_completion;
++	struct mutex op_lock;
+ };
+ 
+ /**
+@@ -486,24 +488,10 @@ static int zynqmp_qspi_setup_op(struct spi_device *qspi)
+ {
+ 	struct spi_controller *ctlr = qspi->master;
+ 	struct zynqmp_qspi *xqspi = spi_controller_get_devdata(ctlr);
+-	struct device *dev = &ctlr->dev;
+-	int ret;
+ 
+ 	if (ctlr->busy)
+ 		return -EBUSY;
+ 
+-	ret = clk_enable(xqspi->refclk);
+-	if (ret) {
+-		dev_err(dev, "Cannot enable device clock.\n");
+-		return ret;
+-	}
+-
+-	ret = clk_enable(xqspi->pclk);
+-	if (ret) {
+-		dev_err(dev, "Cannot enable APB clock.\n");
+-		clk_disable(xqspi->refclk);
+-		return ret;
+-	}
+ 	zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, GQSPI_EN_MASK);
+ 
+ 	return 0;
+@@ -520,7 +508,7 @@ static void zynqmp_qspi_filltxfifo(struct zynqmp_qspi *xqspi, int size)
+ {
+ 	u32 count = 0, intermediate;
+ 
+-	while ((xqspi->bytes_to_transfer > 0) && (count < size)) {
++	while ((xqspi->bytes_to_transfer > 0) && (count < size) && (xqspi->txbuf)) {
+ 		memcpy(&intermediate, xqspi->txbuf, 4);
+ 		zynqmp_gqspi_write(xqspi, GQSPI_TXD_OFST, intermediate);
+ 
+@@ -579,7 +567,7 @@ static void zynqmp_qspi_fillgenfifo(struct zynqmp_qspi *xqspi, u8 nbits,
+ 		genfifoentry |= GQSPI_GENFIFO_DATA_XFER;
+ 		genfifoentry |= GQSPI_GENFIFO_TX;
+ 		transfer_len = xqspi->bytes_to_transfer;
+-	} else {
++	} else if (xqspi->rxbuf) {
+ 		genfifoentry &= ~GQSPI_GENFIFO_TX;
+ 		genfifoentry |= GQSPI_GENFIFO_DATA_XFER;
+ 		genfifoentry |= GQSPI_GENFIFO_RX;
+@@ -587,6 +575,11 @@ static void zynqmp_qspi_fillgenfifo(struct zynqmp_qspi *xqspi, u8 nbits,
+ 			transfer_len = xqspi->dma_rx_bytes;
+ 		else
+ 			transfer_len = xqspi->bytes_to_receive;
++	} else {
++		/* Sending dummy cycles here */
++		genfifoentry &= ~(GQSPI_GENFIFO_TX | GQSPI_GENFIFO_RX);
++		genfifoentry |= GQSPI_GENFIFO_DATA_XFER;
++		transfer_len = xqspi->bytes_to_transfer;
+ 	}
+ 	genfifoentry |= zynqmp_qspi_selectspimode(xqspi, nbits);
+ 	xqspi->genfifoentry = genfifoentry;
+@@ -738,7 +731,7 @@ static irqreturn_t zynqmp_qspi_irq(int irq, void *dev_id)
+  * zynqmp_qspi_setuprxdma - This function sets up the RX DMA operation
+  * @xqspi:	xqspi is a pointer to the GQSPI instance.
+  */
+-static void zynqmp_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
++static int zynqmp_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
+ {
+ 	u32 rx_bytes, rx_rem, config_reg;
+ 	dma_addr_t addr;
+@@ -752,7 +745,7 @@ static void zynqmp_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
+ 		zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST, config_reg);
+ 		xqspi->mode = GQSPI_MODE_IO;
+ 		xqspi->dma_rx_bytes = 0;
+-		return;
++		return 0;
+ 	}
+ 
+ 	rx_rem = xqspi->bytes_to_receive % 4;
+@@ -760,8 +753,10 @@ static void zynqmp_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
+ 
+ 	addr = dma_map_single(xqspi->dev, (void *)xqspi->rxbuf,
+ 			      rx_bytes, DMA_FROM_DEVICE);
+-	if (dma_mapping_error(xqspi->dev, addr))
++	if (dma_mapping_error(xqspi->dev, addr)) {
+ 		dev_err(xqspi->dev, "ERR:rxdma:memory not mapped\n");
++		return -ENOMEM;
++	}
+ 
+ 	xqspi->dma_rx_bytes = rx_bytes;
+ 	xqspi->dma_addr = addr;
+@@ -782,6 +777,8 @@ static void zynqmp_qspi_setuprxdma(struct zynqmp_qspi *xqspi)
+ 
+ 	/* Write the number of bytes to transfer */
+ 	zynqmp_gqspi_write(xqspi, GQSPI_QSPIDMA_DST_SIZE_OFST, rx_bytes);
++
++	return 0;
+ }
+ 
+ /**
+@@ -818,11 +815,17 @@ static void zynqmp_qspi_write_op(struct zynqmp_qspi *xqspi, u8 tx_nbits,
+  * @genfifoentry:	genfifoentry is pointer to the variable in which
+  *			GENFIFO	mask is returned to calling function
+  */
+-static void zynqmp_qspi_read_op(struct zynqmp_qspi *xqspi, u8 rx_nbits,
++static int zynqmp_qspi_read_op(struct zynqmp_qspi *xqspi, u8 rx_nbits,
+ 				u32 genfifoentry)
+ {
++	int ret;
++
++	ret = zynqmp_qspi_setuprxdma(xqspi);
++	if (ret)
++		return ret;
+ 	zynqmp_qspi_fillgenfifo(xqspi, rx_nbits, genfifoentry);
+-	zynqmp_qspi_setuprxdma(xqspi);
++
++	return 0;
+ }
+ 
+ /**
+@@ -835,10 +838,13 @@ static void zynqmp_qspi_read_op(struct zynqmp_qspi *xqspi, u8 rx_nbits,
+  */
+ static int __maybe_unused zynqmp_qspi_suspend(struct device *dev)
+ {
+-	struct spi_controller *ctlr = dev_get_drvdata(dev);
+-	struct zynqmp_qspi *xqspi = spi_controller_get_devdata(ctlr);
++	struct zynqmp_qspi *xqspi = dev_get_drvdata(dev);
++	struct spi_controller *ctlr = xqspi->ctlr;
++	int ret;
+ 
+-	spi_controller_suspend(ctlr);
++	ret = spi_controller_suspend(ctlr);
++	if (ret)
++		return ret;
+ 
+ 	zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, 0x0);
+ 
+@@ -856,27 +862,13 @@ static int __maybe_unused zynqmp_qspi_suspend(struct device *dev)
+  */
+ static int __maybe_unused zynqmp_qspi_resume(struct device *dev)
+ {
+-	struct spi_controller *ctlr = dev_get_drvdata(dev);
+-	struct zynqmp_qspi *xqspi = spi_controller_get_devdata(ctlr);
+-	int ret = 0;
+-
+-	ret = clk_enable(xqspi->pclk);
+-	if (ret) {
+-		dev_err(dev, "Cannot enable APB clock.\n");
+-		return ret;
+-	}
++	struct zynqmp_qspi *xqspi = dev_get_drvdata(dev);
++	struct spi_controller *ctlr = xqspi->ctlr;
+ 
+-	ret = clk_enable(xqspi->refclk);
+-	if (ret) {
+-		dev_err(dev, "Cannot enable device clock.\n");
+-		clk_disable(xqspi->pclk);
+-		return ret;
+-	}
++	zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, GQSPI_EN_MASK);
+ 
+ 	spi_controller_resume(ctlr);
+ 
+-	clk_disable(xqspi->refclk);
+-	clk_disable(xqspi->pclk);
+ 	return 0;
+ }
+ 
+@@ -890,10 +882,10 @@ static int __maybe_unused zynqmp_qspi_resume(struct device *dev)
+  */
+ static int __maybe_unused zynqmp_runtime_suspend(struct device *dev)
+ {
+-	struct zynqmp_qspi *xqspi = (struct zynqmp_qspi *)dev_get_drvdata(dev);
++	struct zynqmp_qspi *xqspi = dev_get_drvdata(dev);
+ 
+-	clk_disable(xqspi->refclk);
+-	clk_disable(xqspi->pclk);
++	clk_disable_unprepare(xqspi->refclk);
++	clk_disable_unprepare(xqspi->pclk);
+ 
+ 	return 0;
+ }
+@@ -908,19 +900,19 @@ static int __maybe_unused zynqmp_runtime_suspend(struct device *dev)
+  */
+ static int __maybe_unused zynqmp_runtime_resume(struct device *dev)
+ {
+-	struct zynqmp_qspi *xqspi = (struct zynqmp_qspi *)dev_get_drvdata(dev);
++	struct zynqmp_qspi *xqspi = dev_get_drvdata(dev);
+ 	int ret;
+ 
+-	ret = clk_enable(xqspi->pclk);
++	ret = clk_prepare_enable(xqspi->pclk);
+ 	if (ret) {
+ 		dev_err(dev, "Cannot enable APB clock.\n");
+ 		return ret;
+ 	}
+ 
+-	ret = clk_enable(xqspi->refclk);
++	ret = clk_prepare_enable(xqspi->refclk);
+ 	if (ret) {
+ 		dev_err(dev, "Cannot enable device clock.\n");
+-		clk_disable(xqspi->pclk);
++		clk_disable_unprepare(xqspi->pclk);
+ 		return ret;
+ 	}
+ 
+@@ -944,25 +936,23 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ 	struct zynqmp_qspi *xqspi = spi_controller_get_devdata
+ 				    (mem->spi->master);
+ 	int err = 0, i;
+-	u8 *tmpbuf;
+ 	u32 genfifoentry = 0;
++	u16 opcode = op->cmd.opcode;
++	u64 opaddr;
+ 
+ 	dev_dbg(xqspi->dev, "cmd:%#x mode:%d.%d.%d.%d\n",
+ 		op->cmd.opcode, op->cmd.buswidth, op->addr.buswidth,
+ 		op->dummy.buswidth, op->data.buswidth);
+ 
++	mutex_lock(&xqspi->op_lock);
+ 	zynqmp_qspi_config_op(xqspi, mem->spi);
+ 	zynqmp_qspi_chipselect(mem->spi, false);
+ 	genfifoentry |= xqspi->genfifocs;
+ 	genfifoentry |= xqspi->genfifobus;
+ 
+ 	if (op->cmd.opcode) {
+-		tmpbuf = kzalloc(op->cmd.nbytes, GFP_KERNEL | GFP_DMA);
+-		if (!tmpbuf)
+-			return -ENOMEM;
+-		tmpbuf[0] = op->cmd.opcode;
+ 		reinit_completion(&xqspi->data_completion);
+-		xqspi->txbuf = tmpbuf;
++		xqspi->txbuf = &opcode;
+ 		xqspi->rxbuf = NULL;
+ 		xqspi->bytes_to_transfer = op->cmd.nbytes;
+ 		xqspi->bytes_to_receive = 0;
+@@ -973,16 +963,15 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ 		zynqmp_gqspi_write(xqspi, GQSPI_IER_OFST,
+ 				   GQSPI_IER_GENFIFOEMPTY_MASK |
+ 				   GQSPI_IER_TXNOT_FULL_MASK);
+-		if (!wait_for_completion_interruptible_timeout
++		if (!wait_for_completion_timeout
+ 		    (&xqspi->data_completion, msecs_to_jiffies(1000))) {
+ 			err = -ETIMEDOUT;
+-			kfree(tmpbuf);
+ 			goto return_err;
+ 		}
+-		kfree(tmpbuf);
+ 	}
+ 
+ 	if (op->addr.nbytes) {
++		xqspi->txbuf = &opaddr;
+ 		for (i = 0; i < op->addr.nbytes; i++) {
+ 			*(((u8 *)xqspi->txbuf) + i) = op->addr.val >>
+ 					(8 * (op->addr.nbytes - i - 1));
+@@ -1001,7 +990,7 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ 				   GQSPI_IER_TXEMPTY_MASK |
+ 				   GQSPI_IER_GENFIFOEMPTY_MASK |
+ 				   GQSPI_IER_TXNOT_FULL_MASK);
+-		if (!wait_for_completion_interruptible_timeout
++		if (!wait_for_completion_timeout
+ 		    (&xqspi->data_completion, msecs_to_jiffies(1000))) {
+ 			err = -ETIMEDOUT;
+ 			goto return_err;
+@@ -1009,32 +998,23 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ 	}
+ 
+ 	if (op->dummy.nbytes) {
+-		tmpbuf = kzalloc(op->dummy.nbytes, GFP_KERNEL | GFP_DMA);
+-		if (!tmpbuf)
+-			return -ENOMEM;
+-		memset(tmpbuf, 0xff, op->dummy.nbytes);
+-		reinit_completion(&xqspi->data_completion);
+-		xqspi->txbuf = tmpbuf;
++		xqspi->txbuf = NULL;
+ 		xqspi->rxbuf = NULL;
+-		xqspi->bytes_to_transfer = op->dummy.nbytes;
++		/*
++		 * xqspi->bytes_to_transfer here represents the dummy cycles
++		 * which need to be sent.
++		 */
++		xqspi->bytes_to_transfer = op->dummy.nbytes * 8 / op->dummy.buswidth;
+ 		xqspi->bytes_to_receive = 0;
+-		zynqmp_qspi_write_op(xqspi, op->dummy.buswidth,
++		/*
++		 * Using op->data.buswidth instead of op->dummy.buswidth here because
++		 * we need to use it to configure the correct SPI mode.
++		 */
++		zynqmp_qspi_write_op(xqspi, op->data.buswidth,
+ 				     genfifoentry);
+ 		zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST,
+ 				   zynqmp_gqspi_read(xqspi, GQSPI_CONFIG_OFST) |
+ 				   GQSPI_CFG_START_GEN_FIFO_MASK);
+-		zynqmp_gqspi_write(xqspi, GQSPI_IER_OFST,
+-				   GQSPI_IER_TXEMPTY_MASK |
+-				   GQSPI_IER_GENFIFOEMPTY_MASK |
+-				   GQSPI_IER_TXNOT_FULL_MASK);
+-		if (!wait_for_completion_interruptible_timeout
+-		    (&xqspi->data_completion, msecs_to_jiffies(1000))) {
+-			err = -ETIMEDOUT;
+-			kfree(tmpbuf);
+-			goto return_err;
+-		}
+-
+-		kfree(tmpbuf);
+ 	}
+ 
+ 	if (op->data.nbytes) {
+@@ -1059,8 +1039,11 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ 			xqspi->rxbuf = (u8 *)op->data.buf.in;
+ 			xqspi->bytes_to_receive = op->data.nbytes;
+ 			xqspi->bytes_to_transfer = 0;
+-			zynqmp_qspi_read_op(xqspi, op->data.buswidth,
++			err = zynqmp_qspi_read_op(xqspi, op->data.buswidth,
+ 					    genfifoentry);
++			if (err)
++				goto return_err;
++
+ 			zynqmp_gqspi_write(xqspi, GQSPI_CONFIG_OFST,
+ 					   zynqmp_gqspi_read
+ 					   (xqspi, GQSPI_CONFIG_OFST) |
+@@ -1076,7 +1059,7 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ 						   GQSPI_IER_RXEMPTY_MASK);
+ 			}
+ 		}
+-		if (!wait_for_completion_interruptible_timeout
++		if (!wait_for_completion_timeout
+ 		    (&xqspi->data_completion, msecs_to_jiffies(1000)))
+ 			err = -ETIMEDOUT;
+ 	}
+@@ -1084,6 +1067,7 @@ static int zynqmp_qspi_exec_op(struct spi_mem *mem,
+ return_err:
+ 
+ 	zynqmp_qspi_chipselect(mem->spi, true);
++	mutex_unlock(&xqspi->op_lock);
+ 
+ 	return err;
+ }
+@@ -1120,6 +1104,7 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 
+ 	xqspi = spi_controller_get_devdata(ctlr);
+ 	xqspi->dev = dev;
++	xqspi->ctlr = ctlr;
+ 	platform_set_drvdata(pdev, xqspi);
+ 
+ 	xqspi->regs = devm_platform_ioremap_resource(pdev, 0);
+@@ -1135,13 +1120,11 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 		goto remove_master;
+ 	}
+ 
+-	init_completion(&xqspi->data_completion);
+-
+ 	xqspi->refclk = devm_clk_get(&pdev->dev, "ref_clk");
+ 	if (IS_ERR(xqspi->refclk)) {
+ 		dev_err(dev, "ref_clk clock not found.\n");
+ 		ret = PTR_ERR(xqspi->refclk);
+-		goto clk_dis_pclk;
++		goto remove_master;
+ 	}
+ 
+ 	ret = clk_prepare_enable(xqspi->pclk);
+@@ -1156,15 +1139,24 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 		goto clk_dis_pclk;
+ 	}
+ 
++	init_completion(&xqspi->data_completion);
++
++	mutex_init(&xqspi->op_lock);
++
+ 	pm_runtime_use_autosuspend(&pdev->dev);
+ 	pm_runtime_set_autosuspend_delay(&pdev->dev, SPI_AUTOSUSPEND_TIMEOUT);
+ 	pm_runtime_set_active(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
++
++	ret = pm_runtime_get_sync(&pdev->dev);
++	if (ret < 0) {
++		dev_err(&pdev->dev, "Failed to pm_runtime_get_sync: %d\n", ret);
++		goto clk_dis_all;
++	}
++
+ 	/* QSPI controller initializations */
+ 	zynqmp_qspi_init_hw(xqspi);
+ 
+-	pm_runtime_mark_last_busy(&pdev->dev);
+-	pm_runtime_put_autosuspend(&pdev->dev);
+ 	xqspi->irq = platform_get_irq(pdev, 0);
+ 	if (xqspi->irq <= 0) {
+ 		ret = -ENXIO;
+@@ -1178,6 +1170,7 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 		goto clk_dis_all;
+ 	}
+ 
++	dma_set_mask(&pdev->dev, DMA_BIT_MASK(44));
+ 	ctlr->bits_per_word_mask = SPI_BPW_MASK(8);
+ 	ctlr->num_chipselect = GQSPI_DEFAULT_NUM_CS;
+ 	ctlr->mem_ops = &zynqmp_qspi_mem_ops;
+@@ -1187,6 +1180,7 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 	ctlr->mode_bits = SPI_CPOL | SPI_CPHA | SPI_RX_DUAL | SPI_RX_QUAD |
+ 			    SPI_TX_DUAL | SPI_TX_QUAD;
+ 	ctlr->dev.of_node = np;
++	ctlr->auto_runtime_pm = true;
+ 
+ 	ret = devm_spi_register_controller(&pdev->dev, ctlr);
+ 	if (ret) {
+@@ -1194,9 +1188,13 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 		goto clk_dis_all;
+ 	}
+ 
++	pm_runtime_mark_last_busy(&pdev->dev);
++	pm_runtime_put_autosuspend(&pdev->dev);
++
+ 	return 0;
+ 
+ clk_dis_all:
++	pm_runtime_put_sync(&pdev->dev);
+ 	pm_runtime_set_suspended(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 	clk_disable_unprepare(xqspi->refclk);
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 927c2a28011f6..8da4fe475b84b 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -2496,6 +2496,7 @@ struct spi_controller *__devm_spi_alloc_controller(struct device *dev,
+ 
+ 	ctlr = __spi_alloc_controller(dev, size, slave);
+ 	if (ctlr) {
++		ctlr->devm_allocated = true;
+ 		*ptr = ctlr;
+ 		devres_add(dev, ptr);
+ 	} else {
+@@ -2842,11 +2843,6 @@ int devm_spi_register_controller(struct device *dev,
+ }
+ EXPORT_SYMBOL_GPL(devm_spi_register_controller);
+ 
+-static int devm_spi_match_controller(struct device *dev, void *res, void *ctlr)
+-{
+-	return *(struct spi_controller **)res == ctlr;
+-}
+-
+ static int __unregister(struct device *dev, void *null)
+ {
+ 	spi_unregister_device(to_spi_device(dev));
+@@ -2893,8 +2889,7 @@ void spi_unregister_controller(struct spi_controller *ctlr)
+ 	/* Release the last reference on the controller if its driver
+ 	 * has not yet been converted to devm_spi_alloc_master/slave().
+ 	 */
+-	if (!devres_find(ctlr->dev.parent, devm_spi_release_controller,
+-			 devm_spi_match_controller, ctlr))
++	if (!ctlr->devm_allocated)
+ 		put_device(&ctlr->dev);
+ 
+ 	/* free bus id */
+diff --git a/drivers/staging/comedi/drivers/tests/ni_routes_test.c b/drivers/staging/comedi/drivers/tests/ni_routes_test.c
+index 4061b3b5f8e9b..68defeb53de4a 100644
+--- a/drivers/staging/comedi/drivers/tests/ni_routes_test.c
++++ b/drivers/staging/comedi/drivers/tests/ni_routes_test.c
+@@ -217,7 +217,8 @@ void test_ni_assign_device_routes(void)
+ 	const u8 *table, *oldtable;
+ 
+ 	init_pci_6070e();
+-	ni_assign_device_routes(ni_eseries, pci_6070e, &private.routing_tables);
++	ni_assign_device_routes(ni_eseries, pci_6070e, NULL,
++				&private.routing_tables);
+ 	devroutes = private.routing_tables.valid_routes;
+ 	table = private.routing_tables.route_values;
+ 
+@@ -253,7 +254,8 @@ void test_ni_assign_device_routes(void)
+ 	olddevroutes = devroutes;
+ 	oldtable = table;
+ 	init_pci_6220();
+-	ni_assign_device_routes(ni_mseries, pci_6220, &private.routing_tables);
++	ni_assign_device_routes(ni_mseries, pci_6220, NULL,
++				&private.routing_tables);
+ 	devroutes = private.routing_tables.valid_routes;
+ 	table = private.routing_tables.route_values;
+ 
+diff --git a/drivers/staging/fwserial/fwserial.c b/drivers/staging/fwserial/fwserial.c
+index c368082aae1aa..0f4655d7d520a 100644
+--- a/drivers/staging/fwserial/fwserial.c
++++ b/drivers/staging/fwserial/fwserial.c
+@@ -1218,13 +1218,12 @@ static int get_serial_info(struct tty_struct *tty,
+ 	struct fwtty_port *port = tty->driver_data;
+ 
+ 	mutex_lock(&port->port.mutex);
+-	ss->type =  PORT_UNKNOWN;
+-	ss->line =  port->port.tty->index;
+-	ss->flags = port->port.flags;
+-	ss->xmit_fifo_size = FWTTY_PORT_TXFIFO_LEN;
++	ss->line = port->index;
+ 	ss->baud_base = 400000000;
+-	ss->close_delay = port->port.close_delay;
++	ss->close_delay = jiffies_to_msecs(port->port.close_delay) / 10;
++	ss->closing_wait = 3000;
+ 	mutex_unlock(&port->port.mutex);
++
+ 	return 0;
+ }
+ 
+@@ -1232,20 +1231,20 @@ static int set_serial_info(struct tty_struct *tty,
+ 			   struct serial_struct *ss)
+ {
+ 	struct fwtty_port *port = tty->driver_data;
++	unsigned int cdelay;
+ 
+-	if (ss->irq != 0 || ss->port != 0 || ss->custom_divisor != 0 ||
+-	    ss->baud_base != 400000000)
+-		return -EPERM;
++	cdelay = msecs_to_jiffies(ss->close_delay * 10);
+ 
+ 	mutex_lock(&port->port.mutex);
+ 	if (!capable(CAP_SYS_ADMIN)) {
+-		if (((ss->flags & ~ASYNC_USR_MASK) !=
++		if (cdelay != port->port.close_delay ||
++		    ((ss->flags & ~ASYNC_USR_MASK) !=
+ 		     (port->port.flags & ~ASYNC_USR_MASK))) {
+ 			mutex_unlock(&port->port.mutex);
+ 			return -EPERM;
+ 		}
+ 	}
+-	port->port.close_delay = ss->close_delay * HZ / 100;
++	port->port.close_delay = cdelay;
+ 	mutex_unlock(&port->port.mutex);
+ 
+ 	return 0;
+diff --git a/drivers/staging/greybus/uart.c b/drivers/staging/greybus/uart.c
+index 607378bfebb7e..a520f7f213db0 100644
+--- a/drivers/staging/greybus/uart.c
++++ b/drivers/staging/greybus/uart.c
+@@ -614,10 +614,12 @@ static int get_serial_info(struct tty_struct *tty,
+ 	ss->line = gb_tty->minor;
+ 	ss->xmit_fifo_size = 16;
+ 	ss->baud_base = 9600;
+-	ss->close_delay = gb_tty->port.close_delay / 10;
++	ss->close_delay = jiffies_to_msecs(gb_tty->port.close_delay) / 10;
+ 	ss->closing_wait =
+ 		gb_tty->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
+-		ASYNC_CLOSING_WAIT_NONE : gb_tty->port.closing_wait / 10;
++		ASYNC_CLOSING_WAIT_NONE :
++		jiffies_to_msecs(gb_tty->port.closing_wait) / 10;
++
+ 	return 0;
+ }
+ 
+@@ -629,17 +631,16 @@ static int set_serial_info(struct tty_struct *tty,
+ 	unsigned int close_delay;
+ 	int retval = 0;
+ 
+-	close_delay = ss->close_delay * 10;
++	close_delay = msecs_to_jiffies(ss->close_delay * 10);
+ 	closing_wait = ss->closing_wait == ASYNC_CLOSING_WAIT_NONE ?
+-			ASYNC_CLOSING_WAIT_NONE : ss->closing_wait * 10;
++			ASYNC_CLOSING_WAIT_NONE :
++			msecs_to_jiffies(ss->closing_wait * 10);
+ 
+ 	mutex_lock(&gb_tty->port.mutex);
+ 	if (!capable(CAP_SYS_ADMIN)) {
+ 		if ((close_delay != gb_tty->port.close_delay) ||
+ 		    (closing_wait != gb_tty->port.closing_wait))
+ 			retval = -EPERM;
+-		else
+-			retval = -EOPNOTSUPP;
+ 	} else {
+ 		gb_tty->port.close_delay = close_delay;
+ 		gb_tty->port.closing_wait = closing_wait;
+diff --git a/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c b/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c
+index 7ca7378b18592..0ab67b2aec671 100644
+--- a/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c
++++ b/drivers/staging/media/atomisp/i2c/atomisp-lm3554.c
+@@ -843,8 +843,10 @@ static int lm3554_probe(struct i2c_client *client)
+ 		return -ENOMEM;
+ 
+ 	flash->pdata = lm3554_platform_data_func(client);
+-	if (IS_ERR(flash->pdata))
+-		return PTR_ERR(flash->pdata);
++	if (IS_ERR(flash->pdata)) {
++		err = PTR_ERR(flash->pdata);
++		goto fail1;
++	}
+ 
+ 	v4l2_i2c_subdev_init(&flash->sd, client, &lm3554_ops);
+ 	flash->sd.internal_ops = &lm3554_internal_ops;
+@@ -856,7 +858,7 @@ static int lm3554_probe(struct i2c_client *client)
+ 				   ARRAY_SIZE(lm3554_controls));
+ 	if (ret) {
+ 		dev_err(&client->dev, "error initialize a ctrl_handler.\n");
+-		goto fail2;
++		goto fail3;
+ 	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(lm3554_controls); i++)
+@@ -865,14 +867,14 @@ static int lm3554_probe(struct i2c_client *client)
+ 
+ 	if (flash->ctrl_handler.error) {
+ 		dev_err(&client->dev, "ctrl_handler error.\n");
+-		goto fail2;
++		goto fail3;
+ 	}
+ 
+ 	flash->sd.ctrl_handler = &flash->ctrl_handler;
+ 	err = media_entity_pads_init(&flash->sd.entity, 0, NULL);
+ 	if (err) {
+ 		dev_err(&client->dev, "error initialize a media entity.\n");
+-		goto fail1;
++		goto fail2;
+ 	}
+ 
+ 	flash->sd.entity.function = MEDIA_ENT_F_FLASH;
+@@ -884,14 +886,15 @@ static int lm3554_probe(struct i2c_client *client)
+ 	err = lm3554_gpio_init(client);
+ 	if (err) {
+ 		dev_err(&client->dev, "gpio request/direction_output fail");
+-		goto fail2;
++		goto fail3;
+ 	}
+ 	return atomisp_register_i2c_module(&flash->sd, NULL, LED_FLASH);
+-fail2:
++fail3:
+ 	media_entity_cleanup(&flash->sd.entity);
+ 	v4l2_ctrl_handler_free(&flash->ctrl_handler);
+-fail1:
++fail2:
+ 	v4l2_device_unregister_subdev(&flash->sd);
++fail1:
+ 	kfree(flash);
+ 
+ 	return err;
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_ioctl.c b/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
+index 2ae50decfc8bd..9da82855552de 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_ioctl.c
+@@ -948,10 +948,8 @@ int atomisp_alloc_css_stat_bufs(struct atomisp_sub_device *asd,
+ 		dev_dbg(isp->dev, "allocating %d dis buffers\n", count);
+ 		while (count--) {
+ 			dis_buf = kzalloc(sizeof(struct atomisp_dis_buf), GFP_KERNEL);
+-			if (!dis_buf) {
+-				kfree(s3a_buf);
++			if (!dis_buf)
+ 				goto error;
+-			}
+ 			if (atomisp_css_allocate_stat_buffers(
+ 				asd, stream_id, NULL, dis_buf, NULL)) {
+ 				kfree(dis_buf);
+diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
+index f13af2329f486..0168f9839c905 100644
+--- a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
++++ b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
+@@ -857,16 +857,17 @@ static void free_private_pages(struct hmm_buffer_object *bo,
+ 	kfree(bo->page_obj);
+ }
+ 
+-static void free_user_pages(struct hmm_buffer_object *bo)
++static void free_user_pages(struct hmm_buffer_object *bo,
++			    unsigned int page_nr)
+ {
+ 	int i;
+ 
+ 	hmm_mem_stat.usr_size -= bo->pgnr;
+ 
+ 	if (bo->mem_type == HMM_BO_MEM_TYPE_PFN) {
+-		unpin_user_pages(bo->pages, bo->pgnr);
++		unpin_user_pages(bo->pages, page_nr);
+ 	} else {
+-		for (i = 0; i < bo->pgnr; i++)
++		for (i = 0; i < page_nr; i++)
+ 			put_page(bo->pages[i]);
+ 	}
+ 	kfree(bo->pages);
+@@ -942,6 +943,8 @@ static int alloc_user_pages(struct hmm_buffer_object *bo,
+ 		dev_err(atomisp_dev,
+ 			"get_user_pages err: bo->pgnr = %d, pgnr actually pinned = %d.\n",
+ 			bo->pgnr, page_nr);
++		if (page_nr < 0)
++			page_nr = 0;
+ 		goto out_of_mem;
+ 	}
+ 
+@@ -954,7 +957,7 @@ static int alloc_user_pages(struct hmm_buffer_object *bo,
+ 
+ out_of_mem:
+ 
+-	free_user_pages(bo);
++	free_user_pages(bo, page_nr);
+ 
+ 	return -ENOMEM;
+ }
+@@ -1037,7 +1040,7 @@ void hmm_bo_free_pages(struct hmm_buffer_object *bo)
+ 	if (bo->type == HMM_BO_PRIVATE)
+ 		free_private_pages(bo, &dynamic_pool, &reserved_pool);
+ 	else if (bo->type == HMM_BO_USER)
+-		free_user_pages(bo);
++		free_user_pages(bo, bo->pgnr);
+ 	else
+ 		dev_err(atomisp_dev, "invalid buffer type.\n");
+ 	mutex_unlock(&bo->mutex);
+diff --git a/drivers/staging/media/omap4iss/iss.c b/drivers/staging/media/omap4iss/iss.c
+index dae9073e7d3cc..085397045b36d 100644
+--- a/drivers/staging/media/omap4iss/iss.c
++++ b/drivers/staging/media/omap4iss/iss.c
+@@ -1236,8 +1236,10 @@ static int iss_probe(struct platform_device *pdev)
+ 	if (ret < 0)
+ 		goto error;
+ 
+-	if (!omap4iss_get(iss))
++	if (!omap4iss_get(iss)) {
++		ret = -EINVAL;
+ 		goto error;
++	}
+ 
+ 	ret = iss_reset(iss);
+ 	if (ret < 0)
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index d3eb81ee8dc27..5f0219d117fb8 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -55,16 +55,13 @@ static const struct v4l2_ctrl_ops rkvdec_ctrl_ops = {
+ 
+ static const struct rkvdec_ctrl_desc rkvdec_h264_ctrl_descs[] = {
+ 	{
+-		.mandatory = true,
+ 		.cfg.id = V4L2_CID_STATELESS_H264_DECODE_PARAMS,
+ 	},
+ 	{
+-		.mandatory = true,
+ 		.cfg.id = V4L2_CID_STATELESS_H264_SPS,
+ 		.cfg.ops = &rkvdec_ctrl_ops,
+ 	},
+ 	{
+-		.mandatory = true,
+ 		.cfg.id = V4L2_CID_STATELESS_H264_PPS,
+ 	},
+ 	{
+@@ -585,25 +582,7 @@ static const struct vb2_ops rkvdec_queue_ops = {
+ 
+ static int rkvdec_request_validate(struct media_request *req)
+ {
+-	struct media_request_object *obj;
+-	const struct rkvdec_ctrls *ctrls;
+-	struct v4l2_ctrl_handler *hdl;
+-	struct rkvdec_ctx *ctx = NULL;
+-	unsigned int count, i;
+-	int ret;
+-
+-	list_for_each_entry(obj, &req->objects, list) {
+-		if (vb2_request_object_is_buffer(obj)) {
+-			struct vb2_buffer *vb;
+-
+-			vb = container_of(obj, struct vb2_buffer, req_obj);
+-			ctx = vb2_get_drv_priv(vb->vb2_queue);
+-			break;
+-		}
+-	}
+-
+-	if (!ctx)
+-		return -EINVAL;
++	unsigned int count;
+ 
+ 	count = vb2_request_buffer_cnt(req);
+ 	if (!count)
+@@ -611,31 +590,6 @@ static int rkvdec_request_validate(struct media_request *req)
+ 	else if (count > 1)
+ 		return -EINVAL;
+ 
+-	hdl = v4l2_ctrl_request_hdl_find(req, &ctx->ctrl_hdl);
+-	if (!hdl)
+-		return -ENOENT;
+-
+-	ret = 0;
+-	ctrls = ctx->coded_fmt_desc->ctrls;
+-	for (i = 0; ctrls && i < ctrls->num_ctrls; i++) {
+-		u32 id = ctrls->ctrls[i].cfg.id;
+-		struct v4l2_ctrl *ctrl;
+-
+-		if (!ctrls->ctrls[i].mandatory)
+-			continue;
+-
+-		ctrl = v4l2_ctrl_request_hdl_ctrl_find(hdl, id);
+-		if (!ctrl) {
+-			ret = -ENOENT;
+-			break;
+-		}
+-	}
+-
+-	v4l2_ctrl_request_hdl_put(hdl);
+-
+-	if (ret)
+-		return ret;
+-
+ 	return vb2_request_validate(req);
+ }
+ 
+diff --git a/drivers/staging/media/rkvdec/rkvdec.h b/drivers/staging/media/rkvdec/rkvdec.h
+index 77a137cca88ea..52ac3874c5e54 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.h
++++ b/drivers/staging/media/rkvdec/rkvdec.h
+@@ -25,7 +25,6 @@
+ struct rkvdec_ctx;
+ 
+ struct rkvdec_ctrl_desc {
+-	u32 mandatory : 1;
+ 	struct v4l2_ctrl_config cfg;
+ };
+ 
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_regs.h b/drivers/staging/media/sunxi/cedrus/cedrus_regs.h
+index 7718c561823f6..92ace87c1c7d1 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_regs.h
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_regs.h
+@@ -443,16 +443,17 @@
+ #define VE_DEC_H265_STATUS_STCD_BUSY		BIT(21)
+ #define VE_DEC_H265_STATUS_WB_BUSY		BIT(20)
+ #define VE_DEC_H265_STATUS_BS_DMA_BUSY		BIT(19)
+-#define VE_DEC_H265_STATUS_IQIT_BUSY		BIT(18)
++#define VE_DEC_H265_STATUS_IT_BUSY		BIT(18)
+ #define VE_DEC_H265_STATUS_INTER_BUSY		BIT(17)
+ #define VE_DEC_H265_STATUS_MORE_DATA		BIT(16)
+-#define VE_DEC_H265_STATUS_VLD_BUSY		BIT(14)
+-#define VE_DEC_H265_STATUS_DEBLOCKING_BUSY	BIT(13)
+-#define VE_DEC_H265_STATUS_DEBLOCKING_DRAM_BUSY	BIT(12)
+-#define VE_DEC_H265_STATUS_INTRA_BUSY		BIT(11)
+-#define VE_DEC_H265_STATUS_SAO_BUSY		BIT(10)
+-#define VE_DEC_H265_STATUS_MVP_BUSY		BIT(9)
+-#define VE_DEC_H265_STATUS_SWDEC_BUSY		BIT(8)
++#define VE_DEC_H265_STATUS_DBLK_BUSY		BIT(15)
++#define VE_DEC_H265_STATUS_IREC_BUSY		BIT(14)
++#define VE_DEC_H265_STATUS_INTRA_BUSY		BIT(13)
++#define VE_DEC_H265_STATUS_MCRI_BUSY		BIT(12)
++#define VE_DEC_H265_STATUS_IQIT_BUSY		BIT(11)
++#define VE_DEC_H265_STATUS_MVP_BUSY		BIT(10)
++#define VE_DEC_H265_STATUS_IS_BUSY		BIT(9)
++#define VE_DEC_H265_STATUS_VLD_BUSY		BIT(8)
+ #define VE_DEC_H265_STATUS_OVER_TIME		BIT(3)
+ #define VE_DEC_H265_STATUS_VLD_DATA_REQ		BIT(2)
+ #define VE_DEC_H265_STATUS_ERROR		BIT(1)
+diff --git a/drivers/staging/qlge/qlge_main.c b/drivers/staging/qlge/qlge_main.c
+index 5516be3af8983..c1d52190e1bdd 100644
+--- a/drivers/staging/qlge/qlge_main.c
++++ b/drivers/staging/qlge/qlge_main.c
+@@ -4550,7 +4550,7 @@ static int qlge_probe(struct pci_dev *pdev,
+ 	struct net_device *ndev = NULL;
+ 	struct devlink *devlink;
+ 	static int cards_found;
+-	int err = 0;
++	int err;
+ 
+ 	devlink = devlink_alloc(&qlge_devlink_ops, sizeof(struct qlge_adapter));
+ 	if (!devlink)
+@@ -4561,8 +4561,10 @@ static int qlge_probe(struct pci_dev *pdev,
+ 	ndev = alloc_etherdev_mq(sizeof(struct qlge_netdev_priv),
+ 				 min(MAX_CPUS,
+ 				     netif_get_num_default_rss_queues()));
+-	if (!ndev)
++	if (!ndev) {
++		err = -ENOMEM;
+ 		goto devlink_free;
++	}
+ 
+ 	ndev_priv = netdev_priv(ndev);
+ 	ndev_priv->qdev = qdev;
+diff --git a/drivers/staging/rtl8192u/r8192U_core.c b/drivers/staging/rtl8192u/r8192U_core.c
+index 9fc4adc83d77d..b5a313649f445 100644
+--- a/drivers/staging/rtl8192u/r8192U_core.c
++++ b/drivers/staging/rtl8192u/r8192U_core.c
+@@ -3210,7 +3210,7 @@ static void rtl819x_update_rxcounts(struct r8192_priv *priv, u32 *TotalRxBcnNum,
+ 			     u32 *TotalRxDataNum)
+ {
+ 	u16			SlotIndex;
+-	u8			i;
++	u16			i;
+ 
+ 	*TotalRxBcnNum = 0;
+ 	*TotalRxDataNum = 0;
+diff --git a/drivers/tty/amiserial.c b/drivers/tty/amiserial.c
+index 18b78ea110ef4..ecda5e18d23f6 100644
+--- a/drivers/tty/amiserial.c
++++ b/drivers/tty/amiserial.c
+@@ -970,6 +970,7 @@ static int set_serial_info(struct tty_struct *tty, struct serial_struct *ss)
+ 	if (!serial_isroot()) {
+ 		if ((ss->baud_base != state->baud_base) ||
+ 		    (ss->close_delay != port->close_delay) ||
++		    (ss->closing_wait != port->closing_wait) ||
+ 		    (ss->xmit_fifo_size != state->xmit_fifo_size) ||
+ 		    ((ss->flags & ~ASYNC_USR_MASK) !=
+ 		     (port->flags & ~ASYNC_USR_MASK))) {
+diff --git a/drivers/tty/moxa.c b/drivers/tty/moxa.c
+index 9f13f7d49dd78..f9f14104bd2c0 100644
+--- a/drivers/tty/moxa.c
++++ b/drivers/tty/moxa.c
+@@ -2040,7 +2040,7 @@ static int moxa_get_serial_info(struct tty_struct *tty,
+ 	ss->line = info->port.tty->index,
+ 	ss->flags = info->port.flags,
+ 	ss->baud_base = 921600,
+-	ss->close_delay = info->port.close_delay;
++	ss->close_delay = jiffies_to_msecs(info->port.close_delay) / 10;
+ 	mutex_unlock(&info->port.mutex);
+ 	return 0;
+ }
+@@ -2050,6 +2050,7 @@ static int moxa_set_serial_info(struct tty_struct *tty,
+ 		struct serial_struct *ss)
+ {
+ 	struct moxa_port *info = tty->driver_data;
++	unsigned int close_delay;
+ 
+ 	if (tty->index == MAX_PORTS)
+ 		return -EINVAL;
+@@ -2061,19 +2062,24 @@ static int moxa_set_serial_info(struct tty_struct *tty,
+ 			ss->baud_base != 921600)
+ 		return -EPERM;
+ 
++	close_delay = msecs_to_jiffies(ss->close_delay * 10);
++
+ 	mutex_lock(&info->port.mutex);
+ 	if (!capable(CAP_SYS_ADMIN)) {
+-		if (((ss->flags & ~ASYNC_USR_MASK) !=
++		if (close_delay != info->port.close_delay ||
++		    ss->type != info->type ||
++		    ((ss->flags & ~ASYNC_USR_MASK) !=
+ 		     (info->port.flags & ~ASYNC_USR_MASK))) {
+ 			mutex_unlock(&info->port.mutex);
+ 			return -EPERM;
+ 		}
+-	}
+-	info->port.close_delay = ss->close_delay * HZ / 100;
++	} else {
++		info->port.close_delay = close_delay;
+ 
+-	MoxaSetFifo(info, ss->type == PORT_16550A);
++		MoxaSetFifo(info, ss->type == PORT_16550A);
+ 
+-	info->type = ss->type;
++		info->type = ss->type;
++	}
+ 	mutex_unlock(&info->port.mutex);
+ 	return 0;
+ }
+diff --git a/drivers/tty/mxser.c b/drivers/tty/mxser.c
+index 4203b64bccdb1..2d8e76263a25b 100644
+--- a/drivers/tty/mxser.c
++++ b/drivers/tty/mxser.c
+@@ -1208,19 +1208,26 @@ static int mxser_get_serial_info(struct tty_struct *tty,
+ {
+ 	struct mxser_port *info = tty->driver_data;
+ 	struct tty_port *port = &info->port;
++	unsigned int closing_wait, close_delay;
+ 
+ 	if (tty->index == MXSER_PORTS)
+ 		return -ENOTTY;
+ 
+ 	mutex_lock(&port->mutex);
++
++	close_delay = jiffies_to_msecs(info->port.close_delay) / 10;
++	closing_wait = info->port.closing_wait;
++	if (closing_wait != ASYNC_CLOSING_WAIT_NONE)
++		closing_wait = jiffies_to_msecs(closing_wait) / 10;
++
+ 	ss->type = info->type,
+ 	ss->line = tty->index,
+ 	ss->port = info->ioaddr,
+ 	ss->irq = info->board->irq,
+ 	ss->flags = info->port.flags,
+ 	ss->baud_base = info->baud_base,
+-	ss->close_delay = info->port.close_delay,
+-	ss->closing_wait = info->port.closing_wait,
++	ss->close_delay = close_delay;
++	ss->closing_wait = closing_wait;
+ 	ss->custom_divisor = info->custom_divisor,
+ 	mutex_unlock(&port->mutex);
+ 	return 0;
+@@ -1233,7 +1240,7 @@ static int mxser_set_serial_info(struct tty_struct *tty,
+ 	struct tty_port *port = &info->port;
+ 	speed_t baud;
+ 	unsigned long sl_flags;
+-	unsigned int flags;
++	unsigned int flags, close_delay, closing_wait;
+ 	int retval = 0;
+ 
+ 	if (tty->index == MXSER_PORTS)
+@@ -1255,9 +1262,15 @@ static int mxser_set_serial_info(struct tty_struct *tty,
+ 
+ 	flags = port->flags & ASYNC_SPD_MASK;
+ 
++	close_delay = msecs_to_jiffies(ss->close_delay * 10);
++	closing_wait = ss->closing_wait;
++	if (closing_wait != ASYNC_CLOSING_WAIT_NONE)
++		closing_wait = msecs_to_jiffies(closing_wait * 10);
++
+ 	if (!capable(CAP_SYS_ADMIN)) {
+ 		if ((ss->baud_base != info->baud_base) ||
+-				(ss->close_delay != info->port.close_delay) ||
++				(close_delay != info->port.close_delay) ||
++				(closing_wait != info->port.closing_wait) ||
+ 				((ss->flags & ~ASYNC_USR_MASK) != (info->port.flags & ~ASYNC_USR_MASK))) {
+ 			mutex_unlock(&port->mutex);
+ 			return -EPERM;
+@@ -1271,8 +1284,8 @@ static int mxser_set_serial_info(struct tty_struct *tty,
+ 		 */
+ 		port->flags = ((port->flags & ~ASYNC_FLAGS) |
+ 				(ss->flags & ASYNC_FLAGS));
+-		port->close_delay = ss->close_delay * HZ / 100;
+-		port->closing_wait = ss->closing_wait * HZ / 100;
++		port->close_delay = close_delay;
++		port->closing_wait = closing_wait;
+ 		if ((port->flags & ASYNC_SPD_MASK) == ASYNC_SPD_CUST &&
+ 				(ss->baud_base != info->baud_base ||
+ 				ss->custom_divisor !=
+@@ -1284,11 +1297,11 @@ static int mxser_set_serial_info(struct tty_struct *tty,
+ 			baud = ss->baud_base / ss->custom_divisor;
+ 			tty_encode_baud_rate(tty, baud, baud);
+ 		}
+-	}
+ 
+-	info->type = ss->type;
++		info->type = ss->type;
+ 
+-	process_txrx_fifo(info);
++		process_txrx_fifo(info);
++	}
+ 
+ 	if (tty_port_initialized(port)) {
+ 		if (flags != (port->flags & ASYNC_SPD_MASK)) {
+diff --git a/drivers/tty/serial/liteuart.c b/drivers/tty/serial/liteuart.c
+index 64842f3539e19..0b06770642cb3 100644
+--- a/drivers/tty/serial/liteuart.c
++++ b/drivers/tty/serial/liteuart.c
+@@ -270,8 +270,8 @@ static int liteuart_probe(struct platform_device *pdev)
+ 
+ 	/* get membase */
+ 	port->membase = devm_platform_get_and_ioremap_resource(pdev, 0, NULL);
+-	if (!port->membase)
+-		return -ENXIO;
++	if (IS_ERR(port->membase))
++		return PTR_ERR(port->membase);
+ 
+ 	/* values not from device tree */
+ 	port->dev = &pdev->dev;
+diff --git a/drivers/tty/serial/omap-serial.c b/drivers/tty/serial/omap-serial.c
+index 76b94d0ff5865..84e8158088cd2 100644
+--- a/drivers/tty/serial/omap-serial.c
++++ b/drivers/tty/serial/omap-serial.c
+@@ -159,6 +159,8 @@ struct uart_omap_port {
+ 	u32			calc_latency;
+ 	struct work_struct	qos_work;
+ 	bool			is_suspending;
++
++	unsigned int		rs485_tx_filter_count;
+ };
+ 
+ #define to_uart_omap_port(p) ((container_of((p), struct uart_omap_port, port)))
+@@ -302,7 +304,8 @@ static void serial_omap_stop_tx(struct uart_port *port)
+ 			serial_out(up, UART_OMAP_SCR, up->scr);
+ 			res = (port->rs485.flags & SER_RS485_RTS_AFTER_SEND) ?
+ 				1 : 0;
+-			if (gpiod_get_value(up->rts_gpiod) != res) {
++			if (up->rts_gpiod &&
++			    gpiod_get_value(up->rts_gpiod) != res) {
+ 				if (port->rs485.delay_rts_after_send > 0)
+ 					mdelay(
+ 					port->rs485.delay_rts_after_send);
+@@ -328,19 +331,6 @@ static void serial_omap_stop_tx(struct uart_port *port)
+ 		serial_out(up, UART_IER, up->ier);
+ 	}
+ 
+-	if ((port->rs485.flags & SER_RS485_ENABLED) &&
+-	    !(port->rs485.flags & SER_RS485_RX_DURING_TX)) {
+-		/*
+-		 * Empty the RX FIFO, we are not interested in anything
+-		 * received during the half-duplex transmission.
+-		 */
+-		serial_out(up, UART_FCR, up->fcr | UART_FCR_CLEAR_RCVR);
+-		/* Re-enable RX interrupts */
+-		up->ier |= UART_IER_RLSI | UART_IER_RDI;
+-		up->port.read_status_mask |= UART_LSR_DR;
+-		serial_out(up, UART_IER, up->ier);
+-	}
+-
+ 	pm_runtime_mark_last_busy(up->dev);
+ 	pm_runtime_put_autosuspend(up->dev);
+ }
+@@ -366,6 +356,10 @@ static void transmit_chars(struct uart_omap_port *up, unsigned int lsr)
+ 		serial_out(up, UART_TX, up->port.x_char);
+ 		up->port.icount.tx++;
+ 		up->port.x_char = 0;
++		if ((up->port.rs485.flags & SER_RS485_ENABLED) &&
++		    !(up->port.rs485.flags & SER_RS485_RX_DURING_TX))
++			up->rs485_tx_filter_count++;
++
+ 		return;
+ 	}
+ 	if (uart_circ_empty(xmit) || uart_tx_stopped(&up->port)) {
+@@ -377,6 +371,10 @@ static void transmit_chars(struct uart_omap_port *up, unsigned int lsr)
+ 		serial_out(up, UART_TX, xmit->buf[xmit->tail]);
+ 		xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+ 		up->port.icount.tx++;
++		if ((up->port.rs485.flags & SER_RS485_ENABLED) &&
++		    !(up->port.rs485.flags & SER_RS485_RX_DURING_TX))
++			up->rs485_tx_filter_count++;
++
+ 		if (uart_circ_empty(xmit))
+ 			break;
+ 	} while (--count > 0);
+@@ -411,7 +409,7 @@ static void serial_omap_start_tx(struct uart_port *port)
+ 
+ 		/* if rts not already enabled */
+ 		res = (port->rs485.flags & SER_RS485_RTS_ON_SEND) ? 1 : 0;
+-		if (gpiod_get_value(up->rts_gpiod) != res) {
++		if (up->rts_gpiod && gpiod_get_value(up->rts_gpiod) != res) {
+ 			gpiod_set_value(up->rts_gpiod, res);
+ 			if (port->rs485.delay_rts_before_send > 0)
+ 				mdelay(port->rs485.delay_rts_before_send);
+@@ -420,7 +418,7 @@ static void serial_omap_start_tx(struct uart_port *port)
+ 
+ 	if ((port->rs485.flags & SER_RS485_ENABLED) &&
+ 	    !(port->rs485.flags & SER_RS485_RX_DURING_TX))
+-		serial_omap_stop_rx(port);
++		up->rs485_tx_filter_count = 0;
+ 
+ 	serial_omap_enable_ier_thri(up);
+ 	pm_runtime_mark_last_busy(up->dev);
+@@ -491,8 +489,13 @@ static void serial_omap_rlsi(struct uart_omap_port *up, unsigned int lsr)
+ 	 * Read one data character out to avoid stalling the receiver according
+ 	 * to the table 23-246 of the omap4 TRM.
+ 	 */
+-	if (likely(lsr & UART_LSR_DR))
++	if (likely(lsr & UART_LSR_DR)) {
+ 		serial_in(up, UART_RX);
++		if ((up->port.rs485.flags & SER_RS485_ENABLED) &&
++		    !(up->port.rs485.flags & SER_RS485_RX_DURING_TX) &&
++		    up->rs485_tx_filter_count)
++			up->rs485_tx_filter_count--;
++	}
+ 
+ 	up->port.icount.rx++;
+ 	flag = TTY_NORMAL;
+@@ -543,6 +546,13 @@ static void serial_omap_rdi(struct uart_omap_port *up, unsigned int lsr)
+ 		return;
+ 
+ 	ch = serial_in(up, UART_RX);
++	if ((up->port.rs485.flags & SER_RS485_ENABLED) &&
++	    !(up->port.rs485.flags & SER_RS485_RX_DURING_TX) &&
++	    up->rs485_tx_filter_count) {
++		up->rs485_tx_filter_count--;
++		return;
++	}
++
+ 	flag = TTY_NORMAL;
+ 	up->port.icount.rx++;
+ 
+@@ -1407,18 +1417,13 @@ serial_omap_config_rs485(struct uart_port *port, struct serial_rs485 *rs485)
+ 	/* store new config */
+ 	port->rs485 = *rs485;
+ 
+-	/*
+-	 * Just as a precaution, only allow rs485
+-	 * to be enabled if the gpio pin is valid
+-	 */
+ 	if (up->rts_gpiod) {
+ 		/* enable / disable rts */
+ 		val = (port->rs485.flags & SER_RS485_ENABLED) ?
+ 			SER_RS485_RTS_AFTER_SEND : SER_RS485_RTS_ON_SEND;
+ 		val = (port->rs485.flags & val) ? 1 : 0;
+ 		gpiod_set_value(up->rts_gpiod, val);
+-	} else
+-		port->rs485.flags &= ~SER_RS485_ENABLED;
++	}
+ 
+ 	/* Enable interrupts */
+ 	up->ier = mode;
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index f86ec2d2635b7..9adb8362578c5 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -1196,7 +1196,7 @@ static int sc16is7xx_probe(struct device *dev,
+ 	ret = regmap_read(regmap,
+ 			  SC16IS7XX_LSR_REG << SC16IS7XX_REG_SHIFT, &val);
+ 	if (ret < 0)
+-		return ret;
++		return -EPROBE_DEFER;
+ 
+ 	/* Alloc port structure */
+ 	s = devm_kzalloc(dev, struct_size(s, p, devtype->nr_uart), GFP_KERNEL);
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index ba31e97d3d96b..43f02ed055d5e 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -1305,7 +1305,7 @@ static int uart_set_rs485_config(struct uart_port *port,
+ 	unsigned long flags;
+ 
+ 	if (!port->rs485_config)
+-		return -ENOIOCTLCMD;
++		return -ENOTTY;
+ 
+ 	if (copy_from_user(&rs485, rs485_user, sizeof(*rs485_user)))
+ 		return -EFAULT;
+@@ -1329,7 +1329,7 @@ static int uart_get_iso7816_config(struct uart_port *port,
+ 	struct serial_iso7816 aux;
+ 
+ 	if (!port->iso7816_config)
+-		return -ENOIOCTLCMD;
++		return -ENOTTY;
+ 
+ 	spin_lock_irqsave(&port->lock, flags);
+ 	aux = port->iso7816;
+@@ -1349,7 +1349,7 @@ static int uart_set_iso7816_config(struct uart_port *port,
+ 	unsigned long flags;
+ 
+ 	if (!port->iso7816_config)
+-		return -ENOIOCTLCMD;
++		return -ENOTTY;
+ 
+ 	if (copy_from_user(&iso7816, iso7816_user, sizeof(*iso7816_user)))
+ 		return -EFAULT;
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index b3675cf25a692..99dfa884cbefb 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -214,12 +214,14 @@ static void stm32_usart_receive_chars(struct uart_port *port, bool threaded)
+ 	struct tty_port *tport = &port->state->port;
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+ 	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+-	unsigned long c;
++	unsigned long c, flags;
+ 	u32 sr;
+ 	char flag;
+ 
+-	if (irqd_is_wakeup_set(irq_get_irq_data(port->irq)))
+-		pm_wakeup_event(tport->tty->dev, 0);
++	if (threaded)
++		spin_lock_irqsave(&port->lock, flags);
++	else
++		spin_lock(&port->lock);
+ 
+ 	while (stm32_usart_pending_rx(port, &sr, &stm32_port->last_res,
+ 				      threaded)) {
+@@ -276,9 +278,12 @@ static void stm32_usart_receive_chars(struct uart_port *port, bool threaded)
+ 		uart_insert_char(port, sr, USART_SR_ORE, c, flag);
+ 	}
+ 
+-	spin_unlock(&port->lock);
++	if (threaded)
++		spin_unlock_irqrestore(&port->lock, flags);
++	else
++		spin_unlock(&port->lock);
++
+ 	tty_flip_buffer_push(tport);
+-	spin_lock(&port->lock);
+ }
+ 
+ static void stm32_usart_tx_dma_complete(void *arg)
+@@ -286,12 +291,16 @@ static void stm32_usart_tx_dma_complete(void *arg)
+ 	struct uart_port *port = arg;
+ 	struct stm32_port *stm32port = to_stm32_port(port);
+ 	const struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
++	unsigned long flags;
+ 
++	dmaengine_terminate_async(stm32port->tx_ch);
+ 	stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
+ 	stm32port->tx_dma_busy = false;
+ 
+ 	/* Let's see if we have pending data to send */
++	spin_lock_irqsave(&port->lock, flags);
+ 	stm32_usart_transmit_chars(port);
++	spin_unlock_irqrestore(&port->lock, flags);
+ }
+ 
+ static void stm32_usart_tx_interrupt_enable(struct uart_port *port)
+@@ -455,29 +464,34 @@ static void stm32_usart_transmit_chars(struct uart_port *port)
+ static irqreturn_t stm32_usart_interrupt(int irq, void *ptr)
+ {
+ 	struct uart_port *port = ptr;
++	struct tty_port *tport = &port->state->port;
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+ 	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 	u32 sr;
+ 
+-	spin_lock(&port->lock);
+-
+ 	sr = readl_relaxed(port->membase + ofs->isr);
+ 
+ 	if ((sr & USART_SR_RTOF) && ofs->icr != UNDEF_REG)
+ 		writel_relaxed(USART_ICR_RTOCF,
+ 			       port->membase + ofs->icr);
+ 
+-	if ((sr & USART_SR_WUF) && ofs->icr != UNDEF_REG)
++	if ((sr & USART_SR_WUF) && ofs->icr != UNDEF_REG) {
++		/* Clear wake up flag and disable wake up interrupt */
+ 		writel_relaxed(USART_ICR_WUCF,
+ 			       port->membase + ofs->icr);
++		stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_WUFIE);
++		if (irqd_is_wakeup_set(irq_get_irq_data(port->irq)))
++			pm_wakeup_event(tport->tty->dev, 0);
++	}
+ 
+ 	if ((sr & USART_SR_RXNE) && !(stm32_port->rx_ch))
+ 		stm32_usart_receive_chars(port, false);
+ 
+-	if ((sr & USART_SR_TXE) && !(stm32_port->tx_ch))
++	if ((sr & USART_SR_TXE) && !(stm32_port->tx_ch)) {
++		spin_lock(&port->lock);
+ 		stm32_usart_transmit_chars(port);
+-
+-	spin_unlock(&port->lock);
++		spin_unlock(&port->lock);
++	}
+ 
+ 	if (stm32_port->rx_ch)
+ 		return IRQ_WAKE_THREAD;
+@@ -490,13 +504,9 @@ static irqreturn_t stm32_usart_threaded_interrupt(int irq, void *ptr)
+ 	struct uart_port *port = ptr;
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+ 
+-	spin_lock(&port->lock);
+-
+ 	if (stm32_port->rx_ch)
+ 		stm32_usart_receive_chars(port, true);
+ 
+-	spin_unlock(&port->lock);
+-
+ 	return IRQ_HANDLED;
+ }
+ 
+@@ -505,7 +515,10 @@ static unsigned int stm32_usart_tx_empty(struct uart_port *port)
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+ 	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ 
+-	return readl_relaxed(port->membase + ofs->isr) & USART_SR_TXE;
++	if (readl_relaxed(port->membase + ofs->isr) & USART_SR_TC)
++		return TIOCSER_TEMT;
++
++	return 0;
+ }
+ 
+ static void stm32_usart_set_mctrl(struct uart_port *port, unsigned int mctrl)
+@@ -634,6 +647,7 @@ static int stm32_usart_startup(struct uart_port *port)
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+ 	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
++	const struct stm32_usart_config *cfg = &stm32_port->info->cfg;
+ 	const char *name = to_platform_device(port->dev)->name;
+ 	u32 val;
+ 	int ret;
+@@ -646,21 +660,10 @@ static int stm32_usart_startup(struct uart_port *port)
+ 
+ 	/* RX FIFO Flush */
+ 	if (ofs->rqr != UNDEF_REG)
+-		stm32_usart_set_bits(port, ofs->rqr, USART_RQR_RXFRQ);
+-
+-	/* Tx and RX FIFO configuration */
+-	if (stm32_port->fifoen) {
+-		val = readl_relaxed(port->membase + ofs->cr3);
+-		val &= ~(USART_CR3_TXFTCFG_MASK | USART_CR3_RXFTCFG_MASK);
+-		val |= USART_CR3_TXFTCFG_HALF << USART_CR3_TXFTCFG_SHIFT;
+-		val |= USART_CR3_RXFTCFG_HALF << USART_CR3_RXFTCFG_SHIFT;
+-		writel_relaxed(val, port->membase + ofs->cr3);
+-	}
++		writel_relaxed(USART_RQR_RXFRQ, port->membase + ofs->rqr);
+ 
+-	/* RX FIFO enabling */
+-	val = stm32_port->cr1_irq | USART_CR1_RE;
+-	if (stm32_port->fifoen)
+-		val |= USART_CR1_FIFOEN;
++	/* RX enabling */
++	val = stm32_port->cr1_irq | USART_CR1_RE | BIT(cfg->uart_enable_bit);
+ 	stm32_usart_set_bits(port, ofs->cr1, val);
+ 
+ 	return 0;
+@@ -691,6 +694,11 @@ static void stm32_usart_shutdown(struct uart_port *port)
+ 	if (ret)
+ 		dev_err(port->dev, "Transmission is not complete\n");
+ 
++	/* flush RX & TX FIFO */
++	if (ofs->rqr != UNDEF_REG)
++		writel_relaxed(USART_RQR_TXFRQ | USART_RQR_RXFRQ,
++			       port->membase + ofs->rqr);
++
+ 	stm32_usart_clr_bits(port, ofs->cr1, val);
+ 
+ 	free_irq(port->irq, port);
+@@ -737,8 +745,9 @@ static void stm32_usart_set_termios(struct uart_port *port,
+ 	unsigned int baud, bits;
+ 	u32 usartdiv, mantissa, fraction, oversampling;
+ 	tcflag_t cflag = termios->c_cflag;
+-	u32 cr1, cr2, cr3;
++	u32 cr1, cr2, cr3, isr;
+ 	unsigned long flags;
++	int ret;
+ 
+ 	if (!stm32_port->hw_flow_control)
+ 		cflag &= ~CRTSCTS;
+@@ -747,21 +756,36 @@ static void stm32_usart_set_termios(struct uart_port *port,
+ 
+ 	spin_lock_irqsave(&port->lock, flags);
+ 
++	ret = readl_relaxed_poll_timeout_atomic(port->membase + ofs->isr,
++						isr,
++						(isr & USART_SR_TC),
++						10, 100000);
++
++	/* Send the TC error message only when ISR_TC is not set. */
++	if (ret)
++		dev_err(port->dev, "Transmission is not complete\n");
++
+ 	/* Stop serial port and reset value */
+ 	writel_relaxed(0, port->membase + ofs->cr1);
+ 
+ 	/* flush RX & TX FIFO */
+ 	if (ofs->rqr != UNDEF_REG)
+-		stm32_usart_set_bits(port, ofs->rqr,
+-				     USART_RQR_TXFRQ | USART_RQR_RXFRQ);
++		writel_relaxed(USART_RQR_TXFRQ | USART_RQR_RXFRQ,
++			       port->membase + ofs->rqr);
+ 
+ 	cr1 = USART_CR1_TE | USART_CR1_RE;
+ 	if (stm32_port->fifoen)
+ 		cr1 |= USART_CR1_FIFOEN;
+ 	cr2 = 0;
++
++	/* Tx and RX FIFO configuration */
+ 	cr3 = readl_relaxed(port->membase + ofs->cr3);
+-	cr3 &= USART_CR3_TXFTIE | USART_CR3_RXFTCFG_MASK | USART_CR3_RXFTIE
+-		| USART_CR3_TXFTCFG_MASK;
++	cr3 &= USART_CR3_TXFTIE | USART_CR3_RXFTIE;
++	if (stm32_port->fifoen) {
++		cr3 &= ~(USART_CR3_TXFTCFG_MASK | USART_CR3_RXFTCFG_MASK);
++		cr3 |= USART_CR3_TXFTCFG_HALF << USART_CR3_TXFTCFG_SHIFT;
++		cr3 |= USART_CR3_RXFTCFG_HALF << USART_CR3_RXFTCFG_SHIFT;
++	}
+ 
+ 	if (cflag & CSTOPB)
+ 		cr2 |= USART_CR2_STOP_2B;
+@@ -817,12 +841,6 @@ static void stm32_usart_set_termios(struct uart_port *port,
+ 		cr3 |= USART_CR3_CTSE | USART_CR3_RTSE;
+ 	}
+ 
+-	/* Handle modem control interrupts */
+-	if (UART_ENABLE_MS(port, termios->c_cflag))
+-		stm32_usart_enable_ms(port);
+-	else
+-		stm32_usart_disable_ms(port);
+-
+ 	usartdiv = DIV_ROUND_CLOSEST(port->uartclk, baud);
+ 
+ 	/*
+@@ -892,12 +910,24 @@ static void stm32_usart_set_termios(struct uart_port *port,
+ 		cr1 &= ~(USART_CR1_DEDT_MASK | USART_CR1_DEAT_MASK);
+ 	}
+ 
++	/* Configure wake up from low power on start bit detection */
++	if (stm32_port->wakeirq > 0) {
++		cr3 &= ~USART_CR3_WUS_MASK;
++		cr3 |= USART_CR3_WUS_START_BIT;
++	}
++
+ 	writel_relaxed(cr3, port->membase + ofs->cr3);
+ 	writel_relaxed(cr2, port->membase + ofs->cr2);
+ 	writel_relaxed(cr1, port->membase + ofs->cr1);
+ 
+ 	stm32_usart_set_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
+ 	spin_unlock_irqrestore(&port->lock, flags);
++
++	/* Handle modem control interrupts */
++	if (UART_ENABLE_MS(port, termios->c_cflag))
++		stm32_usart_enable_ms(port);
++	else
++		stm32_usart_disable_ms(port);
+ }
+ 
+ static const char *stm32_usart_type(struct uart_port *port)
+@@ -1252,10 +1282,6 @@ static int stm32_usart_serial_probe(struct platform_device *pdev)
+ 		device_set_wakeup_enable(&pdev->dev, false);
+ 	}
+ 
+-	ret = uart_add_one_port(&stm32_usart_driver, &stm32port->port);
+-	if (ret)
+-		goto err_wirq;
+-
+ 	ret = stm32_usart_of_dma_rx_probe(stm32port, pdev);
+ 	if (ret)
+ 		dev_info(&pdev->dev, "interrupt mode used for rx (no dma)\n");
+@@ -1269,11 +1295,40 @@ static int stm32_usart_serial_probe(struct platform_device *pdev)
+ 	pm_runtime_get_noresume(&pdev->dev);
+ 	pm_runtime_set_active(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
++
++	ret = uart_add_one_port(&stm32_usart_driver, &stm32port->port);
++	if (ret)
++		goto err_port;
++
+ 	pm_runtime_put_sync(&pdev->dev);
+ 
+ 	return 0;
+ 
+-err_wirq:
++err_port:
++	pm_runtime_disable(&pdev->dev);
++	pm_runtime_set_suspended(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
++
++	if (stm32port->rx_ch) {
++		dmaengine_terminate_async(stm32port->rx_ch);
++		dma_release_channel(stm32port->rx_ch);
++	}
++
++	if (stm32port->rx_dma_buf)
++		dma_free_coherent(&pdev->dev,
++				  RX_BUF_L, stm32port->rx_buf,
++				  stm32port->rx_dma_buf);
++
++	if (stm32port->tx_ch) {
++		dmaengine_terminate_async(stm32port->tx_ch);
++		dma_release_channel(stm32port->tx_ch);
++	}
++
++	if (stm32port->tx_dma_buf)
++		dma_free_coherent(&pdev->dev,
++				  TX_BUF_L, stm32port->tx_buf,
++				  stm32port->tx_dma_buf);
++
+ 	if (stm32port->wakeirq > 0)
+ 		dev_pm_clear_wake_irq(&pdev->dev);
+ 
+@@ -1295,11 +1350,20 @@ static int stm32_usart_serial_remove(struct platform_device *pdev)
+ 	int err;
+ 
+ 	pm_runtime_get_sync(&pdev->dev);
++	err = uart_remove_one_port(&stm32_usart_driver, port);
++	if (err)
++		return err;
++
++	pm_runtime_disable(&pdev->dev);
++	pm_runtime_set_suspended(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
+ 
+ 	stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAR);
+ 
+-	if (stm32_port->rx_ch)
++	if (stm32_port->rx_ch) {
++		dmaengine_terminate_async(stm32_port->rx_ch);
+ 		dma_release_channel(stm32_port->rx_ch);
++	}
+ 
+ 	if (stm32_port->rx_dma_buf)
+ 		dma_free_coherent(&pdev->dev,
+@@ -1308,8 +1372,10 @@ static int stm32_usart_serial_remove(struct platform_device *pdev)
+ 
+ 	stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_DMAT);
+ 
+-	if (stm32_port->tx_ch)
++	if (stm32_port->tx_ch) {
++		dmaengine_terminate_async(stm32_port->tx_ch);
+ 		dma_release_channel(stm32_port->tx_ch);
++	}
+ 
+ 	if (stm32_port->tx_dma_buf)
+ 		dma_free_coherent(&pdev->dev,
+@@ -1323,12 +1389,7 @@ static int stm32_usart_serial_remove(struct platform_device *pdev)
+ 
+ 	stm32_usart_deinit_port(stm32_port);
+ 
+-	err = uart_remove_one_port(&stm32_usart_driver, port);
+-
+-	pm_runtime_disable(&pdev->dev);
+-	pm_runtime_put_noidle(&pdev->dev);
+-
+-	return err;
++	return 0;
+ }
+ 
+ #ifdef CONFIG_SERIAL_STM32_CONSOLE
+@@ -1436,23 +1497,20 @@ static void __maybe_unused stm32_usart_serial_en_wakeup(struct uart_port *port,
+ {
+ 	struct stm32_port *stm32_port = to_stm32_port(port);
+ 	const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+-	const struct stm32_usart_config *cfg = &stm32_port->info->cfg;
+-	u32 val;
+ 
+ 	if (stm32_port->wakeirq <= 0)
+ 		return;
+ 
++	/*
++	 * Enable low-power wake-up and wake-up irq if argument is set to
++	 * "enable", disable low-power wake-up and wake-up irq otherwise
++	 */
+ 	if (enable) {
+-		stm32_usart_clr_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
+ 		stm32_usart_set_bits(port, ofs->cr1, USART_CR1_UESM);
+-		val = readl_relaxed(port->membase + ofs->cr3);
+-		val &= ~USART_CR3_WUS_MASK;
+-		/* Enable Wake up interrupt from low power on start bit */
+-		val |= USART_CR3_WUS_START_BIT | USART_CR3_WUFIE;
+-		writel_relaxed(val, port->membase + ofs->cr3);
+-		stm32_usart_set_bits(port, ofs->cr1, BIT(cfg->uart_enable_bit));
++		stm32_usart_set_bits(port, ofs->cr3, USART_CR3_WUFIE);
+ 	} else {
+ 		stm32_usart_clr_bits(port, ofs->cr1, USART_CR1_UESM);
++		stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_WUFIE);
+ 	}
+ }
+ 
+diff --git a/drivers/tty/serial/stm32-usart.h b/drivers/tty/serial/stm32-usart.h
+index cb4f327c46db9..94b568aa46bbd 100644
+--- a/drivers/tty/serial/stm32-usart.h
++++ b/drivers/tty/serial/stm32-usart.h
+@@ -127,9 +127,6 @@ struct stm32_usart_info stm32h7_info = {
+ /* Dummy bits */
+ #define USART_SR_DUMMY_RX	BIT(16)
+ 
+-/* USART_ICR (F7) */
+-#define USART_CR_TC		BIT(6)
+-
+ /* USART_DR */
+ #define USART_DR_MASK		GENMASK(8, 0)
+ 
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 391bada4cedb6..adbcbfa11b290 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -2530,14 +2530,14 @@ out:
+  *	@p: pointer to result
+  *
+  *	Obtain the modem status bits from the tty driver if the feature
+- *	is supported. Return -EINVAL if it is not available.
++ *	is supported. Return -ENOTTY if it is not available.
+  *
+  *	Locking: none (up to the driver)
+  */
+ 
+ static int tty_tiocmget(struct tty_struct *tty, int __user *p)
+ {
+-	int retval = -EINVAL;
++	int retval = -ENOTTY;
+ 
+ 	if (tty->ops->tiocmget) {
+ 		retval = tty->ops->tiocmget(tty);
+@@ -2555,7 +2555,7 @@ static int tty_tiocmget(struct tty_struct *tty, int __user *p)
+  *	@p: pointer to desired bits
+  *
+  *	Set the modem status bits from the tty driver if the feature
+- *	is supported. Return -EINVAL if it is not available.
++ *	is supported. Return -ENOTTY if it is not available.
+  *
+  *	Locking: none (up to the driver)
+  */
+@@ -2567,7 +2567,7 @@ static int tty_tiocmset(struct tty_struct *tty, unsigned int cmd,
+ 	unsigned int set, clear, val;
+ 
+ 	if (tty->ops->tiocmset == NULL)
+-		return -EINVAL;
++		return -ENOTTY;
+ 
+ 	retval = get_user(val, p);
+ 	if (retval)
+@@ -2607,7 +2607,7 @@ int tty_get_icount(struct tty_struct *tty,
+ 	if (tty->ops->get_icount)
+ 		return tty->ops->get_icount(tty, icount);
+ 	else
+-		return -EINVAL;
++		return -ENOTTY;
+ }
+ EXPORT_SYMBOL_GPL(tty_get_icount);
+ 
+diff --git a/drivers/tty/tty_ioctl.c b/drivers/tty/tty_ioctl.c
+index 4de1c6ddb8ffb..803da2d111c8c 100644
+--- a/drivers/tty/tty_ioctl.c
++++ b/drivers/tty/tty_ioctl.c
+@@ -774,8 +774,8 @@ int tty_mode_ioctl(struct tty_struct *tty, struct file *file,
+ 	case TCSETX:
+ 	case TCSETXW:
+ 	case TCSETXF:
+-		return -EINVAL;
+-#endif		
++		return -ENOTTY;
++#endif
+ 	case TIOCGSOFTCAR:
+ 		copy_termios(real_tty, &kterm);
+ 		ret = put_user((kterm.c_cflag & CLOCAL) ? 1 : 0,
+diff --git a/drivers/usb/cdns3/cdnsp-gadget.c b/drivers/usb/cdns3/cdnsp-gadget.c
+index d7d4bdd57f46f..56707b6b0f57c 100644
+--- a/drivers/usb/cdns3/cdnsp-gadget.c
++++ b/drivers/usb/cdns3/cdnsp-gadget.c
+@@ -727,7 +727,7 @@ int cdnsp_reset_device(struct cdnsp_device *pdev)
+ 	 * are in Disabled state.
+ 	 */
+ 	for (i = 1; i < CDNSP_ENDPOINTS_NUM; ++i)
+-		pdev->eps[i].ep_state |= EP_STOPPED;
++		pdev->eps[i].ep_state |= EP_STOPPED | EP_UNCONFIGURED;
+ 
+ 	trace_cdnsp_handle_cmd_reset_dev(slot_ctx);
+ 
+@@ -942,6 +942,7 @@ static int cdnsp_gadget_ep_enable(struct usb_ep *ep,
+ 
+ 	pep = to_cdnsp_ep(ep);
+ 	pdev = pep->pdev;
++	pep->ep_state &= ~EP_UNCONFIGURED;
+ 
+ 	if (dev_WARN_ONCE(pdev->dev, pep->ep_state & EP_ENABLED,
+ 			  "%s is already enabled\n", pep->name))
+@@ -1023,9 +1024,13 @@ static int cdnsp_gadget_ep_disable(struct usb_ep *ep)
+ 		goto finish;
+ 	}
+ 
+-	cdnsp_cmd_stop_ep(pdev, pep);
+ 	pep->ep_state |= EP_DIS_IN_RROGRESS;
+-	cdnsp_cmd_flush_ep(pdev, pep);
++
++	/* Endpoint was unconfigured by Reset Device command. */
++	if (!(pep->ep_state & EP_UNCONFIGURED)) {
++		cdnsp_cmd_stop_ep(pdev, pep);
++		cdnsp_cmd_flush_ep(pdev, pep);
++	}
+ 
+ 	/* Remove all queued USB requests. */
+ 	while (!list_empty(&pep->pending_list)) {
+@@ -1043,10 +1048,12 @@ static int cdnsp_gadget_ep_disable(struct usb_ep *ep)
+ 
+ 	cdnsp_endpoint_zero(pdev, pep);
+ 
+-	ret = cdnsp_update_eps_configuration(pdev, pep);
++	if (!(pep->ep_state & EP_UNCONFIGURED))
++		ret = cdnsp_update_eps_configuration(pdev, pep);
++
+ 	cdnsp_free_endpoint_rings(pdev, pep);
+ 
+-	pep->ep_state &= ~EP_ENABLED;
++	pep->ep_state &= ~(EP_ENABLED | EP_UNCONFIGURED);
+ 	pep->ep_state |= EP_STOPPED;
+ 
+ finish:
+diff --git a/drivers/usb/cdns3/cdnsp-gadget.h b/drivers/usb/cdns3/cdnsp-gadget.h
+index 6bbb26548c049..783ca8ffde007 100644
+--- a/drivers/usb/cdns3/cdnsp-gadget.h
++++ b/drivers/usb/cdns3/cdnsp-gadget.h
+@@ -835,6 +835,7 @@ struct cdnsp_ep {
+ #define EP_WEDGE		BIT(4)
+ #define EP0_HALTED_STATUS	BIT(5)
+ #define EP_HAS_STREAMS		BIT(6)
++#define EP_UNCONFIGURED		BIT(7)
+ 
+ 	bool skip;
+ };
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index f5886c512fec1..c103961c3fae9 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -929,8 +929,7 @@ static int get_serial_info(struct tty_struct *tty, struct serial_struct *ss)
+ {
+ 	struct acm *acm = tty->driver_data;
+ 
+-	ss->xmit_fifo_size = acm->writesize;
+-	ss->baud_base = le32_to_cpu(acm->line.dwDTERate);
++	ss->line = acm->minor;
+ 	ss->close_delay	= jiffies_to_msecs(acm->port.close_delay) / 10;
+ 	ss->closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
+ 				ASYNC_CLOSING_WAIT_NONE :
+@@ -942,7 +941,6 @@ static int set_serial_info(struct tty_struct *tty, struct serial_struct *ss)
+ {
+ 	struct acm *acm = tty->driver_data;
+ 	unsigned int closing_wait, close_delay;
+-	unsigned int old_closing_wait, old_close_delay;
+ 	int retval = 0;
+ 
+ 	close_delay = msecs_to_jiffies(ss->close_delay * 10);
+@@ -950,20 +948,12 @@ static int set_serial_info(struct tty_struct *tty, struct serial_struct *ss)
+ 			ASYNC_CLOSING_WAIT_NONE :
+ 			msecs_to_jiffies(ss->closing_wait * 10);
+ 
+-	/* we must redo the rounding here, so that the values match */
+-	old_close_delay	= jiffies_to_msecs(acm->port.close_delay) / 10;
+-	old_closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
+-				ASYNC_CLOSING_WAIT_NONE :
+-				jiffies_to_msecs(acm->port.closing_wait) / 10;
+-
+ 	mutex_lock(&acm->port.mutex);
+ 
+ 	if (!capable(CAP_SYS_ADMIN)) {
+-		if ((ss->close_delay != old_close_delay) ||
+-		    (ss->closing_wait != old_closing_wait))
++		if ((close_delay != acm->port.close_delay) ||
++		    (closing_wait != acm->port.closing_wait))
+ 			retval = -EPERM;
+-		else
+-			retval = -EOPNOTSUPP;
+ 	} else {
+ 		acm->port.close_delay  = close_delay;
+ 		acm->port.closing_wait = closing_wait;
+diff --git a/drivers/usb/dwc2/core_intr.c b/drivers/usb/dwc2/core_intr.c
+index 800c8b6c55ff1..510fd0572feb1 100644
+--- a/drivers/usb/dwc2/core_intr.c
++++ b/drivers/usb/dwc2/core_intr.c
+@@ -660,6 +660,71 @@ static u32 dwc2_read_common_intr(struct dwc2_hsotg *hsotg)
+ 		return 0;
+ }
+ 
++/**
++ * dwc_handle_gpwrdn_disc_det() - Handles the gpwrdn disconnect detect.
++ * Exits hibernation without restoring registers.
++ *
++ * @hsotg: Programming view of DWC_otg controller
++ * @gpwrdn: GPWRDN register
++ */
++static inline void dwc_handle_gpwrdn_disc_det(struct dwc2_hsotg *hsotg,
++					      u32 gpwrdn)
++{
++	u32 gpwrdn_tmp;
++
++	/* Switch-on voltage to the core */
++	gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
++	gpwrdn_tmp &= ~GPWRDN_PWRDNSWTCH;
++	dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
++	udelay(5);
++
++	/* Reset core */
++	gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
++	gpwrdn_tmp &= ~GPWRDN_PWRDNRSTN;
++	dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
++	udelay(5);
++
++	/* Disable Power Down Clamp */
++	gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
++	gpwrdn_tmp &= ~GPWRDN_PWRDNCLMP;
++	dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
++	udelay(5);
++
++	/* Deassert reset core */
++	gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
++	gpwrdn_tmp |= GPWRDN_PWRDNRSTN;
++	dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
++	udelay(5);
++
++	/* Disable PMU interrupt */
++	gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
++	gpwrdn_tmp &= ~GPWRDN_PMUINTSEL;
++	dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
++
++	/* De-assert Wakeup Logic */
++	gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
++	gpwrdn_tmp &= ~GPWRDN_PMUACTV;
++	dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
++
++	hsotg->hibernated = 0;
++	hsotg->bus_suspended = 0;
++
++	if (gpwrdn & GPWRDN_IDSTS) {
++		hsotg->op_state = OTG_STATE_B_PERIPHERAL;
++		dwc2_core_init(hsotg, false);
++		dwc2_enable_global_interrupts(hsotg);
++		dwc2_hsotg_core_init_disconnected(hsotg, false);
++		dwc2_hsotg_core_connect(hsotg);
++	} else {
++		hsotg->op_state = OTG_STATE_A_HOST;
++
++		/* Initialize the Core for Host mode */
++		dwc2_core_init(hsotg, false);
++		dwc2_enable_global_interrupts(hsotg);
++		dwc2_hcd_start(hsotg);
++	}
++}
++
+ /*
+  * GPWRDN interrupt handler.
+  *
+@@ -681,64 +746,14 @@ static void dwc2_handle_gpwrdn_intr(struct dwc2_hsotg *hsotg)
+ 
+ 	if ((gpwrdn & GPWRDN_DISCONN_DET) &&
+ 	    (gpwrdn & GPWRDN_DISCONN_DET_MSK) && !linestate) {
+-		u32 gpwrdn_tmp;
+-
+ 		dev_dbg(hsotg->dev, "%s: GPWRDN_DISCONN_DET\n", __func__);
+-
+-		/* Switch-on voltage to the core */
+-		gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+-		gpwrdn_tmp &= ~GPWRDN_PWRDNSWTCH;
+-		dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+-		udelay(10);
+-
+-		/* Reset core */
+-		gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+-		gpwrdn_tmp &= ~GPWRDN_PWRDNRSTN;
+-		dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+-		udelay(10);
+-
+-		/* Disable Power Down Clamp */
+-		gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+-		gpwrdn_tmp &= ~GPWRDN_PWRDNCLMP;
+-		dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+-		udelay(10);
+-
+-		/* Deassert reset core */
+-		gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+-		gpwrdn_tmp |= GPWRDN_PWRDNRSTN;
+-		dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+-		udelay(10);
+-
+-		/* Disable PMU interrupt */
+-		gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+-		gpwrdn_tmp &= ~GPWRDN_PMUINTSEL;
+-		dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+-
+-		/* De-assert Wakeup Logic */
+-		gpwrdn_tmp = dwc2_readl(hsotg, GPWRDN);
+-		gpwrdn_tmp &= ~GPWRDN_PMUACTV;
+-		dwc2_writel(hsotg, gpwrdn_tmp, GPWRDN);
+-
+-		hsotg->hibernated = 0;
+-
+-		if (gpwrdn & GPWRDN_IDSTS) {
+-			hsotg->op_state = OTG_STATE_B_PERIPHERAL;
+-			dwc2_core_init(hsotg, false);
+-			dwc2_enable_global_interrupts(hsotg);
+-			dwc2_hsotg_core_init_disconnected(hsotg, false);
+-			dwc2_hsotg_core_connect(hsotg);
+-		} else {
+-			hsotg->op_state = OTG_STATE_A_HOST;
+-
+-			/* Initialize the Core for Host mode */
+-			dwc2_core_init(hsotg, false);
+-			dwc2_enable_global_interrupts(hsotg);
+-			dwc2_hcd_start(hsotg);
+-		}
+-	}
+-
+-	if ((gpwrdn & GPWRDN_LNSTSCHG) &&
+-	    (gpwrdn & GPWRDN_LNSTSCHG_MSK) && linestate) {
++		/*
++		 * Call disconnect detect function to exit from
++		 * hibernation
++		 */
++		dwc_handle_gpwrdn_disc_det(hsotg, gpwrdn);
++	} else if ((gpwrdn & GPWRDN_LNSTSCHG) &&
++		   (gpwrdn & GPWRDN_LNSTSCHG_MSK) && linestate) {
+ 		dev_dbg(hsotg->dev, "%s: GPWRDN_LNSTSCHG\n", __func__);
+ 		if (hsotg->hw_params.hibernation &&
+ 		    hsotg->hibernated) {
+@@ -749,24 +764,21 @@ static void dwc2_handle_gpwrdn_intr(struct dwc2_hsotg *hsotg)
+ 				dwc2_exit_hibernation(hsotg, 1, 0, 1);
+ 			}
+ 		}
+-	}
+-	if ((gpwrdn & GPWRDN_RST_DET) && (gpwrdn & GPWRDN_RST_DET_MSK)) {
++	} else if ((gpwrdn & GPWRDN_RST_DET) &&
++		   (gpwrdn & GPWRDN_RST_DET_MSK)) {
+ 		dev_dbg(hsotg->dev, "%s: GPWRDN_RST_DET\n", __func__);
+ 		if (!linestate && (gpwrdn & GPWRDN_BSESSVLD))
+ 			dwc2_exit_hibernation(hsotg, 0, 1, 0);
+-	}
+-	if ((gpwrdn & GPWRDN_STS_CHGINT) &&
+-	    (gpwrdn & GPWRDN_STS_CHGINT_MSK) && linestate) {
++	} else if ((gpwrdn & GPWRDN_STS_CHGINT) &&
++		   (gpwrdn & GPWRDN_STS_CHGINT_MSK)) {
+ 		dev_dbg(hsotg->dev, "%s: GPWRDN_STS_CHGINT\n", __func__);
+-		if (hsotg->hw_params.hibernation &&
+-		    hsotg->hibernated) {
+-			if (gpwrdn & GPWRDN_IDSTS) {
+-				dwc2_exit_hibernation(hsotg, 0, 0, 0);
+-				call_gadget(hsotg, resume);
+-			} else {
+-				dwc2_exit_hibernation(hsotg, 1, 0, 1);
+-			}
+-		}
++		/*
++		 * The GPWRDN_STS_CHGINT exit-from-hibernation flow is
++		 * the same as the GPWRDN_DISCONN_DET flow, so call the
++		 * disconnect detect helper function to exit from
++		 * hibernation.
++		 */
++		dwc_handle_gpwrdn_disc_det(hsotg, gpwrdn);
+ 	}
+ }
+ 
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index 1a9789ec5847f..6af1dcbc36564 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -5580,7 +5580,15 @@ int dwc2_host_exit_hibernation(struct dwc2_hsotg *hsotg, int rem_wakeup,
+ 		return ret;
+ 	}
+ 
+-	dwc2_hcd_rem_wakeup(hsotg);
++	if (rem_wakeup) {
++		dwc2_hcd_rem_wakeup(hsotg);
++		/*
++		 * Set the "port_connect_status_change" flag to force
++		 * re-enumeration, since the port connection status is not
++		 * detected after exiting hibernation.
++		 */
++		hsotg->flags.b.port_connect_status_change = 1;
++	}
+ 
+ 	hsotg->hibernated = 0;
+ 	hsotg->bus_suspended = 0;
+diff --git a/drivers/usb/gadget/udc/aspeed-vhub/core.c b/drivers/usb/gadget/udc/aspeed-vhub/core.c
+index be7bb64e3594d..d11d3d14313f9 100644
+--- a/drivers/usb/gadget/udc/aspeed-vhub/core.c
++++ b/drivers/usb/gadget/udc/aspeed-vhub/core.c
+@@ -36,6 +36,7 @@ void ast_vhub_done(struct ast_vhub_ep *ep, struct ast_vhub_req *req,
+ 		   int status)
+ {
+ 	bool internal = req->internal;
++	struct ast_vhub *vhub = ep->vhub;
+ 
+ 	EPVDBG(ep, "completing request @%p, status %d\n", req, status);
+ 
+@@ -46,7 +47,7 @@ void ast_vhub_done(struct ast_vhub_ep *ep, struct ast_vhub_req *req,
+ 
+ 	if (req->req.dma) {
+ 		if (!WARN_ON(!ep->dev))
+-			usb_gadget_unmap_request(&ep->dev->gadget,
++			usb_gadget_unmap_request_by_dev(&vhub->pdev->dev,
+ 						 &req->req, ep->epn.is_in);
+ 		req->req.dma = 0;
+ 	}
+diff --git a/drivers/usb/gadget/udc/aspeed-vhub/epn.c b/drivers/usb/gadget/udc/aspeed-vhub/epn.c
+index 02d8bfae58fb1..cb164c615e6fc 100644
+--- a/drivers/usb/gadget/udc/aspeed-vhub/epn.c
++++ b/drivers/usb/gadget/udc/aspeed-vhub/epn.c
+@@ -376,7 +376,7 @@ static int ast_vhub_epn_queue(struct usb_ep* u_ep, struct usb_request *u_req,
+ 	if (ep->epn.desc_mode ||
+ 	    ((((unsigned long)u_req->buf & 7) == 0) &&
+ 	     (ep->epn.is_in || !(u_req->length & (u_ep->maxpacket - 1))))) {
+-		rc = usb_gadget_map_request(&ep->dev->gadget, u_req,
++		rc = usb_gadget_map_request_by_dev(&vhub->pdev->dev, u_req,
+ 					    ep->epn.is_in);
+ 		if (rc) {
+ 			dev_warn(&vhub->pdev->dev,
+diff --git a/drivers/usb/gadget/udc/fotg210-udc.c b/drivers/usb/gadget/udc/fotg210-udc.c
+index d6ca50f019853..75bf446f4a666 100644
+--- a/drivers/usb/gadget/udc/fotg210-udc.c
++++ b/drivers/usb/gadget/udc/fotg210-udc.c
+@@ -338,15 +338,16 @@ static void fotg210_start_dma(struct fotg210_ep *ep,
+ 		} else {
+ 			buffer = req->req.buf + req->req.actual;
+ 			length = ioread32(ep->fotg210->reg +
+-					FOTG210_FIBCR(ep->epnum - 1));
+-			length &= FIBCR_BCFX;
++					FOTG210_FIBCR(ep->epnum - 1)) & FIBCR_BCFX;
++			if (length > req->req.length - req->req.actual)
++				length = req->req.length - req->req.actual;
+ 		}
+ 	} else {
+ 		buffer = req->req.buf + req->req.actual;
+ 		if (req->req.length - req->req.actual > ep->ep.maxpacket)
+ 			length = ep->ep.maxpacket;
+ 		else
+-			length = req->req.length;
++			length = req->req.length - req->req.actual;
+ 	}
+ 
+ 	d = dma_map_single(dev, buffer, length,
+@@ -379,8 +380,7 @@ static void fotg210_ep0_queue(struct fotg210_ep *ep,
+ 	}
+ 	if (ep->dir_in) { /* if IN */
+ 		fotg210_start_dma(ep, req);
+-		if ((req->req.length == req->req.actual) ||
+-		    (req->req.actual < ep->ep.maxpacket))
++		if (req->req.length == req->req.actual)
+ 			fotg210_done(ep, req, 0);
+ 	} else { /* OUT */
+ 		u32 value = ioread32(ep->fotg210->reg + FOTG210_DMISGR0);
+@@ -820,7 +820,7 @@ static void fotg210_ep0in(struct fotg210_udc *fotg210)
+ 		if (req->req.length)
+ 			fotg210_start_dma(ep, req);
+ 
+-		if ((req->req.length - req->req.actual) < ep->ep.maxpacket)
++		if (req->req.actual == req->req.length)
+ 			fotg210_done(ep, req, 0);
+ 	} else {
+ 		fotg210_set_cxdone(fotg210);
+@@ -849,12 +849,16 @@ static void fotg210_out_fifo_handler(struct fotg210_ep *ep)
+ {
+ 	struct fotg210_request *req = list_entry(ep->queue.next,
+ 						 struct fotg210_request, queue);
++	int disgr1 = ioread32(ep->fotg210->reg + FOTG210_DISGR1);
+ 
+ 	fotg210_start_dma(ep, req);
+ 
+-	/* finish out transfer */
++	/* Complete the request when it's full or a short packet arrived.
++	 * Like other drivers, short_not_ok isn't handled.
++	 */
++
+ 	if (req->req.length == req->req.actual ||
+-	    req->req.actual < ep->ep.maxpacket)
++	    (disgr1 & DISGR1_SPK_INT(ep->epnum - 1)))
+ 		fotg210_done(ep, req, 0);
+ }
+ 
+@@ -1027,6 +1031,12 @@ static void fotg210_init(struct fotg210_udc *fotg210)
+ 	value &= ~DMCR_GLINT_EN;
+ 	iowrite32(value, fotg210->reg + FOTG210_DMCR);
+ 
++	/* enable only grp2 irqs we handle */
++	iowrite32(~(DISGR2_DMA_ERROR | DISGR2_RX0BYTE_INT | DISGR2_TX0BYTE_INT
++		    | DISGR2_ISO_SEQ_ABORT_INT | DISGR2_ISO_SEQ_ERR_INT
++		    | DISGR2_RESM_INT | DISGR2_SUSP_INT | DISGR2_USBRST_INT),
++		  fotg210->reg + FOTG210_DMISGR2);
++
+ 	/* disable all fifo interrupt */
+ 	iowrite32(~(u32)0, fotg210->reg + FOTG210_DMISGR1);
+ 
+diff --git a/drivers/usb/gadget/udc/pch_udc.c b/drivers/usb/gadget/udc/pch_udc.c
+index a3c1fc9242686..fd3656d0f760c 100644
+--- a/drivers/usb/gadget/udc/pch_udc.c
++++ b/drivers/usb/gadget/udc/pch_udc.c
+@@ -7,12 +7,14 @@
+ #include <linux/module.h>
+ #include <linux/pci.h>
+ #include <linux/delay.h>
++#include <linux/dmi.h>
+ #include <linux/errno.h>
++#include <linux/gpio/consumer.h>
++#include <linux/gpio/machine.h>
+ #include <linux/list.h>
+ #include <linux/interrupt.h>
+ #include <linux/usb/ch9.h>
+ #include <linux/usb/gadget.h>
+-#include <linux/gpio/consumer.h>
+ #include <linux/irq.h>
+ 
+ #define PCH_VBUS_PERIOD		3000	/* VBUS polling period (msec) */
+@@ -596,18 +598,22 @@ static void pch_udc_reconnect(struct pch_udc_dev *dev)
+ static inline void pch_udc_vbus_session(struct pch_udc_dev *dev,
+ 					  int is_active)
+ {
++	unsigned long		iflags;
++
++	spin_lock_irqsave(&dev->lock, iflags);
+ 	if (is_active) {
+ 		pch_udc_reconnect(dev);
+ 		dev->vbus_session = 1;
+ 	} else {
+ 		if (dev->driver && dev->driver->disconnect) {
+-			spin_lock(&dev->lock);
++			spin_unlock_irqrestore(&dev->lock, iflags);
+ 			dev->driver->disconnect(&dev->gadget);
+-			spin_unlock(&dev->lock);
++			spin_lock_irqsave(&dev->lock, iflags);
+ 		}
+ 		pch_udc_set_disconnect(dev);
+ 		dev->vbus_session = 0;
+ 	}
++	spin_unlock_irqrestore(&dev->lock, iflags);
+ }
+ 
+ /**
+@@ -1166,20 +1172,25 @@ static int pch_udc_pcd_selfpowered(struct usb_gadget *gadget, int value)
+ static int pch_udc_pcd_pullup(struct usb_gadget *gadget, int is_on)
+ {
+ 	struct pch_udc_dev	*dev;
++	unsigned long		iflags;
+ 
+ 	if (!gadget)
+ 		return -EINVAL;
++
+ 	dev = container_of(gadget, struct pch_udc_dev, gadget);
++
++	spin_lock_irqsave(&dev->lock, iflags);
+ 	if (is_on) {
+ 		pch_udc_reconnect(dev);
+ 	} else {
+ 		if (dev->driver && dev->driver->disconnect) {
+-			spin_lock(&dev->lock);
++			spin_unlock_irqrestore(&dev->lock, iflags);
+ 			dev->driver->disconnect(&dev->gadget);
+-			spin_unlock(&dev->lock);
++			spin_lock_irqsave(&dev->lock, iflags);
+ 		}
+ 		pch_udc_set_disconnect(dev);
+ 	}
++	spin_unlock_irqrestore(&dev->lock, iflags);
+ 
+ 	return 0;
+ }
+@@ -1350,6 +1361,43 @@ static irqreturn_t pch_vbus_gpio_irq(int irq, void *data)
+ 	return IRQ_HANDLED;
+ }
+ 
++static struct gpiod_lookup_table minnowboard_udc_gpios = {
++	.dev_id		= "0000:02:02.4",
++	.table		= {
++		GPIO_LOOKUP("sch_gpio.33158", 12, NULL, GPIO_ACTIVE_HIGH),
++		{}
++	},
++};
++
++static const struct dmi_system_id pch_udc_gpio_dmi_table[] = {
++	{
++		.ident = "MinnowBoard",
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "MinnowBoard"),
++		},
++		.driver_data = &minnowboard_udc_gpios,
++	},
++	{ }
++};
++
++static void pch_vbus_gpio_remove_table(void *table)
++{
++	gpiod_remove_lookup_table(table);
++}
++
++static int pch_vbus_gpio_add_table(struct pch_udc_dev *dev)
++{
++	struct device *d = &dev->pdev->dev;
++	const struct dmi_system_id *dmi;
++
++	dmi = dmi_first_match(pch_udc_gpio_dmi_table);
++	if (!dmi)
++		return 0;
++
++	gpiod_add_lookup_table(dmi->driver_data);
++	return devm_add_action_or_reset(d, pch_vbus_gpio_remove_table, dmi->driver_data);
++}
++
+ /**
+  * pch_vbus_gpio_init() - This API initializes GPIO port detecting VBUS.
+  * @dev:		Reference to the driver structure
+@@ -1360,6 +1408,7 @@ static irqreturn_t pch_vbus_gpio_irq(int irq, void *data)
+  */
+ static int pch_vbus_gpio_init(struct pch_udc_dev *dev)
+ {
++	struct device *d = &dev->pdev->dev;
+ 	int err;
+ 	int irq_num = 0;
+ 	struct gpio_desc *gpiod;
+@@ -1367,8 +1416,12 @@ static int pch_vbus_gpio_init(struct pch_udc_dev *dev)
+ 	dev->vbus_gpio.port = NULL;
+ 	dev->vbus_gpio.intr = 0;
+ 
++	err = pch_vbus_gpio_add_table(dev);
++	if (err)
++		return err;
++
+ 	/* Retrieve the GPIO line from the USB gadget device */
+-	gpiod = devm_gpiod_get(dev->gadget.dev.parent, NULL, GPIOD_IN);
++	gpiod = devm_gpiod_get_optional(d, NULL, GPIOD_IN);
+ 	if (IS_ERR(gpiod))
+ 		return PTR_ERR(gpiod);
+ 	gpiod_set_consumer_name(gpiod, "pch_vbus");
+@@ -1756,7 +1809,7 @@ static struct usb_request *pch_udc_alloc_request(struct usb_ep *usbep,
+ 	}
+ 	/* prevent from using desc. - set HOST BUSY */
+ 	dma_desc->status |= PCH_UDC_BS_HST_BSY;
+-	dma_desc->dataptr = cpu_to_le32(DMA_ADDR_INVALID);
++	dma_desc->dataptr = lower_32_bits(DMA_ADDR_INVALID);
+ 	req->td_data = dma_desc;
+ 	req->td_data_last = dma_desc;
+ 	req->chain_len = 1;
+@@ -2298,6 +2351,21 @@ static void pch_udc_svc_data_out(struct pch_udc_dev *dev, int ep_num)
+ 		pch_udc_set_dma(dev, DMA_DIR_RX);
+ }
+ 
++static int pch_udc_gadget_setup(struct pch_udc_dev *dev)
++	__must_hold(&dev->lock)
++{
++	int rc;
++
++	/* In some cases we can get an interrupt before the driver is set up */
++	if (!dev->driver)
++		return -ESHUTDOWN;
++
++	spin_unlock(&dev->lock);
++	rc = dev->driver->setup(&dev->gadget, &dev->setup_data);
++	spin_lock(&dev->lock);
++	return rc;
++}
++
+ /**
+  * pch_udc_svc_control_in() - Handle Control IN endpoint interrupts
+  * @dev:	Reference to the device structure
+@@ -2369,15 +2437,12 @@ static void pch_udc_svc_control_out(struct pch_udc_dev *dev)
+ 			dev->gadget.ep0 = &dev->ep[UDC_EP0IN_IDX].ep;
+ 		else /* OUT */
+ 			dev->gadget.ep0 = &ep->ep;
+-		spin_lock(&dev->lock);
+ 		/* If Mass storage Reset */
+ 		if ((dev->setup_data.bRequestType == 0x21) &&
+ 		    (dev->setup_data.bRequest == 0xFF))
+ 			dev->prot_stall = 0;
+ 		/* call gadget with setup data received */
+-		setup_supported = dev->driver->setup(&dev->gadget,
+-						     &dev->setup_data);
+-		spin_unlock(&dev->lock);
++		setup_supported = pch_udc_gadget_setup(dev);
+ 
+ 		if (dev->setup_data.bRequestType & USB_DIR_IN) {
+ 			ep->td_data->status = (ep->td_data->status &
+@@ -2625,9 +2690,7 @@ static void pch_udc_svc_intf_interrupt(struct pch_udc_dev *dev)
+ 		dev->ep[i].halted = 0;
+ 	}
+ 	dev->stall = 0;
+-	spin_unlock(&dev->lock);
+-	dev->driver->setup(&dev->gadget, &dev->setup_data);
+-	spin_lock(&dev->lock);
++	pch_udc_gadget_setup(dev);
+ }
+ 
+ /**
+@@ -2662,9 +2725,7 @@ static void pch_udc_svc_cfg_interrupt(struct pch_udc_dev *dev)
+ 	dev->stall = 0;
+ 
+ 	/* call gadget zero with setup data received */
+-	spin_unlock(&dev->lock);
+-	dev->driver->setup(&dev->gadget, &dev->setup_data);
+-	spin_lock(&dev->lock);
++	pch_udc_gadget_setup(dev);
+ }
+ 
+ /**
+@@ -2870,14 +2931,20 @@ static void pch_udc_pcd_reinit(struct pch_udc_dev *dev)
+  * @dev:	Reference to the driver structure
+  *
+  * Return codes:
+- *	0: Success
++ *	0:		Success
++ *	-%ERRNO:	All kind of errors when retrieving VBUS GPIO
+  */
+ static int pch_udc_pcd_init(struct pch_udc_dev *dev)
+ {
++	int ret;
++
+ 	pch_udc_init(dev);
+ 	pch_udc_pcd_reinit(dev);
+-	pch_vbus_gpio_init(dev);
+-	return 0;
++
++	ret = pch_vbus_gpio_init(dev);
++	if (ret)
++		pch_udc_exit(dev);
++	return ret;
+ }
+ 
+ /**
+@@ -2938,7 +3005,7 @@ static int init_dma_pools(struct pch_udc_dev *dev)
+ 	dev->dma_addr = dma_map_single(&dev->pdev->dev, ep0out_buf,
+ 				       UDC_EP0OUT_BUFF_SIZE * 4,
+ 				       DMA_FROM_DEVICE);
+-	return 0;
++	return dma_mapping_error(&dev->pdev->dev, dev->dma_addr);
+ }
+ 
+ static int pch_udc_start(struct usb_gadget *g,
+@@ -3063,6 +3130,7 @@ static int pch_udc_probe(struct pci_dev *pdev,
+ 	if (retval)
+ 		return retval;
+ 
++	dev->pdev = pdev;
+ 	pci_set_drvdata(pdev, dev);
+ 
+ 	/* Determine BAR based on PCI ID */
+@@ -3078,16 +3146,10 @@ static int pch_udc_probe(struct pci_dev *pdev,
+ 
+ 	dev->base_addr = pcim_iomap_table(pdev)[bar];
+ 
+-	/*
+-	 * FIXME: add a GPIO descriptor table to pdev.dev using
+-	 * gpiod_add_descriptor_table() from <linux/gpio/machine.h> based on
+-	 * the PCI subsystem ID. The system-dependent GPIO is necessary for
+-	 * VBUS operation.
+-	 */
+-
+ 	/* initialize the hardware */
+-	if (pch_udc_pcd_init(dev))
+-		return -ENODEV;
++	retval = pch_udc_pcd_init(dev);
++	if (retval)
++		return retval;
+ 
+ 	pci_enable_msi(pdev);
+ 
+@@ -3104,7 +3166,6 @@ static int pch_udc_probe(struct pci_dev *pdev,
+ 
+ 	/* device struct setup */
+ 	spin_lock_init(&dev->lock);
+-	dev->pdev = pdev;
+ 	dev->gadget.ops = &pch_udc_ops;
+ 
+ 	retval = init_dma_pools(dev);
+diff --git a/drivers/usb/gadget/udc/r8a66597-udc.c b/drivers/usb/gadget/udc/r8a66597-udc.c
+index 896c1a016d550..65cae48834545 100644
+--- a/drivers/usb/gadget/udc/r8a66597-udc.c
++++ b/drivers/usb/gadget/udc/r8a66597-udc.c
+@@ -1849,6 +1849,8 @@ static int r8a66597_probe(struct platform_device *pdev)
+ 		return PTR_ERR(reg);
+ 
+ 	ires = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
++	if (!ires)
++		return -EINVAL;
+ 	irq = ires->start;
+ 	irq_trigger = ires->flags & IRQF_TRIGGER_MASK;
+ 
+diff --git a/drivers/usb/gadget/udc/s3c2410_udc.c b/drivers/usb/gadget/udc/s3c2410_udc.c
+index 1d3ebb07ccd4d..b154b62abefa1 100644
+--- a/drivers/usb/gadget/udc/s3c2410_udc.c
++++ b/drivers/usb/gadget/udc/s3c2410_udc.c
+@@ -54,8 +54,6 @@ static struct clk		*udc_clock;
+ static struct clk		*usb_bus_clock;
+ static void __iomem		*base_addr;
+ static int			irq_usbd;
+-static u64			rsrc_start;
+-static u64			rsrc_len;
+ static struct dentry		*s3c2410_udc_debugfs_root;
+ 
+ static inline u32 udc_read(u32 reg)
+@@ -1752,7 +1750,8 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
+ 	udc_clock = clk_get(NULL, "usb-device");
+ 	if (IS_ERR(udc_clock)) {
+ 		dev_err(dev, "failed to get udc clock source\n");
+-		return PTR_ERR(udc_clock);
++		retval = PTR_ERR(udc_clock);
++		goto err_usb_bus_clk;
+ 	}
+ 
+ 	clk_prepare_enable(udc_clock);
+@@ -1775,7 +1774,7 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
+ 	base_addr = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(base_addr)) {
+ 		retval = PTR_ERR(base_addr);
+-		goto err_mem;
++		goto err_udc_clk;
+ 	}
+ 
+ 	the_controller = udc;
+@@ -1793,7 +1792,7 @@ static int s3c2410_udc_probe(struct platform_device *pdev)
+ 	if (retval != 0) {
+ 		dev_err(dev, "cannot get irq %i, err %d\n", irq_usbd, retval);
+ 		retval = -EBUSY;
+-		goto err_map;
++		goto err_udc_clk;
+ 	}
+ 
+ 	dev_dbg(dev, "got irq %i\n", irq_usbd);
+@@ -1864,10 +1863,14 @@ err_gpio_claim:
+ 		gpio_free(udc_info->vbus_pin);
+ err_int:
+ 	free_irq(irq_usbd, udc);
+-err_map:
+-	iounmap(base_addr);
+-err_mem:
+-	release_mem_region(rsrc_start, rsrc_len);
++err_udc_clk:
++	clk_disable_unprepare(udc_clock);
++	clk_put(udc_clock);
++	udc_clock = NULL;
++err_usb_bus_clk:
++	clk_disable_unprepare(usb_bus_clock);
++	clk_put(usb_bus_clock);
++	usb_bus_clock = NULL;
+ 
+ 	return retval;
+ }
+@@ -1899,9 +1902,6 @@ static int s3c2410_udc_remove(struct platform_device *pdev)
+ 
+ 	free_irq(irq_usbd, udc);
+ 
+-	iounmap(base_addr);
+-	release_mem_region(rsrc_start, rsrc_len);
+-
+ 	if (!IS_ERR(udc_clock) && udc_clock != NULL) {
+ 		clk_disable_unprepare(udc_clock);
+ 		clk_put(udc_clock);
+diff --git a/drivers/usb/gadget/udc/snps_udc_plat.c b/drivers/usb/gadget/udc/snps_udc_plat.c
+index 32f1d3e90c264..99805d60a7ab3 100644
+--- a/drivers/usb/gadget/udc/snps_udc_plat.c
++++ b/drivers/usb/gadget/udc/snps_udc_plat.c
+@@ -114,8 +114,8 @@ static int udc_plat_probe(struct platform_device *pdev)
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	udc->virt_addr = devm_ioremap_resource(dev, res);
+-	if (IS_ERR(udc->regs))
+-		return PTR_ERR(udc->regs);
++	if (IS_ERR(udc->virt_addr))
++		return PTR_ERR(udc->virt_addr);
+ 
+ 	/* udc csr registers base */
+ 	udc->csr = udc->virt_addr + UDC_CSR_ADDR;
+diff --git a/drivers/usb/host/Kconfig b/drivers/usb/host/Kconfig
+index b94f2a070c056..df9428f1dc5ed 100644
+--- a/drivers/usb/host/Kconfig
++++ b/drivers/usb/host/Kconfig
+@@ -272,6 +272,7 @@ config USB_EHCI_TEGRA
+ 	select USB_CHIPIDEA
+ 	select USB_CHIPIDEA_HOST
+ 	select USB_CHIPIDEA_TEGRA
++	select USB_GADGET
+ 	help
+ 	  This option is deprecated now and the driver was removed, use
+ 	  USB_CHIPIDEA_TEGRA instead.
+diff --git a/drivers/usb/host/xhci-mtk-sch.c b/drivers/usb/host/xhci-mtk-sch.c
+index b45e5bf089979..8950d1f10a7fb 100644
+--- a/drivers/usb/host/xhci-mtk-sch.c
++++ b/drivers/usb/host/xhci-mtk-sch.c
+@@ -378,6 +378,31 @@ static void update_bus_bw(struct mu3h_sch_bw_info *sch_bw,
+ 	sch_ep->allocated = used;
+ }
+ 
++static int check_fs_bus_bw(struct mu3h_sch_ep_info *sch_ep, int offset)
++{
++	struct mu3h_sch_tt *tt = sch_ep->sch_tt;
++	u32 num_esit, tmp;
++	int base;
++	int i, j;
++
++	num_esit = XHCI_MTK_MAX_ESIT / sch_ep->esit;
++	for (i = 0; i < num_esit; i++) {
++		base = offset + i * sch_ep->esit;
++
++		/*
++		 * Compared with hs bus, no matter what ep type,
++		 * Compared with the hs bus, the hub always delays one
++		 * uframe to send data, no matter what the ep type is
++		for (j = 0; j < sch_ep->cs_count; j++) {
++			tmp = tt->fs_bus_bw[base + j] + sch_ep->bw_cost_per_microframe;
++			if (tmp > FS_PAYLOAD_MAX)
++				return -ERANGE;
++		}
++	}
++
++	return 0;
++}
++
+ static int check_sch_tt(struct usb_device *udev,
+ 	struct mu3h_sch_ep_info *sch_ep, u32 offset)
+ {
+@@ -402,7 +427,7 @@ static int check_sch_tt(struct usb_device *udev,
+ 			return -ERANGE;
+ 
+ 		for (i = 0; i < sch_ep->cs_count; i++)
+-			if (test_bit(offset + i, tt->split_bit_map))
++			if (test_bit(offset + i, tt->ss_bit_map))
+ 				return -ERANGE;
+ 
+ 	} else {
+@@ -432,7 +457,7 @@ static int check_sch_tt(struct usb_device *udev,
+ 			cs_count = 7; /* HW limit */
+ 
+ 		for (i = 0; i < cs_count + 2; i++) {
+-			if (test_bit(offset + i, tt->split_bit_map))
++			if (test_bit(offset + i, tt->ss_bit_map))
+ 				return -ERANGE;
+ 		}
+ 
+@@ -448,24 +473,44 @@ static int check_sch_tt(struct usb_device *udev,
+ 			sch_ep->num_budget_microframes = sch_ep->esit;
+ 	}
+ 
+-	return 0;
++	return check_fs_bus_bw(sch_ep, offset);
+ }
+ 
+ static void update_sch_tt(struct usb_device *udev,
+-	struct mu3h_sch_ep_info *sch_ep)
++	struct mu3h_sch_ep_info *sch_ep, bool used)
+ {
+ 	struct mu3h_sch_tt *tt = sch_ep->sch_tt;
+ 	u32 base, num_esit;
++	int bw_updated;
++	int bits;
+ 	int i, j;
+ 
+ 	num_esit = XHCI_MTK_MAX_ESIT / sch_ep->esit;
++	bits = (sch_ep->ep_type == ISOC_OUT_EP) ? sch_ep->cs_count : 1;
++
++	if (used)
++		bw_updated = sch_ep->bw_cost_per_microframe;
++	else
++		bw_updated = -sch_ep->bw_cost_per_microframe;
++
+ 	for (i = 0; i < num_esit; i++) {
+ 		base = sch_ep->offset + i * sch_ep->esit;
+-		for (j = 0; j < sch_ep->num_budget_microframes; j++)
+-			set_bit(base + j, tt->split_bit_map);
++
++		for (j = 0; j < bits; j++) {
++			if (used)
++				set_bit(base + j, tt->ss_bit_map);
++			else
++				clear_bit(base + j, tt->ss_bit_map);
++		}
++
++		for (j = 0; j < sch_ep->cs_count; j++)
++			tt->fs_bus_bw[base + j] += bw_updated;
+ 	}
+ 
+-	list_add_tail(&sch_ep->tt_endpoint, &tt->ep_list);
++	if (used)
++		list_add_tail(&sch_ep->tt_endpoint, &tt->ep_list);
++	else
++		list_del(&sch_ep->tt_endpoint);
+ }
+ 
+ static int check_sch_bw(struct usb_device *udev,
+@@ -535,7 +580,7 @@ static int check_sch_bw(struct usb_device *udev,
+ 		if (!tt_offset_ok)
+ 			return -ERANGE;
+ 
+-		update_sch_tt(udev, sch_ep);
++		update_sch_tt(udev, sch_ep, 1);
+ 	}
+ 
+ 	/* update bus bandwidth info */
+@@ -548,15 +593,16 @@ static void destroy_sch_ep(struct usb_device *udev,
+ 	struct mu3h_sch_bw_info *sch_bw, struct mu3h_sch_ep_info *sch_ep)
+ {
+ 	/* only release ep bw check passed by check_sch_bw() */
+-	if (sch_ep->allocated)
++	if (sch_ep->allocated) {
+ 		update_bus_bw(sch_bw, sch_ep, 0);
++		if (sch_ep->sch_tt)
++			update_sch_tt(udev, sch_ep, 0);
++	}
+ 
+-	list_del(&sch_ep->endpoint);
+-
+-	if (sch_ep->sch_tt) {
+-		list_del(&sch_ep->tt_endpoint);
++	if (sch_ep->sch_tt)
+ 		drop_tt(udev);
+-	}
++
++	list_del(&sch_ep->endpoint);
+ 	kfree(sch_ep);
+ }
+ 
+@@ -643,7 +689,7 @@ int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ 		 */
+ 		if (usb_endpoint_xfer_int(&ep->desc)
+ 			|| usb_endpoint_xfer_isoc(&ep->desc))
+-			ep_ctx->reserved[0] |= cpu_to_le32(EP_BPKTS(1));
++			ep_ctx->reserved[0] = cpu_to_le32(EP_BPKTS(1));
+ 
+ 		return 0;
+ 	}
+@@ -730,10 +776,10 @@ int xhci_mtk_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+ 		list_move_tail(&sch_ep->endpoint, &sch_bw->bw_ep_list);
+ 
+ 		ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
+-		ep_ctx->reserved[0] |= cpu_to_le32(EP_BPKTS(sch_ep->pkts)
++		ep_ctx->reserved[0] = cpu_to_le32(EP_BPKTS(sch_ep->pkts)
+ 			| EP_BCSCOUNT(sch_ep->cs_count)
+ 			| EP_BBM(sch_ep->burst_mode));
+-		ep_ctx->reserved[1] |= cpu_to_le32(EP_BOFFSET(sch_ep->offset)
++		ep_ctx->reserved[1] = cpu_to_le32(EP_BOFFSET(sch_ep->offset)
+ 			| EP_BREPEAT(sch_ep->repeat));
+ 
+ 		xhci_dbg(xhci, " PKTS:%x, CSCOUNT:%x, BM:%x, OFFSET:%x, REPEAT:%x\n",
+diff --git a/drivers/usb/host/xhci-mtk.h b/drivers/usb/host/xhci-mtk.h
+index 080109012b9ac..2fc0568ba054e 100644
+--- a/drivers/usb/host/xhci-mtk.h
++++ b/drivers/usb/host/xhci-mtk.h
+@@ -20,13 +20,15 @@
+ #define XHCI_MTK_MAX_ESIT	64
+ 
+ /**
+- * @split_bit_map: used to avoid split microframes overlay
++ * @ss_bit_map: used to avoid start split microframes overlay
++ * @fs_bus_bw: array to keep track of bandwidth already used for FS
+  * @ep_list: Endpoints using this TT
+  * @usb_tt: usb TT related
+  * @tt_port: TT port number
+  */
+ struct mu3h_sch_tt {
+-	DECLARE_BITMAP(split_bit_map, XHCI_MTK_MAX_ESIT);
++	DECLARE_BITMAP(ss_bit_map, XHCI_MTK_MAX_ESIT);
++	u32 fs_bus_bw[XHCI_MTK_MAX_ESIT];
+ 	struct list_head ep_list;
+ 	struct usb_tt *usb_tt;
+ 	int tt_port;
+diff --git a/drivers/usb/roles/class.c b/drivers/usb/roles/class.c
+index 97f37077b7f97..33b637d0d8d99 100644
+--- a/drivers/usb/roles/class.c
++++ b/drivers/usb/roles/class.c
+@@ -189,6 +189,8 @@ usb_role_switch_find_by_fwnode(const struct fwnode_handle *fwnode)
+ 		return NULL;
+ 
+ 	dev = class_find_device_by_fwnode(role_class, fwnode);
++	if (dev)
++		WARN_ON(!try_module_get(dev->parent->driver->owner));
+ 
+ 	return dev ? to_role_switch(dev) : NULL;
+ }
+diff --git a/drivers/usb/serial/ti_usb_3410_5052.c b/drivers/usb/serial/ti_usb_3410_5052.c
+index 7252b0ce75a6c..fe1c13a8849cc 100644
+--- a/drivers/usb/serial/ti_usb_3410_5052.c
++++ b/drivers/usb/serial/ti_usb_3410_5052.c
+@@ -1418,14 +1418,19 @@ static int ti_set_serial_info(struct tty_struct *tty,
+ 	struct serial_struct *ss)
+ {
+ 	struct usb_serial_port *port = tty->driver_data;
+-	struct ti_port *tport = usb_get_serial_port_data(port);
++	struct tty_port *tport = &port->port;
+ 	unsigned cwait;
+ 
+ 	cwait = ss->closing_wait;
+ 	if (cwait != ASYNC_CLOSING_WAIT_NONE)
+ 		cwait = msecs_to_jiffies(10 * ss->closing_wait);
+ 
+-	tport->tp_port->port.closing_wait = cwait;
++	if (!capable(CAP_SYS_ADMIN)) {
++		if (cwait != tport->closing_wait)
++			return -EPERM;
++	}
++
++	tport->closing_wait = cwait;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/serial/usb_wwan.c b/drivers/usb/serial/usb_wwan.c
+index 46d46a4f99c9f..4e9c994a972a2 100644
+--- a/drivers/usb/serial/usb_wwan.c
++++ b/drivers/usb/serial/usb_wwan.c
+@@ -140,10 +140,10 @@ int usb_wwan_get_serial_info(struct tty_struct *tty,
+ 	ss->line            = port->minor;
+ 	ss->port            = port->port_number;
+ 	ss->baud_base       = tty_get_baud_rate(port->port.tty);
+-	ss->close_delay	    = port->port.close_delay / 10;
++	ss->close_delay	    = jiffies_to_msecs(port->port.close_delay) / 10;
+ 	ss->closing_wait    = port->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ?
+ 				 ASYNC_CLOSING_WAIT_NONE :
+-				 port->port.closing_wait / 10;
++				 jiffies_to_msecs(port->port.closing_wait) / 10;
+ 	return 0;
+ }
+ EXPORT_SYMBOL(usb_wwan_get_serial_info);
+@@ -155,9 +155,10 @@ int usb_wwan_set_serial_info(struct tty_struct *tty,
+ 	unsigned int closing_wait, close_delay;
+ 	int retval = 0;
+ 
+-	close_delay = ss->close_delay * 10;
++	close_delay = msecs_to_jiffies(ss->close_delay * 10);
+ 	closing_wait = ss->closing_wait == ASYNC_CLOSING_WAIT_NONE ?
+-			ASYNC_CLOSING_WAIT_NONE : ss->closing_wait * 10;
++			ASYNC_CLOSING_WAIT_NONE :
++			msecs_to_jiffies(ss->closing_wait * 10);
+ 
+ 	mutex_lock(&port->port.mutex);
+ 
+diff --git a/drivers/usb/serial/xr_serial.c b/drivers/usb/serial/xr_serial.c
+index 0ca04906da4bf..c59c8b47a120d 100644
+--- a/drivers/usb/serial/xr_serial.c
++++ b/drivers/usb/serial/xr_serial.c
+@@ -467,6 +467,11 @@ static void xr_set_termios(struct tty_struct *tty,
+ 		termios->c_cflag &= ~CSIZE;
+ 		if (old_termios)
+ 			termios->c_cflag |= old_termios->c_cflag & CSIZE;
++		else
++			termios->c_cflag |= CS8;
++
++		if (C_CSIZE(tty) == CS7)
++			bits |= XR21V141X_UART_DATA_7;
+ 		else
+ 			bits |= XR21V141X_UART_DATA_8;
+ 		break;
+diff --git a/drivers/usb/typec/stusb160x.c b/drivers/usb/typec/stusb160x.c
+index d21750bbbb44d..6eaeba9b096e1 100644
+--- a/drivers/usb/typec/stusb160x.c
++++ b/drivers/usb/typec/stusb160x.c
+@@ -682,8 +682,8 @@ static int stusb160x_probe(struct i2c_client *client)
+ 	}
+ 
+ 	fwnode = device_get_named_child_node(chip->dev, "connector");
+-	if (IS_ERR(fwnode))
+-		return PTR_ERR(fwnode);
++	if (!fwnode)
++		return -ENODEV;
+ 
+ 	/*
+ 	 * When both VDD and VSYS power supplies are present, the low power
+diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
+index a27deb0b5f03c..027afd7dfdce2 100644
+--- a/drivers/usb/typec/tcpm/tcpci.c
++++ b/drivers/usb/typec/tcpm/tcpci.c
+@@ -24,6 +24,15 @@
+ #define	AUTO_DISCHARGE_PD_HEADROOM_MV		850
+ #define	AUTO_DISCHARGE_PPS_HEADROOM_MV		1250
+ 
++#define tcpc_presenting_cc1_rd(reg) \
++	(!(TCPC_ROLE_CTRL_DRP & (reg)) && \
++	 (((reg) & (TCPC_ROLE_CTRL_CC1_MASK << TCPC_ROLE_CTRL_CC1_SHIFT)) == \
++	  (TCPC_ROLE_CTRL_CC_RD << TCPC_ROLE_CTRL_CC1_SHIFT)))
++#define tcpc_presenting_cc2_rd(reg) \
++	(!(TCPC_ROLE_CTRL_DRP & (reg)) && \
++	 (((reg) & (TCPC_ROLE_CTRL_CC2_MASK << TCPC_ROLE_CTRL_CC2_SHIFT)) == \
++	  (TCPC_ROLE_CTRL_CC_RD << TCPC_ROLE_CTRL_CC2_SHIFT)))
++
+ struct tcpci {
+ 	struct device *dev;
+ 
+@@ -178,19 +187,25 @@ static int tcpci_get_cc(struct tcpc_dev *tcpc,
+ 			enum typec_cc_status *cc1, enum typec_cc_status *cc2)
+ {
+ 	struct tcpci *tcpci = tcpc_to_tcpci(tcpc);
+-	unsigned int reg;
++	unsigned int reg, role_control;
+ 	int ret;
+ 
++	ret = regmap_read(tcpci->regmap, TCPC_ROLE_CTRL, &role_control);
++	if (ret < 0)
++		return ret;
++
+ 	ret = regmap_read(tcpci->regmap, TCPC_CC_STATUS, &reg);
+ 	if (ret < 0)
+ 		return ret;
+ 
+ 	*cc1 = tcpci_to_typec_cc((reg >> TCPC_CC_STATUS_CC1_SHIFT) &
+ 				 TCPC_CC_STATUS_CC1_MASK,
+-				 reg & TCPC_CC_STATUS_TERM);
++				 reg & TCPC_CC_STATUS_TERM ||
++				 tcpc_presenting_cc1_rd(role_control));
+ 	*cc2 = tcpci_to_typec_cc((reg >> TCPC_CC_STATUS_CC2_SHIFT) &
+ 				 TCPC_CC_STATUS_CC2_MASK,
+-				 reg & TCPC_CC_STATUS_TERM);
++				 reg & TCPC_CC_STATUS_TERM ||
++				 tcpc_presenting_cc2_rd(role_control));
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index ce7af398c7c1c..1a086ba254d23 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -268,12 +268,27 @@ struct pd_mode_data {
+ 	struct typec_altmode_desc altmode_desc[ALTMODE_DISCOVERY_MAX];
+ };
+ 
++/*
++ * @min_volt: Actual min voltage at the local port
++ * @req_min_volt: Requested min voltage to the port partner
++ * @max_volt: Actual max voltage at the local port
++ * @req_max_volt: Requested max voltage to the port partner
++ * @max_curr: Actual max current at the local port
++ * @req_max_curr: Requested max current of the port partner
++ * @req_out_volt: Requested output voltage to the port partner
++ * @req_op_curr: Requested operating current to the port partner
++ * @supported: Partner has at least one APDO hence supports PPS
++ * @active: PPS mode is active
++ */
+ struct pd_pps_data {
+ 	u32 min_volt;
++	u32 req_min_volt;
+ 	u32 max_volt;
++	u32 req_max_volt;
+ 	u32 max_curr;
+-	u32 out_volt;
+-	u32 op_curr;
++	u32 req_max_curr;
++	u32 req_out_volt;
++	u32 req_op_curr;
+ 	bool supported;
+ 	bool active;
+ };
+@@ -389,7 +404,10 @@ struct tcpm_port {
+ 	unsigned int operating_snk_mw;
+ 	bool update_sink_caps;
+ 
+-	/* Requested current / voltage */
++	/* Requested current / voltage to the port partner */
++	u32 req_current_limit;
++	u32 req_supply_voltage;
++	/* Actual current / voltage limit of the local port */
+ 	u32 current_limit;
+ 	u32 supply_voltage;
+ 
+@@ -438,6 +456,9 @@ struct tcpm_port {
+ 	enum tcpm_ams next_ams;
+ 	bool in_ams;
+ 
++	/* Auto vbus discharge status */
++	bool auto_vbus_discharge_enabled;
++
+ #ifdef CONFIG_DEBUG_FS
+ 	struct dentry *dentry;
+ 	struct mutex logbuffer_lock;	/* log buffer access lock */
+@@ -507,6 +528,9 @@ static const char * const pd_rev[] = {
+ 	(tcpm_port_is_sink(port) && \
+ 	((port)->cc1 == TYPEC_CC_RP_3_0 || (port)->cc2 == TYPEC_CC_RP_3_0))
+ 
++#define tcpm_wait_for_discharge(port) \
++	(((port)->auto_vbus_discharge_enabled && !(port)->vbus_vsafe0v) ? PD_T_SAFE_0V : 0)
++
+ static enum tcpm_state tcpm_default_state(struct tcpm_port *port)
+ {
+ 	if (port->port_type == TYPEC_PORT_DRP) {
+@@ -2432,8 +2456,8 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
+ 		case SNK_TRANSITION_SINK:
+ 			if (port->vbus_present) {
+ 				tcpm_set_current_limit(port,
+-						       port->current_limit,
+-						       port->supply_voltage);
++						       port->req_current_limit,
++						       port->req_supply_voltage);
+ 				port->explicit_contract = true;
+ 				tcpm_set_auto_vbus_discharge_threshold(port,
+ 								       TYPEC_PWR_MODE_PD,
+@@ -2492,8 +2516,8 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
+ 			break;
+ 		case SNK_NEGOTIATE_PPS_CAPABILITIES:
+ 			/* Revert data back from any requested PPS updates */
+-			port->pps_data.out_volt = port->supply_voltage;
+-			port->pps_data.op_curr = port->current_limit;
++			port->pps_data.req_out_volt = port->supply_voltage;
++			port->pps_data.req_op_curr = port->current_limit;
+ 			port->pps_status = (type == PD_CTRL_WAIT ?
+ 					    -EAGAIN : -EOPNOTSUPP);
+ 
+@@ -2542,8 +2566,12 @@ static void tcpm_pd_ctrl_request(struct tcpm_port *port,
+ 			break;
+ 		case SNK_NEGOTIATE_PPS_CAPABILITIES:
+ 			port->pps_data.active = true;
+-			port->supply_voltage = port->pps_data.out_volt;
+-			port->current_limit = port->pps_data.op_curr;
++			port->pps_data.min_volt = port->pps_data.req_min_volt;
++			port->pps_data.max_volt = port->pps_data.req_max_volt;
++			port->pps_data.max_curr = port->pps_data.req_max_curr;
++			port->req_supply_voltage = port->pps_data.req_out_volt;
++			port->req_current_limit = port->pps_data.req_op_curr;
++			power_supply_changed(port->psy);
+ 			tcpm_set_state(port, SNK_TRANSITION_SINK, 0);
+ 			break;
+ 		case SOFT_RESET_SEND:
+@@ -3102,17 +3130,16 @@ static unsigned int tcpm_pd_select_pps_apdo(struct tcpm_port *port)
+ 		src = port->source_caps[src_pdo];
+ 		snk = port->snk_pdo[snk_pdo];
+ 
+-		port->pps_data.min_volt = max(pdo_pps_apdo_min_voltage(src),
+-					      pdo_pps_apdo_min_voltage(snk));
+-		port->pps_data.max_volt = min(pdo_pps_apdo_max_voltage(src),
+-					      pdo_pps_apdo_max_voltage(snk));
+-		port->pps_data.max_curr = min_pps_apdo_current(src, snk);
+-		port->pps_data.out_volt = min(port->pps_data.max_volt,
+-					      max(port->pps_data.min_volt,
+-						  port->pps_data.out_volt));
+-		port->pps_data.op_curr = min(port->pps_data.max_curr,
+-					     port->pps_data.op_curr);
+-		power_supply_changed(port->psy);
++		port->pps_data.req_min_volt = max(pdo_pps_apdo_min_voltage(src),
++						  pdo_pps_apdo_min_voltage(snk));
++		port->pps_data.req_max_volt = min(pdo_pps_apdo_max_voltage(src),
++						  pdo_pps_apdo_max_voltage(snk));
++		port->pps_data.req_max_curr = min_pps_apdo_current(src, snk);
++		port->pps_data.req_out_volt = min(port->pps_data.max_volt,
++						  max(port->pps_data.min_volt,
++						      port->pps_data.req_out_volt));
++		port->pps_data.req_op_curr = min(port->pps_data.max_curr,
++						 port->pps_data.req_op_curr);
+ 	}
+ 
+ 	return src_pdo;
+@@ -3192,8 +3219,8 @@ static int tcpm_pd_build_request(struct tcpm_port *port, u32 *rdo)
+ 			 flags & RDO_CAP_MISMATCH ? " [mismatch]" : "");
+ 	}
+ 
+-	port->current_limit = ma;
+-	port->supply_voltage = mv;
++	port->req_current_limit = ma;
++	port->req_supply_voltage = mv;
+ 
+ 	return 0;
+ }
+@@ -3239,10 +3266,10 @@ static int tcpm_pd_build_pps_request(struct tcpm_port *port, u32 *rdo)
+ 			tcpm_log(port, "Invalid APDO selected!");
+ 			return -EINVAL;
+ 		}
+-		max_mv = port->pps_data.max_volt;
+-		max_ma = port->pps_data.max_curr;
+-		out_mv = port->pps_data.out_volt;
+-		op_ma = port->pps_data.op_curr;
++		max_mv = port->pps_data.req_max_volt;
++		max_ma = port->pps_data.req_max_curr;
++		out_mv = port->pps_data.req_out_volt;
++		op_ma = port->pps_data.req_op_curr;
+ 		break;
+ 	default:
+ 		tcpm_log(port, "Invalid PDO selected!");
+@@ -3289,8 +3316,8 @@ static int tcpm_pd_build_pps_request(struct tcpm_port *port, u32 *rdo)
+ 	tcpm_log(port, "Requesting APDO %d: %u mV, %u mA",
+ 		 src_pdo_index, out_mv, op_ma);
+ 
+-	port->pps_data.op_curr = op_ma;
+-	port->pps_data.out_volt = out_mv;
++	port->pps_data.req_op_curr = op_ma;
++	port->pps_data.req_out_volt = out_mv;
+ 
+ 	return 0;
+ }
+@@ -3418,6 +3445,8 @@ static int tcpm_src_attach(struct tcpm_port *port)
+ 	if (port->tcpc->enable_auto_vbus_discharge) {
+ 		ret = port->tcpc->enable_auto_vbus_discharge(port->tcpc, true);
+ 		tcpm_log_force(port, "enable vbus discharge ret:%d", ret);
++		if (!ret)
++			port->auto_vbus_discharge_enabled = true;
+ 	}
+ 
+ 	ret = tcpm_set_roles(port, true, TYPEC_SOURCE, tcpm_data_role_for_source(port));
+@@ -3500,6 +3529,8 @@ static void tcpm_reset_port(struct tcpm_port *port)
+ 	if (port->tcpc->enable_auto_vbus_discharge) {
+ 		ret = port->tcpc->enable_auto_vbus_discharge(port->tcpc, false);
+ 		tcpm_log_force(port, "Disable vbus discharge ret:%d", ret);
++		if (!ret)
++			port->auto_vbus_discharge_enabled = false;
+ 	}
+ 	port->in_ams = false;
+ 	port->ams = NONE_AMS;
+@@ -3533,8 +3564,6 @@ static void tcpm_reset_port(struct tcpm_port *port)
+ 	port->sink_cap_done = false;
+ 	if (port->tcpc->enable_frs)
+ 		port->tcpc->enable_frs(port->tcpc, false);
+-
+-	power_supply_changed(port->psy);
+ }
+ 
+ static void tcpm_detach(struct tcpm_port *port)
+@@ -3574,6 +3603,8 @@ static int tcpm_snk_attach(struct tcpm_port *port)
+ 		tcpm_set_auto_vbus_discharge_threshold(port, TYPEC_PWR_MODE_USB, false, VSAFE5V);
+ 		ret = port->tcpc->enable_auto_vbus_discharge(port->tcpc, true);
+ 		tcpm_log_force(port, "enable vbus discharge ret:%d", ret);
++		if (!ret)
++			port->auto_vbus_discharge_enabled = true;
+ 	}
+ 
+ 	ret = tcpm_set_roles(port, true, TYPEC_SINK, tcpm_data_role_for_sink(port));
+@@ -4103,6 +4134,23 @@ static void run_state_machine(struct tcpm_port *port)
+ 		}
+ 		break;
+ 	case SNK_TRANSITION_SINK:
++		/* From the USB PD spec:
++		 * "The Sink Shall transition to Sink Standby before a positive or
++		 * negative voltage transition of VBUS. During Sink Standby
++		 * the Sink Shall reduce its power draw to pSnkStdby."
++		 *
++		 * This is not applicable to PPS though as the port can continue
++		 * to draw negotiated power without switching to standby.
++		 */
++		if (port->supply_voltage != port->req_supply_voltage && !port->pps_data.active &&
++		    port->current_limit * port->supply_voltage / 1000 > PD_P_SNK_STDBY_MW) {
++			u32 stdby_ma = PD_P_SNK_STDBY_MW * 1000 / port->supply_voltage;
++
++			tcpm_log(port, "Setting standby current %u mV @ %u mA",
++				 port->supply_voltage, stdby_ma);
++			tcpm_set_current_limit(port, stdby_ma, port->supply_voltage);
++		}
++		fallthrough;
+ 	case SNK_TRANSITION_SINK_VBUS:
+ 		tcpm_set_state(port, hard_reset_state(port),
+ 			       PD_T_PS_TRANSITION);
+@@ -4676,9 +4724,9 @@ static void _tcpm_cc_change(struct tcpm_port *port, enum typec_cc_status cc1,
+ 		if (tcpm_port_is_disconnected(port) ||
+ 		    !tcpm_port_is_source(port)) {
+ 			if (port->port_type == TYPEC_PORT_SRC)
+-				tcpm_set_state(port, SRC_UNATTACHED, 0);
++				tcpm_set_state(port, SRC_UNATTACHED, tcpm_wait_for_discharge(port));
+ 			else
+-				tcpm_set_state(port, SNK_UNATTACHED, 0);
++				tcpm_set_state(port, SNK_UNATTACHED, tcpm_wait_for_discharge(port));
+ 		}
+ 		break;
+ 	case SNK_UNATTACHED:
+@@ -4709,7 +4757,23 @@ static void _tcpm_cc_change(struct tcpm_port *port, enum typec_cc_status cc1,
+ 			tcpm_set_state(port, SNK_DEBOUNCED, 0);
+ 		break;
+ 	case SNK_READY:
+-		if (tcpm_port_is_disconnected(port))
++		/*
++		 * EXIT condition is based primarily on vbus disconnect and CC is secondary.
++		 * "A port that has entered into USB PD communications with the Source and
++		 * has seen the CC voltage exceed vRd-USB may monitor the CC pin to detect
++		 * cable disconnect in addition to monitoring VBUS.
++		 *
++		 * A port that is monitoring the CC voltage for disconnect (but is not in
++		 * the process of a USB PD PR_Swap or USB PD FR_Swap) shall transition to
++		 * Unattached.SNK within tSinkDisconnect after the CC voltage remains below
++		 * vRd-USB for tPDDebounce."
++		 *
++		 * When set_auto_vbus_discharge_threshold is enabled, CC pins go
++		 * away before vbus decays to disconnect threshold. Allow
++		 * disconnect to be driven by vbus disconnect when auto vbus
++		 * discharge is enabled.
++		 */
++		if (!port->auto_vbus_discharge_enabled && tcpm_port_is_disconnected(port))
+ 			tcpm_set_state(port, unattached_state(port), 0);
+ 		else if (!port->pd_capable &&
+ 			 (cc1 != old_cc1 || cc2 != old_cc2))
+@@ -4808,9 +4872,13 @@ static void _tcpm_cc_change(struct tcpm_port *port, enum typec_cc_status cc1,
+ 		 * Ignore CC changes here.
+ 		 */
+ 		break;
+-
+ 	default:
+-		if (tcpm_port_is_disconnected(port))
++		/*
++		 * While acting as sink and auto vbus discharge is enabled, Allow disconnect
++		 * to be driven by vbus disconnect.
++		 */
++		if (tcpm_port_is_disconnected(port) && !(port->pwr_role == TYPEC_SINK &&
++							 port->auto_vbus_discharge_enabled))
+ 			tcpm_set_state(port, unattached_state(port), 0);
+ 		break;
+ 	}
+@@ -4974,8 +5042,16 @@ static void _tcpm_pd_vbus_off(struct tcpm_port *port)
+ 	case SRC_TRANSITION_SUPPLY:
+ 	case SRC_READY:
+ 	case SRC_WAIT_NEW_CAPABILITIES:
+-		/* Force to unattached state to re-initiate connection */
+-		tcpm_set_state(port, SRC_UNATTACHED, 0);
++		/*
++		 * Force to unattached state to re-initiate connection.
++		 * DRP port should move to Unattached.SNK instead of Unattached.SRC if
++		 * sink removed. Although sink removal here is due to source's vbus collapse,
++		 * treat it the same way for consistency.
++		 */
++		if (port->port_type == TYPEC_PORT_SRC)
++			tcpm_set_state(port, SRC_UNATTACHED, tcpm_wait_for_discharge(port));
++		else
++			tcpm_set_state(port, SNK_UNATTACHED, tcpm_wait_for_discharge(port));
+ 		break;
+ 
+ 	case PORT_RESET:
+@@ -4994,9 +5070,8 @@ static void _tcpm_pd_vbus_off(struct tcpm_port *port)
+ 		break;
+ 
+ 	default:
+-		if (port->pwr_role == TYPEC_SINK &&
+-		    port->attached)
+-			tcpm_set_state(port, SNK_UNATTACHED, 0);
++		if (port->pwr_role == TYPEC_SINK && port->attached)
++			tcpm_set_state(port, SNK_UNATTACHED, tcpm_wait_for_discharge(port));
+ 		break;
+ 	}
+ }
+@@ -5018,7 +5093,23 @@ static void _tcpm_pd_vbus_vsafe0v(struct tcpm_port *port)
+ 			tcpm_set_state(port, tcpm_try_snk(port) ? SNK_TRY : SRC_ATTACHED,
+ 				       PD_T_CC_DEBOUNCE);
+ 		break;
++	case SRC_STARTUP:
++	case SRC_SEND_CAPABILITIES:
++	case SRC_SEND_CAPABILITIES_TIMEOUT:
++	case SRC_NEGOTIATE_CAPABILITIES:
++	case SRC_TRANSITION_SUPPLY:
++	case SRC_READY:
++	case SRC_WAIT_NEW_CAPABILITIES:
++		if (port->auto_vbus_discharge_enabled) {
++			if (port->port_type == TYPEC_PORT_SRC)
++				tcpm_set_state(port, SRC_UNATTACHED, 0);
++			else
++				tcpm_set_state(port, SNK_UNATTACHED, 0);
++		}
++		break;
+ 	default:
++		if (port->pwr_role == TYPEC_SINK && port->auto_vbus_discharge_enabled)
++			tcpm_set_state(port, SNK_UNATTACHED, 0);
+ 		break;
+ 	}
+ }
+@@ -5374,7 +5465,7 @@ static int tcpm_try_role(struct typec_port *p, int role)
+ 	return ret;
+ }
+ 
+-static int tcpm_pps_set_op_curr(struct tcpm_port *port, u16 op_curr)
++static int tcpm_pps_set_op_curr(struct tcpm_port *port, u16 req_op_curr)
+ {
+ 	unsigned int target_mw;
+ 	int ret;
+@@ -5392,12 +5483,12 @@ static int tcpm_pps_set_op_curr(struct tcpm_port *port, u16 op_curr)
+ 		goto port_unlock;
+ 	}
+ 
+-	if (op_curr > port->pps_data.max_curr) {
++	if (req_op_curr > port->pps_data.max_curr) {
+ 		ret = -EINVAL;
+ 		goto port_unlock;
+ 	}
+ 
+-	target_mw = (op_curr * port->pps_data.out_volt) / 1000;
++	target_mw = (req_op_curr * port->supply_voltage) / 1000;
+ 	if (target_mw < port->operating_snk_mw) {
+ 		ret = -EINVAL;
+ 		goto port_unlock;
+@@ -5411,10 +5502,10 @@ static int tcpm_pps_set_op_curr(struct tcpm_port *port, u16 op_curr)
+ 	}
+ 
+ 	/* Round down operating current to align with PPS valid steps */
+-	op_curr = op_curr - (op_curr % RDO_PROG_CURR_MA_STEP);
++	req_op_curr = req_op_curr - (req_op_curr % RDO_PROG_CURR_MA_STEP);
+ 
+ 	reinit_completion(&port->pps_complete);
+-	port->pps_data.op_curr = op_curr;
++	port->pps_data.req_op_curr = req_op_curr;
+ 	port->pps_status = 0;
+ 	port->pps_pending = true;
+ 	mutex_unlock(&port->lock);
+@@ -5435,7 +5526,7 @@ swap_unlock:
+ 	return ret;
+ }
+ 
+-static int tcpm_pps_set_out_volt(struct tcpm_port *port, u16 out_volt)
++static int tcpm_pps_set_out_volt(struct tcpm_port *port, u16 req_out_volt)
+ {
+ 	unsigned int target_mw;
+ 	int ret;
+@@ -5453,13 +5544,13 @@ static int tcpm_pps_set_out_volt(struct tcpm_port *port, u16 out_volt)
+ 		goto port_unlock;
+ 	}
+ 
+-	if (out_volt < port->pps_data.min_volt ||
+-	    out_volt > port->pps_data.max_volt) {
++	if (req_out_volt < port->pps_data.min_volt ||
++	    req_out_volt > port->pps_data.max_volt) {
+ 		ret = -EINVAL;
+ 		goto port_unlock;
+ 	}
+ 
+-	target_mw = (port->pps_data.op_curr * out_volt) / 1000;
++	target_mw = (port->current_limit * req_out_volt) / 1000;
+ 	if (target_mw < port->operating_snk_mw) {
+ 		ret = -EINVAL;
+ 		goto port_unlock;
+@@ -5473,10 +5564,10 @@ static int tcpm_pps_set_out_volt(struct tcpm_port *port, u16 out_volt)
+ 	}
+ 
+ 	/* Round down output voltage to align with PPS valid steps */
+-	out_volt = out_volt - (out_volt % RDO_PROG_VOLT_MV_STEP);
++	req_out_volt = req_out_volt - (req_out_volt % RDO_PROG_VOLT_MV_STEP);
+ 
+ 	reinit_completion(&port->pps_complete);
+-	port->pps_data.out_volt = out_volt;
++	port->pps_data.req_out_volt = req_out_volt;
+ 	port->pps_status = 0;
+ 	port->pps_pending = true;
+ 	mutex_unlock(&port->lock);
+@@ -5534,8 +5625,8 @@ static int tcpm_pps_activate(struct tcpm_port *port, bool activate)
+ 
+ 	/* Trigger PPS request or move back to standard PDO contract */
+ 	if (activate) {
+-		port->pps_data.out_volt = port->supply_voltage;
+-		port->pps_data.op_curr = port->current_limit;
++		port->pps_data.req_out_volt = port->supply_voltage;
++		port->pps_data.req_op_curr = port->current_limit;
+ 	}
+ 	mutex_unlock(&port->lock);
+ 
+diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c
+index 29bd1c5a283cd..4038104568f5a 100644
+--- a/drivers/usb/typec/tps6598x.c
++++ b/drivers/usb/typec/tps6598x.c
+@@ -614,8 +614,8 @@ static int tps6598x_probe(struct i2c_client *client)
+ 		return ret;
+ 
+ 	fwnode = device_get_named_child_node(&client->dev, "connector");
+-	if (IS_ERR(fwnode))
+-		return PTR_ERR(fwnode);
++	if (!fwnode)
++		return -ENODEV;
+ 
+ 	tps->role_sw = fwnode_usb_role_switch_get(fwnode);
+ 	if (IS_ERR(tps->role_sw)) {
+diff --git a/drivers/usb/usbip/vudc_sysfs.c b/drivers/usb/usbip/vudc_sysfs.c
+index f7633ee655a17..d1cf6b51bf85d 100644
+--- a/drivers/usb/usbip/vudc_sysfs.c
++++ b/drivers/usb/usbip/vudc_sysfs.c
+@@ -156,12 +156,14 @@ static ssize_t usbip_sockfd_store(struct device *dev,
+ 		tcp_rx = kthread_create(&v_rx_loop, &udc->ud, "vudc_rx");
+ 		if (IS_ERR(tcp_rx)) {
+ 			sockfd_put(socket);
++			mutex_unlock(&udc->ud.sysfs_lock);
+ 			return -EINVAL;
+ 		}
+ 		tcp_tx = kthread_create(&v_tx_loop, &udc->ud, "vudc_tx");
+ 		if (IS_ERR(tcp_tx)) {
+ 			kthread_stop(tcp_rx);
+ 			sockfd_put(socket);
++			mutex_unlock(&udc->ud.sysfs_lock);
+ 			return -EINVAL;
+ 		}
+ 
+diff --git a/drivers/vfio/fsl-mc/vfio_fsl_mc.c b/drivers/vfio/fsl-mc/vfio_fsl_mc.c
+index f27e25112c403..8722f5effacd4 100644
+--- a/drivers/vfio/fsl-mc/vfio_fsl_mc.c
++++ b/drivers/vfio/fsl-mc/vfio_fsl_mc.c
+@@ -568,23 +568,39 @@ static int vfio_fsl_mc_init_device(struct vfio_fsl_mc_device *vdev)
+ 		dev_err(&mc_dev->dev, "VFIO_FSL_MC: Failed to setup DPRC (%d)\n", ret);
+ 		goto out_nc_unreg;
+ 	}
++	return 0;
++
++out_nc_unreg:
++	bus_unregister_notifier(&fsl_mc_bus_type, &vdev->nb);
++	return ret;
++}
+ 
++static int vfio_fsl_mc_scan_container(struct fsl_mc_device *mc_dev)
++{
++	int ret;
++
++	/* non dprc devices do not scan for other devices */
++	if (!is_fsl_mc_bus_dprc(mc_dev))
++		return 0;
+ 	ret = dprc_scan_container(mc_dev, false);
+ 	if (ret) {
+-		dev_err(&mc_dev->dev, "VFIO_FSL_MC: Container scanning failed (%d)\n", ret);
+-		goto out_dprc_cleanup;
++		dev_err(&mc_dev->dev,
++			"VFIO_FSL_MC: Container scanning failed (%d)\n", ret);
++		dprc_remove_devices(mc_dev, NULL, 0);
++		return ret;
+ 	}
+-
+ 	return 0;
++}
++
++static void vfio_fsl_uninit_device(struct vfio_fsl_mc_device *vdev)
++{
++	struct fsl_mc_device *mc_dev = vdev->mc_dev;
++
++	if (!is_fsl_mc_bus_dprc(mc_dev))
++		return;
+ 
+-out_dprc_cleanup:
+-	dprc_remove_devices(mc_dev, NULL, 0);
+ 	dprc_cleanup(mc_dev);
+-out_nc_unreg:
+ 	bus_unregister_notifier(&fsl_mc_bus_type, &vdev->nb);
+-	vdev->nb.notifier_call = NULL;
+-
+-	return ret;
+ }
+ 
+ static int vfio_fsl_mc_probe(struct fsl_mc_device *mc_dev)
+@@ -607,29 +623,39 @@ static int vfio_fsl_mc_probe(struct fsl_mc_device *mc_dev)
+ 	}
+ 
+ 	vdev->mc_dev = mc_dev;
+-
+-	ret = vfio_add_group_dev(dev, &vfio_fsl_mc_ops, vdev);
+-	if (ret) {
+-		dev_err(dev, "VFIO_FSL_MC: Failed to add to vfio group\n");
+-		goto out_group_put;
+-	}
++	mutex_init(&vdev->igate);
+ 
+ 	ret = vfio_fsl_mc_reflck_attach(vdev);
+ 	if (ret)
+-		goto out_group_dev;
++		goto out_group_put;
+ 
+ 	ret = vfio_fsl_mc_init_device(vdev);
+ 	if (ret)
+ 		goto out_reflck;
+ 
+-	mutex_init(&vdev->igate);
++	ret = vfio_add_group_dev(dev, &vfio_fsl_mc_ops, vdev);
++	if (ret) {
++		dev_err(dev, "VFIO_FSL_MC: Failed to add to vfio group\n");
++		goto out_device;
++	}
+ 
++	/*
++	 * This triggers recursion into vfio_fsl_mc_probe() on another device
++	 * and the vfio_fsl_mc_reflck_attach() must succeed, which relies on the
++	 * vfio_add_group_dev() above. It has no impact on this vdev, so it is
++	 * safe to be after the vfio device is made live.
++	 */
++	ret = vfio_fsl_mc_scan_container(mc_dev);
++	if (ret)
++		goto out_group_dev;
+ 	return 0;
+ 
+-out_reflck:
+-	vfio_fsl_mc_reflck_put(vdev->reflck);
+ out_group_dev:
+ 	vfio_del_group_dev(dev);
++out_device:
++	vfio_fsl_uninit_device(vdev);
++out_reflck:
++	vfio_fsl_mc_reflck_put(vdev->reflck);
+ out_group_put:
+ 	vfio_iommu_group_put(group, dev);
+ 	return ret;
+@@ -646,16 +672,10 @@ static int vfio_fsl_mc_remove(struct fsl_mc_device *mc_dev)
+ 
+ 	mutex_destroy(&vdev->igate);
+ 
++	dprc_remove_devices(mc_dev, NULL, 0);
++	vfio_fsl_uninit_device(vdev);
+ 	vfio_fsl_mc_reflck_put(vdev->reflck);
+ 
+-	if (is_fsl_mc_bus_dprc(mc_dev)) {
+-		dprc_remove_devices(mc_dev, NULL, 0);
+-		dprc_cleanup(mc_dev);
+-	}
+-
+-	if (vdev->nb.notifier_call)
+-		bus_unregister_notifier(&fsl_mc_bus_type, &vdev->nb);
+-
+ 	vfio_iommu_group_put(mc_dev->dev.iommu_group, dev);
+ 
+ 	return 0;
+diff --git a/drivers/vfio/mdev/mdev_sysfs.c b/drivers/vfio/mdev/mdev_sysfs.c
+index 917fd84c1c6f2..367ff5412a387 100644
+--- a/drivers/vfio/mdev/mdev_sysfs.c
++++ b/drivers/vfio/mdev/mdev_sysfs.c
+@@ -105,6 +105,7 @@ static struct mdev_type *add_mdev_supported_type(struct mdev_parent *parent,
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	type->kobj.kset = parent->mdev_types_kset;
++	type->parent = parent;
+ 
+ 	ret = kobject_init_and_add(&type->kobj, &mdev_type_ktype, NULL,
+ 				   "%s-%s", dev_driver_string(parent->dev),
+@@ -132,7 +133,6 @@ static struct mdev_type *add_mdev_supported_type(struct mdev_parent *parent,
+ 	}
+ 
+ 	type->group = group;
+-	type->parent = parent;
+ 	return type;
+ 
+ attrs_failed:
+diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
+index 5023e23db3bcb..cb7f2dc09e9d4 100644
+--- a/drivers/vfio/pci/vfio_pci.c
++++ b/drivers/vfio/pci/vfio_pci.c
+@@ -1924,6 +1924,68 @@ static int vfio_pci_bus_notifier(struct notifier_block *nb,
+ 	return 0;
+ }
+ 
++static int vfio_pci_vf_init(struct vfio_pci_device *vdev)
++{
++	struct pci_dev *pdev = vdev->pdev;
++	int ret;
++
++	if (!pdev->is_physfn)
++		return 0;
++
++	vdev->vf_token = kzalloc(sizeof(*vdev->vf_token), GFP_KERNEL);
++	if (!vdev->vf_token)
++		return -ENOMEM;
++
++	mutex_init(&vdev->vf_token->lock);
++	uuid_gen(&vdev->vf_token->uuid);
++
++	vdev->nb.notifier_call = vfio_pci_bus_notifier;
++	ret = bus_register_notifier(&pci_bus_type, &vdev->nb);
++	if (ret) {
++		kfree(vdev->vf_token);
++		return ret;
++	}
++	return 0;
++}
++
++static void vfio_pci_vf_uninit(struct vfio_pci_device *vdev)
++{
++	if (!vdev->vf_token)
++		return;
++
++	bus_unregister_notifier(&pci_bus_type, &vdev->nb);
++	WARN_ON(vdev->vf_token->users);
++	mutex_destroy(&vdev->vf_token->lock);
++	kfree(vdev->vf_token);
++}
++
++static int vfio_pci_vga_init(struct vfio_pci_device *vdev)
++{
++	struct pci_dev *pdev = vdev->pdev;
++	int ret;
++
++	if (!vfio_pci_is_vga(pdev))
++		return 0;
++
++	ret = vga_client_register(pdev, vdev, NULL, vfio_pci_set_vga_decode);
++	if (ret)
++		return ret;
++	vga_set_legacy_decoding(pdev, vfio_pci_set_vga_decode(vdev, false));
++	return 0;
++}
++
++static void vfio_pci_vga_uninit(struct vfio_pci_device *vdev)
++{
++	struct pci_dev *pdev = vdev->pdev;
++
++	if (!vfio_pci_is_vga(pdev))
++		return;
++	vga_client_register(pdev, NULL, NULL, NULL);
++	vga_set_legacy_decoding(pdev, VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM |
++					      VGA_RSRC_LEGACY_IO |
++					      VGA_RSRC_LEGACY_MEM);
++}
++
+ static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+ 	struct vfio_pci_device *vdev;
+@@ -1970,35 +2032,15 @@ static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	INIT_LIST_HEAD(&vdev->vma_list);
+ 	init_rwsem(&vdev->memory_lock);
+ 
+-	ret = vfio_add_group_dev(&pdev->dev, &vfio_pci_ops, vdev);
++	ret = vfio_pci_reflck_attach(vdev);
+ 	if (ret)
+ 		goto out_free;
+-
+-	ret = vfio_pci_reflck_attach(vdev);
++	ret = vfio_pci_vf_init(vdev);
+ 	if (ret)
+-		goto out_del_group_dev;
+-
+-	if (pdev->is_physfn) {
+-		vdev->vf_token = kzalloc(sizeof(*vdev->vf_token), GFP_KERNEL);
+-		if (!vdev->vf_token) {
+-			ret = -ENOMEM;
+-			goto out_reflck;
+-		}
+-
+-		mutex_init(&vdev->vf_token->lock);
+-		uuid_gen(&vdev->vf_token->uuid);
+-
+-		vdev->nb.notifier_call = vfio_pci_bus_notifier;
+-		ret = bus_register_notifier(&pci_bus_type, &vdev->nb);
+-		if (ret)
+-			goto out_vf_token;
+-	}
+-
+-	if (vfio_pci_is_vga(pdev)) {
+-		vga_client_register(pdev, vdev, NULL, vfio_pci_set_vga_decode);
+-		vga_set_legacy_decoding(pdev,
+-					vfio_pci_set_vga_decode(vdev, false));
+-	}
++		goto out_reflck;
++	ret = vfio_pci_vga_init(vdev);
++	if (ret)
++		goto out_vf;
+ 
+ 	vfio_pci_probe_power_state(vdev);
+ 
+@@ -2016,15 +2058,20 @@ static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		vfio_pci_set_power_state(vdev, PCI_D3hot);
+ 	}
+ 
+-	return ret;
++	ret = vfio_add_group_dev(&pdev->dev, &vfio_pci_ops, vdev);
++	if (ret)
++		goto out_power;
++	return 0;
+ 
+-out_vf_token:
+-	kfree(vdev->vf_token);
++out_power:
++	if (!disable_idle_d3)
++		vfio_pci_set_power_state(vdev, PCI_D0);
++out_vf:
++	vfio_pci_vf_uninit(vdev);
+ out_reflck:
+ 	vfio_pci_reflck_put(vdev->reflck);
+-out_del_group_dev:
+-	vfio_del_group_dev(&pdev->dev);
+ out_free:
++	kfree(vdev->pm_save);
+ 	kfree(vdev);
+ out_group_put:
+ 	vfio_iommu_group_put(group, &pdev->dev);
+@@ -2041,33 +2088,19 @@ static void vfio_pci_remove(struct pci_dev *pdev)
+ 	if (!vdev)
+ 		return;
+ 
+-	if (vdev->vf_token) {
+-		WARN_ON(vdev->vf_token->users);
+-		mutex_destroy(&vdev->vf_token->lock);
+-		kfree(vdev->vf_token);
+-	}
+-
+-	if (vdev->nb.notifier_call)
+-		bus_unregister_notifier(&pci_bus_type, &vdev->nb);
+-
++	vfio_pci_vf_uninit(vdev);
+ 	vfio_pci_reflck_put(vdev->reflck);
++	vfio_pci_vga_uninit(vdev);
+ 
+ 	vfio_iommu_group_put(pdev->dev.iommu_group, &pdev->dev);
+-	kfree(vdev->region);
+-	mutex_destroy(&vdev->ioeventfds_lock);
+ 
+ 	if (!disable_idle_d3)
+ 		vfio_pci_set_power_state(vdev, PCI_D0);
+ 
++	mutex_destroy(&vdev->ioeventfds_lock);
++	kfree(vdev->region);
+ 	kfree(vdev->pm_save);
+ 	kfree(vdev);
+-
+-	if (vfio_pci_is_vga(pdev)) {
+-		vga_client_register(pdev, NULL, NULL, NULL);
+-		vga_set_legacy_decoding(pdev,
+-				VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM |
+-				VGA_RSRC_LEGACY_IO | VGA_RSRC_LEGACY_MEM);
+-	}
+ }
+ 
+ static pci_ers_result_t vfio_pci_aer_err_detected(struct pci_dev *pdev,
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 17548c1faf029..31251d11d576c 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -1342,6 +1342,7 @@ static int afs_mkdir(struct user_namespace *mnt_userns, struct inode *dir,
+ 
+ 	afs_op_set_vnode(op, 0, dvnode);
+ 	op->file[0].dv_delta = 1;
++	op->file[0].modification = true;
+ 	op->file[0].update_ctime = true;
+ 	op->dentry	= dentry;
+ 	op->create.mode	= S_IFDIR | mode;
+@@ -1423,6 +1424,7 @@ static int afs_rmdir(struct inode *dir, struct dentry *dentry)
+ 
+ 	afs_op_set_vnode(op, 0, dvnode);
+ 	op->file[0].dv_delta = 1;
++	op->file[0].modification = true;
+ 	op->file[0].update_ctime = true;
+ 
+ 	op->dentry	= dentry;
+@@ -1559,6 +1561,7 @@ static int afs_unlink(struct inode *dir, struct dentry *dentry)
+ 
+ 	afs_op_set_vnode(op, 0, dvnode);
+ 	op->file[0].dv_delta = 1;
++	op->file[0].modification = true;
+ 	op->file[0].update_ctime = true;
+ 
+ 	/* Try to make sure we have a callback promise on the victim. */
+@@ -1641,6 +1644,7 @@ static int afs_create(struct user_namespace *mnt_userns, struct inode *dir,
+ 
+ 	afs_op_set_vnode(op, 0, dvnode);
+ 	op->file[0].dv_delta = 1;
++	op->file[0].modification = true;
+ 	op->file[0].update_ctime = true;
+ 
+ 	op->dentry	= dentry;
+@@ -1715,6 +1719,7 @@ static int afs_link(struct dentry *from, struct inode *dir,
+ 	afs_op_set_vnode(op, 0, dvnode);
+ 	afs_op_set_vnode(op, 1, vnode);
+ 	op->file[0].dv_delta = 1;
++	op->file[0].modification = true;
+ 	op->file[0].update_ctime = true;
+ 	op->file[1].update_ctime = true;
+ 
+@@ -1910,6 +1915,8 @@ static int afs_rename(struct user_namespace *mnt_userns, struct inode *old_dir,
+ 	afs_op_set_vnode(op, 1, new_dvnode); /* May be same as orig_dvnode */
+ 	op->file[0].dv_delta = 1;
+ 	op->file[1].dv_delta = 1;
++	op->file[0].modification = true;
++	op->file[1].modification = true;
+ 	op->file[0].update_ctime = true;
+ 	op->file[1].update_ctime = true;
+ 
+diff --git a/fs/afs/dir_silly.c b/fs/afs/dir_silly.c
+index 04f75a44f2432..dae9a57d7ec0c 100644
+--- a/fs/afs/dir_silly.c
++++ b/fs/afs/dir_silly.c
+@@ -73,6 +73,8 @@ static int afs_do_silly_rename(struct afs_vnode *dvnode, struct afs_vnode *vnode
+ 	afs_op_set_vnode(op, 1, dvnode);
+ 	op->file[0].dv_delta = 1;
+ 	op->file[1].dv_delta = 1;
++	op->file[0].modification = true;
++	op->file[1].modification = true;
+ 	op->file[0].update_ctime = true;
+ 	op->file[1].update_ctime = true;
+ 
+@@ -201,6 +203,7 @@ static int afs_do_silly_unlink(struct afs_vnode *dvnode, struct afs_vnode *vnode
+ 	afs_op_set_vnode(op, 0, dvnode);
+ 	afs_op_set_vnode(op, 1, vnode);
+ 	op->file[0].dv_delta = 1;
++	op->file[0].modification = true;
+ 	op->file[0].update_ctime = true;
+ 	op->file[1].op_unlinked = true;
+ 	op->file[1].update_ctime = true;
+diff --git a/fs/afs/fs_operation.c b/fs/afs/fs_operation.c
+index 71c58723763d2..a82515b47350e 100644
+--- a/fs/afs/fs_operation.c
++++ b/fs/afs/fs_operation.c
+@@ -118,6 +118,8 @@ static void afs_prepare_vnode(struct afs_operation *op, struct afs_vnode_param *
+ 		vp->cb_break_before	= afs_calc_vnode_cb_break(vnode);
+ 		if (vnode->lock_state != AFS_VNODE_LOCK_NONE)
+ 			op->flags	|= AFS_OPERATION_CUR_ONLY;
++		if (vp->modification)
++			set_bit(AFS_VNODE_MODIFYING, &vnode->flags);
+ 	}
+ 
+ 	if (vp->fid.vnode)
+@@ -223,6 +225,10 @@ int afs_put_operation(struct afs_operation *op)
+ 
+ 	if (op->ops && op->ops->put)
+ 		op->ops->put(op);
++	if (op->file[0].modification)
++		clear_bit(AFS_VNODE_MODIFYING, &op->file[0].vnode->flags);
++	if (op->file[1].modification && op->file[1].vnode != op->file[0].vnode)
++		clear_bit(AFS_VNODE_MODIFYING, &op->file[1].vnode->flags);
+ 	if (op->file[0].put_vnode)
+ 		iput(&op->file[0].vnode->vfs_inode);
+ 	if (op->file[1].put_vnode)
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index 12be88716e4c9..fddf7d54e0b76 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -102,13 +102,13 @@ static int afs_inode_init_from_status(struct afs_operation *op,
+ 
+ 	switch (status->type) {
+ 	case AFS_FTYPE_FILE:
+-		inode->i_mode	= S_IFREG | status->mode;
++		inode->i_mode	= S_IFREG | (status->mode & S_IALLUGO);
+ 		inode->i_op	= &afs_file_inode_operations;
+ 		inode->i_fop	= &afs_file_operations;
+ 		inode->i_mapping->a_ops	= &afs_fs_aops;
+ 		break;
+ 	case AFS_FTYPE_DIR:
+-		inode->i_mode	= S_IFDIR | status->mode;
++		inode->i_mode	= S_IFDIR |  (status->mode & S_IALLUGO);
+ 		inode->i_op	= &afs_dir_inode_operations;
+ 		inode->i_fop	= &afs_dir_file_operations;
+ 		inode->i_mapping->a_ops	= &afs_dir_aops;
+@@ -198,7 +198,7 @@ static void afs_apply_status(struct afs_operation *op,
+ 	if (status->mode != vnode->status.mode) {
+ 		mode = inode->i_mode;
+ 		mode &= ~S_IALLUGO;
+-		mode |= status->mode;
++		mode |= status->mode & S_IALLUGO;
+ 		WRITE_ONCE(inode->i_mode, mode);
+ 	}
+ 
+@@ -293,8 +293,9 @@ void afs_vnode_commit_status(struct afs_operation *op, struct afs_vnode_param *v
+ 			op->flags &= ~AFS_OPERATION_DIR_CONFLICT;
+ 		}
+ 	} else if (vp->scb.have_status) {
+-		if (vp->dv_before + vp->dv_delta != vp->scb.status.data_version &&
+-		    vp->speculative)
++		if (vp->speculative &&
++		    (test_bit(AFS_VNODE_MODIFYING, &vnode->flags) ||
++		     vp->dv_before != vnode->status.data_version))
+ 			/* Ignore the result of a speculative bulk status fetch
+ 			 * if it splits around a modification op, thereby
+ 			 * appearing to regress the data version.
+@@ -910,6 +911,7 @@ int afs_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
+ 	}
+ 	op->ctime = attr->ia_ctime;
+ 	op->file[0].update_ctime = 1;
++	op->file[0].modification = true;
+ 
+ 	op->ops = &afs_setattr_operation;
+ 	ret = afs_do_sync_operation(op);
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 1627b18728125..be981a9a1add2 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -640,6 +640,7 @@ struct afs_vnode {
+ #define AFS_VNODE_PSEUDODIR	7 		/* set if Vnode is a pseudo directory */
+ #define AFS_VNODE_NEW_CONTENT	8		/* Set if file has new content (create/trunc-0) */
+ #define AFS_VNODE_SILLY_DELETED	9		/* Set if file has been silly-deleted */
++#define AFS_VNODE_MODIFYING	10		/* Set if we're performing a modification op */
+ 
+ 	struct list_head	wb_keys;	/* List of keys available for writeback */
+ 	struct list_head	pending_locks;	/* locks waiting to be granted */
+@@ -756,6 +757,7 @@ struct afs_vnode_param {
+ 	bool			set_size:1;	/* Must update i_size */
+ 	bool			op_unlinked:1;	/* True if file was unlinked by op */
+ 	bool			speculative:1;	/* T if speculative status fetch (no vnode lock) */
++	bool			modification:1;	/* Set if the content gets modified */
+ };
+ 
+ /*
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index eb737ed63afb6..ebe3b6493fcef 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -450,6 +450,7 @@ static int afs_store_data(struct address_space *mapping,
+ 	afs_op_set_vnode(op, 0, vnode);
+ 	op->file[0].dv_delta = 1;
+ 	op->store.mapping = mapping;
++	op->file[0].modification = true;
+ 	op->store.first = first;
+ 	op->store.last = last;
+ 	op->store.first_offset = offset;
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 92a3686277918..72c4b66ed5163 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3165,20 +3165,22 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
+ 	 */
+ 	mutex_unlock(&root->log_mutex);
+ 
+-	btrfs_init_log_ctx(&root_log_ctx, NULL);
+-
+-	mutex_lock(&log_root_tree->log_mutex);
+-
+ 	if (btrfs_is_zoned(fs_info)) {
++		mutex_lock(&fs_info->tree_root->log_mutex);
+ 		if (!log_root_tree->node) {
+ 			ret = btrfs_alloc_log_tree_node(trans, log_root_tree);
+ 			if (ret) {
+-				mutex_unlock(&log_root_tree->log_mutex);
++				mutex_unlock(&fs_info->tree_log_mutex);
+ 				goto out;
+ 			}
+ 		}
++		mutex_unlock(&fs_info->tree_root->log_mutex);
+ 	}
+ 
++	btrfs_init_log_ctx(&root_log_ctx, NULL);
++
++	mutex_lock(&log_root_tree->log_mutex);
++
+ 	index2 = log_root_tree->log_transid % 2;
+ 	list_add_tail(&root_log_ctx.list, &log_root_tree->log_ctxs[index2]);
+ 	root_log_ctx.log_transid = log_root_tree->log_transid;
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 1c6810bbaf8b5..3912eda7905f6 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4989,6 +4989,8 @@ static void init_alloc_chunk_ctl_policy_zoned(
+ 		ctl->max_chunk_size = 2 * ctl->max_stripe_size;
+ 		ctl->devs_max = min_t(int, ctl->devs_max,
+ 				      BTRFS_MAX_DEVS_SYS_CHUNK);
++	} else {
++		BUG();
+ 	}
+ 
+ 	/* We don't want a chunk larger than 10% of writable space */
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index 372c34ff8594f..f7d2c52791f8f 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -908,6 +908,7 @@ static int accept_from_sock(struct listen_connection *con)
+ 			result = dlm_con_init(othercon, nodeid);
+ 			if (result < 0) {
+ 				kfree(othercon);
++				mutex_unlock(&newcon->sock_mutex);
+ 				goto accept_err;
+ 			}
+ 
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index c0fee830a34ed..a5ceccc5ef00f 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -2233,11 +2233,8 @@ static long fuse_dev_ioctl(struct file *file, unsigned int cmd,
+ 	int oldfd;
+ 	struct fuse_dev *fud = NULL;
+ 
+-	if (_IOC_TYPE(cmd) != FUSE_DEV_IOC_MAGIC)
+-		return -ENOTTY;
+-
+-	switch (_IOC_NR(cmd)) {
+-	case _IOC_NR(FUSE_DEV_IOC_CLONE):
++	switch (cmd) {
++	case FUSE_DEV_IOC_CLONE:
+ 		res = -EFAULT;
+ 		if (!get_user(oldfd, (__u32 __user *)arg)) {
+ 			struct file *old = fget(oldfd);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 0b5fbbd969cbd..144056b0cac92 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -238,7 +238,7 @@ struct fixed_rsrc_data {
+ struct io_buffer {
+ 	struct list_head list;
+ 	__u64 addr;
+-	__s32 len;
++	__u32 len;
+ 	__u16 bid;
+ };
+ 
+@@ -614,7 +614,7 @@ struct io_splice {
+ struct io_provide_buf {
+ 	struct file			*file;
+ 	__u64				addr;
+-	__s32				len;
++	__u32				len;
+ 	__u32				bgid;
+ 	__u16				nbufs;
+ 	__u16				bid;
+@@ -3979,7 +3979,7 @@ static int io_remove_buffers(struct io_kiocb *req, unsigned int issue_flags)
+ static int io_provide_buffers_prep(struct io_kiocb *req,
+ 				   const struct io_uring_sqe *sqe)
+ {
+-	unsigned long size;
++	unsigned long size, tmp_check;
+ 	struct io_provide_buf *p = &req->pbuf;
+ 	u64 tmp;
+ 
+@@ -3993,6 +3993,12 @@ static int io_provide_buffers_prep(struct io_kiocb *req,
+ 	p->addr = READ_ONCE(sqe->addr);
+ 	p->len = READ_ONCE(sqe->len);
+ 
++	if (check_mul_overflow((unsigned long)p->len, (unsigned long)p->nbufs,
++				&size))
++		return -EOVERFLOW;
++	if (check_add_overflow((unsigned long)p->addr, size, &tmp_check))
++		return -EOVERFLOW;
++
+ 	size = (unsigned long)p->len * p->nbufs;
+ 	if (!access_ok(u64_to_user_ptr(p->addr), size))
+ 		return -EFAULT;
+@@ -4017,7 +4023,7 @@ static int io_add_buffers(struct io_provide_buf *pbuf, struct io_buffer **head)
+ 			break;
+ 
+ 		buf->addr = addr;
+-		buf->len = pbuf->len;
++		buf->len = min_t(__u32, pbuf->len, MAX_RW_COUNT);
+ 		buf->bid = bid;
+ 		addr += pbuf->len;
+ 		bid++;
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index dd9f38d072dd6..e13c4c81fb89d 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1538,8 +1538,8 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 		if (!nfs4_init_copy_state(nn, copy))
+ 			goto out_err;
+ 		refcount_set(&async_copy->refcount, 1);
+-		memcpy(&copy->cp_res.cb_stateid, &copy->cp_stateid,
+-			sizeof(copy->cp_stateid));
++		memcpy(&copy->cp_res.cb_stateid, &copy->cp_stateid.stid,
++			sizeof(copy->cp_res.cb_stateid));
+ 		dup_copy_fields(copy, async_copy);
+ 		async_copy->copy_task = kthread_create(nfsd4_do_async_copy,
+ 				async_copy, "%s", "copy thread");
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 0b2891c6c71e0..2846b943e80c1 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -932,7 +932,7 @@ static int ovl_copy_up_one(struct dentry *parent, struct dentry *dentry,
+ static int ovl_copy_up_flags(struct dentry *dentry, int flags)
+ {
+ 	int err = 0;
+-	const struct cred *old_cred = ovl_override_creds(dentry->d_sb);
++	const struct cred *old_cred;
+ 	bool disconnected = (dentry->d_flags & DCACHE_DISCONNECTED);
+ 
+ 	/*
+@@ -943,6 +943,7 @@ static int ovl_copy_up_flags(struct dentry *dentry, int flags)
+ 	if (WARN_ON(disconnected && d_is_dir(dentry)))
+ 		return -EIO;
+ 
++	old_cred = ovl_override_creds(dentry->d_sb);
+ 	while (!err) {
+ 		struct dentry *next;
+ 		struct dentry *parent = NULL;
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 95cff83786a55..2322f854533cf 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -319,9 +319,6 @@ int ovl_check_setxattr(struct dentry *dentry, struct dentry *upperdentry,
+ 		       enum ovl_xattr ox, const void *value, size_t size,
+ 		       int xerr);
+ int ovl_set_impure(struct dentry *dentry, struct dentry *upperdentry);
+-void ovl_set_flag(unsigned long flag, struct inode *inode);
+-void ovl_clear_flag(unsigned long flag, struct inode *inode);
+-bool ovl_test_flag(unsigned long flag, struct inode *inode);
+ bool ovl_inuse_trylock(struct dentry *dentry);
+ void ovl_inuse_unlock(struct dentry *dentry);
+ bool ovl_is_inuse(struct dentry *dentry);
+@@ -335,6 +332,21 @@ char *ovl_get_redirect_xattr(struct ovl_fs *ofs, struct dentry *dentry,
+ 			     int padding);
+ int ovl_sync_status(struct ovl_fs *ofs);
+ 
++static inline void ovl_set_flag(unsigned long flag, struct inode *inode)
++{
++	set_bit(flag, &OVL_I(inode)->flags);
++}
++
++static inline void ovl_clear_flag(unsigned long flag, struct inode *inode)
++{
++	clear_bit(flag, &OVL_I(inode)->flags);
++}
++
++static inline bool ovl_test_flag(unsigned long flag, struct inode *inode)
++{
++	return test_bit(flag, &OVL_I(inode)->flags);
++}
++
+ static inline bool ovl_is_impuredir(struct super_block *sb,
+ 				    struct dentry *dentry)
+ {
+@@ -439,6 +451,18 @@ int ovl_workdir_cleanup(struct inode *dir, struct vfsmount *mnt,
+ 			struct dentry *dentry, int level);
+ int ovl_indexdir_cleanup(struct ovl_fs *ofs);
+ 
++/*
++ * Can we iterate real dir directly?
++ *
++ * Non-merge dir may contain whiteouts from a time it was a merge upper, before
++ * lower dir was removed under it and possibly before it was rotated from upper
++ * to lower layer.
++ */
++static inline bool ovl_dir_is_real(struct dentry *dir)
++{
++	return !ovl_test_flag(OVL_WHITEOUTS, d_inode(dir));
++}
++
+ /* inode.c */
+ int ovl_set_nlink_upper(struct dentry *dentry);
+ int ovl_set_nlink_lower(struct dentry *dentry);
+diff --git a/fs/overlayfs/readdir.c b/fs/overlayfs/readdir.c
+index f404a78e6b607..cc1e802570644 100644
+--- a/fs/overlayfs/readdir.c
++++ b/fs/overlayfs/readdir.c
+@@ -319,18 +319,6 @@ static inline int ovl_dir_read(struct path *realpath,
+ 	return err;
+ }
+ 
+-/*
+- * Can we iterate real dir directly?
+- *
+- * Non-merge dir may contain whiteouts from a time it was a merge upper, before
+- * lower dir was removed under it and possibly before it was rotated from upper
+- * to lower layer.
+- */
+-static bool ovl_dir_is_real(struct dentry *dir)
+-{
+-	return !ovl_test_flag(OVL_WHITEOUTS, d_inode(dir));
+-}
+-
+ static void ovl_dir_reset(struct file *file)
+ {
+ 	struct ovl_dir_file *od = file->private_data;
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 8cf3433350296..787ce7c38fba5 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -380,6 +380,8 @@ static int ovl_show_options(struct seq_file *m, struct dentry *dentry)
+ 			   ofs->config.metacopy ? "on" : "off");
+ 	if (ofs->config.ovl_volatile)
+ 		seq_puts(m, ",volatile");
++	if (ofs->config.userxattr)
++		seq_puts(m, ",userxattr");
+ 	return 0;
+ }
+ 
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index 7f5a01a11f97d..404a0a32ddf6b 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -422,18 +422,20 @@ void ovl_inode_update(struct inode *inode, struct dentry *upperdentry)
+ 	}
+ }
+ 
+-static void ovl_dentry_version_inc(struct dentry *dentry, bool impurity)
++static void ovl_dir_version_inc(struct dentry *dentry, bool impurity)
+ {
+ 	struct inode *inode = d_inode(dentry);
+ 
+ 	WARN_ON(!inode_is_locked(inode));
++	WARN_ON(!d_is_dir(dentry));
+ 	/*
+-	 * Version is used by readdir code to keep cache consistent.  For merge
+-	 * dirs all changes need to be noted.  For non-merge dirs, cache only
+-	 * contains impure (ones which have been copied up and have origins)
+-	 * entries, so only need to note changes to impure entries.
++	 * Version is used by readdir code to keep cache consistent.
++	 * For merge dirs (or dirs with origin) all changes need to be noted.
++	 * For non-merge dirs, cache contains only impure entries (i.e. ones
++	 * which have been copied up and have origins), so only need to note
++	 * changes to impure entries.
+ 	 */
+-	if (OVL_TYPE_MERGE(ovl_path_type(dentry)) || impurity)
++	if (!ovl_dir_is_real(dentry) || impurity)
+ 		OVL_I(inode)->version++;
+ }
+ 
+@@ -442,7 +444,7 @@ void ovl_dir_modified(struct dentry *dentry, bool impurity)
+ 	/* Copy mtime/ctime */
+ 	ovl_copyattr(d_inode(ovl_dentry_upper(dentry)), d_inode(dentry));
+ 
+-	ovl_dentry_version_inc(dentry, impurity);
++	ovl_dir_version_inc(dentry, impurity);
+ }
+ 
+ u64 ovl_dentry_version_get(struct dentry *dentry)
+@@ -638,21 +640,6 @@ int ovl_set_impure(struct dentry *dentry, struct dentry *upperdentry)
+ 	return err;
+ }
+ 
+-void ovl_set_flag(unsigned long flag, struct inode *inode)
+-{
+-	set_bit(flag, &OVL_I(inode)->flags);
+-}
+-
+-void ovl_clear_flag(unsigned long flag, struct inode *inode)
+-{
+-	clear_bit(flag, &OVL_I(inode)->flags);
+-}
+-
+-bool ovl_test_flag(unsigned long flag, struct inode *inode)
+-{
+-	return test_bit(flag, &OVL_I(inode)->flags);
+-}
+-
+ /**
+  * Caller must hold a reference to inode to prevent it from being freed while
+  * it is marked inuse.
+diff --git a/fs/proc/array.c b/fs/proc/array.c
+index bb87e4d89cd8f..7ec59171f197f 100644
+--- a/fs/proc/array.c
++++ b/fs/proc/array.c
+@@ -342,8 +342,10 @@ static inline void task_seccomp(struct seq_file *m, struct task_struct *p)
+ 	seq_put_decimal_ull(m, "NoNewPrivs:\t", task_no_new_privs(p));
+ #ifdef CONFIG_SECCOMP
+ 	seq_put_decimal_ull(m, "\nSeccomp:\t", p->seccomp.mode);
++#ifdef CONFIG_SECCOMP_FILTER
+ 	seq_put_decimal_ull(m, "\nSeccomp_filters:\t",
+ 			    atomic_read(&p->seccomp.filter_count));
++#endif
+ #endif
+ 	seq_puts(m, "\nSpeculation_Store_Bypass:\t");
+ 	switch (arch_prctl_spec_ctrl_get(p, PR_SPEC_STORE_BYPASS)) {
+diff --git a/fs/xfs/libxfs/xfs_attr.c b/fs/xfs/libxfs/xfs_attr.c
+index 472b3039eabbf..902e5f7e66423 100644
+--- a/fs/xfs/libxfs/xfs_attr.c
++++ b/fs/xfs/libxfs/xfs_attr.c
+@@ -928,6 +928,7 @@ restart:
+ 	 * Search to see if name already exists, and get back a pointer
+ 	 * to where it should go.
+ 	 */
++	error = 0;
+ 	retval = xfs_attr_node_hasname(args, &state);
+ 	if (retval != -ENOATTR && retval != -EEXIST)
+ 		goto out;
+diff --git a/include/crypto/internal/poly1305.h b/include/crypto/internal/poly1305.h
+index 064e52ca52480..196aa769f2968 100644
+--- a/include/crypto/internal/poly1305.h
++++ b/include/crypto/internal/poly1305.h
+@@ -18,7 +18,8 @@
+  * only the ε-almost-∆-universal hash function (not the full MAC) is computed.
+  */
+ 
+-void poly1305_core_setkey(struct poly1305_core_key *key, const u8 *raw_key);
++void poly1305_core_setkey(struct poly1305_core_key *key,
++			  const u8 raw_key[POLY1305_BLOCK_SIZE]);
+ static inline void poly1305_core_init(struct poly1305_state *state)
+ {
+ 	*state = (struct poly1305_state){};
+diff --git a/include/crypto/poly1305.h b/include/crypto/poly1305.h
+index f1f67fc749cf4..090692ec3bc73 100644
+--- a/include/crypto/poly1305.h
++++ b/include/crypto/poly1305.h
+@@ -58,8 +58,10 @@ struct poly1305_desc_ctx {
+ 	};
+ };
+ 
+-void poly1305_init_arch(struct poly1305_desc_ctx *desc, const u8 *key);
+-void poly1305_init_generic(struct poly1305_desc_ctx *desc, const u8 *key);
++void poly1305_init_arch(struct poly1305_desc_ctx *desc,
++			const u8 key[POLY1305_KEY_SIZE]);
++void poly1305_init_generic(struct poly1305_desc_ctx *desc,
++			   const u8 key[POLY1305_KEY_SIZE]);
+ 
+ static inline void poly1305_init(struct poly1305_desc_ctx *desc, const u8 *key)
+ {
+diff --git a/include/keys/trusted-type.h b/include/keys/trusted-type.h
+index a94c03a61d8f9..b2ed3481c6a02 100644
+--- a/include/keys/trusted-type.h
++++ b/include/keys/trusted-type.h
+@@ -30,6 +30,7 @@ struct trusted_key_options {
+ 	uint16_t keytype;
+ 	uint32_t keyhandle;
+ 	unsigned char keyauth[TPM_DIGEST_SIZE];
++	uint32_t blobauth_len;
+ 	unsigned char blobauth[TPM_DIGEST_SIZE];
+ 	uint32_t pcrinfo_len;
+ 	unsigned char pcrinfo[MAX_PCRINFO_SIZE];
+diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
+index 706b68d1359be..13d1f4c14d7ba 100644
+--- a/include/linux/dma-iommu.h
++++ b/include/linux/dma-iommu.h
+@@ -40,6 +40,8 @@ void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list);
+ void iommu_dma_free_cpu_cached_iovas(unsigned int cpu,
+ 		struct iommu_domain *domain);
+ 
++extern bool iommu_dma_forcedac;
++
+ #else /* CONFIG_IOMMU_DMA */
+ 
+ struct iommu_domain;
+diff --git a/include/linux/firmware/xlnx-zynqmp.h b/include/linux/firmware/xlnx-zynqmp.h
+index 71177b17eee5e..66e2423d9feb7 100644
+--- a/include/linux/firmware/xlnx-zynqmp.h
++++ b/include/linux/firmware/xlnx-zynqmp.h
+@@ -354,11 +354,6 @@ int zynqmp_pm_read_pggs(u32 index, u32 *value);
+ int zynqmp_pm_system_shutdown(const u32 type, const u32 subtype);
+ int zynqmp_pm_set_boot_health_status(u32 value);
+ #else
+-static inline struct zynqmp_eemi_ops *zynqmp_pm_get_eemi_ops(void)
+-{
+-	return ERR_PTR(-ENODEV);
+-}
+-
+ static inline int zynqmp_pm_get_api_version(u32 *version)
+ {
+ 	return -ENODEV;
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index 286de0520574e..ecf0032a09954 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -624,8 +624,17 @@ void gpiochip_irq_domain_deactivate(struct irq_domain *domain,
+ bool gpiochip_irqchip_irq_valid(const struct gpio_chip *gc,
+ 				unsigned int offset);
+ 
++#ifdef CONFIG_GPIOLIB_IRQCHIP
+ int gpiochip_irqchip_add_domain(struct gpio_chip *gc,
+ 				struct irq_domain *domain);
++#else
++static inline int gpiochip_irqchip_add_domain(struct gpio_chip *gc,
++					      struct irq_domain *domain)
++{
++	WARN_ON(1);
++	return -EINVAL;
++}
++#endif
+ 
+ int gpiochip_generic_request(struct gpio_chip *gc, unsigned int offset);
+ void gpiochip_generic_free(struct gpio_chip *gc, unsigned int offset);
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index ef702b3f56e38..3e33eb14118c8 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -262,6 +262,8 @@ struct hid_item {
+ #define HID_CP_SELECTION	0x000c0080
+ #define HID_CP_MEDIASELECTION	0x000c0087
+ #define HID_CP_SELECTDISC	0x000c00ba
++#define HID_CP_VOLUMEUP		0x000c00e9
++#define HID_CP_VOLUMEDOWN	0x000c00ea
+ #define HID_CP_PLAYBACKSPEED	0x000c00f1
+ #define HID_CP_PROXIMITY	0x000c0109
+ #define HID_CP_SPEAKERSYSTEM	0x000c0160
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index 1bc46b88711a9..d1f32b33415a5 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -372,6 +372,7 @@ enum {
+ /* PASID cache invalidation granu */
+ #define QI_PC_ALL_PASIDS	0
+ #define QI_PC_PASID_SEL		1
++#define QI_PC_GLOBAL		3
+ 
+ #define QI_EIOTLB_ADDR(addr)	((u64)(addr) & VTD_PAGE_MASK)
+ #define QI_EIOTLB_IH(ih)	(((u64)ih) << 6)
+diff --git a/include/linux/iommu.h b/include/linux/iommu.h
+index 5e7fe519430af..9ca6e6b8084dc 100644
+--- a/include/linux/iommu.h
++++ b/include/linux/iommu.h
+@@ -547,7 +547,7 @@ static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
+ 	 * structure can be rewritten.
+ 	 */
+ 	if (gather->pgsize != size ||
+-	    end < gather->start || start > gather->end) {
++	    end + 1 < gather->start || start > gather->end + 1) {
+ 		if (gather->pgsize)
+ 			iommu_iotlb_sync(domain, gather);
+ 		gather->pgsize = size;
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 1b65e7204344a..99dccea4293c6 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -192,8 +192,8 @@ int kvm_io_bus_read(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
+ 		    int len, void *val);
+ int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
+ 			    int len, struct kvm_io_device *dev);
+-void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+-			       struct kvm_io_device *dev);
++int kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
++			      struct kvm_io_device *dev);
+ struct kvm_io_device *kvm_io_bus_get_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+ 					 gpa_t addr);
+ 
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 53b89631a1d9d..ab07f09f2bad9 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -1226,7 +1226,7 @@ enum {
+ 	MLX5_TRIGGERED_CMD_COMP = (u64)1 << 32,
+ };
+ 
+-static inline bool mlx5_is_roce_enabled(struct mlx5_core_dev *dev)
++static inline bool mlx5_is_roce_init_enabled(struct mlx5_core_dev *dev)
+ {
+ 	struct devlink *devlink = priv_to_devlink(dev);
+ 	union devlink_param_value val;
+diff --git a/include/linux/platform_device.h b/include/linux/platform_device.h
+index 3f23f6e430bfa..cd81e060863c9 100644
+--- a/include/linux/platform_device.h
++++ b/include/linux/platform_device.h
+@@ -359,4 +359,7 @@ static inline int is_sh_early_platform_device(struct platform_device *pdev)
+ }
+ #endif /* CONFIG_SUPERH */
+ 
++/* For now only SuperH uses it */
++void early_platform_cleanup(void);
++
+ #endif /* _PLATFORM_DEVICE_H_ */
+diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
+index b492ae00cc908..6c08a085367bf 100644
+--- a/include/linux/pm_runtime.h
++++ b/include/linux/pm_runtime.h
+@@ -265,7 +265,7 @@ static inline void pm_runtime_no_callbacks(struct device *dev) {}
+ static inline void pm_runtime_irq_safe(struct device *dev) {}
+ static inline bool pm_runtime_is_irq_safe(struct device *dev) { return false; }
+ 
+-static inline bool pm_runtime_callbacks_present(struct device *dev) { return false; }
++static inline bool pm_runtime_has_no_callbacks(struct device *dev) { return false; }
+ static inline void pm_runtime_mark_last_busy(struct device *dev) {}
+ static inline void __pm_runtime_use_autosuspend(struct device *dev,
+ 						bool use) {}
+diff --git a/include/linux/smp.h b/include/linux/smp.h
+index 70c6f6284dcf6..238a3f97a415b 100644
+--- a/include/linux/smp.h
++++ b/include/linux/smp.h
+@@ -73,7 +73,7 @@ void on_each_cpu_cond(smp_cond_func_t cond_func, smp_call_func_t func,
+ void on_each_cpu_cond_mask(smp_cond_func_t cond_func, smp_call_func_t func,
+ 			   void *info, bool wait, const struct cpumask *mask);
+ 
+-int smp_call_function_single_async(int cpu, call_single_data_t *csd);
++int smp_call_function_single_async(int cpu, struct __call_single_data *csd);
+ 
+ #ifdef CONFIG_SMP
+ 
+diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
+index 592897fa4f030..643139b1eafea 100644
+--- a/include/linux/spi/spi.h
++++ b/include/linux/spi/spi.h
+@@ -510,6 +510,9 @@ struct spi_controller {
+ 
+ #define SPI_MASTER_GPIO_SS		BIT(5)	/* GPIO CS must select slave */
+ 
++	/* flag indicating this is a non-devres managed controller */
++	bool			devm_allocated;
++
+ 	/* flag indicating this is an SPI slave controller */
+ 	bool			slave;
+ 
+diff --git a/include/linux/tty_driver.h b/include/linux/tty_driver.h
+index 61c3372d3f328..2f719b471d524 100644
+--- a/include/linux/tty_driver.h
++++ b/include/linux/tty_driver.h
+@@ -228,7 +228,7 @@
+  *
+  *	Called when the device receives a TIOCGICOUNT ioctl. Passed a kernel
+  *	structure to complete. This method is optional and will only be called
+- *	if provided (otherwise EINVAL will be returned).
++ *	if provided (otherwise ENOTTY will be returned).
+  */
+ 
+ #include <linux/export.h>
+diff --git a/include/linux/udp.h b/include/linux/udp.h
+index aa84597bdc33c..ae58ff3b6b5b8 100644
+--- a/include/linux/udp.h
++++ b/include/linux/udp.h
+@@ -51,7 +51,9 @@ struct udp_sock {
+ 					   * different encapsulation layer set
+ 					   * this
+ 					   */
+-			 gro_enabled:1;	/* Can accept GRO packets */
++			 gro_enabled:1,	/* Request GRO aggregation */
++			 accept_udp_l4:1,
++			 accept_udp_fraglist:1;
+ 	/*
+ 	 * Following member retains the information to create a UDP header
+ 	 * when the socket is uncorked.
+@@ -131,8 +133,16 @@ static inline void udp_cmsg_recv(struct msghdr *msg, struct sock *sk,
+ 
+ static inline bool udp_unexpected_gso(struct sock *sk, struct sk_buff *skb)
+ {
+-	return !udp_sk(sk)->gro_enabled && skb_is_gso(skb) &&
+-	       skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4;
++	if (!skb_is_gso(skb))
++		return false;
++
++	if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 && !udp_sk(sk)->accept_udp_l4)
++		return true;
++
++	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST && !udp_sk(sk)->accept_udp_fraglist)
++		return true;
++
++	return false;
+ }
+ 
+ #define udp_portaddr_for_each_entry(__sk, list) \
+diff --git a/include/linux/usb/pd.h b/include/linux/usb/pd.h
+index 70d681918d013..bf00259493e07 100644
+--- a/include/linux/usb/pd.h
++++ b/include/linux/usb/pd.h
+@@ -493,4 +493,6 @@ static inline unsigned int rdo_max_power(u32 rdo)
+ #define PD_N_CAPS_COUNT		(PD_T_NO_RESPONSE / PD_T_SEND_SOURCE_CAP)
+ #define PD_N_HARD_RESET_COUNT	2
+ 
++#define PD_P_SNK_STDBY_MW	2500	/* 2500 mW */
++
+ #endif /* __LINUX_USB_PD_H */
+diff --git a/include/net/addrconf.h b/include/net/addrconf.h
+index 18f783dcd55fa..78ea3e332688f 100644
+--- a/include/net/addrconf.h
++++ b/include/net/addrconf.h
+@@ -233,7 +233,6 @@ void ipv6_mc_unmap(struct inet6_dev *idev);
+ void ipv6_mc_remap(struct inet6_dev *idev);
+ void ipv6_mc_init_dev(struct inet6_dev *idev);
+ void ipv6_mc_destroy_dev(struct inet6_dev *idev);
+-int ipv6_mc_check_icmpv6(struct sk_buff *skb);
+ int ipv6_mc_check_mld(struct sk_buff *skb);
+ void addrconf_dad_failure(struct sk_buff *skb, struct inet6_ifaddr *ifp);
+ 
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index ebdd4afe30d27..ca4ac6603b9a0 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -704,6 +704,7 @@ struct hci_chan {
+ 	struct sk_buff_head data_q;
+ 	unsigned int	sent;
+ 	__u8		state;
++	bool		amp;
+ };
+ 
+ struct hci_conn_params {
+diff --git a/include/net/netfilter/nf_tables_offload.h b/include/net/netfilter/nf_tables_offload.h
+index 1d34fe154fe0b..434a6158852f3 100644
+--- a/include/net/netfilter/nf_tables_offload.h
++++ b/include/net/netfilter/nf_tables_offload.h
+@@ -4,11 +4,16 @@
+ #include <net/flow_offload.h>
+ #include <net/netfilter/nf_tables.h>
+ 
++enum nft_offload_reg_flags {
++	NFT_OFFLOAD_F_NETWORK2HOST	= (1 << 0),
++};
++
+ struct nft_offload_reg {
+ 	u32		key;
+ 	u32		len;
+ 	u32		base_offset;
+ 	u32		offset;
++	u32		flags;
+ 	struct nft_data data;
+ 	struct nft_data	mask;
+ };
+@@ -45,6 +50,7 @@ struct nft_flow_key {
+ 	struct flow_dissector_key_ports			tp;
+ 	struct flow_dissector_key_ip			ip;
+ 	struct flow_dissector_key_vlan			vlan;
++	struct flow_dissector_key_vlan			cvlan;
+ 	struct flow_dissector_key_eth_addrs		eth_addrs;
+ 	struct flow_dissector_key_meta			meta;
+ } __aligned(BITS_PER_LONG / 8); /* Ensure that we can do comparisons as longs. */
+@@ -71,13 +77,17 @@ struct nft_flow_rule *nft_flow_rule_create(struct net *net, const struct nft_rul
+ void nft_flow_rule_destroy(struct nft_flow_rule *flow);
+ int nft_flow_rule_offload_commit(struct net *net);
+ 
+-#define NFT_OFFLOAD_MATCH(__key, __base, __field, __len, __reg)		\
++#define NFT_OFFLOAD_MATCH_FLAGS(__key, __base, __field, __len, __reg, __flags)	\
+ 	(__reg)->base_offset	=					\
+ 		offsetof(struct nft_flow_key, __base);			\
+ 	(__reg)->offset		=					\
+ 		offsetof(struct nft_flow_key, __base.__field);		\
+ 	(__reg)->len		= __len;				\
+ 	(__reg)->key		= __key;				\
++	(__reg)->flags		= __flags;
++
++#define NFT_OFFLOAD_MATCH(__key, __base, __field, __len, __reg)		\
++	NFT_OFFLOAD_MATCH_FLAGS(__key, __base, __field, __len, __reg, 0)
+ 
+ #define NFT_OFFLOAD_MATCH_EXACT(__key, __base, __field, __len, __reg)	\
+ 	NFT_OFFLOAD_MATCH(__key, __base, __field, __len, __reg)		\
+diff --git a/include/uapi/linux/tty_flags.h b/include/uapi/linux/tty_flags.h
+index 900a32e634247..6a3ac496a56c1 100644
+--- a/include/uapi/linux/tty_flags.h
++++ b/include/uapi/linux/tty_flags.h
+@@ -39,7 +39,7 @@
+  * WARNING: These flags are no longer used and have been superceded by the
+  *	    TTY_PORT_ flags in the iflags field (and not userspace-visible)
+  */
+-#ifndef _KERNEL_
++#ifndef __KERNEL__
+ #define ASYNCB_INITIALIZED	31 /* Serial port was initialized */
+ #define ASYNCB_SUSPENDED	30 /* Serial port is suspended */
+ #define ASYNCB_NORMAL_ACTIVE	29 /* Normal device is active */
+@@ -81,7 +81,7 @@
+ #define ASYNC_SPD_WARP		(ASYNC_SPD_HI|ASYNC_SPD_SHI)
+ #define ASYNC_SPD_MASK		(ASYNC_SPD_HI|ASYNC_SPD_VHI|ASYNC_SPD_SHI)
+ 
+-#ifndef _KERNEL_
++#ifndef __KERNEL__
+ /* These flags are no longer used (and were always masked from userspace) */
+ #define ASYNC_INITIALIZED	(1U << ASYNCB_INITIALIZED)
+ #define ASYNC_NORMAL_ACTIVE	(1U << ASYNCB_NORMAL_ACTIVE)
+diff --git a/init/init_task.c b/init/init_task.c
+index 3711cdaafed2f..8b08c2e19cbb5 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -210,7 +210,7 @@ struct task_struct init_task
+ #ifdef CONFIG_SECURITY
+ 	.security	= NULL,
+ #endif
+-#ifdef CONFIG_SECCOMP
++#ifdef CONFIG_SECCOMP_FILTER
+ 	.seccomp	= { .filter_count = ATOMIC_INIT(0) },
+ #endif
+ };
+diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
+index f25b719ac7868..84b3b35fc0d05 100644
+--- a/kernel/bpf/ringbuf.c
++++ b/kernel/bpf/ringbuf.c
+@@ -221,25 +221,20 @@ static int ringbuf_map_get_next_key(struct bpf_map *map, void *key,
+ 	return -ENOTSUPP;
+ }
+ 
+-static size_t bpf_ringbuf_mmap_page_cnt(const struct bpf_ringbuf *rb)
+-{
+-	size_t data_pages = (rb->mask + 1) >> PAGE_SHIFT;
+-
+-	/* consumer page + producer page + 2 x data pages */
+-	return RINGBUF_POS_PAGES + 2 * data_pages;
+-}
+-
+ static int ringbuf_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
+ {
+ 	struct bpf_ringbuf_map *rb_map;
+-	size_t mmap_sz;
+ 
+ 	rb_map = container_of(map, struct bpf_ringbuf_map, map);
+-	mmap_sz = bpf_ringbuf_mmap_page_cnt(rb_map->rb) << PAGE_SHIFT;
+-
+-	if (vma->vm_pgoff * PAGE_SIZE + (vma->vm_end - vma->vm_start) > mmap_sz)
+-		return -EINVAL;
+ 
++	if (vma->vm_flags & VM_WRITE) {
++		/* allow writable mapping for the consumer_pos only */
++		if (vma->vm_pgoff != 0 || vma->vm_end - vma->vm_start != PAGE_SIZE)
++			return -EPERM;
++	} else {
++		vma->vm_flags &= ~VM_MAYWRITE;
++	}
++	/* remap_vmalloc_range() checks size and offset constraints */
+ 	return remap_vmalloc_range(vma, rb_map->rb,
+ 				   vma->vm_pgoff + RINGBUF_PGOFF);
+ }
+@@ -315,6 +310,9 @@ static void *__bpf_ringbuf_reserve(struct bpf_ringbuf *rb, u64 size)
+ 		return NULL;
+ 
+ 	len = round_up(size + BPF_RINGBUF_HDR_SZ, 8);
++	if (len > rb->mask + 1)
++		return NULL;
++
+ 	cons_pos = smp_load_acquire(&rb->consumer_pos);
+ 
+ 	if (in_nmi()) {
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index a2ed7a7e27e2b..7fa6fc6bedf1f 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1362,9 +1362,7 @@ static bool __reg64_bound_s32(s64 a)
+ 
+ static bool __reg64_bound_u32(u64 a)
+ {
+-	if (a > U32_MIN && a < U32_MAX)
+-		return true;
+-	return false;
++	return a > U32_MIN && a < U32_MAX;
+ }
+ 
+ static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
+@@ -1375,10 +1373,10 @@ static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
+ 		reg->s32_min_value = (s32)reg->smin_value;
+ 		reg->s32_max_value = (s32)reg->smax_value;
+ 	}
+-	if (__reg64_bound_u32(reg->umin_value))
++	if (__reg64_bound_u32(reg->umin_value) && __reg64_bound_u32(reg->umax_value)) {
+ 		reg->u32_min_value = (u32)reg->umin_value;
+-	if (__reg64_bound_u32(reg->umax_value))
+ 		reg->u32_max_value = (u32)reg->umax_value;
++	}
+ 
+ 	/* Intersecting with the old var_off might have improved our bounds
+ 	 * slightly.  e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
+@@ -6540,11 +6538,10 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
+ 	s32 smin_val = src_reg->s32_min_value;
+ 	u32 umax_val = src_reg->u32_max_value;
+ 
+-	/* Assuming scalar64_min_max_and will be called so its safe
+-	 * to skip updating register for known 32-bit case.
+-	 */
+-	if (src_known && dst_known)
++	if (src_known && dst_known) {
++		__mark_reg32_known(dst_reg, var32_off.value);
+ 		return;
++	}
+ 
+ 	/* We get our minimum from the var_off, since that's inherently
+ 	 * bitwise.  Our maximum is the minimum of the operands' maxima.
+@@ -6564,7 +6561,6 @@ static void scalar32_min_max_and(struct bpf_reg_state *dst_reg,
+ 		dst_reg->s32_min_value = dst_reg->u32_min_value;
+ 		dst_reg->s32_max_value = dst_reg->u32_max_value;
+ 	}
+-
+ }
+ 
+ static void scalar_min_max_and(struct bpf_reg_state *dst_reg,
+@@ -6611,11 +6607,10 @@ static void scalar32_min_max_or(struct bpf_reg_state *dst_reg,
+ 	s32 smin_val = src_reg->s32_min_value;
+ 	u32 umin_val = src_reg->u32_min_value;
+ 
+-	/* Assuming scalar64_min_max_or will be called so it is safe
+-	 * to skip updating register for known case.
+-	 */
+-	if (src_known && dst_known)
++	if (src_known && dst_known) {
++		__mark_reg32_known(dst_reg, var32_off.value);
+ 		return;
++	}
+ 
+ 	/* We get our maximum from the var_off, and our minimum is the
+ 	 * maximum of the operands' minima
+@@ -6680,11 +6675,10 @@ static void scalar32_min_max_xor(struct bpf_reg_state *dst_reg,
+ 	struct tnum var32_off = tnum_subreg(dst_reg->var_off);
+ 	s32 smin_val = src_reg->s32_min_value;
+ 
+-	/* Assuming scalar64_min_max_xor will be called so it is safe
+-	 * to skip updating register for known case.
+-	 */
+-	if (src_known && dst_known)
++	if (src_known && dst_known) {
++		__mark_reg32_known(dst_reg, var32_off.value);
+ 		return;
++	}
+ 
+ 	/* We get both minimum and maximum from the var32_off. */
+ 	dst_reg->u32_min_value = var32_off.value;
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index 1578973c57409..6d3c488a0f824 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -84,6 +84,25 @@ static inline struct kthread *to_kthread(struct task_struct *k)
+ 	return (__force void *)k->set_child_tid;
+ }
+ 
++/*
++ * Variant of to_kthread() that doesn't assume @p is a kthread.
++ *
++ * Per construction; when:
++ *
++ *   (p->flags & PF_KTHREAD) && p->set_child_tid
++ *
++ * the task is both a kthread and struct kthread is persistent. However
++ * PF_KTHREAD on it's own is not, kernel_thread() can exec() (See umh.c and
++ * begin_new_exec()).
++ */
++static inline struct kthread *__to_kthread(struct task_struct *p)
++{
++	void *kthread = (__force void *)p->set_child_tid;
++	if (kthread && !(p->flags & PF_KTHREAD))
++		kthread = NULL;
++	return kthread;
++}
++
+ void free_kthread_struct(struct task_struct *k)
+ {
+ 	struct kthread *kthread;
+@@ -168,8 +187,9 @@ EXPORT_SYMBOL_GPL(kthread_freezable_should_stop);
+  */
+ void *kthread_func(struct task_struct *task)
+ {
+-	if (task->flags & PF_KTHREAD)
+-		return to_kthread(task)->threadfn;
++	struct kthread *kthread = __to_kthread(task);
++	if (kthread)
++		return kthread->threadfn;
+ 	return NULL;
+ }
+ EXPORT_SYMBOL_GPL(kthread_func);
+@@ -199,10 +219,11 @@ EXPORT_SYMBOL_GPL(kthread_data);
+  */
+ void *kthread_probe_data(struct task_struct *task)
+ {
+-	struct kthread *kthread = to_kthread(task);
++	struct kthread *kthread = __to_kthread(task);
+ 	void *data = NULL;
+ 
+-	copy_from_kernel_nofault(&data, &kthread->data, sizeof(data));
++	if (kthread)
++		copy_from_kernel_nofault(&data, &kthread->data, sizeof(data));
+ 	return data;
+ }
+ 
+@@ -514,9 +535,9 @@ void kthread_set_per_cpu(struct task_struct *k, int cpu)
+ 	set_bit(KTHREAD_IS_PER_CPU, &kthread->flags);
+ }
+ 
+-bool kthread_is_per_cpu(struct task_struct *k)
++bool kthread_is_per_cpu(struct task_struct *p)
+ {
+-	struct kthread *kthread = to_kthread(k);
++	struct kthread *kthread = __to_kthread(p);
+ 	if (!kthread)
+ 		return false;
+ 
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 575a34b88936f..77ae2704e979c 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -1494,6 +1494,7 @@ static int syslog_print_all(char __user *buf, int size, bool clear)
+ 	struct printk_info info;
+ 	unsigned int line_count;
+ 	struct printk_record r;
++	u64 max_seq;
+ 	char *text;
+ 	int len = 0;
+ 	u64 seq;
+@@ -1512,9 +1513,15 @@ static int syslog_print_all(char __user *buf, int size, bool clear)
+ 	prb_for_each_info(clear_seq, prb, seq, &info, &line_count)
+ 		len += get_record_print_text_size(&info, line_count, true, time);
+ 
++	/*
++	 * Set an upper bound for the next loop to avoid subtracting lengths
++	 * that were never added.
++	 */
++	max_seq = seq;
++
+ 	/* move first record forward until length fits into the buffer */
+ 	prb_for_each_info(clear_seq, prb, seq, &info, &line_count) {
+-		if (len <= size)
++		if (len <= size || info.seq >= max_seq)
+ 			break;
+ 		len -= get_record_print_text_size(&info, line_count, true, time);
+ 	}
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 2a739c5fcca5a..7356764e49a0b 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -1077,7 +1077,6 @@ noinstr void rcu_nmi_enter(void)
+ 	} else if (!in_nmi()) {
+ 		instrumentation_begin();
+ 		rcu_irq_enter_check_tick();
+-		instrumentation_end();
+ 	} else  {
+ 		instrumentation_begin();
+ 	}
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 98191218d891d..17ad829a114c8 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -7652,7 +7652,7 @@ static void balance_push(struct rq *rq)
+ 	 * histerical raisins.
+ 	 */
+ 	if (rq->idle == push_task ||
+-	    ((push_task->flags & PF_KTHREAD) && kthread_is_per_cpu(push_task)) ||
++	    kthread_is_per_cpu(push_task) ||
+ 	    is_migration_disabled(push_task)) {
+ 
+ 		/*
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index 486f403a778b2..9c8b3ed2199a8 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -8,8 +8,6 @@
+  */
+ #include "sched.h"
+ 
+-static DEFINE_SPINLOCK(sched_debug_lock);
+-
+ /*
+  * This allows printing both to /proc/sched_debug and
+  * to the console
+@@ -470,16 +468,37 @@ static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group
+ #endif
+ 
+ #ifdef CONFIG_CGROUP_SCHED
++static DEFINE_SPINLOCK(sched_debug_lock);
+ static char group_path[PATH_MAX];
+ 
+-static char *task_group_path(struct task_group *tg)
++static void task_group_path(struct task_group *tg, char *path, int plen)
+ {
+-	if (autogroup_path(tg, group_path, PATH_MAX))
+-		return group_path;
++	if (autogroup_path(tg, path, plen))
++		return;
+ 
+-	cgroup_path(tg->css.cgroup, group_path, PATH_MAX);
++	cgroup_path(tg->css.cgroup, path, plen);
++}
+ 
+-	return group_path;
++/*
++ * Only 1 SEQ_printf_task_group_path() caller can use the full length
++ * group_path[] for cgroup path. Other simultaneous callers will have
++ * to use a shorter stack buffer. A "..." suffix is appended at the end
++ * of the stack buffer so that it will show up in case the output length
++ * matches the given buffer size to indicate possible path name truncation.
++ */
++#define SEQ_printf_task_group_path(m, tg, fmt...)			\
++{									\
++	if (spin_trylock(&sched_debug_lock)) {				\
++		task_group_path(tg, group_path, sizeof(group_path));	\
++		SEQ_printf(m, fmt, group_path);				\
++		spin_unlock(&sched_debug_lock);				\
++	} else {							\
++		char buf[128];						\
++		char *bufend = buf + sizeof(buf) - 3;			\
++		task_group_path(tg, buf, bufend - buf);			\
++		strcpy(bufend - 1, "...");				\
++		SEQ_printf(m, fmt, buf);				\
++	}								\
+ }
+ #endif
+ 
+@@ -506,7 +525,7 @@ print_task(struct seq_file *m, struct rq *rq, struct task_struct *p)
+ 	SEQ_printf(m, " %d %d", task_node(p), task_numa_group_id(p));
+ #endif
+ #ifdef CONFIG_CGROUP_SCHED
+-	SEQ_printf(m, " %s", task_group_path(task_group(p)));
++	SEQ_printf_task_group_path(m, task_group(p), " %s")
+ #endif
+ 
+ 	SEQ_printf(m, "\n");
+@@ -543,7 +562,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
+ 
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ 	SEQ_printf(m, "\n");
+-	SEQ_printf(m, "cfs_rq[%d]:%s\n", cpu, task_group_path(cfs_rq->tg));
++	SEQ_printf_task_group_path(m, cfs_rq->tg, "cfs_rq[%d]:%s\n", cpu);
+ #else
+ 	SEQ_printf(m, "\n");
+ 	SEQ_printf(m, "cfs_rq[%d]:\n", cpu);
+@@ -614,7 +633,7 @@ void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq)
+ {
+ #ifdef CONFIG_RT_GROUP_SCHED
+ 	SEQ_printf(m, "\n");
+-	SEQ_printf(m, "rt_rq[%d]:%s\n", cpu, task_group_path(rt_rq->tg));
++	SEQ_printf_task_group_path(m, rt_rq->tg, "rt_rq[%d]:%s\n", cpu);
+ #else
+ 	SEQ_printf(m, "\n");
+ 	SEQ_printf(m, "rt_rq[%d]:\n", cpu);
+@@ -666,7 +685,6 @@ void print_dl_rq(struct seq_file *m, int cpu, struct dl_rq *dl_rq)
+ static void print_cpu(struct seq_file *m, int cpu)
+ {
+ 	struct rq *rq = cpu_rq(cpu);
+-	unsigned long flags;
+ 
+ #ifdef CONFIG_X86
+ 	{
+@@ -717,13 +735,11 @@ do {									\
+ 	}
+ #undef P
+ 
+-	spin_lock_irqsave(&sched_debug_lock, flags);
+ 	print_cfs_stats(m, cpu);
+ 	print_rt_stats(m, cpu);
+ 	print_dl_stats(m, cpu);
+ 
+ 	print_rq(m, rq, cpu);
+-	spin_unlock_irqrestore(&sched_debug_lock, flags);
+ 	SEQ_printf(m, "\n");
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 0eeeeeb66f33a..d078767f677f4 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -7608,7 +7608,7 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
+ 		return 0;
+ 
+ 	/* Disregard pcpu kthreads; they are where they need to be. */
+-	if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))
++	if (kthread_is_per_cpu(p))
+ 		return 0;
+ 
+ 	if (!cpumask_test_cpu(env->dst_cpu, p->cpus_ptr)) {
+@@ -7780,8 +7780,7 @@ static int detach_tasks(struct lb_env *env)
+ 			 * scheduler fails to find a good waiting task to
+ 			 * migrate.
+ 			 */
+-
+-			if ((load >> env->sd->nr_balance_failed) > env->imbalance)
++			if (shr_bound(load, env->sd->nr_balance_failed) > env->imbalance)
+ 				goto next;
+ 
+ 			env->imbalance -= load;
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 10a1522b1e303..e4e4f47cee6a4 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -204,6 +204,13 @@ static inline void update_avg(u64 *avg, u64 sample)
+ 	*avg += diff / 8;
+ }
+ 
++/*
++ * Shifting a value by an exponent greater *or equal* to the size of said value
++ * is UB; cap at size-1.
++ */
++#define shr_bound(val, shift)							\
++	(val >> min_t(typeof(shift), shift, BITS_PER_TYPE(typeof(val)) - 1))
++
+ /*
+  * !! For sched_setattr_nocheck() (kernel) only !!
+  *
+diff --git a/kernel/smp.c b/kernel/smp.c
+index aeb0adfa06063..c678589fbb767 100644
+--- a/kernel/smp.c
++++ b/kernel/smp.c
+@@ -110,7 +110,7 @@ static DEFINE_PER_CPU(void *, cur_csd_info);
+ static atomic_t csd_bug_count = ATOMIC_INIT(0);
+ 
+ /* Record current CSD work for current CPU, NULL to erase. */
+-static void csd_lock_record(call_single_data_t *csd)
++static void csd_lock_record(struct __call_single_data *csd)
+ {
+ 	if (!csd) {
+ 		smp_mb(); /* NULL cur_csd after unlock. */
+@@ -125,7 +125,7 @@ static void csd_lock_record(call_single_data_t *csd)
+ 		  /* Or before unlock, as the case may be. */
+ }
+ 
+-static __always_inline int csd_lock_wait_getcpu(call_single_data_t *csd)
++static __always_inline int csd_lock_wait_getcpu(struct __call_single_data *csd)
+ {
+ 	unsigned int csd_type;
+ 
+@@ -140,7 +140,7 @@ static __always_inline int csd_lock_wait_getcpu(call_single_data_t *csd)
+  * the CSD_TYPE_SYNC/ASYNC types provide the destination CPU,
+  * so waiting on other types gets much less information.
+  */
+-static __always_inline bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, int *bug_id)
++static __always_inline bool csd_lock_wait_toolong(struct __call_single_data *csd, u64 ts0, u64 *ts1, int *bug_id)
+ {
+ 	int cpu = -1;
+ 	int cpux;
+@@ -204,7 +204,7 @@ static __always_inline bool csd_lock_wait_toolong(call_single_data_t *csd, u64 t
+  * previous function call. For multi-cpu calls its even more interesting
+  * as we'll have to ensure no other cpu is observing our csd.
+  */
+-static __always_inline void csd_lock_wait(call_single_data_t *csd)
++static __always_inline void csd_lock_wait(struct __call_single_data *csd)
+ {
+ 	int bug_id = 0;
+ 	u64 ts0, ts1;
+@@ -219,17 +219,17 @@ static __always_inline void csd_lock_wait(call_single_data_t *csd)
+ }
+ 
+ #else
+-static void csd_lock_record(call_single_data_t *csd)
++static void csd_lock_record(struct __call_single_data *csd)
+ {
+ }
+ 
+-static __always_inline void csd_lock_wait(call_single_data_t *csd)
++static __always_inline void csd_lock_wait(struct __call_single_data *csd)
+ {
+ 	smp_cond_load_acquire(&csd->node.u_flags, !(VAL & CSD_FLAG_LOCK));
+ }
+ #endif
+ 
+-static __always_inline void csd_lock(call_single_data_t *csd)
++static __always_inline void csd_lock(struct __call_single_data *csd)
+ {
+ 	csd_lock_wait(csd);
+ 	csd->node.u_flags |= CSD_FLAG_LOCK;
+@@ -242,7 +242,7 @@ static __always_inline void csd_lock(call_single_data_t *csd)
+ 	smp_wmb();
+ }
+ 
+-static __always_inline void csd_unlock(call_single_data_t *csd)
++static __always_inline void csd_unlock(struct __call_single_data *csd)
+ {
+ 	WARN_ON(!(csd->node.u_flags & CSD_FLAG_LOCK));
+ 
+@@ -276,7 +276,7 @@ void __smp_call_single_queue(int cpu, struct llist_node *node)
+  * for execution on the given CPU. data must already have
+  * ->func, ->info, and ->flags set.
+  */
+-static int generic_exec_single(int cpu, call_single_data_t *csd)
++static int generic_exec_single(int cpu, struct __call_single_data *csd)
+ {
+ 	if (cpu == smp_processor_id()) {
+ 		smp_call_func_t func = csd->func;
+@@ -542,7 +542,7 @@ EXPORT_SYMBOL(smp_call_function_single);
+  * NOTE: Be careful, there is unfortunately no current debugging facility to
+  * validate the correctness of this serialization.
+  */
+-int smp_call_function_single_async(int cpu, call_single_data_t *csd)
++int smp_call_function_single_async(int cpu, struct __call_single_data *csd)
+ {
+ 	int err = 0;
+ 
+diff --git a/kernel/up.c b/kernel/up.c
+index c6f323dcd45bb..4edd5493eba24 100644
+--- a/kernel/up.c
++++ b/kernel/up.c
+@@ -25,7 +25,7 @@ int smp_call_function_single(int cpu, void (*func) (void *info), void *info,
+ }
+ EXPORT_SYMBOL(smp_call_function_single);
+ 
+-int smp_call_function_single_async(int cpu, call_single_data_t *csd)
++int smp_call_function_single_async(int cpu, struct __call_single_data *csd)
+ {
+ 	unsigned long flags;
+ 
+diff --git a/lib/bug.c b/lib/bug.c
+index 8f9d537bfb2a5..b92da1f6e21b2 100644
+--- a/lib/bug.c
++++ b/lib/bug.c
+@@ -155,30 +155,27 @@ enum bug_trap_type report_bug(unsigned long bugaddr, struct pt_regs *regs)
+ 
+ 	file = NULL;
+ 	line = 0;
+-	warning = 0;
+ 
+-	if (bug) {
+ #ifdef CONFIG_DEBUG_BUGVERBOSE
+ #ifndef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
+-		file = bug->file;
++	file = bug->file;
+ #else
+-		file = (const char *)bug + bug->file_disp;
++	file = (const char *)bug + bug->file_disp;
+ #endif
+-		line = bug->line;
++	line = bug->line;
+ #endif
+-		warning = (bug->flags & BUGFLAG_WARNING) != 0;
+-		once = (bug->flags & BUGFLAG_ONCE) != 0;
+-		done = (bug->flags & BUGFLAG_DONE) != 0;
+-
+-		if (warning && once) {
+-			if (done)
+-				return BUG_TRAP_TYPE_WARN;
+-
+-			/*
+-			 * Since this is the only store, concurrency is not an issue.
+-			 */
+-			bug->flags |= BUGFLAG_DONE;
+-		}
++	warning = (bug->flags & BUGFLAG_WARNING) != 0;
++	once = (bug->flags & BUGFLAG_ONCE) != 0;
++	done = (bug->flags & BUGFLAG_DONE) != 0;
++
++	if (warning && once) {
++		if (done)
++			return BUG_TRAP_TYPE_WARN;
++
++		/*
++		 * Since this is the only store, concurrency is not an issue.
++		 */
++		bug->flags |= BUGFLAG_DONE;
+ 	}
+ 
+ 	/*
+diff --git a/lib/crypto/poly1305-donna32.c b/lib/crypto/poly1305-donna32.c
+index 3cc77d94390b2..7fb71845cc846 100644
+--- a/lib/crypto/poly1305-donna32.c
++++ b/lib/crypto/poly1305-donna32.c
+@@ -10,7 +10,8 @@
+ #include <asm/unaligned.h>
+ #include <crypto/internal/poly1305.h>
+ 
+-void poly1305_core_setkey(struct poly1305_core_key *key, const u8 raw_key[16])
++void poly1305_core_setkey(struct poly1305_core_key *key,
++			  const u8 raw_key[POLY1305_BLOCK_SIZE])
+ {
+ 	/* r &= 0xffffffc0ffffffc0ffffffc0fffffff */
+ 	key->key.r[0] = (get_unaligned_le32(&raw_key[0])) & 0x3ffffff;
+diff --git a/lib/crypto/poly1305-donna64.c b/lib/crypto/poly1305-donna64.c
+index 6ae181bb43450..d34cf40536689 100644
+--- a/lib/crypto/poly1305-donna64.c
++++ b/lib/crypto/poly1305-donna64.c
+@@ -12,7 +12,8 @@
+ 
+ typedef __uint128_t u128;
+ 
+-void poly1305_core_setkey(struct poly1305_core_key *key, const u8 raw_key[16])
++void poly1305_core_setkey(struct poly1305_core_key *key,
++			  const u8 raw_key[POLY1305_BLOCK_SIZE])
+ {
+ 	u64 t0, t1;
+ 
+diff --git a/lib/crypto/poly1305.c b/lib/crypto/poly1305.c
+index 9d2d14df0fee5..26d87fc3823e8 100644
+--- a/lib/crypto/poly1305.c
++++ b/lib/crypto/poly1305.c
+@@ -12,7 +12,8 @@
+ #include <linux/module.h>
+ #include <asm/unaligned.h>
+ 
+-void poly1305_init_generic(struct poly1305_desc_ctx *desc, const u8 *key)
++void poly1305_init_generic(struct poly1305_desc_ctx *desc,
++			   const u8 key[POLY1305_KEY_SIZE])
+ {
+ 	poly1305_core_setkey(&desc->core_r, key);
+ 	desc->s[0] = get_unaligned_le32(key + 16);
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index e064ac0d850ac..e876ba693998d 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -3181,9 +3181,17 @@ static void drain_obj_stock(struct memcg_stock_pcp *stock)
+ 		unsigned int nr_bytes = stock->nr_bytes & (PAGE_SIZE - 1);
+ 
+ 		if (nr_pages) {
++			struct mem_cgroup *memcg;
++
+ 			rcu_read_lock();
+-			__memcg_kmem_uncharge(obj_cgroup_memcg(old), nr_pages);
++retry:
++			memcg = obj_cgroup_memcg(old);
++			if (unlikely(!css_tryget(&memcg->css)))
++				goto retry;
+ 			rcu_read_unlock();
++
++			__memcg_kmem_uncharge(memcg, nr_pages);
++			css_put(&memcg->css);
+ 		}
+ 
+ 		/*
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 24210c9bd8434..bd3945446d47e 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1368,7 +1368,7 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
+ 		 * communicated in siginfo, see kill_proc()
+ 		 */
+ 		start = (page->index << PAGE_SHIFT) & ~(size - 1);
+-		unmap_mapping_range(page->mapping, start, start + size, 0);
++		unmap_mapping_range(page->mapping, start, size, 0);
+ 	}
+ 	kill_procs(&tokill, flags & MF_MUST_KILL, !unmap_success, pfn, flags);
+ 	rc = 0;
+diff --git a/mm/sparse.c b/mm/sparse.c
+index 7bd23f9d6cef6..33406ea2ecc44 100644
+--- a/mm/sparse.c
++++ b/mm/sparse.c
+@@ -547,6 +547,7 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin,
+ 			pr_err("%s: node[%d] memory map backing failed. Some memory will not be available.",
+ 			       __func__, nid);
+ 			pnum_begin = pnum;
++			sparse_buffer_fini();
+ 			goto failed;
+ 		}
+ 		check_usemap_section_nr(nid, usage);
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 6ffa89e3ba0a8..f72646690539f 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1830,8 +1830,6 @@ u32 hci_conn_get_phy(struct hci_conn *conn)
+ {
+ 	u32 phys = 0;
+ 
+-	hci_dev_lock(conn->hdev);
+-
+ 	/* BLUETOOTH CORE SPECIFICATION Version 5.2 | Vol 2, Part B page 471:
+ 	 * Table 6.2: Packets defined for synchronous, asynchronous, and
+ 	 * CSB logical transport types.
+@@ -1928,7 +1926,5 @@ u32 hci_conn_get_phy(struct hci_conn *conn)
+ 		break;
+ 	}
+ 
+-	hci_dev_unlock(conn->hdev);
+-
+ 	return phys;
+ }
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 67668be3461e9..7a3e42e752350 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -5005,6 +5005,7 @@ static void hci_loglink_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		return;
+ 
+ 	hchan->handle = le16_to_cpu(ev->handle);
++	hchan->amp = true;
+ 
+ 	BT_DBG("hcon %p mgr %p hchan %p", hcon, hcon->amp_mgr, hchan);
+ 
+@@ -5037,7 +5038,7 @@ static void hci_disconn_loglink_complete_evt(struct hci_dev *hdev,
+ 	hci_dev_lock(hdev);
+ 
+ 	hchan = hci_chan_lookup_handle(hdev, le16_to_cpu(ev->handle));
+-	if (!hchan)
++	if (!hchan || !hchan->amp)
+ 		goto unlock;
+ 
+ 	amp_destroy_logical_link(hchan, ev->reason);
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index e55976db4403e..805ce546b8133 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -272,12 +272,16 @@ int hci_req_sync(struct hci_dev *hdev, int (*req)(struct hci_request *req,
+ {
+ 	int ret;
+ 
+-	if (!test_bit(HCI_UP, &hdev->flags))
+-		return -ENETDOWN;
+-
+ 	/* Serialize all requests */
+ 	hci_req_sync_lock(hdev);
+-	ret = __hci_req_sync(hdev, req, opt, timeout, hci_status);
++	/* check the state after obtaining the lock to protect the HCI_UP
++	 * against any races from hci_dev_do_close when the controller
++	 * gets removed.
++	 */
++	if (test_bit(HCI_UP, &hdev->flags))
++		ret = __hci_req_sync(hdev, req, opt, timeout, hci_status);
++	else
++		ret = -ENETDOWN;
+ 	hci_req_sync_unlock(hdev);
+ 
+ 	return ret;
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index 9d265447d6540..229309d7b4ff9 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -3152,25 +3152,14 @@ static int br_multicast_ipv4_rcv(struct net_bridge *br,
+ }
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+-static int br_ip6_multicast_mrd_rcv(struct net_bridge *br,
+-				    struct net_bridge_port *port,
+-				    struct sk_buff *skb)
++static void br_ip6_multicast_mrd_rcv(struct net_bridge *br,
++				     struct net_bridge_port *port,
++				     struct sk_buff *skb)
+ {
+-	int ret;
+-
+-	if (ipv6_hdr(skb)->nexthdr != IPPROTO_ICMPV6)
+-		return -ENOMSG;
+-
+-	ret = ipv6_mc_check_icmpv6(skb);
+-	if (ret < 0)
+-		return ret;
+-
+ 	if (icmp6_hdr(skb)->icmp6_type != ICMPV6_MRDISC_ADV)
+-		return -ENOMSG;
++		return;
+ 
+ 	br_multicast_mark_router(br, port);
+-
+-	return 0;
+ }
+ 
+ static int br_multicast_ipv6_rcv(struct net_bridge *br,
+@@ -3184,18 +3173,12 @@ static int br_multicast_ipv6_rcv(struct net_bridge *br,
+ 
+ 	err = ipv6_mc_check_mld(skb);
+ 
+-	if (err == -ENOMSG) {
++	if (err == -ENOMSG || err == -ENODATA) {
+ 		if (!ipv6_addr_is_ll_all_nodes(&ipv6_hdr(skb)->daddr))
+ 			BR_INPUT_SKB_CB(skb)->mrouters_only = 1;
+-
+-		if (ipv6_addr_is_all_snoopers(&ipv6_hdr(skb)->daddr)) {
+-			err = br_ip6_multicast_mrd_rcv(br, port, skb);
+-
+-			if (err < 0 && err != -ENOMSG) {
+-				br_multicast_err_count(br, port, skb->protocol);
+-				return err;
+-			}
+-		}
++		if (err == -ENODATA &&
++		    ipv6_addr_is_all_snoopers(&ipv6_hdr(skb)->daddr))
++			br_ip6_multicast_mrd_rcv(br, port, skb);
+ 
+ 		return 0;
+ 	} else if (err < 0) {
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 1f79b9aa9a3f2..70829c568645c 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -4672,10 +4672,10 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
+ 	void *orig_data, *orig_data_end, *hard_start;
+ 	struct netdev_rx_queue *rxqueue;
+ 	u32 metalen, act = XDP_DROP;
++	bool orig_bcast, orig_host;
+ 	u32 mac_len, frame_sz;
+ 	__be16 orig_eth_type;
+ 	struct ethhdr *eth;
+-	bool orig_bcast;
+ 	int off;
+ 
+ 	/* Reinjected packets coming from act_mirred or similar should
+@@ -4722,6 +4722,7 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
+ 	orig_data_end = xdp->data_end;
+ 	orig_data = xdp->data;
+ 	eth = (struct ethhdr *)xdp->data;
++	orig_host = ether_addr_equal_64bits(eth->h_dest, skb->dev->dev_addr);
+ 	orig_bcast = is_multicast_ether_addr_64bits(eth->h_dest);
+ 	orig_eth_type = eth->h_proto;
+ 
+@@ -4749,8 +4750,11 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
+ 	/* check if XDP changed eth hdr such SKB needs update */
+ 	eth = (struct ethhdr *)xdp->data;
+ 	if ((orig_eth_type != eth->h_proto) ||
++	    (orig_host != ether_addr_equal_64bits(eth->h_dest,
++						  skb->dev->dev_addr)) ||
+ 	    (orig_bcast != is_multicast_ether_addr_64bits(eth->h_dest))) {
+ 		__skb_push(skb, ETH_HLEN);
++		skb->pkt_type = PACKET_HOST;
+ 		skb->protocol = eth_type_trans(skb, skb->dev);
+ 	}
+ 
+@@ -5914,7 +5918,7 @@ static struct list_head *gro_list_prepare(struct napi_struct *napi,
+ 	return head;
+ }
+ 
+-static void skb_gro_reset_offset(struct sk_buff *skb)
++static inline void skb_gro_reset_offset(struct sk_buff *skb, u32 nhoff)
+ {
+ 	const struct skb_shared_info *pinfo = skb_shinfo(skb);
+ 	const skb_frag_t *frag0 = &pinfo->frags[0];
+@@ -5925,7 +5929,7 @@ static void skb_gro_reset_offset(struct sk_buff *skb)
+ 
+ 	if (!skb_headlen(skb) && pinfo->nr_frags &&
+ 	    !PageHighMem(skb_frag_page(frag0)) &&
+-	    (!NET_IP_ALIGN || !(skb_frag_off(frag0) & 3))) {
++	    (!NET_IP_ALIGN || !((skb_frag_off(frag0) + nhoff) & 3))) {
+ 		NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0);
+ 		NAPI_GRO_CB(skb)->frag0_len = min_t(unsigned int,
+ 						    skb_frag_size(frag0),
+@@ -6143,7 +6147,7 @@ gro_result_t napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
+ 	skb_mark_napi_id(skb, napi);
+ 	trace_napi_gro_receive_entry(skb);
+ 
+-	skb_gro_reset_offset(skb);
++	skb_gro_reset_offset(skb, 0);
+ 
+ 	ret = napi_skb_finish(napi, skb, dev_gro_receive(napi, skb));
+ 	trace_napi_gro_receive_exit(ret);
+@@ -6232,7 +6236,7 @@ static struct sk_buff *napi_frags_skb(struct napi_struct *napi)
+ 	napi->skb = NULL;
+ 
+ 	skb_reset_mac_header(skb);
+-	skb_gro_reset_offset(skb);
++	skb_gro_reset_offset(skb, hlen);
+ 
+ 	if (unlikely(skb_gro_header_hard(skb, hlen))) {
+ 		eth = skb_gro_header_slow(skb, hlen, 0);
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index bba150fdd2658..d635b4f32d348 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -66,6 +66,7 @@
+ #include <linux/types.h>
+ #include <linux/kernel.h>
+ #include <linux/mm.h>
++#include <linux/memblock.h>
+ #include <linux/string.h>
+ #include <linux/socket.h>
+ #include <linux/sockios.h>
+@@ -478,8 +479,10 @@ static void ipv4_confirm_neigh(const struct dst_entry *dst, const void *daddr)
+ 	__ipv4_confirm_neigh(dev, *(__force u32 *)pkey);
+ }
+ 
+-#define IP_IDENTS_SZ 2048u
+-
++/* Hash tables of size 2048..262144 depending on RAM size.
++ * Each bucket uses 8 bytes.
++ */
++static u32 ip_idents_mask __read_mostly;
+ static atomic_t *ip_idents __read_mostly;
+ static u32 *ip_tstamps __read_mostly;
+ 
+@@ -489,12 +492,16 @@ static u32 *ip_tstamps __read_mostly;
+  */
+ u32 ip_idents_reserve(u32 hash, int segs)
+ {
+-	u32 *p_tstamp = ip_tstamps + hash % IP_IDENTS_SZ;
+-	atomic_t *p_id = ip_idents + hash % IP_IDENTS_SZ;
+-	u32 old = READ_ONCE(*p_tstamp);
+-	u32 now = (u32)jiffies;
++	u32 bucket, old, now = (u32)jiffies;
++	atomic_t *p_id;
++	u32 *p_tstamp;
+ 	u32 delta = 0;
+ 
++	bucket = hash & ip_idents_mask;
++	p_tstamp = ip_tstamps + bucket;
++	p_id = ip_idents + bucket;
++	old = READ_ONCE(*p_tstamp);
++
+ 	if (old != now && cmpxchg(p_tstamp, old, now) == old)
+ 		delta = prandom_u32_max(now - old);
+ 
+@@ -3553,18 +3560,25 @@ struct ip_rt_acct __percpu *ip_rt_acct __read_mostly;
+ 
+ int __init ip_rt_init(void)
+ {
++	void *idents_hash;
+ 	int cpu;
+ 
+-	ip_idents = kmalloc_array(IP_IDENTS_SZ, sizeof(*ip_idents),
+-				  GFP_KERNEL);
+-	if (!ip_idents)
+-		panic("IP: failed to allocate ip_idents\n");
++	/* For modern hosts, this will use 2 MB of memory */
++	idents_hash = alloc_large_system_hash("IP idents",
++					      sizeof(*ip_idents) + sizeof(*ip_tstamps),
++					      0,
++					      16, /* one bucket per 64 KB */
++					      HASH_ZERO,
++					      NULL,
++					      &ip_idents_mask,
++					      2048,
++					      256*1024);
++
++	ip_idents = idents_hash;
+ 
+-	prandom_bytes(ip_idents, IP_IDENTS_SZ * sizeof(*ip_idents));
++	prandom_bytes(ip_idents, (ip_idents_mask + 1) * sizeof(*ip_idents));
+ 
+-	ip_tstamps = kcalloc(IP_IDENTS_SZ, sizeof(*ip_tstamps), GFP_KERNEL);
+-	if (!ip_tstamps)
+-		panic("IP: failed to allocate ip_tstamps\n");
++	ip_tstamps = idents_hash + (ip_idents_mask + 1) * sizeof(*ip_idents);
+ 
+ 	for_each_possible_cpu(cpu) {
+ 		struct uncached_list *ul = &per_cpu(rt_uncached_list, cpu);
+diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
+index 563d016e74783..db5831e6c136a 100644
+--- a/net/ipv4/tcp_cong.c
++++ b/net/ipv4/tcp_cong.c
+@@ -230,6 +230,10 @@ int tcp_set_default_congestion_control(struct net *net, const char *name)
+ 		ret = -ENOENT;
+ 	} else if (!bpf_try_module_get(ca, ca->owner)) {
+ 		ret = -EBUSY;
++	} else if (!net_eq(net, &init_net) &&
++			!(ca->flags & TCP_CONG_NON_RESTRICTED)) {
++		/* Only init netns can set default to a restricted algorithm */
++		ret = -EPERM;
+ 	} else {
+ 		prev = xchg(&net->ipv4.tcp_congestion_control, ca);
+ 		if (prev)
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 99d743eb9dc46..c586a6bb8c6d0 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2664,9 +2664,12 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname,
+ 
+ 	case UDP_GRO:
+ 		lock_sock(sk);
++
++		/* when enabling GRO, accept the related GSO packet type */
+ 		if (valbool)
+ 			udp_tunnel_encap_enable(sk->sk_socket);
+ 		up->gro_enabled = valbool;
++		up->accept_udp_l4 = valbool;
+ 		release_sock(sk);
+ 		break;
+ 
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index c5b4b586570fe..25134a3548e99 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -515,21 +515,24 @@ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb,
+ 	unsigned int off = skb_gro_offset(skb);
+ 	int flush = 1;
+ 
++	/* we can do L4 aggregation only if the packet can't land in a tunnel,
++	 * otherwise we could corrupt the inner stream
++	 */
+ 	NAPI_GRO_CB(skb)->is_flist = 0;
+-	if (skb->dev->features & NETIF_F_GRO_FRAGLIST)
+-		NAPI_GRO_CB(skb)->is_flist = sk ? !udp_sk(sk)->gro_enabled: 1;
++	if (!sk || !udp_sk(sk)->gro_receive) {
++		if (skb->dev->features & NETIF_F_GRO_FRAGLIST)
++			NAPI_GRO_CB(skb)->is_flist = sk ? !udp_sk(sk)->gro_enabled : 1;
+ 
+-	if ((!sk && (skb->dev->features & NETIF_F_GRO_UDP_FWD)) ||
+-	    (sk && udp_sk(sk)->gro_enabled) || NAPI_GRO_CB(skb)->is_flist) {
+-		pp = call_gro_receive(udp_gro_receive_segment, head, skb);
++		if ((!sk && (skb->dev->features & NETIF_F_GRO_UDP_FWD)) ||
++		    (sk && udp_sk(sk)->gro_enabled) || NAPI_GRO_CB(skb)->is_flist)
++			pp = call_gro_receive(udp_gro_receive_segment, head, skb);
+ 		return pp;
+ 	}
+ 
+-	if (!sk || NAPI_GRO_CB(skb)->encap_mark ||
++	if (NAPI_GRO_CB(skb)->encap_mark ||
+ 	    (uh->check && skb->ip_summed != CHECKSUM_PARTIAL &&
+ 	     NAPI_GRO_CB(skb)->csum_cnt == 0 &&
+-	     !NAPI_GRO_CB(skb)->csum_valid) ||
+-	    !udp_sk(sk)->gro_receive)
++	     !NAPI_GRO_CB(skb)->csum_valid))
+ 		goto out;
+ 
+ 	/* mark that this skb passed once through the tunnel gro layer */
+diff --git a/net/ipv6/mcast_snoop.c b/net/ipv6/mcast_snoop.c
+index d3d6b6a66e5fa..04d5fcdfa6e00 100644
+--- a/net/ipv6/mcast_snoop.c
++++ b/net/ipv6/mcast_snoop.c
+@@ -109,7 +109,7 @@ static int ipv6_mc_check_mld_msg(struct sk_buff *skb)
+ 	struct mld_msg *mld;
+ 
+ 	if (!ipv6_mc_may_pull(skb, len))
+-		return -EINVAL;
++		return -ENODATA;
+ 
+ 	mld = (struct mld_msg *)skb_transport_header(skb);
+ 
+@@ -122,7 +122,7 @@ static int ipv6_mc_check_mld_msg(struct sk_buff *skb)
+ 	case ICMPV6_MGM_QUERY:
+ 		return ipv6_mc_check_mld_query(skb);
+ 	default:
+-		return -ENOMSG;
++		return -ENODATA;
+ 	}
+ }
+ 
+@@ -131,7 +131,7 @@ static inline __sum16 ipv6_mc_validate_checksum(struct sk_buff *skb)
+ 	return skb_checksum_validate(skb, IPPROTO_ICMPV6, ip6_compute_pseudo);
+ }
+ 
+-int ipv6_mc_check_icmpv6(struct sk_buff *skb)
++static int ipv6_mc_check_icmpv6(struct sk_buff *skb)
+ {
+ 	unsigned int len = skb_transport_offset(skb) + sizeof(struct icmp6hdr);
+ 	unsigned int transport_len = ipv6_transport_len(skb);
+@@ -150,7 +150,6 @@ int ipv6_mc_check_icmpv6(struct sk_buff *skb)
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL(ipv6_mc_check_icmpv6);
+ 
+ /**
+  * ipv6_mc_check_mld - checks whether this is a sane MLD packet
+@@ -161,7 +160,10 @@ EXPORT_SYMBOL(ipv6_mc_check_icmpv6);
+  *
+  * -EINVAL: A broken packet was detected, i.e. it violates some internet
+  *  standard
+- * -ENOMSG: IP header validation succeeded but it is not an MLD packet.
++ * -ENOMSG: IP header validation succeeded but it is not an ICMPv6 packet
++ *  with a hop-by-hop option.
++ * -ENODATA: IP+ICMPv6 header with hop-by-hop option validation succeeded
++ *  but it is not an MLD packet.
+  * -ENOMEM: A memory allocation failure happened.
+  *
+  * Caller needs to set the skb network header and free any returned skb if it
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 1b9c82616606b..0331f3a3c40e0 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -1141,8 +1141,11 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 	if (local->hw.wiphy->max_scan_ie_len)
+ 		local->hw.wiphy->max_scan_ie_len -= local->scan_ies_len;
+ 
+-	WARN_ON(!ieee80211_cs_list_valid(local->hw.cipher_schemes,
+-					 local->hw.n_cipher_schemes));
++	if (WARN_ON(!ieee80211_cs_list_valid(local->hw.cipher_schemes,
++					     local->hw.n_cipher_schemes))) {
++		result = -EINVAL;
++		goto fail_workqueue;
++	}
+ 
+ 	result = ieee80211_init_cipher_suites(local);
+ 	if (result < 0)
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 4bde960e19dc0..65e5d3eb10789 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -399,6 +399,14 @@ static bool mptcp_pending_data_fin(struct sock *sk, u64 *seq)
+ 	return false;
+ }
+ 
++static void mptcp_set_datafin_timeout(const struct sock *sk)
++{
++	struct inet_connection_sock *icsk = inet_csk(sk);
++
++	mptcp_sk(sk)->timer_ival = min(TCP_RTO_MAX,
++				       TCP_RTO_MIN << icsk->icsk_retransmits);
++}
++
+ static void mptcp_set_timeout(const struct sock *sk, const struct sock *ssk)
+ {
+ 	long tout = ssk && inet_csk(ssk)->icsk_pending ?
+@@ -1052,7 +1060,7 @@ out:
+ 	}
+ 
+ 	if (snd_una == READ_ONCE(msk->snd_nxt)) {
+-		if (msk->timer_ival)
++		if (msk->timer_ival && !mptcp_data_fin_enabled(msk))
+ 			mptcp_stop_timer(sk);
+ 	} else {
+ 		mptcp_reset_timer(sk);
+@@ -1275,7 +1283,7 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
+ 	int avail_size;
+ 	size_t ret = 0;
+ 
+-	pr_debug("msk=%p ssk=%p sending dfrag at seq=%lld len=%d already sent=%d",
++	pr_debug("msk=%p ssk=%p sending dfrag at seq=%llu len=%u already sent=%u",
+ 		 msk, ssk, dfrag->data_seq, dfrag->data_len, info->sent);
+ 
+ 	/* compute send limit */
+@@ -1693,7 +1701,7 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ 			if (!msk->first_pending)
+ 				WRITE_ONCE(msk->first_pending, dfrag);
+ 		}
+-		pr_debug("msk=%p dfrag at seq=%lld len=%d sent=%d new=%d", msk,
++		pr_debug("msk=%p dfrag at seq=%llu len=%u sent=%u new=%d", msk,
+ 			 dfrag->data_seq, dfrag->data_len, dfrag->already_sent,
+ 			 !dfrag_collapsed);
+ 
+@@ -2276,8 +2284,19 @@ static void __mptcp_retrans(struct sock *sk)
+ 
+ 	__mptcp_clean_una_wakeup(sk);
+ 	dfrag = mptcp_rtx_head(sk);
+-	if (!dfrag)
++	if (!dfrag) {
++		if (mptcp_data_fin_enabled(msk)) {
++			struct inet_connection_sock *icsk = inet_csk(sk);
++
++			icsk->icsk_retransmits++;
++			mptcp_set_datafin_timeout(sk);
++			mptcp_send_ack(msk);
++
++			goto reset_timer;
++		}
++
+ 		return;
++	}
+ 
+ 	ssk = mptcp_subflow_get_retrans(msk);
+ 	if (!ssk)
+@@ -2460,6 +2479,8 @@ void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how)
+ 			pr_debug("Sending DATA_FIN on subflow %p", ssk);
+ 			mptcp_set_timeout(sk, ssk);
+ 			tcp_send_ack(ssk);
++			if (!mptcp_timer_pending(sk))
++				mptcp_reset_timer(sk);
+ 		}
+ 		break;
+ 	}
+diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c
+index 9ae14270c543e..2b00f7f47693b 100644
+--- a/net/netfilter/nf_tables_offload.c
++++ b/net/netfilter/nf_tables_offload.c
+@@ -45,6 +45,48 @@ void nft_flow_rule_set_addr_type(struct nft_flow_rule *flow,
+ 		offsetof(struct nft_flow_key, control);
+ }
+ 
++struct nft_offload_ethertype {
++	__be16 value;
++	__be16 mask;
++};
++
++static void nft_flow_rule_transfer_vlan(struct nft_offload_ctx *ctx,
++					struct nft_flow_rule *flow)
++{
++	struct nft_flow_match *match = &flow->match;
++	struct nft_offload_ethertype ethertype;
++
++	if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_CONTROL) &&
++	    match->key.basic.n_proto != htons(ETH_P_8021Q) &&
++	    match->key.basic.n_proto != htons(ETH_P_8021AD))
++		return;
++
++	ethertype.value = match->key.basic.n_proto;
++	ethertype.mask = match->mask.basic.n_proto;
++
++	if (match->dissector.used_keys & BIT(FLOW_DISSECTOR_KEY_VLAN) &&
++	    (match->key.vlan.vlan_tpid == htons(ETH_P_8021Q) ||
++	     match->key.vlan.vlan_tpid == htons(ETH_P_8021AD))) {
++		match->key.basic.n_proto = match->key.cvlan.vlan_tpid;
++		match->mask.basic.n_proto = match->mask.cvlan.vlan_tpid;
++		match->key.cvlan.vlan_tpid = match->key.vlan.vlan_tpid;
++		match->mask.cvlan.vlan_tpid = match->mask.vlan.vlan_tpid;
++		match->key.vlan.vlan_tpid = ethertype.value;
++		match->mask.vlan.vlan_tpid = ethertype.mask;
++		match->dissector.offset[FLOW_DISSECTOR_KEY_CVLAN] =
++			offsetof(struct nft_flow_key, cvlan);
++		match->dissector.used_keys |= BIT(FLOW_DISSECTOR_KEY_CVLAN);
++	} else {
++		match->key.basic.n_proto = match->key.vlan.vlan_tpid;
++		match->mask.basic.n_proto = match->mask.vlan.vlan_tpid;
++		match->key.vlan.vlan_tpid = ethertype.value;
++		match->mask.vlan.vlan_tpid = ethertype.mask;
++		match->dissector.offset[FLOW_DISSECTOR_KEY_VLAN] =
++			offsetof(struct nft_flow_key, vlan);
++		match->dissector.used_keys |= BIT(FLOW_DISSECTOR_KEY_VLAN);
++	}
++}
++
+ struct nft_flow_rule *nft_flow_rule_create(struct net *net,
+ 					   const struct nft_rule *rule)
+ {
+@@ -89,6 +131,8 @@ struct nft_flow_rule *nft_flow_rule_create(struct net *net,
+ 
+ 		expr = nft_expr_next(expr);
+ 	}
++	nft_flow_rule_transfer_vlan(ctx, flow);
++
+ 	flow->proto = ctx->dep.l3num;
+ 	kfree(ctx);
+ 
+diff --git a/net/netfilter/nft_cmp.c b/net/netfilter/nft_cmp.c
+index eb6a43a180bba..47b6d05f1ae69 100644
+--- a/net/netfilter/nft_cmp.c
++++ b/net/netfilter/nft_cmp.c
+@@ -114,19 +114,56 @@ nla_put_failure:
+ 	return -1;
+ }
+ 
++union nft_cmp_offload_data {
++	u16	val16;
++	u32	val32;
++	u64	val64;
++};
++
++static void nft_payload_n2h(union nft_cmp_offload_data *data,
++			    const u8 *val, u32 len)
++{
++	switch (len) {
++	case 2:
++		data->val16 = ntohs(*((u16 *)val));
++		break;
++	case 4:
++		data->val32 = ntohl(*((u32 *)val));
++		break;
++	case 8:
++		data->val64 = be64_to_cpu(*((u64 *)val));
++		break;
++	default:
++		WARN_ON_ONCE(1);
++		break;
++	}
++}
++
+ static int __nft_cmp_offload(struct nft_offload_ctx *ctx,
+ 			     struct nft_flow_rule *flow,
+ 			     const struct nft_cmp_expr *priv)
+ {
+ 	struct nft_offload_reg *reg = &ctx->regs[priv->sreg];
++	union nft_cmp_offload_data _data, _datamask;
+ 	u8 *mask = (u8 *)&flow->match.mask;
+ 	u8 *key = (u8 *)&flow->match.key;
++	u8 *data, *datamask;
+ 
+ 	if (priv->op != NFT_CMP_EQ || priv->len > reg->len)
+ 		return -EOPNOTSUPP;
+ 
+-	memcpy(key + reg->offset, &priv->data, reg->len);
+-	memcpy(mask + reg->offset, &reg->mask, reg->len);
++	if (reg->flags & NFT_OFFLOAD_F_NETWORK2HOST) {
++		nft_payload_n2h(&_data, (u8 *)&priv->data, reg->len);
++		nft_payload_n2h(&_datamask, (u8 *)&reg->mask, reg->len);
++		data = (u8 *)&_data;
++		datamask = (u8 *)&_datamask;
++	} else {
++		data = (u8 *)&priv->data;
++		datamask = (u8 *)&reg->mask;
++	}
++
++	memcpy(key + reg->offset, data, reg->len);
++	memcpy(mask + reg->offset, datamask, reg->len);
+ 
+ 	flow->match.dissector.used_keys |= BIT(reg->key);
+ 	flow->match.dissector.offset[reg->key] = reg->base_offset;
+diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c
+index cb1c8c2318803..501c5b24cc396 100644
+--- a/net/netfilter/nft_payload.c
++++ b/net/netfilter/nft_payload.c
+@@ -226,8 +226,9 @@ static int nft_payload_offload_ll(struct nft_offload_ctx *ctx,
+ 		if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
+ 			return -EOPNOTSUPP;
+ 
+-		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_VLAN, vlan,
+-				  vlan_tci, sizeof(__be16), reg);
++		NFT_OFFLOAD_MATCH_FLAGS(FLOW_DISSECTOR_KEY_VLAN, vlan,
++					vlan_tci, sizeof(__be16), reg,
++					NFT_OFFLOAD_F_NETWORK2HOST);
+ 		break;
+ 	case offsetof(struct vlan_ethhdr, h_vlan_encapsulated_proto):
+ 		if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
+@@ -241,16 +242,18 @@ static int nft_payload_offload_ll(struct nft_offload_ctx *ctx,
+ 		if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
+ 			return -EOPNOTSUPP;
+ 
+-		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_CVLAN, vlan,
+-				  vlan_tci, sizeof(__be16), reg);
++		NFT_OFFLOAD_MATCH_FLAGS(FLOW_DISSECTOR_KEY_CVLAN, cvlan,
++					vlan_tci, sizeof(__be16), reg,
++					NFT_OFFLOAD_F_NETWORK2HOST);
+ 		break;
+ 	case offsetof(struct vlan_ethhdr, h_vlan_encapsulated_proto) +
+ 							sizeof(struct vlan_hdr):
+ 		if (!nft_payload_offload_mask(reg, priv->len, sizeof(__be16)))
+ 			return -EOPNOTSUPP;
+ 
+-		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_CVLAN, vlan,
++		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_CVLAN, cvlan,
+ 				  vlan_tpid, sizeof(__be16), reg);
++		nft_offload_set_dependency(ctx, NFT_OFFLOAD_DEP_NETWORK);
+ 		break;
+ 	default:
+ 		return -EOPNOTSUPP;
+diff --git a/net/nfc/digital_dep.c b/net/nfc/digital_dep.c
+index 5971fb6f51cc7..dc21b4141b0af 100644
+--- a/net/nfc/digital_dep.c
++++ b/net/nfc/digital_dep.c
+@@ -1273,6 +1273,8 @@ static void digital_tg_recv_dep_req(struct nfc_digital_dev *ddev, void *arg,
+ 	}
+ 
+ 	rc = nfc_tm_data_received(ddev->nfc_dev, resp);
++	if (rc)
++		resp = NULL;
+ 
+ exit:
+ 	kfree_skb(ddev->chaining_skb);
+diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
+index a3b46f8888033..53dbe733f9981 100644
+--- a/net/nfc/llcp_sock.c
++++ b/net/nfc/llcp_sock.c
+@@ -109,12 +109,14 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
+ 					  GFP_KERNEL);
+ 	if (!llcp_sock->service_name) {
+ 		nfc_llcp_local_put(llcp_sock->local);
++		llcp_sock->local = NULL;
+ 		ret = -ENOMEM;
+ 		goto put_dev;
+ 	}
+ 	llcp_sock->ssap = nfc_llcp_get_sdp_ssap(local, llcp_sock);
+ 	if (llcp_sock->ssap == LLCP_SAP_MAX) {
+ 		nfc_llcp_local_put(llcp_sock->local);
++		llcp_sock->local = NULL;
+ 		kfree(llcp_sock->service_name);
+ 		llcp_sock->service_name = NULL;
+ 		ret = -EADDRINUSE;
+@@ -709,6 +711,7 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
+ 	llcp_sock->ssap = nfc_llcp_get_local_ssap(local);
+ 	if (llcp_sock->ssap == LLCP_SAP_MAX) {
+ 		nfc_llcp_local_put(llcp_sock->local);
++		llcp_sock->local = NULL;
+ 		ret = -ENOMEM;
+ 		goto put_dev;
+ 	}
+@@ -756,6 +759,7 @@ sock_unlink:
+ sock_llcp_release:
+ 	nfc_llcp_put_ssap(local, llcp_sock->ssap);
+ 	nfc_llcp_local_put(llcp_sock->local);
++	llcp_sock->local = NULL;
+ 
+ put_dev:
+ 	nfc_put_device(dev);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index e24b2841c643a..9611e41c7b8be 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -1359,7 +1359,7 @@ static unsigned int fanout_demux_rollover(struct packet_fanout *f,
+ 	struct packet_sock *po, *po_next, *po_skip = NULL;
+ 	unsigned int i, j, room = ROOM_NONE;
+ 
+-	po = pkt_sk(f->arr[idx]);
++	po = pkt_sk(rcu_dereference(f->arr[idx]));
+ 
+ 	if (try_self) {
+ 		room = packet_rcv_has_room(po, skb);
+@@ -1371,7 +1371,7 @@ static unsigned int fanout_demux_rollover(struct packet_fanout *f,
+ 
+ 	i = j = min_t(int, po->rollover->sock, num - 1);
+ 	do {
+-		po_next = pkt_sk(f->arr[i]);
++		po_next = pkt_sk(rcu_dereference(f->arr[i]));
+ 		if (po_next != po_skip && !READ_ONCE(po_next->pressure) &&
+ 		    packet_rcv_has_room(po_next, skb) == ROOM_NORMAL) {
+ 			if (i != j)
+@@ -1466,7 +1466,7 @@ static int packet_rcv_fanout(struct sk_buff *skb, struct net_device *dev,
+ 	if (fanout_has_flag(f, PACKET_FANOUT_FLAG_ROLLOVER))
+ 		idx = fanout_demux_rollover(f, skb, idx, true, num);
+ 
+-	po = pkt_sk(f->arr[idx]);
++	po = pkt_sk(rcu_dereference(f->arr[idx]));
+ 	return po->prot_hook.func(skb, dev, &po->prot_hook, orig_dev);
+ }
+ 
+@@ -1480,7 +1480,7 @@ static void __fanout_link(struct sock *sk, struct packet_sock *po)
+ 	struct packet_fanout *f = po->fanout;
+ 
+ 	spin_lock(&f->lock);
+-	f->arr[f->num_members] = sk;
++	rcu_assign_pointer(f->arr[f->num_members], sk);
+ 	smp_wmb();
+ 	f->num_members++;
+ 	if (f->num_members == 1)
+@@ -1495,11 +1495,14 @@ static void __fanout_unlink(struct sock *sk, struct packet_sock *po)
+ 
+ 	spin_lock(&f->lock);
+ 	for (i = 0; i < f->num_members; i++) {
+-		if (f->arr[i] == sk)
++		if (rcu_dereference_protected(f->arr[i],
++					      lockdep_is_held(&f->lock)) == sk)
+ 			break;
+ 	}
+ 	BUG_ON(i >= f->num_members);
+-	f->arr[i] = f->arr[f->num_members - 1];
++	rcu_assign_pointer(f->arr[i],
++			   rcu_dereference_protected(f->arr[f->num_members - 1],
++						     lockdep_is_held(&f->lock)));
+ 	f->num_members--;
+ 	if (f->num_members == 0)
+ 		__dev_remove_pack(&f->prot_hook);
+diff --git a/net/packet/internal.h b/net/packet/internal.h
+index 5f61e59ebbffa..48af35b1aed25 100644
+--- a/net/packet/internal.h
++++ b/net/packet/internal.h
+@@ -94,7 +94,7 @@ struct packet_fanout {
+ 	spinlock_t		lock;
+ 	refcount_t		sk_ref;
+ 	struct packet_type	prot_hook ____cacheline_aligned_in_smp;
+-	struct sock		*arr[];
++	struct sock	__rcu	*arr[];
+ };
+ 
+ struct packet_rollover {
+diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
+index 16e888a9601dd..48fdf7293deaa 100644
+--- a/net/sched/act_ct.c
++++ b/net/sched/act_ct.c
+@@ -732,7 +732,8 @@ static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb,
+ #endif
+ 	}
+ 
+-	*qdisc_skb_cb(skb) = cb;
++	if (err != -EINPROGRESS)
++		*qdisc_skb_cb(skb) = cb;
+ 	skb_clear_hash(skb);
+ 	skb->ignore_df = 1;
+ 	return err;
+@@ -967,7 +968,7 @@ static int tcf_ct_act(struct sk_buff *skb, const struct tc_action *a,
+ 	err = tcf_ct_handle_fragments(net, skb, family, p->zone, &defrag);
+ 	if (err == -EINPROGRESS) {
+ 		retval = TC_ACT_STOLEN;
+-		goto out;
++		goto out_clear;
+ 	}
+ 	if (err)
+ 		goto drop;
+@@ -1030,7 +1031,6 @@ do_nat:
+ out_push:
+ 	skb_push_rcsum(skb, nh_ofs);
+ 
+-out:
+ 	qdisc_skb_cb(skb)->post_ct = true;
+ out_clear:
+ 	tcf_action_update_bstats(&c->common, skb);
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index b9b3d899a611c..4ae428f2f2c57 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -357,6 +357,18 @@ static struct sctp_af *sctp_sockaddr_af(struct sctp_sock *opt,
+ 	return af;
+ }
+ 
++static void sctp_auto_asconf_init(struct sctp_sock *sp)
++{
++	struct net *net = sock_net(&sp->inet.sk);
++
++	if (net->sctp.default_auto_asconf) {
++		spin_lock(&net->sctp.addr_wq_lock);
++		list_add_tail(&sp->auto_asconf_list, &net->sctp.auto_asconf_splist);
++		spin_unlock(&net->sctp.addr_wq_lock);
++		sp->do_auto_asconf = 1;
++	}
++}
++
+ /* Bind a local address either to an endpoint or to an association.  */
+ static int sctp_do_bind(struct sock *sk, union sctp_addr *addr, int len)
+ {
+@@ -418,8 +430,10 @@ static int sctp_do_bind(struct sock *sk, union sctp_addr *addr, int len)
+ 		return -EADDRINUSE;
+ 
+ 	/* Refresh ephemeral port.  */
+-	if (!bp->port)
++	if (!bp->port) {
+ 		bp->port = inet_sk(sk)->inet_num;
++		sctp_auto_asconf_init(sp);
++	}
+ 
+ 	/* Add the address to the bind address list.
+ 	 * Use GFP_ATOMIC since BHs will be disabled.
+@@ -1520,9 +1534,11 @@ static void sctp_close(struct sock *sk, long timeout)
+ 
+ 	/* Supposedly, no process has access to the socket, but
+ 	 * the net layers still may.
++	 * Also, sctp_destroy_sock() needs to be called with addr_wq_lock
++	 * held, and that lock should be grabbed before the socket lock.
+ 	 */
+-	local_bh_disable();
+-	bh_lock_sock(sk);
++	spin_lock_bh(&net->sctp.addr_wq_lock);
++	bh_lock_sock_nested(sk);
+ 
+ 	/* Hold the sock, since sk_common_release() will put sock_put()
+ 	 * and we have just a little more cleanup.
+@@ -1531,7 +1547,7 @@ static void sctp_close(struct sock *sk, long timeout)
+ 	sk_common_release(sk);
+ 
+ 	bh_unlock_sock(sk);
+-	local_bh_enable();
++	spin_unlock_bh(&net->sctp.addr_wq_lock);
+ 
+ 	sock_put(sk);
+ 
+@@ -4991,16 +5007,6 @@ static int sctp_init_sock(struct sock *sk)
+ 	sk_sockets_allocated_inc(sk);
+ 	sock_prot_inuse_add(net, sk->sk_prot, 1);
+ 
+-	if (net->sctp.default_auto_asconf) {
+-		spin_lock(&sock_net(sk)->sctp.addr_wq_lock);
+-		list_add_tail(&sp->auto_asconf_list,
+-		    &net->sctp.auto_asconf_splist);
+-		sp->do_auto_asconf = 1;
+-		spin_unlock(&sock_net(sk)->sctp.addr_wq_lock);
+-	} else {
+-		sp->do_auto_asconf = 0;
+-	}
+-
+ 	local_bh_enable();
+ 
+ 	return 0;
+@@ -5025,9 +5031,7 @@ static void sctp_destroy_sock(struct sock *sk)
+ 
+ 	if (sp->do_auto_asconf) {
+ 		sp->do_auto_asconf = 0;
+-		spin_lock_bh(&sock_net(sk)->sctp.addr_wq_lock);
+ 		list_del(&sp->auto_asconf_list);
+-		spin_unlock_bh(&sock_net(sk)->sctp.addr_wq_lock);
+ 	}
+ 	sctp_endpoint_free(sp->ep);
+ 	local_bh_disable();
+@@ -9398,6 +9402,8 @@ static int sctp_sock_migrate(struct sock *oldsk, struct sock *newsk,
+ 			return err;
+ 	}
+ 
++	sctp_auto_asconf_init(newsp);
++
+ 	/* Move any messages in the old socket's receive queue that are for the
+ 	 * peeled off association to the new socket's receive queue.
+ 	 */
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index 97710ce36047c..c89ce47c56cf2 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -1492,6 +1492,8 @@ int tipc_crypto_start(struct tipc_crypto **crypto, struct net *net,
+ 	/* Allocate statistic structure */
+ 	c->stats = alloc_percpu_gfp(struct tipc_crypto_stats, GFP_ATOMIC);
+ 	if (!c->stats) {
++		if (c->wq)
++			destroy_workqueue(c->wq);
+ 		kfree_sensitive(c);
+ 		return -ENOMEM;
+ 	}
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index e4370b1b74947..902cb6dd710bd 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -733,6 +733,23 @@ static int virtio_transport_reset_no_sock(const struct virtio_transport *t,
+ 	return t->send_pkt(reply);
+ }
+ 
++/* This function should be called with sk_lock held and SOCK_DONE set */
++static void virtio_transport_remove_sock(struct vsock_sock *vsk)
++{
++	struct virtio_vsock_sock *vvs = vsk->trans;
++	struct virtio_vsock_pkt *pkt, *tmp;
++
++	/* We don't need to take rx_lock, as the socket is closing and we are
++	 * removing it.
++	 */
++	list_for_each_entry_safe(pkt, tmp, &vvs->rx_queue, list) {
++		list_del(&pkt->list);
++		virtio_transport_free_pkt(pkt);
++	}
++
++	vsock_remove_sock(vsk);
++}
++
+ static void virtio_transport_wait_close(struct sock *sk, long timeout)
+ {
+ 	if (timeout) {
+@@ -765,7 +782,7 @@ static void virtio_transport_do_close(struct vsock_sock *vsk,
+ 	    (!cancel_timeout || cancel_delayed_work(&vsk->close_work))) {
+ 		vsk->close_work_scheduled = false;
+ 
+-		vsock_remove_sock(vsk);
++		virtio_transport_remove_sock(vsk);
+ 
+ 		/* Release refcnt obtained when we scheduled the timeout */
+ 		sock_put(sk);
+@@ -828,22 +845,15 @@ static bool virtio_transport_close(struct vsock_sock *vsk)
+ 
+ void virtio_transport_release(struct vsock_sock *vsk)
+ {
+-	struct virtio_vsock_sock *vvs = vsk->trans;
+-	struct virtio_vsock_pkt *pkt, *tmp;
+ 	struct sock *sk = &vsk->sk;
+ 	bool remove_sock = true;
+ 
+ 	if (sk->sk_type == SOCK_STREAM)
+ 		remove_sock = virtio_transport_close(vsk);
+ 
+-	list_for_each_entry_safe(pkt, tmp, &vvs->rx_queue, list) {
+-		list_del(&pkt->list);
+-		virtio_transport_free_pkt(pkt);
+-	}
+-
+ 	if (remove_sock) {
+ 		sock_set_flag(sk, SOCK_DONE);
+-		vsock_remove_sock(vsk);
++		virtio_transport_remove_sock(vsk);
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(virtio_transport_release);
+diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
+index 8b65323207db5..1c9ecb18b8e64 100644
+--- a/net/vmw_vsock/vmci_transport.c
++++ b/net/vmw_vsock/vmci_transport.c
+@@ -568,8 +568,7 @@ vmci_transport_queue_pair_alloc(struct vmci_qp **qpair,
+ 			       peer, flags, VMCI_NO_PRIVILEGE_FLAGS);
+ out:
+ 	if (err < 0) {
+-		pr_err("Could not attach to queue pair with %d\n",
+-		       err);
++		pr_err_once("Could not attach to queue pair with %d\n", err);
+ 		err = vmci_transport_error_to_vsock_error(err);
+ 	}
+ 
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index 758eb7d2a7068..caa8eafbd5834 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1751,6 +1751,8 @@ cfg80211_bss_update(struct cfg80211_registered_device *rdev,
+ 
+ 		if (rdev->bss_entries >= bss_entries_limit &&
+ 		    !cfg80211_bss_expire_oldest(rdev)) {
++			if (!list_empty(&new->hidden_list))
++				list_del(&new->hidden_list);
+ 			kfree(new);
+ 			goto drop;
+ 		}
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 4faabd1ecfd1b..143979ea41657 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -454,12 +454,16 @@ static int xsk_generic_xmit(struct sock *sk)
+ 	struct sk_buff *skb;
+ 	unsigned long flags;
+ 	int err = 0;
++	u32 hr, tr;
+ 
+ 	mutex_lock(&xs->mutex);
+ 
+ 	if (xs->queue_id >= xs->dev->real_num_tx_queues)
+ 		goto out;
+ 
++	hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(xs->dev->needed_headroom));
++	tr = xs->dev->needed_tailroom;
++
+ 	while (xskq_cons_peek_desc(xs->tx, &desc, xs->pool)) {
+ 		char *buffer;
+ 		u64 addr;
+@@ -471,11 +475,13 @@ static int xsk_generic_xmit(struct sock *sk)
+ 		}
+ 
+ 		len = desc.len;
+-		skb = sock_alloc_send_skb(sk, len, 1, &err);
++		skb = sock_alloc_send_skb(sk, hr + len + tr, 1, &err);
+ 		if (unlikely(!skb))
+ 			goto out;
+ 
++		skb_reserve(skb, hr);
+ 		skb_put(skb, len);
++
+ 		addr = desc.addr;
+ 		buffer = xsk_buff_raw_get_data(xs->pool, addr);
+ 		err = skb_store_bits(skb, 0, buffer, len);
+diff --git a/samples/kfifo/bytestream-example.c b/samples/kfifo/bytestream-example.c
+index c406f03ee5519..5a90aa5278775 100644
+--- a/samples/kfifo/bytestream-example.c
++++ b/samples/kfifo/bytestream-example.c
+@@ -122,8 +122,10 @@ static ssize_t fifo_write(struct file *file, const char __user *buf,
+ 	ret = kfifo_from_user(&test, buf, count, &copied);
+ 
+ 	mutex_unlock(&write_lock);
++	if (ret)
++		return ret;
+ 
+-	return ret ? ret : copied;
++	return copied;
+ }
+ 
+ static ssize_t fifo_read(struct file *file, char __user *buf,
+@@ -138,8 +140,10 @@ static ssize_t fifo_read(struct file *file, char __user *buf,
+ 	ret = kfifo_to_user(&test, buf, count, &copied);
+ 
+ 	mutex_unlock(&read_lock);
++	if (ret)
++		return ret;
+ 
+-	return ret ? ret : copied;
++	return copied;
+ }
+ 
+ static const struct proc_ops fifo_proc_ops = {
+diff --git a/samples/kfifo/inttype-example.c b/samples/kfifo/inttype-example.c
+index 78977fc4a23f7..e5403d8c971a5 100644
+--- a/samples/kfifo/inttype-example.c
++++ b/samples/kfifo/inttype-example.c
+@@ -115,8 +115,10 @@ static ssize_t fifo_write(struct file *file, const char __user *buf,
+ 	ret = kfifo_from_user(&test, buf, count, &copied);
+ 
+ 	mutex_unlock(&write_lock);
++	if (ret)
++		return ret;
+ 
+-	return ret ? ret : copied;
++	return copied;
+ }
+ 
+ static ssize_t fifo_read(struct file *file, char __user *buf,
+@@ -131,8 +133,10 @@ static ssize_t fifo_read(struct file *file, char __user *buf,
+ 	ret = kfifo_to_user(&test, buf, count, &copied);
+ 
+ 	mutex_unlock(&read_lock);
++	if (ret)
++		return ret;
+ 
+-	return ret ? ret : copied;
++	return copied;
+ }
+ 
+ static const struct proc_ops fifo_proc_ops = {
+diff --git a/samples/kfifo/record-example.c b/samples/kfifo/record-example.c
+index c507998a2617c..f64f3d62d6c2a 100644
+--- a/samples/kfifo/record-example.c
++++ b/samples/kfifo/record-example.c
+@@ -129,8 +129,10 @@ static ssize_t fifo_write(struct file *file, const char __user *buf,
+ 	ret = kfifo_from_user(&test, buf, count, &copied);
+ 
+ 	mutex_unlock(&write_lock);
++	if (ret)
++		return ret;
+ 
+-	return ret ? ret : copied;
++	return copied;
+ }
+ 
+ static ssize_t fifo_read(struct file *file, char __user *buf,
+@@ -145,8 +147,10 @@ static ssize_t fifo_read(struct file *file, char __user *buf,
+ 	ret = kfifo_to_user(&test, buf, count, &copied);
+ 
+ 	mutex_unlock(&read_lock);
++	if (ret)
++		return ret;
+ 
+-	return ret ? ret : copied;
++	return copied;
+ }
+ 
+ static const struct proc_ops fifo_proc_ops = {
+diff --git a/security/integrity/ima/ima_template.c b/security/integrity/ima/ima_template.c
+index e22e510ae92d4..4e081e6500476 100644
+--- a/security/integrity/ima/ima_template.c
++++ b/security/integrity/ima/ima_template.c
+@@ -494,8 +494,8 @@ int ima_restore_measurement_list(loff_t size, void *buf)
+ 			}
+ 		}
+ 
+-		entry->pcr = !ima_canonical_fmt ? *(hdr[HDR_PCR].data) :
+-			     le32_to_cpu(*(hdr[HDR_PCR].data));
++		entry->pcr = !ima_canonical_fmt ? *(u32 *)(hdr[HDR_PCR].data) :
++			     le32_to_cpu(*(u32 *)(hdr[HDR_PCR].data));
+ 		ret = ima_restore_measurement_entry(entry);
+ 		if (ret < 0)
+ 			break;
+diff --git a/security/keys/trusted-keys/trusted_tpm1.c b/security/keys/trusted-keys/trusted_tpm1.c
+index 493eb91ed017f..1e13c9f7ea8c1 100644
+--- a/security/keys/trusted-keys/trusted_tpm1.c
++++ b/security/keys/trusted-keys/trusted_tpm1.c
+@@ -791,13 +791,33 @@ static int getoptions(char *c, struct trusted_key_payload *pay,
+ 				return -EINVAL;
+ 			break;
+ 		case Opt_blobauth:
+-			if (strlen(args[0].from) != 2 * SHA1_DIGEST_SIZE)
+-				return -EINVAL;
+-			res = hex2bin(opt->blobauth, args[0].from,
+-				      SHA1_DIGEST_SIZE);
+-			if (res < 0)
+-				return -EINVAL;
++			/*
++			 * TPM 1.2 authorizations are sha1 hashes passed in as
++			 * hex strings.  TPM 2.0 authorizations are simple
++			 * passwords (although it can take a hash as well)
++			 */
++			opt->blobauth_len = strlen(args[0].from);
++
++			if (opt->blobauth_len == 2 * TPM_DIGEST_SIZE) {
++				res = hex2bin(opt->blobauth, args[0].from,
++					      TPM_DIGEST_SIZE);
++				if (res < 0)
++					return -EINVAL;
++
++				opt->blobauth_len = TPM_DIGEST_SIZE;
++				break;
++			}
++
++			if (tpm2 && opt->blobauth_len <= sizeof(opt->blobauth)) {
++				memcpy(opt->blobauth, args[0].from,
++				       opt->blobauth_len);
++				break;
++			}
++
++			return -EINVAL;
++
+ 			break;
++
+ 		case Opt_migratable:
+ 			if (*args[0].from == '0')
+ 				pay->migratable = 0;
+diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c
+index c87c4df8703d4..4c19d3abddbee 100644
+--- a/security/keys/trusted-keys/trusted_tpm2.c
++++ b/security/keys/trusted-keys/trusted_tpm2.c
+@@ -97,10 +97,12 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
+ 			     TPM_DIGEST_SIZE);
+ 
+ 	/* sensitive */
+-	tpm_buf_append_u16(&buf, 4 + TPM_DIGEST_SIZE + payload->key_len + 1);
++	tpm_buf_append_u16(&buf, 4 + options->blobauth_len + payload->key_len + 1);
++
++	tpm_buf_append_u16(&buf, options->blobauth_len);
++	if (options->blobauth_len)
++		tpm_buf_append(&buf, options->blobauth, options->blobauth_len);
+ 
+-	tpm_buf_append_u16(&buf, TPM_DIGEST_SIZE);
+-	tpm_buf_append(&buf, options->blobauth, TPM_DIGEST_SIZE);
+ 	tpm_buf_append_u16(&buf, payload->key_len + 1);
+ 	tpm_buf_append(&buf, payload->key, payload->key_len);
+ 	tpm_buf_append_u8(&buf, payload->migratable);
+@@ -265,7 +267,7 @@ static int tpm2_unseal_cmd(struct tpm_chip *chip,
+ 			     NULL /* nonce */, 0,
+ 			     TPM2_SA_CONTINUE_SESSION,
+ 			     options->blobauth /* hmac */,
+-			     TPM_DIGEST_SIZE);
++			     options->blobauth_len);
+ 
+ 	rc = tpm_transmit_cmd(chip, &buf, 6, "unsealing");
+ 	if (rc > 0)
+diff --git a/security/selinux/include/classmap.h b/security/selinux/include/classmap.h
+index ba2e01a6955cb..62d19bccf3de1 100644
+--- a/security/selinux/include/classmap.h
++++ b/security/selinux/include/classmap.h
+@@ -242,11 +242,12 @@ struct security_class_mapping secclass_map[] = {
+ 	{ "infiniband_endport",
+ 	  { "manage_subnet", NULL } },
+ 	{ "bpf",
+-	  {"map_create", "map_read", "map_write", "prog_load", "prog_run"} },
++	  { "map_create", "map_read", "map_write", "prog_load", "prog_run",
++	    NULL } },
+ 	{ "xdp_socket",
+ 	  { COMMON_SOCK_PERMS, NULL } },
+ 	{ "perf_event",
+-	  {"open", "cpu", "kernel", "tracepoint", "read", "write"} },
++	  { "open", "cpu", "kernel", "tracepoint", "read", "write", NULL } },
+ 	{ "lockdown",
+ 	  { "integrity", "confidentiality", NULL } },
+ 	{ "anon_inode",
+diff --git a/sound/core/init.c b/sound/core/init.c
+index 45f4b01de23fb..ef41f5b3a240d 100644
+--- a/sound/core/init.c
++++ b/sound/core/init.c
+@@ -398,10 +398,8 @@ int snd_card_disconnect(struct snd_card *card)
+ 		return 0;
+ 	}
+ 	card->shutdown = 1;
+-	spin_unlock(&card->files_lock);
+ 
+ 	/* replace file->f_op with special dummy operations */
+-	spin_lock(&card->files_lock);
+ 	list_for_each_entry(mfile, &card->files_list, list) {
+ 		/* it's critical part, use endless loop */
+ 		/* we have no room to fail */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d05d16ddbdf2c..8ec57bd351dfe 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2470,13 +2470,13 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 		      ALC882_FIXUP_ACER_ASPIRE_8930G),
+ 	SND_PCI_QUIRK(0x1025, 0x0146, "Acer Aspire 6935G",
+ 		      ALC882_FIXUP_ACER_ASPIRE_8930G),
++	SND_PCI_QUIRK(0x1025, 0x0142, "Acer Aspire 7730G",
++		      ALC882_FIXUP_ACER_ASPIRE_4930G),
++	SND_PCI_QUIRK(0x1025, 0x0155, "Packard-Bell M5120", ALC882_FIXUP_PB_M5210),
+ 	SND_PCI_QUIRK(0x1025, 0x015e, "Acer Aspire 6930G",
+ 		      ALC882_FIXUP_ACER_ASPIRE_4930G),
+ 	SND_PCI_QUIRK(0x1025, 0x0166, "Acer Aspire 6530G",
+ 		      ALC882_FIXUP_ACER_ASPIRE_4930G),
+-	SND_PCI_QUIRK(0x1025, 0x0142, "Acer Aspire 7730G",
+-		      ALC882_FIXUP_ACER_ASPIRE_4930G),
+-	SND_PCI_QUIRK(0x1025, 0x0155, "Packard-Bell M5120", ALC882_FIXUP_PB_M5210),
+ 	SND_PCI_QUIRK(0x1025, 0x021e, "Acer Aspire 5739G",
+ 		      ALC882_FIXUP_ACER_ASPIRE_4930G),
+ 	SND_PCI_QUIRK(0x1025, 0x0259, "Acer Aspire 5935", ALC889_FIXUP_DAC_ROUTE),
+@@ -2489,11 +2489,11 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x835f, "Asus Eee 1601", ALC888_FIXUP_EEE1601),
+ 	SND_PCI_QUIRK(0x1043, 0x84bc, "ASUS ET2700", ALC887_FIXUP_ASUS_BASS),
+ 	SND_PCI_QUIRK(0x1043, 0x8691, "ASUS ROG Ranger VIII", ALC882_FIXUP_GPIO3),
++	SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP),
++	SND_PCI_QUIRK(0x104d, 0x9044, "Sony VAIO AiO", ALC882_FIXUP_NO_PRIMARY_HP),
+ 	SND_PCI_QUIRK(0x104d, 0x9047, "Sony Vaio TT", ALC889_FIXUP_VAIO_TT),
+ 	SND_PCI_QUIRK(0x104d, 0x905a, "Sony Vaio Z", ALC882_FIXUP_NO_PRIMARY_HP),
+ 	SND_PCI_QUIRK(0x104d, 0x9060, "Sony Vaio VPCL14M1R", ALC882_FIXUP_NO_PRIMARY_HP),
+-	SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP),
+-	SND_PCI_QUIRK(0x104d, 0x9044, "Sony VAIO AiO", ALC882_FIXUP_NO_PRIMARY_HP),
+ 
+ 	/* All Apple entries are in codec SSIDs */
+ 	SND_PCI_QUIRK(0x106b, 0x00a0, "MacBookPro 3,1", ALC889_FIXUP_MBP_VREF),
+@@ -2536,9 +2536,19 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1462, 0xda57, "MSI Z270-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
+ 	SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3),
+ 	SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX),
++	SND_PCI_QUIRK(0x1558, 0x50d3, "Clevo PC50[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x65e5, "Clevo PC50D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x9501, "Clevo P950HR", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x9506, "Clevo P955HQ", ALC1220_FIXUP_CLEVO_P950),
+-	SND_PCI_QUIRK(0x1558, 0x950A, "Clevo P955H[PR]", ALC1220_FIXUP_CLEVO_P950),
++	SND_PCI_QUIRK(0x1558, 0x950a, "Clevo P955H[PR]", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x95e1, "Clevo P95xER", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x95e2, "Clevo P950ER", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x95e3, "Clevo P955[ER]T", ALC1220_FIXUP_CLEVO_P950),
+@@ -2548,16 +2558,6 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x96e1, "Clevo P960[ER][CDFN]-K", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x97e1, "Clevo P970[ER][CDFN]", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x97e2, "Clevo P970RC-M", ALC1220_FIXUP_CLEVO_P950),
+-	SND_PCI_QUIRK(0x1558, 0x50d3, "Clevo PC50[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x65e5, "Clevo PC50D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD),
+ 	SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD),
+ 	SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Y530", ALC882_FIXUP_LENOVO_Y530),
+@@ -4331,6 +4331,35 @@ static void alc245_fixup_hp_x360_amp(struct hda_codec *codec,
+ 	}
+ }
+ 
++/* toggle GPIO2 at each time stream is started; we use PREPARE state instead */
++static void alc274_hp_envy_pcm_hook(struct hda_pcm_stream *hinfo,
++				    struct hda_codec *codec,
++				    struct snd_pcm_substream *substream,
++				    int action)
++{
++	switch (action) {
++	case HDA_GEN_PCM_ACT_PREPARE:
++		alc_update_gpio_data(codec, 0x04, true);
++		break;
++	case HDA_GEN_PCM_ACT_CLEANUP:
++		alc_update_gpio_data(codec, 0x04, false);
++		break;
++	}
++}
++
++static void alc274_fixup_hp_envy_gpio(struct hda_codec *codec,
++				      const struct hda_fixup *fix,
++				      int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PROBE) {
++		spec->gpio_mask |= 0x04;
++		spec->gpio_dir |= 0x04;
++		spec->gen.pcm_playback_hook = alc274_hp_envy_pcm_hook;
++	}
++}
++
+ static void alc_update_coef_led(struct hda_codec *codec,
+ 				struct alc_coef_led *led,
+ 				bool polarity, bool on)
+@@ -6443,6 +6472,7 @@ enum {
+ 	ALC255_FIXUP_XIAOMI_HEADSET_MIC,
+ 	ALC274_FIXUP_HP_MIC,
+ 	ALC274_FIXUP_HP_HEADSET_MIC,
++	ALC274_FIXUP_HP_ENVY_GPIO,
+ 	ALC256_FIXUP_ASUS_HPE,
+ 	ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
+ 	ALC287_FIXUP_HP_GPIO_LED,
+@@ -7882,6 +7912,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC274_FIXUP_HP_MIC
+ 	},
++	[ALC274_FIXUP_HP_ENVY_GPIO] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc274_fixup_hp_envy_gpio,
++	},
+ 	[ALC256_FIXUP_ASUS_HPE] = {
+ 		.type = HDA_FIXUP_VERBS,
+ 		.v.verbs = (const struct hda_verb[]) {
+@@ -7947,12 +7981,12 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x0349, "Acer AOD260", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x047c, "Acer AC700", ALC269_FIXUP_ACER_AC700),
+ 	SND_PCI_QUIRK(0x1025, 0x072d, "Acer Aspire V5-571G", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
+-	SND_PCI_QUIRK(0x1025, 0x080d, "Acer Aspire V5-122P", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x0740, "Acer AO725", ALC271_FIXUP_HP_GATE_MIC_JACK),
+ 	SND_PCI_QUIRK(0x1025, 0x0742, "Acer AO756", ALC271_FIXUP_HP_GATE_MIC_JACK),
+ 	SND_PCI_QUIRK(0x1025, 0x0762, "Acer Aspire E1-472", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
+ 	SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
+ 	SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS),
++	SND_PCI_QUIRK(0x1025, 0x080d, "Acer Aspire V5-122P", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x0840, "Acer Aspire E1", ALC269VB_FIXUP_ASPIRE_E1_COEF),
+ 	SND_PCI_QUIRK(0x1025, 0x101c, "Acer Veriton N2510G", ALC269_FIXUP_LIFEBOOK),
+ 	SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+@@ -8008,8 +8042,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0738, "Dell Precision 5820", ALC269_FIXUP_NO_SHUTUP),
+ 	SND_PCI_QUIRK(0x1028, 0x075c, "Dell XPS 27 7760", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME),
+-	SND_PCI_QUIRK(0x1028, 0x07b0, "Dell Precision 7520", ALC295_FIXUP_DISABLE_DAC3),
+ 	SND_PCI_QUIRK(0x1028, 0x0798, "Dell Inspiron 17 7000 Gaming", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
++	SND_PCI_QUIRK(0x1028, 0x07b0, "Dell Precision 7520", ALC295_FIXUP_DISABLE_DAC3),
+ 	SND_PCI_QUIRK(0x1028, 0x080c, "Dell WYSE", ALC225_FIXUP_DELL_WYSE_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x084b, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+ 	SND_PCI_QUIRK(0x1028, 0x084e, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+@@ -8019,8 +8053,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+-	SND_PCI_QUIRK(0x1028, 0x097e, "Dell Precision", ALC289_FIXUP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x097d, "Dell Precision", ALC289_FIXUP_DUAL_SPK),
++	SND_PCI_QUIRK(0x1028, 0x097e, "Dell Precision", ALC289_FIXUP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x098d, "Dell Precision", ALC233_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x09bf, "Dell Precision", ALC233_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0a2e, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),
+@@ -8031,35 +8065,18 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+ 	SND_PCI_QUIRK(0x103c, 0x18e6, "HP", ALC269_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x218b, "HP", ALC269_FIXUP_LIMIT_INT_MIC_BOOST_MUTE_LED),
+-	SND_PCI_QUIRK(0x103c, 0x225f, "HP", ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY),
+-	/* ALC282 */
+ 	SND_PCI_QUIRK(0x103c, 0x21f9, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2210, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2214, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x221b, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
++	SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x103c, 0x2221, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
++	SND_PCI_QUIRK(0x103c, 0x2225, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2236, "HP", ALC269_FIXUP_HP_LINE1_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2237, "HP", ALC269_FIXUP_HP_LINE1_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2238, "HP", ALC269_FIXUP_HP_LINE1_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2239, "HP", ALC269_FIXUP_HP_LINE1_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x224b, "HP", ALC269_FIXUP_HP_LINE1_MIC1_LED),
+-	SND_PCI_QUIRK(0x103c, 0x2268, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x226a, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x226b, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x226e, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x2271, "HP", ALC286_FIXUP_HP_GPIO_LED),
+-	SND_PCI_QUIRK(0x103c, 0x2272, "HP", ALC280_FIXUP_HP_DOCK_PINS),
+-	SND_PCI_QUIRK(0x103c, 0x2273, "HP", ALC280_FIXUP_HP_DOCK_PINS),
+-	SND_PCI_QUIRK(0x103c, 0x229e, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x22b2, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x22b7, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x22bf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x22cf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x22db, "HP", ALC280_FIXUP_HP_9480M),
+-	SND_PCI_QUIRK(0x103c, 0x22dc, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+-	SND_PCI_QUIRK(0x103c, 0x22fb, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+-	/* ALC290 */
+-	SND_PCI_QUIRK(0x103c, 0x221b, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+-	SND_PCI_QUIRK(0x103c, 0x2221, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+-	SND_PCI_QUIRK(0x103c, 0x2225, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2253, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2254, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2255, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+@@ -8067,26 +8084,41 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x2257, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2259, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x225a, "HP", ALC269_FIXUP_HP_DOCK_GPIO_MIC1_LED),
++	SND_PCI_QUIRK(0x103c, 0x225f, "HP", ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY),
+ 	SND_PCI_QUIRK(0x103c, 0x2260, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2263, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2264, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2265, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x2268, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x226a, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x226b, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x226e, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x2271, "HP", ALC286_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2272, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
++	SND_PCI_QUIRK(0x103c, 0x2272, "HP", ALC280_FIXUP_HP_DOCK_PINS),
+ 	SND_PCI_QUIRK(0x103c, 0x2273, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
++	SND_PCI_QUIRK(0x103c, 0x2273, "HP", ALC280_FIXUP_HP_DOCK_PINS),
+ 	SND_PCI_QUIRK(0x103c, 0x2278, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x227f, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2282, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x228b, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x228e, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x229e, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x22b2, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x22b7, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x22bf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x22c4, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x22c5, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x22c7, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x22c8, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x22c4, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x22cf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
++	SND_PCI_QUIRK(0x103c, 0x22db, "HP", ALC280_FIXUP_HP_9480M),
++	SND_PCI_QUIRK(0x103c, 0x22dc, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
++	SND_PCI_QUIRK(0x103c, 0x22fb, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x2334, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2335, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+-	SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x103c, 0x802e, "HP Z240 SFF", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x802f, "HP Z240", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x8077, "HP", ALC256_FIXUP_HP_HEADSET_MIC),
+@@ -8101,6 +8133,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
++	SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO),
+ 	SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8729, "HP", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8730, "HP ProBook 445 G7", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+@@ -8128,16 +8161,18 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x10d0, "ASUS X540LA/X540LJ", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x115d, "Asus 1015E", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x1043, 0x11c0, "ASUS X556UR", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1043, 0x125e, "ASUS Q524UQK", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1271, "ASUS X430UN", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1290, "ASUS X441SA", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x12a0, "ASUS X441UV", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
++	SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ 	SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+ 	SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK),
++	SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x194e, "ASUS UX563FD", ALC294_FIXUP_ASUS_HPE),
+@@ -8150,32 +8185,31 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+-	SND_PCI_QUIRK(0x1043, 0x125e, "ASUS Q524UQK", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE),
+ 	SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
+ 	SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
+-	SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
+ 	SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+ 	SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x83ce, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x8516, "ASUS X101CH", ALC269_FIXUP_ASUS_X101),
+-	SND_PCI_QUIRK(0x104d, 0x90b5, "Sony VAIO Pro 11", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x104d, 0x90b6, "Sony VAIO Pro 13", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x104d, 0x9073, "Sony VAIO", ALC275_FIXUP_SONY_VAIO_GPIO2),
+ 	SND_PCI_QUIRK(0x104d, 0x907b, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
+ 	SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
+ 	SND_PCI_QUIRK(0x104d, 0x9099, "Sony VAIO S13", ALC275_FIXUP_SONY_DISABLE_AAMIX),
++	SND_PCI_QUIRK(0x104d, 0x90b5, "Sony VAIO Pro 11", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x104d, 0x90b6, "Sony VAIO Pro 13", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x10cf, 0x1475, "Lifebook", ALC269_FIXUP_LIFEBOOK),
+ 	SND_PCI_QUIRK(0x10cf, 0x159f, "Lifebook E780", ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT),
+ 	SND_PCI_QUIRK(0x10cf, 0x15dc, "Lifebook T731", ALC269_FIXUP_LIFEBOOK_HP_PIN),
+-	SND_PCI_QUIRK(0x10cf, 0x1757, "Lifebook E752", ALC269_FIXUP_LIFEBOOK_HP_PIN),
+ 	SND_PCI_QUIRK(0x10cf, 0x1629, "Lifebook U7x7", ALC255_FIXUP_LIFEBOOK_U7x7_HEADSET_MIC),
++	SND_PCI_QUIRK(0x10cf, 0x1757, "Lifebook E752", ALC269_FIXUP_LIFEBOOK_HP_PIN),
+ 	SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
+ 	SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
++	SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x10ec, 0x1230, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x1252, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+ 	SND_PCI_QUIRK(0x10ec, 0x1254, "Intel Reference board", ALC295_FIXUP_CHROME_BOOK),
+@@ -8185,9 +8219,9 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ 	SND_PCI_QUIRK(0x144d, 0xc189, "Samsung Galaxy Flex Book (NT950QCG-X716)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ 	SND_PCI_QUIRK(0x144d, 0xc18a, "Samsung Galaxy Book Ion (NP930XCJ-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+-	SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ 	SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
+ 	SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++	SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ 	SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
+@@ -8243,9 +8277,9 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x21b8, "Thinkpad Edge 14", ALC269_FIXUP_SKU_IGNORE),
+ 	SND_PCI_QUIRK(0x17aa, 0x21ca, "Thinkpad L412", ALC269_FIXUP_SKU_IGNORE),
+ 	SND_PCI_QUIRK(0x17aa, 0x21e9, "Thinkpad Edge 15", ALC269_FIXUP_SKU_IGNORE),
++	SND_PCI_QUIRK(0x17aa, 0x21f3, "Thinkpad T430", ALC269_FIXUP_LENOVO_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x21f6, "Thinkpad T530", ALC269_FIXUP_LENOVO_DOCK_LIMIT_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x21fa, "Thinkpad X230", ALC269_FIXUP_LENOVO_DOCK),
+-	SND_PCI_QUIRK(0x17aa, 0x21f3, "Thinkpad T430", ALC269_FIXUP_LENOVO_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x21fb, "Thinkpad T430s", ALC269_FIXUP_LENOVO_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x2203, "Thinkpad X230 Tablet", ALC269_FIXUP_LENOVO_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x2208, "Thinkpad T431s", ALC269_FIXUP_LENOVO_DOCK),
+@@ -8289,6 +8323,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
++	SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+ 	SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x501e, "Thinkpad L440", ALC292_FIXUP_TPT440_DOCK),
+@@ -8307,20 +8342,18 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x511e, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x511f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+-	SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+ 	SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+ 	SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
+ 	SND_PCI_QUIRK(0x1b35, 0x1235, "CZC B20", ALC269_FIXUP_CZC_B20),
+ 	SND_PCI_QUIRK(0x1b35, 0x1236, "CZC TMI", ALC269_FIXUP_CZC_TMI),
+ 	SND_PCI_QUIRK(0x1b35, 0x1237, "CZC L101", ALC269_FIXUP_CZC_L101),
+ 	SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
++	SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
++	SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
+ 	SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+-	SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
+-	SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
+ 	SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
+ 	SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
+ 	SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10),
+@@ -8777,6 +8810,16 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x19, 0x03a11020},
+ 		{0x21, 0x0321101f}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE,
++		{0x12, 0x90a60130},
++		{0x14, 0x90170110},
++		{0x19, 0x04a11040},
++		{0x21, 0x04211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE,
++		{0x14, 0x90170110},
++		{0x19, 0x04a11040},
++		{0x1d, 0x40600001},
++		{0x21, 0x04211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
+ 		{0x14, 0x90170110},
+ 		{0x19, 0x04a11040},
+ 		{0x21, 0x04211020}),
+@@ -8947,10 +8990,6 @@ static const struct snd_hda_pin_quirk alc269_fallback_pin_fixup_tbl[] = {
+ 	SND_HDA_PIN_QUIRK(0x10ec0274, 0x1028, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB,
+ 		{0x19, 0x40000000},
+ 		{0x1a, 0x40000000}),
+-	SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
+-		{0x14, 0x90170110},
+-		{0x19, 0x04a11040},
+-		{0x21, 0x04211020}),
+ 	{}
+ };
+ 
+@@ -9266,8 +9305,7 @@ static const struct snd_pci_quirk alc861_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1393, "ASUS A6Rp", ALC861_FIXUP_ASUS_A6RP),
+ 	SND_PCI_QUIRK_VENDOR(0x1043, "ASUS laptop", ALC861_FIXUP_AMP_VREF_0F),
+ 	SND_PCI_QUIRK(0x1462, 0x7254, "HP DX2200", ALC861_FIXUP_NO_JACK_DETECT),
+-	SND_PCI_QUIRK(0x1584, 0x2b01, "Haier W18", ALC861_FIXUP_AMP_VREF_0F),
+-	SND_PCI_QUIRK(0x1584, 0x0000, "Uniwill ECS M31EI", ALC861_FIXUP_AMP_VREF_0F),
++	SND_PCI_QUIRK_VENDOR(0x1584, "Haier/Uniwill", ALC861_FIXUP_AMP_VREF_0F),
+ 	SND_PCI_QUIRK(0x1734, 0x10c7, "FSC Amilo Pi1505", ALC861_FIXUP_FSC_AMILO_PI1505),
+ 	{}
+ };
+@@ -10062,6 +10100,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x0349, "eMachines eM250", ALC662_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x034a, "Gateway LT27", ALC662_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x038b, "Acer Aspire 8943G", ALC662_FIXUP_ASPIRE),
++	SND_PCI_QUIRK(0x1025, 0x0566, "Acer Aspire Ethos 8951G", ALC669_FIXUP_ACER_ASPIRE_ETHOS),
+ 	SND_PCI_QUIRK(0x1025, 0x123c, "Acer Nitro N50-600", ALC662_FIXUP_ACER_NITRO_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x1025, 0x124e, "Acer 2660G", ALC662_FIXUP_ACER_X2660G_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x1028, 0x05d8, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
+@@ -10078,9 +10117,9 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
+ 	SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50),
+-	SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A),
+ 	SND_PCI_QUIRK(0x1043, 0x129d, "Asus N750", ALC662_FIXUP_ASUS_Nx50),
+ 	SND_PCI_QUIRK(0x1043, 0x12ff, "ASUS G751", ALC668_FIXUP_ASUS_G751),
++	SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A),
+ 	SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_MODE4_CHMAP),
+ 	SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16),
+ 	SND_PCI_QUIRK(0x1043, 0x177d, "ASUS N551", ALC668_FIXUP_ASUS_Nx51),
+@@ -10100,7 +10139,6 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1b0a, 0x01b8, "ACER Veriton", ALC662_FIXUP_ACER_VERITON),
+ 	SND_PCI_QUIRK(0x1b35, 0x1234, "CZC ET26", ALC662_FIXUP_CZC_ET26),
+ 	SND_PCI_QUIRK(0x1b35, 0x2206, "CZC P10T", ALC662_FIXUP_CZC_P10T),
+-	SND_PCI_QUIRK(0x1025, 0x0566, "Acer Aspire Ethos 8951G", ALC669_FIXUP_ACER_ASPIRE_ETHOS),
+ 
+ #if 0
+ 	/* Below is a quirk table taken from the old code.
+diff --git a/sound/soc/codecs/ak5558.c b/sound/soc/codecs/ak5558.c
+index 85bdd05341803..80b3b162ca5ba 100644
+--- a/sound/soc/codecs/ak5558.c
++++ b/sound/soc/codecs/ak5558.c
+@@ -272,7 +272,7 @@ static void ak5558_power_off(struct ak5558_priv *ak5558)
+ 	if (!ak5558->reset_gpiod)
+ 		return;
+ 
+-	gpiod_set_value_cansleep(ak5558->reset_gpiod, 0);
++	gpiod_set_value_cansleep(ak5558->reset_gpiod, 1);
+ 	usleep_range(1000, 2000);
+ }
+ 
+@@ -281,7 +281,7 @@ static void ak5558_power_on(struct ak5558_priv *ak5558)
+ 	if (!ak5558->reset_gpiod)
+ 		return;
+ 
+-	gpiod_set_value_cansleep(ak5558->reset_gpiod, 1);
++	gpiod_set_value_cansleep(ak5558->reset_gpiod, 0);
+ 	usleep_range(1000, 2000);
+ }
+ 
+diff --git a/sound/soc/codecs/tlv320aic32x4.c b/sound/soc/codecs/tlv320aic32x4.c
+index f04f88c8d4251..b689f26fc4be6 100644
+--- a/sound/soc/codecs/tlv320aic32x4.c
++++ b/sound/soc/codecs/tlv320aic32x4.c
+@@ -577,12 +577,12 @@ static const struct regmap_range_cfg aic32x4_regmap_pages[] = {
+ 		.window_start = 0,
+ 		.window_len = 128,
+ 		.range_min = 0,
+-		.range_max = AIC32X4_RMICPGAVOL,
++		.range_max = AIC32X4_REFPOWERUP,
+ 	},
+ };
+ 
+ const struct regmap_config aic32x4_regmap_config = {
+-	.max_register = AIC32X4_RMICPGAVOL,
++	.max_register = AIC32X4_REFPOWERUP,
+ 	.ranges = aic32x4_regmap_pages,
+ 	.num_ranges = ARRAY_SIZE(aic32x4_regmap_pages),
+ };
+@@ -1243,6 +1243,10 @@ int aic32x4_probe(struct device *dev, struct regmap *regmap)
+ 	if (ret)
+ 		goto err_disable_regulators;
+ 
++	ret = aic32x4_register_clocks(dev, aic32x4->mclk_name);
++	if (ret)
++		goto err_disable_regulators;
++
+ 	ret = devm_snd_soc_register_component(dev,
+ 			&soc_component_dev_aic32x4, &aic32x4_dai, 1);
+ 	if (ret) {
+@@ -1250,10 +1254,6 @@ int aic32x4_probe(struct device *dev, struct regmap *regmap)
+ 		goto err_disable_regulators;
+ 	}
+ 
+-	ret = aic32x4_register_clocks(dev, aic32x4->mclk_name);
+-	if (ret)
+-		goto err_disable_regulators;
+-
+ 	return 0;
+ 
+ err_disable_regulators:
+diff --git a/sound/soc/codecs/wm8960.c b/sound/soc/codecs/wm8960.c
+index cda9cd935d4f3..9e621a254392c 100644
+--- a/sound/soc/codecs/wm8960.c
++++ b/sound/soc/codecs/wm8960.c
+@@ -608,10 +608,6 @@ static const int bclk_divs[] = {
+  *		- lrclk      = sysclk / dac_divs
+  *		- 10 * bclk  = sysclk / bclk_divs
+  *
+- *	If we cannot find an exact match for (sysclk, lrclk, bclk)
+- *	triplet, we relax the bclk such that bclk is chosen as the
+- *	closest available frequency greater than expected bclk.
+- *
+  * @wm8960: codec private data
+  * @mclk: MCLK used to derive sysclk
+  * @sysclk_idx: sysclk_divs index for found sysclk
+@@ -629,7 +625,7 @@ int wm8960_configure_sysclk(struct wm8960_priv *wm8960, int mclk,
+ {
+ 	int sysclk, bclk, lrclk;
+ 	int i, j, k;
+-	int diff, closest = mclk;
++	int diff;
+ 
+ 	/* marker for no match */
+ 	*bclk_idx = -1;
+@@ -653,12 +649,6 @@ int wm8960_configure_sysclk(struct wm8960_priv *wm8960, int mclk,
+ 					*bclk_idx = k;
+ 					break;
+ 				}
+-				if (diff > 0 && closest > diff) {
+-					*sysclk_idx = i;
+-					*dac_idx = j;
+-					*bclk_idx = k;
+-					closest = diff;
+-				}
+ 			}
+ 			if (k != ARRAY_SIZE(bclk_divs))
+ 				break;
+diff --git a/sound/soc/generic/audio-graph-card.c b/sound/soc/generic/audio-graph-card.c
+index 8c5cdcdc8713c..e81b5cf0d37a9 100644
+--- a/sound/soc/generic/audio-graph-card.c
++++ b/sound/soc/generic/audio-graph-card.c
+@@ -380,7 +380,7 @@ static int graph_dai_link_of(struct asoc_simple_priv *priv,
+ 	struct device_node *top = dev->of_node;
+ 	struct asoc_simple_dai *cpu_dai;
+ 	struct asoc_simple_dai *codec_dai;
+-	int ret, single_cpu;
++	int ret, single_cpu = 0;
+ 
+ 	/* Do it only CPU turn */
+ 	if (!li->cpu)
+diff --git a/sound/soc/generic/simple-card.c b/sound/soc/generic/simple-card.c
+index 75365c7bb3930..d916ec69c24ff 100644
+--- a/sound/soc/generic/simple-card.c
++++ b/sound/soc/generic/simple-card.c
+@@ -258,7 +258,7 @@ static int simple_dai_link_of(struct asoc_simple_priv *priv,
+ 	struct device_node *plat = NULL;
+ 	char prop[128];
+ 	char *prefix = "";
+-	int ret, single_cpu;
++	int ret, single_cpu = 0;
+ 
+ 	/*
+ 	 *	 |CPU   |Codec   : turn
+diff --git a/sound/soc/intel/Makefile b/sound/soc/intel/Makefile
+index 4e0248d2accc7..7c5038803be73 100644
+--- a/sound/soc/intel/Makefile
++++ b/sound/soc/intel/Makefile
+@@ -5,7 +5,7 @@ obj-$(CONFIG_SND_SOC) += common/
+ # Platform Support
+ obj-$(CONFIG_SND_SST_ATOM_HIFI2_PLATFORM) += atom/
+ obj-$(CONFIG_SND_SOC_INTEL_CATPT) += catpt/
+-obj-$(CONFIG_SND_SOC_INTEL_SKYLAKE) += skylake/
++obj-$(CONFIG_SND_SOC_INTEL_SKYLAKE_COMMON) += skylake/
+ obj-$(CONFIG_SND_SOC_INTEL_KEEMBAY) += keembay/
+ 
+ # Machine support
+diff --git a/sound/soc/intel/boards/kbl_da7219_max98927.c b/sound/soc/intel/boards/kbl_da7219_max98927.c
+index cc9a2509ace29..e0149cf6127d0 100644
+--- a/sound/soc/intel/boards/kbl_da7219_max98927.c
++++ b/sound/soc/intel/boards/kbl_da7219_max98927.c
+@@ -282,11 +282,33 @@ static int kabylake_ssp_fixup(struct snd_soc_pcm_runtime *rtd,
+ 	struct snd_interval *chan = hw_param_interval(params,
+ 			SNDRV_PCM_HW_PARAM_CHANNELS);
+ 	struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT);
+-	struct snd_soc_dpcm *dpcm = container_of(
+-			params, struct snd_soc_dpcm, hw_params);
+-	struct snd_soc_dai_link *fe_dai_link = dpcm->fe->dai_link;
+-	struct snd_soc_dai_link *be_dai_link = dpcm->be->dai_link;
++	struct snd_soc_dpcm *dpcm, *rtd_dpcm = NULL;
+ 
++	/*
++	 * The following loop will be called only for playback stream
++	 * In this platform, there is only one playback device on every SSP
++	 */
++	for_each_dpcm_fe(rtd, SNDRV_PCM_STREAM_PLAYBACK, dpcm) {
++		rtd_dpcm = dpcm;
++		break;
++	}
++
++	/*
++	 * This following loop will be called only for capture stream
++	 * In this platform, there is only one capture device on every SSP
++	 */
++	for_each_dpcm_fe(rtd, SNDRV_PCM_STREAM_CAPTURE, dpcm) {
++		rtd_dpcm = dpcm;
++		break;
++	}
++
++	if (!rtd_dpcm)
++		return -EINVAL;
++
++	/*
++	 * The above 2 loops are mutually exclusive based on the stream direction,
++	 * thus rtd_dpcm variable will never be overwritten
++	 */
+ 	/*
+ 	 * Topology for kblda7219m98373 & kblmax98373 supports only S24_LE,
+ 	 * where as kblda7219m98927 & kblmax98927 supports S16_LE by default.
+@@ -309,9 +331,9 @@ static int kabylake_ssp_fixup(struct snd_soc_pcm_runtime *rtd,
+ 	/*
+ 	 * The ADSP will convert the FE rate to 48k, stereo, 24 bit
+ 	 */
+-	if (!strcmp(fe_dai_link->name, "Kbl Audio Port") ||
+-	    !strcmp(fe_dai_link->name, "Kbl Audio Headset Playback") ||
+-	    !strcmp(fe_dai_link->name, "Kbl Audio Capture Port")) {
++	if (!strcmp(rtd_dpcm->fe->dai_link->name, "Kbl Audio Port") ||
++	    !strcmp(rtd_dpcm->fe->dai_link->name, "Kbl Audio Headset Playback") ||
++	    !strcmp(rtd_dpcm->fe->dai_link->name, "Kbl Audio Capture Port")) {
+ 		rate->min = rate->max = 48000;
+ 		chan->min = chan->max = 2;
+ 		snd_mask_none(fmt);
+@@ -322,7 +344,7 @@ static int kabylake_ssp_fixup(struct snd_soc_pcm_runtime *rtd,
+ 	 * The speaker on the SSP0 supports S16_LE and not S24_LE.
+ 	 * thus changing the mask here
+ 	 */
+-	if (!strcmp(be_dai_link->name, "SSP0-Codec"))
++	if (!strcmp(rtd_dpcm->be->dai_link->name, "SSP0-Codec"))
+ 		snd_mask_set_format(fmt, SNDRV_PCM_FORMAT_S16_LE);
+ 
+ 	return 0;
+diff --git a/sound/soc/intel/boards/sof_wm8804.c b/sound/soc/intel/boards/sof_wm8804.c
+index a46ba13e8eb0c..6a181e45143d7 100644
+--- a/sound/soc/intel/boards/sof_wm8804.c
++++ b/sound/soc/intel/boards/sof_wm8804.c
+@@ -124,7 +124,11 @@ static int sof_wm8804_hw_params(struct snd_pcm_substream *substream,
+ 	}
+ 
+ 	snd_soc_dai_set_clkdiv(codec_dai, WM8804_MCLK_DIV, mclk_div);
+-	snd_soc_dai_set_pll(codec_dai, 0, 0, sysclk, mclk_freq);
++	ret = snd_soc_dai_set_pll(codec_dai, 0, 0, sysclk, mclk_freq);
++	if (ret < 0) {
++		dev_err(rtd->card->dev, "Failed to set WM8804 PLL\n");
++		return ret;
++	}
+ 
+ 	ret = snd_soc_dai_set_sysclk(codec_dai, WM8804_TX_CLKSRC_PLL,
+ 				     sysclk, SND_SOC_CLOCK_OUT);
+diff --git a/sound/soc/intel/skylake/Makefile b/sound/soc/intel/skylake/Makefile
+index dd39149b89b1d..1c4649bccec5a 100644
+--- a/sound/soc/intel/skylake/Makefile
++++ b/sound/soc/intel/skylake/Makefile
+@@ -7,7 +7,7 @@ ifdef CONFIG_DEBUG_FS
+   snd-soc-skl-objs += skl-debug.o
+ endif
+ 
+-obj-$(CONFIG_SND_SOC_INTEL_SKYLAKE) += snd-soc-skl.o
++obj-$(CONFIG_SND_SOC_INTEL_SKYLAKE_COMMON) += snd-soc-skl.o
+ 
+ #Skylake Clock device support
+ snd-soc-skl-ssp-clk-objs := skl-ssp-clk.o
+diff --git a/sound/soc/qcom/qdsp6/q6afe-clocks.c b/sound/soc/qcom/qdsp6/q6afe-clocks.c
+index f0362f0616521..9431656283cd1 100644
+--- a/sound/soc/qcom/qdsp6/q6afe-clocks.c
++++ b/sound/soc/qcom/qdsp6/q6afe-clocks.c
+@@ -11,33 +11,29 @@
+ #include <linux/slab.h>
+ #include "q6afe.h"
+ 
+-#define Q6AFE_CLK(id) &(struct q6afe_clk) {		\
++#define Q6AFE_CLK(id) {					\
+ 		.clk_id	= id,				\
+ 		.afe_clk_id	= Q6AFE_##id,		\
+ 		.name = #id,				\
+-		.attributes = LPASS_CLK_ATTRIBUTE_COUPLE_NO, \
+ 		.rate = 19200000,			\
+-		.hw.init = &(struct clk_init_data) {	\
+-			.ops = &clk_q6afe_ops,		\
+-			.name = #id,			\
+-		},					\
+ 	}
+ 
+-#define Q6AFE_VOTE_CLK(id, blkid, n) &(struct q6afe_clk) { \
++#define Q6AFE_VOTE_CLK(id, blkid, n) {			\
+ 		.clk_id	= id,				\
+ 		.afe_clk_id = blkid,			\
+-		.name = #n,				\
+-		.hw.init = &(struct clk_init_data) {	\
+-			.ops = &clk_vote_q6afe_ops,	\
+-			.name = #id,			\
+-		},					\
++		.name = n,				\
+ 	}
+ 
+-struct q6afe_clk {
+-	struct device *dev;
++struct q6afe_clk_init {
+ 	int clk_id;
+ 	int afe_clk_id;
+ 	char *name;
++	int rate;
++};
++
++struct q6afe_clk {
++	struct device *dev;
++	int afe_clk_id;
+ 	int attributes;
+ 	int rate;
+ 	uint32_t handle;
+@@ -48,8 +44,7 @@ struct q6afe_clk {
+ 
+ struct q6afe_cc {
+ 	struct device *dev;
+-	struct q6afe_clk **clks;
+-	int num_clks;
++	struct q6afe_clk *clks[Q6AFE_MAX_CLK_ID];
+ };
+ 
+ static int clk_q6afe_prepare(struct clk_hw *hw)
+@@ -105,7 +100,7 @@ static int clk_vote_q6afe_block(struct clk_hw *hw)
+ 	struct q6afe_clk *clk = to_q6afe_clk(hw);
+ 
+ 	return q6afe_vote_lpass_core_hw(clk->dev, clk->afe_clk_id,
+-					clk->name, &clk->handle);
++					clk_hw_get_name(&clk->hw), &clk->handle);
+ }
+ 
+ static void clk_unvote_q6afe_block(struct clk_hw *hw)
+@@ -120,84 +115,76 @@ static const struct clk_ops clk_vote_q6afe_ops = {
+ 	.unprepare	= clk_unvote_q6afe_block,
+ };
+ 
+-static struct q6afe_clk *q6afe_clks[Q6AFE_MAX_CLK_ID] = {
+-	[LPASS_CLK_ID_PRI_MI2S_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_PRI_MI2S_IBIT),
+-	[LPASS_CLK_ID_PRI_MI2S_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_PRI_MI2S_EBIT),
+-	[LPASS_CLK_ID_SEC_MI2S_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_SEC_MI2S_IBIT),
+-	[LPASS_CLK_ID_SEC_MI2S_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_SEC_MI2S_EBIT),
+-	[LPASS_CLK_ID_TER_MI2S_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_TER_MI2S_IBIT),
+-	[LPASS_CLK_ID_TER_MI2S_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_TER_MI2S_EBIT),
+-	[LPASS_CLK_ID_QUAD_MI2S_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_QUAD_MI2S_IBIT),
+-	[LPASS_CLK_ID_QUAD_MI2S_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_QUAD_MI2S_EBIT),
+-	[LPASS_CLK_ID_SPEAKER_I2S_IBIT] =
+-				Q6AFE_CLK(LPASS_CLK_ID_SPEAKER_I2S_IBIT),
+-	[LPASS_CLK_ID_SPEAKER_I2S_EBIT] =
+-				Q6AFE_CLK(LPASS_CLK_ID_SPEAKER_I2S_EBIT),
+-	[LPASS_CLK_ID_SPEAKER_I2S_OSR] =
+-				Q6AFE_CLK(LPASS_CLK_ID_SPEAKER_I2S_OSR),
+-	[LPASS_CLK_ID_QUI_MI2S_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_QUI_MI2S_IBIT),
+-	[LPASS_CLK_ID_QUI_MI2S_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_QUI_MI2S_EBIT),
+-	[LPASS_CLK_ID_SEN_MI2S_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_SEN_MI2S_IBIT),
+-	[LPASS_CLK_ID_SEN_MI2S_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_SEN_MI2S_EBIT),
+-	[LPASS_CLK_ID_INT0_MI2S_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_INT0_MI2S_IBIT),
+-	[LPASS_CLK_ID_INT1_MI2S_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_INT1_MI2S_IBIT),
+-	[LPASS_CLK_ID_INT2_MI2S_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_INT2_MI2S_IBIT),
+-	[LPASS_CLK_ID_INT3_MI2S_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_INT3_MI2S_IBIT),
+-	[LPASS_CLK_ID_INT4_MI2S_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_INT4_MI2S_IBIT),
+-	[LPASS_CLK_ID_INT5_MI2S_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_INT5_MI2S_IBIT),
+-	[LPASS_CLK_ID_INT6_MI2S_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_INT6_MI2S_IBIT),
+-	[LPASS_CLK_ID_QUI_MI2S_OSR] = Q6AFE_CLK(LPASS_CLK_ID_QUI_MI2S_OSR),
+-	[LPASS_CLK_ID_PRI_PCM_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_PRI_PCM_IBIT),
+-	[LPASS_CLK_ID_PRI_PCM_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_PRI_PCM_EBIT),
+-	[LPASS_CLK_ID_SEC_PCM_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_SEC_PCM_IBIT),
+-	[LPASS_CLK_ID_SEC_PCM_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_SEC_PCM_EBIT),
+-	[LPASS_CLK_ID_TER_PCM_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_TER_PCM_IBIT),
+-	[LPASS_CLK_ID_TER_PCM_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_TER_PCM_EBIT),
+-	[LPASS_CLK_ID_QUAD_PCM_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_QUAD_PCM_IBIT),
+-	[LPASS_CLK_ID_QUAD_PCM_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_QUAD_PCM_EBIT),
+-	[LPASS_CLK_ID_QUIN_PCM_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_QUIN_PCM_IBIT),
+-	[LPASS_CLK_ID_QUIN_PCM_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_QUIN_PCM_EBIT),
+-	[LPASS_CLK_ID_QUI_PCM_OSR] = Q6AFE_CLK(LPASS_CLK_ID_QUI_PCM_OSR),
+-	[LPASS_CLK_ID_PRI_TDM_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_PRI_TDM_IBIT),
+-	[LPASS_CLK_ID_PRI_TDM_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_PRI_TDM_EBIT),
+-	[LPASS_CLK_ID_SEC_TDM_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_SEC_TDM_IBIT),
+-	[LPASS_CLK_ID_SEC_TDM_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_SEC_TDM_EBIT),
+-	[LPASS_CLK_ID_TER_TDM_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_TER_TDM_IBIT),
+-	[LPASS_CLK_ID_TER_TDM_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_TER_TDM_EBIT),
+-	[LPASS_CLK_ID_QUAD_TDM_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_QUAD_TDM_IBIT),
+-	[LPASS_CLK_ID_QUAD_TDM_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_QUAD_TDM_EBIT),
+-	[LPASS_CLK_ID_QUIN_TDM_IBIT] = Q6AFE_CLK(LPASS_CLK_ID_QUIN_TDM_IBIT),
+-	[LPASS_CLK_ID_QUIN_TDM_EBIT] = Q6AFE_CLK(LPASS_CLK_ID_QUIN_TDM_EBIT),
+-	[LPASS_CLK_ID_QUIN_TDM_OSR] = Q6AFE_CLK(LPASS_CLK_ID_QUIN_TDM_OSR),
+-	[LPASS_CLK_ID_MCLK_1] = Q6AFE_CLK(LPASS_CLK_ID_MCLK_1),
+-	[LPASS_CLK_ID_MCLK_2] = Q6AFE_CLK(LPASS_CLK_ID_MCLK_2),
+-	[LPASS_CLK_ID_MCLK_3] = Q6AFE_CLK(LPASS_CLK_ID_MCLK_3),
+-	[LPASS_CLK_ID_MCLK_4] = Q6AFE_CLK(LPASS_CLK_ID_MCLK_4),
+-	[LPASS_CLK_ID_INTERNAL_DIGITAL_CODEC_CORE] =
+-		Q6AFE_CLK(LPASS_CLK_ID_INTERNAL_DIGITAL_CODEC_CORE),
+-	[LPASS_CLK_ID_INT_MCLK_0] = Q6AFE_CLK(LPASS_CLK_ID_INT_MCLK_0),
+-	[LPASS_CLK_ID_INT_MCLK_1] = Q6AFE_CLK(LPASS_CLK_ID_INT_MCLK_1),
+-	[LPASS_CLK_ID_WSA_CORE_MCLK] = Q6AFE_CLK(LPASS_CLK_ID_WSA_CORE_MCLK),
+-	[LPASS_CLK_ID_WSA_CORE_NPL_MCLK] =
+-				Q6AFE_CLK(LPASS_CLK_ID_WSA_CORE_NPL_MCLK),
+-	[LPASS_CLK_ID_VA_CORE_MCLK] = Q6AFE_CLK(LPASS_CLK_ID_VA_CORE_MCLK),
+-	[LPASS_CLK_ID_TX_CORE_MCLK] = Q6AFE_CLK(LPASS_CLK_ID_TX_CORE_MCLK),
+-	[LPASS_CLK_ID_TX_CORE_NPL_MCLK] =
+-			Q6AFE_CLK(LPASS_CLK_ID_TX_CORE_NPL_MCLK),
+-	[LPASS_CLK_ID_RX_CORE_MCLK] = Q6AFE_CLK(LPASS_CLK_ID_RX_CORE_MCLK),
+-	[LPASS_CLK_ID_RX_CORE_NPL_MCLK] =
+-				Q6AFE_CLK(LPASS_CLK_ID_RX_CORE_NPL_MCLK),
+-	[LPASS_CLK_ID_VA_CORE_2X_MCLK] =
+-				Q6AFE_CLK(LPASS_CLK_ID_VA_CORE_2X_MCLK),
+-	[LPASS_HW_AVTIMER_VOTE] = Q6AFE_VOTE_CLK(LPASS_HW_AVTIMER_VOTE,
+-						 Q6AFE_LPASS_CORE_AVTIMER_BLOCK,
+-						 "LPASS_AVTIMER_MACRO"),
+-	[LPASS_HW_MACRO_VOTE] = Q6AFE_VOTE_CLK(LPASS_HW_MACRO_VOTE,
+-						Q6AFE_LPASS_CORE_HW_MACRO_BLOCK,
+-						"LPASS_HW_MACRO"),
+-	[LPASS_HW_DCODEC_VOTE] = Q6AFE_VOTE_CLK(LPASS_HW_DCODEC_VOTE,
+-					Q6AFE_LPASS_CORE_HW_DCODEC_BLOCK,
+-					"LPASS_HW_DCODEC"),
++static const struct q6afe_clk_init q6afe_clks[] = {
++	Q6AFE_CLK(LPASS_CLK_ID_PRI_MI2S_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_PRI_MI2S_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_SEC_MI2S_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_SEC_MI2S_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_TER_MI2S_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_TER_MI2S_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_QUAD_MI2S_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_QUAD_MI2S_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_SPEAKER_I2S_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_SPEAKER_I2S_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_SPEAKER_I2S_OSR),
++	Q6AFE_CLK(LPASS_CLK_ID_QUI_MI2S_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_QUI_MI2S_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_SEN_MI2S_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_SEN_MI2S_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_INT0_MI2S_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_INT1_MI2S_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_INT2_MI2S_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_INT3_MI2S_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_INT4_MI2S_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_INT5_MI2S_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_INT6_MI2S_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_QUI_MI2S_OSR),
++	Q6AFE_CLK(LPASS_CLK_ID_PRI_PCM_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_PRI_PCM_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_SEC_PCM_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_SEC_PCM_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_TER_PCM_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_TER_PCM_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_QUAD_PCM_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_QUAD_PCM_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_QUIN_PCM_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_QUIN_PCM_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_QUI_PCM_OSR),
++	Q6AFE_CLK(LPASS_CLK_ID_PRI_TDM_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_PRI_TDM_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_SEC_TDM_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_SEC_TDM_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_TER_TDM_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_TER_TDM_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_QUAD_TDM_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_QUAD_TDM_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_QUIN_TDM_IBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_QUIN_TDM_EBIT),
++	Q6AFE_CLK(LPASS_CLK_ID_QUIN_TDM_OSR),
++	Q6AFE_CLK(LPASS_CLK_ID_MCLK_1),
++	Q6AFE_CLK(LPASS_CLK_ID_MCLK_2),
++	Q6AFE_CLK(LPASS_CLK_ID_MCLK_3),
++	Q6AFE_CLK(LPASS_CLK_ID_MCLK_4),
++	Q6AFE_CLK(LPASS_CLK_ID_INTERNAL_DIGITAL_CODEC_CORE),
++	Q6AFE_CLK(LPASS_CLK_ID_INT_MCLK_0),
++	Q6AFE_CLK(LPASS_CLK_ID_INT_MCLK_1),
++	Q6AFE_CLK(LPASS_CLK_ID_WSA_CORE_MCLK),
++	Q6AFE_CLK(LPASS_CLK_ID_WSA_CORE_NPL_MCLK),
++	Q6AFE_CLK(LPASS_CLK_ID_VA_CORE_MCLK),
++	Q6AFE_CLK(LPASS_CLK_ID_TX_CORE_MCLK),
++	Q6AFE_CLK(LPASS_CLK_ID_TX_CORE_NPL_MCLK),
++	Q6AFE_CLK(LPASS_CLK_ID_RX_CORE_MCLK),
++	Q6AFE_CLK(LPASS_CLK_ID_RX_CORE_NPL_MCLK),
++	Q6AFE_CLK(LPASS_CLK_ID_VA_CORE_2X_MCLK),
++	Q6AFE_VOTE_CLK(LPASS_HW_AVTIMER_VOTE,
++		       Q6AFE_LPASS_CORE_AVTIMER_BLOCK,
++		       "LPASS_AVTIMER_MACRO"),
++	Q6AFE_VOTE_CLK(LPASS_HW_MACRO_VOTE,
++		       Q6AFE_LPASS_CORE_HW_MACRO_BLOCK,
++		       "LPASS_HW_MACRO"),
++	Q6AFE_VOTE_CLK(LPASS_HW_DCODEC_VOTE,
++		       Q6AFE_LPASS_CORE_HW_DCODEC_BLOCK,
++		       "LPASS_HW_DCODEC"),
+ };
+ 
+ static struct clk_hw *q6afe_of_clk_hw_get(struct of_phandle_args *clkspec,
+@@ -207,7 +194,7 @@ static struct clk_hw *q6afe_of_clk_hw_get(struct of_phandle_args *clkspec,
+ 	unsigned int idx = clkspec->args[0];
+ 	unsigned int attr = clkspec->args[1];
+ 
+-	if (idx >= cc->num_clks || attr > LPASS_CLK_ATTRIBUTE_COUPLE_DIVISOR) {
++	if (idx >= Q6AFE_MAX_CLK_ID || attr > LPASS_CLK_ATTRIBUTE_COUPLE_DIVISOR) {
+ 		dev_err(cc->dev, "Invalid clk specifier (%d, %d)\n", idx, attr);
+ 		return ERR_PTR(-EINVAL);
+ 	}
+@@ -230,20 +217,36 @@ static int q6afe_clock_dev_probe(struct platform_device *pdev)
+ 	if (!cc)
+ 		return -ENOMEM;
+ 
+-	cc->clks = &q6afe_clks[0];
+-	cc->num_clks = ARRAY_SIZE(q6afe_clks);
++	cc->dev = dev;
+ 	for (i = 0; i < ARRAY_SIZE(q6afe_clks); i++) {
+-		if (!q6afe_clks[i])
+-			continue;
++		unsigned int id = q6afe_clks[i].clk_id;
++		struct clk_init_data init = {
++			.name =  q6afe_clks[i].name,
++		};
++		struct q6afe_clk *clk;
++
++		clk = devm_kzalloc(dev, sizeof(*clk), GFP_KERNEL);
++		if (!clk)
++			return -ENOMEM;
++
++		clk->dev = dev;
++		clk->afe_clk_id = q6afe_clks[i].afe_clk_id;
++		clk->rate = q6afe_clks[i].rate;
++		clk->hw.init = &init;
++
++		if (clk->rate)
++			init.ops = &clk_q6afe_ops;
++		else
++			init.ops = &clk_vote_q6afe_ops;
+ 
+-		q6afe_clks[i]->dev = dev;
++		cc->clks[id] = clk;
+ 
+-		ret = devm_clk_hw_register(dev, &q6afe_clks[i]->hw);
++		ret = devm_clk_hw_register(dev, &clk->hw);
+ 		if (ret)
+ 			return ret;
+ 	}
+ 
+-	ret = of_clk_add_hw_provider(dev->of_node, q6afe_of_clk_hw_get, cc);
++	ret = devm_of_clk_add_hw_provider(dev, q6afe_of_clk_hw_get, cc);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/sound/soc/qcom/qdsp6/q6afe.c b/sound/soc/qcom/qdsp6/q6afe.c
+index cad1cd1bfdf0e..4327b72162ecd 100644
+--- a/sound/soc/qcom/qdsp6/q6afe.c
++++ b/sound/soc/qcom/qdsp6/q6afe.c
+@@ -1681,7 +1681,7 @@ int q6afe_unvote_lpass_core_hw(struct device *dev, uint32_t hw_block_id,
+ EXPORT_SYMBOL(q6afe_unvote_lpass_core_hw);
+ 
+ int q6afe_vote_lpass_core_hw(struct device *dev, uint32_t hw_block_id,
+-			     char *client_name, uint32_t *client_handle)
++			     const char *client_name, uint32_t *client_handle)
+ {
+ 	struct q6afe *afe = dev_get_drvdata(dev->parent);
+ 	struct afe_cmd_remote_lpass_core_hw_vote_request *vote_cfg;
+diff --git a/sound/soc/qcom/qdsp6/q6afe.h b/sound/soc/qcom/qdsp6/q6afe.h
+index 22e10269aa109..3845b56c0ed36 100644
+--- a/sound/soc/qcom/qdsp6/q6afe.h
++++ b/sound/soc/qcom/qdsp6/q6afe.h
+@@ -236,7 +236,7 @@ int q6afe_port_set_sysclk(struct q6afe_port *port, int clk_id,
+ int q6afe_set_lpass_clock(struct device *dev, int clk_id, int clk_src,
+ 			  int clk_root, unsigned int freq);
+ int q6afe_vote_lpass_core_hw(struct device *dev, uint32_t hw_block_id,
+-			     char *client_name, uint32_t *client_handle);
++			     const char *client_name, uint32_t *client_handle);
+ int q6afe_unvote_lpass_core_hw(struct device *dev, uint32_t hw_block_id,
+ 			       uint32_t client_handle);
+ #endif /* __Q6AFE_H__ */
+diff --git a/sound/soc/samsung/tm2_wm5110.c b/sound/soc/samsung/tm2_wm5110.c
+index 9300fef9bf269..125e07f65d2b5 100644
+--- a/sound/soc/samsung/tm2_wm5110.c
++++ b/sound/soc/samsung/tm2_wm5110.c
+@@ -553,7 +553,7 @@ static int tm2_probe(struct platform_device *pdev)
+ 
+ 		ret = of_parse_phandle_with_args(dev->of_node, "i2s-controller",
+ 						 cells_name, i, &args);
+-		if (!args.np) {
++		if (ret) {
+ 			dev_err(dev, "i2s-controller property parse error: %d\n", i);
+ 			ret = -EINVAL;
+ 			goto dai_node_put;
+diff --git a/sound/soc/tegra/tegra30_i2s.c b/sound/soc/tegra/tegra30_i2s.c
+index 6740df541508f..3d22c1be6f3d5 100644
+--- a/sound/soc/tegra/tegra30_i2s.c
++++ b/sound/soc/tegra/tegra30_i2s.c
+@@ -58,8 +58,18 @@ static int tegra30_i2s_runtime_resume(struct device *dev)
+ 	}
+ 
+ 	regcache_cache_only(i2s->regmap, false);
++	regcache_mark_dirty(i2s->regmap);
++
++	ret = regcache_sync(i2s->regmap);
++	if (ret)
++		goto disable_clocks;
+ 
+ 	return 0;
++
++disable_clocks:
++	clk_disable_unprepare(i2s->clk_i2s);
++
++	return ret;
+ }
+ 
+ static int tegra30_i2s_set_fmt(struct snd_soc_dai *dai,
+@@ -551,37 +561,11 @@ static int tegra30_i2s_platform_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-#ifdef CONFIG_PM_SLEEP
+-static int tegra30_i2s_suspend(struct device *dev)
+-{
+-	struct tegra30_i2s *i2s = dev_get_drvdata(dev);
+-
+-	regcache_mark_dirty(i2s->regmap);
+-
+-	return 0;
+-}
+-
+-static int tegra30_i2s_resume(struct device *dev)
+-{
+-	struct tegra30_i2s *i2s = dev_get_drvdata(dev);
+-	int ret;
+-
+-	ret = pm_runtime_get_sync(dev);
+-	if (ret < 0) {
+-		pm_runtime_put(dev);
+-		return ret;
+-	}
+-	ret = regcache_sync(i2s->regmap);
+-	pm_runtime_put(dev);
+-
+-	return ret;
+-}
+-#endif
+-
+ static const struct dev_pm_ops tegra30_i2s_pm_ops = {
+ 	SET_RUNTIME_PM_OPS(tegra30_i2s_runtime_suspend,
+ 			   tegra30_i2s_runtime_resume, NULL)
+-	SET_SYSTEM_SLEEP_PM_OPS(tegra30_i2s_suspend, tegra30_i2s_resume)
++	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
++				pm_runtime_force_resume)
+ };
+ 
+ static struct platform_driver tegra30_i2s_driver = {
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index 0826a437f8fca..7b7526d3a56e8 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -181,9 +181,8 @@ static int snd_usb_create_stream(struct snd_usb_audio *chip, int ctrlif, int int
+ 				ctrlif, interface);
+ 			return -EINVAL;
+ 		}
+-		usb_driver_claim_interface(&usb_audio_driver, iface, (void *)-1L);
+-
+-		return 0;
++		return usb_driver_claim_interface(&usb_audio_driver, iface,
++						  USB_AUDIO_IFACE_UNUSED);
+ 	}
+ 
+ 	if ((altsd->bInterfaceClass != USB_CLASS_AUDIO &&
+@@ -203,7 +202,8 @@ static int snd_usb_create_stream(struct snd_usb_audio *chip, int ctrlif, int int
+ 
+ 	if (! snd_usb_parse_audio_interface(chip, interface)) {
+ 		usb_set_interface(dev, interface, 0); /* reset the current interface */
+-		usb_driver_claim_interface(&usb_audio_driver, iface, (void *)-1L);
++		return usb_driver_claim_interface(&usb_audio_driver, iface,
++						  USB_AUDIO_IFACE_UNUSED);
+ 	}
+ 
+ 	return 0;
+@@ -862,7 +862,7 @@ static void usb_audio_disconnect(struct usb_interface *intf)
+ 	struct snd_card *card;
+ 	struct list_head *p;
+ 
+-	if (chip == (void *)-1L)
++	if (chip == USB_AUDIO_IFACE_UNUSED)
+ 		return;
+ 
+ 	card = chip->card;
+@@ -992,7 +992,7 @@ static int usb_audio_suspend(struct usb_interface *intf, pm_message_t message)
+ 	struct usb_mixer_interface *mixer;
+ 	struct list_head *p;
+ 
+-	if (chip == (void *)-1L)
++	if (chip == USB_AUDIO_IFACE_UNUSED)
+ 		return 0;
+ 
+ 	if (!chip->num_suspended_intf++) {
+@@ -1022,7 +1022,7 @@ static int __usb_audio_resume(struct usb_interface *intf, bool reset_resume)
+ 	struct list_head *p;
+ 	int err = 0;
+ 
+-	if (chip == (void *)-1L)
++	if (chip == USB_AUDIO_IFACE_UNUSED)
+ 		return 0;
+ 
+ 	atomic_inc(&chip->active); /* avoid autopm */
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index 0c23fa6d8525d..cd46ca7cd28de 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1332,7 +1332,7 @@ static int snd_usbmidi_in_endpoint_create(struct snd_usb_midi *umidi,
+ 
+  error:
+ 	snd_usbmidi_in_endpoint_delete(ep);
+-	return -ENOMEM;
++	return err;
+ }
+ 
+ /*
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 176437a441e6c..7c6e83eee71dc 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -55,8 +55,12 @@ static int create_composite_quirk(struct snd_usb_audio *chip,
+ 		if (!iface)
+ 			continue;
+ 		if (quirk->ifnum != probed_ifnum &&
+-		    !usb_interface_claimed(iface))
+-			usb_driver_claim_interface(driver, iface, (void *)-1L);
++		    !usb_interface_claimed(iface)) {
++			err = usb_driver_claim_interface(driver, iface,
++							 USB_AUDIO_IFACE_UNUSED);
++			if (err < 0)
++				return err;
++		}
+ 	}
+ 
+ 	return 0;
+@@ -426,8 +430,12 @@ static int create_autodetect_quirks(struct snd_usb_audio *chip,
+ 			continue;
+ 
+ 		err = create_autodetect_quirk(chip, iface, driver);
+-		if (err >= 0)
+-			usb_driver_claim_interface(driver, iface, (void *)-1L);
++		if (err >= 0) {
++			err = usb_driver_claim_interface(driver, iface,
++							 USB_AUDIO_IFACE_UNUSED);
++			if (err < 0)
++				return err;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index 60b9dd7df6bb7..8794c8658ab96 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -61,6 +61,8 @@ struct snd_usb_audio {
+ 	struct media_intf_devnode *ctl_intf_media_devnode;
+ };
+ 
++#define USB_AUDIO_IFACE_UNUSED	((void *)-1L)
++
+ #define usb_audio_err(chip, fmt, args...) \
+ 	dev_err(&(chip)->dev->dev, fmt, ##args)
+ #define usb_audio_warn(chip, fmt, args...) \
+diff --git a/tools/bpf/bpftool/btf.c b/tools/bpf/bpftool/btf.c
+index fe9e7b3a4b503..1326fff3629b1 100644
+--- a/tools/bpf/bpftool/btf.c
++++ b/tools/bpf/bpftool/btf.c
+@@ -538,6 +538,7 @@ static int do_dump(int argc, char **argv)
+ 			NEXT_ARG();
+ 			if (argc < 1) {
+ 				p_err("expecting value for 'format' option\n");
++				err = -EINVAL;
+ 				goto done;
+ 			}
+ 			if (strcmp(*argv, "c") == 0) {
+@@ -547,11 +548,13 @@ static int do_dump(int argc, char **argv)
+ 			} else {
+ 				p_err("unrecognized format specifier: '%s', possible values: raw, c",
+ 				      *argv);
++				err = -EINVAL;
+ 				goto done;
+ 			}
+ 			NEXT_ARG();
+ 		} else {
+ 			p_err("unrecognized option: '%s'", *argv);
++			err = -EINVAL;
+ 			goto done;
+ 		}
+ 	}
+diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
+index b86f450e6fce2..d9afb730136a4 100644
+--- a/tools/bpf/bpftool/main.c
++++ b/tools/bpf/bpftool/main.c
+@@ -276,7 +276,7 @@ static int do_batch(int argc, char **argv)
+ 	int n_argc;
+ 	FILE *fp;
+ 	char *cp;
+-	int err;
++	int err = 0;
+ 	int i;
+ 
+ 	if (argc < 2) {
+@@ -370,7 +370,6 @@ static int do_batch(int argc, char **argv)
+ 	} else {
+ 		if (!json_output)
+ 			printf("processed %d commands\n", lines);
+-		err = 0;
+ 	}
+ err_close:
+ 	if (fp != stdin)
+diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c
+index b400364ee054e..09ae0381205b6 100644
+--- a/tools/bpf/bpftool/map.c
++++ b/tools/bpf/bpftool/map.c
+@@ -100,7 +100,7 @@ static int do_dump_btf(const struct btf_dumper *d,
+ 		       void *value)
+ {
+ 	__u32 value_id;
+-	int ret;
++	int ret = 0;
+ 
+ 	/* start of key-value pair */
+ 	jsonw_start_object(d->jw);
+diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
+index 53b3e199fb254..09ebe3db5f2f8 100644
+--- a/tools/lib/bpf/bpf_core_read.h
++++ b/tools/lib/bpf/bpf_core_read.h
+@@ -88,11 +88,19 @@ enum bpf_enum_value_kind {
+ 	const void *p = (const void *)s + __CORE_RELO(s, field, BYTE_OFFSET); \
+ 	unsigned long long val;						      \
+ 									      \
++	/* This is a so-called barrier_var() operation that makes specified   \
++	 * variable "a black box" for optimizing compiler.		      \
++	 * It forces compiler to perform BYTE_OFFSET relocation on p and use  \
++	 * its calculated value in the switch below, instead of applying      \
++	 * the same relocation 4 times for each individual memory load.       \
++	 */								      \
++	asm volatile("" : "=r"(p) : "0"(p));				      \
++									      \
+ 	switch (__CORE_RELO(s, field, BYTE_SIZE)) {			      \
+-	case 1: val = *(const unsigned char *)p;			      \
+-	case 2: val = *(const unsigned short *)p;			      \
+-	case 4: val = *(const unsigned int *)p;				      \
+-	case 8: val = *(const unsigned long long *)p;			      \
++	case 1: val = *(const unsigned char *)p; break;			      \
++	case 2: val = *(const unsigned short *)p; break;		      \
++	case 4: val = *(const unsigned int *)p; break;			      \
++	case 8: val = *(const unsigned long long *)p; break;		      \
+ 	}								      \
+ 	val <<= __CORE_RELO(s, field, LSHIFT_U64);			      \
+ 	if (__CORE_RELO(s, field, SIGNED))				      \
+diff --git a/tools/lib/bpf/bpf_tracing.h b/tools/lib/bpf/bpf_tracing.h
+index f9ef37707888f..1c2e91ee041d8 100644
+--- a/tools/lib/bpf/bpf_tracing.h
++++ b/tools/lib/bpf/bpf_tracing.h
+@@ -413,20 +413,38 @@ typeof(name(0)) name(struct pt_regs *ctx)				    \
+ }									    \
+ static __always_inline typeof(name(0)) ____##name(struct pt_regs *ctx, ##args)
+ 
++#define ___bpf_fill0(arr, p, x) do {} while (0)
++#define ___bpf_fill1(arr, p, x) arr[p] = x
++#define ___bpf_fill2(arr, p, x, args...) arr[p] = x; ___bpf_fill1(arr, p + 1, args)
++#define ___bpf_fill3(arr, p, x, args...) arr[p] = x; ___bpf_fill2(arr, p + 1, args)
++#define ___bpf_fill4(arr, p, x, args...) arr[p] = x; ___bpf_fill3(arr, p + 1, args)
++#define ___bpf_fill5(arr, p, x, args...) arr[p] = x; ___bpf_fill4(arr, p + 1, args)
++#define ___bpf_fill6(arr, p, x, args...) arr[p] = x; ___bpf_fill5(arr, p + 1, args)
++#define ___bpf_fill7(arr, p, x, args...) arr[p] = x; ___bpf_fill6(arr, p + 1, args)
++#define ___bpf_fill8(arr, p, x, args...) arr[p] = x; ___bpf_fill7(arr, p + 1, args)
++#define ___bpf_fill9(arr, p, x, args...) arr[p] = x; ___bpf_fill8(arr, p + 1, args)
++#define ___bpf_fill10(arr, p, x, args...) arr[p] = x; ___bpf_fill9(arr, p + 1, args)
++#define ___bpf_fill11(arr, p, x, args...) arr[p] = x; ___bpf_fill10(arr, p + 1, args)
++#define ___bpf_fill12(arr, p, x, args...) arr[p] = x; ___bpf_fill11(arr, p + 1, args)
++#define ___bpf_fill(arr, args...) \
++	___bpf_apply(___bpf_fill, ___bpf_narg(args))(arr, 0, args)
++
+ /*
+  * BPF_SEQ_PRINTF to wrap bpf_seq_printf to-be-printed values
+  * in a structure.
+  */
+-#define BPF_SEQ_PRINTF(seq, fmt, args...)				    \
+-	({								    \
+-		_Pragma("GCC diagnostic push")				    \
+-		_Pragma("GCC diagnostic ignored \"-Wint-conversion\"")	    \
+-		static const char ___fmt[] = fmt;			    \
+-		unsigned long long ___param[] = { args };		    \
+-		_Pragma("GCC diagnostic pop")				    \
+-		int ___ret = bpf_seq_printf(seq, ___fmt, sizeof(___fmt),    \
+-					    ___param, sizeof(___param));    \
+-		___ret;							    \
+-	})
++#define BPF_SEQ_PRINTF(seq, fmt, args...)			\
++({								\
++	static const char ___fmt[] = fmt;			\
++	unsigned long long ___param[___bpf_narg(args)];		\
++								\
++	_Pragma("GCC diagnostic push")				\
++	_Pragma("GCC diagnostic ignored \"-Wint-conversion\"")	\
++	___bpf_fill(___param, args);				\
++	_Pragma("GCC diagnostic pop")				\
++								\
++	bpf_seq_printf(seq, ___fmt, sizeof(___fmt),		\
++		       ___param, sizeof(___param));		\
++})
+ 
+ #endif
+diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
+index 1237bcd1dd17e..5b8a6ea44b38b 100644
+--- a/tools/lib/bpf/btf.h
++++ b/tools/lib/bpf/btf.h
+@@ -173,6 +173,7 @@ struct btf_dump_emit_type_decl_opts {
+ 	int indent_level;
+ 	/* strip all the const/volatile/restrict mods */
+ 	bool strip_mods;
++	size_t :0;
+ };
+ #define btf_dump_emit_type_decl_opts__last_field strip_mods
+ 
+diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
+index 3c35eb401931f..3d690d4e785c3 100644
+--- a/tools/lib/bpf/libbpf.h
++++ b/tools/lib/bpf/libbpf.h
+@@ -507,6 +507,7 @@ struct xdp_link_info {
+ struct bpf_xdp_set_link_opts {
+ 	size_t sz;
+ 	int old_fd;
++	size_t :0;
+ };
+ #define bpf_xdp_set_link_opts__last_field old_fd
+ 
+diff --git a/tools/lib/perf/include/perf/event.h b/tools/lib/perf/include/perf/event.h
+index d82054225fcc0..4d0c02ba3f7d3 100644
+--- a/tools/lib/perf/include/perf/event.h
++++ b/tools/lib/perf/include/perf/event.h
+@@ -8,6 +8,8 @@
+ #include <linux/bpf.h>
+ #include <sys/types.h> /* pid_t */
+ 
++#define event_contains(obj, mem) ((obj).header.size > offsetof(typeof(obj), mem))
++
+ struct perf_record_mmap {
+ 	struct perf_event_header header;
+ 	__u32			 pid, tid;
+@@ -346,8 +348,9 @@ struct perf_record_time_conv {
+ 	__u64			 time_zero;
+ 	__u64			 time_cycles;
+ 	__u64			 time_mask;
+-	bool			 cap_user_time_zero;
+-	bool			 cap_user_time_short;
++	__u8			 cap_user_time_zero;
++	__u8			 cap_user_time_short;
++	__u8			 reserved[6];	/* For alignment */
+ };
+ 
+ struct perf_record_header_feature {
+diff --git a/tools/perf/pmu-events/arch/x86/amdzen1/cache.json b/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
+index 4ea7ec4f496e8..008f1683e5407 100644
+--- a/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
++++ b/tools/perf/pmu-events/arch/x86/amdzen1/cache.json
+@@ -275,7 +275,7 @@
+   {
+     "EventName": "l2_pf_hit_l2",
+     "EventCode": "0x70",
+-    "BriefDescription": "L2 prefetch hit in L2.",
++    "BriefDescription": "L2 prefetch hit in L2. Use l2_cache_hits_from_l2_hwpf instead.",
+     "UMask": "0xff"
+   },
+   {
+diff --git a/tools/perf/pmu-events/arch/x86/amdzen1/recommended.json b/tools/perf/pmu-events/arch/x86/amdzen1/recommended.json
+index 2cfe2d2f3bfdd..3c954543d1ae6 100644
+--- a/tools/perf/pmu-events/arch/x86/amdzen1/recommended.json
++++ b/tools/perf/pmu-events/arch/x86/amdzen1/recommended.json
+@@ -79,10 +79,10 @@
+     "UMask": "0x70"
+   },
+   {
+-    "MetricName": "l2_cache_hits_from_l2_hwpf",
++    "EventName": "l2_cache_hits_from_l2_hwpf",
++    "EventCode": "0x70",
+     "BriefDescription": "L2 Cache Hits from L2 HWPF",
+-    "MetricExpr": "l2_pf_hit_l2 + l2_pf_miss_l2_hit_l3 + l2_pf_miss_l2_l3",
+-    "MetricGroup": "l2_cache"
++    "UMask": "0xff"
+   },
+   {
+     "EventName": "l3_accesses",
+diff --git a/tools/perf/pmu-events/arch/x86/amdzen2/cache.json b/tools/perf/pmu-events/arch/x86/amdzen2/cache.json
+index f61b982f83ca3..8ba84a48188dd 100644
+--- a/tools/perf/pmu-events/arch/x86/amdzen2/cache.json
++++ b/tools/perf/pmu-events/arch/x86/amdzen2/cache.json
+@@ -205,7 +205,7 @@
+   {
+     "EventName": "l2_pf_hit_l2",
+     "EventCode": "0x70",
+-    "BriefDescription": "L2 prefetch hit in L2.",
++    "BriefDescription": "L2 prefetch hit in L2. Use l2_cache_hits_from_l2_hwpf instead.",
+     "UMask": "0xff"
+   },
+   {
+diff --git a/tools/perf/pmu-events/arch/x86/amdzen2/recommended.json b/tools/perf/pmu-events/arch/x86/amdzen2/recommended.json
+index 2ef91e25e6613..1c624cee9ef48 100644
+--- a/tools/perf/pmu-events/arch/x86/amdzen2/recommended.json
++++ b/tools/perf/pmu-events/arch/x86/amdzen2/recommended.json
+@@ -79,10 +79,10 @@
+     "UMask": "0x70"
+   },
+   {
+-    "MetricName": "l2_cache_hits_from_l2_hwpf",
++    "EventName": "l2_cache_hits_from_l2_hwpf",
++    "EventCode": "0x70",
+     "BriefDescription": "L2 Cache Hits from L2 HWPF",
+-    "MetricExpr": "l2_pf_hit_l2 + l2_pf_miss_l2_hit_l3 + l2_pf_miss_l2_l3",
+-    "MetricGroup": "l2_cache"
++    "UMask": "0xff"
+   },
+   {
+     "EventName": "l3_accesses",
+diff --git a/tools/perf/trace/beauty/fsconfig.sh b/tools/perf/trace/beauty/fsconfig.sh
+index 83fb24df05c9f..bc6ef7bb7a5f9 100755
+--- a/tools/perf/trace/beauty/fsconfig.sh
++++ b/tools/perf/trace/beauty/fsconfig.sh
+@@ -10,8 +10,7 @@ fi
+ linux_mount=${linux_header_dir}/mount.h
+ 
+ printf "static const char *fsconfig_cmds[] = {\n"
+-regex='^[[:space:]]*+FSCONFIG_([[:alnum:]_]+)[[:space:]]*=[[:space:]]*([[:digit:]]+)[[:space:]]*,[[:space:]]*.*'
+-egrep $regex ${linux_mount} | \
+-	sed -r "s/$regex/\2 \1/g"	| \
+-	xargs printf "\t[%s] = \"%s\",\n"
++ms='[[:space:]]*'
++sed -nr "s/^${ms}FSCONFIG_([[:alnum:]_]+)${ms}=${ms}([[:digit:]]+)${ms},.*/\t[\2] = \"\1\",/p" \
++	${linux_mount}
+ printf "};\n"
+diff --git a/tools/perf/util/jitdump.c b/tools/perf/util/jitdump.c
+index 9760d8e7b3860..917a9c707371a 100644
+--- a/tools/perf/util/jitdump.c
++++ b/tools/perf/util/jitdump.c
+@@ -396,21 +396,31 @@ static pid_t jr_entry_tid(struct jit_buf_desc *jd, union jr_entry *jr)
+ 
+ static uint64_t convert_timestamp(struct jit_buf_desc *jd, uint64_t timestamp)
+ {
+-	struct perf_tsc_conversion tc;
++	struct perf_tsc_conversion tc = { .time_shift = 0, };
++	struct perf_record_time_conv *time_conv = &jd->session->time_conv;
+ 
+ 	if (!jd->use_arch_timestamp)
+ 		return timestamp;
+ 
+-	tc.time_shift	       = jd->session->time_conv.time_shift;
+-	tc.time_mult	       = jd->session->time_conv.time_mult;
+-	tc.time_zero	       = jd->session->time_conv.time_zero;
+-	tc.time_cycles	       = jd->session->time_conv.time_cycles;
+-	tc.time_mask	       = jd->session->time_conv.time_mask;
+-	tc.cap_user_time_zero  = jd->session->time_conv.cap_user_time_zero;
+-	tc.cap_user_time_short = jd->session->time_conv.cap_user_time_short;
++	tc.time_shift = time_conv->time_shift;
++	tc.time_mult  = time_conv->time_mult;
++	tc.time_zero  = time_conv->time_zero;
+ 
+-	if (!tc.cap_user_time_zero)
+-		return 0;
++	/*
++	 * The event TIME_CONV was extended for the fields from "time_cycles"
++	 * when supported cap_user_time_short, for backward compatibility,
++	 * checks the event size and assigns these extended fields if these
++	 * fields are contained in the event.
++	 */
++	if (event_contains(*time_conv, time_cycles)) {
++		tc.time_cycles	       = time_conv->time_cycles;
++		tc.time_mask	       = time_conv->time_mask;
++		tc.cap_user_time_zero  = time_conv->cap_user_time_zero;
++		tc.cap_user_time_short = time_conv->cap_user_time_short;
++
++		if (!tc.cap_user_time_zero)
++			return 0;
++	}
+ 
+ 	return tsc_to_perf_time(timestamp, &tc);
+ }
+diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
+index 859832a824967..e9d4e6f4bdf3e 100644
+--- a/tools/perf/util/session.c
++++ b/tools/perf/util/session.c
+@@ -949,6 +949,19 @@ static void perf_event__stat_round_swap(union perf_event *event,
+ 	event->stat_round.time = bswap_64(event->stat_round.time);
+ }
+ 
++static void perf_event__time_conv_swap(union perf_event *event,
++				       bool sample_id_all __maybe_unused)
++{
++	event->time_conv.time_shift = bswap_64(event->time_conv.time_shift);
++	event->time_conv.time_mult  = bswap_64(event->time_conv.time_mult);
++	event->time_conv.time_zero  = bswap_64(event->time_conv.time_zero);
++
++	if (event_contains(event->time_conv, time_cycles)) {
++		event->time_conv.time_cycles = bswap_64(event->time_conv.time_cycles);
++		event->time_conv.time_mask = bswap_64(event->time_conv.time_mask);
++	}
++}
++
+ typedef void (*perf_event__swap_op)(union perf_event *event,
+ 				    bool sample_id_all);
+ 
+@@ -985,7 +998,7 @@ static perf_event__swap_op perf_event__swap_ops[] = {
+ 	[PERF_RECORD_STAT]		  = perf_event__stat_swap,
+ 	[PERF_RECORD_STAT_ROUND]	  = perf_event__stat_round_swap,
+ 	[PERF_RECORD_EVENT_UPDATE]	  = perf_event__event_update_swap,
+-	[PERF_RECORD_TIME_CONV]		  = perf_event__all64_swap,
++	[PERF_RECORD_TIME_CONV]		  = perf_event__time_conv_swap,
+ 	[PERF_RECORD_HEADER_MAX]	  = NULL,
+ };
+ 
+diff --git a/tools/perf/util/symbol_fprintf.c b/tools/perf/util/symbol_fprintf.c
+index 35c936ce33efa..2664fb65e47ad 100644
+--- a/tools/perf/util/symbol_fprintf.c
++++ b/tools/perf/util/symbol_fprintf.c
+@@ -68,7 +68,7 @@ size_t dso__fprintf_symbols_by_name(struct dso *dso,
+ 
+ 	for (nd = rb_first_cached(&dso->symbol_names); nd; nd = rb_next(nd)) {
+ 		pos = rb_entry(nd, struct symbol_name_rb_node, rb_node);
+-		fprintf(fp, "%s\n", pos->sym.name);
++		ret += fprintf(fp, "%s\n", pos->sym.name);
+ 	}
+ 
+ 	return ret;
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 490c9a496fe28..0026970214748 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -4822,33 +4822,12 @@ double discover_bclk(unsigned int family, unsigned int model)
+  * below this value, including the Digital Thermal Sensor (DTS),
+  * Package Thermal Management Sensor (PTM), and thermal event thresholds.
+  */
+-int read_tcc_activation_temp()
++int set_temperature_target(struct thread_data *t, struct core_data *c, struct pkg_data *p)
+ {
+ 	unsigned long long msr;
+-	unsigned int tcc, target_c, offset_c;
+-
+-	/* Temperature Target MSR is Nehalem and newer only */
+-	if (!do_nhm_platform_info)
+-		return 0;
+-
+-	if (get_msr(base_cpu, MSR_IA32_TEMPERATURE_TARGET, &msr))
+-		return 0;
+-
+-	target_c = (msr >> 16) & 0xFF;
+-
+-	offset_c = (msr >> 24) & 0xF;
+-
+-	tcc = target_c - offset_c;
+-
+-	if (!quiet)
+-		fprintf(outf, "cpu%d: MSR_IA32_TEMPERATURE_TARGET: 0x%08llx (%d C) (%d default - %d offset)\n",
+-			base_cpu, msr, tcc, target_c, offset_c);
+-
+-	return tcc;
+-}
++	unsigned int target_c_local;
++	int cpu;
+ 
+-int set_temperature_target(struct thread_data *t, struct core_data *c, struct pkg_data *p)
+-{
+ 	/* tcc_activation_temp is used only for dts or ptm */
+ 	if (!(do_dts || do_ptm))
+ 		return 0;
+@@ -4857,18 +4836,43 @@ int set_temperature_target(struct thread_data *t, struct core_data *c, struct pk
+ 	if (!(t->flags & CPU_IS_FIRST_THREAD_IN_CORE) || !(t->flags & CPU_IS_FIRST_CORE_IN_PACKAGE))
+ 		return 0;
+ 
++	cpu = t->cpu_id;
++	if (cpu_migrate(cpu)) {
++		fprintf(outf, "Could not migrate to CPU %d\n", cpu);
++		return -1;
++	}
++
+ 	if (tcc_activation_temp_override != 0) {
+ 		tcc_activation_temp = tcc_activation_temp_override;
+-		fprintf(outf, "Using cmdline TCC Target (%d C)\n", tcc_activation_temp);
++		fprintf(outf, "cpu%d: Using cmdline TCC Target (%d C)\n",
++			cpu, tcc_activation_temp);
+ 		return 0;
+ 	}
+ 
+-	tcc_activation_temp = read_tcc_activation_temp();
+-	if (tcc_activation_temp)
+-		return 0;
++	/* Temperature Target MSR is Nehalem and newer only */
++	if (!do_nhm_platform_info)
++		goto guess;
++
++	if (get_msr(base_cpu, MSR_IA32_TEMPERATURE_TARGET, &msr))
++		goto guess;
++
++	target_c_local = (msr >> 16) & 0xFF;
++
++	if (!quiet)
++		fprintf(outf, "cpu%d: MSR_IA32_TEMPERATURE_TARGET: 0x%08llx (%d C)\n",
++			cpu, msr, target_c_local);
++
++	if (!target_c_local)
++		goto guess;
++
++	tcc_activation_temp = target_c_local;
++
++	return 0;
+ 
++guess:
+ 	tcc_activation_temp = TJMAX_DEFAULT;
+-	fprintf(outf, "Guessing tjMax %d C, Please use -T to specify\n", tcc_activation_temp);
++	fprintf(outf, "cpu%d: Guessing tjMax %d C, Please use -T to specify\n",
++		cpu, tcc_activation_temp);
+ 
+ 	return 0;
+ }
+diff --git a/tools/spi/Makefile b/tools/spi/Makefile
+index ada881afb489a..0aa6dbd31fb8d 100644
+--- a/tools/spi/Makefile
++++ b/tools/spi/Makefile
+@@ -25,11 +25,12 @@ include $(srctree)/tools/build/Makefile.include
+ #
+ # We need the following to be outside of kernel tree
+ #
+-$(OUTPUT)include/linux/spi/spidev.h: ../../include/uapi/linux/spi/spidev.h
++$(OUTPUT)include/linux/spi: ../../include/uapi/linux/spi
+ 	mkdir -p $(OUTPUT)include/linux/spi 2>&1 || true
+ 	ln -sf $(CURDIR)/../../include/uapi/linux/spi/spidev.h $@
++	ln -sf $(CURDIR)/../../include/uapi/linux/spi/spi.h $@
+ 
+-prepare: $(OUTPUT)include/linux/spi/spidev.h
++prepare: $(OUTPUT)include/linux/spi
+ 
+ #
+ # spidev_test
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 044bfdcf5b74f..76a325862119b 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -221,7 +221,7 @@ $(HOST_BPFOBJ): $(wildcard $(BPFDIR)/*.[ch] $(BPFDIR)/Makefile)                \
+ 		    DESTDIR=$(HOST_SCRATCH_DIR)/ prefix= all install_headers
+ endif
+ 
+-$(INCLUDE_DIR)/vmlinux.h: $(VMLINUX_BTF) | $(BPFTOOL) $(INCLUDE_DIR)
++$(INCLUDE_DIR)/vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL) | $(INCLUDE_DIR)
+ ifeq ($(VMLINUX_H),)
+ 	$(call msg,GEN,,$@)
+ 	$(Q)$(BPFTOOL) btf dump file $(VMLINUX_BTF) format c > $@
+@@ -346,7 +346,8 @@ $(TRUNNER_BPF_OBJS): $(TRUNNER_OUTPUT)/%.o:				\
+ 
+ $(TRUNNER_BPF_SKELS): $(TRUNNER_OUTPUT)/%.skel.h:			\
+ 		      $(TRUNNER_OUTPUT)/%.o				\
+-		      | $(BPFTOOL) $(TRUNNER_OUTPUT)
++		      $(BPFTOOL)					\
++		      | $(TRUNNER_OUTPUT)
+ 	$$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
+ 	$(Q)$$(BPFTOOL) gen skeleton $$< > $$@
+ endif
+diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+index 06eb956ff7bbd..4b517d76257d1 100644
+--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
++++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+@@ -210,11 +210,6 @@ static int duration = 0;
+ 	.bpf_obj_file = "test_core_reloc_existence.o",			\
+ 	.btf_src_file = "btf__core_reloc_" #name ".o"			\
+ 
+-#define FIELD_EXISTS_ERR_CASE(name) {					\
+-	FIELD_EXISTS_CASE_COMMON(name),					\
+-	.fails = true,							\
+-}
+-
+ #define BITFIELDS_CASE_COMMON(objfile, test_name_prefix,  name)		\
+ 	.case_name = test_name_prefix#name,				\
+ 	.bpf_obj_file = objfile,					\
+@@ -222,7 +217,7 @@ static int duration = 0;
+ 
+ #define BITFIELDS_CASE(name, ...) {					\
+ 	BITFIELDS_CASE_COMMON("test_core_reloc_bitfields_probed.o",	\
+-			      "direct:", name),				\
++			      "probed:", name),				\
+ 	.input = STRUCT_TO_CHAR_PTR(core_reloc_##name) __VA_ARGS__,	\
+ 	.input_len = sizeof(struct core_reloc_##name),			\
+ 	.output = STRUCT_TO_CHAR_PTR(core_reloc_bitfields_output)	\
+@@ -230,7 +225,7 @@ static int duration = 0;
+ 	.output_len = sizeof(struct core_reloc_bitfields_output),	\
+ }, {									\
+ 	BITFIELDS_CASE_COMMON("test_core_reloc_bitfields_direct.o",	\
+-			      "probed:", name),				\
++			      "direct:", name),				\
+ 	.input = STRUCT_TO_CHAR_PTR(core_reloc_##name) __VA_ARGS__,	\
+ 	.input_len = sizeof(struct core_reloc_##name),			\
+ 	.output = STRUCT_TO_CHAR_PTR(core_reloc_bitfields_output)	\
+@@ -550,8 +545,7 @@ static struct core_reloc_test_case test_cases[] = {
+ 	ARRAYS_ERR_CASE(arrays___err_too_small),
+ 	ARRAYS_ERR_CASE(arrays___err_too_shallow),
+ 	ARRAYS_ERR_CASE(arrays___err_non_array),
+-	ARRAYS_ERR_CASE(arrays___err_wrong_val_type1),
+-	ARRAYS_ERR_CASE(arrays___err_wrong_val_type2),
++	ARRAYS_ERR_CASE(arrays___err_wrong_val_type),
+ 	ARRAYS_ERR_CASE(arrays___err_bad_zero_sz_arr),
+ 
+ 	/* enum/ptr/int handling scenarios */
+@@ -642,13 +636,25 @@ static struct core_reloc_test_case test_cases[] = {
+ 		},
+ 		.output_len = sizeof(struct core_reloc_existence_output),
+ 	},
+-
+-	FIELD_EXISTS_ERR_CASE(existence__err_int_sz),
+-	FIELD_EXISTS_ERR_CASE(existence__err_int_type),
+-	FIELD_EXISTS_ERR_CASE(existence__err_int_kind),
+-	FIELD_EXISTS_ERR_CASE(existence__err_arr_kind),
+-	FIELD_EXISTS_ERR_CASE(existence__err_arr_value_type),
+-	FIELD_EXISTS_ERR_CASE(existence__err_struct_type),
++	{
++		FIELD_EXISTS_CASE_COMMON(existence___wrong_field_defs),
++		.input = STRUCT_TO_CHAR_PTR(core_reloc_existence___wrong_field_defs) {
++		},
++		.input_len = sizeof(struct core_reloc_existence___wrong_field_defs),
++		.output = STRUCT_TO_CHAR_PTR(core_reloc_existence_output) {
++			.a_exists = 0,
++			.b_exists = 0,
++			.c_exists = 0,
++			.arr_exists = 0,
++			.s_exists = 0,
++			.a_value = 0xff000001u,
++			.b_value = 0xff000002u,
++			.c_value = 0xff000003u,
++			.arr_value = 0xff000004u,
++			.s_value = 0xff000005u,
++		},
++		.output_len = sizeof(struct core_reloc_existence_output),
++	},
+ 
+ 	/* bitfield relocation checks */
+ 	BITFIELDS_CASE(bitfields, {
+@@ -857,13 +863,20 @@ void test_core_reloc(void)
+ 			  "prog '%s' not found\n", probe_name))
+ 			goto cleanup;
+ 
++
++		if (test_case->btf_src_file) {
++			err = access(test_case->btf_src_file, R_OK);
++			if (!ASSERT_OK(err, "btf_src_file"))
++				goto cleanup;
++		}
++
+ 		load_attr.obj = obj;
+ 		load_attr.log_level = 0;
+ 		load_attr.target_btf_path = test_case->btf_src_file;
+ 		err = bpf_object__load_xattr(&load_attr);
+ 		if (err) {
+ 			if (!test_case->fails)
+-				CHECK(false, "obj_load", "failed to load prog '%s': %d\n", probe_name, err);
++				ASSERT_OK(err, "obj_load");
+ 			goto cleanup;
+ 		}
+ 
+@@ -902,10 +915,8 @@ void test_core_reloc(void)
+ 			goto cleanup;
+ 		}
+ 
+-		if (test_case->fails) {
+-			CHECK(false, "obj_load_fail", "should fail to load prog '%s'\n", probe_name);
++		if (!ASSERT_FALSE(test_case->fails, "obj_load_should_fail"))
+ 			goto cleanup;
+-		}
+ 
+ 		equal = memcmp(data->out, test_case->output,
+ 			       test_case->output_len) == 0;
+diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_kind.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_kind.c
+deleted file mode 100644
+index dd0ffa518f366..0000000000000
+--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_kind.c
++++ /dev/null
+@@ -1,3 +0,0 @@
+-#include "core_reloc_types.h"
+-
+-void f(struct core_reloc_existence___err_wrong_arr_kind x) {}
+diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_value_type.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_value_type.c
+deleted file mode 100644
+index bc83372088ad0..0000000000000
+--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_arr_value_type.c
++++ /dev/null
+@@ -1,3 +0,0 @@
+-#include "core_reloc_types.h"
+-
+-void f(struct core_reloc_existence___err_wrong_arr_value_type x) {}
+diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_kind.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_kind.c
+deleted file mode 100644
+index 917bec41be081..0000000000000
+--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_kind.c
++++ /dev/null
+@@ -1,3 +0,0 @@
+-#include "core_reloc_types.h"
+-
+-void f(struct core_reloc_existence___err_wrong_int_kind x) {}
+diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_sz.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_sz.c
+deleted file mode 100644
+index 6ec7e6ec1c915..0000000000000
+--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_sz.c
++++ /dev/null
+@@ -1,3 +0,0 @@
+-#include "core_reloc_types.h"
+-
+-void f(struct core_reloc_existence___err_wrong_int_sz x) {}
+diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_type.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_type.c
+deleted file mode 100644
+index 7bbcacf2b0d17..0000000000000
+--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_int_type.c
++++ /dev/null
+@@ -1,3 +0,0 @@
+-#include "core_reloc_types.h"
+-
+-void f(struct core_reloc_existence___err_wrong_int_type x) {}
+diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_struct_type.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_struct_type.c
+deleted file mode 100644
+index f384dd38ec709..0000000000000
+--- a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___err_wrong_struct_type.c
++++ /dev/null
+@@ -1,3 +0,0 @@
+-#include "core_reloc_types.h"
+-
+-void f(struct core_reloc_existence___err_wrong_struct_type x) {}
+diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___wrong_field_defs.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___wrong_field_defs.c
+new file mode 100644
+index 0000000000000..d14b496190c3d
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_existence___wrong_field_defs.c
+@@ -0,0 +1,3 @@
++#include "core_reloc_types.h"
++
++void f(struct core_reloc_existence___wrong_field_defs x) {}
+diff --git a/tools/testing/selftests/bpf/progs/core_reloc_types.h b/tools/testing/selftests/bpf/progs/core_reloc_types.h
+index 9a28508501213..664eea1013aab 100644
+--- a/tools/testing/selftests/bpf/progs/core_reloc_types.h
++++ b/tools/testing/selftests/bpf/progs/core_reloc_types.h
+@@ -700,27 +700,11 @@ struct core_reloc_existence___minimal {
+ 	int a;
+ };
+ 
+-struct core_reloc_existence___err_wrong_int_sz {
+-	short a;
+-};
+-
+-struct core_reloc_existence___err_wrong_int_type {
++struct core_reloc_existence___wrong_field_defs {
++	void *a;
+ 	int b[1];
+-};
+-
+-struct core_reloc_existence___err_wrong_int_kind {
+ 	struct{ int x; } c;
+-};
+-
+-struct core_reloc_existence___err_wrong_arr_kind {
+ 	int arr;
+-};
+-
+-struct core_reloc_existence___err_wrong_arr_value_type {
+-	short arr[1];
+-};
+-
+-struct core_reloc_existence___err_wrong_struct_type {
+ 	int s;
+ };
+ 
+diff --git a/tools/testing/selftests/bpf/verifier/array_access.c b/tools/testing/selftests/bpf/verifier/array_access.c
+index 1b138cd2b187d..1b1c798e92489 100644
+--- a/tools/testing/selftests/bpf/verifier/array_access.c
++++ b/tools/testing/selftests/bpf/verifier/array_access.c
+@@ -186,7 +186,7 @@
+ 	},
+ 	.fixup_map_hash_48b = { 3 },
+ 	.errstr_unpriv = "R0 leaks addr",
+-	.errstr = "invalid access to map value, value_size=48 off=44 size=8",
++	.errstr = "R0 unbounded memory access",
+ 	.result_unpriv = REJECT,
+ 	.result = REJECT,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/port_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/port_scale.sh
+index f813ffefc07ec..65f43a7ce9c93 100644
+--- a/tools/testing/selftests/drivers/net/mlxsw/port_scale.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/port_scale.sh
+@@ -55,10 +55,6 @@ port_test()
+ 	      | jq '.[][][] | select(.name=="physical_ports") |.["occ"]')
+ 
+ 	[[ $occ -eq $max_ports ]]
+-	if [[ $should_fail -eq 0 ]]; then
+-		check_err $? "Mismatch ports number: Expected $max_ports, got $occ."
+-	else
+-		check_err_fail $should_fail $? "Reached more ports than expected"
+-	fi
++	check_err_fail $should_fail $? "Attempt to create $max_ports ports (actual result $occ)"
+ 
+ }
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh
+index cc0f07e72cf22..aa74be9f47c85 100644
+--- a/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh
+@@ -98,11 +98,7 @@ __tc_flower_test()
+ 			jq -r '[ .[] | select(.kind == "flower") |
+ 			.options | .in_hw ]' | jq .[] | wc -l)
+ 	[[ $((offload_count - 1)) -eq $count ]]
+-	if [[ $should_fail -eq 0 ]]; then
+-		check_err $? "Offload mismatch"
+-	else
+-		check_err_fail $should_fail $? "Offload more than expacted"
+-	fi
++	check_err_fail $should_fail $? "Attempt to offload $count rules (actual result $((offload_count - 1)))"
+ }
+ 
+ tc_flower_test()
+diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
+index bb2752d78fe3a..81edbd23d371c 100644
+--- a/tools/testing/selftests/kvm/dirty_log_test.c
++++ b/tools/testing/selftests/kvm/dirty_log_test.c
+@@ -17,6 +17,7 @@
+ #include <linux/bitmap.h>
+ #include <linux/bitops.h>
+ #include <asm/barrier.h>
++#include <linux/atomic.h>
+ 
+ #include "kvm_util.h"
+ #include "test_util.h"
+@@ -137,12 +138,20 @@ static uint64_t host_clear_count;
+ static uint64_t host_track_next_count;
+ 
+ /* Whether dirty ring reset is requested, or finished */
+-static sem_t dirty_ring_vcpu_stop;
+-static sem_t dirty_ring_vcpu_cont;
++static sem_t sem_vcpu_stop;
++static sem_t sem_vcpu_cont;
++/*
++ * This is only set by main thread, and only cleared by vcpu thread.  It is
++ * used to request vcpu thread to stop at the next GUEST_SYNC, since GUEST_SYNC
++ * is the only place that we'll guarantee both "dirty bit" and "dirty data"
++ * will match.  E.g., SIG_IPI won't guarantee that if the vcpu is interrupted
++ * after setting dirty bit but before the data is written.
++ */
++static atomic_t vcpu_sync_stop_requested;
+ /*
+  * This is updated by the vcpu thread to tell the host whether it's a
+  * ring-full event.  It should only be read until a sem_wait() of
+- * dirty_ring_vcpu_stop and before vcpu continues to run.
++ * sem_vcpu_stop and before vcpu continues to run.
+  */
+ static bool dirty_ring_vcpu_ring_full;
+ /*
+@@ -234,6 +243,17 @@ static void clear_log_collect_dirty_pages(struct kvm_vm *vm, int slot,
+ 	kvm_vm_clear_dirty_log(vm, slot, bitmap, 0, num_pages);
+ }
+ 
++/* Should only be called after a GUEST_SYNC */
++static void vcpu_handle_sync_stop(void)
++{
++	if (atomic_read(&vcpu_sync_stop_requested)) {
++		/* It means main thread is sleeping waiting */
++		atomic_set(&vcpu_sync_stop_requested, false);
++		sem_post(&sem_vcpu_stop);
++		sem_wait_until(&sem_vcpu_cont);
++	}
++}
++
+ static void default_after_vcpu_run(struct kvm_vm *vm, int ret, int err)
+ {
+ 	struct kvm_run *run = vcpu_state(vm, VCPU_ID);
+@@ -244,6 +264,8 @@ static void default_after_vcpu_run(struct kvm_vm *vm, int ret, int err)
+ 	TEST_ASSERT(get_ucall(vm, VCPU_ID, NULL) == UCALL_SYNC,
+ 		    "Invalid guest sync status: exit_reason=%s\n",
+ 		    exit_reason_str(run->exit_reason));
++
++	vcpu_handle_sync_stop();
+ }
+ 
+ static bool dirty_ring_supported(void)
+@@ -301,13 +323,13 @@ static void dirty_ring_wait_vcpu(void)
+ {
+ 	/* This makes sure that hardware PML cache flushed */
+ 	vcpu_kick();
+-	sem_wait_until(&dirty_ring_vcpu_stop);
++	sem_wait_until(&sem_vcpu_stop);
+ }
+ 
+ static void dirty_ring_continue_vcpu(void)
+ {
+ 	pr_info("Notifying vcpu to continue\n");
+-	sem_post(&dirty_ring_vcpu_cont);
++	sem_post(&sem_vcpu_cont);
+ }
+ 
+ static void dirty_ring_collect_dirty_pages(struct kvm_vm *vm, int slot,
+@@ -361,11 +383,11 @@ static void dirty_ring_after_vcpu_run(struct kvm_vm *vm, int ret, int err)
+ 		/* Update the flag first before pause */
+ 		WRITE_ONCE(dirty_ring_vcpu_ring_full,
+ 			   run->exit_reason == KVM_EXIT_DIRTY_RING_FULL);
+-		sem_post(&dirty_ring_vcpu_stop);
++		sem_post(&sem_vcpu_stop);
+ 		pr_info("vcpu stops because %s...\n",
+ 			dirty_ring_vcpu_ring_full ?
+ 			"dirty ring is full" : "vcpu is kicked out");
+-		sem_wait_until(&dirty_ring_vcpu_cont);
++		sem_wait_until(&sem_vcpu_cont);
+ 		pr_info("vcpu continues now.\n");
+ 	} else {
+ 		TEST_ASSERT(false, "Invalid guest sync status: "
+@@ -377,7 +399,7 @@ static void dirty_ring_after_vcpu_run(struct kvm_vm *vm, int ret, int err)
+ static void dirty_ring_before_vcpu_join(void)
+ {
+ 	/* Kick another round of vcpu just to make sure it will quit */
+-	sem_post(&dirty_ring_vcpu_cont);
++	sem_post(&sem_vcpu_cont);
+ }
+ 
+ struct log_mode {
+@@ -505,9 +527,8 @@ static void *vcpu_worker(void *data)
+ 	 */
+ 	sigmask->len = 8;
+ 	pthread_sigmask(0, NULL, sigset);
++	sigdelset(sigset, SIG_IPI);
+ 	vcpu_ioctl(vm, VCPU_ID, KVM_SET_SIGNAL_MASK, sigmask);
+-	sigaddset(sigset, SIG_IPI);
+-	pthread_sigmask(SIG_BLOCK, sigset, NULL);
+ 
+ 	sigemptyset(sigset);
+ 	sigaddset(sigset, SIG_IPI);
+@@ -768,7 +789,25 @@ static void run_test(enum vm_guest_mode mode, void *arg)
+ 		usleep(p->interval * 1000);
+ 		log_mode_collect_dirty_pages(vm, TEST_MEM_SLOT_INDEX,
+ 					     bmap, host_num_pages);
++
++		/*
++		 * See vcpu_sync_stop_requested definition for details on why
++		 * we need to stop vcpu when verify data.
++		 */
++		atomic_set(&vcpu_sync_stop_requested, true);
++		sem_wait_until(&sem_vcpu_stop);
++		/*
++		 * NOTE: for dirty ring, it's possible that we didn't stop at
++		 * GUEST_SYNC but instead we stopped because ring is full;
++		 * that's okay too because ring full means we're only missing
++		 * the flush of the last page, and since we handle the last
++		 * page specially verification will succeed anyway.
++		 */
++		assert(host_log_mode == LOG_MODE_DIRTY_RING ||
++		       atomic_read(&vcpu_sync_stop_requested) == false);
+ 		vm_dirty_log_verify(mode, bmap);
++		sem_post(&sem_vcpu_cont);
++
+ 		iteration++;
+ 		sync_global_to_guest(vm, iteration);
+ 	}
+@@ -818,9 +857,10 @@ int main(int argc, char *argv[])
+ 		.interval = TEST_HOST_LOOP_INTERVAL,
+ 	};
+ 	int opt, i;
++	sigset_t sigset;
+ 
+-	sem_init(&dirty_ring_vcpu_stop, 0, 0);
+-	sem_init(&dirty_ring_vcpu_cont, 0, 0);
++	sem_init(&sem_vcpu_stop, 0, 0);
++	sem_init(&sem_vcpu_cont, 0, 0);
+ 
+ 	guest_modes_append_default();
+ 
+@@ -876,6 +916,11 @@ int main(int argc, char *argv[])
+ 
+ 	srandom(time(0));
+ 
++	/* Ensure that vCPU threads start with SIG_IPI blocked.  */
++	sigemptyset(&sigset);
++	sigaddset(&sigset, SIG_IPI);
++	pthread_sigmask(SIG_BLOCK, &sigset, NULL);
++
+ 	if (host_log_mode_option == LOG_MODE_ALL) {
+ 		/* Run each log mode */
+ 		for (i = 0; i < LOG_MODE_NUM; i++) {
+diff --git a/tools/testing/selftests/lib.mk b/tools/testing/selftests/lib.mk
+index a5ce26d548e4f..be17462fe1467 100644
+--- a/tools/testing/selftests/lib.mk
++++ b/tools/testing/selftests/lib.mk
+@@ -74,7 +74,8 @@ ifdef building_out_of_srctree
+ 		rsync -aq $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(OUTPUT); \
+ 	fi
+ 	@if [ "X$(TEST_PROGS)" != "X" ]; then \
+-		$(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) $(OUTPUT)/$(TEST_PROGS)) ; \
++		$(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) \
++				  $(addprefix $(OUTPUT)/,$(TEST_PROGS))) ; \
+ 	else \
+ 		$(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS)); \
+ 	fi
+diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
+index c02291e9841e3..880e3ab9d088d 100755
+--- a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
++++ b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh
+@@ -271,7 +271,7 @@ test_span_gre_fdb_roaming()
+ 
+ 	while ((RET == 0)); do
+ 		bridge fdb del dev $swp3 $h3mac vlan 555 master 2>/dev/null
+-		bridge fdb add dev $swp2 $h3mac vlan 555 master
++		bridge fdb add dev $swp2 $h3mac vlan 555 master static
+ 		sleep 1
+ 		fail_test_span_gre_dir $tundev ingress
+ 
+diff --git a/tools/testing/selftests/x86/thunks_32.S b/tools/testing/selftests/x86/thunks_32.S
+index a71d92da8f466..f3f56e681e9fb 100644
+--- a/tools/testing/selftests/x86/thunks_32.S
++++ b/tools/testing/selftests/x86/thunks_32.S
+@@ -45,3 +45,5 @@ call64_from_32:
+ 	ret
+ 
+ .size call64_from_32, .-call64_from_32
++
++.section .note.GNU-stack,"",%progbits
+diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
+index 62bd908ecd580..f08f5e82460b1 100644
+--- a/virt/kvm/coalesced_mmio.c
++++ b/virt/kvm/coalesced_mmio.c
+@@ -174,21 +174,36 @@ int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
+ 					   struct kvm_coalesced_mmio_zone *zone)
+ {
+ 	struct kvm_coalesced_mmio_dev *dev, *tmp;
++	int r;
+ 
+ 	if (zone->pio != 1 && zone->pio != 0)
+ 		return -EINVAL;
+ 
+ 	mutex_lock(&kvm->slots_lock);
+ 
+-	list_for_each_entry_safe(dev, tmp, &kvm->coalesced_zones, list)
++	list_for_each_entry_safe(dev, tmp, &kvm->coalesced_zones, list) {
+ 		if (zone->pio == dev->zone.pio &&
+ 		    coalesced_mmio_in_range(dev, zone->addr, zone->size)) {
+-			kvm_io_bus_unregister_dev(kvm,
++			r = kvm_io_bus_unregister_dev(kvm,
+ 				zone->pio ? KVM_PIO_BUS : KVM_MMIO_BUS, &dev->dev);
+ 			kvm_iodevice_destructor(&dev->dev);
++
++			/*
++			 * On failure, unregister destroys all devices on the
++			 * bus _except_ the target device, i.e. coalesced_zones
++			 * has been modified.  No need to restart the walk as
++			 * there aren't any zones left.
++			 */
++			if (r)
++				break;
+ 		}
++	}
+ 
+ 	mutex_unlock(&kvm->slots_lock);
+ 
++	/*
++	 * Ignore the result of kvm_io_bus_unregister_dev(), from userspace's
++	 * perspective, the coalesced MMIO is most definitely unregistered.
++	 */
+ 	return 0;
+ }
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 383df23514b93..ab1fa6f92c825 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -4486,15 +4486,15 @@ int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
+ }
+ 
+ /* Caller must hold slots_lock. */
+-void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+-			       struct kvm_io_device *dev)
++int kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
++			      struct kvm_io_device *dev)
+ {
+ 	int i, j;
+ 	struct kvm_io_bus *new_bus, *bus;
+ 
+ 	bus = kvm_get_bus(kvm, bus_idx);
+ 	if (!bus)
+-		return;
++		return 0;
+ 
+ 	for (i = 0; i < bus->dev_count; i++)
+ 		if (bus->range[i].dev == dev) {
+@@ -4502,7 +4502,7 @@ void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+ 		}
+ 
+ 	if (i == bus->dev_count)
+-		return;
++		return 0;
+ 
+ 	new_bus = kmalloc(struct_size(bus, range, bus->dev_count - 1),
+ 			  GFP_KERNEL_ACCOUNT);
+@@ -4511,7 +4511,13 @@ void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+ 		new_bus->dev_count--;
+ 		memcpy(new_bus->range + i, bus->range + i + 1,
+ 				flex_array_size(new_bus, range, new_bus->dev_count - i));
+-	} else {
++	}
++
++	rcu_assign_pointer(kvm->buses[bus_idx], new_bus);
++	synchronize_srcu_expedited(&kvm->srcu);
++
++	/* Destroy the old bus _after_ installing the (null) bus. */
++	if (!new_bus) {
+ 		pr_err("kvm: failed to shrink bus, removing it completely\n");
+ 		for (j = 0; j < bus->dev_count; j++) {
+ 			if (j == i)
+@@ -4520,10 +4526,8 @@ void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+ 		}
+ 	}
+ 
+-	rcu_assign_pointer(kvm->buses[bus_idx], new_bus);
+-	synchronize_srcu_expedited(&kvm->srcu);
+ 	kfree(bus);
+-	return;
++	return new_bus ? 0 : -ENOMEM;
+ }
+ 
+ struct kvm_io_device *kvm_io_bus_get_dev(struct kvm *kvm, enum kvm_bus bus_idx,


Thread overview: 33+ messages
2021-05-14 14:02 Alice Ferrazzi [this message]
  -- strict thread matches above, loose matches on Subject: below --
2021-07-20 15:49 [gentoo-commits] proj/linux-patches:5.12 commit in: / Alice Ferrazzi
2021-07-19 11:16 Mike Pagano
2021-07-14 16:18 Mike Pagano
2021-07-13 12:36 Mike Pagano
2021-07-11 14:42 Mike Pagano
2021-07-07 13:12 Mike Pagano
2021-07-04 15:43 Mike Pagano
2021-07-01 14:28 Mike Pagano
2021-06-30 14:22 Mike Pagano
2021-06-23 15:15 Mike Pagano
2021-06-18 12:21 Mike Pagano
2021-06-18 12:00 Mike Pagano
2021-06-18 11:35 Mike Pagano
2021-06-16 12:25 Mike Pagano
2021-06-11 13:21 Mike Pagano
2021-06-10 12:14 Mike Pagano
2021-06-08 22:15 Mike Pagano
2021-06-08 16:48 Mike Pagano
2021-06-08 16:26 Mike Pagano
2021-06-03 10:22 Alice Ferrazzi
2021-05-28 12:17 Alice Ferrazzi
2021-05-26 12:08 Mike Pagano
2021-05-24 17:26 Mike Pagano
2021-05-22 16:50 Mike Pagano
2021-05-19 12:25 Mike Pagano
2021-05-12 12:30 Mike Pagano
2021-05-07 13:15 Alice Ferrazzi
2021-05-02 16:05 Mike Pagano
2021-04-30 18:53 Mike Pagano
2021-04-27 11:53 Mike Pagano
2021-04-18 22:03 Mike Pagano
2021-03-23 12:29 Mike Pagano
