From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:6.5 commit in: /
Date: Wed, 13 Sep 2023 11:03:43 +0000 (UTC)
Message-ID: <1694603011.26f66be9229018459d0f69bb54b8627d14a9d562.mpagano@gentoo>

commit:     26f66be9229018459d0f69bb54b8627d14a9d562
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 13 11:03:31 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Sep 13 11:03:31 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=26f66be9

Linux patch 6.5.3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |     4 +
 1002_linux-6.5.3.patch | 35809 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 35813 insertions(+)

diff --git a/0000_README b/0000_README
index 4ba02fbb..de8216ab 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-6.5.2.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.5.2
 
+Patch:  1002_linux-6.5.3.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.5.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-6.5.3.patch b/1002_linux-6.5.3.patch
new file mode 100644
index 00000000..c0712d3f
--- /dev/null
+++ b/1002_linux-6.5.3.patch
@@ -0,0 +1,35809 @@
+diff --git a/Documentation/ABI/testing/sysfs-bus-fsi-devices-sbefifo b/Documentation/ABI/testing/sysfs-bus-fsi-devices-sbefifo
+index 531fe9d6b40aa..c7393b4dd2d88 100644
+--- a/Documentation/ABI/testing/sysfs-bus-fsi-devices-sbefifo
++++ b/Documentation/ABI/testing/sysfs-bus-fsi-devices-sbefifo
+@@ -5,6 +5,6 @@ Description:
+ 		Indicates whether or not this SBE device has experienced a
+ 		timeout; i.e. the SBE did not respond within the time allotted
+ 		by the driver. A value of 1 indicates that a timeout has
+-		ocurred and no transfers have completed since the timeout. A
+-		value of 0 indicates that no timeout has ocurred, or if one
+-		has, more recent transfers have completed successful.
++		occurred and no transfers have completed since the timeout. A
++		value of 0 indicates that no timeout has occurred, or if one
++		has, more recent transfers have completed successfully.
+diff --git a/Documentation/ABI/testing/sysfs-driver-chromeos-acpi b/Documentation/ABI/testing/sysfs-driver-chromeos-acpi
+index c308926e1568a..7c8e129fc1005 100644
+--- a/Documentation/ABI/testing/sysfs-driver-chromeos-acpi
++++ b/Documentation/ABI/testing/sysfs-driver-chromeos-acpi
+@@ -134,4 +134,4 @@ KernelVersion:	5.19
+ Description:
+ 		Returns the verified boot data block shared between the
+ 		firmware verification step and the kernel verification step
+-		(binary).
++		(hex dump).
+diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
+index 8140fc98f5aee..ad3d76d37c8ba 100644
+--- a/Documentation/ABI/testing/sysfs-fs-f2fs
++++ b/Documentation/ABI/testing/sysfs-fs-f2fs
+@@ -54,9 +54,9 @@ Description:	Controls the in-place-update policy.
+ 		0x00  DISABLE         disable IPU(=default option in LFS mode)
+ 		0x01  FORCE           all the time
+ 		0x02  SSR             if SSR mode is activated
+-		0x04  UTIL            if FS utilization is over threashold
++		0x04  UTIL            if FS utilization is over threshold
+ 		0x08  SSR_UTIL        if SSR mode is activated and FS utilization is over
+-		                      threashold
++		                      threshold
+ 		0x10  FSYNC           activated in fsync path only for high performance
+ 		                      flash storages. IPU will be triggered only if the
+ 		                      # of dirty pages over min_fsync_blocks.
+@@ -117,7 +117,7 @@ Date:		December 2021
+ Contact:	"Konstantin Vyshetsky" <vkon@google.com>
+ Description:	Controls the number of discards a thread will issue at a time.
+ 		Higher number will allow the discard thread to finish its work
+-		faster, at the cost of higher latency for incomming I/O.
++		faster, at the cost of higher latency for incoming I/O.
+ 
+ What:		/sys/fs/f2fs/<disk>/min_discard_issue_time
+ Date:		December 2021
+@@ -334,7 +334,7 @@ Description:	This indicates how many GC can be failed for the pinned
+ 		state. 2048 trials is set by default.
+ 
+ What:		/sys/fs/f2fs/<disk>/extension_list
+-Date:		Feburary 2018
++Date:		February 2018
+ Contact:	"Chao Yu" <yuchao0@huawei.com>
+ Description:	Used to control configure extension list:
+ 		- Query: cat /sys/fs/f2fs/<disk>/extension_list
+diff --git a/Documentation/admin-guide/devices.txt b/Documentation/admin-guide/devices.txt
+index b1b57f638b94f..8390549235304 100644
+--- a/Documentation/admin-guide/devices.txt
++++ b/Documentation/admin-guide/devices.txt
+@@ -2691,18 +2691,9 @@
+ 		 45 = /dev/ttyMM1		Marvell MPSC - port 1 (obsolete unused)
+ 		 46 = /dev/ttyCPM0		PPC CPM (SCC or SMC) - port 0
+ 		    ...
+-		 49 = /dev/ttyCPM5		PPC CPM (SCC or SMC) - port 3
+-		 50 = /dev/ttyIOC0		Altix serial card
+-		    ...
+-		 81 = /dev/ttyIOC31		Altix serial card
++		 51 = /dev/ttyCPM5		PPC CPM (SCC or SMC) - port 5
+ 		 82 = /dev/ttyVR0		NEC VR4100 series SIU
+ 		 83 = /dev/ttyVR1		NEC VR4100 series DSIU
+-		 84 = /dev/ttyIOC84		Altix ioc4 serial card
+-		    ...
+-		 115 = /dev/ttyIOC115		Altix ioc4 serial card
+-		 116 = /dev/ttySIOC0		Altix ioc3 serial card
+-		    ...
+-		 147 = /dev/ttySIOC31		Altix ioc3 serial card
+ 		 148 = /dev/ttyPSC0		PPC PSC - port 0
+ 		    ...
+ 		 153 = /dev/ttyPSC5		PPC PSC - port 5
+@@ -2761,10 +2752,7 @@
+ 		 43 = /dev/ttycusmx2		Callout device for ttySMX2
+ 		 46 = /dev/cucpm0		Callout device for ttyCPM0
+ 		    ...
+-		 49 = /dev/cucpm5		Callout device for ttyCPM5
+-		 50 = /dev/cuioc40		Callout device for ttyIOC40
+-		    ...
+-		 81 = /dev/cuioc431		Callout device for ttyIOC431
++		 51 = /dev/cucpm5		Callout device for ttyCPM5
+ 		 82 = /dev/cuvr0		Callout device for ttyVR0
+ 		 83 = /dev/cuvr1		Callout device for ttyVR1
+ 
+diff --git a/Documentation/devicetree/bindings/extcon/maxim,max77843.yaml b/Documentation/devicetree/bindings/extcon/maxim,max77843.yaml
+index 1289605456408..55800fb0221d0 100644
+--- a/Documentation/devicetree/bindings/extcon/maxim,max77843.yaml
++++ b/Documentation/devicetree/bindings/extcon/maxim,max77843.yaml
+@@ -23,6 +23,7 @@ properties:
+ 
+   connector:
+     $ref: /schemas/connector/usb-connector.yaml#
++    unevaluatedProperties: false
+ 
+   ports:
+     $ref: /schemas/graph.yaml#/properties/ports
+diff --git a/Documentation/devicetree/bindings/pci/qcom,pcie-ep.yaml b/Documentation/devicetree/bindings/pci/qcom,pcie-ep.yaml
+index 811112255d7d2..c94b49498f695 100644
+--- a/Documentation/devicetree/bindings/pci/qcom,pcie-ep.yaml
++++ b/Documentation/devicetree/bindings/pci/qcom,pcie-ep.yaml
+@@ -11,10 +11,13 @@ maintainers:
+ 
+ properties:
+   compatible:
+-    enum:
+-      - qcom,sdx55-pcie-ep
+-      - qcom,sdx65-pcie-ep
+-      - qcom,sm8450-pcie-ep
++    oneOf:
++      - enum:
++          - qcom,sdx55-pcie-ep
++          - qcom,sm8450-pcie-ep
++      - items:
++          - const: qcom,sdx65-pcie-ep
++          - const: qcom,sdx55-pcie-ep
+ 
+   reg:
+     items:
+@@ -110,7 +113,6 @@ allOf:
+           contains:
+             enum:
+               - qcom,sdx55-pcie-ep
+-              - qcom,sdx65-pcie-ep
+     then:
+       properties:
+         clocks:
+diff --git a/Documentation/devicetree/bindings/power/qcom,kpss-acc-v2.yaml b/Documentation/devicetree/bindings/power/qcom,kpss-acc-v2.yaml
+index 202a5d51ee88c..facaafefb4414 100644
+--- a/Documentation/devicetree/bindings/power/qcom,kpss-acc-v2.yaml
++++ b/Documentation/devicetree/bindings/power/qcom,kpss-acc-v2.yaml
+@@ -21,6 +21,7 @@ properties:
+     const: qcom,kpss-acc-v2
+ 
+   reg:
++    minItems: 1
+     items:
+       - description: Base address and size of the register region
+       - description: Optional base address and size of the alias register region
+diff --git a/Documentation/devicetree/bindings/regulator/qcom,rpm-regulator.yaml b/Documentation/devicetree/bindings/regulator/qcom,rpm-regulator.yaml
+index 8a08698e34846..b4eb4001eb3d2 100644
+--- a/Documentation/devicetree/bindings/regulator/qcom,rpm-regulator.yaml
++++ b/Documentation/devicetree/bindings/regulator/qcom,rpm-regulator.yaml
+@@ -49,7 +49,7 @@ patternProperties:
+   ".*-supply$":
+     description: Input supply phandle(s) for this node
+ 
+-  "^((s|l|lvs)[0-9]*)|(s[1-2][a-b])|(ncp)|(mvs)|(usb-switch)|(hdmi-switch)$":
++  "^((s|l|lvs)[0-9]*|s[1-2][a-b]|ncp|mvs|usb-switch|hdmi-switch)$":
+     description: List of regulators and its properties
+     $ref: regulator.yaml#
+     unevaluatedProperties: false
+diff --git a/Documentation/devicetree/bindings/usb/samsung,exynos-dwc3.yaml b/Documentation/devicetree/bindings/usb/samsung,exynos-dwc3.yaml
+index 42ceaf13cd5da..deeed2bca2cdc 100644
+--- a/Documentation/devicetree/bindings/usb/samsung,exynos-dwc3.yaml
++++ b/Documentation/devicetree/bindings/usb/samsung,exynos-dwc3.yaml
+@@ -72,7 +72,7 @@ allOf:
+       properties:
+         compatible:
+           contains:
+-            const: samsung,exynos54333-dwusb3
++            const: samsung,exynos5433-dwusb3
+     then:
+       properties:
+         clocks:
+@@ -82,8 +82,8 @@ allOf:
+           items:
+             - const: aclk
+             - const: susp_clk
+-            - const: pipe_pclk
+             - const: phyclk
++            - const: pipe_pclk
+ 
+   - if:
+       properties:
+diff --git a/Documentation/scsi/scsi_mid_low_api.rst b/Documentation/scsi/scsi_mid_low_api.rst
+index 6fa3a62795016..022198c513506 100644
+--- a/Documentation/scsi/scsi_mid_low_api.rst
++++ b/Documentation/scsi/scsi_mid_low_api.rst
+@@ -1190,11 +1190,11 @@ Members of interest:
+ 		 - pointer to scsi_device object that this command is
+                    associated with.
+     resid
+-		 - an LLD should set this signed integer to the requested
++		 - an LLD should set this unsigned integer to the requested
+                    transfer length (i.e. 'request_bufflen') less the number
+                    of bytes that are actually transferred. 'resid' is
+                    preset to 0 so an LLD can ignore it if it cannot detect
+-                   underruns (overruns should be rare). If possible an LLD
++                   underruns (overruns should not be reported). An LLD
+                    should set 'resid' prior to invoking 'done'. The most
+                    interesting case is data transfers from a SCSI target
+                    device (e.g. READs) that underrun.
+diff --git a/Documentation/userspace-api/media/v4l/vidioc-subdev-g-routing.rst b/Documentation/userspace-api/media/v4l/vidioc-subdev-g-routing.rst
+index 2d6e3bbdd0404..72677a280cd64 100644
+--- a/Documentation/userspace-api/media/v4l/vidioc-subdev-g-routing.rst
++++ b/Documentation/userspace-api/media/v4l/vidioc-subdev-g-routing.rst
+@@ -58,6 +58,9 @@ the subdevice exposes, drivers return the ENOSPC error code and adjust the
+ value of the ``num_routes`` field. Application should then reserve enough memory
+ for all the route entries and call ``VIDIOC_SUBDEV_G_ROUTING`` again.
+ 
++On a successful ``VIDIOC_SUBDEV_G_ROUTING`` call the driver updates the
++``num_routes`` field to reflect the actual number of routes returned.
++
+ .. tabularcolumns:: |p{4.4cm}|p{4.4cm}|p{8.7cm}|
+ 
+ .. c:type:: v4l2_subdev_routing
+@@ -138,9 +141,7 @@ ENOSPC
+ 
+ EINVAL
+    The sink or source pad identifiers reference a non-existing pad, or reference
+-   pads of different types (ie. the sink_pad identifiers refers to a source pad)
+-   or the sink or source stream identifiers reference a non-existing stream on
+-   the sink or source pad.
++   pads of different types (ie. the sink_pad identifiers refers to a source pad).
+ 
+ E2BIG
+    The application provided ``num_routes`` for ``VIDIOC_SUBDEV_S_ROUTING`` is
+diff --git a/Makefile b/Makefile
+index c47558bc00aa8..901cdfa5e7d3b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 5
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+@@ -1289,7 +1289,7 @@ prepare0: archprepare
+ # All the preparing..
+ prepare: prepare0
+ ifdef CONFIG_RUST
+-	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh -v
++	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh
+ 	$(Q)$(MAKE) $(build)=rust
+ endif
+ 
+@@ -1825,7 +1825,7 @@ $(DOC_TARGETS):
+ # "Is Rust available?" target
+ PHONY += rustavailable
+ rustavailable:
+-	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh -v && echo "Rust is available!"
++	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh && echo "Rust is available!"
+ 
+ # Documentation target
+ #
+diff --git a/arch/arm/boot/dts/broadcom/bcm47189-luxul-xap-1440.dts b/arch/arm/boot/dts/broadcom/bcm47189-luxul-xap-1440.dts
+index 0734aa249b8e0..0f6d7fe30068f 100644
+--- a/arch/arm/boot/dts/broadcom/bcm47189-luxul-xap-1440.dts
++++ b/arch/arm/boot/dts/broadcom/bcm47189-luxul-xap-1440.dts
+@@ -26,7 +26,6 @@
+ 		led-wlan {
+ 			label = "bcm53xx:blue:wlan";
+ 			gpios = <&chipcommon 10 GPIO_ACTIVE_LOW>;
+-			linux,default-trigger = "default-off";
+ 		};
+ 
+ 		led-system {
+@@ -46,3 +45,16 @@
+ 		};
+ 	};
+ };
++
++&gmac0 {
++	phy-mode = "rgmii";
++	phy-handle = <&bcm54210e>;
++
++	mdio {
++		/delete-node/ switch@1e;
++
++		bcm54210e: ethernet-phy@0 {
++			reg = <0>;
++		};
++	};
++};
+diff --git a/arch/arm/boot/dts/broadcom/bcm47189-luxul-xap-810.dts b/arch/arm/boot/dts/broadcom/bcm47189-luxul-xap-810.dts
+index e6fb6cbe69633..4e0ef0af726f5 100644
+--- a/arch/arm/boot/dts/broadcom/bcm47189-luxul-xap-810.dts
++++ b/arch/arm/boot/dts/broadcom/bcm47189-luxul-xap-810.dts
+@@ -26,7 +26,6 @@
+ 		led-5ghz {
+ 			label = "bcm53xx:blue:5ghz";
+ 			gpios = <&chipcommon 11 GPIO_ACTIVE_HIGH>;
+-			linux,default-trigger = "default-off";
+ 		};
+ 
+ 		led-system {
+@@ -42,7 +41,6 @@
+ 		led-2ghz {
+ 			label = "bcm53xx:blue:2ghz";
+ 			gpios = <&pcie0_chipcommon 3 GPIO_ACTIVE_HIGH>;
+-			linux,default-trigger = "default-off";
+ 		};
+ 	};
+ 
+@@ -83,3 +81,16 @@
+ 		};
+ 	};
+ };
++
++&gmac0 {
++	phy-mode = "rgmii";
++	phy-handle = <&bcm54210e>;
++
++	mdio {
++		/delete-node/ switch@1e;
++
++		bcm54210e: ethernet-phy@0 {
++			reg = <0>;
++		};
++	};
++};
+diff --git a/arch/arm/boot/dts/broadcom/bcm47189-tenda-ac9.dts b/arch/arm/boot/dts/broadcom/bcm47189-tenda-ac9.dts
+index dab2e5f63a727..06b1a582809ca 100644
+--- a/arch/arm/boot/dts/broadcom/bcm47189-tenda-ac9.dts
++++ b/arch/arm/boot/dts/broadcom/bcm47189-tenda-ac9.dts
+@@ -135,8 +135,8 @@
+ 			label = "lan4";
+ 		};
+ 
+-		port@5 {
+-			reg = <5>;
++		port@8 {
++			reg = <8>;
+ 			label = "cpu";
+ 			ethernet = <&gmac0>;
+ 		};
+diff --git a/arch/arm/boot/dts/broadcom/bcm53573.dtsi b/arch/arm/boot/dts/broadcom/bcm53573.dtsi
+index 3f03a381db0f2..eed1a6147f0bf 100644
+--- a/arch/arm/boot/dts/broadcom/bcm53573.dtsi
++++ b/arch/arm/boot/dts/broadcom/bcm53573.dtsi
+@@ -127,6 +127,9 @@
+ 
+ 		pcie0: pcie@2000 {
+ 			reg = <0x00002000 0x1000>;
++
++			#address-cells = <3>;
++			#size-cells = <2>;
+ 		};
+ 
+ 		usb2: usb2@4000 {
+@@ -156,8 +159,6 @@
+ 			};
+ 
+ 			ohci: usb@d000 {
+-				#usb-cells = <0>;
+-
+ 				compatible = "generic-ohci";
+ 				reg = <0xd000 0x1000>;
+ 				interrupt-parent = <&gic>;
+diff --git a/arch/arm/boot/dts/broadcom/bcm947189acdbmr.dts b/arch/arm/boot/dts/broadcom/bcm947189acdbmr.dts
+index 3709baa2376f5..0b8727ae6f16d 100644
+--- a/arch/arm/boot/dts/broadcom/bcm947189acdbmr.dts
++++ b/arch/arm/boot/dts/broadcom/bcm947189acdbmr.dts
+@@ -60,9 +60,9 @@
+ 	spi {
+ 		compatible = "spi-gpio";
+ 		num-chipselects = <1>;
+-		gpio-sck = <&chipcommon 21 0>;
+-		gpio-miso = <&chipcommon 22 0>;
+-		gpio-mosi = <&chipcommon 23 0>;
++		sck-gpios = <&chipcommon 21 0>;
++		miso-gpios = <&chipcommon 22 0>;
++		mosi-gpios = <&chipcommon 23 0>;
+ 		cs-gpios = <&chipcommon 24 0>;
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+diff --git a/arch/arm/boot/dts/qcom/qcom-ipq4019.dtsi b/arch/arm/boot/dts/qcom/qcom-ipq4019.dtsi
+index f0ef86fadc9d9..e328216443135 100644
+--- a/arch/arm/boot/dts/qcom/qcom-ipq4019.dtsi
++++ b/arch/arm/boot/dts/qcom/qcom-ipq4019.dtsi
+@@ -230,9 +230,12 @@
+ 			interrupts = <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>, <GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>;
+ 			interrupt-names = "hc_irq", "pwr_irq";
+ 			bus-width = <8>;
+-			clocks = <&gcc GCC_SDCC1_AHB_CLK>, <&gcc GCC_SDCC1_APPS_CLK>,
+-				 <&gcc GCC_DCD_XO_CLK>;
+-			clock-names = "iface", "core", "xo";
++			clocks = <&gcc GCC_SDCC1_AHB_CLK>,
++				 <&gcc GCC_SDCC1_APPS_CLK>,
++				 <&xo>;
++			clock-names = "iface",
++				      "core",
++				      "xo";
+ 			status = "disabled";
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/qcom/qcom-sdx65-mtp.dts b/arch/arm/boot/dts/qcom/qcom-sdx65-mtp.dts
+index 02d8d6e241ae1..fcf1c51c5e7a7 100644
+--- a/arch/arm/boot/dts/qcom/qcom-sdx65-mtp.dts
++++ b/arch/arm/boot/dts/qcom/qcom-sdx65-mtp.dts
+@@ -7,7 +7,7 @@
+ #include "qcom-sdx65.dtsi"
+ #include <dt-bindings/regulator/qcom,rpmh-regulator.h>
+ #include <arm64/qcom/pmk8350.dtsi>
+-#include <arm64/qcom/pm8150b.dtsi>
++#include <arm64/qcom/pm7250b.dtsi>
+ #include "qcom-pmx65.dtsi"
+ 
+ / {
+diff --git a/arch/arm/boot/dts/samsung/s3c6410-mini6410.dts b/arch/arm/boot/dts/samsung/s3c6410-mini6410.dts
+index 17097da36f5ed..0b07b3c319604 100644
+--- a/arch/arm/boot/dts/samsung/s3c6410-mini6410.dts
++++ b/arch/arm/boot/dts/samsung/s3c6410-mini6410.dts
+@@ -51,7 +51,7 @@
+ 
+ 		ethernet@18000000 {
+ 			compatible = "davicom,dm9000";
+-			reg = <0x18000000 0x2 0x18000004 0x2>;
++			reg = <0x18000000 0x2>, <0x18000004 0x2>;
+ 			interrupt-parent = <&gpn>;
+ 			interrupts = <7 IRQ_TYPE_LEVEL_HIGH>;
+ 			davicom,no-eeprom;
+diff --git a/arch/arm/boot/dts/samsung/s5pv210-smdkv210.dts b/arch/arm/boot/dts/samsung/s5pv210-smdkv210.dts
+index 6e26c67e0a26e..901e7197b1368 100644
+--- a/arch/arm/boot/dts/samsung/s5pv210-smdkv210.dts
++++ b/arch/arm/boot/dts/samsung/s5pv210-smdkv210.dts
+@@ -41,7 +41,7 @@
+ 
+ 	ethernet@a8000000 {
+ 		compatible = "davicom,dm9000";
+-		reg = <0xA8000000 0x2 0xA8000002 0x2>;
++		reg = <0xa8000000 0x2>, <0xa8000002 0x2>;
+ 		interrupt-parent = <&gph1>;
+ 		interrupts = <1 IRQ_TYPE_LEVEL_HIGH>;
+ 		local-mac-address = [00 00 de ad be ef];
+diff --git a/arch/arm/boot/dts/st/stm32mp157c-emstamp-argon.dtsi b/arch/arm/boot/dts/st/stm32mp157c-emstamp-argon.dtsi
+index 94e38141af672..fd89542c69c93 100644
+--- a/arch/arm/boot/dts/st/stm32mp157c-emstamp-argon.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp157c-emstamp-argon.dtsi
+@@ -368,8 +368,8 @@
+ &m4_rproc {
+ 	memory-region = <&retram>, <&mcuram>, <&mcuram2>, <&vdev0vring0>,
+ 			<&vdev0vring1>, <&vdev0buffer>;
+-	mboxes = <&ipcc 0>, <&ipcc 1>, <&ipcc 2>;
+-	mbox-names = "vq0", "vq1", "shutdown";
++	mboxes = <&ipcc 0>, <&ipcc 1>, <&ipcc 2>, <&ipcc 3>;
++	mbox-names = "vq0", "vq1", "shutdown", "detach";
+ 	interrupt-parent = <&exti>;
+ 	interrupts = <68 1>;
+ 	interrupt-names = "wdg";
+diff --git a/arch/arm/boot/dts/st/stm32mp157c-odyssey-som.dtsi b/arch/arm/boot/dts/st/stm32mp157c-odyssey-som.dtsi
+index e22871dc580c8..cf74852514906 100644
+--- a/arch/arm/boot/dts/st/stm32mp157c-odyssey-som.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp157c-odyssey-som.dtsi
+@@ -230,8 +230,8 @@
+ &m4_rproc {
+ 	memory-region = <&retram>, <&mcuram>, <&mcuram2>, <&vdev0vring0>,
+ 			<&vdev0vring1>, <&vdev0buffer>;
+-	mboxes = <&ipcc 0>, <&ipcc 1>, <&ipcc 2>;
+-	mbox-names = "vq0", "vq1", "shutdown";
++	mboxes = <&ipcc 0>, <&ipcc 1>, <&ipcc 2>, <&ipcc 3>;
++	mbox-names = "vq0", "vq1", "shutdown", "detach";
+ 	interrupt-parent = <&exti>;
+ 	interrupts = <68 1>;
+ 	status = "okay";
+diff --git a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-som.dtsi b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-som.dtsi
+index e61df23d361a7..74a11ccc5333f 100644
+--- a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-som.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-som.dtsi
+@@ -416,8 +416,8 @@
+ &m4_rproc {
+ 	memory-region = <&retram>, <&mcuram>, <&mcuram2>, <&vdev0vring0>,
+ 			<&vdev0vring1>, <&vdev0buffer>;
+-	mboxes = <&ipcc 0>, <&ipcc 1>, <&ipcc 2>;
+-	mbox-names = "vq0", "vq1", "shutdown";
++	mboxes = <&ipcc 0>, <&ipcc 1>, <&ipcc 2>, <&ipcc 3>;
++	mbox-names = "vq0", "vq1", "shutdown", "detach";
+ 	interrupt-parent = <&exti>;
+ 	interrupts = <68 1>;
+ 	status = "okay";
+diff --git a/arch/arm/boot/dts/st/stm32mp15xx-dhcor-som.dtsi b/arch/arm/boot/dts/st/stm32mp15xx-dhcor-som.dtsi
+index bba19f21e5277..89881a26c6141 100644
+--- a/arch/arm/boot/dts/st/stm32mp15xx-dhcor-som.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp15xx-dhcor-som.dtsi
+@@ -227,8 +227,8 @@
+ &m4_rproc {
+ 	memory-region = <&retram>, <&mcuram>, <&mcuram2>, <&vdev0vring0>,
+ 			<&vdev0vring1>, <&vdev0buffer>;
+-	mboxes = <&ipcc 0>, <&ipcc 1>, <&ipcc 2>;
+-	mbox-names = "vq0", "vq1", "shutdown";
++	mboxes = <&ipcc 0>, <&ipcc 1>, <&ipcc 2>, <&ipcc 3>;
++	mbox-names = "vq0", "vq1", "shutdown", "detach";
+ 	interrupt-parent = <&exti>;
+ 	interrupts = <68 1>;
+ 	status = "okay";
+diff --git a/arch/arm/include/asm/irq.h b/arch/arm/include/asm/irq.h
+index 18605f1b35807..26c1d2ced4ce1 100644
+--- a/arch/arm/include/asm/irq.h
++++ b/arch/arm/include/asm/irq.h
+@@ -32,7 +32,7 @@ void handle_IRQ(unsigned int, struct pt_regs *);
+ #include <linux/cpumask.h>
+ 
+ extern void arch_trigger_cpumask_backtrace(const cpumask_t *mask,
+-					   bool exclude_self);
++					   int exclude_cpu);
+ #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
+ #endif
+ 
+diff --git a/arch/arm/include/asm/syscall.h b/arch/arm/include/asm/syscall.h
+index dfeed440254a8..fe4326d938c18 100644
+--- a/arch/arm/include/asm/syscall.h
++++ b/arch/arm/include/asm/syscall.h
+@@ -25,6 +25,9 @@ static inline int syscall_get_nr(struct task_struct *task,
+ 	if (IS_ENABLED(CONFIG_AEABI) && !IS_ENABLED(CONFIG_OABI_COMPAT))
+ 		return task_thread_info(task)->abi_syscall;
+ 
++	if (task_thread_info(task)->abi_syscall == -1)
++		return -1;
++
+ 	return task_thread_info(task)->abi_syscall & __NR_SYSCALL_MASK;
+ }
+ 
+diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
+index bcc4c9ec3aa4e..5c31e9de7a602 100644
+--- a/arch/arm/kernel/entry-common.S
++++ b/arch/arm/kernel/entry-common.S
+@@ -90,6 +90,7 @@ slow_work_pending:
+ 	cmp	r0, #0
+ 	beq	no_work_pending
+ 	movlt	scno, #(__NR_restart_syscall - __NR_SYSCALL_BASE)
++	str	scno, [tsk, #TI_ABI_SYSCALL]	@ make sure tracers see update
+ 	ldmia	sp, {r0 - r6}			@ have to reload r0 - r6
+ 	b	local_restart			@ ... and off we go
+ ENDPROC(ret_fast_syscall)
+diff --git a/arch/arm/kernel/ptrace.c b/arch/arm/kernel/ptrace.c
+index 2d8e2516906b6..fef32d73f9120 100644
+--- a/arch/arm/kernel/ptrace.c
++++ b/arch/arm/kernel/ptrace.c
+@@ -783,8 +783,9 @@ long arch_ptrace(struct task_struct *child, long request,
+ 			break;
+ 
+ 		case PTRACE_SET_SYSCALL:
+-			task_thread_info(child)->abi_syscall = data &
+-							__NR_SYSCALL_MASK;
++			if (data != -1)
++				data &= __NR_SYSCALL_MASK;
++			task_thread_info(child)->abi_syscall = data;
+ 			ret = 0;
+ 			break;
+ 
+diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
+index 6756203e45f3d..3431c0553f45c 100644
+--- a/arch/arm/kernel/smp.c
++++ b/arch/arm/kernel/smp.c
+@@ -846,7 +846,7 @@ static void raise_nmi(cpumask_t *mask)
+ 	__ipi_send_mask(ipi_desc[IPI_CPU_BACKTRACE], mask);
+ }
+ 
+-void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
++void arch_trigger_cpumask_backtrace(const cpumask_t *mask, int exclude_cpu)
+ {
+-	nmi_trigger_cpumask_backtrace(mask, exclude_self, raise_nmi);
++	nmi_trigger_cpumask_backtrace(mask, exclude_cpu, raise_nmi);
+ }
+diff --git a/arch/arm/mach-omap2/powerdomain.c b/arch/arm/mach-omap2/powerdomain.c
+index 777f9f8e7cd86..5e05dd1324e7b 100644
+--- a/arch/arm/mach-omap2/powerdomain.c
++++ b/arch/arm/mach-omap2/powerdomain.c
+@@ -174,7 +174,7 @@ static int _pwrdm_state_switch(struct powerdomain *pwrdm, int flag)
+ 		break;
+ 	case PWRDM_STATE_PREV:
+ 		prev = pwrdm_read_prev_pwrst(pwrdm);
+-		if (pwrdm->state != prev)
++		if (prev >= 0 && pwrdm->state != prev)
+ 			pwrdm->state_counter[prev]++;
+ 		if (prev == PWRDM_POWER_RET)
+ 			_update_logic_membank_counters(pwrdm);
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-debix-model-a.dts b/arch/arm64/boot/dts/freescale/imx8mp-debix-model-a.dts
+index b4409349eb3f6..1004ab0abb131 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-debix-model-a.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-debix-model-a.dts
+@@ -355,28 +355,6 @@
+ 		>;
+ 	};
+ 
+-	pinctrl_fec: fecgrp {
+-		fsl,pins = <
+-			MX8MP_IOMUXC_SAI1_RXD2__ENET1_MDC				0x3
+-			MX8MP_IOMUXC_SAI1_RXD3__ENET1_MDIO				0x3
+-			MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0				0x91
+-			MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1				0x91
+-			MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2				0x91
+-			MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3				0x91
+-			MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC				0x91
+-			MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL			0x91
+-			MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0				0x1f
+-			MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1				0x1f
+-			MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2				0x1f
+-			MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3				0x1f
+-			MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL			0x1f
+-			MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC				0x1f
+-			MX8MP_IOMUXC_SAI1_RXD1__ENET1_1588_EVENT1_OUT			0x1f
+-			MX8MP_IOMUXC_SAI1_RXD0__ENET1_1588_EVENT1_IN			0x1f
+-			MX8MP_IOMUXC_SAI1_TXD7__GPIO4_IO19				0x19
+-		>;
+-	};
+-
+ 	pinctrl_gpio_led: gpioledgrp {
+ 		fsl,pins = <
+ 			MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16				0x19
+diff --git a/arch/arm64/boot/dts/nvidia/tegra210-smaug.dts b/arch/arm64/boot/dts/nvidia/tegra210-smaug.dts
+index 5a1ce432c1fbb..15a71a59745c4 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra210-smaug.dts
++++ b/arch/arm64/boot/dts/nvidia/tegra210-smaug.dts
+@@ -1317,6 +1317,7 @@
+ 
+ 	uartd: serial@70006300 {
+ 		compatible = "nvidia,tegra30-hsuart";
++		reset-names = "serial";
+ 		status = "okay";
+ 
+ 		bluetooth {
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000+p3701-0000.dts b/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000+p3701-0000.dts
+index cd13cf2381dde..513cc2cd0b668 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000+p3701-0000.dts
++++ b/arch/arm64/boot/dts/nvidia/tegra234-p3737-0000+p3701-0000.dts
+@@ -2010,6 +2010,7 @@
+ 
+ 		serial@3100000 {
+ 			compatible = "nvidia,tegra194-hsuart";
++			reset-names = "serial";
+ 			status = "okay";
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002+p3701-0008.dts b/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002+p3701-0008.dts
+index 43d797e5544f5..b35044812ecfd 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002+p3701-0008.dts
++++ b/arch/arm64/boot/dts/nvidia/tegra234-p3740-0002+p3701-0008.dts
+@@ -12,6 +12,7 @@
+ 
+ 	aliases {
+ 		serial0 = &tcu;
++		serial1 = &uarta;
+ 	};
+ 
+ 	chosen {
+diff --git a/arch/arm64/boot/dts/qcom/apq8016-sbc.dts b/arch/arm64/boot/dts/qcom/apq8016-sbc.dts
+index f3d65a6061949..5ee098c12801c 100644
+--- a/arch/arm64/boot/dts/qcom/apq8016-sbc.dts
++++ b/arch/arm64/boot/dts/qcom/apq8016-sbc.dts
+@@ -278,7 +278,7 @@
+ 		compatible = "ovti,ov5640";
+ 		reg = <0x3b>;
+ 
+-		enable-gpios = <&tlmm 34 GPIO_ACTIVE_HIGH>;
++		powerdown-gpios = <&tlmm 34 GPIO_ACTIVE_HIGH>;
+ 		reset-gpios = <&tlmm 35 GPIO_ACTIVE_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&camera_rear_default>;
+@@ -287,9 +287,9 @@
+ 		clock-names = "xclk";
+ 		clock-frequency = <23880000>;
+ 
+-		vdddo-supply = <&camera_vdddo_1v8>;
+-		vdda-supply = <&camera_vdda_2v8>;
+-		vddd-supply = <&camera_vddd_1v5>;
++		DOVDD-supply = <&camera_vdddo_1v8>;
++		AVDD-supply = <&camera_vdda_2v8>;
++		DVDD-supply = <&camera_vddd_1v5>;
+ 
+ 		/* No camera mezzanine by default */
+ 		status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/apq8096-db820c.dts b/arch/arm64/boot/dts/qcom/apq8096-db820c.dts
+index 537547b97459b..b599909c44639 100644
+--- a/arch/arm64/boot/dts/qcom/apq8096-db820c.dts
++++ b/arch/arm64/boot/dts/qcom/apq8096-db820c.dts
+@@ -208,6 +208,25 @@
+ 	status = "okay";
+ };
+ 
++&hdmi {
++	status = "okay";
++
++	pinctrl-names = "default", "sleep";
++	pinctrl-0 = <&hdmi_hpd_active &hdmi_ddc_active>;
++	pinctrl-1 = <&hdmi_hpd_suspend &hdmi_ddc_suspend>;
++
++	core-vdda-supply = <&vreg_l12a_1p8>;
++	core-vcc-supply = <&vreg_s4a_1p8>;
++};
++
++&hdmi_phy {
++	status = "okay";
++
++	vddio-supply = <&vreg_l12a_1p8>;
++	vcca-supply = <&vreg_l28a_0p925>;
++	#phy-cells = <0>;
++};
++
+ &hsusb_phy1 {
+ 	status = "okay";
+ 
+@@ -232,25 +251,6 @@
+ 	status = "okay";
+ };
+ 
+-&mdss_hdmi {
+-	status = "okay";
+-
+-	pinctrl-names = "default", "sleep";
+-	pinctrl-0 = <&mdss_hdmi_hpd_active &mdss_hdmi_ddc_active>;
+-	pinctrl-1 = <&mdss_hdmi_hpd_suspend &mdss_hdmi_ddc_suspend>;
+-
+-	core-vdda-supply = <&vreg_l12a_1p8>;
+-	core-vcc-supply = <&vreg_s4a_1p8>;
+-};
+-
+-&mdss_hdmi_phy {
+-	status = "okay";
+-
+-	vddio-supply = <&vreg_l12a_1p8>;
+-	vcca-supply = <&vreg_l28a_0p925>;
+-	#phy-cells = <0>;
+-};
+-
+ &mmcc {
+ 	vdd-gfx-supply = <&vdd_gfx>;
+ };
+@@ -433,28 +433,28 @@
+ 		drive-strength = <2>;
+ 	};
+ 
+-	mdss_hdmi_hpd_active: mdss_hdmi-hpd-active-state {
++	hdmi_hpd_active: hdmi-hpd-active-state {
+ 		pins = "gpio34";
+ 		function = "hdmi_hot";
+ 		bias-pull-down;
+ 		drive-strength = <16>;
+ 	};
+ 
+-	mdss_hdmi_hpd_suspend: mdss_hdmi-hpd-suspend-state {
++	hdmi_hpd_suspend: hdmi-hpd-suspend-state {
+ 		pins = "gpio34";
+ 		function = "hdmi_hot";
+ 		bias-pull-down;
+ 		drive-strength = <2>;
+ 	};
+ 
+-	mdss_hdmi_ddc_active: mdss_hdmi-ddc-active-state {
++	hdmi_ddc_active: hdmi-ddc-active-state {
+ 		pins = "gpio32", "gpio33";
+ 		function = "hdmi_ddc";
+ 		drive-strength = <2>;
+ 		bias-pull-up;
+ 	};
+ 
+-	mdss_hdmi_ddc_suspend: mdss_hdmi-ddc-suspend-state {
++	hdmi_ddc_suspend: hdmi-ddc-suspend-state {
+ 		pins = "gpio32", "gpio33";
+ 		function = "hdmi_ddc";
+ 		drive-strength = <2>;
+@@ -1043,7 +1043,7 @@
+ 		};
+ 	};
+ 
+-	mdss_hdmi-dai-link {
++	hdmi-dai-link {
+ 		link-name = "HDMI";
+ 		cpu {
+ 			sound-dai = <&q6afedai HDMI_RX>;
+@@ -1054,7 +1054,7 @@
+ 		};
+ 
+ 		codec {
+-			sound-dai = <&mdss_hdmi 0>;
++			sound-dai = <&hdmi 0>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts b/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts
+index ac6471d1db1f7..ed2e2f6c6775a 100644
+--- a/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts
++++ b/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts
+@@ -92,15 +92,15 @@
+ 	status = "okay";
+ };
+ 
+-&mdss {
++&hdmi {
+ 	status = "okay";
+ };
+ 
+-&mdss_hdmi {
++&hdmi_phy {
+ 	status = "okay";
+ };
+ 
+-&mdss_hdmi_phy {
++&mdss {
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+index 68839acbd613f..00ed71936b472 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+@@ -794,10 +794,10 @@
+ 
+ 		pcie1: pci@10000000 {
+ 			compatible = "qcom,pcie-ipq8074";
+-			reg =  <0x10000000 0xf1d>,
+-			       <0x10000f20 0xa8>,
+-			       <0x00088000 0x2000>,
+-			       <0x10100000 0x1000>;
++			reg = <0x10000000 0xf1d>,
++			      <0x10000f20 0xa8>,
++			      <0x00088000 0x2000>,
++			      <0x10100000 0x1000>;
+ 			reg-names = "dbi", "elbi", "parf", "config";
+ 			device_type = "pci";
+ 			linux,pci-domain = <1>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts b/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts
+index 97262b8519b36..3892ad4f639a8 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-longcheer-l8150.dts
+@@ -165,7 +165,7 @@
+ 		pinctrl-0 = <&light_int_default>;
+ 
+ 		vdd-supply = <&pm8916_l17>;
+-		vio-supply = <&pm8916_l6>;
++		vddio-supply = <&pm8916_l6>;
+ 	};
+ 
+ 	gyroscope@68 {
+diff --git a/arch/arm64/boot/dts/qcom/msm8916-samsung-serranove.dts b/arch/arm64/boot/dts/qcom/msm8916-samsung-serranove.dts
+index 15dc246e84e2b..126e8b5cf49fd 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916-samsung-serranove.dts
++++ b/arch/arm64/boot/dts/qcom/msm8916-samsung-serranove.dts
+@@ -219,9 +219,9 @@
+ 		compatible = "yamaha,yas537";
+ 		reg = <0x2e>;
+ 
+-		mount-matrix =  "0",  "1",  "0",
+-				"1",  "0",  "0",
+-				"0",  "0", "-1";
++		mount-matrix = "0",  "1",  "0",
++			       "1",  "0",  "0",
++			       "0",  "0", "-1";
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8939.dtsi b/arch/arm64/boot/dts/qcom/msm8939.dtsi
+index 895cafc11480b..559a5d1ba615b 100644
+--- a/arch/arm64/boot/dts/qcom/msm8939.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8939.dtsi
+@@ -55,6 +55,7 @@
+ 			L2_1: l2-cache {
+ 				compatible = "cache";
+ 				cache-level = <2>;
++				cache-unified;
+ 			};
+ 		};
+ 
+@@ -111,6 +112,7 @@
+ 			L2_0: l2-cache {
+ 				compatible = "cache";
+ 				cache-level = <2>;
++				cache-unified;
+ 			};
+ 		};
+ 
+@@ -155,7 +157,7 @@
+ 
+ 		idle-states {
+ 			CPU_SLEEP_0: cpu-sleep-0 {
+-				compatible ="qcom,idle-state-spc", "arm,idle-state";
++				compatible = "arm,idle-state";
+ 				entry-latency-us = <130>;
+ 				exit-latency-us = <150>;
+ 				min-residency-us = <2000>;
+@@ -1644,7 +1646,7 @@
+ 			clocks = <&gcc GCC_SDCC2_AHB_CLK>,
+ 				 <&gcc GCC_SDCC2_APPS_CLK>,
+ 				 <&rpmcc RPM_SMD_XO_CLK_SRC>;
+-			clock-names =  "iface", "core", "xo";
++			clock-names = "iface", "core", "xo";
+ 			resets = <&gcc GCC_SDCC2_BCR>;
+ 			pinctrl-0 = <&sdc2_default>;
+ 			pinctrl-1 = <&sdc2_sleep>;
+@@ -1731,7 +1733,7 @@
+ 			interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&gcc GCC_BLSP1_QUP2_I2C_APPS_CLK>,
+ 				 <&gcc GCC_BLSP1_AHB_CLK>;
+-			clock-names =  "core", "iface";
++			clock-names = "core", "iface";
+ 			dmas = <&blsp_dma 6>, <&blsp_dma 7>;
+ 			dma-names = "tx", "rx";
+ 			pinctrl-0 = <&blsp_i2c2_default>;
+@@ -1765,7 +1767,7 @@
+ 			interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&gcc GCC_BLSP1_QUP3_I2C_APPS_CLK>,
+ 				 <&gcc GCC_BLSP1_AHB_CLK>;
+-			clock-names =  "core", "iface";
++			clock-names = "core", "iface";
+ 			dmas = <&blsp_dma 8>, <&blsp_dma 9>;
+ 			dma-names = "tx", "rx";
+ 			pinctrl-0 = <&blsp_i2c3_default>;
+@@ -1799,7 +1801,7 @@
+ 			interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&gcc GCC_BLSP1_QUP4_I2C_APPS_CLK>,
+ 				 <&gcc GCC_BLSP1_AHB_CLK>;
+-			clock-names =  "core", "iface";
++			clock-names = "core", "iface";
+ 			dmas = <&blsp_dma 10>, <&blsp_dma 11>;
+ 			dma-names = "tx", "rx";
+ 			pinctrl-0 = <&blsp_i2c4_default>;
+@@ -1833,7 +1835,7 @@
+ 			interrupts = <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&gcc GCC_BLSP1_QUP5_I2C_APPS_CLK>,
+ 				 <&gcc GCC_BLSP1_AHB_CLK>;
+-			clock-names =  "core", "iface";
++			clock-names = "core", "iface";
+ 			dmas = <&blsp_dma 12>, <&blsp_dma 13>;
+ 			dma-names = "tx", "rx";
+ 			pinctrl-0 = <&blsp_i2c5_default>;
+@@ -1867,7 +1869,7 @@
+ 			interrupts = <GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>;
+ 			clocks = <&gcc GCC_BLSP1_QUP6_I2C_APPS_CLK>,
+ 				 <&gcc GCC_BLSP1_AHB_CLK>;
+-			clock-names =  "core", "iface";
++			clock-names = "core", "iface";
+ 			dmas = <&blsp_dma 14>, <&blsp_dma 15>;
+ 			dma-names = "tx", "rx";
+ 			pinctrl-0 = <&blsp_i2c6_default>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8953-xiaomi-daisy.dts b/arch/arm64/boot/dts/qcom/msm8953-xiaomi-daisy.dts
+index 1d672e6086532..790d19c99af14 100644
+--- a/arch/arm64/boot/dts/qcom/msm8953-xiaomi-daisy.dts
++++ b/arch/arm64/boot/dts/qcom/msm8953-xiaomi-daisy.dts
+@@ -17,7 +17,7 @@
+ 	compatible = "xiaomi,daisy", "qcom,msm8953";
+ 	chassis-type = "handset";
+ 	qcom,msm-id = <293 0>;
+-	qcom,board-id= <0x1000b 0x9>;
++	qcom,board-id = <0x1000b 0x9>;
+ 
+ 	chosen {
+ 		#address-cells = <2>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8953-xiaomi-vince.dts b/arch/arm64/boot/dts/qcom/msm8953-xiaomi-vince.dts
+index b5be55034fd36..0956c866d6cb1 100644
+--- a/arch/arm64/boot/dts/qcom/msm8953-xiaomi-vince.dts
++++ b/arch/arm64/boot/dts/qcom/msm8953-xiaomi-vince.dts
+@@ -20,7 +20,7 @@
+ 	compatible = "xiaomi,vince", "qcom,msm8953";
+ 	chassis-type = "handset";
+ 	qcom,msm-id = <293 0>;
+-	qcom,board-id= <0x1000b 0x08>;
++	qcom,board-id = <0x1000b 0x08>;
+ 
+ 	gpio-keys {
+ 		compatible = "gpio-keys";
+diff --git a/arch/arm64/boot/dts/qcom/msm8996-mtp.dts b/arch/arm64/boot/dts/qcom/msm8996-mtp.dts
+index 495d45a16e63a..596ad4c896f55 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996-mtp.dts
++++ b/arch/arm64/boot/dts/qcom/msm8996-mtp.dts
+@@ -24,10 +24,10 @@
+ 	status = "okay";
+ };
+ 
+-&mdss_hdmi {
++&hdmi {
+ 	status = "okay";
+ };
+ 
+-&mdss_hdmi_phy {
++&hdmi_phy {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts b/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts
+index bdedcf9dff032..d1066edaea471 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts
++++ b/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts
+@@ -82,7 +82,7 @@
+ 		#size-cells = <0>;
+ 		interrupt-parent = <&tlmm>;
+ 		interrupts = <125 IRQ_TYPE_LEVEL_LOW>;
+-		vdda-supply = <&vreg_l6a_1p8>;
++		vio-supply = <&vreg_l6a_1p8>;
+ 		vdd-supply = <&vdd_3v2_tp>;
+ 		reset-gpios = <&tlmm 89 GPIO_ACTIVE_LOW>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 0cb2d4f08c3a1..2ea3117438c3a 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -895,7 +895,7 @@
+ 				 <&mdss_dsi0_phy 0>,
+ 				 <&mdss_dsi1_phy 1>,
+ 				 <&mdss_dsi1_phy 0>,
+-				 <&mdss_hdmi_phy>;
++				 <&hdmi_phy>;
+ 			clock-names = "xo",
+ 				      "gpll0",
+ 				      "gcc_mmss_noc_cfg_ahb_clk",
+@@ -980,7 +980,7 @@
+ 					port@0 {
+ 						reg = <0>;
+ 						mdp5_intf3_out: endpoint {
+-							remote-endpoint = <&mdss_hdmi_in>;
++							remote-endpoint = <&hdmi_in>;
+ 						};
+ 					};
+ 
+@@ -1075,7 +1075,7 @@
+ 				reg-names = "dsi_ctrl";
+ 
+ 				interrupt-parent = <&mdss>;
+-				interrupts = <4>;
++				interrupts = <5>;
+ 
+ 				clocks = <&mmcc MDSS_MDP_CLK>,
+ 					 <&mmcc MDSS_BYTE1_CLK>,
+@@ -1136,11 +1136,11 @@
+ 				status = "disabled";
+ 			};
+ 
+-			mdss_hdmi: mdss_hdmi-tx@9a0000 {
+-				compatible = "qcom,mdss_hdmi-tx-8996";
+-				reg =	<0x009a0000 0x50c>,
+-					<0x00070000 0x6158>,
+-					<0x009e0000 0xfff>;
++			hdmi: hdmi-tx@9a0000 {
++				compatible = "qcom,hdmi-tx-8996";
++				reg = <0x009a0000 0x50c>,
++				      <0x00070000 0x6158>,
++				      <0x009e0000 0xfff>;
+ 				reg-names = "core_physical",
+ 					    "qfprom_physical",
+ 					    "hdcp_physical";
+@@ -1160,7 +1160,7 @@
+ 					"alt_iface",
+ 					"extp";
+ 
+-				phys = <&mdss_hdmi_phy>;
++				phys = <&hdmi_phy>;
+ 				#sound-dai-cells = <1>;
+ 
+ 				status = "disabled";
+@@ -1171,16 +1171,16 @@
+ 
+ 					port@0 {
+ 						reg = <0>;
+-						mdss_hdmi_in: endpoint {
++						hdmi_in: endpoint {
+ 							remote-endpoint = <&mdp5_intf3_out>;
+ 						};
+ 					};
+ 				};
+ 			};
+ 
+-			mdss_hdmi_phy: phy@9a0600 {
++			hdmi_phy: phy@9a0600 {
+ 				#phy-cells = <0>;
+-				compatible = "qcom,mdss_hdmi-phy-8996";
++				compatible = "qcom,hdmi-phy-8996";
+ 				reg = <0x009a0600 0x1c4>,
+ 				      <0x009a0a00 0x124>,
+ 				      <0x009a0c00 0x124>,
+@@ -3336,6 +3336,9 @@
+ 			#size-cells = <1>;
+ 			ranges;
+ 
++			interrupts = <GIC_SPI 352 IRQ_TYPE_LEVEL_HIGH>;
++			interrupt-names = "hs_phy_irq";
++
+ 			clocks = <&gcc GCC_PERIPH_NOC_USB20_AHB_CLK>,
+ 				<&gcc GCC_USB20_MASTER_CLK>,
+ 				<&gcc GCC_USB20_MOCK_UTMI_CLK>,
+diff --git a/arch/arm64/boot/dts/qcom/msm8996pro-xiaomi-natrium.dts b/arch/arm64/boot/dts/qcom/msm8996pro-xiaomi-natrium.dts
+index 7957c8823f0d5..5e3fd1637f449 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996pro-xiaomi-natrium.dts
++++ b/arch/arm64/boot/dts/qcom/msm8996pro-xiaomi-natrium.dts
+@@ -106,7 +106,7 @@
+ &sound {
+ 	compatible = "qcom,apq8096-sndcard";
+ 	model = "natrium";
+-	audio-routing =	"RX_BIAS", "MCLK";
++	audio-routing = "RX_BIAS", "MCLK";
+ 
+ 	mm1-dai-link {
+ 		link-name = "MultiMedia1";
+diff --git a/arch/arm64/boot/dts/qcom/msm8998.dtsi b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+index f0e943ff00464..ed764d02819f7 100644
+--- a/arch/arm64/boot/dts/qcom/msm8998.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8998.dtsi
+@@ -909,10 +909,10 @@
+ 
+ 		pcie0: pci@1c00000 {
+ 			compatible = "qcom,pcie-msm8998", "qcom,pcie-msm8996";
+-			reg =	<0x01c00000 0x2000>,
+-				<0x1b000000 0xf1d>,
+-				<0x1b000f20 0xa8>,
+-				<0x1b100000 0x100000>;
++			reg = <0x01c00000 0x2000>,
++			      <0x1b000000 0xf1d>,
++			      <0x1b000f20 0xa8>,
++			      <0x1b100000 0x100000>;
+ 			reg-names = "parf", "dbi", "elbi", "config";
+ 			device_type = "pci";
+ 			linux,pci-domain = <0>;
+@@ -2074,11 +2074,11 @@
+ 
+ 		spmi_bus: spmi@800f000 {
+ 			compatible = "qcom,spmi-pmic-arb";
+-			reg =	<0x0800f000 0x1000>,
+-				<0x08400000 0x1000000>,
+-				<0x09400000 0x1000000>,
+-				<0x0a400000 0x220000>,
+-				<0x0800a000 0x3000>;
++			reg = <0x0800f000 0x1000>,
++			      <0x08400000 0x1000000>,
++			      <0x09400000 0x1000000>,
++			      <0x0a400000 0x220000>,
++			      <0x0800a000 0x3000>;
+ 			reg-names = "core", "chnls", "obsrvr", "intr", "cnfg";
+ 			interrupt-names = "periph_irq";
+ 			interrupts = <GIC_SPI 326 IRQ_TYPE_LEVEL_HIGH>;
+@@ -2737,10 +2737,10 @@
+ 
+ 			clocks = <&mmcc MNOC_AHB_CLK>,
+ 				 <&mmcc BIMC_SMMU_AHB_CLK>,
+-				 <&rpmcc RPM_SMD_MMAXI_CLK>,
+ 				 <&mmcc BIMC_SMMU_AXI_CLK>;
+-			clock-names = "iface-mm", "iface-smmu",
+-				      "bus-mm", "bus-smmu";
++			clock-names = "iface-mm",
++				      "iface-smmu",
++				      "bus-smmu";
+ 
+ 			#global-interrupts = <0>;
+ 			interrupts =
+@@ -2764,6 +2764,8 @@
+ 				<GIC_SPI 261 IRQ_TYPE_LEVEL_HIGH>,
+ 				<GIC_SPI 262 IRQ_TYPE_LEVEL_HIGH>,
+ 				<GIC_SPI 272 IRQ_TYPE_LEVEL_HIGH>;
++
++			power-domains = <&mmcc BIMC_SMMU_GDSC>;
+ 		};
+ 
+ 		remoteproc_adsp: remoteproc@17300000 {
+diff --git a/arch/arm64/boot/dts/qcom/pm6150l.dtsi b/arch/arm64/boot/dts/qcom/pm6150l.dtsi
+index 6f7aa67501e27..0fdf440596c01 100644
+--- a/arch/arm64/boot/dts/qcom/pm6150l.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm6150l.dtsi
+@@ -121,8 +121,9 @@
+ 		pm6150l_wled: leds@d800 {
+ 			compatible = "qcom,pm6150l-wled";
+ 			reg = <0xd800>, <0xd900>;
+-			interrupts = <0x5 0xd8 0x1 IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "ovp";
++			interrupts = <0x5 0xd8 0x1 IRQ_TYPE_EDGE_RISING>,
++				     <0x5 0xd8 0x2 IRQ_TYPE_EDGE_RISING>;
++			interrupt-names = "ovp", "short";
+ 			label = "backlight";
+ 
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/pm660l.dtsi b/arch/arm64/boot/dts/qcom/pm660l.dtsi
+index 87b71b7205b85..6fdbf507c262a 100644
+--- a/arch/arm64/boot/dts/qcom/pm660l.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm660l.dtsi
+@@ -74,8 +74,9 @@
+ 		pm660l_wled: leds@d800 {
+ 			compatible = "qcom,pm660l-wled";
+ 			reg = <0xd800>, <0xd900>;
+-			interrupts = <0x3 0xd8 0x1 IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "ovp";
++			interrupts = <0x3 0xd8 0x1 IRQ_TYPE_EDGE_RISING>,
++				     <0x3 0xd8 0x2 IRQ_TYPE_EDGE_RISING>;
++			interrupt-names = "ovp", "short";
+ 			label = "backlight";
+ 
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/pm8350.dtsi b/arch/arm64/boot/dts/qcom/pm8350.dtsi
+index 2dfeb99300d74..9ed9ba23e81e4 100644
+--- a/arch/arm64/boot/dts/qcom/pm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm8350.dtsi
+@@ -8,7 +8,7 @@
+ 
+ / {
+ 	thermal-zones {
+-		pm8350_thermal: pm8350c-thermal {
++		pm8350_thermal: pm8350-thermal {
+ 			polling-delay-passive = <100>;
+ 			polling-delay = <0>;
+ 			thermal-sensors = <&pm8350_temp_alarm>;
+diff --git a/arch/arm64/boot/dts/qcom/pm8350b.dtsi b/arch/arm64/boot/dts/qcom/pm8350b.dtsi
+index f1c7bd9d079c2..05c1058988927 100644
+--- a/arch/arm64/boot/dts/qcom/pm8350b.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm8350b.dtsi
+@@ -8,7 +8,7 @@
+ 
+ / {
+ 	thermal-zones {
+-		pm8350b_thermal: pm8350c-thermal {
++		pm8350b_thermal: pm8350b-thermal {
+ 			polling-delay-passive = <100>;
+ 			polling-delay = <0>;
+ 			thermal-sensors = <&pm8350b_temp_alarm>;
+diff --git a/arch/arm64/boot/dts/qcom/pmi8950.dtsi b/arch/arm64/boot/dts/qcom/pmi8950.dtsi
+index 4891be3cd68a3..c16adca4e93a9 100644
+--- a/arch/arm64/boot/dts/qcom/pmi8950.dtsi
++++ b/arch/arm64/boot/dts/qcom/pmi8950.dtsi
+@@ -87,8 +87,9 @@
+ 		pmi8950_wled: leds@d800 {
+ 			compatible = "qcom,pmi8950-wled";
+ 			reg = <0xd800>, <0xd900>;
+-			interrupts = <0x3 0xd8 0x02 IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "short";
++			interrupts = <0x3 0xd8 0x1 IRQ_TYPE_EDGE_RISING>,
++				     <0x3 0xd8 0x2 IRQ_TYPE_EDGE_RISING>;
++			interrupt-names = "ovp", "short";
+ 			label = "backlight";
+ 
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/pmi8994.dtsi b/arch/arm64/boot/dts/qcom/pmi8994.dtsi
+index 0192968f4d9b3..36d6a1fb553ac 100644
+--- a/arch/arm64/boot/dts/qcom/pmi8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/pmi8994.dtsi
+@@ -54,8 +54,9 @@
+ 		pmi8994_wled: wled@d800 {
+ 			compatible = "qcom,pmi8994-wled";
+ 			reg = <0xd800>, <0xd900>;
+-			interrupts = <3 0xd8 0x02 IRQ_TYPE_EDGE_RISING>;
+-			interrupt-names = "short";
++			interrupts = <0x3 0xd8 0x1 IRQ_TYPE_EDGE_RISING>,
++				     <0x3 0xd8 0x2 IRQ_TYPE_EDGE_RISING>;
++			interrupt-names = "ovp", "short";
+ 			qcom,cabc;
+ 			qcom,external-pfet;
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/pmk8350.dtsi b/arch/arm64/boot/dts/qcom/pmk8350.dtsi
+index bc6297e7253e2..1eb74017062d6 100644
+--- a/arch/arm64/boot/dts/qcom/pmk8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/pmk8350.dtsi
+@@ -59,7 +59,7 @@
+ 		};
+ 
+ 		pmk8350_adc_tm: adc-tm@3400 {
+-			compatible = "qcom,adc-tm7";
++			compatible = "qcom,spmi-adc-tm5-gen2";
+ 			reg = <0x3400>;
+ 			interrupts = <PMK8350_SID 0x34 0x0 IRQ_TYPE_EDGE_RISING>;
+ 			#address-cells = <1>;
+diff --git a/arch/arm64/boot/dts/qcom/pmr735b.dtsi b/arch/arm64/boot/dts/qcom/pmr735b.dtsi
+index ec24c4478005a..f7473e2473224 100644
+--- a/arch/arm64/boot/dts/qcom/pmr735b.dtsi
++++ b/arch/arm64/boot/dts/qcom/pmr735b.dtsi
+@@ -8,7 +8,7 @@
+ 
+ / {
+ 	thermal-zones {
+-		pmr735a_thermal: pmr735a-thermal {
++		pmr735b_thermal: pmr735b-thermal {
+ 			polling-delay-passive = <100>;
+ 			polling-delay = <0>;
+ 			thermal-sensors = <&pmr735b_temp_alarm>;
+diff --git a/arch/arm64/boot/dts/qcom/qcm2290.dtsi b/arch/arm64/boot/dts/qcom/qcm2290.dtsi
+index 0ed11e80e5e29..1d1de156f8f04 100644
+--- a/arch/arm64/boot/dts/qcom/qcm2290.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcm2290.dtsi
+@@ -790,7 +790,7 @@
+ 				     <GIC_SPI 342 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 343 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 344 IRQ_TYPE_LEVEL_HIGH>;
+-			dma-channels =  <10>;
++			dma-channels = <10>;
+ 			dma-channel-mask = <0x1f>;
+ 			iommus = <&apps_smmu 0xf6 0x0>;
+ 			#dma-cells = <3>;
+diff --git a/arch/arm64/boot/dts/qcom/qcs404.dtsi b/arch/arm64/boot/dts/qcom/qcs404.dtsi
+index 972f753847e13..f2568aff14c84 100644
+--- a/arch/arm64/boot/dts/qcom/qcs404.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs404.dtsi
+@@ -1459,10 +1459,10 @@
+ 
+ 		pcie: pci@10000000 {
+ 			compatible = "qcom,pcie-qcs404";
+-			reg =  <0x10000000 0xf1d>,
+-			       <0x10000f20 0xa8>,
+-			       <0x07780000 0x2000>,
+-			       <0x10001000 0x2000>;
++			reg = <0x10000000 0xf1d>,
++			      <0x10000f20 0xa8>,
++			      <0x07780000 0x2000>,
++			      <0x10001000 0x2000>;
+ 			reg-names = "dbi", "elbi", "parf", "config";
+ 			device_type = "pci";
+ 			linux,pci-domain = <0>;
+diff --git a/arch/arm64/boot/dts/qcom/sa8540p.dtsi b/arch/arm64/boot/dts/qcom/sa8540p.dtsi
+index bacbdec562814..96b2c59ad02b4 100644
+--- a/arch/arm64/boot/dts/qcom/sa8540p.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8540p.dtsi
+@@ -207,7 +207,7 @@
+ 
+ 	linux,pci-domain = <2>;
+ 
+-	interrupts =  <GIC_SPI 567 IRQ_TYPE_LEVEL_HIGH>;
++	interrupts = <GIC_SPI 567 IRQ_TYPE_LEVEL_HIGH>;
+ 	interrupt-names = "msi";
+ 
+ 	interrupt-map = <0 0 0 1 &intc 0 0 GIC_SPI 541 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/qcom/sc7280-herobrine-audio-rt5682-3mic.dtsi b/arch/arm64/boot/dts/qcom/sc7280-herobrine-audio-rt5682-3mic.dtsi
+index 485f9942e1285..a90c70b1b73ea 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280-herobrine-audio-rt5682-3mic.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280-herobrine-audio-rt5682-3mic.dtsi
+@@ -13,7 +13,7 @@
+ 		compatible = "google,sc7280-herobrine";
+ 		model = "sc7280-rt5682-max98360a-3mic";
+ 
+-		audio-routing =	"VA DMIC0", "vdd-micb",
++		audio-routing = "VA DMIC0", "vdd-micb",
+ 				"VA DMIC1", "vdd-micb",
+ 				"VA DMIC2", "vdd-micb",
+ 				"VA DMIC3", "vdd-micb",
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index a0e8db8270e7a..925428a5f6aea 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -2449,7 +2449,7 @@
+ 				 <&apps_smmu 0x1821 0>,
+ 				 <&apps_smmu 0x1832 0>;
+ 
+-			power-domains =	<&rpmhpd SC7280_LCX>;
++			power-domains = <&rpmhpd SC7280_LCX>;
+ 			power-domain-names = "lcx";
+ 			required-opps = <&rpmhpd_opp_nom>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sc8180x-pmics.dtsi b/arch/arm64/boot/dts/qcom/sc8180x-pmics.dtsi
+index 8247af01c84a5..925047af734fc 100644
+--- a/arch/arm64/boot/dts/qcom/sc8180x-pmics.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8180x-pmics.dtsi
+@@ -74,7 +74,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		pon: power-on@800 {
++		pon: pon@800 {
+ 			compatible = "qcom,pm8916-pon";
+ 			reg = <0x0800>;
+ 			pwrkey {
+@@ -142,9 +142,10 @@
+ 		};
+ 
+ 		pmc8180_gpios: gpio@c000 {
+-			compatible = "qcom,pmc8180-gpio";
++			compatible = "qcom,pmc8180-gpio", "qcom,spmi-gpio";
+ 			reg = <0xc000>;
+ 			gpio-controller;
++			gpio-ranges = <&pmc8180_gpios 0 0 10>;
+ 			#gpio-cells = <2>;
+ 			interrupt-controller;
+ 			#interrupt-cells = <2>;
+@@ -246,7 +247,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		power-on@800 {
++		pon@800 {
+ 			compatible = "qcom,pm8916-pon";
+ 			reg = <0x0800>;
+ 
+@@ -300,9 +301,10 @@
+ 		};
+ 
+ 		pmc8180c_gpios: gpio@c000 {
+-			compatible = "qcom,pmc8180c-gpio";
++			compatible = "qcom,pmc8180c-gpio", "qcom,spmi-gpio";
+ 			reg = <0xc000>;
+ 			gpio-controller;
++			gpio-ranges = <&pmc8180c_gpios 0 0 12>;
+ 			#gpio-cells = <2>;
+ 			interrupt-controller;
+ 			#interrupt-cells = <2>;
+@@ -313,7 +315,7 @@
+ 		compatible = "qcom,pmc8180c", "qcom,spmi-pmic";
+ 		reg = <0x5 SPMI_USID>;
+ 
+-		pmc8180c_lpg: lpg {
++		pmc8180c_lpg: pwm {
+ 			compatible = "qcom,pmc8180c-lpg";
+ 
+ 			#address-cells = <1>;
+diff --git a/arch/arm64/boot/dts/qcom/sc8180x.dtsi b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+index be78a933d8eb2..9aac5861a9132 100644
+--- a/arch/arm64/boot/dts/qcom/sc8180x.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+@@ -64,6 +64,7 @@
+ 				L3_0: l3-cache {
+ 					compatible = "cache";
+ 					cache-level = <3>;
++					cache-unified;
+ 				};
+ 			};
+ 		};
+@@ -298,7 +299,7 @@
+ 		domain-idle-states {
+ 			CLUSTER_SLEEP_0: cluster-sleep-0 {
+ 				compatible = "domain-idle-state";
+-				arm,psci-suspend-param = <0x4100c244>;
++				arm,psci-suspend-param = <0x4100a344>;
+ 				entry-latency-us = <3263>;
+ 				exit-latency-us = <6562>;
+ 				min-residency-us = <9987>;
+@@ -2252,7 +2253,7 @@
+ 		};
+ 
+ 		gmu: gmu@2c6a000 {
+-			compatible="qcom,adreno-gmu-680.1", "qcom,adreno-gmu";
++			compatible = "qcom,adreno-gmu-680.1", "qcom,adreno-gmu";
+ 
+ 			reg = <0 0x02c6a000 0 0x30000>,
+ 			      <0 0x0b290000 0 0x10000>,
+@@ -2541,8 +2542,11 @@
+ 
+ 		system-cache-controller@9200000 {
+ 			compatible = "qcom,sc8180x-llcc";
+-			reg = <0 0x09200000 0 0x50000>, <0 0x09600000 0 0x50000>;
+-			reg-names = "llcc_base", "llcc_broadcast_base";
++			reg = <0 0x09200000 0 0x50000>, <0 0x09280000 0 0x50000>,
++			      <0 0x09300000 0 0x50000>, <0 0x09380000 0 0x50000>,
++			      <0 0x09600000 0 0x50000>;
++			reg-names = "llcc0_base", "llcc1_base", "llcc2_base",
++				    "llcc3_base", "llcc_broadcast_base";
+ 			interrupts = <GIC_SPI 582 IRQ_TYPE_LEVEL_HIGH>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp-crd.dts b/arch/arm64/boot/dts/qcom/sc8280xp-crd.dts
+index b566e403d1db2..b21b41a066b62 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp-crd.dts
++++ b/arch/arm64/boot/dts/qcom/sc8280xp-crd.dts
+@@ -167,7 +167,7 @@
+ 		regulator-min-microvolt = <3300000>;
+ 		regulator-max-microvolt = <3300000>;
+ 
+-		gpio = <&pmc8280_1_gpios 1 GPIO_ACTIVE_HIGH>;
++		gpio = <&pmc8280_1_gpios 2 GPIO_ACTIVE_HIGH>;
+ 		enable-active-high;
+ 
+ 		pinctrl-names = "default";
+@@ -757,7 +757,7 @@
+ 	};
+ 
+ 	misc_3p3_reg_en: misc-3p3-reg-en-state {
+-		pins = "gpio1";
++		pins = "gpio2";
+ 		function = "normal";
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts b/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
+index 7cc3028440b64..059dfccdfe7c2 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
++++ b/arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
+@@ -1246,7 +1246,7 @@
+ };
+ 
+ &tlmm {
+-	gpio-reserved-ranges = <70 2>, <74 6>, <83 4>, <125 2>, <128 2>, <154 7>;
++	gpio-reserved-ranges = <70 2>, <74 6>, <125 2>, <128 2>, <154 4>;
+ 
+ 	bt_default: bt-default-state {
+ 		hstp-bt-en-pins {
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+index ac0596dfdbc47..0756b7c141fff 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+@@ -298,6 +298,7 @@
+ 	firmware {
+ 		scm: scm {
+ 			compatible = "qcom,scm-sc8280xp", "qcom,scm";
++			interconnects = <&aggre2_noc MASTER_CRYPTO 0 &mc_virt SLAVE_EBI1 0>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm630.dtsi b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+index bba0f366ef03b..759b3a5964cc9 100644
+--- a/arch/arm64/boot/dts/qcom/sdm630.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+@@ -1196,11 +1196,11 @@
+ 
+ 		spmi_bus: spmi@800f000 {
+ 			compatible = "qcom,spmi-pmic-arb";
+-			reg =	<0x0800f000 0x1000>,
+-				<0x08400000 0x1000000>,
+-				<0x09400000 0x1000000>,
+-				<0x0a400000 0x220000>,
+-				<0x0800a000 0x3000>;
++			reg = <0x0800f000 0x1000>,
++			      <0x08400000 0x1000000>,
++			      <0x09400000 0x1000000>,
++			      <0x0a400000 0x220000>,
++			      <0x0800a000 0x3000>;
+ 			reg-names = "core", "chnls", "obsrvr", "intr", "cnfg";
+ 			interrupt-names = "periph_irq";
+ 			interrupts = <GIC_SPI 326 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-oneplus-enchilada.dts b/arch/arm64/boot/dts/qcom/sdm845-oneplus-enchilada.dts
+index 623a826b18a3e..62fe72ff37630 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-oneplus-enchilada.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-oneplus-enchilada.dts
+@@ -57,7 +57,7 @@
+ 
+ &sound {
+ 	model = "OnePlus 6";
+-	audio-routing =	"RX_BIAS", "MCLK",
++	audio-routing = "RX_BIAS", "MCLK",
+ 			"AMIC2", "MIC BIAS2",
+ 			"AMIC3", "MIC BIAS4",
+ 			"AMIC4", "MIC BIAS1",
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-sony-xperia-tama.dtsi b/arch/arm64/boot/dts/qcom/sdm845-sony-xperia-tama.dtsi
+index 3bc187a066aeb..7ee61b20452e2 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-sony-xperia-tama.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845-sony-xperia-tama.dtsi
+@@ -15,6 +15,15 @@
+ 	qcom,msm-id = <321 0x20001>; /* SDM845 v2.1 */
+ 	qcom,board-id = <8 0>;
+ 
++	aliases {
++		serial0 = &uart6;
++		serial1 = &uart9;
++	};
++
++	chosen {
++		stdout-path = "serial0:115200n8";
++	};
++
+ 	gpio-keys {
+ 		compatible = "gpio-keys";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 02a6ea0b8b2c9..89520a9fe1e3d 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -1207,6 +1207,7 @@
+ 			#clock-cells = <1>;
+ 			#reset-cells = <1>;
+ 			#power-domain-cells = <1>;
++			power-domains = <&rpmhpd SDM845_CX>;
+ 		};
+ 
+ 		qfprom@784000 {
+@@ -2613,7 +2614,7 @@
+ 				<0 0>,
+ 				<0 0>,
+ 				<0 0>,
+-				<0 300000000>;
++				<75000000 300000000>;
+ 
+ 			status = "disabled";
+ 		};
+diff --git a/arch/arm64/boot/dts/qcom/sdx75.dtsi b/arch/arm64/boot/dts/qcom/sdx75.dtsi
+index 21d5d55da5ebf..7d39a615f4f78 100644
+--- a/arch/arm64/boot/dts/qcom/sdx75.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdx75.dtsi
+@@ -484,14 +484,14 @@
+ 				tx-pins {
+ 					pins = "gpio12";
+ 					function = "qup_se1_l2_mira";
+-					drive-strength= <2>;
++					drive-strength = <2>;
+ 					bias-disable;
+ 				};
+ 
+ 				rx-pins {
+ 					pins = "gpio13";
+ 					function = "qup_se1_l3_mira";
+-					drive-strength= <2>;
++					drive-strength = <2>;
+ 					bias-disable;
+ 				};
+ 			};
+diff --git a/arch/arm64/boot/dts/qcom/sm6115.dtsi b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+index 55118577bf923..7d30b504441ad 100644
+--- a/arch/arm64/boot/dts/qcom/sm6115.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+@@ -1052,7 +1052,7 @@
+ 				     <GIC_SPI 342 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 343 IRQ_TYPE_LEVEL_HIGH>,
+ 				     <GIC_SPI 344 IRQ_TYPE_LEVEL_HIGH>;
+-			dma-channels =  <10>;
++			dma-channels = <10>;
+ 			dma-channel-mask = <0xf>;
+ 			iommus = <&apps_smmu 0xf6 0x0>;
+ 			#dma-cells = <3>;
+diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+index 30e77010aed57..7cafb32fbb941 100644
+--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+@@ -633,11 +633,6 @@
+ 			no-map;
+ 		};
+ 
+-		pil_gpu_mem: memory@8b715400 {
+-			reg = <0 0x8b715400 0 0x2000>;
+-			no-map;
+-		};
+-
+ 		pil_modem_mem: memory@8b800000 {
+ 			reg = <0 0x8b800000 0 0xf800000>;
+ 			no-map;
+@@ -658,6 +653,11 @@
+ 			no-map;
+ 		};
+ 
++		pil_gpu_mem: memory@f0d00000 {
++			reg = <0 0xf0d00000 0 0x1000>;
++			no-map;
++		};
++
+ 		debug_region: memory@ffb00000 {
+ 			reg = <0 0xffb00000 0 0xc0000>;
+ 			no-map;
+diff --git a/arch/arm64/boot/dts/qcom/sm8150.dtsi b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+index b46e55bb8bdec..a7c3020a5de49 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8150.dtsi
+@@ -1231,7 +1231,7 @@
+ 				dma-names = "tx", "rx";
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&qup_i2c7_default>;
+-				interrupts = <GIC_SPI 607 IRQ_TYPE_LEVEL_HIGH>;
++				interrupts = <GIC_SPI 608 IRQ_TYPE_LEVEL_HIGH>;
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+ 				status = "disabled";
+@@ -3840,7 +3840,7 @@
+ 			};
+ 
+ 			mdss_dsi0_phy: phy@ae94400 {
+-				compatible = "qcom,dsi-phy-7nm";
++				compatible = "qcom,dsi-phy-7nm-8150";
+ 				reg = <0 0x0ae94400 0 0x200>,
+ 				      <0 0x0ae94600 0 0x280>,
+ 				      <0 0x0ae94900 0 0x260>;
+@@ -3914,7 +3914,7 @@
+ 			};
+ 
+ 			mdss_dsi1_phy: phy@ae96400 {
+-				compatible = "qcom,dsi-phy-7nm";
++				compatible = "qcom,dsi-phy-7nm-8150";
+ 				reg = <0 0x0ae96400 0 0x200>,
+ 				      <0 0x0ae96600 0 0x280>,
+ 				      <0 0x0ae96900 0 0x260>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx203.dts b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx203.dts
+index 356a81698731a..62590c6bd3067 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx203.dts
++++ b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx203.dts
+@@ -14,3 +14,236 @@
+ };
+ 
+ /delete-node/ &vreg_l7f_1p8;
++
++&pm8009_gpios {
++	gpio-line-names = "NC", /* GPIO_1 */
++			  "CAM_PWR_LD_EN",
++			  "WIDEC_PWR_EN",
++			  "NC";
++};
++
++&pm8150_gpios {
++	gpio-line-names = "VOL_DOWN_N", /* GPIO_1 */
++			  "OPTION_2",
++			  "NC",
++			  "PM_SLP_CLK_IN",
++			  "OPTION_1",
++			  "NC",
++			  "NC",
++			  "SP_ARI_PWR_ALARM",
++			  "NC",
++			  "NC"; /* GPIO_10 */
++};
++
++&pm8150b_gpios {
++	gpio-line-names = "SNAPSHOT_N", /* GPIO_1 */
++			  "FOCUS_N",
++			  "NC",
++			  "NC",
++			  "RF_LCD_ID_EN",
++			  "NC",
++			  "NC",
++			  "LCD_ID",
++			  "NC",
++			  "WLC_EN_N", /* GPIO_10 */
++			  "NC",
++			  "RF_ID";
++};
++
++&pm8150l_gpios {
++	gpio-line-names = "NC", /* GPIO_1 */
++			  "PM3003A_EN",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "AUX2_THERM",
++			  "BB_HP_EN",
++			  "FP_LDO_EN",
++			  "PMX_RESET_N",
++			  "AUX3_THERM", /* GPIO_10 */
++			  "DTV_PWR_EN",
++			  "PM3003A_MODE";
++};
++
++&tlmm {
++	gpio-line-names = "AP_CTI_IN", /* GPIO_0 */
++			  "MDM2AP_ERR_FATAL",
++			  "AP_CTI_OUT",
++			  "MDM2AP_STATUS",
++			  "NFC_I2C_SDA",
++			  "NFC_I2C_SCL",
++			  "NFC_EN",
++			  "NFC_CLK_REQ",
++			  "NFC_ESE_PWR_REQ",
++			  "DVDT_WRT_DET_AND",
++			  "SPK_AMP_RESET_N", /* GPIO_10 */
++			  "SPK_AMP_INT_N",
++			  "APPS_I2C_1_SDA",
++			  "APPS_I2C_1_SCL",
++			  "NC",
++			  "TX_GTR_THRES_IN",
++			  "HST_BT_UART_CTS",
++			  "HST_BT_UART_RFR",
++			  "HST_BT_UART_TX",
++			  "HST_BT_UART_RX",
++			  "HST_WLAN_EN", /* GPIO_20 */
++			  "HST_BT_EN",
++			  "RGBC_IR_PWR_EN",
++			  "FP_INT_N",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "NFC_ESE_SPI_MISO",
++			  "NFC_ESE_SPI_MOSI",
++			  "NFC_ESE_SPI_SCLK", /* GPIO_30 */
++			  "NFC_ESE_SPI_CS_N",
++			  "WCD_RST_N",
++			  "NC",
++			  "SDM_DEBUG_UART_TX",
++			  "SDM_DEBUG_UART_RX",
++			  "TS_I2C_SDA",
++			  "TS_I2C_SCL",
++			  "TS_INT_N",
++			  "FP_SPI_MISO", /* GPIO_40 */
++			  "FP_SPI_MOSI",
++			  "FP_SPI_SCLK",
++			  "FP_SPI_CS_N",
++			  "APPS_I2C_0_SDA",
++			  "APPS_I2C_0_SCL",
++			  "DISP_ERR_FG",
++			  "UIM2_DETECT_EN",
++			  "NC",
++			  "NC",
++			  "NC", /* GPIO_50 */
++			  "NC",
++			  "MDM_UART_CTS",
++			  "MDM_UART_RFR",
++			  "MDM_UART_TX",
++			  "MDM_UART_RX",
++			  "AP2MDM_STATUS",
++			  "AP2MDM_ERR_FATAL",
++			  "MDM_IPC_HS_UART_TX",
++			  "MDM_IPC_HS_UART_RX",
++			  "NC", /* GPIO_60 */
++			  "NC",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "USB_CC_DIR",
++			  "DISP_VSYNC",
++			  "NC",
++			  "NC",
++			  "CAM_PWR_B_CS",
++			  "NC", /* GPIO_70 */
++			  "CAM_PWR_A_CS",
++			  "SBU_SW_SEL",
++			  "SBU_SW_OE",
++			  "FP_RESET_N",
++			  "FP_RESET_N",
++			  "DISP_RESET_N",
++			  "DEBUG_GPIO0",
++			  "TRAY_DET",
++			  "CAM2_RST_N",
++			  "PCIE0_RST_N",
++			  "PCIE0_CLK_REQ_N", /* GPIO_80 */
++			  "PCIE0_WAKE_N",
++			  "DVDT_ENABLE",
++			  "DVDT_WRT_DET_OR",
++			  "NC",
++			  "PCIE2_RST_N",
++			  "PCIE2_CLK_REQ_N",
++			  "PCIE2_WAKE_N",
++			  "MDM_VFR_IRQ0",
++			  "MDM_VFR_IRQ1",
++			  "SW_SERVICE", /* GPIO_90 */
++			  "CAM_SOF",
++			  "CAM1_RST_N",
++			  "CAM0_RST_N",
++			  "CAM0_MCLK",
++			  "CAM1_MCLK",
++			  "CAM2_MCLK",
++			  "CAM3_MCLK",
++			  "CAM4_MCLK",
++			  "TOF_RST_N",
++			  "NC", /* GPIO_100 */
++			  "CCI0_I2C_SDA",
++			  "CCI0_I2C_SCL",
++			  "CCI1_I2C_SDA",
++			  "CCI1_I2C_SCL_",
++			  "CCI2_I2C_SDA",
++			  "CCI2_I2C_SCL",
++			  "CCI3_I2C_SDA",
++			  "CCI3_I2C_SCL",
++			  "CAM3_RST_N",
++			  "NFC_DWL_REQ", /* GPIO_110 */
++			  "NFC_IRQ",
++			  "XVS",
++			  "NC",
++			  "RF_ID_EXTENSION",
++			  "SPK_AMP_I2C_SDA",
++			  "SPK_AMP_I2C_SCL",
++			  "NC",
++			  "NC",
++			  "WLC_I2C_SDA",
++			  "WLC_I2C_SCL", /* GPIO_120 */
++			  "ACC_COVER_OPEN",
++			  "ALS_PROX_INT_N",
++			  "ACCEL_INT",
++			  "WLAN_SW_CTRL",
++			  "CAMSENSOR_I2C_SDA",
++			  "CAMSENSOR_I2C_SCL",
++			  "UDON_SWITCH_SEL",
++			  "WDOG_DISABLE",
++			  "BAROMETER_INT",
++			  "NC", /* GPIO_130 */
++			  "NC",
++			  "FORCED_USB_BOOT",
++			  "NC",
++			  "NC",
++			  "WLC_INT_N",
++			  "NC",
++			  "NC",
++			  "RGBC_IR_INT",
++			  "NC",
++			  "NC", /* GPIO_140 */
++			  "NC",
++			  "BT_SLIMBUS_CLK",
++			  "BT_SLIMBUS_DATA",
++			  "HW_ID_0",
++			  "HW_ID_1",
++			  "WCD_SWR_TX_CLK",
++			  "WCD_SWR_TX_DATA0",
++			  "WCD_SWR_TX_DATA1",
++			  "WCD_SWR_RX_CLK",
++			  "WCD_SWR_RX_DATA0", /* GPIO_150 */
++			  "WCD_SWR_RX_DATA1",
++			  "SDM_DMIC_CLK1",
++			  "SDM_DMIC_DATA1",
++			  "SDM_DMIC_CLK2",
++			  "SDM_DMIC_DATA2",
++			  "SPK_AMP_I2S_CLK",
++			  "SPK_AMP_I2S_WS",
++			  "SPK_AMP_I2S_ASP_DIN",
++			  "SPK_AMP_I2S_ASP_DOUT",
++			  "COMPASS_I2C_SDA", /* GPIO_160 */
++			  "COMPASS_I2C_SCL",
++			  "NC",
++			  "NC",
++			  "SSC_SPI_1_MISO",
++			  "SSC_SPI_1_MOSI",
++			  "SSC_SPI_1_CLK",
++			  "SSC_SPI_1_CS_N",
++			  "NC",
++			  "NC",
++			  "SSC_SENSOR_I2C_SDA", /* GPIO_170 */
++			  "SSC_SENSOR_I2C_SCL",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "HST_BLE_SNS_UART6_TX",
++			  "HST_BLE_SNS_UART6_RX",
++			  "HST_WLAN_UART_TX",
++			  "HST_WLAN_UART_RX";
++};
+diff --git a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx206.dts b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx206.dts
+index 01fe3974ee720..58a521046f5f5 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx206.dts
++++ b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo-pdx206.dts
+@@ -20,6 +20,8 @@
+ };
+ 
+ &gpio_keys {
++	pinctrl-0 = <&focus_n &snapshot_n &vol_down_n &g_assist_n>;
++
+ 	g-assist-key {
+ 		label = "Google Assistant Key";
+ 		linux,code = <KEY_LEFTMETA>;
+@@ -30,6 +32,247 @@
+ 	};
+ };
+ 
++&pm8009_gpios {
++	gpio-line-names = "NC", /* GPIO_1 */
++			  "NC",
++			  "WIDEC_PWR_EN",
++			  "NC";
++};
++
++&pm8150_gpios {
++	gpio-line-names = "VOL_DOWN_N", /* GPIO_1 */
++			  "OPTION_2",
++			  "NC",
++			  "PM_SLP_CLK_IN",
++			  "OPTION_1",
++			  "G_ASSIST_N",
++			  "NC",
++			  "SP_ARI_PWR_ALARM",
++			  "NC",
++			  "NC"; /* GPIO_10 */
++
++	g_assist_n: g-assist-n-state {
++		pins = "gpio6";
++		function = "normal";
++		power-source = <1>;
++		bias-pull-up;
++		input-enable;
++	};
++};
++
++&pm8150b_gpios {
++	gpio-line-names = "SNAPSHOT_N", /* GPIO_1 */
++			  "FOCUS_N",
++			  "NC",
++			  "NC",
++			  "RF_LCD_ID_EN",
++			  "NC",
++			  "NC",
++			  "LCD_ID",
++			  "NC",
++			  "NC", /* GPIO_10 */
++			  "NC",
++			  "RF_ID";
++};
++
++&pm8150l_gpios {
++	gpio-line-names = "NC", /* GPIO_1 */
++			  "PM3003A_EN",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "AUX2_THERM",
++			  "BB_HP_EN",
++			  "FP_LDO_EN",
++			  "PMX_RESET_N",
++			  "NC", /* GPIO_10 */
++			  "NC",
++			  "PM3003A_MODE";
++};
++
++&tlmm {
++	gpio-line-names = "AP_CTI_IN", /* GPIO_0 */
++			  "MDM2AP_ERR_FATAL",
++			  "AP_CTI_OUT",
++			  "MDM2AP_STATUS",
++			  "NFC_I2C_SDA",
++			  "NFC_I2C_SCL",
++			  "NFC_EN",
++			  "NFC_CLK_REQ",
++			  "NFC_ESE_PWR_REQ",
++			  "DVDT_WRT_DET_AND",
++			  "SPK_AMP_RESET_N", /* GPIO_10 */
++			  "SPK_AMP_INT_N",
++			  "APPS_I2C_1_SDA",
++			  "APPS_I2C_1_SCL",
++			  "NC",
++			  "TX_GTR_THRES_IN",
++			  "HST_BT_UART_CTS",
++			  "HST_BT_UART_RFR",
++			  "HST_BT_UART_TX",
++			  "HST_BT_UART_RX",
++			  "HST_WLAN_EN", /* GPIO_20 */
++			  "HST_BT_EN",
++			  "RGBC_IR_PWR_EN",
++			  "FP_INT_N",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "NFC_ESE_SPI_MISO",
++			  "NFC_ESE_SPI_MOSI",
++			  "NFC_ESE_SPI_SCLK", /* GPIO_30 */
++			  "NFC_ESE_SPI_CS_N",
++			  "WCD_RST_N",
++			  "NC",
++			  "SDM_DEBUG_UART_TX",
++			  "SDM_DEBUG_UART_RX",
++			  "TS_I2C_SDA",
++			  "TS_I2C_SCL",
++			  "TS_INT_N",
++			  "FP_SPI_MISO", /* GPIO_40 */
++			  "FP_SPI_MOSI",
++			  "FP_SPI_SCLK",
++			  "FP_SPI_CS_N",
++			  "APPS_I2C_0_SDA",
++			  "APPS_I2C_0_SCL",
++			  "DISP_ERR_FG",
++			  "UIM2_DETECT_EN",
++			  "NC",
++			  "NC",
++			  "NC", /* GPIO_50 */
++			  "NC",
++			  "MDM_UART_CTS",
++			  "MDM_UART_RFR",
++			  "MDM_UART_TX",
++			  "MDM_UART_RX",
++			  "AP2MDM_STATUS",
++			  "AP2MDM_ERR_FATAL",
++			  "MDM_IPC_HS_UART_TX",
++			  "MDM_IPC_HS_UART_RX",
++			  "NC", /* GPIO_60 */
++			  "NC",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "USB_CC_DIR",
++			  "DISP_VSYNC",
++			  "NC",
++			  "NC",
++			  "CAM_PWR_B_CS",
++			  "NC", /* GPIO_70 */
++			  "FRONTC_PWR_EN",
++			  "SBU_SW_SEL",
++			  "SBU_SW_OE",
++			  "FP_RESET_N",
++			  "FP_RESET_N",
++			  "DISP_RESET_N",
++			  "DEBUG_GPIO0",
++			  "TRAY_DET",
++			  "CAM2_RST_N",
++			  "PCIE0_RST_N",
++			  "PCIE0_CLK_REQ_N", /* GPIO_80 */
++			  "PCIE0_WAKE_N",
++			  "DVDT_ENABLE",
++			  "DVDT_WRT_DET_OR",
++			  "NC",
++			  "PCIE2_RST_N",
++			  "PCIE2_CLK_REQ_N",
++			  "PCIE2_WAKE_N",
++			  "MDM_VFR_IRQ0",
++			  "MDM_VFR_IRQ1",
++			  "SW_SERVICE", /* GPIO_90 */
++			  "CAM_SOF",
++			  "CAM1_RST_N",
++			  "CAM0_RST_N",
++			  "CAM0_MCLK",
++			  "CAM1_MCLK",
++			  "CAM2_MCLK",
++			  "CAM3_MCLK",
++			  "NC",
++			  "NC",
++			  "NC", /* GPIO_100 */
++			  "CCI0_I2C_SDA",
++			  "CCI0_I2C_SCL",
++			  "CCI1_I2C_SDA",
++			  "CCI1_I2C_SCL_",
++			  "CCI2_I2C_SDA",
++			  "CCI2_I2C_SCL",
++			  "CCI3_I2C_SDA",
++			  "CCI3_I2C_SCL",
++			  "CAM3_RST_N",
++			  "NFC_DWL_REQ", /* GPIO_110 */
++			  "NFC_IRQ",
++			  "XVS",
++			  "NC",
++			  "RF_ID_EXTENSION",
++			  "SPK_AMP_I2C_SDA",
++			  "SPK_AMP_I2C_SCL",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "ACC_COVER_OPEN",
++			  "ALS_PROX_INT_N",
++			  "ACCEL_INT",
++			  "WLAN_SW_CTRL",
++			  "CAMSENSOR_I2C_SDA",
++			  "CAMSENSOR_I2C_SCL",
++			  "UDON_SWITCH_SEL",
++			  "WDOG_DISABLE",
++			  "BAROMETER_INT",
++			  "NC", /* GPIO_130 */
++			  "NC",
++			  "FORCED_USB_BOOT",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "RGBC_IR_INT",
++			  "NC",
++			  "NC", /* GPIO_140 */
++			  "NC",
++			  "BT_SLIMBUS_CLK",
++			  "BT_SLIMBUS_DATA",
++			  "HW_ID_0",
++			  "HW_ID_1",
++			  "WCD_SWR_TX_CLK",
++			  "WCD_SWR_TX_DATA0",
++			  "WCD_SWR_TX_DATA1",
++			  "WCD_SWR_RX_CLK",
++			  "WCD_SWR_RX_DATA0", /* GPIO_150 */
++			  "WCD_SWR_RX_DATA1",
++			  "SDM_DMIC_CLK1",
++			  "SDM_DMIC_DATA1",
++			  "SDM_DMIC_CLK2",
++			  "SDM_DMIC_DATA2",
++			  "SPK_AMP_I2S_CLK",
++			  "SPK_AMP_I2S_WS",
++			  "SPK_AMP_I2S_ASP_DIN",
++			  "SPK_AMP_I2S_ASP_DOUT",
++			  "COMPASS_I2C_SDA", /* GPIO_160 */
++			  "COMPASS_I2C_SCL",
++			  "NC",
++			  "NC",
++			  "SSC_SPI_1_MISO",
++			  "SSC_SPI_1_MOSI",
++			  "SSC_SPI_1_CLK",
++			  "SSC_SPI_1_CS_N",
++			  "NC",
++			  "NC",
++			  "SSC_SENSOR_I2C_SDA", /* GPIO_170 */
++			  "SSC_SENSOR_I2C_SCL",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "NC",
++			  "HST_BLE_SNS_UART6_TX",
++			  "HST_BLE_SNS_UART6_RX",
++			  "HST_WLAN_UART_TX",
++			  "HST_WLAN_UART_RX";
++};
++
+ &vreg_l2f_1p3 {
+ 	regulator-min-microvolt = <1200000>;
+ 	regulator-max-microvolt = <1200000>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi
+index 8ab82bacba81f..b044cffb419e5 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi
+@@ -51,12 +51,26 @@
+ 	gpio_keys: gpio-keys {
+ 		compatible = "gpio-keys";
+ 
+-		/*
+-		 * Camera focus (light press) and camera snapshot (full press)
+-		 * seem not to work properly.. Adding the former one stalls the CPU
+-		 * and the latter kills the volume down key for whatever reason. In any
+-		 * case, they are both on &pm8150b_gpios: camera focus(2), camera snapshot(1).
+-		 */
++		pinctrl-0 = <&focus_n &snapshot_n &vol_down_n>;
++		pinctrl-names = "default";
++
++		key-camera-focus {
++			label = "Camera Focus";
++			linux,code = <KEY_CAMERA_FOCUS>;
++			gpios = <&pm8150b_gpios 2 GPIO_ACTIVE_LOW>;
++			debounce-interval = <15>;
++			linux,can-disable;
++			wakeup-source;
++		};
++
++		key-camera-snapshot {
++			label = "Camera Snapshot";
++			linux,code = <KEY_CAMERA>;
++			gpios = <&pm8150b_gpios 1 GPIO_ACTIVE_LOW>;
++			debounce-interval = <15>;
++			linux,can-disable;
++			wakeup-source;
++		};
+ 
+ 		key-vol-down {
+ 			label = "Volume Down";
+@@ -551,6 +565,34 @@
+ 	vdda-pll-supply = <&vreg_l9a_1p2>;
+ };
+ 
++&pm8150_gpios {
++	vol_down_n: vol-down-n-state {
++		pins = "gpio1";
++		function = "normal";
++		power-source = <0>;
++		bias-pull-up;
++		input-enable;
++	};
++};
++
++&pm8150b_gpios {
++	snapshot_n: snapshot-n-state {
++		pins = "gpio1";
++		function = "normal";
++		power-source = <0>;
++		bias-pull-up;
++		input-enable;
++	};
++
++	focus_n: focus-n-state {
++		pins = "gpio2";
++		function = "normal";
++		power-source = <0>;
++		bias-pull-up;
++		input-enable;
++	};
++};
++
+ &pon_pwrkey {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 1efa07f2caff4..e03007e23e918 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -100,7 +100,7 @@
+ 			clocks = <&cpufreq_hw 0>;
+ 			enable-method = "psci";
+ 			capacity-dmips-mhz = <448>;
+-			dynamic-power-coefficient = <205>;
++			dynamic-power-coefficient = <105>;
+ 			next-level-cache = <&L2_0>;
+ 			power-domains = <&CPU_PD0>;
+ 			power-domain-names = "psci";
+@@ -131,7 +131,7 @@
+ 			clocks = <&cpufreq_hw 0>;
+ 			enable-method = "psci";
+ 			capacity-dmips-mhz = <448>;
+-			dynamic-power-coefficient = <205>;
++			dynamic-power-coefficient = <105>;
+ 			next-level-cache = <&L2_100>;
+ 			power-domains = <&CPU_PD1>;
+ 			power-domain-names = "psci";
+@@ -156,7 +156,7 @@
+ 			clocks = <&cpufreq_hw 0>;
+ 			enable-method = "psci";
+ 			capacity-dmips-mhz = <448>;
+-			dynamic-power-coefficient = <205>;
++			dynamic-power-coefficient = <105>;
+ 			next-level-cache = <&L2_200>;
+ 			power-domains = <&CPU_PD2>;
+ 			power-domain-names = "psci";
+@@ -181,7 +181,7 @@
+ 			clocks = <&cpufreq_hw 0>;
+ 			enable-method = "psci";
+ 			capacity-dmips-mhz = <448>;
+-			dynamic-power-coefficient = <205>;
++			dynamic-power-coefficient = <105>;
+ 			next-level-cache = <&L2_300>;
+ 			power-domains = <&CPU_PD3>;
+ 			power-domain-names = "psci";
+@@ -1905,6 +1905,7 @@
+ 
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pcie0_default_state>;
++			dma-coherent;
+ 
+ 			status = "disabled";
+ 		};
+@@ -2011,6 +2012,7 @@
+ 
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pcie1_default_state>;
++			dma-coherent;
+ 
+ 			status = "disabled";
+ 		};
+@@ -2119,6 +2121,7 @@
+ 
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pcie2_default_state>;
++			dma-coherent;
+ 
+ 			status = "disabled";
+ 		};
+@@ -2726,6 +2729,7 @@
+ 			clock-names = "ahb", "bus", "iface";
+ 
+ 			power-domains = <&gpucc GPU_CX_GDSC>;
++			dma-coherent;
+ 		};
+ 
+ 		slpi: remoteproc@5c00000 {
+@@ -3059,7 +3063,7 @@
+ 				port@7 {
+ 					reg = <7>;
+ 					funnel_swao_in_funnel_merg: endpoint {
+-						remote-endpoint= <&funnel_merg_out_funnel_swao>;
++						remote-endpoint = <&funnel_merg_out_funnel_swao>;
+ 					};
+ 				};
+ 			};
+@@ -5298,104 +5302,105 @@
+ 			reg = <0 0x15000000 0 0x100000>;
+ 			#iommu-cells = <2>;
+ 			#global-interrupts = <2>;
+-			interrupts =    <GIC_SPI 64 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 101 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 102 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 114 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 181 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 182 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 183 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 185 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 187 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 192 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 315 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 316 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 317 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 318 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 319 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 320 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 321 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 322 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 323 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 324 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 325 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 326 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 327 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 328 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 329 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 330 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 331 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 332 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 333 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 334 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 336 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 337 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 338 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 339 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 340 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 341 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 342 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 343 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 344 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 395 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 396 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 397 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 398 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 399 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 400 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 401 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 402 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 403 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 404 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 406 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 412 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 418 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 419 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 421 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 423 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 424 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 425 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 690 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 691 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 692 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 693 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 694 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 695 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 696 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 697 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 707 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 64 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 101 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 102 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 114 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 181 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 182 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 183 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 185 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 187 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 192 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 315 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 316 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 317 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 318 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 319 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 320 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 321 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 322 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 323 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 324 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 325 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 326 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 327 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 328 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 329 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 330 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 331 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 332 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 333 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 334 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 336 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 337 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 338 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 339 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 340 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 341 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 342 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 343 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 344 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 395 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 396 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 397 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 398 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 399 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 400 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 401 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 402 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 403 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 404 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 406 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 412 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 418 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 419 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 421 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 423 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 424 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 425 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 690 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 691 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 692 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 693 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 694 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 695 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 696 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 697 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 707 IRQ_TYPE_LEVEL_HIGH>;
++			dma-coherent;
+ 		};
+ 
+ 		adsp: remoteproc@17300000 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index ec451c616f3e4..c236967725c1b 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -48,7 +48,7 @@
+ 
+ 		CPU0: cpu@0 {
+ 			device_type = "cpu";
+-			compatible = "qcom,kryo685";
++			compatible = "arm,cortex-a55";
+ 			reg = <0x0 0x0>;
+ 			clocks = <&cpufreq_hw 0>;
+ 			enable-method = "psci";
+@@ -72,7 +72,7 @@
+ 
+ 		CPU1: cpu@100 {
+ 			device_type = "cpu";
+-			compatible = "qcom,kryo685";
++			compatible = "arm,cortex-a55";
+ 			reg = <0x0 0x100>;
+ 			clocks = <&cpufreq_hw 0>;
+ 			enable-method = "psci";
+@@ -91,7 +91,7 @@
+ 
+ 		CPU2: cpu@200 {
+ 			device_type = "cpu";
+-			compatible = "qcom,kryo685";
++			compatible = "arm,cortex-a55";
+ 			reg = <0x0 0x200>;
+ 			clocks = <&cpufreq_hw 0>;
+ 			enable-method = "psci";
+@@ -110,7 +110,7 @@
+ 
+ 		CPU3: cpu@300 {
+ 			device_type = "cpu";
+-			compatible = "qcom,kryo685";
++			compatible = "arm,cortex-a55";
+ 			reg = <0x0 0x300>;
+ 			clocks = <&cpufreq_hw 0>;
+ 			enable-method = "psci";
+@@ -129,7 +129,7 @@
+ 
+ 		CPU4: cpu@400 {
+ 			device_type = "cpu";
+-			compatible = "qcom,kryo685";
++			compatible = "arm,cortex-a78";
+ 			reg = <0x0 0x400>;
+ 			clocks = <&cpufreq_hw 1>;
+ 			enable-method = "psci";
+@@ -148,7 +148,7 @@
+ 
+ 		CPU5: cpu@500 {
+ 			device_type = "cpu";
+-			compatible = "qcom,kryo685";
++			compatible = "arm,cortex-a78";
+ 			reg = <0x0 0x500>;
+ 			clocks = <&cpufreq_hw 1>;
+ 			enable-method = "psci";
+@@ -167,7 +167,7 @@
+ 
+ 		CPU6: cpu@600 {
+ 			device_type = "cpu";
+-			compatible = "qcom,kryo685";
++			compatible = "arm,cortex-a78";
+ 			reg = <0x0 0x600>;
+ 			clocks = <&cpufreq_hw 1>;
+ 			enable-method = "psci";
+@@ -186,7 +186,7 @@
+ 
+ 		CPU7: cpu@700 {
+ 			device_type = "cpu";
+-			compatible = "qcom,kryo685";
++			compatible = "arm,cortex-x1";
+ 			reg = <0x0 0x700>;
+ 			clocks = <&cpufreq_hw 2>;
+ 			enable-method = "psci";
+@@ -246,8 +246,8 @@
+ 				compatible = "arm,idle-state";
+ 				idle-state-name = "silver-rail-power-collapse";
+ 				arm,psci-suspend-param = <0x40000004>;
+-				entry-latency-us = <355>;
+-				exit-latency-us = <909>;
++				entry-latency-us = <360>;
++				exit-latency-us = <531>;
+ 				min-residency-us = <3934>;
+ 				local-timer-stop;
+ 			};
+@@ -256,8 +256,8 @@
+ 				compatible = "arm,idle-state";
+ 				idle-state-name = "gold-rail-power-collapse";
+ 				arm,psci-suspend-param = <0x40000004>;
+-				entry-latency-us = <241>;
+-				exit-latency-us = <1461>;
++				entry-latency-us = <702>;
++				exit-latency-us = <1061>;
+ 				min-residency-us = <4488>;
+ 				local-timer-stop;
+ 			};
+@@ -3077,104 +3077,104 @@
+ 			reg = <0 0x15000000 0 0x100000>;
+ 			#iommu-cells = <2>;
+ 			#global-interrupts = <2>;
+-			interrupts =    <GIC_SPI 64 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 101 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 102 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 114 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 181 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 182 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 183 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 185 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 187 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 192 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 315 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 316 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 317 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 318 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 319 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 320 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 321 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 322 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 323 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 324 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 325 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 326 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 327 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 328 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 329 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 330 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 331 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 332 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 333 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 334 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 336 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 337 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 338 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 339 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 340 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 341 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 342 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 343 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 344 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 395 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 396 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 397 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 398 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 399 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 400 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 401 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 402 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 403 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 404 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 406 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 412 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 418 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 419 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 421 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 423 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 424 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 425 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 690 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 691 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 692 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 693 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 694 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 695 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 696 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 697 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 707 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 64 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 101 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 102 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 114 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 181 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 182 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 183 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 185 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 187 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 192 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 315 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 316 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 317 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 318 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 319 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 320 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 321 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 322 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 323 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 324 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 325 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 326 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 327 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 328 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 329 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 330 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 331 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 332 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 333 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 334 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 336 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 337 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 338 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 339 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 340 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 341 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 342 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 343 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 344 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 395 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 396 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 397 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 398 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 399 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 400 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 401 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 402 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 403 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 404 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 406 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 412 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 418 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 419 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 421 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 423 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 424 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 425 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 690 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 691 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 692 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 693 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 694 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 695 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 696 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 697 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 707 IRQ_TYPE_LEVEL_HIGH>;
+ 		};
+ 
+ 		adsp: remoteproc@17300000 {
+@@ -3399,6 +3399,13 @@
+ 			      <0 0x18593000 0 0x1000>;
+ 			reg-names = "freq-domain0", "freq-domain1", "freq-domain2";
+ 
++			interrupts = <GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
++			interrupt-names = "dcvsh-irq-0",
++					  "dcvsh-irq-1",
++					  "dcvsh-irq-2";
++
+ 			clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GCC_GPLL0>;
+ 			clock-names = "xo", "alternate";
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8450-hdk.dts b/arch/arm64/boot/dts/qcom/sm8450-hdk.dts
+index bc4c125d1832e..dabb7e872f384 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450-hdk.dts
++++ b/arch/arm64/boot/dts/qcom/sm8450-hdk.dts
+@@ -14,7 +14,6 @@
+ #include "pm8450.dtsi"
+ #include "pmk8350.dtsi"
+ #include "pmr735a.dtsi"
+-#include "pmr735b.dtsi"
+ 
+ / {
+ 	model = "Qualcomm Technologies, Inc. SM8450 HDK";
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index 5cd7296c76605..42b23ba7a573f 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -3810,103 +3810,103 @@
+ 			reg = <0 0x15000000 0 0x100000>;
+ 			#iommu-cells = <2>;
+ 			#global-interrupts = <1>;
+-			interrupts =    <GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 101 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 102 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 114 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 181 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 182 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 183 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 185 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 187 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 192 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 315 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 316 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 317 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 318 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 319 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 320 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 321 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 322 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 323 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 324 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 325 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 326 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 327 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 328 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 329 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 330 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 331 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 332 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 333 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 334 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 336 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 337 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 338 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 339 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 340 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 341 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 342 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 343 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 344 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 395 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 396 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 397 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 398 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 399 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 400 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 401 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 402 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 403 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 404 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 406 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 418 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 419 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 412 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 421 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 707 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 423 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 424 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 425 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 690 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 691 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 692 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 693 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 694 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 695 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 696 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 697 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 101 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 102 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 114 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 181 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 182 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 183 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 185 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 187 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 192 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 315 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 316 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 317 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 318 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 319 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 320 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 321 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 322 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 323 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 324 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 325 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 326 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 327 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 328 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 329 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 330 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 331 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 332 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 333 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 334 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 336 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 337 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 338 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 339 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 340 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 341 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 342 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 343 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 344 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 395 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 396 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 397 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 398 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 399 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 400 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 401 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 402 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 403 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 404 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 406 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 418 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 419 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 412 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 421 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 707 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 423 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 424 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 425 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 690 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 691 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 692 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 693 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 694 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 695 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 696 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 697 IRQ_TYPE_LEVEL_HIGH>;
+ 		};
+ 
+ 		intc: interrupt-controller@17100000 {
+@@ -4212,7 +4212,7 @@
+ 				 <&apps_smmu 0x59f 0x0>;
+ 		};
+ 
+-		crypto: crypto@1de0000 {
++		crypto: crypto@1dfa000 {
+ 			compatible = "qcom,sm8450-qce", "qcom,sm8150-qce", "qcom,qce";
+ 			reg = <0 0x01dfa000 0 0x6000>;
+ 			dmas = <&cryptobam 4>, <&cryptobam 5>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8550-mtp.dts b/arch/arm64/boot/dts/qcom/sm8550-mtp.dts
+index ec86c5f380450..714a2f9497adc 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550-mtp.dts
++++ b/arch/arm64/boot/dts/qcom/sm8550-mtp.dts
+@@ -186,6 +186,7 @@
+ 
+ 		vdd-bob1-supply = <&vph_pwr>;
+ 		vdd-bob2-supply = <&vph_pwr>;
++		vdd-l1-l4-l10-supply = <&vreg_s6g_1p8>;
+ 		vdd-l2-l13-l14-supply = <&vreg_bob1>;
+ 		vdd-l3-supply = <&vreg_s4g_1p3>;
+ 		vdd-l5-l16-supply = <&vreg_bob1>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+index 41d60af936920..6e8aba2569316 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+@@ -1600,7 +1600,7 @@
+ 				pinctrl-0 = <&qup_uart7_default>;
+ 				interrupts = <GIC_SPI 579 IRQ_TYPE_LEVEL_HIGH>;
+ 				interconnect-names = "qup-core", "qup-config";
+-				interconnects =	<&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
++				interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>,
+ 						<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_QUP_1 0>;
+ 				status = "disabled";
+ 			};
+@@ -3517,103 +3517,103 @@
+ 			reg = <0 0x15000000 0 0x100000>;
+ 			#iommu-cells = <2>;
+ 			#global-interrupts = <1>;
+-			interrupts =	<GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 101 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 102 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 114 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 181 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 182 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 183 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 185 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 187 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 192 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 315 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 316 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 317 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 318 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 319 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 320 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 321 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 322 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 323 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 324 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 325 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 326 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 327 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 328 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 329 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 330 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 331 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 332 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 333 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 334 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 336 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 337 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 338 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 339 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 340 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 341 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 342 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 343 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 344 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 395 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 396 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 397 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 398 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 399 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 400 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 401 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 402 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 403 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 404 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 406 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 418 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 419 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 412 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 421 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 706 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 423 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 424 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 425 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 689 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 690 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 691 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 692 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 693 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 694 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 695 IRQ_TYPE_LEVEL_HIGH>,
+-					<GIC_SPI 696 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 101 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 102 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 114 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 181 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 182 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 183 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 185 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 187 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 192 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 315 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 316 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 317 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 318 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 319 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 320 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 321 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 322 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 323 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 324 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 325 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 326 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 327 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 328 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 329 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 330 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 331 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 332 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 333 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 334 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 336 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 337 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 338 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 339 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 340 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 341 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 342 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 343 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 344 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 395 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 396 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 397 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 398 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 399 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 400 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 401 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 402 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 403 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 404 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 406 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 418 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 419 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 412 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 421 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 706 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 423 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 424 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 425 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 689 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 690 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 691 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 692 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 693 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 694 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 695 IRQ_TYPE_LEVEL_HIGH>,
++				     <GIC_SPI 696 IRQ_TYPE_LEVEL_HIGH>;
+ 		};
+ 
+ 		intc: interrupt-controller@17100000 {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-radxa-e25.dts b/arch/arm64/boot/dts/rockchip/rk3568-radxa-e25.dts
+index 63c4bd873188e..72ad74c38a2b4 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-radxa-e25.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3568-radxa-e25.dts
+@@ -47,6 +47,9 @@
+ 		vin-supply = <&vcc5v0_sys>;
+ 	};
+ 
++	/* actually fed by vcc5v0_sys, dependent
++	 * on pi6c clock generator
++	 */
+ 	vcc3v3_minipcie: vcc3v3-minipcie-regulator {
+ 		compatible = "regulator-fixed";
+ 		enable-active-high;
+@@ -54,9 +57,9 @@
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&minipcie_enable_h>;
+ 		regulator-name = "vcc3v3_minipcie";
+-		regulator-min-microvolt = <5000000>;
+-		regulator-max-microvolt = <5000000>;
+-		vin-supply = <&vcc5v0_sys>;
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++		vin-supply = <&vcc3v3_pi6c_05>;
+ 	};
+ 
+ 	vcc3v3_ngff: vcc3v3-ngff-regulator {
+@@ -71,9 +74,6 @@
+ 		vin-supply = <&vcc5v0_sys>;
+ 	};
+ 
+-	/* actually fed by vcc5v0_sys, dependent
+-	 * on pi6c clock generator
+-	 */
+ 	vcc3v3_pcie30x1: vcc3v3-pcie30x1-regulator {
+ 		compatible = "regulator-fixed";
+ 		enable-active-high;
+@@ -83,7 +83,7 @@
+ 		regulator-name = "vcc3v3_pcie30x1";
+ 		regulator-min-microvolt = <3300000>;
+ 		regulator-max-microvolt = <3300000>;
+-		vin-supply = <&vcc3v3_pi6c_05>;
++		vin-supply = <&vcc5v0_sys>;
+ 	};
+ 
+ 	vcc3v3_pi6c_05: vcc3v3-pi6c-05-regulator {
+@@ -99,6 +99,10 @@
+ 	};
+ };
+ 
++&combphy1 {
++	phy-supply = <&vcc3v3_pcie30x1>;
++};
++
+ &pcie2x1 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pcie20_reset_h>;
+@@ -117,7 +121,7 @@
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pcie30x1m0_pins>;
+ 	reset-gpios = <&gpio0 RK_PC3 GPIO_ACTIVE_HIGH>;
+-	vpcie3v3-supply = <&vcc3v3_pcie30x1>;
++	vpcie3v3-supply = <&vcc3v3_minipcie>;
+ 	status = "okay";
+ };
+ 
+@@ -178,6 +182,10 @@
+ 	status = "okay";
+ };
+ 
++&sata1 {
++	status = "okay";
++};
++
+ &sdmmc0 {
+ 	bus-width = <4>;
+ 	cap-sd-highspeed;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi b/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi
+index 34c8ffc553ec3..540ed8a0d7fb6 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62x-sk-common.dtsi
+@@ -300,7 +300,7 @@
+ 	status = "okay";
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&main_i2c1_pins_default>;
+-	clock-frequency = <400000>;
++	clock-frequency = <100000>;
+ 
+ 	tlv320aic3106: audio-codec@1b {
+ 		#sound-dai-cells = <0>;
+diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j721s2-common-proc-board.dts
+index 04d4739d72457..2a5000645752d 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721s2-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721s2-common-proc-board.dts
+@@ -249,18 +249,19 @@
+ 			J721S2_WKUP_IOPAD(0x108, PIN_INPUT, 0) /* (N27) MCU_ADC1_AIN7 */
+ 		>;
+ 	};
++};
+ 
++&wkup_pmx1 {
+ 	mcu_fss0_ospi1_pins_default: mcu-fss0-ospi1-default-pins {
+ 		pinctrl-single,pins = <
+-			J721S2_WKUP_IOPAD(0x040, PIN_OUTPUT, 0) /* (A19) MCU_OSPI1_CLK */
+-			J721S2_WKUP_IOPAD(0x05c, PIN_OUTPUT, 0) /* (D20) MCU_OSPI1_CSn0 */
+-			J721S2_WKUP_IOPAD(0x060, PIN_OUTPUT, 0) /* (C21) MCU_OSPI1_CSn1 */
+-			J721S2_WKUP_IOPAD(0x04c, PIN_INPUT, 0) /* (D21) MCU_OSPI1_D0 */
+-			J721S2_WKUP_IOPAD(0x050, PIN_INPUT, 0) /* (G20) MCU_OSPI1_D1 */
+-			J721S2_WKUP_IOPAD(0x054, PIN_INPUT, 0) /* (C20) MCU_OSPI1_D2 */
+-			J721S2_WKUP_IOPAD(0x058, PIN_INPUT, 0) /* (A20) MCU_OSPI1_D3 */
+-			J721S2_WKUP_IOPAD(0x048, PIN_INPUT, 0) /* (B19) MCU_OSPI1_DQS */
+-			J721S2_WKUP_IOPAD(0x044, PIN_INPUT, 0) /* (B20) MCU_OSPI1_LBCLKO */
++			J721S2_WKUP_IOPAD(0x008, PIN_OUTPUT, 0) /* (A19) MCU_OSPI1_CLK */
++			J721S2_WKUP_IOPAD(0x024, PIN_OUTPUT, 0) /* (D20) MCU_OSPI1_CSn0 */
++			J721S2_WKUP_IOPAD(0x014, PIN_INPUT, 0) /* (D21) MCU_OSPI1_D0 */
++			J721S2_WKUP_IOPAD(0x018, PIN_INPUT, 0) /* (G20) MCU_OSPI1_D1 */
++			J721S2_WKUP_IOPAD(0x01c, PIN_INPUT, 0) /* (C20) MCU_OSPI1_D2 */
++			J721S2_WKUP_IOPAD(0x020, PIN_INPUT, 0) /* (A20) MCU_OSPI1_D3 */
++			J721S2_WKUP_IOPAD(0x010, PIN_INPUT, 0) /* (B19) MCU_OSPI1_DQS */
++			J721S2_WKUP_IOPAD(0x00c, PIN_INPUT, 0) /* (B20) MCU_OSPI1_LBCLKO */
+ 		>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-som-p0.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-som-p0.dtsi
+index d57dd43da0ef4..17ae27eac39ad 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721s2-som-p0.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721s2-som-p0.dtsi
+@@ -44,9 +44,6 @@
+ 		pinctrl-single,pins = <
+ 			J721S2_WKUP_IOPAD(0x000, PIN_OUTPUT, 0) /* (D19) MCU_OSPI0_CLK */
+ 			J721S2_WKUP_IOPAD(0x02c, PIN_OUTPUT, 0) /* (F15) MCU_OSPI0_CSn0 */
+-			J721S2_WKUP_IOPAD(0x030, PIN_OUTPUT, 0) /* (G17) MCU_OSPI0_CSn1 */
+-			J721S2_WKUP_IOPAD(0x038, PIN_OUTPUT, 0) /* (F14) MCU_OSPI0_CSn2 */
+-			J721S2_WKUP_IOPAD(0x03c, PIN_OUTPUT, 0) /* (F17) MCU_OSPI0_CSn3 */
+ 			J721S2_WKUP_IOPAD(0x00c, PIN_INPUT, 0) /* (C19) MCU_OSPI0_D0 */
+ 			J721S2_WKUP_IOPAD(0x010, PIN_INPUT, 0) /* (F16) MCU_OSPI0_D1 */
+ 			J721S2_WKUP_IOPAD(0x014, PIN_INPUT, 0) /* (G15) MCU_OSPI0_D2 */
+diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts b/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts
+index 430b8a2c5df57..bf772f0641170 100644
+--- a/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts
++++ b/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts
+@@ -340,27 +340,27 @@
+ 
+ 	mcu_adc0_pins_default: mcu-adc0-default-pins {
+ 		pinctrl-single,pins = <
+-			J784S4_WKUP_IOPAD(0x134, PIN_INPUT, 0) /* (P36) MCU_ADC0_AIN0 */
+-			J784S4_WKUP_IOPAD(0x138, PIN_INPUT, 0) /* (V36) MCU_ADC0_AIN1 */
+-			J784S4_WKUP_IOPAD(0x13c, PIN_INPUT, 0) /* (T34) MCU_ADC0_AIN2 */
+-			J784S4_WKUP_IOPAD(0x140, PIN_INPUT, 0) /* (T36) MCU_ADC0_AIN3 */
+-			J784S4_WKUP_IOPAD(0x144, PIN_INPUT, 0) /* (P34) MCU_ADC0_AIN4 */
+-			J784S4_WKUP_IOPAD(0x148, PIN_INPUT, 0) /* (R37) MCU_ADC0_AIN5 */
+-			J784S4_WKUP_IOPAD(0x14c, PIN_INPUT, 0) /* (R33) MCU_ADC0_AIN6 */
+-			J784S4_WKUP_IOPAD(0x150, PIN_INPUT, 0) /* (V38) MCU_ADC0_AIN7 */
++			J784S4_WKUP_IOPAD(0x0cc, PIN_INPUT, 0) /* (P36) MCU_ADC0_AIN0 */
++			J784S4_WKUP_IOPAD(0x0d0, PIN_INPUT, 0) /* (V36) MCU_ADC0_AIN1 */
++			J784S4_WKUP_IOPAD(0x0d4, PIN_INPUT, 0) /* (T34) MCU_ADC0_AIN2 */
++			J784S4_WKUP_IOPAD(0x0d8, PIN_INPUT, 0) /* (T36) MCU_ADC0_AIN3 */
++			J784S4_WKUP_IOPAD(0x0dc, PIN_INPUT, 0) /* (P34) MCU_ADC0_AIN4 */
++			J784S4_WKUP_IOPAD(0x0e0, PIN_INPUT, 0) /* (R37) MCU_ADC0_AIN5 */
++			J784S4_WKUP_IOPAD(0x0e4, PIN_INPUT, 0) /* (R33) MCU_ADC0_AIN6 */
++			J784S4_WKUP_IOPAD(0x0e8, PIN_INPUT, 0) /* (V38) MCU_ADC0_AIN7 */
+ 		>;
+ 	};
+ 
+ 	mcu_adc1_pins_default: mcu-adc1-default-pins {
+ 		pinctrl-single,pins = <
+-			J784S4_WKUP_IOPAD(0x154, PIN_INPUT, 0) /* (Y38) MCU_ADC1_AIN0 */
+-			J784S4_WKUP_IOPAD(0x158, PIN_INPUT, 0) /* (Y34) MCU_ADC1_AIN1 */
+-			J784S4_WKUP_IOPAD(0x15c, PIN_INPUT, 0) /* (V34) MCU_ADC1_AIN2 */
+-			J784S4_WKUP_IOPAD(0x160, PIN_INPUT, 0) /* (W37) MCU_ADC1_AIN3 */
+-			J784S4_WKUP_IOPAD(0x164, PIN_INPUT, 0) /* (AA37) MCU_ADC1_AIN4 */
+-			J784S4_WKUP_IOPAD(0x168, PIN_INPUT, 0) /* (W33) MCU_ADC1_AIN5 */
+-			J784S4_WKUP_IOPAD(0x16c, PIN_INPUT, 0) /* (U33) MCU_ADC1_AIN6 */
+-			J784S4_WKUP_IOPAD(0x170, PIN_INPUT, 0) /* (Y36) MCU_ADC1_AIN7 */
++			J784S4_WKUP_IOPAD(0x0ec, PIN_INPUT, 0) /* (Y38) MCU_ADC1_AIN0 */
++			J784S4_WKUP_IOPAD(0x0f0, PIN_INPUT, 0) /* (Y34) MCU_ADC1_AIN1 */
++			J784S4_WKUP_IOPAD(0x0f4, PIN_INPUT, 0) /* (V34) MCU_ADC1_AIN2 */
++			J784S4_WKUP_IOPAD(0x0f8, PIN_INPUT, 0) /* (W37) MCU_ADC1_AIN3 */
++			J784S4_WKUP_IOPAD(0x0fc, PIN_INPUT, 0) /* (AA37) MCU_ADC1_AIN4 */
++			J784S4_WKUP_IOPAD(0x100, PIN_INPUT, 0) /* (W33) MCU_ADC1_AIN5 */
++			J784S4_WKUP_IOPAD(0x104, PIN_INPUT, 0) /* (U33) MCU_ADC1_AIN6 */
++			J784S4_WKUP_IOPAD(0x108, PIN_INPUT, 0) /* (Y36) MCU_ADC1_AIN7 */
+ 		>;
+ 	};
+ };
+@@ -379,21 +379,28 @@
+ 			J784S4_WKUP_IOPAD(0x024, PIN_INPUT, 0) /* (E34) MCU_OSPI0_D6 */
+ 			J784S4_WKUP_IOPAD(0x028, PIN_INPUT, 0) /* (E33) MCU_OSPI0_D7 */
+ 			J784S4_WKUP_IOPAD(0x008, PIN_INPUT, 0) /* (C34) MCU_OSPI0_DQS */
+-			J784S4_WKUP_IOPAD(0x03c, PIN_OUTPUT, 6) /* (C32) MCU_OSPI0_CSn3.MCU_OSPI0_ECC_FAIL */
+-			J784S4_WKUP_IOPAD(0x038, PIN_OUTPUT, 6) /* (B34) MCU_OSPI0_CSn2.MCU_OSPI0_RESET_OUT0 */
++		>;
++	};
++};
++
++&wkup_pmx1 {
++	mcu_fss0_ospi0_1_pins_default: mcu-fss0-ospi0-1-default-pins {
++		pinctrl-single,pins = <
++			J784S4_WKUP_IOPAD(0x004, PIN_OUTPUT, 6) /* (C32) MCU_OSPI0_ECC_FAIL */
++			J784S4_WKUP_IOPAD(0x000, PIN_OUTPUT, 6) /* (B34) MCU_OSPI0_RESET_OUT0 */
+ 		>;
+ 	};
+ 
+ 	mcu_fss0_ospi1_pins_default: mcu-fss0-ospi1-default-pins {
+ 		pinctrl-single,pins = <
+-			J784S4_WKUP_IOPAD(0x040, PIN_OUTPUT, 0) /* (F32) MCU_OSPI1_CLK */
+-			J784S4_WKUP_IOPAD(0x05c, PIN_OUTPUT, 0) /* (G32) MCU_OSPI1_CSn0 */
+-			J784S4_WKUP_IOPAD(0x04c, PIN_INPUT, 0) /* (E35) MCU_OSPI1_D0 */
+-			J784S4_WKUP_IOPAD(0x050, PIN_INPUT, 0) /* (D31) MCU_OSPI1_D1 */
+-			J784S4_WKUP_IOPAD(0x054, PIN_INPUT, 0) /* (G31) MCU_OSPI1_D2 */
+-			J784S4_WKUP_IOPAD(0x058, PIN_INPUT, 0) /* (F33) MCU_OSPI1_D3 */
+-			J784S4_WKUP_IOPAD(0x048, PIN_INPUT, 0) /* (F31) MCU_OSPI1_DQS */
+-			J784S4_WKUP_IOPAD(0x044, PIN_INPUT, 0) /* (C31) MCU_OSPI1_LBCLKO */
++			J784S4_WKUP_IOPAD(0x008, PIN_OUTPUT, 0) /* (F32) MCU_OSPI1_CLK */
++			J784S4_WKUP_IOPAD(0x024, PIN_OUTPUT, 0) /* (G32) MCU_OSPI1_CSn0 */
++			J784S4_WKUP_IOPAD(0x014, PIN_INPUT, 0) /* (E35) MCU_OSPI1_D0 */
++			J784S4_WKUP_IOPAD(0x018, PIN_INPUT, 0) /* (D31) MCU_OSPI1_D1 */
++			J784S4_WKUP_IOPAD(0x01C, PIN_INPUT, 0) /* (G31) MCU_OSPI1_D2 */
++			J784S4_WKUP_IOPAD(0x020, PIN_INPUT, 0) /* (F33) MCU_OSPI1_D3 */
++			J784S4_WKUP_IOPAD(0x010, PIN_INPUT, 0) /* (F31) MCU_OSPI1_DQS */
++			J784S4_WKUP_IOPAD(0x00C, PIN_INPUT, 0) /* (C31) MCU_OSPI1_LBCLKO */
+ 		>;
+ 	};
+ };
+@@ -437,7 +444,7 @@
+ &ospi0 {
+ 	status = "okay";
+ 	pinctrl-names = "default";
+-	pinctrl-0 = <&mcu_fss0_ospi0_pins_default>;
++	pinctrl-0 = <&mcu_fss0_ospi0_pins_default>, <&mcu_fss0_ospi0_1_pins_default>;
+ 
+ 	flash@0 {
+ 		compatible = "jedec,spi-nor";
+diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-main.dtsi b/arch/arm64/boot/dts/ti/k3-j784s4-main.dtsi
+index 2ea0adae6832f..76e610d8782b5 100644
+--- a/arch/arm64/boot/dts/ti/k3-j784s4-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j784s4-main.dtsi
+@@ -60,7 +60,7 @@
+ 		#interrupt-cells = <1>;
+ 		ti,sci = <&sms>;
+ 		ti,sci-dev-id = <10>;
+-		ti,interrupt-ranges = <8 360 56>;
++		ti,interrupt-ranges = <8 392 56>;
+ 	};
+ 
+ 	main_pmx0: pinctrl@11c000 {
+diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi
+index 657fb1d72512c..62a0f172fb2d4 100644
+--- a/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi
+@@ -107,7 +107,7 @@
+ 		#interrupt-cells = <1>;
+ 		ti,sci = <&sms>;
+ 		ti,sci-dev-id = <177>;
+-		ti,interrupt-ranges = <16 928 16>;
++		ti,interrupt-ranges = <16 960 16>;
+ 	};
+ 
+ 	/* MCU_TIMERIO pad input CTRLMMR_MCU_TIMER*_CTRL registers */
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index a25d783dfb955..d8bae57af16d5 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -1129,7 +1129,6 @@ CONFIG_XEN_GNTDEV=y
+ CONFIG_XEN_GRANT_DEV_ALLOC=y
+ CONFIG_STAGING=y
+ CONFIG_STAGING_MEDIA=y
+-CONFIG_VIDEO_IMX_MEDIA=m
+ CONFIG_VIDEO_MAX96712=m
+ CONFIG_CHROME_PLATFORMS=y
+ CONFIG_CROS_EC=y
+@@ -1182,6 +1181,7 @@ CONFIG_IPQ_GCC_8074=y
+ CONFIG_IPQ_GCC_9574=y
+ CONFIG_MSM_GCC_8916=y
+ CONFIG_MSM_GCC_8994=y
++CONFIG_MSM_GCC_8996=y
+ CONFIG_MSM_MMCC_8994=m
+ CONFIG_MSM_MMCC_8996=m
+ CONFIG_MSM_MMCC_8998=m
+diff --git a/arch/arm64/include/asm/sdei.h b/arch/arm64/include/asm/sdei.h
+index 4292d9bafb9d2..484cb6972e99a 100644
+--- a/arch/arm64/include/asm/sdei.h
++++ b/arch/arm64/include/asm/sdei.h
+@@ -17,6 +17,9 @@
+ 
+ #include <asm/virt.h>
+ 
++DECLARE_PER_CPU(struct sdei_registered_event *, sdei_active_normal_event);
++DECLARE_PER_CPU(struct sdei_registered_event *, sdei_active_critical_event);
++
+ extern unsigned long sdei_exit_mode;
+ 
+ /* Software Delegated Exception entry point from firmware*/
+@@ -29,6 +32,9 @@ asmlinkage void __sdei_asm_entry_trampoline(unsigned long event_num,
+ 						   unsigned long pc,
+ 						   unsigned long pstate);
+ 
++/* Abort a running handler. Context is discarded. */
++void __sdei_handler_abort(void);
++
+ /*
+  * The above entry point does the minimum to call C code. This function does
+  * anything else, before calling the driver.
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index a40e5e50fa552..6ad61de03d0a0 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -986,9 +986,13 @@ SYM_CODE_START(__sdei_asm_handler)
+ 
+ 	mov	x19, x1
+ 
+-#if defined(CONFIG_VMAP_STACK) || defined(CONFIG_SHADOW_CALL_STACK)
++	/* Store the registered-event for crash_smp_send_stop() */
+ 	ldrb	w4, [x19, #SDEI_EVENT_PRIORITY]
+-#endif
++	cbnz	w4, 1f
++	adr_this_cpu dst=x5, sym=sdei_active_normal_event, tmp=x6
++	b	2f
++1:	adr_this_cpu dst=x5, sym=sdei_active_critical_event, tmp=x6
++2:	str	x19, [x5]
+ 
+ #ifdef CONFIG_VMAP_STACK
+ 	/*
+@@ -1055,6 +1059,14 @@ SYM_CODE_START(__sdei_asm_handler)
+ 
+ 	ldr_l	x2, sdei_exit_mode
+ 
++	/* Clear the registered-event seen by crash_smp_send_stop() */
++	ldrb	w3, [x4, #SDEI_EVENT_PRIORITY]
++	cbnz	w3, 1f
++	adr_this_cpu dst=x5, sym=sdei_active_normal_event, tmp=x6
++	b	2f
++1:	adr_this_cpu dst=x5, sym=sdei_active_critical_event, tmp=x6
++2:	str	xzr, [x5]
++
+ alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
+ 	sdei_handler_exit exit_mode=x2
+ alternative_else_nop_endif
+@@ -1065,4 +1077,15 @@ alternative_else_nop_endif
+ #endif
+ SYM_CODE_END(__sdei_asm_handler)
+ NOKPROBE(__sdei_asm_handler)
++
++SYM_CODE_START(__sdei_handler_abort)
++	mov_q	x0, SDEI_1_0_FN_SDEI_EVENT_COMPLETE_AND_RESUME
++	adr	x1, 1f
++	ldr_l	x2, sdei_exit_mode
++	sdei_handler_exit exit_mode=x2
++	// exit the handler and jump to the next instruction.
++	// Exit will stomp x0-x17, PSTATE, ELR_ELx, and SPSR_ELx.
++1:	ret
++SYM_CODE_END(__sdei_handler_abort)
++NOKPROBE(__sdei_handler_abort)
+ #endif /* CONFIG_ARM_SDE_INTERFACE */
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index 087c05aa960ea..91e44ac7150f9 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -1179,9 +1179,6 @@ void sve_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
+  */
+ u64 read_zcr_features(void)
+ {
+-	u64 zcr;
+-	unsigned int vq_max;
+-
+ 	/*
+ 	 * Set the maximum possible VL, and write zeroes to all other
+ 	 * bits to see if they stick.
+@@ -1189,12 +1186,8 @@ u64 read_zcr_features(void)
+ 	sve_kernel_enable(NULL);
+ 	write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL1);
+ 
+-	zcr = read_sysreg_s(SYS_ZCR_EL1);
+-	zcr &= ~(u64)ZCR_ELx_LEN_MASK; /* find sticky 1s outside LEN field */
+-	vq_max = sve_vq_from_vl(sve_get_vl());
+-	zcr |= vq_max - 1; /* set LEN field to maximum effective value */
+-
+-	return zcr;
++	/* Return LEN value that would be written to get the maximum VL */
++	return sve_vq_from_vl(sve_get_vl()) - 1;
+ }
+ 
+ void __init sve_setup(void)
+@@ -1349,9 +1342,6 @@ void fa64_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
+  */
+ u64 read_smcr_features(void)
+ {
+-	u64 smcr;
+-	unsigned int vq_max;
+-
+ 	sme_kernel_enable(NULL);
+ 
+ 	/*
+@@ -1360,12 +1350,8 @@ u64 read_smcr_features(void)
+ 	write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_LEN_MASK,
+ 		       SYS_SMCR_EL1);
+ 
+-	smcr = read_sysreg_s(SYS_SMCR_EL1);
+-	smcr &= ~(u64)SMCR_ELx_LEN_MASK; /* Only the LEN field */
+-	vq_max = sve_vq_from_vl(sme_get_vl());
+-	smcr |= vq_max - 1; /* set LEN field to maximum effective value */
+-
+-	return smcr;
++	/* Return LEN value that would be written to get the maximum VL */
++	return sve_vq_from_vl(sme_get_vl()) - 1;
+ }
+ 
+ void __init sme_setup(void)
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index 187aa2b175b4f..20d7ef82de90a 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -891,7 +891,8 @@ static int sve_set_common(struct task_struct *target,
+ 			break;
+ 		default:
+ 			WARN_ON_ONCE(1);
+-			return -EINVAL;
++			ret = -EINVAL;
++			goto out;
+ 		}
+ 
+ 		/*
+diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
+index 830be01af32db..255d12f881c26 100644
+--- a/arch/arm64/kernel/sdei.c
++++ b/arch/arm64/kernel/sdei.c
+@@ -47,6 +47,9 @@ DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_normal_ptr);
+ DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_critical_ptr);
+ #endif
+ 
++DEFINE_PER_CPU(struct sdei_registered_event *, sdei_active_normal_event);
++DEFINE_PER_CPU(struct sdei_registered_event *, sdei_active_critical_event);
++
+ static void _free_sdei_stack(unsigned long * __percpu *ptr, int cpu)
+ {
+ 	unsigned long *p;
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index edd63894d61e8..960b98b43506d 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -1044,10 +1044,8 @@ void crash_smp_send_stop(void)
+ 	 * If this cpu is the only one alive at this point in time, online or
+ 	 * not, there are no stop messages to be sent around, so just back out.
+ 	 */
+-	if (num_other_online_cpus() == 0) {
+-		sdei_mask_local_cpu();
+-		return;
+-	}
++	if (num_other_online_cpus() == 0)
++		goto skip_ipi;
+ 
+ 	cpumask_copy(&mask, cpu_online_mask);
+ 	cpumask_clear_cpu(smp_processor_id(), &mask);
+@@ -1066,7 +1064,9 @@ void crash_smp_send_stop(void)
+ 		pr_warn("SMP: failed to stop secondary CPUs %*pbl\n",
+ 			cpumask_pr_args(&mask));
+ 
++skip_ipi:
+ 	sdei_mask_local_cpu();
++	sdei_handler_abort();
+ }
+ 
+ bool smp_crash_stop_failed(void)
+diff --git a/arch/arm64/lib/csum.c b/arch/arm64/lib/csum.c
+index 78b87a64ca0a3..2432683e48a61 100644
+--- a/arch/arm64/lib/csum.c
++++ b/arch/arm64/lib/csum.c
+@@ -24,7 +24,7 @@ unsigned int __no_sanitize_address do_csum(const unsigned char *buff, int len)
+ 	const u64 *ptr;
+ 	u64 data, sum64 = 0;
+ 
+-	if (unlikely(len == 0))
++	if (unlikely(len <= 0))
+ 		return 0;
+ 
+ 	offset = (unsigned long)buff & 7;
+diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
+index 21716c9406821..9c52718ea7509 100644
+--- a/arch/arm64/mm/hugetlbpage.c
++++ b/arch/arm64/mm/hugetlbpage.c
+@@ -236,7 +236,7 @@ static void clear_flush(struct mm_struct *mm,
+ 	unsigned long i, saddr = addr;
+ 
+ 	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
+-		pte_clear(mm, addr, ptep);
++		ptep_clear(mm, addr, ptep);
+ 
+ 	flush_tlb_range(&vma, saddr, addr);
+ }
+diff --git a/arch/loongarch/include/asm/irq.h b/arch/loongarch/include/asm/irq.h
+index a115e8999c69e..218b4da0ea90d 100644
+--- a/arch/loongarch/include/asm/irq.h
++++ b/arch/loongarch/include/asm/irq.h
+@@ -40,7 +40,7 @@ void spurious_interrupt(void);
+ #define NR_IRQS_LEGACY 16
+ 
+ #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
+-void arch_trigger_cpumask_backtrace(const struct cpumask *mask, bool exclude_self);
++void arch_trigger_cpumask_backtrace(const struct cpumask *mask, int exclude_cpu);
+ 
+ #define MAX_IO_PICS 2
+ #define NR_IRQS	(64 + (256 * MAX_IO_PICS))
+diff --git a/arch/loongarch/include/asm/local.h b/arch/loongarch/include/asm/local.h
+index 83e995b30e472..c49675852bdcd 100644
+--- a/arch/loongarch/include/asm/local.h
++++ b/arch/loongarch/include/asm/local.h
+@@ -63,8 +63,8 @@ static inline long local_cmpxchg(local_t *l, long old, long new)
+ 
+ static inline bool local_try_cmpxchg(local_t *l, long *old, long new)
+ {
+-	typeof(l->a.counter) *__old = (typeof(l->a.counter) *) old;
+-	return try_cmpxchg_local(&l->a.counter, __old, new);
++	return try_cmpxchg_local(&l->a.counter,
++				 (typeof(l->a.counter) *) old, new);
+ }
+ 
+ #define local_xchg(l, n) (atomic_long_xchg((&(l)->a), (n)))
+diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
+index 38afeb7dd58b6..0ac6afa4a825b 100644
+--- a/arch/loongarch/include/asm/pgtable.h
++++ b/arch/loongarch/include/asm/pgtable.h
+@@ -593,6 +593,9 @@ static inline long pmd_protnone(pmd_t pmd)
+ }
+ #endif /* CONFIG_NUMA_BALANCING */
+ 
++#define pmd_leaf(pmd)		((pmd_val(pmd) & _PAGE_HUGE) != 0)
++#define pud_leaf(pud)		((pud_val(pud) & _PAGE_HUGE) != 0)
++
+ /*
+  * We provide our own get_unmapped area to cope with the virtual aliasing
+  * constraints placed on us by the cache architecture.
+diff --git a/arch/loongarch/kernel/process.c b/arch/loongarch/kernel/process.c
+index 4ee1e9d6a65f1..ba457e43f5be5 100644
+--- a/arch/loongarch/kernel/process.c
++++ b/arch/loongarch/kernel/process.c
+@@ -338,9 +338,9 @@ static void raise_backtrace(cpumask_t *mask)
+ 	}
+ }
+ 
+-void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
++void arch_trigger_cpumask_backtrace(const cpumask_t *mask, int exclude_cpu)
+ {
+-	nmi_trigger_cpumask_backtrace(mask, exclude_self, raise_backtrace);
++	nmi_trigger_cpumask_backtrace(mask, exclude_cpu, raise_backtrace);
+ }
+ 
+ #ifdef CONFIG_64BIT
+diff --git a/arch/mips/include/asm/irq.h b/arch/mips/include/asm/irq.h
+index 75abfa834ab7a..3a848e7e69f71 100644
+--- a/arch/mips/include/asm/irq.h
++++ b/arch/mips/include/asm/irq.h
+@@ -77,7 +77,7 @@ extern int cp0_fdc_irq;
+ extern int get_c0_fdc_int(void);
+ 
+ void arch_trigger_cpumask_backtrace(const struct cpumask *mask,
+-				    bool exclude_self);
++				    int exclude_cpu);
+ #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
+ 
+ #endif /* _ASM_IRQ_H */
+diff --git a/arch/mips/include/asm/local.h b/arch/mips/include/asm/local.h
+index 5daf6fe8e3e9a..e6ae3df0349d2 100644
+--- a/arch/mips/include/asm/local.h
++++ b/arch/mips/include/asm/local.h
+@@ -101,8 +101,8 @@ static __inline__ long local_cmpxchg(local_t *l, long old, long new)
+ 
+ static __inline__ bool local_try_cmpxchg(local_t *l, long *old, long new)
+ {
+-	typeof(l->a.counter) *__old = (typeof(l->a.counter) *) old;
+-	return try_cmpxchg_local(&l->a.counter, __old, new);
++	return try_cmpxchg_local(&l->a.counter,
++				 (typeof(l->a.counter) *) old, new);
+ }
+ 
+ #define local_xchg(l, n) (atomic_long_xchg((&(l)->a), (n)))
+diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
+index a3225912c862d..5387ed0a51862 100644
+--- a/arch/mips/kernel/process.c
++++ b/arch/mips/kernel/process.c
+@@ -750,9 +750,9 @@ static void raise_backtrace(cpumask_t *mask)
+ 	}
+ }
+ 
+-void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
++void arch_trigger_cpumask_backtrace(const cpumask_t *mask, int exclude_cpu)
+ {
+-	nmi_trigger_cpumask_backtrace(mask, exclude_self, raise_backtrace);
++	nmi_trigger_cpumask_backtrace(mask, exclude_cpu, raise_backtrace);
+ }
+ 
+ int mips_get_process_fp_mode(struct task_struct *task)
+diff --git a/arch/parisc/include/asm/runway.h b/arch/parisc/include/asm/runway.h
+index 5cf061376ddb1..2837f0223d6d3 100644
+--- a/arch/parisc/include/asm/runway.h
++++ b/arch/parisc/include/asm/runway.h
+@@ -2,9 +2,6 @@
+ #ifndef ASM_PARISC_RUNWAY_H
+ #define ASM_PARISC_RUNWAY_H
+ 
+-/* declared in arch/parisc/kernel/setup.c */
+-extern struct proc_dir_entry * proc_runway_root;
+-
+ #define RUNWAY_STATUS	0x10
+ #define RUNWAY_DEBUG	0x40
+ 
+diff --git a/arch/parisc/kernel/processor.c b/arch/parisc/kernel/processor.c
+index 762289b9984ea..a0e2d37c5b3b5 100644
+--- a/arch/parisc/kernel/processor.c
++++ b/arch/parisc/kernel/processor.c
+@@ -378,10 +378,18 @@ int
+ show_cpuinfo (struct seq_file *m, void *v)
+ {
+ 	unsigned long cpu;
++	char cpu_name[60], *p;
++
++	/* strip PA path from CPU name to not confuse lscpu */
++	strlcpy(cpu_name, per_cpu(cpu_data, 0).dev->name, sizeof(cpu_name));
++	p = strrchr(cpu_name, '[');
++	if (p)
++		*(--p) = 0;
+ 
+ 	for_each_online_cpu(cpu) {
+-		const struct cpuinfo_parisc *cpuinfo = &per_cpu(cpu_data, cpu);
+ #ifdef CONFIG_SMP
++		const struct cpuinfo_parisc *cpuinfo = &per_cpu(cpu_data, cpu);
++
+ 		if (0 == cpuinfo->hpa)
+ 			continue;
+ #endif
+@@ -426,8 +434,7 @@ show_cpuinfo (struct seq_file *m, void *v)
+ 
+ 		seq_printf(m, "model\t\t: %s - %s\n",
+ 				 boot_cpu_data.pdc.sys_model_name,
+-				 cpuinfo->dev ?
+-				 cpuinfo->dev->name : "Unknown");
++				 cpu_name);
+ 
+ 		seq_printf(m, "hversion\t: 0x%08x\n"
+ 			        "sversion\t: 0x%08x\n",
+diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h
+index 91c049d51d0e1..2edc6269b1a35 100644
+--- a/arch/powerpc/include/asm/ftrace.h
++++ b/arch/powerpc/include/asm/ftrace.h
+@@ -12,7 +12,7 @@
+ 
+ /* Ignore unused weak functions which will have larger offsets */
+ #ifdef CONFIG_MPROFILE_KERNEL
+-#define FTRACE_MCOUNT_MAX_OFFSET	12
++#define FTRACE_MCOUNT_MAX_OFFSET	16
+ #elif defined(CONFIG_PPC32)
+ #define FTRACE_MCOUNT_MAX_OFFSET	8
+ #endif
+diff --git a/arch/powerpc/include/asm/irq.h b/arch/powerpc/include/asm/irq.h
+index f257cacb49a9c..ba1a5974e7143 100644
+--- a/arch/powerpc/include/asm/irq.h
++++ b/arch/powerpc/include/asm/irq.h
+@@ -55,7 +55,7 @@ int irq_choose_cpu(const struct cpumask *mask);
+ 
+ #if defined(CONFIG_PPC_BOOK3S_64) && defined(CONFIG_NMI_IPI)
+ extern void arch_trigger_cpumask_backtrace(const cpumask_t *mask,
+-					   bool exclude_self);
++					   int exclude_cpu);
+ #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
+ #endif
+ 
+diff --git a/arch/powerpc/include/asm/lppaca.h b/arch/powerpc/include/asm/lppaca.h
+index 34d44cb17c874..ee1488d38fdc1 100644
+--- a/arch/powerpc/include/asm/lppaca.h
++++ b/arch/powerpc/include/asm/lppaca.h
+@@ -45,6 +45,7 @@
+ #include <asm/types.h>
+ #include <asm/mmu.h>
+ #include <asm/firmware.h>
++#include <asm/paca.h>
+ 
+ /*
+  * The lppaca is the "virtual processor area" registered with the hypervisor,
+@@ -127,13 +128,23 @@ struct lppaca {
+  */
+ #define LPPACA_OLD_SHARED_PROC		2
+ 
+-static inline bool lppaca_shared_proc(struct lppaca *l)
++#ifdef CONFIG_PPC_PSERIES
++/*
++ * All CPUs should have the same shared proc value, so directly access the PACA
++ * to avoid false positives from DEBUG_PREEMPT.
++ */
++static inline bool lppaca_shared_proc(void)
+ {
++	struct lppaca *l = local_paca->lppaca_ptr;
++
+ 	if (!firmware_has_feature(FW_FEATURE_SPLPAR))
+ 		return false;
+ 	return !!(l->__old_status & LPPACA_OLD_SHARED_PROC);
+ }
+ 
++#define get_lppaca()	(get_paca()->lppaca_ptr)
++#endif
++
+ /*
+  * SLB shadow buffer structure as defined in the PAPR.  The save_area
+  * contains adjacent ESID and VSID pairs for each shadowed SLB.  The
+diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
+index cb325938766a5..e667d455ecb41 100644
+--- a/arch/powerpc/include/asm/paca.h
++++ b/arch/powerpc/include/asm/paca.h
+@@ -15,7 +15,6 @@
+ #include <linux/cache.h>
+ #include <linux/string.h>
+ #include <asm/types.h>
+-#include <asm/lppaca.h>
+ #include <asm/mmu.h>
+ #include <asm/page.h>
+ #ifdef CONFIG_PPC_BOOK3E_64
+@@ -47,14 +46,11 @@ extern unsigned int debug_smp_processor_id(void); /* from linux/smp.h */
+ #define get_paca()	local_paca
+ #endif
+ 
+-#ifdef CONFIG_PPC_PSERIES
+-#define get_lppaca()	(get_paca()->lppaca_ptr)
+-#endif
+-
+ #define get_slb_shadow()	(get_paca()->slb_shadow_ptr)
+ 
+ struct task_struct;
+ struct rtas_args;
++struct lppaca;
+ 
+ /*
+  * Defines the layout of the paca.
+diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
+index f5ba1a3c41f8e..e08513d731193 100644
+--- a/arch/powerpc/include/asm/paravirt.h
++++ b/arch/powerpc/include/asm/paravirt.h
+@@ -6,6 +6,7 @@
+ #include <asm/smp.h>
+ #ifdef CONFIG_PPC64
+ #include <asm/paca.h>
++#include <asm/lppaca.h>
+ #include <asm/hvcall.h>
+ #endif
+ 
+diff --git a/arch/powerpc/include/asm/plpar_wrappers.h b/arch/powerpc/include/asm/plpar_wrappers.h
+index 8239c0af5eb2b..fe3d0ea0058ac 100644
+--- a/arch/powerpc/include/asm/plpar_wrappers.h
++++ b/arch/powerpc/include/asm/plpar_wrappers.h
+@@ -9,6 +9,7 @@
+ 
+ #include <asm/hvcall.h>
+ #include <asm/paca.h>
++#include <asm/lppaca.h>
+ #include <asm/page.h>
+ 
+ static inline long poll_pending(void)
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index ea0a073abd969..3ff2da7b120b5 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -654,6 +654,7 @@ int __init fadump_reserve_mem(void)
+ 	return ret;
+ error_out:
+ 	fw_dump.fadump_enabled = 0;
++	fw_dump.reserve_dump_area_size = 0;
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
+index c52449ae6936a..14251bc5219eb 100644
+--- a/arch/powerpc/kernel/iommu.c
++++ b/arch/powerpc/kernel/iommu.c
+@@ -172,17 +172,28 @@ static int fail_iommu_bus_notify(struct notifier_block *nb,
+ 	return 0;
+ }
+ 
+-static struct notifier_block fail_iommu_bus_notifier = {
++/*
++ * PCI and VIO buses need separate notifier_block structs, since they're linked
++ * list nodes.  Sharing a notifier_block would mean that any notifiers later
++ * registered for PCI buses would also get called by VIO buses and vice versa.
++ */
++static struct notifier_block fail_iommu_pci_bus_notifier = {
+ 	.notifier_call = fail_iommu_bus_notify
+ };
+ 
++#ifdef CONFIG_IBMVIO
++static struct notifier_block fail_iommu_vio_bus_notifier = {
++	.notifier_call = fail_iommu_bus_notify
++};
++#endif
++
+ static int __init fail_iommu_setup(void)
+ {
+ #ifdef CONFIG_PCI
+-	bus_register_notifier(&pci_bus_type, &fail_iommu_bus_notifier);
++	bus_register_notifier(&pci_bus_type, &fail_iommu_pci_bus_notifier);
+ #endif
+ #ifdef CONFIG_IBMVIO
+-	bus_register_notifier(&vio_bus_type, &fail_iommu_bus_notifier);
++	bus_register_notifier(&vio_bus_type, &fail_iommu_vio_bus_notifier);
+ #endif
+ 
+ 	return 0;
+diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
+index 5de8597eaab8d..b15f15dcacb5c 100644
+--- a/arch/powerpc/kernel/stacktrace.c
++++ b/arch/powerpc/kernel/stacktrace.c
+@@ -221,8 +221,8 @@ static void raise_backtrace_ipi(cpumask_t *mask)
+ 	}
+ }
+ 
+-void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
++void arch_trigger_cpumask_backtrace(const cpumask_t *mask, int exclude_cpu)
+ {
+-	nmi_trigger_cpumask_backtrace(mask, exclude_self, raise_backtrace_ipi);
++	nmi_trigger_cpumask_backtrace(mask, exclude_cpu, raise_backtrace_ipi);
+ }
+ #endif /* defined(CONFIG_PPC_BOOK3S_64) && defined(CONFIG_NMI_IPI) */
+diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
+index edb2dd1f53ebc..8c464a5d82469 100644
+--- a/arch/powerpc/kernel/watchdog.c
++++ b/arch/powerpc/kernel/watchdog.c
+@@ -245,7 +245,7 @@ static void watchdog_smp_panic(int cpu)
+ 			__cpumask_clear_cpu(c, &wd_smp_cpus_ipi);
+ 		}
+ 	} else {
+-		trigger_allbutself_cpu_backtrace();
++		trigger_allbutcpu_cpu_backtrace(cpu);
+ 		cpumask_clear(&wd_smp_cpus_ipi);
+ 	}
+ 
+@@ -416,7 +416,7 @@ DEFINE_INTERRUPT_HANDLER_NMI(soft_nmi_interrupt)
+ 		xchg(&__wd_nmi_output, 1); // see wd_lockup_ipi
+ 
+ 		if (sysctl_hardlockup_all_cpu_backtrace)
+-			trigger_allbutself_cpu_backtrace();
++			trigger_allbutcpu_cpu_backtrace(cpu);
+ 
+ 		if (hardlockup_panic)
+ 			nmi_panic(regs, "Hard LOCKUP");
+diff --git a/arch/powerpc/kvm/book3s_hv_ras.c b/arch/powerpc/kvm/book3s_hv_ras.c
+index ccfd969656306..82be6d87514b7 100644
+--- a/arch/powerpc/kvm/book3s_hv_ras.c
++++ b/arch/powerpc/kvm/book3s_hv_ras.c
+@@ -9,6 +9,7 @@
+ #include <linux/kvm.h>
+ #include <linux/kvm_host.h>
+ #include <linux/kernel.h>
++#include <asm/lppaca.h>
+ #include <asm/opal.h>
+ #include <asm/mce.h>
+ #include <asm/machdep.h>
+diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
+index 0bd4866d98241..9383606c5e6e0 100644
+--- a/arch/powerpc/mm/book3s64/radix_tlb.c
++++ b/arch/powerpc/mm/book3s64/radix_tlb.c
+@@ -127,21 +127,6 @@ static __always_inline void __tlbie_pid(unsigned long pid, unsigned long ric)
+ 	trace_tlbie(0, 0, rb, rs, ric, prs, r);
+ }
+ 
+-static __always_inline void __tlbie_pid_lpid(unsigned long pid,
+-					     unsigned long lpid,
+-					     unsigned long ric)
+-{
+-	unsigned long rb, rs, prs, r;
+-
+-	rb = PPC_BIT(53); /* IS = 1 */
+-	rs = (pid << PPC_BITLSHIFT(31)) | (lpid & ~(PPC_BITMASK(0, 31)));
+-	prs = 1; /* process scoped */
+-	r = 1;   /* radix format */
+-
+-	asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
+-		     : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory");
+-	trace_tlbie(0, 0, rb, rs, ric, prs, r);
+-}
+ static __always_inline void __tlbie_lpid(unsigned long lpid, unsigned long ric)
+ {
+ 	unsigned long rb,rs,prs,r;
+@@ -202,23 +187,6 @@ static __always_inline void __tlbie_va(unsigned long va, unsigned long pid,
+ 	trace_tlbie(0, 0, rb, rs, ric, prs, r);
+ }
+ 
+-static __always_inline void __tlbie_va_lpid(unsigned long va, unsigned long pid,
+-					    unsigned long lpid,
+-					    unsigned long ap, unsigned long ric)
+-{
+-	unsigned long rb, rs, prs, r;
+-
+-	rb = va & ~(PPC_BITMASK(52, 63));
+-	rb |= ap << PPC_BITLSHIFT(58);
+-	rs = (pid << PPC_BITLSHIFT(31)) | (lpid & ~(PPC_BITMASK(0, 31)));
+-	prs = 1; /* process scoped */
+-	r = 1;   /* radix format */
+-
+-	asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
+-		     : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory");
+-	trace_tlbie(0, 0, rb, rs, ric, prs, r);
+-}
+-
+ static __always_inline void __tlbie_lpid_va(unsigned long va, unsigned long lpid,
+ 					    unsigned long ap, unsigned long ric)
+ {
+@@ -264,22 +232,6 @@ static inline void fixup_tlbie_va_range(unsigned long va, unsigned long pid,
+ 	}
+ }
+ 
+-static inline void fixup_tlbie_va_range_lpid(unsigned long va,
+-					     unsigned long pid,
+-					     unsigned long lpid,
+-					     unsigned long ap)
+-{
+-	if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) {
+-		asm volatile("ptesync" : : : "memory");
+-		__tlbie_pid_lpid(0, lpid, RIC_FLUSH_TLB);
+-	}
+-
+-	if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) {
+-		asm volatile("ptesync" : : : "memory");
+-		__tlbie_va_lpid(va, pid, lpid, ap, RIC_FLUSH_TLB);
+-	}
+-}
+-
+ static inline void fixup_tlbie_pid(unsigned long pid)
+ {
+ 	/*
+@@ -299,26 +251,6 @@ static inline void fixup_tlbie_pid(unsigned long pid)
+ 	}
+ }
+ 
+-static inline void fixup_tlbie_pid_lpid(unsigned long pid, unsigned long lpid)
+-{
+-	/*
+-	 * We can use any address for the invalidation, pick one which is
+-	 * probably unused as an optimisation.
+-	 */
+-	unsigned long va = ((1UL << 52) - 1);
+-
+-	if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) {
+-		asm volatile("ptesync" : : : "memory");
+-		__tlbie_pid_lpid(0, lpid, RIC_FLUSH_TLB);
+-	}
+-
+-	if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) {
+-		asm volatile("ptesync" : : : "memory");
+-		__tlbie_va_lpid(va, pid, lpid, mmu_get_ap(MMU_PAGE_64K),
+-				RIC_FLUSH_TLB);
+-	}
+-}
+-
+ static inline void fixup_tlbie_lpid_va(unsigned long va, unsigned long lpid,
+ 				       unsigned long ap)
+ {
+@@ -416,31 +348,6 @@ static inline void _tlbie_pid(unsigned long pid, unsigned long ric)
+ 	asm volatile("eieio; tlbsync; ptesync": : :"memory");
+ }
+ 
+-static inline void _tlbie_pid_lpid(unsigned long pid, unsigned long lpid,
+-				   unsigned long ric)
+-{
+-	asm volatile("ptesync" : : : "memory");
+-
+-	/*
+-	 * Workaround the fact that the "ric" argument to __tlbie_pid
+-	 * must be a compile-time contraint to match the "i" constraint
+-	 * in the asm statement.
+-	 */
+-	switch (ric) {
+-	case RIC_FLUSH_TLB:
+-		__tlbie_pid_lpid(pid, lpid, RIC_FLUSH_TLB);
+-		fixup_tlbie_pid_lpid(pid, lpid);
+-		break;
+-	case RIC_FLUSH_PWC:
+-		__tlbie_pid_lpid(pid, lpid, RIC_FLUSH_PWC);
+-		break;
+-	case RIC_FLUSH_ALL:
+-	default:
+-		__tlbie_pid_lpid(pid, lpid, RIC_FLUSH_ALL);
+-		fixup_tlbie_pid_lpid(pid, lpid);
+-	}
+-	asm volatile("eieio; tlbsync; ptesync" : : : "memory");
+-}
+ struct tlbiel_pid {
+ 	unsigned long pid;
+ 	unsigned long ric;
+@@ -566,20 +473,6 @@ static inline void __tlbie_va_range(unsigned long start, unsigned long end,
+ 	fixup_tlbie_va_range(addr - page_size, pid, ap);
+ }
+ 
+-static inline void __tlbie_va_range_lpid(unsigned long start, unsigned long end,
+-					 unsigned long pid, unsigned long lpid,
+-					 unsigned long page_size,
+-					 unsigned long psize)
+-{
+-	unsigned long addr;
+-	unsigned long ap = mmu_get_ap(psize);
+-
+-	for (addr = start; addr < end; addr += page_size)
+-		__tlbie_va_lpid(addr, pid, lpid, ap, RIC_FLUSH_TLB);
+-
+-	fixup_tlbie_va_range_lpid(addr - page_size, pid, lpid, ap);
+-}
+-
+ static __always_inline void _tlbie_va(unsigned long va, unsigned long pid,
+ 				      unsigned long psize, unsigned long ric)
+ {
+@@ -660,18 +553,6 @@ static inline void _tlbie_va_range(unsigned long start, unsigned long end,
+ 	asm volatile("eieio; tlbsync; ptesync": : :"memory");
+ }
+ 
+-static inline void _tlbie_va_range_lpid(unsigned long start, unsigned long end,
+-					unsigned long pid, unsigned long lpid,
+-					unsigned long page_size,
+-					unsigned long psize, bool also_pwc)
+-{
+-	asm volatile("ptesync" : : : "memory");
+-	if (also_pwc)
+-		__tlbie_pid_lpid(pid, lpid, RIC_FLUSH_PWC);
+-	__tlbie_va_range_lpid(start, end, pid, lpid, page_size, psize);
+-	asm volatile("eieio; tlbsync; ptesync" : : : "memory");
+-}
+-
+ static inline void _tlbiel_va_range_multicast(struct mm_struct *mm,
+ 				unsigned long start, unsigned long end,
+ 				unsigned long pid, unsigned long page_size,
+@@ -1486,6 +1367,127 @@ void radix__flush_tlb_all(void)
+ }
+ 
+ #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
++static __always_inline void __tlbie_pid_lpid(unsigned long pid,
++					     unsigned long lpid,
++					     unsigned long ric)
++{
++	unsigned long rb, rs, prs, r;
++
++	rb = PPC_BIT(53); /* IS = 1 */
++	rs = (pid << PPC_BITLSHIFT(31)) | (lpid & ~(PPC_BITMASK(0, 31)));
++	prs = 1; /* process scoped */
++	r = 1;   /* radix format */
++
++	asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
++		     : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory");
++	trace_tlbie(0, 0, rb, rs, ric, prs, r);
++}
++
++static __always_inline void __tlbie_va_lpid(unsigned long va, unsigned long pid,
++					    unsigned long lpid,
++					    unsigned long ap, unsigned long ric)
++{
++	unsigned long rb, rs, prs, r;
++
++	rb = va & ~(PPC_BITMASK(52, 63));
++	rb |= ap << PPC_BITLSHIFT(58);
++	rs = (pid << PPC_BITLSHIFT(31)) | (lpid & ~(PPC_BITMASK(0, 31)));
++	prs = 1; /* process scoped */
++	r = 1;   /* radix format */
++
++	asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
++		     : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory");
++	trace_tlbie(0, 0, rb, rs, ric, prs, r);
++}
++
++static inline void fixup_tlbie_pid_lpid(unsigned long pid, unsigned long lpid)
++{
++	/*
++	 * We can use any address for the invalidation, pick one which is
++	 * probably unused as an optimisation.
++	 */
++	unsigned long va = ((1UL << 52) - 1);
++
++	if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) {
++		asm volatile("ptesync" : : : "memory");
++		__tlbie_pid_lpid(0, lpid, RIC_FLUSH_TLB);
++	}
++
++	if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) {
++		asm volatile("ptesync" : : : "memory");
++		__tlbie_va_lpid(va, pid, lpid, mmu_get_ap(MMU_PAGE_64K),
++				RIC_FLUSH_TLB);
++	}
++}
++
++static inline void _tlbie_pid_lpid(unsigned long pid, unsigned long lpid,
++				   unsigned long ric)
++{
++	asm volatile("ptesync" : : : "memory");
++
++	/*
++	 * Workaround the fact that the "ric" argument to __tlbie_pid
++	 * must be a compile-time contraint to match the "i" constraint
++	 * in the asm statement.
++	 */
++	switch (ric) {
++	case RIC_FLUSH_TLB:
++		__tlbie_pid_lpid(pid, lpid, RIC_FLUSH_TLB);
++		fixup_tlbie_pid_lpid(pid, lpid);
++		break;
++	case RIC_FLUSH_PWC:
++		__tlbie_pid_lpid(pid, lpid, RIC_FLUSH_PWC);
++		break;
++	case RIC_FLUSH_ALL:
++	default:
++		__tlbie_pid_lpid(pid, lpid, RIC_FLUSH_ALL);
++		fixup_tlbie_pid_lpid(pid, lpid);
++	}
++	asm volatile("eieio; tlbsync; ptesync" : : : "memory");
++}
++
++static inline void fixup_tlbie_va_range_lpid(unsigned long va,
++					     unsigned long pid,
++					     unsigned long lpid,
++					     unsigned long ap)
++{
++	if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) {
++		asm volatile("ptesync" : : : "memory");
++		__tlbie_pid_lpid(0, lpid, RIC_FLUSH_TLB);
++	}
++
++	if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) {
++		asm volatile("ptesync" : : : "memory");
++		__tlbie_va_lpid(va, pid, lpid, ap, RIC_FLUSH_TLB);
++	}
++}
++
++static inline void __tlbie_va_range_lpid(unsigned long start, unsigned long end,
++					 unsigned long pid, unsigned long lpid,
++					 unsigned long page_size,
++					 unsigned long psize)
++{
++	unsigned long addr;
++	unsigned long ap = mmu_get_ap(psize);
++
++	for (addr = start; addr < end; addr += page_size)
++		__tlbie_va_lpid(addr, pid, lpid, ap, RIC_FLUSH_TLB);
++
++	fixup_tlbie_va_range_lpid(addr - page_size, pid, lpid, ap);
++}
++
++static inline void _tlbie_va_range_lpid(unsigned long start, unsigned long end,
++					unsigned long pid, unsigned long lpid,
++					unsigned long page_size,
++					unsigned long psize, bool also_pwc)
++{
++	asm volatile("ptesync" : : : "memory");
++	if (also_pwc)
++		__tlbie_pid_lpid(pid, lpid, RIC_FLUSH_PWC);
++	__tlbie_va_range_lpid(start, end, pid, lpid, page_size, psize);
++	asm volatile("eieio; tlbsync; ptesync" : : : "memory");
++}
++
+ /*
+  * Performs process-scoped invalidations for a given LPID
+  * as part of H_RPT_INVALIDATE hcall.
+diff --git a/arch/powerpc/mm/book3s64/slb.c b/arch/powerpc/mm/book3s64/slb.c
+index 6956f637a38c1..f2708c8629a52 100644
+--- a/arch/powerpc/mm/book3s64/slb.c
++++ b/arch/powerpc/mm/book3s64/slb.c
+@@ -13,6 +13,7 @@
+ #include <asm/mmu.h>
+ #include <asm/mmu_context.h>
+ #include <asm/paca.h>
++#include <asm/lppaca.h>
+ #include <asm/ppc-opcode.h>
+ #include <asm/cputable.h>
+ #include <asm/cacheflush.h>
+diff --git a/arch/powerpc/perf/core-fsl-emb.c b/arch/powerpc/perf/core-fsl-emb.c
+index ee721f420a7ba..1a53ab08447cb 100644
+--- a/arch/powerpc/perf/core-fsl-emb.c
++++ b/arch/powerpc/perf/core-fsl-emb.c
+@@ -645,7 +645,6 @@ static void perf_event_interrupt(struct pt_regs *regs)
+ 	struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events);
+ 	struct perf_event *event;
+ 	unsigned long val;
+-	int found = 0;
+ 
+ 	for (i = 0; i < ppmu->n_counter; ++i) {
+ 		event = cpuhw->event[i];
+@@ -654,7 +653,6 @@ static void perf_event_interrupt(struct pt_regs *regs)
+ 		if ((int)val < 0) {
+ 			if (event) {
+ 				/* event has overflowed */
+-				found = 1;
+ 				record_and_restart(event, val, regs);
+ 			} else {
+ 				/*
+@@ -672,11 +670,13 @@ static void perf_event_interrupt(struct pt_regs *regs)
+ 	isync();
+ }
+ 
+-void hw_perf_event_setup(int cpu)
++static int fsl_emb_pmu_prepare_cpu(unsigned int cpu)
+ {
+ 	struct cpu_hw_events *cpuhw = &per_cpu(cpu_hw_events, cpu);
+ 
+ 	memset(cpuhw, 0, sizeof(*cpuhw));
++
++	return 0;
+ }
+ 
+ int register_fsl_emb_pmu(struct fsl_emb_pmu *pmu)
+@@ -689,6 +689,8 @@ int register_fsl_emb_pmu(struct fsl_emb_pmu *pmu)
+ 		pmu->name);
+ 
+ 	perf_pmu_register(&fsl_emb_pmu, "cpu", PERF_TYPE_RAW);
++	cpuhp_setup_state(CPUHP_PERF_POWER, "perf/powerpc:prepare",
++			  fsl_emb_pmu_prepare_cpu, NULL);
+ 
+ 	return 0;
+ }
+diff --git a/arch/powerpc/platforms/pseries/hvCall.S b/arch/powerpc/platforms/pseries/hvCall.S
+index 35254ac7af5ee..ca0674b0b683e 100644
+--- a/arch/powerpc/platforms/pseries/hvCall.S
++++ b/arch/powerpc/platforms/pseries/hvCall.S
+@@ -91,6 +91,7 @@ BEGIN_FTR_SECTION;						\
+ 	b	1f;						\
+ END_FTR_SECTION(0, 1);						\
+ 	LOAD_REG_ADDR(r12, hcall_tracepoint_refcount) ;		\
++	ld	r12,0(r12);					\
+ 	std	r12,32(r1);					\
+ 	cmpdi	r12,0;						\
+ 	bne-	LABEL;						\
+diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
+index 2eab323f69706..cb2f1211f7ebf 100644
+--- a/arch/powerpc/platforms/pseries/lpar.c
++++ b/arch/powerpc/platforms/pseries/lpar.c
+@@ -639,16 +639,8 @@ static const struct proc_ops vcpudispatch_stats_freq_proc_ops = {
+ 
+ static int __init vcpudispatch_stats_procfs_init(void)
+ {
+-	/*
+-	 * Avoid smp_processor_id while preemptible. All CPUs should have
+-	 * the same value for lppaca_shared_proc.
+-	 */
+-	preempt_disable();
+-	if (!lppaca_shared_proc(get_lppaca())) {
+-		preempt_enable();
++	if (!lppaca_shared_proc())
+ 		return 0;
+-	}
+-	preempt_enable();
+ 
+ 	if (!proc_create("powerpc/vcpudispatch_stats", 0600, NULL,
+ 					&vcpudispatch_stats_proc_ops))
+diff --git a/arch/powerpc/platforms/pseries/lparcfg.c b/arch/powerpc/platforms/pseries/lparcfg.c
+index 8acc705095209..1c151d77e74b3 100644
+--- a/arch/powerpc/platforms/pseries/lparcfg.c
++++ b/arch/powerpc/platforms/pseries/lparcfg.c
+@@ -206,7 +206,7 @@ static void parse_ppp_data(struct seq_file *m)
+ 	           ppp_data.active_system_procs);
+ 
+ 	/* pool related entries are appropriate for shared configs */
+-	if (lppaca_shared_proc(get_lppaca())) {
++	if (lppaca_shared_proc()) {
+ 		unsigned long pool_idle_time, pool_procs;
+ 
+ 		seq_printf(m, "pool=%d\n", ppp_data.pool_num);
+@@ -560,7 +560,7 @@ static int pseries_lparcfg_data(struct seq_file *m, void *v)
+ 		   partition_potential_processors);
+ 
+ 	seq_printf(m, "shared_processor_mode=%d\n",
+-		   lppaca_shared_proc(get_lppaca()));
++		   lppaca_shared_proc());
+ 
+ #ifdef CONFIG_PPC_64S_HASH_MMU
+ 	if (!radix_enabled())
+diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
+index e2a57cfa6c837..0ef2a7e014aa1 100644
+--- a/arch/powerpc/platforms/pseries/setup.c
++++ b/arch/powerpc/platforms/pseries/setup.c
+@@ -847,7 +847,7 @@ static void __init pSeries_setup_arch(void)
+ 	if (firmware_has_feature(FW_FEATURE_LPAR)) {
+ 		vpa_init(boot_cpuid);
+ 
+-		if (lppaca_shared_proc(get_lppaca())) {
++		if (lppaca_shared_proc()) {
+ 			static_branch_enable(&shared_processor);
+ 			pv_spinlocks_init();
+ #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
+diff --git a/arch/powerpc/sysdev/mpc5xxx_clocks.c b/arch/powerpc/sysdev/mpc5xxx_clocks.c
+index c5bf7e1b37804..58cee28e23992 100644
+--- a/arch/powerpc/sysdev/mpc5xxx_clocks.c
++++ b/arch/powerpc/sysdev/mpc5xxx_clocks.c
+@@ -25,8 +25,10 @@ unsigned long mpc5xxx_fwnode_get_bus_frequency(struct fwnode_handle *fwnode)
+ 
+ 	fwnode_for_each_parent_node(fwnode, parent) {
+ 		ret = fwnode_property_read_u32(parent, "bus-frequency", &bus_freq);
+-		if (!ret)
++		if (!ret) {
++			fwnode_handle_put(parent);
+ 			return bus_freq;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
+index fae747cc57d2d..97e61a17e936a 100644
+--- a/arch/powerpc/xmon/xmon.c
++++ b/arch/powerpc/xmon/xmon.c
+@@ -58,6 +58,7 @@
+ #ifdef CONFIG_PPC64
+ #include <asm/hvcall.h>
+ #include <asm/paca.h>
++#include <asm/lppaca.h>
+ #endif
+ 
+ #include "nonstdio.h"
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index bea7b73e895dd..ab099679f808c 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -62,6 +62,7 @@ config RISCV
+ 	select COMMON_CLK
+ 	select CPU_PM if CPU_IDLE || HIBERNATION
+ 	select EDAC_SUPPORT
++	select FRAME_POINTER if PERF_EVENTS || (FUNCTION_TRACER && !DYNAMIC_FTRACE)
+ 	select GENERIC_ARCH_TOPOLOGY
+ 	select GENERIC_ATOMIC64 if !64BIT
+ 	select GENERIC_CLOCKEVENTS_BROADCAST if SMP
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index 6ec6d52a41804..1329e060c5482 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -87,9 +87,6 @@ endif
+ ifeq ($(CONFIG_CMODEL_MEDANY),y)
+ 	KBUILD_CFLAGS += -mcmodel=medany
+ endif
+-ifeq ($(CONFIG_PERF_EVENTS),y)
+-        KBUILD_CFLAGS += -fno-omit-frame-pointer
+-endif
+ 
+ # Avoid generating .eh_frame sections.
+ KBUILD_CFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables
+diff --git a/arch/riscv/include/uapi/asm/ptrace.h b/arch/riscv/include/uapi/asm/ptrace.h
+index 283800130614b..575e95bb1bc33 100644
+--- a/arch/riscv/include/uapi/asm/ptrace.h
++++ b/arch/riscv/include/uapi/asm/ptrace.h
+@@ -103,13 +103,18 @@ struct __riscv_v_ext_state {
+ 	 * In signal handler, datap will be set a correct user stack offset
+ 	 * and vector registers will be copied to the address of datap
+ 	 * pointer.
+-	 *
+-	 * In ptrace syscall, datap will be set to zero and the vector
+-	 * registers will be copied to the address right after this
+-	 * structure.
+ 	 */
+ };
+ 
++struct __riscv_v_regset_state {
++	unsigned long vstart;
++	unsigned long vl;
++	unsigned long vtype;
++	unsigned long vcsr;
++	unsigned long vlenb;
++	char vreg[];
++};
++
+ /*
+  * According to spec: The number of bits in a single vector register,
+  * VLEN >= ELEN, which must be a power of 2, and must be no greater than
+diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
+index 487303e3ef229..2afe460de16a6 100644
+--- a/arch/riscv/kernel/ptrace.c
++++ b/arch/riscv/kernel/ptrace.c
+@@ -25,6 +25,9 @@ enum riscv_regset {
+ #ifdef CONFIG_FPU
+ 	REGSET_F,
+ #endif
++#ifdef CONFIG_RISCV_ISA_V
++	REGSET_V,
++#endif
+ };
+ 
+ static int riscv_gpr_get(struct task_struct *target,
+@@ -81,6 +84,71 @@ static int riscv_fpr_set(struct task_struct *target,
+ }
+ #endif
+ 
++#ifdef CONFIG_RISCV_ISA_V
++static int riscv_vr_get(struct task_struct *target,
++			const struct user_regset *regset,
++			struct membuf to)
++{
++	struct __riscv_v_ext_state *vstate = &target->thread.vstate;
++	struct __riscv_v_regset_state ptrace_vstate;
++
++	if (!riscv_v_vstate_query(task_pt_regs(target)))
++		return -EINVAL;
++
++	/*
++	 * Ensure the vector registers have been saved to the memory before
++	 * copying them to membuf.
++	 */
++	if (target == current)
++		riscv_v_vstate_save(current, task_pt_regs(current));
++
++	ptrace_vstate.vstart = vstate->vstart;
++	ptrace_vstate.vl = vstate->vl;
++	ptrace_vstate.vtype = vstate->vtype;
++	ptrace_vstate.vcsr = vstate->vcsr;
++	ptrace_vstate.vlenb = vstate->vlenb;
++
++	/* Copy vector header from vstate. */
++	membuf_write(&to, &ptrace_vstate, sizeof(struct __riscv_v_regset_state));
++
++	/* Copy all the vector registers from vstate. */
++	return membuf_write(&to, vstate->datap, riscv_v_vsize);
++}
++
++static int riscv_vr_set(struct task_struct *target,
++			const struct user_regset *regset,
++			unsigned int pos, unsigned int count,
++			const void *kbuf, const void __user *ubuf)
++{
++	int ret;
++	struct __riscv_v_ext_state *vstate = &target->thread.vstate;
++	struct __riscv_v_regset_state ptrace_vstate;
++
++	if (!riscv_v_vstate_query(task_pt_regs(target)))
++		return -EINVAL;
++
++	/* Copy rest of the vstate except datap */
++	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &ptrace_vstate, 0,
++				 sizeof(struct __riscv_v_regset_state));
++	if (unlikely(ret))
++		return ret;
++
++	if (vstate->vlenb != ptrace_vstate.vlenb)
++		return -EINVAL;
++
++	vstate->vstart = ptrace_vstate.vstart;
++	vstate->vl = ptrace_vstate.vl;
++	vstate->vtype = ptrace_vstate.vtype;
++	vstate->vcsr = ptrace_vstate.vcsr;
++
++	/* Copy all the vector registers. */
++	pos = 0;
++	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, vstate->datap,
++				 0, riscv_v_vsize);
++	return ret;
++}
++#endif
++
+ static const struct user_regset riscv_user_regset[] = {
+ 	[REGSET_X] = {
+ 		.core_note_type = NT_PRSTATUS,
+@@ -100,6 +168,17 @@ static const struct user_regset riscv_user_regset[] = {
+ 		.set = riscv_fpr_set,
+ 	},
+ #endif
++#ifdef CONFIG_RISCV_ISA_V
++	[REGSET_V] = {
++		.core_note_type = NT_RISCV_VECTOR,
++		.align = 16,
++		.n = ((32 * RISCV_MAX_VLENB) +
++		      sizeof(struct __riscv_v_regset_state)) / sizeof(__u32),
++		.size = sizeof(__u32),
++		.regset_get = riscv_vr_get,
++		.set = riscv_vr_set,
++	},
++#endif
+ };
+ 
+ static const struct user_regset_view riscv_user_native_view = {
+diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
+index a01bc15dce244..5e39dcf23fdbc 100644
+--- a/arch/riscv/mm/kasan_init.c
++++ b/arch/riscv/mm/kasan_init.c
+@@ -22,9 +22,9 @@
+  * region is not and then we have to go down to the PUD level.
+  */
+ 
+-pgd_t tmp_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
+-p4d_t tmp_p4d[PTRS_PER_P4D] __page_aligned_bss;
+-pud_t tmp_pud[PTRS_PER_PUD] __page_aligned_bss;
++static pgd_t tmp_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
++static p4d_t tmp_p4d[PTRS_PER_P4D] __page_aligned_bss;
++static pud_t tmp_pud[PTRS_PER_PUD] __page_aligned_bss;
+ 
+ static void __init kasan_populate_pte(pmd_t *pmd, unsigned long vaddr, unsigned long end)
+ {
+@@ -438,7 +438,7 @@ static void __init kasan_shallow_populate(void *start, void *end)
+ 	kasan_shallow_populate_pgd(vaddr, vend);
+ }
+ 
+-static void create_tmp_mapping(void)
++static void __init create_tmp_mapping(void)
+ {
+ 	void *ptr;
+ 	p4d_t *base_p4d;
+diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
+index 38349150c96e8..8b541e44151d4 100644
+--- a/arch/s390/crypto/paes_s390.c
++++ b/arch/s390/crypto/paes_s390.c
+@@ -35,7 +35,7 @@
+  * and padding is also possible, the limits need to be generous.
+  */
+ #define PAES_MIN_KEYSIZE 16
+-#define PAES_MAX_KEYSIZE 320
++#define PAES_MAX_KEYSIZE MAXEP11AESKEYBLOBSIZE
+ 
+ static u8 *ctrblk;
+ static DEFINE_MUTEX(ctrblk_lock);
+diff --git a/arch/s390/include/uapi/asm/pkey.h b/arch/s390/include/uapi/asm/pkey.h
+index 5faf0a1d2c167..5ad76471e73ff 100644
+--- a/arch/s390/include/uapi/asm/pkey.h
++++ b/arch/s390/include/uapi/asm/pkey.h
+@@ -26,7 +26,7 @@
+ #define MAXCLRKEYSIZE	32	   /* a clear key value may be up to 32 bytes */
+ #define MAXAESCIPHERKEYSIZE 136  /* our aes cipher keys have always 136 bytes */
+ #define MINEP11AESKEYBLOBSIZE 256  /* min EP11 AES key blob size  */
+-#define MAXEP11AESKEYBLOBSIZE 320  /* max EP11 AES key blob size */
++#define MAXEP11AESKEYBLOBSIZE 336  /* max EP11 AES key blob size */
+ 
+ /* Minimum size of a key blob */
+ #define MINKEYBLOBSIZE	SECKEYBLOBSIZE
+diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c
+index 85a00d97a3143..dfcb2b563e2bd 100644
+--- a/arch/s390/kernel/ipl.c
++++ b/arch/s390/kernel/ipl.c
+@@ -640,6 +640,8 @@ static struct attribute_group ipl_ccw_attr_group_lpar = {
+ 
+ static struct attribute *ipl_unknown_attrs[] = {
+ 	&sys_ipl_type_attr.attr,
++	&sys_ipl_secure_attr.attr,
++	&sys_ipl_has_secure_attr.attr,
+ 	NULL,
+ };
+ 
+diff --git a/arch/sparc/include/asm/irq_64.h b/arch/sparc/include/asm/irq_64.h
+index b436029f1ced2..8c4c0c87f9980 100644
+--- a/arch/sparc/include/asm/irq_64.h
++++ b/arch/sparc/include/asm/irq_64.h
+@@ -87,7 +87,7 @@ static inline unsigned long get_softint(void)
+ }
+ 
+ void arch_trigger_cpumask_backtrace(const struct cpumask *mask,
+-				    bool exclude_self);
++				    int exclude_cpu);
+ #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
+ 
+ extern void *hardirq_stack[NR_CPUS];
+diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
+index b51d8fb0ecdc2..1ea3f37fa9851 100644
+--- a/arch/sparc/kernel/process_64.c
++++ b/arch/sparc/kernel/process_64.c
+@@ -236,7 +236,7 @@ static void __global_reg_poll(struct global_reg_snapshot *gp)
+ 	}
+ }
+ 
+-void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
++void arch_trigger_cpumask_backtrace(const cpumask_t *mask, int exclude_cpu)
+ {
+ 	struct thread_info *tp = current_thread_info();
+ 	struct pt_regs *regs = get_irq_regs();
+@@ -252,7 +252,7 @@ void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
+ 
+ 	memset(global_cpu_snapshot, 0, sizeof(global_cpu_snapshot));
+ 
+-	if (cpumask_test_cpu(this_cpu, mask) && !exclude_self)
++	if (cpumask_test_cpu(this_cpu, mask) && this_cpu != exclude_cpu)
+ 		__global_reg_self(tp, regs, this_cpu);
+ 
+ 	smp_fetch_global_regs();
+@@ -260,7 +260,7 @@ void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
+ 	for_each_cpu(cpu, mask) {
+ 		struct global_reg_snapshot *gp;
+ 
+-		if (exclude_self && cpu == this_cpu)
++		if (cpu == exclude_cpu)
+ 			continue;
+ 
+ 		gp = &global_cpu_snapshot[cpu].reg;
+diff --git a/arch/um/configs/i386_defconfig b/arch/um/configs/i386_defconfig
+index 630be793759e2..e543cbac87925 100644
+--- a/arch/um/configs/i386_defconfig
++++ b/arch/um/configs/i386_defconfig
+@@ -34,6 +34,7 @@ CONFIG_TTY_CHAN=y
+ CONFIG_XTERM_CHAN=y
+ CONFIG_CON_CHAN="pts"
+ CONFIG_SSL_CHAN="pts"
++CONFIG_SOUND=m
+ CONFIG_UML_SOUND=m
+ CONFIG_DEVTMPFS=y
+ CONFIG_DEVTMPFS_MOUNT=y
+diff --git a/arch/um/configs/x86_64_defconfig b/arch/um/configs/x86_64_defconfig
+index 8540d33702726..939cb12318cae 100644
+--- a/arch/um/configs/x86_64_defconfig
++++ b/arch/um/configs/x86_64_defconfig
+@@ -32,6 +32,7 @@ CONFIG_TTY_CHAN=y
+ CONFIG_XTERM_CHAN=y
+ CONFIG_CON_CHAN="pts"
+ CONFIG_SSL_CHAN="pts"
++CONFIG_SOUND=m
+ CONFIG_UML_SOUND=m
+ CONFIG_DEVTMPFS=y
+ CONFIG_DEVTMPFS_MOUNT=y
+diff --git a/arch/um/drivers/Kconfig b/arch/um/drivers/Kconfig
+index 36911b1fddcf0..b94b2618e7d84 100644
+--- a/arch/um/drivers/Kconfig
++++ b/arch/um/drivers/Kconfig
+@@ -111,24 +111,14 @@ config SSL_CHAN
+ 
+ config UML_SOUND
+ 	tristate "Sound support"
++	depends on SOUND
++	select SOUND_OSS_CORE
+ 	help
+ 	  This option enables UML sound support.  If enabled, it will pull in
+-	  soundcore and the UML hostaudio relay, which acts as a intermediary
++	  the UML hostaudio relay, which acts as an intermediary
+ 	  between the host's dsp and mixer devices and the UML sound system.
+ 	  It is safe to say 'Y' here.
+ 
+-config SOUND
+-	tristate
+-	default UML_SOUND
+-
+-config SOUND_OSS_CORE
+-	bool
+-	default UML_SOUND
+-
+-config HOSTAUDIO
+-	tristate
+-	default UML_SOUND
+-
+ endmenu
+ 
+ menu "UML Network Devices"
+diff --git a/arch/um/drivers/Makefile b/arch/um/drivers/Makefile
+index a461a950f0518..0e6af81096fd5 100644
+--- a/arch/um/drivers/Makefile
++++ b/arch/um/drivers/Makefile
+@@ -54,7 +54,7 @@ obj-$(CONFIG_UML_NET) += net.o
+ obj-$(CONFIG_MCONSOLE) += mconsole.o
+ obj-$(CONFIG_MMAPPER) += mmapper_kern.o 
+ obj-$(CONFIG_BLK_DEV_UBD) += ubd.o 
+-obj-$(CONFIG_HOSTAUDIO) += hostaudio.o
++obj-$(CONFIG_UML_SOUND) += hostaudio.o
+ obj-$(CONFIG_NULL_CHAN) += null.o 
+ obj-$(CONFIG_PORT_CHAN) += port.o
+ obj-$(CONFIG_PTY_CHAN) += pty.o
+diff --git a/arch/um/drivers/virt-pci.c b/arch/um/drivers/virt-pci.c
+index 7699ca5f35d48..ffe2ee8a02465 100644
+--- a/arch/um/drivers/virt-pci.c
++++ b/arch/um/drivers/virt-pci.c
+@@ -544,6 +544,7 @@ static void um_pci_irq_vq_cb(struct virtqueue *vq)
+ 	}
+ }
+ 
++#ifdef CONFIG_OF
+ /* Copied from arch/x86/kernel/devicetree.c */
+ struct device_node *pcibios_get_phb_of_node(struct pci_bus *bus)
+ {
+@@ -562,6 +563,7 @@ struct device_node *pcibios_get_phb_of_node(struct pci_bus *bus)
+ 	}
+ 	return NULL;
+ }
++#endif
+ 
+ static int um_pci_init_vqs(struct um_pci_device *dev)
+ {
+diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
+index 03c4328a88cbd..f732426d3b483 100644
+--- a/arch/x86/boot/compressed/head_64.S
++++ b/arch/x86/boot/compressed/head_64.S
+@@ -459,11 +459,25 @@ SYM_CODE_START(startup_64)
+ 	/* Save the trampoline address in RCX */
+ 	movq	%rax, %rcx
+ 
++	/* Set up 32-bit addressable stack */
++	leaq	TRAMPOLINE_32BIT_STACK_END(%rcx), %rsp
++
++	/*
++	 * Preserve live 64-bit registers on the stack: this is necessary
++	 * because the architecture does not guarantee that GPRs will retain
++	 * their full 64-bit values across a 32-bit mode switch.
++	 */
++	pushq	%rbp
++	pushq	%rbx
++	pushq	%rsi
++
+ 	/*
+-	 * Load the address of trampoline_return() into RDI.
+-	 * It will be used by the trampoline to return to the main code.
++	 * Push the 64-bit address of trampoline_return() onto the new stack.
++	 * It will be used by the trampoline to return to the main code. Due to
++	 * the 32-bit mode switch, it cannot be kept in a register either.
+ 	 */
+ 	leaq	trampoline_return(%rip), %rdi
++	pushq	%rdi
+ 
+ 	/* Switch to compatibility mode (CS.L = 0 CS.D = 1) via far return */
+ 	pushq	$__KERNEL32_CS
+@@ -471,6 +485,11 @@ SYM_CODE_START(startup_64)
+ 	pushq	%rax
+ 	lretq
+ trampoline_return:
++	/* Restore live 64-bit registers */
++	popq	%rsi
++	popq	%rbx
++	popq	%rbp
++
+ 	/* Restore the stack, the 32-bit trampoline uses its own stack */
+ 	leaq	rva(boot_stack_end)(%rbx), %rsp
+ 
+@@ -582,7 +601,7 @@ SYM_FUNC_END(.Lrelocated)
+ /*
+  * This is the 32-bit trampoline that will be copied over to low memory.
+  *
+- * RDI contains the return address (might be above 4G).
++ * Return address is at the top of the stack (might be above 4G).
+  * ECX contains the base address of the trampoline memory.
+  * Non zero RDX means trampoline needs to enable 5-level paging.
+  */
+@@ -592,9 +611,6 @@ SYM_CODE_START(trampoline_32bit_src)
+ 	movl	%eax, %ds
+ 	movl	%eax, %ss
+ 
+-	/* Set up new stack */
+-	leal	TRAMPOLINE_32BIT_STACK_END(%ecx), %esp
+-
+ 	/* Disable paging */
+ 	movl	%cr0, %eax
+ 	btrl	$X86_CR0_PG_BIT, %eax
+@@ -671,7 +687,7 @@ SYM_CODE_END(trampoline_32bit_src)
+ 	.code64
+ SYM_FUNC_START_LOCAL_NOALIGN(.Lpaging_enabled)
+ 	/* Return from the trampoline */
+-	jmp	*%rdi
++	retq
+ SYM_FUNC_END(.Lpaging_enabled)
+ 
+ 	/*
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index d49e90dc04a4c..847740c08c97d 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -6474,8 +6474,18 @@ void spr_uncore_cpu_init(void)
+ 
+ 	type = uncore_find_type_by_id(uncore_msr_uncores, UNCORE_SPR_CHA);
+ 	if (type) {
++		/*
++		 * The value from the discovery table (stored in the type->num_boxes
++		 * of UNCORE_SPR_CHA) is incorrect on some SPR variants because of a
++		 * firmware bug. Use the value from SPR_MSR_UNC_CBO_CONFIG to replace it.
++		 */
+ 		rdmsrl(SPR_MSR_UNC_CBO_CONFIG, num_cbo);
+-		type->num_boxes = num_cbo;
++		/*
++		 * The MSR doesn't work on the EMR XCC, but the firmware bug doesn't impact
++		 * the EMR XCC. Don't let the value from the MSR replace the existing value.
++		 */
++		if (num_cbo)
++			type->num_boxes = num_cbo;
+ 	}
+ 	spr_uncore_iio_free_running.num_boxes = uncore_type_max_boxes(uncore_msr_uncores, UNCORE_SPR_IIO);
+ }
+diff --git a/arch/x86/include/asm/irq.h b/arch/x86/include/asm/irq.h
+index 29e083b92813c..836c170d30875 100644
+--- a/arch/x86/include/asm/irq.h
++++ b/arch/x86/include/asm/irq.h
+@@ -42,7 +42,7 @@ extern void init_ISA_irqs(void);
+ 
+ #ifdef CONFIG_X86_LOCAL_APIC
+ void arch_trigger_cpumask_backtrace(const struct cpumask *mask,
+-				    bool exclude_self);
++				    int exclude_cpu);
+ 
+ #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
+ #endif
+diff --git a/arch/x86/include/asm/local.h b/arch/x86/include/asm/local.h
+index 56d4ef604b919..635132a127782 100644
+--- a/arch/x86/include/asm/local.h
++++ b/arch/x86/include/asm/local.h
+@@ -127,8 +127,8 @@ static inline long local_cmpxchg(local_t *l, long old, long new)
+ 
+ static inline bool local_try_cmpxchg(local_t *l, long *old, long new)
+ {
+-	typeof(l->a.counter) *__old = (typeof(l->a.counter) *) old;
+-	return try_cmpxchg_local(&l->a.counter, __old, new);
++	return try_cmpxchg_local(&l->a.counter,
++				 (typeof(l->a.counter) *) old, new);
+ }
+ 
+ /* Always has a lock prefix */
+diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
+index 7f97a8a97e24a..473b16d73b471 100644
+--- a/arch/x86/include/asm/mem_encrypt.h
++++ b/arch/x86/include/asm/mem_encrypt.h
+@@ -50,8 +50,8 @@ void __init sme_enable(struct boot_params *bp);
+ 
+ int __init early_set_memory_decrypted(unsigned long vaddr, unsigned long size);
+ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
+-void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages,
+-					    bool enc);
++void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr,
++					    unsigned long size, bool enc);
+ 
+ void __init mem_encrypt_free_decrypted_mem(void);
+ 
+@@ -85,7 +85,7 @@ early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0;
+ static inline int __init
+ early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0; }
+ static inline void __init
+-early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc) {}
++early_set_mem_enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc) {}
+ 
+ static inline void mem_encrypt_free_decrypted_mem(void) { }
+ 
+diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
+index ba3e2554799ab..a6deb67cfbb26 100644
+--- a/arch/x86/include/asm/pgtable_types.h
++++ b/arch/x86/include/asm/pgtable_types.h
+@@ -125,11 +125,12 @@
+  * instance, and is *not* included in this mask since
+  * pte_modify() does modify it.
+  */
+-#define _PAGE_CHG_MASK	(PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |		\
+-			 _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY |	\
+-			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC |  \
+-			 _PAGE_UFFD_WP)
+-#define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
++#define _COMMON_PAGE_CHG_MASK	(PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |	       \
++				 _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY |\
++				 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC | \
++				 _PAGE_UFFD_WP)
++#define _PAGE_CHG_MASK	(_COMMON_PAGE_CHG_MASK | _PAGE_PAT)
++#define _HPAGE_CHG_MASK (_COMMON_PAGE_CHG_MASK | _PAGE_PSE | _PAGE_PAT_LARGE)
+ 
+ /*
+  * The cache modes defined here are used to translate between pure SW usage
+diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
+index 34a992e275ef4..d6e01f9242996 100644
+--- a/arch/x86/kernel/apic/hw_nmi.c
++++ b/arch/x86/kernel/apic/hw_nmi.c
+@@ -34,9 +34,9 @@ static void nmi_raise_cpu_backtrace(cpumask_t *mask)
+ 	apic->send_IPI_mask(mask, NMI_VECTOR);
+ }
+ 
+-void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
++void arch_trigger_cpumask_backtrace(const cpumask_t *mask, int exclude_cpu)
+ {
+-	nmi_trigger_cpumask_backtrace(mask, exclude_self,
++	nmi_trigger_cpumask_backtrace(mask, exclude_cpu,
+ 				      nmi_raise_cpu_backtrace);
+ }
+ 
+diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c
+index c6c15ce1952fb..5934ee5bc087e 100644
+--- a/arch/x86/kernel/apm_32.c
++++ b/arch/x86/kernel/apm_32.c
+@@ -238,12 +238,6 @@
+ extern int (*console_blank_hook)(int);
+ #endif
+ 
+-/*
+- * The apm_bios device is one of the misc char devices.
+- * This is its minor number.
+- */
+-#define	APM_MINOR_DEV	134
+-
+ /*
+  * Various options can be changed at boot time as follows:
+  * (We allow underscores for compatibility with the modules code)
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index e3a65e9fc750d..00f043a094fcd 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1265,11 +1265,11 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_INTEL_STEPPINGS(BROADWELL_G,	X86_STEPPING_ANY,		SRBDS),
+ 	VULNBL_INTEL_STEPPINGS(BROADWELL_X,	X86_STEPPING_ANY,		MMIO),
+ 	VULNBL_INTEL_STEPPINGS(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
+-	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
+ 	VULNBL_INTEL_STEPPINGS(SKYLAKE_X,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS),
+-	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED | GDS),
+-	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPING_ANY,		SRBDS | MMIO | RETBLEED | GDS),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPING_ANY,		MMIO | RETBLEED | GDS | SRBDS),
+ 	VULNBL_INTEL_STEPPINGS(CANNONLAKE_L,	X86_STEPPING_ANY,		RETBLEED),
+ 	VULNBL_INTEL_STEPPINGS(ICELAKE_L,	X86_STEPPING_ANY,		MMIO | MMIO_SBDS | RETBLEED | GDS),
+ 	VULNBL_INTEL_STEPPINGS(ICELAKE_D,	X86_STEPPING_ANY,		MMIO | GDS),
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 89e2aab5d34d8..17eb6a37a5872 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -842,6 +842,26 @@ static noinstr bool quirk_skylake_repmov(void)
+ 	return false;
+ }
+ 
++/*
++ * Some Zen-based Instruction Fetch Units set EIPV=RIPV=0 on poison consumption
++ * errors. This means mce_gather_info() will not save the "ip" and "cs" registers.
++ *
++ * However, the context is still valid, so save the "cs" register for later use.
++ *
++ * The "ip" register is truly unknown, so don't save it or fixup EIPV/RIPV.
++ *
++ * The Instruction Fetch Unit is at MCA bank 1 for all affected systems.
++ */
++static __always_inline void quirk_zen_ifu(int bank, struct mce *m, struct pt_regs *regs)
++{
++	if (bank != 1)
++		return;
++	if (!(m->status & MCI_STATUS_POISON))
++		return;
++
++	m->cs = regs->cs;
++}
++
+ /*
+  * Do a quick check if any of the events requires a panic.
+  * This decides if we keep the events around or clear them.
+@@ -861,6 +881,9 @@ static __always_inline int mce_no_way_out(struct mce *m, char **msg, unsigned lo
+ 		if (mce_flags.snb_ifu_quirk)
+ 			quirk_sandybridge_ifu(i, m, regs);
+ 
++		if (mce_flags.zen_ifu_quirk)
++			quirk_zen_ifu(i, m, regs);
++
+ 		m->bank = i;
+ 		if (mce_severity(m, regs, &tmp, true) >= MCE_PANIC_SEVERITY) {
+ 			mce_read_aux(m, i);
+@@ -1842,6 +1865,9 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
+ 		if (c->x86 == 0x15 && c->x86_model <= 0xf)
+ 			mce_flags.overflow_recov = 1;
+ 
++		if (c->x86 >= 0x17 && c->x86 <= 0x1A)
++			mce_flags.zen_ifu_quirk = 1;
++
+ 	}
+ 
+ 	if (c->x86_vendor == X86_VENDOR_INTEL) {
+diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
+index d2412ce2d312f..d5946fcdcd5de 100644
+--- a/arch/x86/kernel/cpu/mce/internal.h
++++ b/arch/x86/kernel/cpu/mce/internal.h
+@@ -157,6 +157,9 @@ struct mce_vendor_flags {
+ 	 */
+ 	smca			: 1,
+ 
++	/* Zen IFU quirk */
++	zen_ifu_quirk		: 1,
++
+ 	/* AMD-style error thresholding banks present. */
+ 	amd_threshold		: 1,
+ 
+@@ -172,7 +175,7 @@ struct mce_vendor_flags {
+ 	/* Skylake, Cascade Lake, Cooper Lake REP;MOVS* quirk */
+ 	skx_repmov_quirk	: 1,
+ 
+-	__reserved_0		: 56;
++	__reserved_0		: 55;
+ };
+ 
+ extern struct mce_vendor_flags mce_flags;
+diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c
+index c3e37eaec8ecd..7aaa3652e31d1 100644
+--- a/arch/x86/kernel/cpu/sgx/virt.c
++++ b/arch/x86/kernel/cpu/sgx/virt.c
+@@ -204,6 +204,7 @@ static int sgx_vepc_release(struct inode *inode, struct file *file)
+ 			continue;
+ 
+ 		xa_erase(&vepc->page_array, index);
++		cond_resched();
+ 	}
+ 
+ 	/*
+@@ -222,6 +223,7 @@ static int sgx_vepc_release(struct inode *inode, struct file *file)
+ 			list_add_tail(&epc_page->list, &secs_pages);
+ 
+ 		xa_erase(&vepc->page_array, index);
++		cond_resched();
+ 	}
+ 
+ 	/*
+@@ -243,6 +245,7 @@ static int sgx_vepc_release(struct inode *inode, struct file *file)
+ 
+ 		if (sgx_vepc_free_page(epc_page))
+ 			list_add_tail(&epc_page->list, &secs_pages);
++		cond_resched();
+ 	}
+ 
+ 	if (!list_empty(&secs_pages))
+diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
+index 1cceac5984daa..526d4da3dcd46 100644
+--- a/arch/x86/kernel/kvm.c
++++ b/arch/x86/kernel/kvm.c
+@@ -966,10 +966,8 @@ static void __init kvm_init_platform(void)
+ 		 * Ensure that _bss_decrypted section is marked as decrypted in the
+ 		 * shared pages list.
+ 		 */
+-		nr_pages = DIV_ROUND_UP(__end_bss_decrypted - __start_bss_decrypted,
+-					PAGE_SIZE);
+ 		early_set_mem_enc_dec_hypercall((unsigned long)__start_bss_decrypted,
+-						nr_pages, 0);
++						__end_bss_decrypted - __start_bss_decrypted, 0);
+ 
+ 		/*
+ 		 * If not booted using EFI, enable Live migration support.
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index e1aa2cd7734ba..7d82f0bd449c7 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -1356,7 +1356,7 @@ bool smp_park_other_cpus_in_init(void)
+ 	if (this_cpu)
+ 		return false;
+ 
+-	for_each_present_cpu(cpu) {
++	for_each_cpu_and(cpu, &cpus_booted_once_mask, cpu_present_mask) {
+ 		if (cpu == this_cpu)
+ 			continue;
+ 		apicid = apic->cpu_present_to_apicid(cpu);
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index 83d41c2601d7b..f15fb71f280e2 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -156,7 +156,7 @@ SECTIONS
+ 		ALIGN_ENTRY_TEXT_END
+ 		*(.gnu.warning)
+ 
+-	} :text =0xcccc
++	} :text = 0xcccccccc
+ 
+ 	/* End of text section, which should occupy whole number of pages */
+ 	_etext = .;
+diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
+index 54bbd5163e8d3..6faea41e99b6b 100644
+--- a/arch/x86/mm/mem_encrypt_amd.c
++++ b/arch/x86/mm/mem_encrypt_amd.c
+@@ -288,11 +288,10 @@ static bool amd_enc_cache_flush_required(void)
+ 	return !cpu_feature_enabled(X86_FEATURE_SME_COHERENT);
+ }
+ 
+-static void enc_dec_hypercall(unsigned long vaddr, int npages, bool enc)
++static void enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc)
+ {
+ #ifdef CONFIG_PARAVIRT
+-	unsigned long sz = npages << PAGE_SHIFT;
+-	unsigned long vaddr_end = vaddr + sz;
++	unsigned long vaddr_end = vaddr + size;
+ 
+ 	while (vaddr < vaddr_end) {
+ 		int psize, pmask, level;
+@@ -342,7 +341,7 @@ static bool amd_enc_status_change_finish(unsigned long vaddr, int npages, bool e
+ 		snp_set_memory_private(vaddr, npages);
+ 
+ 	if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
+-		enc_dec_hypercall(vaddr, npages, enc);
++		enc_dec_hypercall(vaddr, npages << PAGE_SHIFT, enc);
+ 
+ 	return true;
+ }
+@@ -466,7 +465,7 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
+ 
+ 	ret = 0;
+ 
+-	early_set_mem_enc_dec_hypercall(start, PAGE_ALIGN(size) >> PAGE_SHIFT, enc);
++	early_set_mem_enc_dec_hypercall(start, size, enc);
+ out:
+ 	__flush_tlb_all();
+ 	return ret;
+@@ -482,9 +481,9 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
+ 	return early_set_memory_enc_dec(vaddr, size, true);
+ }
+ 
+-void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc)
++void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc)
+ {
+-	enc_dec_hypercall(vaddr, npages, enc);
++	enc_dec_hypercall(vaddr, size, enc);
+ }
+ 
+ void __init sme_early_init(void)
+diff --git a/arch/xtensa/include/asm/core.h b/arch/xtensa/include/asm/core.h
+index 0e1bb6f019d6b..3f5ffae89b580 100644
+--- a/arch/xtensa/include/asm/core.h
++++ b/arch/xtensa/include/asm/core.h
+@@ -52,4 +52,13 @@
+ #define XTENSA_STACK_ALIGNMENT	16
+ #endif
+ 
++#ifndef XCHAL_HW_MIN_VERSION
++#if defined(XCHAL_HW_MIN_VERSION_MAJOR) && defined(XCHAL_HW_MIN_VERSION_MINOR)
++#define XCHAL_HW_MIN_VERSION (XCHAL_HW_MIN_VERSION_MAJOR * 100 + \
++			      XCHAL_HW_MIN_VERSION_MINOR)
++#else
++#define XCHAL_HW_MIN_VERSION 0
++#endif
++#endif
++
+ #endif
+diff --git a/arch/xtensa/kernel/perf_event.c b/arch/xtensa/kernel/perf_event.c
+index a0d05c8598d0f..183618090d05b 100644
+--- a/arch/xtensa/kernel/perf_event.c
++++ b/arch/xtensa/kernel/perf_event.c
+@@ -13,17 +13,26 @@
+ #include <linux/perf_event.h>
+ #include <linux/platform_device.h>
+ 
++#include <asm/core.h>
+ #include <asm/processor.h>
+ #include <asm/stacktrace.h>
+ 
++#define XTENSA_HWVERSION_RG_2015_0	260000
++
++#if XCHAL_HW_MIN_VERSION >= XTENSA_HWVERSION_RG_2015_0
++#define XTENSA_PMU_ERI_BASE		0x00101000
++#else
++#define XTENSA_PMU_ERI_BASE		0x00001000
++#endif
++
+ /* Global control/status for all perf counters */
+-#define XTENSA_PMU_PMG			0x1000
++#define XTENSA_PMU_PMG			XTENSA_PMU_ERI_BASE
+ /* Perf counter values */
+-#define XTENSA_PMU_PM(i)		(0x1080 + (i) * 4)
++#define XTENSA_PMU_PM(i)		(XTENSA_PMU_ERI_BASE + 0x80 + (i) * 4)
+ /* Perf counter control registers */
+-#define XTENSA_PMU_PMCTRL(i)		(0x1100 + (i) * 4)
++#define XTENSA_PMU_PMCTRL(i)		(XTENSA_PMU_ERI_BASE + 0x100 + (i) * 4)
+ /* Perf counter status registers */
+-#define XTENSA_PMU_PMSTAT(i)		(0x1180 + (i) * 4)
++#define XTENSA_PMU_PMSTAT(i)		(XTENSA_PMU_ERI_BASE + 0x180 + (i) * 4)
+ 
+ #define XTENSA_PMU_PMG_PMEN		0x1
+ 
+diff --git a/block/bio-integrity.c b/block/bio-integrity.c
+index 4533eb4916610..6f81c10757fb9 100644
+--- a/block/bio-integrity.c
++++ b/block/bio-integrity.c
+@@ -123,17 +123,34 @@ void bio_integrity_free(struct bio *bio)
+ int bio_integrity_add_page(struct bio *bio, struct page *page,
+ 			   unsigned int len, unsigned int offset)
+ {
++	struct request_queue *q = bdev_get_queue(bio->bi_bdev);
+ 	struct bio_integrity_payload *bip = bio_integrity(bio);
+ 
+-	if (bip->bip_vcnt >= bip->bip_max_vcnt) {
+-		printk(KERN_ERR "%s: bip_vec full\n", __func__);
++	if (((bip->bip_iter.bi_size + len) >> SECTOR_SHIFT) >
++	    queue_max_hw_sectors(q))
+ 		return 0;
+-	}
+ 
+-	if (bip->bip_vcnt &&
+-	    bvec_gap_to_prev(&bdev_get_queue(bio->bi_bdev)->limits,
+-			     &bip->bip_vec[bip->bip_vcnt - 1], offset))
+-		return 0;
++	if (bip->bip_vcnt > 0) {
++		struct bio_vec *bv = &bip->bip_vec[bip->bip_vcnt - 1];
++		bool same_page = false;
++
++		if (bvec_try_merge_hw_page(q, bv, page, len, offset,
++					   &same_page)) {
++			bip->bip_iter.bi_size += len;
++			return len;
++		}
++
++		if (bip->bip_vcnt >=
++		    min(bip->bip_max_vcnt, queue_max_integrity_segments(q)))
++			return 0;
++
++		/*
++		 * If the queue doesn't support SG gaps and adding this segment
++		 * would create a gap, disallow it.
++		 */
++		if (bvec_gap_to_prev(&q->limits, bv, offset))
++			return 0;
++	}
+ 
+ 	bvec_set_page(&bip->bip_vec[bip->bip_vcnt], page, len, offset);
+ 	bip->bip_vcnt++;
+diff --git a/block/bio.c b/block/bio.c
+index 8672179213b93..00ac4c233e3aa 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -903,9 +903,8 @@ static inline bool bio_full(struct bio *bio, unsigned len)
+ 	return false;
+ }
+ 
+-static inline bool page_is_mergeable(const struct bio_vec *bv,
+-		struct page *page, unsigned int len, unsigned int off,
+-		bool *same_page)
++static bool bvec_try_merge_page(struct bio_vec *bv, struct page *page,
++		unsigned int len, unsigned int off, bool *same_page)
+ {
+ 	size_t bv_end = bv->bv_offset + bv->bv_len;
+ 	phys_addr_t vec_end_addr = page_to_phys(bv->bv_page) + bv_end - 1;
+@@ -919,49 +918,15 @@ static inline bool page_is_mergeable(const struct bio_vec *bv,
+ 		return false;
+ 
+ 	*same_page = ((vec_end_addr & PAGE_MASK) == page_addr);
+-	if (*same_page)
+-		return true;
+-	else if (IS_ENABLED(CONFIG_KMSAN))
+-		return false;
+-	return (bv->bv_page + bv_end / PAGE_SIZE) == (page + off / PAGE_SIZE);
+-}
+-
+-/**
+- * __bio_try_merge_page - try appending data to an existing bvec.
+- * @bio: destination bio
+- * @page: start page to add
+- * @len: length of the data to add
+- * @off: offset of the data relative to @page
+- * @same_page: return if the segment has been merged inside the same page
+- *
+- * Try to add the data at @page + @off to the last bvec of @bio.  This is a
+- * useful optimisation for file systems with a block size smaller than the
+- * page size.
+- *
+- * Warn if (@len, @off) crosses pages in case that @same_page is true.
+- *
+- * Return %true on success or %false on failure.
+- */
+-static bool __bio_try_merge_page(struct bio *bio, struct page *page,
+-		unsigned int len, unsigned int off, bool *same_page)
+-{
+-	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
+-		return false;
+-
+-	if (bio->bi_vcnt > 0) {
+-		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
+-
+-		if (page_is_mergeable(bv, page, len, off, same_page)) {
+-			if (bio->bi_iter.bi_size > UINT_MAX - len) {
+-				*same_page = false;
+-				return false;
+-			}
+-			bv->bv_len += len;
+-			bio->bi_iter.bi_size += len;
+-			return true;
+-		}
++	if (!*same_page) {
++		if (IS_ENABLED(CONFIG_KMSAN))
++			return false;
++		if (bv->bv_page + bv_end / PAGE_SIZE != page + off / PAGE_SIZE)
++			return false;
+ 	}
+-	return false;
++
++	bv->bv_len += len;
++	return true;
+ }
+ 
+ /*
+@@ -969,11 +934,10 @@ static bool __bio_try_merge_page(struct bio *bio, struct page *page,
+  * size limit.  This is not for normal read/write bios, but for passthrough
+  * or Zone Append operations that we can't split.
+  */
+-static bool bio_try_merge_hw_seg(struct request_queue *q, struct bio *bio,
+-				 struct page *page, unsigned len,
+-				 unsigned offset, bool *same_page)
++bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
++		struct page *page, unsigned len, unsigned offset,
++		bool *same_page)
+ {
+-	struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
+ 	unsigned long mask = queue_segment_boundary(q);
+ 	phys_addr_t addr1 = page_to_phys(bv->bv_page) + bv->bv_offset;
+ 	phys_addr_t addr2 = page_to_phys(page) + offset + len - 1;
+@@ -982,7 +946,7 @@ static bool bio_try_merge_hw_seg(struct request_queue *q, struct bio *bio,
+ 		return false;
+ 	if (bv->bv_len + len > queue_max_segment_size(q))
+ 		return false;
+-	return __bio_try_merge_page(bio, page, len, offset, same_page);
++	return bvec_try_merge_page(bv, page, len, offset, same_page);
+ }
+ 
+ /**
+@@ -1002,8 +966,6 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
+ 		struct page *page, unsigned int len, unsigned int offset,
+ 		unsigned int max_sectors, bool *same_page)
+ {
+-	struct bio_vec *bvec;
+-
+ 	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
+ 		return 0;
+ 
+@@ -1011,15 +973,19 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
+ 		return 0;
+ 
+ 	if (bio->bi_vcnt > 0) {
+-		if (bio_try_merge_hw_seg(q, bio, page, len, offset, same_page))
++		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
++
++		if (bvec_try_merge_hw_page(q, bv, page, len, offset,
++				same_page)) {
++			bio->bi_iter.bi_size += len;
+ 			return len;
++		}
+ 
+ 		/*
+ 		 * If the queue doesn't support SG gaps and adding this segment
+ 		 * would create a gap, disallow it.
+ 		 */
+-		bvec = &bio->bi_io_vec[bio->bi_vcnt - 1];
+-		if (bvec_gap_to_prev(&q->limits, bvec, offset))
++		if (bvec_gap_to_prev(&q->limits, bv, offset))
+ 			return 0;
+ 	}
+ 
+@@ -1129,11 +1095,21 @@ int bio_add_page(struct bio *bio, struct page *page,
+ {
+ 	bool same_page = false;
+ 
+-	if (!__bio_try_merge_page(bio, page, len, offset, &same_page)) {
+-		if (bio_full(bio, len))
+-			return 0;
+-		__bio_add_page(bio, page, len, offset);
++	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
++		return 0;
++	if (bio->bi_iter.bi_size > UINT_MAX - len)
++		return 0;
++
++	if (bio->bi_vcnt > 0 &&
++	    bvec_try_merge_page(&bio->bi_io_vec[bio->bi_vcnt - 1],
++				page, len, offset, &same_page)) {
++		bio->bi_iter.bi_size += len;
++		return len;
+ 	}
++
++	if (bio_full(bio, len))
++		return 0;
++	__bio_add_page(bio, page, len, offset);
+ 	return len;
+ }
+ EXPORT_SYMBOL(bio_add_page);
+@@ -1207,13 +1183,18 @@ static int bio_iov_add_page(struct bio *bio, struct page *page,
+ {
+ 	bool same_page = false;
+ 
+-	if (!__bio_try_merge_page(bio, page, len, offset, &same_page)) {
+-		__bio_add_page(bio, page, len, offset);
++	if (WARN_ON_ONCE(bio->bi_iter.bi_size > UINT_MAX - len))
++		return -EIO;
++
++	if (bio->bi_vcnt > 0 &&
++	    bvec_try_merge_page(&bio->bi_io_vec[bio->bi_vcnt - 1],
++				page, len, offset, &same_page)) {
++		bio->bi_iter.bi_size += len;
++		if (same_page)
++			bio_release_page(bio, page);
+ 		return 0;
+ 	}
+-
+-	if (same_page)
+-		bio_release_page(bio, page);
++	__bio_add_page(bio, page, len, offset);
+ 	return 0;
+ }
+ 
+@@ -1337,6 +1318,9 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
+ {
+ 	int ret = 0;
+ 
++	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
++		return -EIO;
++
+ 	if (iov_iter_is_bvec(iter)) {
+ 		bio_iov_bvec_set(bio, iter);
+ 		iov_iter_advance(iter, bio->bi_iter.bi_size);
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 9faafcd10e177..4a42ea2972ad8 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1511,7 +1511,7 @@ int blkcg_activate_policy(struct gendisk *disk, const struct blkcg_policy *pol)
+ retry:
+ 	spin_lock_irq(&q->queue_lock);
+ 
+-	/* blkg_list is pushed at the head, reverse walk to allocate parents first */
++	/* blkg_list is pushed at the head, reverse walk to initialize parents first */
+ 	list_for_each_entry_reverse(blkg, &q->blkg_list, q_node) {
+ 		struct blkg_policy_data *pd;
+ 
+@@ -1549,21 +1549,20 @@ retry:
+ 				goto enomem;
+ 		}
+ 
+-		blkg->pd[pol->plid] = pd;
++		spin_lock(&blkg->blkcg->lock);
++
+ 		pd->blkg = blkg;
+ 		pd->plid = pol->plid;
+-		pd->online = false;
+-	}
++		blkg->pd[pol->plid] = pd;
+ 
+-	/* all allocated, init in the same order */
+-	if (pol->pd_init_fn)
+-		list_for_each_entry_reverse(blkg, &q->blkg_list, q_node)
+-			pol->pd_init_fn(blkg->pd[pol->plid]);
++		if (pol->pd_init_fn)
++			pol->pd_init_fn(pd);
+ 
+-	list_for_each_entry_reverse(blkg, &q->blkg_list, q_node) {
+ 		if (pol->pd_online_fn)
+-			pol->pd_online_fn(blkg->pd[pol->plid]);
+-		blkg->pd[pol->plid]->online = true;
++			pol->pd_online_fn(pd);
++		pd->online = true;
++
++		spin_unlock(&blkg->blkcg->lock);
+ 	}
+ 
+ 	__set_bit(pol->plid, q->blkcg_pols);
+@@ -1580,14 +1579,19 @@ out:
+ 	return ret;
+ 
+ enomem:
+-	/* alloc failed, nothing's initialized yet, free everything */
++	/* alloc failed, take down everything */
+ 	spin_lock_irq(&q->queue_lock);
+ 	list_for_each_entry(blkg, &q->blkg_list, q_node) {
+ 		struct blkcg *blkcg = blkg->blkcg;
++		struct blkg_policy_data *pd;
+ 
+ 		spin_lock(&blkcg->lock);
+-		if (blkg->pd[pol->plid]) {
+-			pol->pd_free_fn(blkg->pd[pol->plid]);
++		pd = blkg->pd[pol->plid];
++		if (pd) {
++			if (pd->online && pol->pd_offline_fn)
++				pol->pd_offline_fn(pd);
++			pd->online = false;
++			pol->pd_free_fn(pd);
+ 			blkg->pd[pol->plid] = NULL;
+ 		}
+ 		spin_unlock(&blkcg->lock);
+diff --git a/block/blk-flush.c b/block/blk-flush.c
+index 8220517c2d67d..fdc489e0ea162 100644
+--- a/block/blk-flush.c
++++ b/block/blk-flush.c
+@@ -443,7 +443,7 @@ bool blk_insert_flush(struct request *rq)
+ 		 * the post flush, and then just pass the command on.
+ 		 */
+ 		blk_rq_init_flush(rq);
+-		rq->flush.seq |= REQ_FSEQ_POSTFLUSH;
++		rq->flush.seq |= REQ_FSEQ_PREFLUSH;
+ 		spin_lock_irq(&fq->mq_flush_lock);
+ 		list_move_tail(&rq->flush.list, &fq->flush_data_in_flight);
+ 		spin_unlock_irq(&fq->mq_flush_lock);
+diff --git a/block/blk-map.c b/block/blk-map.c
+index 44d74a30ddac0..8584babf3ea0c 100644
+--- a/block/blk-map.c
++++ b/block/blk-map.c
+@@ -315,12 +315,11 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
+ 					n = bytes;
+ 
+ 				if (!bio_add_hw_page(rq->q, bio, page, n, offs,
+-						     max_sectors, &same_page)) {
+-					if (same_page)
+-						bio_release_page(bio, page);
++						     max_sectors, &same_page))
+ 					break;
+-				}
+ 
++				if (same_page)
++					bio_release_page(bio, page);
+ 				bytes -= n;
+ 				offs = 0;
+ 			}
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index 4dd59059b788e..0046b447268f9 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -830,10 +830,13 @@ EXPORT_SYMBOL(blk_set_queue_depth);
+  */
+ void blk_queue_write_cache(struct request_queue *q, bool wc, bool fua)
+ {
+-	if (wc)
++	if (wc) {
++		blk_queue_flag_set(QUEUE_FLAG_HW_WC, q);
+ 		blk_queue_flag_set(QUEUE_FLAG_WC, q);
+-	else
++	} else {
++		blk_queue_flag_clear(QUEUE_FLAG_HW_WC, q);
+ 		blk_queue_flag_clear(QUEUE_FLAG_WC, q);
++	}
+ 	if (fua)
+ 		blk_queue_flag_set(QUEUE_FLAG_FUA, q);
+ 	else
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index afc797fb0dfc4..63e4812623361 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -449,21 +449,16 @@ static ssize_t queue_wc_show(struct request_queue *q, char *page)
+ static ssize_t queue_wc_store(struct request_queue *q, const char *page,
+ 			      size_t count)
+ {
+-	int set = -1;
+-
+-	if (!strncmp(page, "write back", 10))
+-		set = 1;
+-	else if (!strncmp(page, "write through", 13) ||
+-		 !strncmp(page, "none", 4))
+-		set = 0;
+-
+-	if (set == -1)
+-		return -EINVAL;
+-
+-	if (set)
++	if (!strncmp(page, "write back", 10)) {
++		if (!test_bit(QUEUE_FLAG_HW_WC, &q->queue_flags))
++			return -EINVAL;
+ 		blk_queue_flag_set(QUEUE_FLAG_WC, q);
+-	else
++	} else if (!strncmp(page, "write through", 13) ||
++		 !strncmp(page, "none", 4)) {
+ 		blk_queue_flag_clear(QUEUE_FLAG_WC, q);
++	} else {
++		return -EINVAL;
++	}
+ 
+ 	return count;
+ }
+diff --git a/block/blk.h b/block/blk.h
+index 608c5dcc516b5..b0dbbc4055966 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -76,6 +76,10 @@ struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
+ 		gfp_t gfp_mask);
+ void bvec_free(mempool_t *pool, struct bio_vec *bv, unsigned short nr_vecs);
+ 
++bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
++		struct page *page, unsigned len, unsigned offset,
++		bool *same_page);
++
+ static inline bool biovec_phys_mergeable(struct request_queue *q,
+ 		struct bio_vec *vec1, struct bio_vec *vec2)
+ {
+diff --git a/block/ioctl.c b/block/ioctl.c
+index 3be11941fb2dd..9fcddd847937e 100644
+--- a/block/ioctl.c
++++ b/block/ioctl.c
+@@ -20,6 +20,8 @@ static int blkpg_do_ioctl(struct block_device *bdev,
+ 	struct blkpg_partition p;
+ 	long long start, length;
+ 
++	if (disk->flags & GENHD_FL_NO_PART)
++		return -EINVAL;
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EACCES;
+ 	if (copy_from_user(&p, upart, sizeof(struct blkpg_partition)))
+diff --git a/block/mq-deadline.c b/block/mq-deadline.c
+index 02a916ba62ee7..f958e79277b8b 100644
+--- a/block/mq-deadline.c
++++ b/block/mq-deadline.c
+@@ -646,8 +646,9 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
+ 	struct request_queue *q = hctx->queue;
+ 	struct deadline_data *dd = q->elevator->elevator_data;
+ 	struct blk_mq_tags *tags = hctx->sched_tags;
++	unsigned int shift = tags->bitmap_tags.sb.shift;
+ 
+-	dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
++	dd->async_depth = max(1U, 3 * (1U << shift)  / 4);
+ 
+ 	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
+ }
+diff --git a/crypto/af_alg.c b/crypto/af_alg.c
+index 10efb56d8b481..ea6fb8e89d065 100644
+--- a/crypto/af_alg.c
++++ b/crypto/af_alg.c
+@@ -320,18 +320,21 @@ static int alg_setkey_by_key_serial(struct alg_sock *ask, sockptr_t optval,
+ 
+ 	if (IS_ERR(ret)) {
+ 		up_read(&key->sem);
++		key_put(key);
+ 		return PTR_ERR(ret);
+ 	}
+ 
+ 	key_data = sock_kmalloc(&ask->sk, key_datalen, GFP_KERNEL);
+ 	if (!key_data) {
+ 		up_read(&key->sem);
++		key_put(key);
+ 		return -ENOMEM;
+ 	}
+ 
+ 	memcpy(key_data, ret, key_datalen);
+ 
+ 	up_read(&key->sem);
++	key_put(key);
+ 
+ 	err = type->setkey(ask->private, key_data, key_datalen);
+ 
+@@ -1192,6 +1195,7 @@ struct af_alg_async_req *af_alg_alloc_areq(struct sock *sk,
+ 
+ 	areq->areqlen = areqlen;
+ 	areq->sk = sk;
++	areq->first_rsgl.sgl.sgt.sgl = areq->first_rsgl.sgl.sgl;
+ 	areq->last_rsgl = NULL;
+ 	INIT_LIST_HEAD(&areq->rsgl_list);
+ 	areq->tsgl = NULL;
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index 5e7cd603d489c..4fe95c4480473 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -17,6 +17,7 @@
+ #include <linux/rtnetlink.h>
+ #include <linux/slab.h>
+ #include <linux/string.h>
++#include <linux/workqueue.h>
+ 
+ #include "internal.h"
+ 
+@@ -74,15 +75,26 @@ static void crypto_free_instance(struct crypto_instance *inst)
+ 	inst->alg.cra_type->free(inst);
+ }
+ 
+-static void crypto_destroy_instance(struct crypto_alg *alg)
++static void crypto_destroy_instance_workfn(struct work_struct *w)
+ {
+-	struct crypto_instance *inst = (void *)alg;
++	struct crypto_instance *inst = container_of(w, struct crypto_instance,
++						    free_work);
+ 	struct crypto_template *tmpl = inst->tmpl;
+ 
+ 	crypto_free_instance(inst);
+ 	crypto_tmpl_put(tmpl);
+ }
+ 
++static void crypto_destroy_instance(struct crypto_alg *alg)
++{
++	struct crypto_instance *inst = container_of(alg,
++						    struct crypto_instance,
++						    alg);
++
++	INIT_WORK(&inst->free_work, crypto_destroy_instance_workfn);
++	schedule_work(&inst->free_work);
++}
++
+ /*
+  * This function adds a spawn to the list secondary_spawns which
+  * will be used at the end of crypto_remove_spawns to unregister
+diff --git a/crypto/asymmetric_keys/x509_public_key.c b/crypto/asymmetric_keys/x509_public_key.c
+index 6fdfc82e23a8a..7c71db3ac23d4 100644
+--- a/crypto/asymmetric_keys/x509_public_key.c
++++ b/crypto/asymmetric_keys/x509_public_key.c
+@@ -130,6 +130,11 @@ int x509_check_for_self_signed(struct x509_certificate *cert)
+ 			goto out;
+ 	}
+ 
++	if (cert->unsupported_sig) {
++		ret = 0;
++		goto out;
++	}
++
+ 	ret = public_key_verify_signature(cert->pub, cert->sig);
+ 	if (ret < 0) {
+ 		if (ret == -ENOPKG) {
+diff --git a/drivers/acpi/x86/s2idle.c b/drivers/acpi/x86/s2idle.c
+index ce62e61a9605e..60cc4605169c5 100644
+--- a/drivers/acpi/x86/s2idle.c
++++ b/drivers/acpi/x86/s2idle.c
+@@ -123,17 +123,16 @@ static void lpi_device_get_constraints_amd(void)
+ 			acpi_handle_debug(lps0_device_handle,
+ 					  "LPI: constraints list begin:\n");
+ 
+-			for (j = 0; j < package->package.count; ++j) {
++			for (j = 0; j < package->package.count; j++) {
+ 				union acpi_object *info_obj = &package->package.elements[j];
+ 				struct lpi_device_constraint_amd dev_info = {};
+ 				struct lpi_constraints *list;
+ 				acpi_status status;
+ 
+-				for (k = 0; k < info_obj->package.count; ++k) {
+-					union acpi_object *obj = &info_obj->package.elements[k];
++				list = &lpi_constraints_table[lpi_constraints_table_size];
+ 
+-					list = &lpi_constraints_table[lpi_constraints_table_size];
+-					list->min_dstate = -1;
++				for (k = 0; k < info_obj->package.count; k++) {
++					union acpi_object *obj = &info_obj->package.elements[k];
+ 
+ 					switch (k) {
+ 					case 0:
+@@ -149,27 +148,21 @@ static void lpi_device_get_constraints_amd(void)
+ 						dev_info.min_dstate = obj->integer.value;
+ 						break;
+ 					}
++				}
+ 
+-					if (!dev_info.enabled || !dev_info.name ||
+-					    !dev_info.min_dstate)
+-						continue;
++				if (!dev_info.enabled || !dev_info.name ||
++				    !dev_info.min_dstate)
++					continue;
+ 
+-					status = acpi_get_handle(NULL, dev_info.name,
+-								 &list->handle);
+-					if (ACPI_FAILURE(status))
+-						continue;
++				status = acpi_get_handle(NULL, dev_info.name, &list->handle);
++				if (ACPI_FAILURE(status))
++					continue;
+ 
+-					acpi_handle_debug(lps0_device_handle,
+-							  "Name:%s\n", dev_info.name);
++				acpi_handle_debug(lps0_device_handle,
++						  "Name:%s\n", dev_info.name);
+ 
+-					list->min_dstate = dev_info.min_dstate;
++				list->min_dstate = dev_info.min_dstate;
+ 
+-					if (list->min_dstate < 0) {
+-						acpi_handle_debug(lps0_device_handle,
+-								  "Incomplete constraint defined\n");
+-						continue;
+-					}
+-				}
+ 				lpi_constraints_table_size++;
+ 			}
+ 		}
+@@ -214,7 +207,7 @@ static void lpi_device_get_constraints(void)
+ 		if (!package)
+ 			continue;
+ 
+-		for (j = 0; j < package->package.count; ++j) {
++		for (j = 0; j < package->package.count; j++) {
+ 			union acpi_object *element =
+ 					&(package->package.elements[j]);
+ 
+@@ -246,7 +239,7 @@ static void lpi_device_get_constraints(void)
+ 
+ 		constraint->min_dstate = -1;
+ 
+-		for (j = 0; j < package_count; ++j) {
++		for (j = 0; j < package_count; j++) {
+ 			union acpi_object *info_obj = &info.package[j];
+ 			union acpi_object *cnstr_pkg;
+ 			union acpi_object *obj;
+diff --git a/drivers/amba/bus.c b/drivers/amba/bus.c
+index ce88af9eb562f..09e72967b8abf 100644
+--- a/drivers/amba/bus.c
++++ b/drivers/amba/bus.c
+@@ -528,6 +528,7 @@ static void amba_device_release(struct device *dev)
+ {
+ 	struct amba_device *d = to_amba_device(dev);
+ 
++	of_node_put(d->dev.of_node);
+ 	if (d->res.parent)
+ 		release_resource(&d->res);
+ 	mutex_destroy(&d->periphid_lock);
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 3dff5037943e0..6ceaf50f5a671 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -3817,6 +3817,17 @@ void device_del(struct device *dev)
+ 	device_platform_notify_remove(dev);
+ 	device_links_purge(dev);
+ 
++	/*
++	 * If a device does not have a driver attached, we need to clean
++	 * up any managed resources. We do this in device_release(), but
++	 * it's never called (and we leak the device) if a managed
++	 * resource holds a reference to the device. So release all
++	 * managed resources here, like we do in driver_detach(). We
++	 * still need to do so again in device_release() in case someone
++	 * adds a new resource after this point, though.
++	 */
++	devres_release_all(dev);
++
+ 	bus_notify(dev, BUS_NOTIFY_REMOVED_DEVICE);
+ 	kobject_uevent(&dev->kobj, KOBJ_REMOVE);
+ 	glue_dir = get_glue_dir(dev);
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 878aa7646b37e..a528cec24264a 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -693,6 +693,8 @@ re_probe:
+ 
+ 		device_remove(dev);
+ 		driver_sysfs_remove(dev);
++		if (dev->bus && dev->bus->dma_cleanup)
++			dev->bus->dma_cleanup(dev);
+ 		device_unbind_cleanup(dev);
+ 
+ 		goto re_probe;
+diff --git a/drivers/base/regmap/regcache-maple.c b/drivers/base/regmap/regcache-maple.c
+index 283c2e02a2985..41edd6a430eb4 100644
+--- a/drivers/base/regmap/regcache-maple.c
++++ b/drivers/base/regmap/regcache-maple.c
+@@ -74,7 +74,7 @@ static int regcache_maple_write(struct regmap *map, unsigned int reg,
+ 	rcu_read_unlock();
+ 
+ 	entry = kmalloc((last - index + 1) * sizeof(unsigned long),
+-			GFP_KERNEL);
++			map->alloc_flags);
+ 	if (!entry)
+ 		return -ENOMEM;
+ 
+@@ -92,7 +92,7 @@ static int regcache_maple_write(struct regmap *map, unsigned int reg,
+ 	mas_lock(&mas);
+ 
+ 	mas_set_range(&mas, index, last);
+-	ret = mas_store_gfp(&mas, entry, GFP_KERNEL);
++	ret = mas_store_gfp(&mas, entry, map->alloc_flags);
+ 
+ 	mas_unlock(&mas);
+ 
+@@ -134,7 +134,7 @@ static int regcache_maple_drop(struct regmap *map, unsigned int min,
+ 
+ 			lower = kmemdup(entry, ((min - mas.index) *
+ 						sizeof(unsigned long)),
+-					GFP_KERNEL);
++					map->alloc_flags);
+ 			if (!lower) {
+ 				ret = -ENOMEM;
+ 				goto out_unlocked;
+@@ -148,7 +148,7 @@ static int regcache_maple_drop(struct regmap *map, unsigned int min,
+ 			upper = kmemdup(&entry[max + 1],
+ 					((mas.last - max) *
+ 					 sizeof(unsigned long)),
+-					GFP_KERNEL);
++					map->alloc_flags);
+ 			if (!upper) {
+ 				ret = -ENOMEM;
+ 				goto out_unlocked;
+@@ -162,7 +162,7 @@ static int regcache_maple_drop(struct regmap *map, unsigned int min,
+ 		/* Insert new nodes with the saved data */
+ 		if (lower) {
+ 			mas_set_range(&mas, lower_index, lower_last);
+-			ret = mas_store_gfp(&mas, lower, GFP_KERNEL);
++			ret = mas_store_gfp(&mas, lower, map->alloc_flags);
+ 			if (ret != 0)
+ 				goto out;
+ 			lower = NULL;
+@@ -170,7 +170,7 @@ static int regcache_maple_drop(struct regmap *map, unsigned int min,
+ 
+ 		if (upper) {
+ 			mas_set_range(&mas, upper_index, upper_last);
+-			ret = mas_store_gfp(&mas, upper, GFP_KERNEL);
++			ret = mas_store_gfp(&mas, upper, map->alloc_flags);
+ 			if (ret != 0)
+ 				goto out;
+ 			upper = NULL;
+@@ -320,7 +320,7 @@ static int regcache_maple_insert_block(struct regmap *map, int first,
+ 	unsigned long *entry;
+ 	int i, ret;
+ 
+-	entry = kcalloc(last - first + 1, sizeof(unsigned long), GFP_KERNEL);
++	entry = kcalloc(last - first + 1, sizeof(unsigned long), map->alloc_flags);
+ 	if (!entry)
+ 		return -ENOMEM;
+ 
+@@ -331,7 +331,7 @@ static int regcache_maple_insert_block(struct regmap *map, int first,
+ 
+ 	mas_set_range(&mas, map->reg_defaults[first].reg,
+ 		      map->reg_defaults[last].reg);
+-	ret = mas_store_gfp(&mas, entry, GFP_KERNEL);
++	ret = mas_store_gfp(&mas, entry, map->alloc_flags);
+ 
+ 	mas_unlock(&mas);
+ 
+diff --git a/drivers/base/regmap/regcache-rbtree.c b/drivers/base/regmap/regcache-rbtree.c
+index 584bcc55f56e3..06788965aa293 100644
+--- a/drivers/base/regmap/regcache-rbtree.c
++++ b/drivers/base/regmap/regcache-rbtree.c
+@@ -277,7 +277,7 @@ static int regcache_rbtree_insert_to_block(struct regmap *map,
+ 
+ 	blk = krealloc(rbnode->block,
+ 		       blklen * map->cache_word_size,
+-		       GFP_KERNEL);
++		       map->alloc_flags);
+ 	if (!blk)
+ 		return -ENOMEM;
+ 
+@@ -286,7 +286,7 @@ static int regcache_rbtree_insert_to_block(struct regmap *map,
+ 	if (BITS_TO_LONGS(blklen) > BITS_TO_LONGS(rbnode->blklen)) {
+ 		present = krealloc(rbnode->cache_present,
+ 				   BITS_TO_LONGS(blklen) * sizeof(*present),
+-				   GFP_KERNEL);
++				   map->alloc_flags);
+ 		if (!present)
+ 			return -ENOMEM;
+ 
+@@ -320,7 +320,7 @@ regcache_rbtree_node_alloc(struct regmap *map, unsigned int reg)
+ 	const struct regmap_range *range;
+ 	int i;
+ 
+-	rbnode = kzalloc(sizeof(*rbnode), GFP_KERNEL);
++	rbnode = kzalloc(sizeof(*rbnode), map->alloc_flags);
+ 	if (!rbnode)
+ 		return NULL;
+ 
+@@ -346,13 +346,13 @@ regcache_rbtree_node_alloc(struct regmap *map, unsigned int reg)
+ 	}
+ 
+ 	rbnode->block = kmalloc_array(rbnode->blklen, map->cache_word_size,
+-				      GFP_KERNEL);
++				      map->alloc_flags);
+ 	if (!rbnode->block)
+ 		goto err_free;
+ 
+ 	rbnode->cache_present = kcalloc(BITS_TO_LONGS(rbnode->blklen),
+ 					sizeof(*rbnode->cache_present),
+-					GFP_KERNEL);
++					map->alloc_flags);
+ 	if (!rbnode->cache_present)
+ 		goto err_free_block;
+ 
+diff --git a/drivers/base/test/test_async_driver_probe.c b/drivers/base/test/test_async_driver_probe.c
+index 929410d0dd6fe..3465800baa6c8 100644
+--- a/drivers/base/test/test_async_driver_probe.c
++++ b/drivers/base/test/test_async_driver_probe.c
+@@ -84,7 +84,7 @@ test_platform_device_register_node(char *name, int id, int nid)
+ 
+ 	pdev = platform_device_alloc(name, id);
+ 	if (!pdev)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	if (nid != NUMA_NO_NODE)
+ 		set_dev_node(&pdev->dev, nid);
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 79ab532aabafb..6bc86106c7b2a 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -1557,7 +1557,7 @@ static int _drbd_send_page(struct drbd_peer_device *peer_device, struct page *pa
+ 	do {
+ 		int sent;
+ 
+-		bvec_set_page(&bvec, page, offset, len);
++		bvec_set_page(&bvec, page, len, offset);
+ 		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, len);
+ 
+ 		sent = sock_sendmsg(socket, &msg);
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index d9349ba48281e..7ba60151a16a6 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -2658,6 +2658,9 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 			set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED,
+ 				&hdev->quirks);
+ 
++			/* These variants don't seem to support LE Coded PHY */
++			set_bit(HCI_QUIRK_BROKEN_LE_CODED, &hdev->quirks);
++
+ 			/* Setup MSFT Extension support */
+ 			btintel_set_msft_opcode(hdev, ver.hw_variant);
+ 
+@@ -2729,6 +2732,9 @@ static int btintel_setup_combined(struct hci_dev *hdev)
+ 		 */
+ 		set_bit(HCI_QUIRK_WIDEBAND_SPEECH_SUPPORTED, &hdev->quirks);
+ 
++		/* These variants don't seem to support LE Coded PHY */
++		set_bit(HCI_QUIRK_BROKEN_LE_CODED, &hdev->quirks);
++
+ 		/* Set Valid LE States quirk */
+ 		set_bit(HCI_QUIRK_VALID_LE_STATES, &hdev->quirks);
+ 
+diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c
+index d978e7cea8731..c06a04080cd75 100644
+--- a/drivers/bluetooth/btrtl.c
++++ b/drivers/bluetooth/btrtl.c
+@@ -101,21 +101,21 @@ static const struct id_table ic_id_table[] = {
+ 	{ IC_INFO(RTL_ROM_LMP_8723A, 0xb, 0x6, HCI_USB),
+ 	  .config_needed = false,
+ 	  .has_rom_version = false,
+-	  .fw_name = "rtl_bt/rtl8723a_fw.bin",
++	  .fw_name = "rtl_bt/rtl8723a_fw",
+ 	  .cfg_name = NULL },
+ 
+ 	/* 8723BS */
+ 	{ IC_INFO(RTL_ROM_LMP_8723B, 0xb, 0x6, HCI_UART),
+ 	  .config_needed = true,
+ 	  .has_rom_version = true,
+-	  .fw_name  = "rtl_bt/rtl8723bs_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8723bs_fw",
+ 	  .cfg_name = "rtl_bt/rtl8723bs_config" },
+ 
+ 	/* 8723B */
+ 	{ IC_INFO(RTL_ROM_LMP_8723B, 0xb, 0x6, HCI_USB),
+ 	  .config_needed = false,
+ 	  .has_rom_version = true,
+-	  .fw_name  = "rtl_bt/rtl8723b_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8723b_fw",
+ 	  .cfg_name = "rtl_bt/rtl8723b_config" },
+ 
+ 	/* 8723CS-CG */
+@@ -126,7 +126,7 @@ static const struct id_table ic_id_table[] = {
+ 	  .hci_bus = HCI_UART,
+ 	  .config_needed = true,
+ 	  .has_rom_version = true,
+-	  .fw_name  = "rtl_bt/rtl8723cs_cg_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8723cs_cg_fw",
+ 	  .cfg_name = "rtl_bt/rtl8723cs_cg_config" },
+ 
+ 	/* 8723CS-VF */
+@@ -137,7 +137,7 @@ static const struct id_table ic_id_table[] = {
+ 	  .hci_bus = HCI_UART,
+ 	  .config_needed = true,
+ 	  .has_rom_version = true,
+-	  .fw_name  = "rtl_bt/rtl8723cs_vf_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8723cs_vf_fw",
+ 	  .cfg_name = "rtl_bt/rtl8723cs_vf_config" },
+ 
+ 	/* 8723CS-XX */
+@@ -148,28 +148,28 @@ static const struct id_table ic_id_table[] = {
+ 	  .hci_bus = HCI_UART,
+ 	  .config_needed = true,
+ 	  .has_rom_version = true,
+-	  .fw_name  = "rtl_bt/rtl8723cs_xx_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8723cs_xx_fw",
+ 	  .cfg_name = "rtl_bt/rtl8723cs_xx_config" },
+ 
+ 	/* 8723D */
+ 	{ IC_INFO(RTL_ROM_LMP_8723B, 0xd, 0x8, HCI_USB),
+ 	  .config_needed = true,
+ 	  .has_rom_version = true,
+-	  .fw_name  = "rtl_bt/rtl8723d_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8723d_fw",
+ 	  .cfg_name = "rtl_bt/rtl8723d_config" },
+ 
+ 	/* 8723DS */
+ 	{ IC_INFO(RTL_ROM_LMP_8723B, 0xd, 0x8, HCI_UART),
+ 	  .config_needed = true,
+ 	  .has_rom_version = true,
+-	  .fw_name  = "rtl_bt/rtl8723ds_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8723ds_fw",
+ 	  .cfg_name = "rtl_bt/rtl8723ds_config" },
+ 
+ 	/* 8821A */
+ 	{ IC_INFO(RTL_ROM_LMP_8821A, 0xa, 0x6, HCI_USB),
+ 	  .config_needed = false,
+ 	  .has_rom_version = true,
+-	  .fw_name  = "rtl_bt/rtl8821a_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8821a_fw",
+ 	  .cfg_name = "rtl_bt/rtl8821a_config" },
+ 
+ 	/* 8821C */
+@@ -177,7 +177,7 @@ static const struct id_table ic_id_table[] = {
+ 	  .config_needed = false,
+ 	  .has_rom_version = true,
+ 	  .has_msft_ext = true,
+-	  .fw_name  = "rtl_bt/rtl8821c_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8821c_fw",
+ 	  .cfg_name = "rtl_bt/rtl8821c_config" },
+ 
+ 	/* 8821CS */
+@@ -185,14 +185,14 @@ static const struct id_table ic_id_table[] = {
+ 	  .config_needed = true,
+ 	  .has_rom_version = true,
+ 	  .has_msft_ext = true,
+-	  .fw_name  = "rtl_bt/rtl8821cs_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8821cs_fw",
+ 	  .cfg_name = "rtl_bt/rtl8821cs_config" },
+ 
+ 	/* 8761A */
+ 	{ IC_INFO(RTL_ROM_LMP_8761A, 0xa, 0x6, HCI_USB),
+ 	  .config_needed = false,
+ 	  .has_rom_version = true,
+-	  .fw_name  = "rtl_bt/rtl8761a_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8761a_fw",
+ 	  .cfg_name = "rtl_bt/rtl8761a_config" },
+ 
+ 	/* 8761B */
+@@ -200,14 +200,14 @@ static const struct id_table ic_id_table[] = {
+ 	  .config_needed = false,
+ 	  .has_rom_version = true,
+ 	  .has_msft_ext = true,
+-	  .fw_name  = "rtl_bt/rtl8761b_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8761b_fw",
+ 	  .cfg_name = "rtl_bt/rtl8761b_config" },
+ 
+ 	/* 8761BU */
+ 	{ IC_INFO(RTL_ROM_LMP_8761A, 0xb, 0xa, HCI_USB),
+ 	  .config_needed = false,
+ 	  .has_rom_version = true,
+-	  .fw_name  = "rtl_bt/rtl8761bu_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8761bu_fw",
+ 	  .cfg_name = "rtl_bt/rtl8761bu_config" },
+ 
+ 	/* 8822C with UART interface */
+@@ -215,7 +215,7 @@ static const struct id_table ic_id_table[] = {
+ 	  .config_needed = true,
+ 	  .has_rom_version = true,
+ 	  .has_msft_ext = true,
+-	  .fw_name  = "rtl_bt/rtl8822cs_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8822cs_fw",
+ 	  .cfg_name = "rtl_bt/rtl8822cs_config" },
+ 
+ 	/* 8822C with UART interface */
+@@ -223,7 +223,7 @@ static const struct id_table ic_id_table[] = {
+ 	  .config_needed = true,
+ 	  .has_rom_version = true,
+ 	  .has_msft_ext = true,
+-	  .fw_name  = "rtl_bt/rtl8822cs_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8822cs_fw",
+ 	  .cfg_name = "rtl_bt/rtl8822cs_config" },
+ 
+ 	/* 8822C with USB interface */
+@@ -231,7 +231,7 @@ static const struct id_table ic_id_table[] = {
+ 	  .config_needed = false,
+ 	  .has_rom_version = true,
+ 	  .has_msft_ext = true,
+-	  .fw_name  = "rtl_bt/rtl8822cu_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8822cu_fw",
+ 	  .cfg_name = "rtl_bt/rtl8822cu_config" },
+ 
+ 	/* 8822B */
+@@ -239,7 +239,7 @@ static const struct id_table ic_id_table[] = {
+ 	  .config_needed = true,
+ 	  .has_rom_version = true,
+ 	  .has_msft_ext = true,
+-	  .fw_name  = "rtl_bt/rtl8822b_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8822b_fw",
+ 	  .cfg_name = "rtl_bt/rtl8822b_config" },
+ 
+ 	/* 8852A */
+@@ -247,7 +247,7 @@ static const struct id_table ic_id_table[] = {
+ 	  .config_needed = false,
+ 	  .has_rom_version = true,
+ 	  .has_msft_ext = true,
+-	  .fw_name  = "rtl_bt/rtl8852au_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8852au_fw",
+ 	  .cfg_name = "rtl_bt/rtl8852au_config" },
+ 
+ 	/* 8852B with UART interface */
+@@ -255,7 +255,7 @@ static const struct id_table ic_id_table[] = {
+ 	  .config_needed = true,
+ 	  .has_rom_version = true,
+ 	  .has_msft_ext = true,
+-	  .fw_name  = "rtl_bt/rtl8852bs_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8852bs_fw",
+ 	  .cfg_name = "rtl_bt/rtl8852bs_config" },
+ 
+ 	/* 8852B */
+@@ -263,7 +263,7 @@ static const struct id_table ic_id_table[] = {
+ 	  .config_needed = false,
+ 	  .has_rom_version = true,
+ 	  .has_msft_ext = true,
+-	  .fw_name  = "rtl_bt/rtl8852bu_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8852bu_fw",
+ 	  .cfg_name = "rtl_bt/rtl8852bu_config" },
+ 
+ 	/* 8852C */
+@@ -271,7 +271,7 @@ static const struct id_table ic_id_table[] = {
+ 	  .config_needed = false,
+ 	  .has_rom_version = true,
+ 	  .has_msft_ext = true,
+-	  .fw_name  = "rtl_bt/rtl8852cu_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8852cu_fw",
+ 	  .cfg_name = "rtl_bt/rtl8852cu_config" },
+ 
+ 	/* 8851B */
+@@ -279,7 +279,7 @@ static const struct id_table ic_id_table[] = {
+ 	  .config_needed = false,
+ 	  .has_rom_version = true,
+ 	  .has_msft_ext = false,
+-	  .fw_name  = "rtl_bt/rtl8851bu_fw.bin",
++	  .fw_name  = "rtl_bt/rtl8851bu_fw",
+ 	  .cfg_name = "rtl_bt/rtl8851bu_config" },
+ 	};
+ 
+@@ -967,6 +967,7 @@ struct btrtl_device_info *btrtl_initialize(struct hci_dev *hdev,
+ 	struct btrtl_device_info *btrtl_dev;
+ 	struct sk_buff *skb;
+ 	struct hci_rp_read_local_version *resp;
++	char fw_name[40];
+ 	char cfg_name[40];
+ 	u16 hci_rev, lmp_subver;
+ 	u8 hci_ver, lmp_ver, chip_type = 0;
+@@ -1079,8 +1080,26 @@ next:
+ 			goto err_free;
+ 	}
+ 
+-	btrtl_dev->fw_len = rtl_load_file(hdev, btrtl_dev->ic_info->fw_name,
+-					  &btrtl_dev->fw_data);
++	if (!btrtl_dev->ic_info->fw_name) {
++		ret = -ENOMEM;
++		goto err_free;
++	}
++
++	btrtl_dev->fw_len = -EIO;
++	if (lmp_subver == RTL_ROM_LMP_8852A && hci_rev == 0x000c) {
++		snprintf(fw_name, sizeof(fw_name), "%s_v2.bin",
++				btrtl_dev->ic_info->fw_name);
++		btrtl_dev->fw_len = rtl_load_file(hdev, fw_name,
++				&btrtl_dev->fw_data);
++	}
++
++	if (btrtl_dev->fw_len < 0) {
++		snprintf(fw_name, sizeof(fw_name), "%s.bin",
++				btrtl_dev->ic_info->fw_name);
++		btrtl_dev->fw_len = rtl_load_file(hdev, fw_name,
++				&btrtl_dev->fw_data);
++	}
++
+ 	if (btrtl_dev->fw_len < 0) {
+ 		rtl_dev_err(hdev, "firmware file %s not found",
+ 			    btrtl_dev->ic_info->fw_name);
+@@ -1180,6 +1199,10 @@ void btrtl_set_quirks(struct hci_dev *hdev, struct btrtl_device_info *btrtl_dev)
+ 		if (btrtl_dev->project_id == CHIP_ID_8852C)
+ 			btrealtek_set_flag(hdev, REALTEK_ALT6_CONTINUOUS_TX_CHIP);
+ 
++		if (btrtl_dev->project_id == CHIP_ID_8852A ||
++		    btrtl_dev->project_id == CHIP_ID_8852C)
++			set_bit(HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER, &hdev->quirks);
++
+ 		hci_set_aosp_capable(hdev);
+ 		break;
+ 	default:
+@@ -1398,4 +1421,5 @@ MODULE_FIRMWARE("rtl_bt/rtl8852bs_config.bin");
+ MODULE_FIRMWARE("rtl_bt/rtl8852bu_fw.bin");
+ MODULE_FIRMWARE("rtl_bt/rtl8852bu_config.bin");
+ MODULE_FIRMWARE("rtl_bt/rtl8852cu_fw.bin");
++MODULE_FIRMWARE("rtl_bt/rtl8852cu_fw_v2.bin");
+ MODULE_FIRMWARE("rtl_bt/rtl8852cu_config.bin");
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 764d176e97351..e685acc5cacd9 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2079,7 +2079,7 @@ static int btusb_switch_alt_setting(struct hci_dev *hdev, int new_alts)
+ 		 * alternate setting.
+ 		 */
+ 		spin_lock_irqsave(&data->rxlock, flags);
+-		kfree_skb(data->sco_skb);
++		dev_kfree_skb_irq(data->sco_skb);
+ 		data->sco_skb = NULL;
+ 		spin_unlock_irqrestore(&data->rxlock, flags);
+ 
+diff --git a/drivers/bluetooth/hci_nokia.c b/drivers/bluetooth/hci_nokia.c
+index 05f7f6de6863d..97da0b2bfd17e 100644
+--- a/drivers/bluetooth/hci_nokia.c
++++ b/drivers/bluetooth/hci_nokia.c
+@@ -734,7 +734,11 @@ static int nokia_bluetooth_serdev_probe(struct serdev_device *serdev)
+ 		return err;
+ 	}
+ 
+-	clk_prepare_enable(sysclk);
++	err = clk_prepare_enable(sysclk);
++	if (err) {
++		dev_err(dev, "could not enable sysclk: %d", err);
++		return err;
++	}
+ 	btdev->sysclk_speed = clk_get_rate(sysclk);
+ 	clk_disable_unprepare(sysclk);
+ 
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 4cb23b9e06ea4..c95fa4335fee2 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -3106,7 +3106,7 @@ static int sysc_init_static_data(struct sysc *ddata)
+ 
+ 	match = soc_device_match(sysc_soc_match);
+ 	if (match && match->data)
+-		sysc_soc->soc = (int)match->data;
++		sysc_soc->soc = (enum sysc_soc)(uintptr_t)match->data;
+ 
+ 	/*
+ 	 * Check and warn about possible old incomplete dtb. We now want to see
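The ti-sysc cast change avoids truncating a pointer straight to int; widening through uintptr_t first is the portable way to carry a small enum in a match table's void *data. A generic illustration with invented names (the real driver matches on SoC attributes rather than an of_device_id, but the cast idiom is the same):

#include <linux/of.h>

enum demo_soc { DEMO_SOC_A, DEMO_SOC_B };

static const struct of_device_id demo_match[] = {
	{ .compatible = "vendor,demo-a", .data = (void *)(uintptr_t)DEMO_SOC_A },
	{ .compatible = "vendor,demo-b", .data = (void *)(uintptr_t)DEMO_SOC_B },
	{ /* sentinel */ }
};

/* Widen to uintptr_t first, then convert to the enum type. */
static enum demo_soc demo_soc_from_match(const struct of_device_id *match)
{
	return (enum demo_soc)(uintptr_t)match->data;
}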
+diff --git a/drivers/char/hw_random/iproc-rng200.c b/drivers/char/hw_random/iproc-rng200.c
+index 06bc060534d81..c0df053cbe4b2 100644
+--- a/drivers/char/hw_random/iproc-rng200.c
++++ b/drivers/char/hw_random/iproc-rng200.c
+@@ -182,6 +182,8 @@ static int iproc_rng200_probe(struct platform_device *pdev)
+ 		return PTR_ERR(priv->base);
+ 	}
+ 
++	dev_set_drvdata(dev, priv);
++
+ 	priv->rng.name = "iproc-rng200";
+ 	priv->rng.read = iproc_rng200_read;
+ 	priv->rng.init = iproc_rng200_init;
+@@ -199,6 +201,28 @@ static int iproc_rng200_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++static int __maybe_unused iproc_rng200_suspend(struct device *dev)
++{
++	struct iproc_rng200_dev *priv = dev_get_drvdata(dev);
++
++	iproc_rng200_cleanup(&priv->rng);
++
++	return 0;
++}
++
++static int __maybe_unused iproc_rng200_resume(struct device *dev)
++{
++	struct iproc_rng200_dev *priv =  dev_get_drvdata(dev);
++
++	iproc_rng200_init(&priv->rng);
++
++	return 0;
++}
++
++static const struct dev_pm_ops iproc_rng200_pm_ops = {
++	SET_SYSTEM_SLEEP_PM_OPS(iproc_rng200_suspend, iproc_rng200_resume)
++};
++
+ static const struct of_device_id iproc_rng200_of_match[] = {
+ 	{ .compatible = "brcm,bcm2711-rng200", },
+ 	{ .compatible = "brcm,bcm7211-rng200", },
+@@ -212,6 +236,7 @@ static struct platform_driver iproc_rng200_driver = {
+ 	.driver = {
+ 		.name		= "iproc-rng200",
+ 		.of_match_table = iproc_rng200_of_match,
++		.pm		= &iproc_rng200_pm_ops,
+ 	},
+ 	.probe		= iproc_rng200_probe,
+ };
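The dev_set_drvdata() call added to the iproc-rng200 probe is what lets the new suspend/resume handlers retrieve their state via dev_get_drvdata(). The general shape of that pairing, with placeholder names:

#include <linux/device.h>
#include <linux/pm.h>

struct demo_priv { int state; };	/* placeholder private data */

static int __maybe_unused demo_suspend(struct device *dev)
{
	struct demo_priv *priv = dev_get_drvdata(dev);	/* set in probe */

	priv->state = 0;	/* quiesce the hardware here */
	return 0;
}

static int __maybe_unused demo_resume(struct device *dev)
{
	struct demo_priv *priv = dev_get_drvdata(dev);

	priv->state = 1;	/* bring the hardware back up here */
	return 0;
}

static const struct dev_pm_ops demo_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(demo_suspend, demo_resume)
};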
+diff --git a/drivers/char/hw_random/nomadik-rng.c b/drivers/char/hw_random/nomadik-rng.c
+index e8f9621e79541..3774adf903a83 100644
+--- a/drivers/char/hw_random/nomadik-rng.c
++++ b/drivers/char/hw_random/nomadik-rng.c
+@@ -13,8 +13,6 @@
+ #include <linux/clk.h>
+ #include <linux/err.h>
+ 
+-static struct clk *rng_clk;
+-
+ static int nmk_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
+ {
+ 	void __iomem *base = (void __iomem *)rng->priv;
+@@ -36,21 +34,20 @@ static struct hwrng nmk_rng = {
+ 
+ static int nmk_rng_probe(struct amba_device *dev, const struct amba_id *id)
+ {
++	struct clk *rng_clk;
+ 	void __iomem *base;
+ 	int ret;
+ 
+-	rng_clk = devm_clk_get(&dev->dev, NULL);
++	rng_clk = devm_clk_get_enabled(&dev->dev, NULL);
+ 	if (IS_ERR(rng_clk)) {
+ 		dev_err(&dev->dev, "could not get rng clock\n");
+ 		ret = PTR_ERR(rng_clk);
+ 		return ret;
+ 	}
+ 
+-	clk_prepare_enable(rng_clk);
+-
+ 	ret = amba_request_regions(dev, dev->dev.init_name);
+ 	if (ret)
+-		goto out_clk;
++		return ret;
+ 	ret = -ENOMEM;
+ 	base = devm_ioremap(&dev->dev, dev->res.start,
+ 			    resource_size(&dev->res));
+@@ -64,15 +61,12 @@ static int nmk_rng_probe(struct amba_device *dev, const struct amba_id *id)
+ 
+ out_release:
+ 	amba_release_regions(dev);
+-out_clk:
+-	clk_disable_unprepare(rng_clk);
+ 	return ret;
+ }
+ 
+ static void nmk_rng_remove(struct amba_device *dev)
+ {
+ 	amba_release_regions(dev);
+-	clk_disable_unprepare(rng_clk);
+ }
+ 
+ static const struct amba_id nmk_rng_ids[] = {
+diff --git a/drivers/char/hw_random/pic32-rng.c b/drivers/char/hw_random/pic32-rng.c
+index 99c8bd0859a14..e04a054e89307 100644
+--- a/drivers/char/hw_random/pic32-rng.c
++++ b/drivers/char/hw_random/pic32-rng.c
+@@ -36,7 +36,6 @@
+ struct pic32_rng {
+ 	void __iomem	*base;
+ 	struct hwrng	rng;
+-	struct clk	*clk;
+ };
+ 
+ /*
+@@ -70,6 +69,7 @@ static int pic32_rng_read(struct hwrng *rng, void *buf, size_t max,
+ static int pic32_rng_probe(struct platform_device *pdev)
+ {
+ 	struct pic32_rng *priv;
++	struct clk *clk;
+ 	u32 v;
+ 	int ret;
+ 
+@@ -81,13 +81,9 @@ static int pic32_rng_probe(struct platform_device *pdev)
+ 	if (IS_ERR(priv->base))
+ 		return PTR_ERR(priv->base);
+ 
+-	priv->clk = devm_clk_get(&pdev->dev, NULL);
+-	if (IS_ERR(priv->clk))
+-		return PTR_ERR(priv->clk);
+-
+-	ret = clk_prepare_enable(priv->clk);
+-	if (ret)
+-		return ret;
++	clk = devm_clk_get_enabled(&pdev->dev, NULL);
++	if (IS_ERR(clk))
++		return PTR_ERR(clk);
+ 
+ 	/* enable TRNG in enhanced mode */
+ 	v = TRNGEN | TRNGMOD;
+@@ -98,15 +94,11 @@ static int pic32_rng_probe(struct platform_device *pdev)
+ 
+ 	ret = devm_hwrng_register(&pdev->dev, &priv->rng);
+ 	if (ret)
+-		goto err_register;
++		return ret;
+ 
+ 	platform_set_drvdata(pdev, priv);
+ 
+ 	return 0;
+-
+-err_register:
+-	clk_disable_unprepare(priv->clk);
+-	return ret;
+ }
+ 
+ static int pic32_rng_remove(struct platform_device *pdev)
+@@ -114,7 +106,6 @@ static int pic32_rng_remove(struct platform_device *pdev)
+ 	struct pic32_rng *rng = platform_get_drvdata(pdev);
+ 
+ 	writel(0, rng->base + RNGCON);
+-	clk_disable_unprepare(rng->clk);
+ 	return 0;
+ }
+ 
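Both RNG fixes above (Nomadik and PIC32) move to devm_clk_get_enabled(), which gets, prepares and enables the clock and registers the matching disable/put with devres, so error paths and remove() no longer need explicit clk_disable_unprepare() calls. A bare-bones probe using it might look like this (driver names are placeholders):

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/platform_device.h>

static int demo_probe(struct platform_device *pdev)
{
	struct clk *clk;

	/* Acquired, prepared and enabled; undone automatically on detach. */
	clk = devm_clk_get_enabled(&pdev->dev, NULL);
	if (IS_ERR(clk))
		return dev_err_probe(&pdev->dev, PTR_ERR(clk),
				     "could not get clock\n");

	/* ... rest of probe; no clock cleanup needed in error paths ... */
	return 0;
}

Because the cleanup is device-managed, both patches can also drop the clk pointer from the driver's private state entirely.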
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index abddd7e43a9a6..5cd031f3fc970 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -2082,6 +2082,11 @@ static int try_smi_init(struct smi_info *new_smi)
+ 		new_smi->io.io_cleanup = NULL;
+ 	}
+ 
++	if (rv && new_smi->si_sm) {
++		kfree(new_smi->si_sm);
++		new_smi->si_sm = NULL;
++	}
++
+ 	return rv;
+ }
+ 
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 3b921c78ba083..faf1f2ad584bf 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -1400,7 +1400,7 @@ static struct ssif_addr_info *ssif_info_find(unsigned short addr,
+ restart:
+ 	list_for_each_entry(info, &ssif_infos, link) {
+ 		if (info->binfo.addr == addr) {
+-			if (info->addr_src == SI_SMBIOS)
++			if (info->addr_src == SI_SMBIOS && !info->adapter_name)
+ 				info->adapter_name = kstrdup(adapter_name,
+ 							     GFP_KERNEL);
+ 
+@@ -1600,6 +1600,11 @@ static int ssif_add_infos(struct i2c_client *client)
+ 	info->addr_src = SI_ACPI;
+ 	info->client = client;
+ 	info->adapter_name = kstrdup(client->adapter->name, GFP_KERNEL);
++	if (!info->adapter_name) {
++		kfree(info);
++		return -ENOMEM;
++	}
++
+ 	info->binfo.addr = client->addr;
+ 	list_add_tail(&info->link, &ssif_infos);
+ 	return 0;
+diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
+index 9eb1a18590123..a5dbebb1acfcf 100644
+--- a/drivers/char/tpm/tpm_crb.c
++++ b/drivers/char/tpm/tpm_crb.c
+@@ -463,28 +463,6 @@ static bool crb_req_canceled(struct tpm_chip *chip, u8 status)
+ 	return (cancel & CRB_CANCEL_INVOKE) == CRB_CANCEL_INVOKE;
+ }
+ 
+-static int crb_check_flags(struct tpm_chip *chip)
+-{
+-	u32 val;
+-	int ret;
+-
+-	ret = crb_request_locality(chip, 0);
+-	if (ret)
+-		return ret;
+-
+-	ret = tpm2_get_tpm_pt(chip, TPM2_PT_MANUFACTURER, &val, NULL);
+-	if (ret)
+-		goto release;
+-
+-	if (val == 0x414D4400U /* AMD */)
+-		chip->flags |= TPM_CHIP_FLAG_HWRNG_DISABLED;
+-
+-release:
+-	crb_relinquish_locality(chip, 0);
+-
+-	return ret;
+-}
+-
+ static const struct tpm_class_ops tpm_crb = {
+ 	.flags = TPM_OPS_AUTO_STARTUP,
+ 	.status = crb_status,
+@@ -826,9 +804,14 @@ static int crb_acpi_add(struct acpi_device *device)
+ 	if (rc)
+ 		goto out;
+ 
+-	rc = crb_check_flags(chip);
+-	if (rc)
+-		goto out;
++#ifdef CONFIG_X86
++	/* A quirk for https://www.amd.com/en/support/kb/faq/pa-410 */
++	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
++	    priv->sm != ACPI_TPM2_COMMAND_BUFFER_WITH_PLUTON) {
++		dev_info(dev, "Disabling hwrng\n");
++		chip->flags |= TPM_CHIP_FLAG_HWRNG_DISABLED;
++	}
++#endif /* CONFIG_X86 */
+ 
+ 	rc = tpm_chip_register(chip);
+ 
+diff --git a/drivers/clk/imx/clk-composite-8m.c b/drivers/clk/imx/clk-composite-8m.c
+index 7a6e3ce97133b..27a08c50ac1d8 100644
+--- a/drivers/clk/imx/clk-composite-8m.c
++++ b/drivers/clk/imx/clk-composite-8m.c
+@@ -97,7 +97,7 @@ static int imx8m_clk_composite_divider_set_rate(struct clk_hw *hw,
+ 	int prediv_value;
+ 	int div_value;
+ 	int ret;
+-	u32 val;
++	u32 orig, val;
+ 
+ 	ret = imx8m_clk_composite_compute_dividers(rate, parent_rate,
+ 						&prediv_value, &div_value);
+@@ -106,13 +106,15 @@ static int imx8m_clk_composite_divider_set_rate(struct clk_hw *hw,
+ 
+ 	spin_lock_irqsave(divider->lock, flags);
+ 
+-	val = readl(divider->reg);
+-	val &= ~((clk_div_mask(divider->width) << divider->shift) |
+-			(clk_div_mask(PCG_DIV_WIDTH) << PCG_DIV_SHIFT));
++	orig = readl(divider->reg);
++	val = orig & ~((clk_div_mask(divider->width) << divider->shift) |
++		       (clk_div_mask(PCG_DIV_WIDTH) << PCG_DIV_SHIFT));
+ 
+ 	val |= (u32)(prediv_value  - 1) << divider->shift;
+ 	val |= (u32)(div_value - 1) << PCG_DIV_SHIFT;
+-	writel(val, divider->reg);
++
++	if (val != orig)
++		writel(val, divider->reg);
+ 
+ 	spin_unlock_irqrestore(divider->lock, flags);
+ 
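The imx8m composite-divider fix keeps the original register value around so the write is skipped when the divider is already configured as requested, avoiding a needless, potentially glitch-prone write. Reduced to its essentials, with an invented register layout:

#include <linux/bitfield.h>
#include <linux/io.h>

#define DEMO_DIV_MASK	GENMASK(2, 0)	/* made-up field */

static void demo_set_div(void __iomem *reg, u32 div)
{
	u32 orig, val;

	orig = readl(reg);
	val = orig & ~DEMO_DIV_MASK;
	val |= FIELD_PREP(DEMO_DIV_MASK, div);

	/* Only touch the hardware when the value actually changes. */
	if (val != orig)
		writel(val, reg);
}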
+diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
+index 1469249386dd8..670aa2bab3017 100644
+--- a/drivers/clk/imx/clk-imx8mp.c
++++ b/drivers/clk/imx/clk-imx8mp.c
+@@ -178,10 +178,6 @@ static const char * const imx8mp_sai3_sels[] = {"osc_24m", "audio_pll1_out", "au
+ 						"video_pll1_out", "sys_pll1_133m", "osc_hdmi",
+ 						"clk_ext3", "clk_ext4", };
+ 
+-static const char * const imx8mp_sai4_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
+-						"video_pll1_out", "sys_pll1_133m", "osc_hdmi",
+-						"clk_ext1", "clk_ext2", };
+-
+ static const char * const imx8mp_sai5_sels[] = {"osc_24m", "audio_pll1_out", "audio_pll2_out",
+ 						"video_pll1_out", "sys_pll1_133m", "osc_hdmi",
+ 						"clk_ext2", "clk_ext3", };
+@@ -567,7 +563,6 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 	hws[IMX8MP_CLK_SAI1] = imx8m_clk_hw_composite("sai1", imx8mp_sai1_sels, ccm_base + 0xa580);
+ 	hws[IMX8MP_CLK_SAI2] = imx8m_clk_hw_composite("sai2", imx8mp_sai2_sels, ccm_base + 0xa600);
+ 	hws[IMX8MP_CLK_SAI3] = imx8m_clk_hw_composite("sai3", imx8mp_sai3_sels, ccm_base + 0xa680);
+-	hws[IMX8MP_CLK_SAI4] = imx8m_clk_hw_composite("sai4", imx8mp_sai4_sels, ccm_base + 0xa700);
+ 	hws[IMX8MP_CLK_SAI5] = imx8m_clk_hw_composite("sai5", imx8mp_sai5_sels, ccm_base + 0xa780);
+ 	hws[IMX8MP_CLK_SAI6] = imx8m_clk_hw_composite("sai6", imx8mp_sai6_sels, ccm_base + 0xa800);
+ 	hws[IMX8MP_CLK_ENET_QOS] = imx8m_clk_hw_composite("enet_qos", imx8mp_enet_qos_sels, ccm_base + 0xa880);
+diff --git a/drivers/clk/imx/clk-imx8ulp.c b/drivers/clk/imx/clk-imx8ulp.c
+index e308c88cb801c..1b04e2fc78ad5 100644
+--- a/drivers/clk/imx/clk-imx8ulp.c
++++ b/drivers/clk/imx/clk-imx8ulp.c
+@@ -167,7 +167,7 @@ static int imx8ulp_clk_cgc1_init(struct platform_device *pdev)
+ 	clks[IMX8ULP_CLK_SPLL2_PRE_SEL]	= imx_clk_hw_mux_flags("spll2_pre_sel", base + 0x510, 0, 1, pll_pre_sels, ARRAY_SIZE(pll_pre_sels), CLK_SET_PARENT_GATE);
+ 	clks[IMX8ULP_CLK_SPLL3_PRE_SEL]	= imx_clk_hw_mux_flags("spll3_pre_sel", base + 0x610, 0, 1, pll_pre_sels, ARRAY_SIZE(pll_pre_sels), CLK_SET_PARENT_GATE);
+ 
+-	clks[IMX8ULP_CLK_SPLL2] = imx_clk_hw_pllv4(IMX_PLLV4_IMX8ULP, "spll2", "spll2_pre_sel", base + 0x500);
++	clks[IMX8ULP_CLK_SPLL2] = imx_clk_hw_pllv4(IMX_PLLV4_IMX8ULP_1GHZ, "spll2", "spll2_pre_sel", base + 0x500);
+ 	clks[IMX8ULP_CLK_SPLL3] = imx_clk_hw_pllv4(IMX_PLLV4_IMX8ULP, "spll3", "spll3_pre_sel", base + 0x600);
+ 	clks[IMX8ULP_CLK_SPLL3_VCODIV] = imx_clk_hw_divider("spll3_vcodiv", "spll3", base + 0x604, 0, 6);
+ 
+diff --git a/drivers/clk/imx/clk-pllv4.c b/drivers/clk/imx/clk-pllv4.c
+index 6e7e34571fc8d..9b136c951762c 100644
+--- a/drivers/clk/imx/clk-pllv4.c
++++ b/drivers/clk/imx/clk-pllv4.c
+@@ -44,11 +44,15 @@ struct clk_pllv4 {
+ 	u32		cfg_offset;
+ 	u32		num_offset;
+ 	u32		denom_offset;
++	bool		use_mult_range;
+ };
+ 
+ /* Valid PLL MULT Table */
+ static const int pllv4_mult_table[] = {33, 27, 22, 20, 17, 16};
+ 
++/* Valid PLL MULT range, (max, min) */
++static const int pllv4_mult_range[] = {54, 27};
++
+ #define to_clk_pllv4(__hw) container_of(__hw, struct clk_pllv4, hw)
+ 
+ #define LOCK_TIMEOUT_US		USEC_PER_MSEC
+@@ -94,17 +98,30 @@ static unsigned long clk_pllv4_recalc_rate(struct clk_hw *hw,
+ static long clk_pllv4_round_rate(struct clk_hw *hw, unsigned long rate,
+ 				 unsigned long *prate)
+ {
++	struct clk_pllv4 *pll = to_clk_pllv4(hw);
+ 	unsigned long parent_rate = *prate;
+ 	unsigned long round_rate, i;
+ 	u32 mfn, mfd = DEFAULT_MFD;
+ 	bool found = false;
+ 	u64 temp64;
+-
+-	for (i = 0; i < ARRAY_SIZE(pllv4_mult_table); i++) {
+-		round_rate = parent_rate * pllv4_mult_table[i];
+-		if (rate >= round_rate) {
++	u32 mult;
++
++	if (pll->use_mult_range) {
++		temp64 = (u64)rate;
++		do_div(temp64, parent_rate);
++		mult = temp64;
++		if (mult >= pllv4_mult_range[1] &&
++		    mult <= pllv4_mult_range[0]) {
++			round_rate = parent_rate * mult;
+ 			found = true;
+-			break;
++		}
++	} else {
++		for (i = 0; i < ARRAY_SIZE(pllv4_mult_table); i++) {
++			round_rate = parent_rate * pllv4_mult_table[i];
++			if (rate >= round_rate) {
++				found = true;
++				break;
++			}
+ 		}
+ 	}
+ 
+@@ -138,14 +155,20 @@ static long clk_pllv4_round_rate(struct clk_hw *hw, unsigned long rate,
+ 	return round_rate + (u32)temp64;
+ }
+ 
+-static bool clk_pllv4_is_valid_mult(unsigned int mult)
++static bool clk_pllv4_is_valid_mult(struct clk_pllv4 *pll, unsigned int mult)
+ {
+ 	int i;
+ 
+ 	/* check if mult is in valid MULT table */
+-	for (i = 0; i < ARRAY_SIZE(pllv4_mult_table); i++) {
+-		if (pllv4_mult_table[i] == mult)
++	if (pll->use_mult_range) {
++		if (mult >= pllv4_mult_range[1] &&
++		    mult <= pllv4_mult_range[0])
+ 			return true;
++	} else {
++		for (i = 0; i < ARRAY_SIZE(pllv4_mult_table); i++) {
++			if (pllv4_mult_table[i] == mult)
++				return true;
++		}
+ 	}
+ 
+ 	return false;
+@@ -160,7 +183,7 @@ static int clk_pllv4_set_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ 	mult = rate / parent_rate;
+ 
+-	if (!clk_pllv4_is_valid_mult(mult))
++	if (!clk_pllv4_is_valid_mult(pll, mult))
+ 		return -EINVAL;
+ 
+ 	if (parent_rate <= MAX_MFD)
+@@ -227,10 +250,13 @@ struct clk_hw *imx_clk_hw_pllv4(enum imx_pllv4_type type, const char *name,
+ 
+ 	pll->base = base;
+ 
+-	if (type == IMX_PLLV4_IMX8ULP) {
++	if (type == IMX_PLLV4_IMX8ULP ||
++	    type == IMX_PLLV4_IMX8ULP_1GHZ) {
+ 		pll->cfg_offset = IMX8ULP_PLL_CFG_OFFSET;
+ 		pll->num_offset = IMX8ULP_PLL_NUM_OFFSET;
+ 		pll->denom_offset = IMX8ULP_PLL_DENOM_OFFSET;
++		if (type == IMX_PLLV4_IMX8ULP_1GHZ)
++			pll->use_mult_range = true;
+ 	} else {
+ 		pll->cfg_offset = PLL_CFG_OFFSET;
+ 		pll->num_offset = PLL_NUM_OFFSET;
+diff --git a/drivers/clk/imx/clk.h b/drivers/clk/imx/clk.h
+index af19d9f6aed09..adb7ad649a0d2 100644
+--- a/drivers/clk/imx/clk.h
++++ b/drivers/clk/imx/clk.h
+@@ -45,6 +45,7 @@ enum imx_pll14xx_type {
+ enum imx_pllv4_type {
+ 	IMX_PLLV4_IMX7ULP,
+ 	IMX_PLLV4_IMX8ULP,
++	IMX_PLLV4_IMX8ULP_1GHZ,
+ };
+ 
+ enum imx_pfdv2_type {
+diff --git a/drivers/clk/keystone/pll.c b/drivers/clk/keystone/pll.c
+index d59a7621bb204..ee5c72369334f 100644
+--- a/drivers/clk/keystone/pll.c
++++ b/drivers/clk/keystone/pll.c
+@@ -209,7 +209,7 @@ static void __init _of_pll_clk_init(struct device_node *node, bool pllctrl)
+ 	}
+ 
+ 	clk = clk_register_pll(NULL, node->name, parent_name, pll_data);
+-	if (clk) {
++	if (!IS_ERR_OR_NULL(clk)) {
+ 		of_clk_add_provider(node, of_clk_src_simple_get, clk);
+ 		return;
+ 	}
+diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig
+index 263e55d75e3f5..92ef5314b59ce 100644
+--- a/drivers/clk/qcom/Kconfig
++++ b/drivers/clk/qcom/Kconfig
+@@ -987,6 +987,7 @@ config SM_GPUCC_8350
+ 
+ config SM_GPUCC_8450
+ 	tristate "SM8450 Graphics Clock Controller"
++	depends on ARM64 || COMPILE_TEST
+ 	select SM_GCC_8450
+ 	help
+ 	  Support for the graphics clock controller on SM8450 devices.
+@@ -995,6 +996,7 @@ config SM_GPUCC_8450
+ 
+ config SM_GPUCC_8550
+ 	tristate "SM8550 Graphics Clock Controller"
++	depends on ARM64 || COMPILE_TEST
+ 	select SM_GCC_8550
+ 	help
+ 	  Support for the graphics clock controller on SM8550 devices.
+@@ -1031,6 +1033,7 @@ config SM_VIDEOCC_8250
+ 
+ config SM_VIDEOCC_8350
+ 	tristate "SM8350 Video Clock Controller"
++	depends on ARM64 || COMPILE_TEST
+ 	select SM_GCC_8350
+ 	select QCOM_GDSC
+ 	help
+@@ -1040,6 +1043,7 @@ config SM_VIDEOCC_8350
+ 
+ config SM_VIDEOCC_8550
+ 	tristate "SM8550 Video Clock Controller"
++	depends on ARM64 || COMPILE_TEST
+ 	select SM_GCC_8550
+ 	select QCOM_GDSC
+ 	help
+@@ -1088,6 +1092,7 @@ config CLK_GFM_LPASS_SM8250
+ 
+ config SM_VIDEOCC_8450
+ 	tristate "SM8450 Video Clock Controller"
++	depends on ARM64 || COMPILE_TEST
+ 	select SM_GCC_8450
+ 	select QCOM_GDSC
+ 	help
+diff --git a/drivers/clk/qcom/dispcc-sc8280xp.c b/drivers/clk/qcom/dispcc-sc8280xp.c
+index 167470beb3691..30f636b9f0ec8 100644
+--- a/drivers/clk/qcom/dispcc-sc8280xp.c
++++ b/drivers/clk/qcom/dispcc-sc8280xp.c
+@@ -3057,7 +3057,7 @@ static struct gdsc disp0_mdss_gdsc = {
+ 		.name = "disp0_mdss_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = HW_CTRL,
++	.flags = HW_CTRL | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc disp1_mdss_gdsc = {
+@@ -3069,7 +3069,7 @@ static struct gdsc disp1_mdss_gdsc = {
+ 		.name = "disp1_mdss_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = HW_CTRL,
++	.flags = HW_CTRL | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc disp0_mdss_int2_gdsc = {
+@@ -3081,7 +3081,7 @@ static struct gdsc disp0_mdss_int2_gdsc = {
+ 		.name = "disp0_mdss_int2_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = HW_CTRL,
++	.flags = HW_CTRL | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc disp1_mdss_int2_gdsc = {
+@@ -3093,7 +3093,7 @@ static struct gdsc disp1_mdss_int2_gdsc = {
+ 		.name = "disp1_mdss_int2_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = HW_CTRL,
++	.flags = HW_CTRL | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc *disp0_cc_sc8280xp_gdscs[] = {
+diff --git a/drivers/clk/qcom/gcc-qdu1000.c b/drivers/clk/qcom/gcc-qdu1000.c
+index 5051769ad90c7..8df7b79839680 100644
+--- a/drivers/clk/qcom/gcc-qdu1000.c
++++ b/drivers/clk/qcom/gcc-qdu1000.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2022, Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2023, Qualcomm Innovation Center, Inc. All rights reserved.
+  */
+ 
+ #include <linux/clk-provider.h>
+@@ -370,16 +370,6 @@ static const struct clk_parent_data gcc_parent_data_6[] = {
+ 	{ .index = DT_TCXO_IDX },
+ };
+ 
+-static const struct parent_map gcc_parent_map_7[] = {
+-	{ P_PCIE_0_PIPE_CLK, 0 },
+-	{ P_BI_TCXO, 2 },
+-};
+-
+-static const struct clk_parent_data gcc_parent_data_7[] = {
+-	{ .index = DT_PCIE_0_PIPE_CLK_IDX },
+-	{ .index = DT_TCXO_IDX },
+-};
+-
+ static const struct parent_map gcc_parent_map_8[] = {
+ 	{ P_BI_TCXO, 0 },
+ 	{ P_GCC_GPLL0_OUT_MAIN, 1 },
+@@ -439,16 +429,15 @@ static struct clk_regmap_mux gcc_pcie_0_phy_aux_clk_src = {
+ 	},
+ };
+ 
+-static struct clk_regmap_mux gcc_pcie_0_pipe_clk_src = {
++static struct clk_regmap_phy_mux gcc_pcie_0_pipe_clk_src = {
+ 	.reg = 0x9d064,
+-	.shift = 0,
+-	.width = 2,
+-	.parent_map = gcc_parent_map_7,
+ 	.clkr = {
+ 		.hw.init = &(const struct clk_init_data) {
+ 			.name = "gcc_pcie_0_pipe_clk_src",
+-			.parent_data = gcc_parent_data_7,
+-			.num_parents = ARRAY_SIZE(gcc_parent_data_7),
++			.parent_data = &(const struct clk_parent_data){
++				.index = DT_PCIE_0_PIPE_CLK_IDX,
++			},
++			.num_parents = 1,
+ 			.ops = &clk_regmap_phy_mux_ops,
+ 		},
+ 	},
+@@ -1458,14 +1447,13 @@ static struct clk_branch gcc_pcie_0_cfg_ahb_clk = {
+ 
+ static struct clk_branch gcc_pcie_0_clkref_en = {
+ 	.halt_reg = 0x9c004,
+-	.halt_bit = 31,
+-	.halt_check = BRANCH_HALT_ENABLE,
++	.halt_check = BRANCH_HALT,
+ 	.clkr = {
+ 		.enable_reg = 0x9c004,
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(const struct clk_init_data) {
+ 			.name = "gcc_pcie_0_clkref_en",
+-			.ops = &clk_branch_ops,
++			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+ };
+@@ -2285,14 +2273,13 @@ static struct clk_branch gcc_tsc_etu_clk = {
+ 
+ static struct clk_branch gcc_usb2_clkref_en = {
+ 	.halt_reg = 0x9c008,
+-	.halt_bit = 31,
+-	.halt_check = BRANCH_HALT_ENABLE,
++	.halt_check = BRANCH_HALT,
+ 	.clkr = {
+ 		.enable_reg = 0x9c008,
+ 		.enable_mask = BIT(0),
+ 		.hw.init = &(const struct clk_init_data) {
+ 			.name = "gcc_usb2_clkref_en",
+-			.ops = &clk_branch_ops,
++			.ops = &clk_branch2_ops,
+ 		},
+ 	},
+ };
+diff --git a/drivers/clk/qcom/gcc-sc7180.c b/drivers/clk/qcom/gcc-sc7180.c
+index cef3c77564cfd..49f36e1df4fa8 100644
+--- a/drivers/clk/qcom/gcc-sc7180.c
++++ b/drivers/clk/qcom/gcc-sc7180.c
+@@ -651,6 +651,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
+ 		.name = "gcc_sdcc2_apps_clk_src",
+ 		.parent_data = gcc_parent_data_5,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_5),
++		.flags = CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+diff --git a/drivers/clk/qcom/gcc-sc8280xp.c b/drivers/clk/qcom/gcc-sc8280xp.c
+index b90c71637814b..4d1133406ae05 100644
+--- a/drivers/clk/qcom/gcc-sc8280xp.c
++++ b/drivers/clk/qcom/gcc-sc8280xp.c
+@@ -6761,7 +6761,7 @@ static struct gdsc pcie_0_tunnel_gdsc = {
+ 		.name = "pcie_0_tunnel_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = VOTABLE,
++	.flags = VOTABLE | RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc pcie_1_tunnel_gdsc = {
+@@ -6772,7 +6772,7 @@ static struct gdsc pcie_1_tunnel_gdsc = {
+ 		.name = "pcie_1_tunnel_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = VOTABLE,
++	.flags = VOTABLE | RETAIN_FF_ENABLE,
+ };
+ 
+ /*
+@@ -6787,7 +6787,7 @@ static struct gdsc pcie_2a_gdsc = {
+ 		.name = "pcie_2a_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = VOTABLE | ALWAYS_ON,
++	.flags = VOTABLE | RETAIN_FF_ENABLE | ALWAYS_ON,
+ };
+ 
+ static struct gdsc pcie_2b_gdsc = {
+@@ -6798,7 +6798,7 @@ static struct gdsc pcie_2b_gdsc = {
+ 		.name = "pcie_2b_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = VOTABLE | ALWAYS_ON,
++	.flags = VOTABLE | RETAIN_FF_ENABLE | ALWAYS_ON,
+ };
+ 
+ static struct gdsc pcie_3a_gdsc = {
+@@ -6809,7 +6809,7 @@ static struct gdsc pcie_3a_gdsc = {
+ 		.name = "pcie_3a_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = VOTABLE | ALWAYS_ON,
++	.flags = VOTABLE | RETAIN_FF_ENABLE | ALWAYS_ON,
+ };
+ 
+ static struct gdsc pcie_3b_gdsc = {
+@@ -6820,7 +6820,7 @@ static struct gdsc pcie_3b_gdsc = {
+ 		.name = "pcie_3b_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = VOTABLE | ALWAYS_ON,
++	.flags = VOTABLE | RETAIN_FF_ENABLE | ALWAYS_ON,
+ };
+ 
+ static struct gdsc pcie_4_gdsc = {
+@@ -6831,7 +6831,7 @@ static struct gdsc pcie_4_gdsc = {
+ 		.name = "pcie_4_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+-	.flags = VOTABLE | ALWAYS_ON,
++	.flags = VOTABLE | RETAIN_FF_ENABLE | ALWAYS_ON,
+ };
+ 
+ static struct gdsc ufs_card_gdsc = {
+@@ -6840,6 +6840,7 @@ static struct gdsc ufs_card_gdsc = {
+ 		.name = "ufs_card_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.flags = RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc ufs_phy_gdsc = {
+@@ -6848,6 +6849,7 @@ static struct gdsc ufs_phy_gdsc = {
+ 		.name = "ufs_phy_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.flags = RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc usb30_mp_gdsc = {
+@@ -6856,6 +6858,7 @@ static struct gdsc usb30_mp_gdsc = {
+ 		.name = "usb30_mp_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_RET_ON,
++	.flags = RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc usb30_prim_gdsc = {
+@@ -6864,6 +6867,7 @@ static struct gdsc usb30_prim_gdsc = {
+ 		.name = "usb30_prim_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_RET_ON,
++	.flags = RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc usb30_sec_gdsc = {
+@@ -6872,6 +6876,7 @@ static struct gdsc usb30_sec_gdsc = {
+ 		.name = "usb30_sec_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_RET_ON,
++	.flags = RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc emac_0_gdsc = {
+@@ -6880,6 +6885,7 @@ static struct gdsc emac_0_gdsc = {
+ 		.name = "emac_0_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.flags = RETAIN_FF_ENABLE,
+ };
+ 
+ static struct gdsc emac_1_gdsc = {
+@@ -6888,6 +6894,97 @@ static struct gdsc emac_1_gdsc = {
+ 		.name = "emac_1_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.flags = RETAIN_FF_ENABLE,
++};
++
++static struct gdsc usb4_1_gdsc = {
++	.gdscr = 0xb8004,
++	.pd = {
++		.name = "usb4_1_gdsc",
++	},
++	.pwrsts = PWRSTS_OFF_ON,
++	.flags = RETAIN_FF_ENABLE,
++};
++
++static struct gdsc usb4_gdsc = {
++	.gdscr = 0x2a004,
++	.pd = {
++		.name = "usb4_gdsc",
++	},
++	.pwrsts = PWRSTS_OFF_ON,
++	.flags = RETAIN_FF_ENABLE,
++};
++
++static struct gdsc hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc = {
++	.gdscr = 0x7d050,
++	.pd = {
++		.name = "hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc",
++	},
++	.pwrsts = PWRSTS_OFF_ON,
++	.flags = VOTABLE,
++};
++
++static struct gdsc hlos1_vote_mmnoc_mmu_tbu_hf1_gdsc = {
++	.gdscr = 0x7d058,
++	.pd = {
++		.name = "hlos1_vote_mmnoc_mmu_tbu_hf1_gdsc",
++	},
++	.pwrsts = PWRSTS_OFF_ON,
++	.flags = VOTABLE,
++};
++
++static struct gdsc hlos1_vote_mmnoc_mmu_tbu_sf0_gdsc = {
++	.gdscr = 0x7d054,
++	.pd = {
++		.name = "hlos1_vote_mmnoc_mmu_tbu_sf0_gdsc",
++	},
++	.pwrsts = PWRSTS_OFF_ON,
++	.flags = VOTABLE,
++};
++
++static struct gdsc hlos1_vote_mmnoc_mmu_tbu_sf1_gdsc = {
++	.gdscr = 0x7d06c,
++	.pd = {
++		.name = "hlos1_vote_mmnoc_mmu_tbu_sf1_gdsc",
++	},
++	.pwrsts = PWRSTS_OFF_ON,
++	.flags = VOTABLE,
++};
++
++static struct gdsc hlos1_vote_turing_mmu_tbu0_gdsc = {
++	.gdscr = 0x7d05c,
++	.pd = {
++		.name = "hlos1_vote_turing_mmu_tbu0_gdsc",
++	},
++	.pwrsts = PWRSTS_OFF_ON,
++	.flags = VOTABLE,
++};
++
++static struct gdsc hlos1_vote_turing_mmu_tbu1_gdsc = {
++	.gdscr = 0x7d060,
++	.pd = {
++		.name = "hlos1_vote_turing_mmu_tbu1_gdsc",
++	},
++	.pwrsts = PWRSTS_OFF_ON,
++	.flags = VOTABLE,
++};
++
++static struct gdsc hlos1_vote_turing_mmu_tbu2_gdsc = {
++	.gdscr = 0x7d0a0,
++	.pd = {
++		.name = "hlos1_vote_turing_mmu_tbu2_gdsc",
++	},
++	.pwrsts = PWRSTS_OFF_ON,
++	.flags = VOTABLE,
++};
++
++static struct gdsc hlos1_vote_turing_mmu_tbu3_gdsc = {
++	.gdscr = 0x7d0a4,
++	.pd = {
++		.name = "hlos1_vote_turing_mmu_tbu3_gdsc",
++	},
++	.pwrsts = PWRSTS_OFF_ON,
++	.flags = VOTABLE,
+ };
+ 
+ static struct clk_regmap *gcc_sc8280xp_clocks[] = {
+@@ -7370,6 +7467,16 @@ static struct gdsc *gcc_sc8280xp_gdscs[] = {
+ 	[USB30_SEC_GDSC] = &usb30_sec_gdsc,
+ 	[EMAC_0_GDSC] = &emac_0_gdsc,
+ 	[EMAC_1_GDSC] = &emac_1_gdsc,
++	[USB4_1_GDSC] = &usb4_1_gdsc,
++	[USB4_GDSC] = &usb4_gdsc,
++	[HLOS1_VOTE_MMNOC_MMU_TBU_HF0_GDSC] = &hlos1_vote_mmnoc_mmu_tbu_hf0_gdsc,
++	[HLOS1_VOTE_MMNOC_MMU_TBU_HF1_GDSC] = &hlos1_vote_mmnoc_mmu_tbu_hf1_gdsc,
++	[HLOS1_VOTE_MMNOC_MMU_TBU_SF0_GDSC] = &hlos1_vote_mmnoc_mmu_tbu_sf0_gdsc,
++	[HLOS1_VOTE_MMNOC_MMU_TBU_SF1_GDSC] = &hlos1_vote_mmnoc_mmu_tbu_sf1_gdsc,
++	[HLOS1_VOTE_TURING_MMU_TBU0_GDSC] = &hlos1_vote_turing_mmu_tbu0_gdsc,
++	[HLOS1_VOTE_TURING_MMU_TBU1_GDSC] = &hlos1_vote_turing_mmu_tbu1_gdsc,
++	[HLOS1_VOTE_TURING_MMU_TBU2_GDSC] = &hlos1_vote_turing_mmu_tbu2_gdsc,
++	[HLOS1_VOTE_TURING_MMU_TBU3_GDSC] = &hlos1_vote_turing_mmu_tbu3_gdsc,
+ };
+ 
+ static const struct clk_rcg_dfs_data gcc_dfs_clocks[] = {
+@@ -7432,8 +7539,8 @@ static int gcc_sc8280xp_probe(struct platform_device *pdev)
+ 
+ 	regmap = qcom_cc_map(pdev, &gcc_sc8280xp_desc);
+ 	if (IS_ERR(regmap)) {
+-		pm_runtime_put(&pdev->dev);
+-		return PTR_ERR(regmap);
++		ret = PTR_ERR(regmap);
++		goto err_put_rpm;
+ 	}
+ 
+ 	/*
+@@ -7454,11 +7561,19 @@ static int gcc_sc8280xp_probe(struct platform_device *pdev)
+ 
+ 	ret = qcom_cc_register_rcg_dfs(regmap, gcc_dfs_clocks, ARRAY_SIZE(gcc_dfs_clocks));
+ 	if (ret)
+-		return ret;
++		goto err_put_rpm;
+ 
+ 	ret = qcom_cc_really_probe(pdev, &gcc_sc8280xp_desc, regmap);
++	if (ret)
++		goto err_put_rpm;
++
+ 	pm_runtime_put(&pdev->dev);
+ 
++	return 0;
++
++err_put_rpm:
++	pm_runtime_put_sync(&pdev->dev);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/clk/qcom/gcc-sm6350.c b/drivers/clk/qcom/gcc-sm6350.c
+index 9b4e4bb059635..cf4a7b6e0b23a 100644
+--- a/drivers/clk/qcom/gcc-sm6350.c
++++ b/drivers/clk/qcom/gcc-sm6350.c
+@@ -641,6 +641,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
+ 		.name = "gcc_sdcc2_apps_clk_src",
+ 		.parent_data = gcc_parent_data_8,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_8),
++		.flags = CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+diff --git a/drivers/clk/qcom/gcc-sm7150.c b/drivers/clk/qcom/gcc-sm7150.c
+index 6b628178f62c4..6da87f0436d0c 100644
+--- a/drivers/clk/qcom/gcc-sm7150.c
++++ b/drivers/clk/qcom/gcc-sm7150.c
+@@ -739,6 +739,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
+ 		.parent_data = gcc_parent_data_6,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_6),
+ 		.ops = &clk_rcg2_floor_ops,
++		.flags = CLK_OPS_PARENT_ENABLE,
+ 	},
+ };
+ 
+diff --git a/drivers/clk/qcom/gcc-sm8250.c b/drivers/clk/qcom/gcc-sm8250.c
+index b6cf4bc88d4d4..d3c75bb55946a 100644
+--- a/drivers/clk/qcom/gcc-sm8250.c
++++ b/drivers/clk/qcom/gcc-sm8250.c
+@@ -721,6 +721,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
+ 		.name = "gcc_sdcc2_apps_clk_src",
+ 		.parent_data = gcc_parent_data_4,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_4),
++		.flags = CLK_OPS_PARENT_ENABLE,
+ 		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+diff --git a/drivers/clk/qcom/gcc-sm8450.c b/drivers/clk/qcom/gcc-sm8450.c
+index 75635d40a12d3..9f4f72553ecf2 100644
+--- a/drivers/clk/qcom/gcc-sm8450.c
++++ b/drivers/clk/qcom/gcc-sm8450.c
+@@ -935,7 +935,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
+ 		.parent_data = gcc_parent_data_7,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_7),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+ 
+@@ -958,7 +958,7 @@ static struct clk_rcg2 gcc_sdcc4_apps_clk_src = {
+ 		.parent_data = gcc_parent_data_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_0),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+ 
+diff --git a/drivers/clk/qcom/gpucc-sm6350.c b/drivers/clk/qcom/gpucc-sm6350.c
+index ef15185a99c31..0bcbba2a29436 100644
+--- a/drivers/clk/qcom/gpucc-sm6350.c
++++ b/drivers/clk/qcom/gpucc-sm6350.c
+@@ -24,6 +24,12 @@
+ #define CX_GMU_CBCR_WAKE_MASK		0xF
+ #define CX_GMU_CBCR_WAKE_SHIFT		8
+ 
++enum {
++	DT_BI_TCXO,
++	DT_GPLL0_OUT_MAIN,
++	DT_GPLL0_OUT_MAIN_DIV,
++};
++
+ enum {
+ 	P_BI_TCXO,
+ 	P_GPLL0_OUT_MAIN,
+@@ -61,6 +67,7 @@ static struct clk_alpha_pll gpu_cc_pll0 = {
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gpu_cc_pll0",
+ 			.parent_data =  &(const struct clk_parent_data){
++				.index = DT_BI_TCXO,
+ 				.fw_name = "bi_tcxo",
+ 			},
+ 			.num_parents = 1,
+@@ -104,6 +111,7 @@ static struct clk_alpha_pll gpu_cc_pll1 = {
+ 		.hw.init = &(struct clk_init_data){
+ 			.name = "gpu_cc_pll1",
+ 			.parent_data =  &(const struct clk_parent_data){
++				.index = DT_BI_TCXO,
+ 				.fw_name = "bi_tcxo",
+ 			},
+ 			.num_parents = 1,
+@@ -121,11 +129,11 @@ static const struct parent_map gpu_cc_parent_map_0[] = {
+ };
+ 
+ static const struct clk_parent_data gpu_cc_parent_data_0[] = {
+-	{ .fw_name = "bi_tcxo" },
++	{ .index = DT_BI_TCXO, .fw_name = "bi_tcxo" },
+ 	{ .hw = &gpu_cc_pll0.clkr.hw },
+ 	{ .hw = &gpu_cc_pll1.clkr.hw },
+-	{ .fw_name = "gcc_gpu_gpll0_clk" },
+-	{ .fw_name = "gcc_gpu_gpll0_div_clk" },
++	{ .index = DT_GPLL0_OUT_MAIN, .fw_name = "gcc_gpu_gpll0_clk_src" },
++	{ .index = DT_GPLL0_OUT_MAIN_DIV, .fw_name = "gcc_gpu_gpll0_div_clk_src" },
+ };
+ 
+ static const struct parent_map gpu_cc_parent_map_1[] = {
+@@ -138,12 +146,12 @@ static const struct parent_map gpu_cc_parent_map_1[] = {
+ };
+ 
+ static const struct clk_parent_data gpu_cc_parent_data_1[] = {
+-	{ .fw_name = "bi_tcxo" },
++	{ .index = DT_BI_TCXO, .fw_name = "bi_tcxo" },
+ 	{ .hw = &crc_div.hw },
+ 	{ .hw = &gpu_cc_pll0.clkr.hw },
+ 	{ .hw = &gpu_cc_pll1.clkr.hw },
+ 	{ .hw = &gpu_cc_pll1.clkr.hw },
+-	{ .fw_name = "gcc_gpu_gpll0_clk" },
++	{ .index = DT_GPLL0_OUT_MAIN, .fw_name = "gcc_gpu_gpll0_clk_src" },
+ };
+ 
+ static const struct freq_tbl ftbl_gpu_cc_gmu_clk_src[] = {
+diff --git a/drivers/clk/qcom/reset.c b/drivers/clk/qcom/reset.c
+index 0e914ec7aeae1..e45e32804d2c7 100644
+--- a/drivers/clk/qcom/reset.c
++++ b/drivers/clk/qcom/reset.c
+@@ -16,7 +16,8 @@ static int qcom_reset(struct reset_controller_dev *rcdev, unsigned long id)
+ 	struct qcom_reset_controller *rst = to_qcom_reset_controller(rcdev);
+ 
+ 	rcdev->ops->assert(rcdev, id);
+-	udelay(rst->reset_map[id].udelay ?: 1); /* use 1 us as default */
++	fsleep(rst->reset_map[id].udelay ?: 1); /* use 1 us as default */
++
+ 	rcdev->ops->deassert(rcdev, id);
+ 	return 0;
+ }
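The qcom reset change swaps udelay() for fsleep(), which picks udelay(), usleep_range() or msleep() depending on the requested duration, so long per-reset delays stop busy-waiting. A tiny illustration of the call (the assert/deassert steps are only sketched as comments):

#include <linux/delay.h>

static void demo_pulse_reset(unsigned long delay_us)
{
	/* assert the reset line here ... */

	/*
	 * fsleep() busy-waits only for very short delays and sleeps
	 * otherwise; a delay_us of 0 is bumped to 1 us, mirroring the
	 * hunk above. Must not be called from atomic context.
	 */
	fsleep(delay_us ?: 1);

	/* ... then deassert it. */
}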
+diff --git a/drivers/clk/rockchip/clk-rk3568.c b/drivers/clk/rockchip/clk-rk3568.c
+index f85902e2590c7..2f54f630c8b65 100644
+--- a/drivers/clk/rockchip/clk-rk3568.c
++++ b/drivers/clk/rockchip/clk-rk3568.c
+@@ -81,7 +81,7 @@ static struct rockchip_pll_rate_table rk3568_pll_rates[] = {
+ 	RK3036_PLL_RATE(108000000, 2, 45, 5, 1, 1, 0),
+ 	RK3036_PLL_RATE(100000000, 1, 150, 6, 6, 1, 0),
+ 	RK3036_PLL_RATE(96000000, 1, 96, 6, 4, 1, 0),
+-	RK3036_PLL_RATE(78750000, 1, 96, 6, 4, 1, 0),
++	RK3036_PLL_RATE(78750000, 4, 315, 6, 4, 1, 0),
+ 	RK3036_PLL_RATE(74250000, 2, 99, 4, 4, 1, 0),
+ 	{ /* sentinel */ },
+ };
+diff --git a/drivers/clk/sunxi-ng/ccu_mmc_timing.c b/drivers/clk/sunxi-ng/ccu_mmc_timing.c
+index 23a8d44e2449b..78919d7843bec 100644
+--- a/drivers/clk/sunxi-ng/ccu_mmc_timing.c
++++ b/drivers/clk/sunxi-ng/ccu_mmc_timing.c
+@@ -43,7 +43,7 @@ int sunxi_ccu_set_mmc_timing_mode(struct clk *clk, bool new_mode)
+ EXPORT_SYMBOL_GPL(sunxi_ccu_set_mmc_timing_mode);
+ 
+ /**
+- * sunxi_ccu_set_mmc_timing_mode: Get the current MMC clock timing mode
++ * sunxi_ccu_get_mmc_timing_mode: Get the current MMC clock timing mode
+  * @clk: clock to query
+  *
+  * Return: %0 if the clock is in old timing mode, > %0 if it is in
+diff --git a/drivers/counter/Kconfig b/drivers/counter/Kconfig
+index 62962ae84b77d..497bc05dca4df 100644
+--- a/drivers/counter/Kconfig
++++ b/drivers/counter/Kconfig
+@@ -92,7 +92,7 @@ config MICROCHIP_TCB_CAPTURE
+ 
+ config RZ_MTU3_CNT
+ 	tristate "Renesas RZ/G2L MTU3a counter driver"
+-	depends on RZ_MTU3 || COMPILE_TEST
++	depends on RZ_MTU3
+ 	help
+ 	  Enable support for MTU3a counter driver found on Renesas RZ/G2L alike
+ 	  SoCs. This IP supports both 16-bit and 32-bit phase counting mode
+diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c
+index 7f3fe20489818..502d494499ae8 100644
+--- a/drivers/cpufreq/amd-pstate-ut.c
++++ b/drivers/cpufreq/amd-pstate-ut.c
+@@ -64,27 +64,9 @@ static struct amd_pstate_ut_struct amd_pstate_ut_cases[] = {
+ static bool get_shared_mem(void)
+ {
+ 	bool result = false;
+-	char path[] = "/sys/module/amd_pstate/parameters/shared_mem";
+-	char buf[5] = {0};
+-	struct file *filp = NULL;
+-	loff_t pos = 0;
+-	ssize_t ret;
+-
+-	if (!boot_cpu_has(X86_FEATURE_CPPC)) {
+-		filp = filp_open(path, O_RDONLY, 0);
+-		if (IS_ERR(filp))
+-			pr_err("%s unable to open %s file!\n", __func__, path);
+-		else {
+-			ret = kernel_read(filp, &buf, sizeof(buf), &pos);
+-			if (ret < 0)
+-				pr_err("%s read %s file fail ret=%ld!\n",
+-					__func__, path, (long)ret);
+-			filp_close(filp, NULL);
+-		}
+ 
+-		if ('Y' == *buf)
+-			result = true;
+-	}
++	if (!boot_cpu_has(X86_FEATURE_CPPC))
++		result = true;
+ 
+ 	return result;
+ }
+@@ -158,7 +140,7 @@ static void amd_pstate_ut_check_perf(u32 index)
+ 			if (ret) {
+ 				amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
+ 				pr_err("%s cppc_get_perf_caps ret=%d error!\n", __func__, ret);
+-				return;
++				goto skip_test;
+ 			}
+ 
+ 			nominal_perf = cppc_perf.nominal_perf;
+@@ -169,7 +151,7 @@ static void amd_pstate_ut_check_perf(u32 index)
+ 			if (ret) {
+ 				amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
+ 				pr_err("%s read CPPC_CAP1 ret=%d error!\n", __func__, ret);
+-				return;
++				goto skip_test;
+ 			}
+ 
+ 			nominal_perf = AMD_CPPC_NOMINAL_PERF(cap1);
+@@ -187,7 +169,7 @@ static void amd_pstate_ut_check_perf(u32 index)
+ 				nominal_perf, cpudata->nominal_perf,
+ 				lowest_nonlinear_perf, cpudata->lowest_nonlinear_perf,
+ 				lowest_perf, cpudata->lowest_perf);
+-			return;
++			goto skip_test;
+ 		}
+ 
+ 		if (!((highest_perf >= nominal_perf) &&
+@@ -198,11 +180,15 @@ static void amd_pstate_ut_check_perf(u32 index)
+ 			pr_err("%s cpu%d highest=%d >= nominal=%d > lowest_nonlinear=%d > lowest=%d > 0, the formula is incorrect!\n",
+ 				__func__, cpu, highest_perf, nominal_perf,
+ 				lowest_nonlinear_perf, lowest_perf);
+-			return;
++			goto skip_test;
+ 		}
++		cpufreq_cpu_put(policy);
+ 	}
+ 
+ 	amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
++	return;
++skip_test:
++	cpufreq_cpu_put(policy);
+ }
+ 
+ /*
+@@ -230,14 +216,14 @@ static void amd_pstate_ut_check_freq(u32 index)
+ 			pr_err("%s cpu%d max=%d >= nominal=%d > lowest_nonlinear=%d > min=%d > 0, the formula is incorrect!\n",
+ 				__func__, cpu, cpudata->max_freq, cpudata->nominal_freq,
+ 				cpudata->lowest_nonlinear_freq, cpudata->min_freq);
+-			return;
++			goto skip_test;
+ 		}
+ 
+ 		if (cpudata->min_freq != policy->min) {
+ 			amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
+ 			pr_err("%s cpu%d cpudata_min_freq=%d policy_min=%d, they should be equal!\n",
+ 				__func__, cpu, cpudata->min_freq, policy->min);
+-			return;
++			goto skip_test;
+ 		}
+ 
+ 		if (cpudata->boost_supported) {
+@@ -249,16 +235,20 @@ static void amd_pstate_ut_check_freq(u32 index)
+ 				pr_err("%s cpu%d policy_max=%d should be equal cpu_max=%d or cpu_nominal=%d !\n",
+ 					__func__, cpu, policy->max, cpudata->max_freq,
+ 					cpudata->nominal_freq);
+-				return;
++				goto skip_test;
+ 			}
+ 		} else {
+ 			amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
+ 			pr_err("%s cpu%d must support boost!\n", __func__, cpu);
+-			return;
++			goto skip_test;
+ 		}
++		cpufreq_cpu_put(policy);
+ 	}
+ 
+ 	amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
++	return;
++skip_test:
++	cpufreq_cpu_put(policy);
+ }
+ 
+ static int __init amd_pstate_ut_init(void)
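The amd-pstate-ut changes route every early exit through a label that drops the policy reference: cpufreq_cpu_get() takes a refcount on the policy, and each path out has to balance it with cpufreq_cpu_put(). In outline, with a made-up sanity check:

#include <linux/cpufreq.h>

static int demo_check_cpu(unsigned int cpu)
{
	struct cpufreq_policy *policy;
	int ret = 0;

	policy = cpufreq_cpu_get(cpu);
	if (!policy)
		return -ENODEV;

	if (!policy->cur) {		/* hypothetical check */
		ret = -EINVAL;
		goto out_put;
	}

	/* ... more checks, each jumping to out_put on failure ... */

out_put:
	cpufreq_cpu_put(policy);	/* balance cpufreq_cpu_get() */
	return ret;
}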
+diff --git a/drivers/cpufreq/brcmstb-avs-cpufreq.c b/drivers/cpufreq/brcmstb-avs-cpufreq.c
+index ffea6402189d3..3052949aebbc7 100644
+--- a/drivers/cpufreq/brcmstb-avs-cpufreq.c
++++ b/drivers/cpufreq/brcmstb-avs-cpufreq.c
+@@ -434,7 +434,11 @@ brcm_avs_get_freq_table(struct device *dev, struct private_data *priv)
+ 	if (ret)
+ 		return ERR_PTR(ret);
+ 
+-	table = devm_kcalloc(dev, AVS_PSTATE_MAX + 1, sizeof(*table),
++	/*
++	 * We allocate space for the 5 different AVS P-states,
++	 * plus extra space for a terminating element.
++	 */
++	table = devm_kcalloc(dev, AVS_PSTATE_MAX + 1 + 1, sizeof(*table),
+ 			     GFP_KERNEL);
+ 	if (!table)
+ 		return ERR_PTR(-ENOMEM);
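The brcmstb-avs fix enlarges the allocation because a cpufreq_frequency_table is walked until a CPUFREQ_TABLE_END sentinel, which needs its own slot beyond the real entries. A hedged sketch of sizing and terminating such a table (entry count is arbitrary):

#include <linux/cpufreq.h>
#include <linux/device.h>
#include <linux/slab.h>

static struct cpufreq_frequency_table *demo_alloc_table(struct device *dev,
							unsigned int nr_states)
{
	struct cpufreq_frequency_table *table;

	/* nr_states real entries plus one terminating element. */
	table = devm_kcalloc(dev, nr_states + 1, sizeof(*table), GFP_KERNEL);
	if (!table)
		return NULL;

	table[nr_states].frequency = CPUFREQ_TABLE_END;
	return table;
}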
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 50bbc969ffe53..5c655d7b96d4f 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -455,8 +455,10 @@ void cpufreq_freq_transition_end(struct cpufreq_policy *policy,
+ 			    policy->cur,
+ 			    policy->cpuinfo.max_freq);
+ 
++	spin_lock(&policy->transition_lock);
+ 	policy->transition_ongoing = false;
+ 	policy->transition_task = NULL;
++	spin_unlock(&policy->transition_lock);
+ 
+ 	wake_up(&policy->transition_wait);
+ }
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 8ca2bce4341a4..dc50c9fb488df 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -2609,6 +2609,11 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
+ 			intel_pstate_clear_update_util_hook(policy->cpu);
+ 		intel_pstate_hwp_set(policy->cpu);
+ 	}
++	/*
++	 * policy->cur is never updated with the intel_pstate driver, but it
++	 * is used as a stale frequency value. So, keep it within limits.
++	 */
++	policy->cur = policy->min;
+ 
+ 	mutex_unlock(&intel_pstate_limits_lock);
+ 
+diff --git a/drivers/cpufreq/powernow-k8.c b/drivers/cpufreq/powernow-k8.c
+index d289036beff23..b10f7a1b77f11 100644
+--- a/drivers/cpufreq/powernow-k8.c
++++ b/drivers/cpufreq/powernow-k8.c
+@@ -1101,7 +1101,8 @@ static int powernowk8_cpu_exit(struct cpufreq_policy *pol)
+ 
+ 	kfree(data->powernow_table);
+ 	kfree(data);
+-	for_each_cpu(cpu, pol->cpus)
++	/* pol->cpus will be empty here, use related_cpus instead. */
++	for_each_cpu(cpu, pol->related_cpus)
+ 		per_cpu(powernow_data, cpu) = NULL;
+ 
+ 	return 0;
+diff --git a/drivers/cpufreq/tegra194-cpufreq.c b/drivers/cpufreq/tegra194-cpufreq.c
+index 36dad5ea59475..75f1e611d0aab 100644
+--- a/drivers/cpufreq/tegra194-cpufreq.c
++++ b/drivers/cpufreq/tegra194-cpufreq.c
+@@ -508,6 +508,32 @@ static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
+ 	return 0;
+ }
+ 
++static int tegra194_cpufreq_online(struct cpufreq_policy *policy)
++{
++	/* We did light-weight tear down earlier, nothing to do here */
++	return 0;
++}
++
++static int tegra194_cpufreq_offline(struct cpufreq_policy *policy)
++{
++	/*
++	 * Preserve policy->driver_data and don't free resources on light-weight
++	 * tear down.
++	 */
++
++	return 0;
++}
++
++static int tegra194_cpufreq_exit(struct cpufreq_policy *policy)
++{
++	struct device *cpu_dev = get_cpu_device(policy->cpu);
++
++	dev_pm_opp_remove_all_dynamic(cpu_dev);
++	dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
++
++	return 0;
++}
++
+ static int tegra194_cpufreq_set_target(struct cpufreq_policy *policy,
+ 				       unsigned int index)
+ {
+@@ -535,6 +561,9 @@ static struct cpufreq_driver tegra194_cpufreq_driver = {
+ 	.target_index = tegra194_cpufreq_set_target,
+ 	.get = tegra194_get_speed,
+ 	.init = tegra194_cpufreq_init,
++	.exit = tegra194_cpufreq_exit,
++	.online = tegra194_cpufreq_online,
++	.offline = tegra194_cpufreq_offline,
+ 	.attr = cpufreq_generic_attr,
+ };
+ 
+diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
+index a7d33f3ee01e7..14db9b7d985d1 100644
+--- a/drivers/cpuidle/cpuidle-pseries.c
++++ b/drivers/cpuidle/cpuidle-pseries.c
+@@ -414,13 +414,7 @@ static int __init pseries_idle_probe(void)
+ 		return -ENODEV;
+ 
+ 	if (firmware_has_feature(FW_FEATURE_SPLPAR)) {
+-		/*
+-		 * Use local_paca instead of get_lppaca() since
+-		 * preemption is not disabled, and it is not required in
+-		 * fact, since lppaca_ptr does not need to be the value
+-		 * associated to the current CPU, it can be from any CPU.
+-		 */
+-		if (lppaca_shared_proc(local_paca->lppaca_ptr)) {
++		if (lppaca_shared_proc()) {
+ 			cpuidle_state_table = shared_states;
+ 			max_idle_state = ARRAY_SIZE(shared_states);
+ 		} else {
+diff --git a/drivers/cpuidle/governors/teo.c b/drivers/cpuidle/governors/teo.c
+index 987fc5f3997dc..2cdc711679a5f 100644
+--- a/drivers/cpuidle/governors/teo.c
++++ b/drivers/cpuidle/governors/teo.c
+@@ -397,13 +397,23 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 	 * the shallowest non-polling state and exit.
+ 	 */
+ 	if (drv->state_count < 3 && cpu_data->utilized) {
+-		for (i = 0; i < drv->state_count; ++i) {
+-			if (!dev->states_usage[i].disable &&
+-			    !(drv->states[i].flags & CPUIDLE_FLAG_POLLING)) {
+-				idx = i;
+-				goto end;
+-			}
+-		}
++		/* The CPU is utilized, so assume a short idle duration. */
++		duration_ns = teo_middle_of_bin(0, drv);
++		/*
++		 * If state 0 is enabled and it is not a polling one, select it
++		 * right away unless the scheduler tick has been stopped, in
++		 * which case care needs to be taken to leave the CPU in a deep
++		 * enough state in case it is not woken up any time soon after
++		 * all.  If state 1 is disabled, though, state 0 must be used
++		 * anyway.
++		 */
++		if ((!idx && !(drv->states[0].flags & CPUIDLE_FLAG_POLLING) &&
++		    teo_time_ok(duration_ns)) || dev->states_usage[1].disable)
++			idx = 0;
++		else /* Assume that state 1 is not a polling one and use it. */
++			idx = 1;
++
++		goto end;
+ 	}
+ 
+ 	/*
+@@ -539,10 +549,20 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ 
+ 	/*
+ 	 * If the CPU is being utilized over the threshold, choose a shallower
+-	 * non-polling state to improve latency
++	 * non-polling state to improve latency, unless the scheduler tick has
++	 * been stopped already and the shallower state's target residency is
++	 * not sufficiently large.
+ 	 */
+-	if (cpu_data->utilized)
+-		idx = teo_find_shallower_state(drv, dev, idx, duration_ns, true);
++	if (cpu_data->utilized) {
++		s64 span_ns;
++
++		i = teo_find_shallower_state(drv, dev, idx, duration_ns, true);
++		span_ns = teo_middle_of_bin(i, drv);
++		if (teo_time_ok(span_ns)) {
++			idx = i;
++			duration_ns = span_ns;
++		}
++	}
+ 
+ end:
+ 	/*
+diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
+index 72afc249d42fb..7e08af751e4ea 100644
+--- a/drivers/crypto/caam/caampkc.c
++++ b/drivers/crypto/caam/caampkc.c
+@@ -225,7 +225,9 @@ static int caam_rsa_count_leading_zeros(struct scatterlist *sgl,
+ 		if (len && *buff)
+ 			break;
+ 
+-		sg_miter_next(&miter);
++		if (!sg_miter_next(&miter))
++			break;
++
+ 		buff = miter.addr;
+ 		len = miter.length;
+ 
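The caampkc fix stops iterating when sg_miter_next() returns false instead of reusing a stale miter.addr. A condensed example of walking a scatterlist with sg_mapping_iter while honouring that return value (function name is invented):

#include <linux/scatterlist.h>

static size_t demo_count_leading_zeros(struct scatterlist *sgl)
{
	struct sg_mapping_iter miter;
	size_t zeros = 0;
	size_t i;

	sg_miter_start(&miter, sgl, sg_nents(sgl), SG_MITER_FROM_SG);
	while (sg_miter_next(&miter)) {		/* stop when the mapping ends */
		const u8 *buf = miter.addr;

		for (i = 0; i < miter.length; i++) {
			if (buf[i])
				goto out;
			zeros++;
		}
	}
out:
	sg_miter_stop(&miter);	/* also unmaps the current page, if any */
	return zeros;
}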
+diff --git a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
+index e543a9e24a06f..3eda91aa7c112 100644
+--- a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
+@@ -223,6 +223,8 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
+ 			  ICP_ACCEL_CAPABILITIES_HKDF |
+ 			  ICP_ACCEL_CAPABILITIES_CHACHA_POLY |
+ 			  ICP_ACCEL_CAPABILITIES_AESGCM_SPC |
++			  ICP_ACCEL_CAPABILITIES_SM3 |
++			  ICP_ACCEL_CAPABILITIES_SM4 |
+ 			  ICP_ACCEL_CAPABILITIES_AES_V2;
+ 
+ 	/* A set bit in fusectl1 means the feature is OFF in this SKU */
+@@ -246,12 +248,19 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
+ 		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_CIPHER;
+ 	}
+ 
++	if (fusectl1 & ICP_ACCEL_4XXX_MASK_SMX_SLICE) {
++		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_SM3;
++		capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_SM4;
++	}
++
+ 	capabilities_asym = ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC |
+ 			  ICP_ACCEL_CAPABILITIES_CIPHER |
++			  ICP_ACCEL_CAPABILITIES_SM2 |
+ 			  ICP_ACCEL_CAPABILITIES_ECEDMONT;
+ 
+ 	if (fusectl1 & ICP_ACCEL_4XXX_MASK_PKE_SLICE) {
+ 		capabilities_asym &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC;
++		capabilities_asym &= ~ICP_ACCEL_CAPABILITIES_SM2;
+ 		capabilities_asym &= ~ICP_ACCEL_CAPABILITIES_ECEDMONT;
+ 	}
+ 
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_pm.h b/drivers/crypto/intel/qat/qat_common/adf_gen4_pm.h
+index dd112923e006d..c2768762cca3b 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_pm.h
++++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_pm.h
+@@ -35,7 +35,7 @@
+ #define ADF_GEN4_PM_MSG_PENDING			BIT(0)
+ #define ADF_GEN4_PM_MSG_PAYLOAD_BIT_MASK	GENMASK(28, 1)
+ 
+-#define ADF_GEN4_PM_DEFAULT_IDLE_FILTER		(0x0)
++#define ADF_GEN4_PM_DEFAULT_IDLE_FILTER		(0x6)
+ #define ADF_GEN4_PM_MAX_IDLE_FILTER		(0x7)
+ #define ADF_GEN4_PM_DEFAULT_IDLE_SUPPORT	(0x1)
+ 
+diff --git a/drivers/crypto/intel/qat/qat_common/icp_qat_hw.h b/drivers/crypto/intel/qat/qat_common/icp_qat_hw.h
+index a65059e56248a..0c8883e2ccc6d 100644
+--- a/drivers/crypto/intel/qat/qat_common/icp_qat_hw.h
++++ b/drivers/crypto/intel/qat/qat_common/icp_qat_hw.h
+@@ -97,7 +97,10 @@ enum icp_qat_capabilities_mask {
+ 	ICP_ACCEL_CAPABILITIES_SHA3_EXT = BIT(15),
+ 	ICP_ACCEL_CAPABILITIES_AESGCM_SPC = BIT(16),
+ 	ICP_ACCEL_CAPABILITIES_CHACHA_POLY = BIT(17),
+-	/* Bits 18-21 are currently reserved */
++	ICP_ACCEL_CAPABILITIES_SM2 = BIT(18),
++	ICP_ACCEL_CAPABILITIES_SM3 = BIT(19),
++	ICP_ACCEL_CAPABILITIES_SM4 = BIT(20),
++	/* Bit 21 is currently reserved */
+ 	ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY = BIT(22),
+ 	ICP_ACCEL_CAPABILITIES_CNV_INTEGRITY64 = BIT(23),
+ 	ICP_ACCEL_CAPABILITIES_LZ4_COMPRESSION = BIT(24),
+diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c
+index f0df32382719c..fabae6da627b9 100644
+--- a/drivers/crypto/stm32/stm32-hash.c
++++ b/drivers/crypto/stm32/stm32-hash.c
+@@ -492,7 +492,7 @@ static int stm32_hash_xmit_dma(struct stm32_hash_dev *hdev,
+ 
+ 	reg = stm32_hash_read(hdev, HASH_CR);
+ 
+-	if (!hdev->pdata->has_mdmat) {
++	if (hdev->pdata->has_mdmat) {
+ 		if (mdma)
+ 			reg |= HASH_CR_MDMAT;
+ 		else
+@@ -627,9 +627,9 @@ static int stm32_hash_dma_send(struct stm32_hash_dev *hdev)
+ 	}
+ 
+ 	for_each_sg(rctx->sg, tsg, rctx->nents, i) {
++		sg[0] = *tsg;
+ 		len = sg->length;
+ 
+-		sg[0] = *tsg;
+ 		if (sg_is_last(sg)) {
+ 			if (hdev->dma_mode == 1) {
+ 				len = (ALIGN(sg->length, 16) - 16);
+@@ -1705,9 +1705,7 @@ static int stm32_hash_remove(struct platform_device *pdev)
+ 	if (!hdev)
+ 		return -ENODEV;
+ 
+-	ret = pm_runtime_resume_and_get(hdev->dev);
+-	if (ret < 0)
+-		return ret;
++	ret = pm_runtime_get_sync(hdev->dev);
+ 
+ 	stm32_hash_unregister_algs(hdev);
+ 
+@@ -1723,7 +1721,8 @@ static int stm32_hash_remove(struct platform_device *pdev)
+ 	pm_runtime_disable(hdev->dev);
+ 	pm_runtime_put_noidle(hdev->dev);
+ 
+-	clk_disable_unprepare(hdev->clk);
++	if (ret >= 0)
++		clk_disable_unprepare(hdev->clk);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index e36cbb920ec88..9464f8d3cb5b4 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -763,6 +763,7 @@ static void devfreq_dev_release(struct device *dev)
+ 		dev_pm_opp_put_opp_table(devfreq->opp_table);
+ 
+ 	mutex_destroy(&devfreq->lock);
++	srcu_cleanup_notifier_head(&devfreq->transition_notifier_list);
+ 	kfree(devfreq);
+ }
+ 
+diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
+index 293739ac55969..a5c3eb4348325 100644
+--- a/drivers/dma/idxd/sysfs.c
++++ b/drivers/dma/idxd/sysfs.c
+@@ -1095,8 +1095,8 @@ static ssize_t wq_ats_disable_store(struct device *dev, struct device_attribute
+ 	if (wq->state != IDXD_WQ_DISABLED)
+ 		return -EPERM;
+ 
+-	if (!idxd->hw.wq_cap.wq_ats_support)
+-		return -EOPNOTSUPP;
++	if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
++		return -EPERM;
+ 
+ 	rc = kstrtobool(buf, &ats_dis);
+ 	if (rc < 0)
+@@ -1131,8 +1131,8 @@ static ssize_t wq_prs_disable_store(struct device *dev, struct device_attribute
+ 	if (wq->state != IDXD_WQ_DISABLED)
+ 		return -EPERM;
+ 
+-	if (!idxd->hw.wq_cap.wq_prs_support)
+-		return -EOPNOTSUPP;
++	if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
++		return -EPERM;
+ 
+ 	rc = kstrtobool(buf, &prs_dis);
+ 	if (rc < 0)
+@@ -1288,12 +1288,9 @@ static struct attribute *idxd_wq_attributes[] = {
+ 	NULL,
+ };
+ 
+-static bool idxd_wq_attr_op_config_invisible(struct attribute *attr,
+-					     struct idxd_device *idxd)
+-{
+-	return attr == &dev_attr_wq_op_config.attr &&
+-	       !idxd->hw.wq_cap.op_config;
+-}
++/*  A WQ attr is invisible if the feature is not supported in WQCAP. */
++#define idxd_wq_attr_invisible(name, cap_field, a, idxd)		\
++	((a) == &dev_attr_wq_##name.attr && !(idxd)->hw.wq_cap.cap_field)
+ 
+ static bool idxd_wq_attr_max_batch_size_invisible(struct attribute *attr,
+ 						  struct idxd_device *idxd)
+@@ -1303,13 +1300,6 @@ static bool idxd_wq_attr_max_batch_size_invisible(struct attribute *attr,
+ 	       idxd->data->type == IDXD_TYPE_IAX;
+ }
+ 
+-static bool idxd_wq_attr_wq_prs_disable_invisible(struct attribute *attr,
+-						  struct idxd_device *idxd)
+-{
+-	return attr == &dev_attr_wq_prs_disable.attr &&
+-	       !idxd->hw.wq_cap.wq_prs_support;
+-}
+-
+ static umode_t idxd_wq_attr_visible(struct kobject *kobj,
+ 				    struct attribute *attr, int n)
+ {
+@@ -1317,13 +1307,16 @@ static umode_t idxd_wq_attr_visible(struct kobject *kobj,
+ 	struct idxd_wq *wq = confdev_to_wq(dev);
+ 	struct idxd_device *idxd = wq->idxd;
+ 
+-	if (idxd_wq_attr_op_config_invisible(attr, idxd))
++	if (idxd_wq_attr_invisible(op_config, op_config, attr, idxd))
+ 		return 0;
+ 
+ 	if (idxd_wq_attr_max_batch_size_invisible(attr, idxd))
+ 		return 0;
+ 
+-	if (idxd_wq_attr_wq_prs_disable_invisible(attr, idxd))
++	if (idxd_wq_attr_invisible(prs_disable, wq_prs_support, attr, idxd))
++		return 0;
++
++	if (idxd_wq_attr_invisible(ats_disable, wq_ats_support, attr, idxd))
+ 		return 0;
+ 
+ 	return attr->mode;
+@@ -1480,7 +1473,7 @@ static ssize_t pasid_enabled_show(struct device *dev,
+ {
+ 	struct idxd_device *idxd = confdev_to_idxd(dev);
+ 
+-	return sysfs_emit(buf, "%u\n", device_pasid_enabled(idxd));
++	return sysfs_emit(buf, "%u\n", device_user_pasid_enabled(idxd));
+ }
+ static DEVICE_ATTR_RO(pasid_enabled);
+ 
+diff --git a/drivers/dma/ste_dma40.c b/drivers/dma/ste_dma40.c
+index 825001bde42c4..89e82508c1339 100644
+--- a/drivers/dma/ste_dma40.c
++++ b/drivers/dma/ste_dma40.c
+@@ -3590,6 +3590,10 @@ static int __init d40_probe(struct platform_device *pdev)
+ 	spin_lock_init(&base->lcla_pool.lock);
+ 
+ 	base->irq = platform_get_irq(pdev, 0);
++	if (base->irq < 0) {
++		ret = base->irq;
++		goto destroy_cache;
++	}
+ 
+ 	ret = request_irq(base->irq, d40_handle_interrupt, 0, D40_NAME, base);
+ 	if (ret) {
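The ste_dma40 fix checks platform_get_irq() before using the value: it returns a negative errno (never 0) when the interrupt is missing, and passing a negative number on to request_irq() only fails later and less clearly. The canonical form, with placeholder names:

#include <linux/interrupt.h>
#include <linux/platform_device.h>

static int demo_setup_irq(struct platform_device *pdev, irq_handler_t handler,
			  void *data)
{
	int irq, ret;

	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;	/* negative errno, never 0 */

	ret = request_irq(irq, handler, 0, dev_name(&pdev->dev), data);
	if (ret)
		return ret;

	return irq;
}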
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index a897b6aff3686..349ff6cfb3796 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -658,13 +658,49 @@ static struct pci_dev *get_ddr_munit(struct skx_dev *d, int i, u32 *offset, unsi
+ 	return mdev;
+ }
+ 
++/**
++ * i10nm_imc_absent() - Check whether the memory controller @imc is absent
++ *
++ * @imc    : The pointer to the memory controller EDAC device structure.
++ *
++ * RETURNS : true if the memory controller EDAC device is absent, false otherwise.
++ */
++static bool i10nm_imc_absent(struct skx_imc *imc)
++{
++	u32 mcmtr;
++	int i;
++
++	switch (res_cfg->type) {
++	case SPR:
++		for (i = 0; i < res_cfg->ddr_chan_num; i++) {
++			mcmtr = I10NM_GET_MCMTR(imc, i);
++			edac_dbg(1, "ch%d mcmtr reg %x\n", i, mcmtr);
++			if (mcmtr != ~0)
++				return false;
++		}
++
++		/*
++		 * Some workstations' absent memory controllers still
++		 * appear as PCIe devices, misleading the EDAC driver.
++		 * The MMIO registers of these absent memory controllers
++		 * consistently read back as ~0.
++		 *
++		 * We therefore identify a memory controller as absent by
++		 * checking whether its MMIO register "mcmtr" reads ~0 in
++		 * all of its channels.
++		 */
++		return true;
++	default:
++		return false;
++	}
++}
++
+ static int i10nm_get_ddr_munits(void)
+ {
+ 	struct pci_dev *mdev;
+ 	void __iomem *mbase;
+ 	unsigned long size;
+ 	struct skx_dev *d;
+-	int i, j = 0;
++	int i, lmc, j = 0;
+ 	u32 reg, off;
+ 	u64 base;
+ 
+@@ -690,7 +726,7 @@ static int i10nm_get_ddr_munits(void)
+ 		edac_dbg(2, "socket%d mmio base 0x%llx (reg 0x%x)\n",
+ 			 j++, base, reg);
+ 
+-		for (i = 0; i < res_cfg->ddr_imc_num; i++) {
++		for (lmc = 0, i = 0; i < res_cfg->ddr_imc_num; i++) {
+ 			mdev = get_ddr_munit(d, i, &off, &size);
+ 
+ 			if (i == 0 && !mdev) {
+@@ -700,8 +736,6 @@ static int i10nm_get_ddr_munits(void)
+ 			if (!mdev)
+ 				continue;
+ 
+-			d->imc[i].mdev = mdev;
+-
+ 			edac_dbg(2, "mc%d mmio base 0x%llx size 0x%lx (reg 0x%x)\n",
+ 				 i, base + off, size, reg);
+ 
+@@ -712,7 +746,17 @@ static int i10nm_get_ddr_munits(void)
+ 				return -ENODEV;
+ 			}
+ 
+-			d->imc[i].mbase = mbase;
++			d->imc[lmc].mbase = mbase;
++			if (i10nm_imc_absent(&d->imc[lmc])) {
++				pci_dev_put(mdev);
++				iounmap(mbase);
++				d->imc[lmc].mbase = NULL;
++				edac_dbg(2, "Skip absent mc%d\n", i);
++				continue;
++			} else {
++				d->imc[lmc].mdev = mdev;
++				lmc++;
++			}
+ 		}
+ 	}
+ 
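The new i10nm_imc_absent() helper treats a memory controller whose registers read back as all ones in every channel as not really present, which is the usual symptom of an MMIO read with nothing behind it. A small, generic version of that probe with an invented register layout:

#include <linux/io.h>

#define DEMO_NR_CHANNELS	2
#define DEMO_CH_REG(ch)		(0x1000 + (ch) * 0x100)	/* made-up layout */

static bool demo_device_absent(void __iomem *mbase)
{
	int ch;

	for (ch = 0; ch < DEMO_NR_CHANNELS; ch++) {
		/* A real device exposes at least one register that isn't ~0. */
		if (readl(mbase + DEMO_CH_REG(ch)) != ~0U)
			return false;
	}

	return true;
}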
+diff --git a/drivers/edac/igen6_edac.c b/drivers/edac/igen6_edac.c
+index 544dd19072eab..1a18693294db4 100644
+--- a/drivers/edac/igen6_edac.c
++++ b/drivers/edac/igen6_edac.c
+@@ -27,7 +27,7 @@
+ #include "edac_mc.h"
+ #include "edac_module.h"
+ 
+-#define IGEN6_REVISION	"v2.5"
++#define IGEN6_REVISION	"v2.5.1"
+ 
+ #define EDAC_MOD_STR	"igen6_edac"
+ #define IGEN6_NMI_NAME	"igen6_ibecc"
+@@ -1216,9 +1216,6 @@ static int igen6_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	INIT_WORK(&ecclog_work, ecclog_work_cb);
+ 	init_irq_work(&ecclog_irq_work, ecclog_irq_work_cb);
+ 
+-	/* Check if any pending errors before registering the NMI handler */
+-	ecclog_handler();
+-
+ 	rc = register_err_handler();
+ 	if (rc)
+ 		goto fail3;
+@@ -1230,6 +1227,9 @@ static int igen6_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		goto fail4;
+ 	}
+ 
++	/* Check if any pending errors before/during the registration of the error handler */
++	ecclog_handler();
++
+ 	igen6_debug_setup();
+ 	return 0;
+ fail4:
+diff --git a/drivers/extcon/Kconfig b/drivers/extcon/Kconfig
+index 0ef1971d22bb0..8de9023c2a387 100644
+--- a/drivers/extcon/Kconfig
++++ b/drivers/extcon/Kconfig
+@@ -62,6 +62,7 @@ config EXTCON_INTEL_CHT_WC
+ 	tristate "Intel Cherrytrail Whiskey Cove PMIC extcon driver"
+ 	depends on INTEL_SOC_PMIC_CHTWC
+ 	depends on USB_SUPPORT
++	depends on POWER_SUPPLY
+ 	select USB_ROLE_SWITCH
+ 	help
+ 	  Say Y here to enable extcon support for charger detection / control
+diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
+index f9040bd610812..285fe7ad490d1 100644
+--- a/drivers/firmware/arm_sdei.c
++++ b/drivers/firmware/arm_sdei.c
+@@ -1095,3 +1095,22 @@ int sdei_event_handler(struct pt_regs *regs,
+ 	return err;
+ }
+ NOKPROBE_SYMBOL(sdei_event_handler);
++
++void sdei_handler_abort(void)
++{
++	/*
++	 * If the crash happened in an SDEI event handler then we need to
++	 * finish the handler with the firmware so that we can have working
++	 * interrupts in the crash kernel.
++	 */
++	if (__this_cpu_read(sdei_active_critical_event)) {
++		pr_warn("still in SDEI critical event context, attempting to finish handler.\n");
++		__sdei_handler_abort();
++		__this_cpu_write(sdei_active_critical_event, NULL);
++	}
++	if (__this_cpu_read(sdei_active_normal_event)) {
++		pr_warn("still in SDEI normal event context, attempting to finish handler.\n");
++		__sdei_handler_abort();
++		__this_cpu_write(sdei_active_normal_event, NULL);
++	}
++}
+diff --git a/drivers/firmware/cirrus/cs_dsp.c b/drivers/firmware/cirrus/cs_dsp.c
+index 6a9aa97373d37..49b70c70dc696 100644
+--- a/drivers/firmware/cirrus/cs_dsp.c
++++ b/drivers/firmware/cirrus/cs_dsp.c
+@@ -978,7 +978,8 @@ static int cs_dsp_create_control(struct cs_dsp *dsp,
+ 		    ctl->alg_region.alg == alg_region->alg &&
+ 		    ctl->alg_region.type == alg_region->type) {
+ 			if ((!subname && !ctl->subname) ||
+-			    (subname && !strncmp(ctl->subname, subname, ctl->subname_len))) {
++			    (subname && (ctl->subname_len == subname_len) &&
++			     !strncmp(ctl->subname, subname, ctl->subname_len))) {
+ 				if (!ctl->enabled)
+ 					ctl->enabled = 1;
+ 				return 0;
+diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
+index 220be75a5cdc1..146477da2b98c 100644
+--- a/drivers/firmware/efi/libstub/x86-stub.c
++++ b/drivers/firmware/efi/libstub/x86-stub.c
+@@ -72,7 +72,7 @@ preserve_pci_rom_image(efi_pci_io_protocol_t *pci, struct pci_setup_rom **__rom)
+ 	rom->data.type	= SETUP_PCI;
+ 	rom->data.len	= size - sizeof(struct setup_data);
+ 	rom->data.next	= 0;
+-	rom->pcilen	= pci->romsize;
++	rom->pcilen	= romsize;
+ 	*__rom = rom;
+ 
+ 	status = efi_call_proto(pci, pci.read, EfiPciIoWidthUint16,
+diff --git a/drivers/firmware/meson/meson_sm.c b/drivers/firmware/meson/meson_sm.c
+index 798bcdb05d84e..9a2656d73600b 100644
+--- a/drivers/firmware/meson/meson_sm.c
++++ b/drivers/firmware/meson/meson_sm.c
+@@ -292,6 +292,8 @@ static int __init meson_sm_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	chip = of_match_device(meson_sm_ids, dev)->data;
++	if (!chip)
++		return -EINVAL;
+ 
+ 	if (chip->cmd_shmem_in_base) {
+ 		fw->sm_shmem_in_base = meson_sm_map_shmem(chip->cmd_shmem_in_base,
+diff --git a/drivers/firmware/ti_sci.c b/drivers/firmware/ti_sci.c
+index 039d92a595ec6..91aaa0ca9bde8 100644
+--- a/drivers/firmware/ti_sci.c
++++ b/drivers/firmware/ti_sci.c
+@@ -97,7 +97,6 @@ struct ti_sci_desc {
+  * @node:	list head
+  * @host_id:	Host ID
+  * @users:	Number of users of this instance
+- * @is_suspending: Flag set to indicate in suspend path.
+  */
+ struct ti_sci_info {
+ 	struct device *dev;
+@@ -116,7 +115,6 @@ struct ti_sci_info {
+ 	u8 host_id;
+ 	/* protected by ti_sci_list_mutex */
+ 	int users;
+-	bool is_suspending;
+ };
+ 
+ #define cl_to_ti_sci_info(c)	container_of(c, struct ti_sci_info, cl)
+@@ -418,14 +416,14 @@ static inline int ti_sci_do_xfer(struct ti_sci_info *info,
+ 
+ 	ret = 0;
+ 
+-	if (!info->is_suspending) {
++	if (system_state <= SYSTEM_RUNNING) {
+ 		/* And we wait for the response. */
+ 		timeout = msecs_to_jiffies(info->desc->max_rx_timeout_ms);
+ 		if (!wait_for_completion_timeout(&xfer->done, timeout))
+ 			ret = -ETIMEDOUT;
+ 	} else {
+ 		/*
+-		 * If we are suspending, we cannot use wait_for_completion_timeout
++		 * If we are !running, we cannot use wait_for_completion_timeout
+ 		 * during noirq phase, so we must manually poll the completion.
+ 		 */
+ 		ret = read_poll_timeout_atomic(try_wait_for_completion, done_state,
+@@ -3281,35 +3279,6 @@ static int tisci_reboot_handler(struct notifier_block *nb, unsigned long mode,
+ 	return NOTIFY_BAD;
+ }
+ 
+-static void ti_sci_set_is_suspending(struct ti_sci_info *info, bool is_suspending)
+-{
+-	info->is_suspending = is_suspending;
+-}
+-
+-static int ti_sci_suspend(struct device *dev)
+-{
+-	struct ti_sci_info *info = dev_get_drvdata(dev);
+-	/*
+-	 * We must switch operation to polled mode now as drivers and the genpd
+-	 * layer may make late TI SCI calls to change clock and device states
+-	 * from the noirq phase of suspend.
+-	 */
+-	ti_sci_set_is_suspending(info, true);
+-
+-	return 0;
+-}
+-
+-static int ti_sci_resume(struct device *dev)
+-{
+-	struct ti_sci_info *info = dev_get_drvdata(dev);
+-
+-	ti_sci_set_is_suspending(info, false);
+-
+-	return 0;
+-}
+-
+-static DEFINE_SIMPLE_DEV_PM_OPS(ti_sci_pm_ops, ti_sci_suspend, ti_sci_resume);
+-
+ /* Description for K2G */
+ static const struct ti_sci_desc ti_sci_pmmc_k2g_desc = {
+ 	.default_host_id = 2,
+@@ -3516,7 +3485,6 @@ static struct platform_driver ti_sci_driver = {
+ 	.driver = {
+ 		   .name = "ti-sci",
+ 		   .of_match_table = of_match_ptr(ti_sci_of_match),
+-		   .pm = &ti_sci_pm_ops,
+ 	},
+ };
+ module_platform_driver(ti_sci_driver);
+diff --git a/drivers/fsi/fsi-master-aspeed.c b/drivers/fsi/fsi-master-aspeed.c
+index 7cec1772820d3..5eccab175e86b 100644
+--- a/drivers/fsi/fsi-master-aspeed.c
++++ b/drivers/fsi/fsi-master-aspeed.c
+@@ -454,6 +454,8 @@ static ssize_t cfam_reset_store(struct device *dev, struct device_attribute *att
+ 	gpiod_set_value(aspeed->cfam_reset_gpio, 1);
+ 	usleep_range(900, 1000);
+ 	gpiod_set_value(aspeed->cfam_reset_gpio, 0);
++	usleep_range(900, 1000);
++	opb_writel(aspeed, ctrl_base + FSI_MRESP0, cpu_to_be32(FSI_MRESP_RST_ALL_MASTER));
+ 	mutex_unlock(&aspeed->lock);
+ 	trace_fsi_master_aspeed_cfam_reset(false);
+ 
+diff --git a/drivers/gpio/gpio-zynq.c b/drivers/gpio/gpio-zynq.c
+index 0a7264aabe488..324e942c0650b 100644
+--- a/drivers/gpio/gpio-zynq.c
++++ b/drivers/gpio/gpio-zynq.c
+@@ -575,6 +575,26 @@ static int zynq_gpio_set_wake(struct irq_data *data, unsigned int on)
+ 	return 0;
+ }
+ 
++static int zynq_gpio_irq_reqres(struct irq_data *d)
++{
++	struct gpio_chip *chip = irq_data_get_irq_chip_data(d);
++	int ret;
++
++	ret = pm_runtime_resume_and_get(chip->parent);
++	if (ret < 0)
++		return ret;
++
++	return gpiochip_reqres_irq(chip, d->hwirq);
++}
++
++static void zynq_gpio_irq_relres(struct irq_data *d)
++{
++	struct gpio_chip *chip = irq_data_get_irq_chip_data(d);
++
++	gpiochip_relres_irq(chip, d->hwirq);
++	pm_runtime_put(chip->parent);
++}
++
+ /* irq chip descriptor */
+ static const struct irq_chip zynq_gpio_level_irqchip = {
+ 	.name		= DRIVER_NAME,
+@@ -584,9 +604,10 @@ static const struct irq_chip zynq_gpio_level_irqchip = {
+ 	.irq_unmask	= zynq_gpio_irq_unmask,
+ 	.irq_set_type	= zynq_gpio_set_irq_type,
+ 	.irq_set_wake	= zynq_gpio_set_wake,
++	.irq_request_resources = zynq_gpio_irq_reqres,
++	.irq_release_resources = zynq_gpio_irq_relres,
+ 	.flags		= IRQCHIP_EOI_THREADED | IRQCHIP_EOI_IF_HANDLED |
+ 			  IRQCHIP_MASK_ON_SUSPEND | IRQCHIP_IMMUTABLE,
+-	GPIOCHIP_IRQ_RESOURCE_HELPERS,
+ };
+ 
+ static const struct irq_chip zynq_gpio_edge_irqchip = {
+@@ -597,8 +618,9 @@ static const struct irq_chip zynq_gpio_edge_irqchip = {
+ 	.irq_unmask	= zynq_gpio_irq_unmask,
+ 	.irq_set_type	= zynq_gpio_set_irq_type,
+ 	.irq_set_wake	= zynq_gpio_set_wake,
++	.irq_request_resources = zynq_gpio_irq_reqres,
++	.irq_release_resources = zynq_gpio_irq_relres,
+ 	.flags		= IRQCHIP_MASK_ON_SUSPEND | IRQCHIP_IMMUTABLE,
+-	GPIOCHIP_IRQ_RESOURCE_HELPERS,
+ };
+ 
+ static void zynq_gpio_handle_bank_irq(struct zynq_gpio *gpio,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 6238701cde237..6e5e4603a51a1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -1325,6 +1325,9 @@ int amdgpu_device_resize_fb_bar(struct amdgpu_device *adev)
+ 	u16 cmd;
+ 	int r;
+ 
++	if (!IS_ENABLED(CONFIG_PHYS_ADDR_T_64BIT))
++		return 0;
++
+ 	/* Bypass for VF */
+ 	if (amdgpu_sriov_vf(adev))
+ 		return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 0593ef8fe0a63..e06009966428f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -26,30 +26,30 @@
+ #include <drm/drm_drv.h>
+ #include <drm/drm_fbdev_generic.h>
+ #include <drm/drm_gem.h>
+-#include <drm/drm_vblank.h>
+ #include <drm/drm_managed.h>
+-#include "amdgpu_drv.h"
+-
+ #include <drm/drm_pciids.h>
+-#include <linux/module.h>
+-#include <linux/pm_runtime.h>
+-#include <linux/vga_switcheroo.h>
+ #include <drm/drm_probe_helper.h>
+-#include <linux/mmu_notifier.h>
+-#include <linux/suspend.h>
++#include <drm/drm_vblank.h>
++
+ #include <linux/cc_platform.h>
+ #include <linux/dynamic_debug.h>
++#include <linux/module.h>
++#include <linux/mmu_notifier.h>
++#include <linux/pm_runtime.h>
++#include <linux/suspend.h>
++#include <linux/vga_switcheroo.h>
+ 
+ #include "amdgpu.h"
+-#include "amdgpu_irq.h"
++#include "amdgpu_amdkfd.h"
+ #include "amdgpu_dma_buf.h"
+-#include "amdgpu_sched.h"
++#include "amdgpu_drv.h"
+ #include "amdgpu_fdinfo.h"
+-#include "amdgpu_amdkfd.h"
+-
++#include "amdgpu_irq.h"
++#include "amdgpu_psp.h"
+ #include "amdgpu_ras.h"
+-#include "amdgpu_xgmi.h"
+ #include "amdgpu_reset.h"
++#include "amdgpu_sched.h"
++#include "amdgpu_xgmi.h"
+ #include "../amdxcp/amdgpu_xcp_drv.h"
+ 
+ /*
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+index ebeddc9a37e9b..6aa3b1d845abe 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+@@ -62,7 +62,7 @@
+  * Returns 0 on success, error on failure.
+  */
+ int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+-		  unsigned size, enum amdgpu_ib_pool_type pool_type,
++		  unsigned int size, enum amdgpu_ib_pool_type pool_type,
+ 		  struct amdgpu_ib *ib)
+ {
+ 	int r;
+@@ -123,7 +123,7 @@ void amdgpu_ib_free(struct amdgpu_device *adev, struct amdgpu_ib *ib,
+  * a CONST_IB), it will be put on the ring prior to the DE IB.  Prior
+  * to SI there was just a DE IB.
+  */
+-int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
++int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
+ 		       struct amdgpu_ib *ibs, struct amdgpu_job *job,
+ 		       struct dma_fence **f)
+ {
+@@ -131,16 +131,16 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ 	struct amdgpu_ib *ib = &ibs[0];
+ 	struct dma_fence *tmp = NULL;
+ 	bool need_ctx_switch;
+-	unsigned patch_offset = ~0;
++	unsigned int patch_offset = ~0;
+ 	struct amdgpu_vm *vm;
+ 	uint64_t fence_ctx;
+ 	uint32_t status = 0, alloc_size;
+-	unsigned fence_flags = 0;
++	unsigned int fence_flags = 0;
+ 	bool secure, init_shadow;
+ 	u64 shadow_va, csa_va, gds_va;
+ 	int vmid = AMDGPU_JOB_GET_VMID(job);
+ 
+-	unsigned i;
++	unsigned int i;
+ 	int r = 0;
+ 	bool need_pipe_sync = false;
+ 
+@@ -282,7 +282,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
+ 		amdgpu_ring_emit_gfx_shadow(ring, 0, 0, 0, false, 0);
+ 
+ 		if (ring->funcs->init_cond_exec) {
+-			unsigned ce_offset = ~0;
++			unsigned int ce_offset = ~0;
+ 
+ 			ce_offset = amdgpu_ring_init_cond_exec(ring);
+ 			if (ce_offset != ~0 && ring->funcs->patch_cond_exec)
+@@ -385,7 +385,7 @@ int amdgpu_ib_ring_tests(struct amdgpu_device *adev)
+ {
+ 	long tmo_gfx, tmo_mm;
+ 	int r, ret = 0;
+-	unsigned i;
++	unsigned int i;
+ 
+ 	tmo_mm = tmo_gfx = AMDGPU_IB_TEST_TIMEOUT;
+ 	if (amdgpu_sriov_vf(adev)) {
+@@ -402,7 +402,7 @@ int amdgpu_ib_ring_tests(struct amdgpu_device *adev)
+ 		/* for CP & SDMA engines since they are scheduled together so
+ 		 * need to make the timeout width enough to cover the time
+ 		 * cost waiting for it coming back under RUNTIME only
+-		*/
++		 */
+ 		tmo_gfx = 8 * AMDGPU_IB_TEST_TIMEOUT;
+ 	} else if (adev->gmc.xgmi.hive_id) {
+ 		tmo_gfx = AMDGPU_IB_TEST_GFX_XGMI_TIMEOUT;
+@@ -465,13 +465,13 @@ static int amdgpu_debugfs_sa_info_show(struct seq_file *m, void *unused)
+ {
+ 	struct amdgpu_device *adev = m->private;
+ 
+-	seq_printf(m, "--------------------- DELAYED --------------------- \n");
++	seq_puts(m, "--------------------- DELAYED ---------------------\n");
+ 	amdgpu_sa_bo_dump_debug_info(&adev->ib_pools[AMDGPU_IB_POOL_DELAYED],
+ 				     m);
+-	seq_printf(m, "-------------------- IMMEDIATE -------------------- \n");
++	seq_puts(m, "-------------------- IMMEDIATE --------------------\n");
+ 	amdgpu_sa_bo_dump_debug_info(&adev->ib_pools[AMDGPU_IB_POOL_IMMEDIATE],
+ 				     m);
+-	seq_printf(m, "--------------------- DIRECT ---------------------- \n");
++	seq_puts(m, "--------------------- DIRECT ----------------------\n");
+ 	amdgpu_sa_bo_dump_debug_info(&adev->ib_pools[AMDGPU_IB_POOL_DIRECT], m);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 12414a7132564..d4ca19ba5a289 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -557,6 +557,7 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ 			crtc = (struct drm_crtc *)minfo->crtcs[i];
+ 			if (crtc && crtc->base.id == info->mode_crtc.id) {
+ 				struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
++
+ 				ui32 = amdgpu_crtc->crtc_id;
+ 				found = 1;
+ 				break;
+@@ -575,7 +576,7 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ 		if (ret)
+ 			return ret;
+ 
+-		ret = copy_to_user(out, &ip, min((size_t)size, sizeof(ip)));
++		ret = copy_to_user(out, &ip, min_t(size_t, size, sizeof(ip)));
+ 		return ret ? -EFAULT : 0;
+ 	}
+ 	case AMDGPU_INFO_HW_IP_COUNT: {
+@@ -721,17 +722,18 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ 				    ? -EFAULT : 0;
+ 	}
+ 	case AMDGPU_INFO_READ_MMR_REG: {
+-		unsigned n, alloc_size;
++		unsigned int n, alloc_size;
+ 		uint32_t *regs;
+-		unsigned se_num = (info->read_mmr_reg.instance >>
++		unsigned int se_num = (info->read_mmr_reg.instance >>
+ 				   AMDGPU_INFO_MMR_SE_INDEX_SHIFT) &
+ 				  AMDGPU_INFO_MMR_SE_INDEX_MASK;
+-		unsigned sh_num = (info->read_mmr_reg.instance >>
++		unsigned int sh_num = (info->read_mmr_reg.instance >>
+ 				   AMDGPU_INFO_MMR_SH_INDEX_SHIFT) &
+ 				  AMDGPU_INFO_MMR_SH_INDEX_MASK;
+ 
+ 		/* set full masks if the userspace set all bits
+-		 * in the bitfields */
++		 * in the bitfields
++		 */
+ 		if (se_num == AMDGPU_INFO_MMR_SE_INDEX_MASK)
+ 			se_num = 0xffffffff;
+ 		else if (se_num >= AMDGPU_GFX_MAX_SE)
+@@ -896,7 +898,7 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ 		return ret;
+ 	}
+ 	case AMDGPU_INFO_VCE_CLOCK_TABLE: {
+-		unsigned i;
++		unsigned int i;
+ 		struct drm_amdgpu_info_vce_clock_table vce_clk_table = {};
+ 		struct amd_vce_state *vce_state;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+index 2cae0b1a0b8ac..c162d018cf259 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
+@@ -39,6 +39,8 @@
+ #define PSP_TMR_ALIGNMENT	0x100000
+ #define PSP_FW_NAME_LEN		0x24
+ 
++extern const struct attribute_group amdgpu_flash_attr_group;
++
+ enum psp_shared_mem_size {
+ 	PSP_ASD_SHARED_MEM_SIZE				= 0x0,
+ 	PSP_XGMI_SHARED_MEM_SIZE			= 0x4000,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+index 6d0d66e40db93..96732897f87a0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+@@ -39,6 +39,9 @@
+ 
+ #define AMDGPU_POISON	0xd0bed0be
+ 
++extern const struct attribute_group amdgpu_vram_mgr_attr_group;
++extern const struct attribute_group amdgpu_gtt_mgr_attr_group;
++
+ struct hmm_range;
+ 
+ struct amdgpu_gtt_mgr {
+diff --git a/drivers/gpu/drm/amd/amdgpu/cik.c b/drivers/gpu/drm/amd/amdgpu/cik.c
+index 5641cf05d856b..e63abdf52b6c2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/cik.c
++++ b/drivers/gpu/drm/amd/amdgpu/cik.c
+@@ -1574,17 +1574,8 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
+ 			u16 bridge_cfg2, gpu_cfg2;
+ 			u32 max_lw, current_lw, tmp;
+ 
+-			pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-						  &bridge_cfg);
+-			pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL,
+-						  &gpu_cfg);
+-
+-			tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16);
+-
+-			tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL,
+-						   tmp16);
++			pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
++			pcie_capability_set_word(adev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
+ 
+ 			tmp = RREG32_PCIE(ixPCIE_LC_STATUS1);
+ 			max_lw = (tmp & PCIE_LC_STATUS1__LC_DETECTED_LINK_WIDTH_MASK) >>
+@@ -1637,21 +1628,14 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
+ 				msleep(100);
+ 
+ 				/* linkctl */
+-				pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(root, PCI_EXP_LNKCTL,
+-							   tmp16);
+-
+-				pcie_capability_read_word(adev->pdev,
+-							  PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(adev->pdev,
+-							   PCI_EXP_LNKCTL,
+-							   tmp16);
++				pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   bridge_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
++				pcie_capability_clear_and_set_word(adev->pdev, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   gpu_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
+ 
+ 				/* linkctl2 */
+ 				pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
+diff --git a/drivers/gpu/drm/amd/amdgpu/si.c b/drivers/gpu/drm/amd/amdgpu/si.c
+index f64b87b11b1b5..4b81f29e5fd5a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/si.c
++++ b/drivers/gpu/drm/amd/amdgpu/si.c
+@@ -2276,17 +2276,8 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev)
+ 			u16 bridge_cfg2, gpu_cfg2;
+ 			u32 max_lw, current_lw, tmp;
+ 
+-			pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-						  &bridge_cfg);
+-			pcie_capability_read_word(adev->pdev, PCI_EXP_LNKCTL,
+-						  &gpu_cfg);
+-
+-			tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16);
+-
+-			tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(adev->pdev, PCI_EXP_LNKCTL,
+-						   tmp16);
++			pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
++			pcie_capability_set_word(adev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
+ 
+ 			tmp = RREG32_PCIE(PCIE_LC_STATUS1);
+ 			max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT;
+@@ -2331,21 +2322,14 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev)
+ 
+ 				mdelay(100);
+ 
+-				pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(root, PCI_EXP_LNKCTL,
+-							   tmp16);
+-
+-				pcie_capability_read_word(adev->pdev,
+-							  PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(adev->pdev,
+-							   PCI_EXP_LNKCTL,
+-							   tmp16);
++				pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   bridge_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
++				pcie_capability_clear_and_set_word(adev->pdev, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   gpu_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
+ 
+ 				pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
+ 							  &tmp16);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index e5554a36e8c8b..3a7e7d2ce847b 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -8074,10 +8074,12 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ 		 * fast updates.
+ 		 */
+ 		if (crtc->state->async_flip &&
+-		    acrtc_state->update_type != UPDATE_TYPE_FAST)
++		    (acrtc_state->update_type != UPDATE_TYPE_FAST ||
++		     get_mem_type(old_plane_state->fb) != get_mem_type(fb)))
+ 			drm_warn_once(state->dev,
+ 				      "[PLANE:%d:%s] async flip with non-fast update\n",
+ 				      plane->base.id, plane->name);
++
+ 		bundle->flip_addrs[planes_count].flip_immediate =
+ 			crtc->state->async_flip &&
+ 			acrtc_state->update_type == UPDATE_TYPE_FAST &&
+@@ -10040,6 +10042,11 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 
+ 	/* Remove exiting planes if they are modified */
+ 	for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
++		if (old_plane_state->fb && new_plane_state->fb &&
++		    get_mem_type(old_plane_state->fb) !=
++		    get_mem_type(new_plane_state->fb))
++			lock_and_validation_needed = true;
++
+ 		ret = dm_update_plane_state(dc, state, plane,
+ 					    old_plane_state,
+ 					    new_plane_state,
+@@ -10287,9 +10294,20 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ 		struct dm_crtc_state *dm_new_crtc_state =
+ 			to_dm_crtc_state(new_crtc_state);
+ 
++		/*
++		 * Only allow async flips for fast updates that don't change
++		 * the FB pitch, the DCC state, rotation, etc.
++		 */
++		if (new_crtc_state->async_flip && lock_and_validation_needed) {
++			drm_dbg_atomic(crtc->dev,
++				       "[CRTC:%d:%s] async flips are only supported for fast updates\n",
++				       crtc->base.id, crtc->name);
++			ret = -EINVAL;
++			goto fail;
++		}
++
+ 		dm_new_crtc_state->update_type = lock_and_validation_needed ?
+-							 UPDATE_TYPE_FULL :
+-							 UPDATE_TYPE_FAST;
++			UPDATE_TYPE_FULL : UPDATE_TYPE_FAST;
+ 	}
+ 
+ 	/* Must be success */
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+index 30d4c6fd95f53..440fc0869a34b 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+@@ -398,18 +398,6 @@ static int dm_crtc_helper_atomic_check(struct drm_crtc *crtc,
+ 		return -EINVAL;
+ 	}
+ 
+-	/*
+-	 * Only allow async flips for fast updates that don't change the FB
+-	 * pitch, the DCC state, rotation, etc.
+-	 */
+-	if (crtc_state->async_flip &&
+-	    dm_crtc_state->update_type != UPDATE_TYPE_FAST) {
+-		drm_dbg_atomic(crtc->dev,
+-			       "[CRTC:%d:%s] async flips are only supported for fast updates\n",
+-			       crtc->base.id, crtc->name);
+-		return -EINVAL;
+-	}
+-
+ 	/* In some use cases, like reset, no stream is attached */
+ 	if (!dm_crtc_state->stream)
+ 		return 0;
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c
+index 925d6e13620ec..1bbf85defd611 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_smu.c
+@@ -32,6 +32,7 @@
+ 
+ #define MAX_INSTANCE                                        6
+ #define MAX_SEGMENT                                         6
++#define SMU_REGISTER_WRITE_RETRY_COUNT                      5
+ 
+ struct IP_BASE_INSTANCE
+ {
+@@ -134,6 +135,8 @@ static int dcn315_smu_send_msg_with_param(
+ 		unsigned int msg_id, unsigned int param)
+ {
+ 	uint32_t result;
++	uint32_t i = 0;
++	uint32_t read_back_data;
+ 
+ 	result = dcn315_smu_wait_for_response(clk_mgr, 10, 200000);
+ 
+@@ -150,10 +153,19 @@ static int dcn315_smu_send_msg_with_param(
+ 	/* Set the parameter register for the SMU message, unit is Mhz */
+ 	REG_WRITE(MP1_SMN_C2PMSG_37, param);
+ 
+-	/* Trigger the message transaction by writing the message ID */
+-	generic_write_indirect_reg(CTX,
+-		REG_NBIO(RSMU_INDEX), REG_NBIO(RSMU_DATA),
+-		mmMP1_C2PMSG_3, msg_id);
++	for (i = 0; i < SMU_REGISTER_WRITE_RETRY_COUNT; i++) {
++		/* Trigger the message transaction by writing the message ID */
++		generic_write_indirect_reg(CTX,
++			REG_NBIO(RSMU_INDEX), REG_NBIO(RSMU_DATA),
++			mmMP1_C2PMSG_3, msg_id);
++		read_back_data = generic_read_indirect_reg(CTX,
++			REG_NBIO(RSMU_INDEX), REG_NBIO(RSMU_DATA),
++			mmMP1_C2PMSG_3);
++		if (read_back_data == msg_id)
++			break;
++		udelay(2);
++		smu_print("SMU msg id write failed %x times.\n", i + 1);
++	}
+ 
+ 	result = dcn315_smu_wait_for_response(clk_mgr, 10, 200000);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index 4492bc2392b63..5cfa37804d7c6 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -2123,6 +2123,15 @@ void dcn20_optimize_bandwidth(
+ 	if (hubbub->funcs->program_compbuf_size)
+ 		hubbub->funcs->program_compbuf_size(hubbub, context->bw_ctx.bw.dcn.compbuf_size_kb, true);
+ 
++	if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching) {
++		dc_dmub_srv_p_state_delegate(dc,
++			true, context);
++		context->bw_ctx.bw.dcn.clk.p_state_change_support = true;
++		dc->clk_mgr->clks.fw_based_mclk_switching = true;
++	} else {
++		dc->clk_mgr->clks.fw_based_mclk_switching = false;
++	}
++
+ 	dc->clk_mgr->funcs->update_clocks(
+ 			dc->clk_mgr,
+ 			context,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+index bf8864bc8a99e..4cd4ae07d73dc 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
+@@ -949,13 +949,36 @@ void dcn30_set_disp_pattern_generator(const struct dc *dc,
+ }
+ 
+ void dcn30_prepare_bandwidth(struct dc *dc,
+-			     struct dc_state *context)
++	struct dc_state *context)
+ {
++	bool p_state_change_support = context->bw_ctx.bw.dcn.clk.p_state_change_support;
++	/* Any transition into an FPO config should disable MCLK switching first to avoid
++	 * driver and FW P-State synchronization issues.
++	 */
++	if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching || dc->clk_mgr->clks.fw_based_mclk_switching) {
++		dc->optimized_required = true;
++		context->bw_ctx.bw.dcn.clk.p_state_change_support = false;
++	}
++
+ 	if (dc->clk_mgr->dc_mode_softmax_enabled)
+ 		if (dc->clk_mgr->clks.dramclk_khz <= dc->clk_mgr->bw_params->dc_mode_softmax_memclk * 1000 &&
+ 				context->bw_ctx.bw.dcn.clk.dramclk_khz > dc->clk_mgr->bw_params->dc_mode_softmax_memclk * 1000)
+ 			dc->clk_mgr->funcs->set_max_memclk(dc->clk_mgr, dc->clk_mgr->bw_params->clk_table.entries[dc->clk_mgr->bw_params->clk_table.num_entries - 1].memclk_mhz);
+ 
+ 	dcn20_prepare_bandwidth(dc, context);
++	/*
++	 * enabled -> enabled: do not disable
++	 * enabled -> disabled: disable
++	 * disabled -> enabled: don't care
++	 * disabled -> disabled: don't care
++	 */
++	if (!context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching)
++		dc_dmub_srv_p_state_delegate(dc, false, context);
++
++	if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching || dc->clk_mgr->clks.fw_based_mclk_switching) {
++		/* After disabling P-State, restore the original value to ensure we get the correct P-State
++		 * on the next optimize. */
++		context->bw_ctx.bw.dcn.clk.p_state_change_support = p_state_change_support;
++	}
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_init.c b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_init.c
+index 257df8660b4ca..61205cdbe2d5a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_init.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn301/dcn301_init.c
+@@ -75,6 +75,7 @@ static const struct hw_sequencer_funcs dcn301_funcs = {
+ 	.get_hw_state = dcn10_get_hw_state,
+ 	.clear_status_bits = dcn10_clear_status_bits,
+ 	.wait_for_mpcc_disconnect = dcn10_wait_for_mpcc_disconnect,
++	.edp_backlight_control = dce110_edp_backlight_control,
+ 	.edp_power_control = dce110_edp_power_control,
+ 	.edp_wait_for_hpd_ready = dce110_edp_wait_for_hpd_ready,
+ 	.set_cursor_position = dcn10_set_cursor_position,
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index 9ef88a0b1b57e..d68fe5474676b 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -2172,15 +2172,19 @@ static int amdgpu_device_attr_create(struct amdgpu_device *adev,
+ 				     uint32_t mask, struct list_head *attr_list)
+ {
+ 	int ret = 0;
+-	struct device_attribute *dev_attr = &attr->dev_attr;
+-	const char *name = dev_attr->attr.name;
+ 	enum amdgpu_device_attr_states attr_states = ATTR_STATE_SUPPORTED;
+ 	struct amdgpu_device_attr_entry *attr_entry;
++	struct device_attribute *dev_attr;
++	const char *name;
+ 
+ 	int (*attr_update)(struct amdgpu_device *adev, struct amdgpu_device_attr *attr,
+ 			   uint32_t mask, enum amdgpu_device_attr_states *states) = default_attr_update;
+ 
+-	BUG_ON(!attr);
++	if (!attr)
++		return -EINVAL;
++
++	dev_attr = &attr->dev_attr;
++	name = dev_attr->attr.name;
+ 
+ 	attr_update = attr->attr_update ? attr->attr_update : default_attr_update;
+ 
+diff --git a/drivers/gpu/drm/armada/armada_overlay.c b/drivers/gpu/drm/armada/armada_overlay.c
+index f21eb8fb76d87..3b9bd8ecda137 100644
+--- a/drivers/gpu/drm/armada/armada_overlay.c
++++ b/drivers/gpu/drm/armada/armada_overlay.c
+@@ -4,6 +4,8 @@
+  *  Rewritten from the dovefb driver, and Armada510 manuals.
+  */
+ 
++#include <linux/bitfield.h>
++
+ #include <drm/armada_drm.h>
+ #include <drm/drm_atomic.h>
+ #include <drm/drm_atomic_helper.h>
+@@ -445,8 +447,8 @@ static int armada_overlay_get_property(struct drm_plane *plane,
+ 			     drm_to_overlay_state(state)->colorkey_ug,
+ 			     drm_to_overlay_state(state)->colorkey_vb, 0);
+ 	} else if (property == priv->colorkey_mode_prop) {
+-		*val = (drm_to_overlay_state(state)->colorkey_mode &
+-			CFG_CKMODE_MASK) >> ffs(CFG_CKMODE_MASK);
++		*val = FIELD_GET(CFG_CKMODE_MASK,
++				 drm_to_overlay_state(state)->colorkey_mode);
+ 	} else if (property == priv->brightness_prop) {
+ 		*val = drm_to_overlay_state(state)->brightness + 256;
+ 	} else if (property == priv->contrast_prop) {
+diff --git a/drivers/gpu/drm/ast/ast_dp.c b/drivers/gpu/drm/ast/ast_dp.c
+index 6dc1a09504e13..fdd9a493aa9c0 100644
+--- a/drivers/gpu/drm/ast/ast_dp.c
++++ b/drivers/gpu/drm/ast/ast_dp.c
+@@ -7,6 +7,17 @@
+ #include <drm/drm_print.h>
+ #include "ast_drv.h"
+ 
++bool ast_astdp_is_connected(struct ast_device *ast)
++{
++	if (!ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xD1, ASTDP_MCU_FW_EXECUTING))
++		return false;
++	if (!ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xDF, ASTDP_HPD))
++		return false;
++	if (!ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xDC, ASTDP_LINK_SUCCESS))
++		return false;
++	return true;
++}
++
+ int ast_astdp_read_edid(struct drm_device *dev, u8 *ediddata)
+ {
+ 	struct ast_device *ast = to_ast_device(dev);
+diff --git a/drivers/gpu/drm/ast/ast_dp501.c b/drivers/gpu/drm/ast/ast_dp501.c
+index 1bc35a992369d..fa7442b0c2612 100644
+--- a/drivers/gpu/drm/ast/ast_dp501.c
++++ b/drivers/gpu/drm/ast/ast_dp501.c
+@@ -272,11 +272,9 @@ static bool ast_launch_m68k(struct drm_device *dev)
+ 	return true;
+ }
+ 
+-bool ast_dp501_read_edid(struct drm_device *dev, u8 *ediddata)
++bool ast_dp501_is_connected(struct ast_device *ast)
+ {
+-	struct ast_device *ast = to_ast_device(dev);
+-	u32 i, boot_address, offset, data;
+-	u32 *pEDIDidx;
++	u32 boot_address, offset, data;
+ 
+ 	if (ast->config_mode == ast_use_p2a) {
+ 		boot_address = get_fw_base(ast);
+@@ -292,14 +290,6 @@ bool ast_dp501_read_edid(struct drm_device *dev, u8 *ediddata)
+ 		data = ast_mindwm(ast, boot_address + offset);
+ 		if (!(data & AST_DP501_PNP_CONNECTED))
+ 			return false;
+-
+-		/* Read EDID */
+-		offset = AST_DP501_EDID_DATA;
+-		for (i = 0; i < 128; i += 4) {
+-			data = ast_mindwm(ast, boot_address + offset + i);
+-			pEDIDidx = (u32 *)(ediddata + i);
+-			*pEDIDidx = data;
+-		}
+ 	} else {
+ 		if (!ast->dp501_fw_buf)
+ 			return false;
+@@ -319,7 +309,30 @@ bool ast_dp501_read_edid(struct drm_device *dev, u8 *ediddata)
+ 		data = readl(ast->dp501_fw_buf + offset);
+ 		if (!(data & AST_DP501_PNP_CONNECTED))
+ 			return false;
++	}
++	return true;
++}
++
++bool ast_dp501_read_edid(struct drm_device *dev, u8 *ediddata)
++{
++	struct ast_device *ast = to_ast_device(dev);
++	u32 i, boot_address, offset, data;
++	u32 *pEDIDidx;
++
++	if (!ast_dp501_is_connected(ast))
++		return false;
++
++	if (ast->config_mode == ast_use_p2a) {
++		boot_address = get_fw_base(ast);
+ 
++		/* Read EDID */
++		offset = AST_DP501_EDID_DATA;
++		for (i = 0; i < 128; i += 4) {
++			data = ast_mindwm(ast, boot_address + offset + i);
++			pEDIDidx = (u32 *)(ediddata + i);
++			*pEDIDidx = data;
++		}
++	} else {
+ 		/* Read EDID */
+ 		offset = AST_DP501_EDID_DATA;
+ 		for (i = 0; i < 128; i += 4) {
+diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
+index 5498a6676f2e8..8a0ffa8b5939b 100644
+--- a/drivers/gpu/drm/ast/ast_drv.h
++++ b/drivers/gpu/drm/ast/ast_drv.h
+@@ -468,6 +468,7 @@ void ast_patch_ahb_2500(struct ast_device *ast);
+ /* ast dp501 */
+ void ast_set_dp501_video_output(struct drm_device *dev, u8 mode);
+ bool ast_backup_fw(struct drm_device *dev, u8 *addr, u32 size);
++bool ast_dp501_is_connected(struct ast_device *ast);
+ bool ast_dp501_read_edid(struct drm_device *dev, u8 *ediddata);
+ u8 ast_get_dp501_max_clk(struct drm_device *dev);
+ void ast_init_3rdtx(struct drm_device *dev);
+@@ -476,6 +477,7 @@ void ast_init_3rdtx(struct drm_device *dev);
+ struct ast_i2c_chan *ast_i2c_create(struct drm_device *dev);
+ 
+ /* aspeed DP */
++bool ast_astdp_is_connected(struct ast_device *ast);
+ int ast_astdp_read_edid(struct drm_device *dev, u8 *ediddata);
+ void ast_dp_launch(struct drm_device *dev);
+ void ast_dp_power_on_off(struct drm_device *dev, bool no);
+diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
+index b3c670af6ef2b..0724516f29737 100644
+--- a/drivers/gpu/drm/ast/ast_mode.c
++++ b/drivers/gpu/drm/ast/ast_mode.c
+@@ -1585,8 +1585,20 @@ err_drm_connector_update_edid_property:
+ 	return 0;
+ }
+ 
++static int ast_dp501_connector_helper_detect_ctx(struct drm_connector *connector,
++						 struct drm_modeset_acquire_ctx *ctx,
++						 bool force)
++{
++	struct ast_device *ast = to_ast_device(connector->dev);
++
++	if (ast_dp501_is_connected(ast))
++		return connector_status_connected;
++	return connector_status_disconnected;
++}
++
+ static const struct drm_connector_helper_funcs ast_dp501_connector_helper_funcs = {
+ 	.get_modes = ast_dp501_connector_helper_get_modes,
++	.detect_ctx = ast_dp501_connector_helper_detect_ctx,
+ };
+ 
+ static const struct drm_connector_funcs ast_dp501_connector_funcs = {
+@@ -1611,7 +1623,7 @@ static int ast_dp501_connector_init(struct drm_device *dev, struct drm_connector
+ 	connector->interlace_allowed = 0;
+ 	connector->doublescan_allowed = 0;
+ 
+-	connector->polled = DRM_CONNECTOR_POLL_CONNECT;
++	connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT;
+ 
+ 	return 0;
+ }
+@@ -1683,8 +1695,20 @@ err_drm_connector_update_edid_property:
+ 	return 0;
+ }
+ 
++static int ast_astdp_connector_helper_detect_ctx(struct drm_connector *connector,
++						 struct drm_modeset_acquire_ctx *ctx,
++						 bool force)
++{
++	struct ast_device *ast = to_ast_device(connector->dev);
++
++	if (ast_astdp_is_connected(ast))
++		return connector_status_connected;
++	return connector_status_disconnected;
++}
++
+ static const struct drm_connector_helper_funcs ast_astdp_connector_helper_funcs = {
+ 	.get_modes = ast_astdp_connector_helper_get_modes,
++	.detect_ctx = ast_astdp_connector_helper_detect_ctx,
+ };
+ 
+ static const struct drm_connector_funcs ast_astdp_connector_funcs = {
+@@ -1709,7 +1733,7 @@ static int ast_astdp_connector_init(struct drm_device *dev, struct drm_connector
+ 	connector->interlace_allowed = 0;
+ 	connector->doublescan_allowed = 0;
+ 
+-	connector->polled = DRM_CONNECTOR_POLL_CONNECT;
++	connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT;
+ 
+ 	return 0;
+ }
+@@ -1848,5 +1872,7 @@ int ast_mode_config_init(struct ast_device *ast)
+ 
+ 	drm_mode_config_reset(dev);
+ 
++	drm_kms_helper_poll_init(dev);
++
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index 2254457ab5d02..9aeeb63435cd9 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -786,8 +786,13 @@ static void adv7511_mode_set(struct adv7511 *adv7511,
+ 	else
+ 		low_refresh_rate = ADV7511_LOW_REFRESH_RATE_NONE;
+ 
+-	regmap_update_bits(adv7511->regmap, 0xfb,
+-		0x6, low_refresh_rate << 1);
++	if (adv7511->type == ADV7511)
++		regmap_update_bits(adv7511->regmap, 0xfb,
++				   0x6, low_refresh_rate << 1);
++	else
++		regmap_update_bits(adv7511->regmap, 0x4a,
++				   0xc, low_refresh_rate << 2);
++
+ 	regmap_update_bits(adv7511->regmap, 0x17,
+ 		0x60, (vsync_polarity << 6) | (hsync_polarity << 5));
+ 
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index 8b985efdc086b..866d018f4bb11 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -872,11 +872,11 @@ static int anx7625_hdcp_enable(struct anx7625_data *ctx)
+ 	}
+ 
+ 	/* Read downstream capability */
+-	ret = anx7625_aux_trans(ctx, DP_AUX_NATIVE_READ, 0x68028, 1, &bcap);
++	ret = anx7625_aux_trans(ctx, DP_AUX_NATIVE_READ, DP_AUX_HDCP_BCAPS, 1, &bcap);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	if (!(bcap & 0x01)) {
++	if (!(bcap & DP_BCAPS_HDCP_CAPABLE)) {
+ 		pr_warn("downstream not support HDCP 1.4, cap(%x).\n", bcap);
+ 		return 0;
+ 	}
+@@ -931,8 +931,8 @@ static void anx7625_dp_start(struct anx7625_data *ctx)
+ 
+ 	dev_dbg(dev, "set downstream sink into normal\n");
+ 	/* Downstream sink enter into normal mode */
+-	data = 1;
+-	ret = anx7625_aux_trans(ctx, DP_AUX_NATIVE_WRITE, 0x000600, 1, &data);
++	data = DP_SET_POWER_D0;
++	ret = anx7625_aux_trans(ctx, DP_AUX_NATIVE_WRITE, DP_SET_POWER, 1, &data);
+ 	if (ret < 0)
+ 		dev_err(dev, "IO error : set sink into normal mode fail\n");
+ 
+@@ -971,8 +971,8 @@ static void anx7625_dp_stop(struct anx7625_data *ctx)
+ 
+ 	dev_dbg(dev, "notify downstream enter into standby\n");
+ 	/* Downstream monitor enter into standby mode */
+-	data = 2;
+-	ret |= anx7625_aux_trans(ctx, DP_AUX_NATIVE_WRITE, 0x000600, 1, &data);
++	data = DP_SET_POWER_D3;
++	ret |= anx7625_aux_trans(ctx, DP_AUX_NATIVE_WRITE, DP_SET_POWER, 1, &data);
+ 	if (ret < 0)
+ 		DRM_DEV_ERROR(dev, "IO error : mute video fail\n");
+ 
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c b/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
+index b2efecf7d1603..4291798bd70f5 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
+@@ -265,6 +265,7 @@ struct dw_mipi_dsi {
+ 	struct dw_mipi_dsi *master; /* dual-dsi master ptr */
+ 	struct dw_mipi_dsi *slave; /* dual-dsi slave ptr */
+ 
++	struct drm_display_mode mode;
+ 	const struct dw_mipi_dsi_plat_data *plat_data;
+ };
+ 
+@@ -332,6 +333,7 @@ static int dw_mipi_dsi_host_attach(struct mipi_dsi_host *host,
+ 	if (IS_ERR(bridge))
+ 		return PTR_ERR(bridge);
+ 
++	bridge->pre_enable_prev_first = true;
+ 	dsi->panel_bridge = bridge;
+ 
+ 	drm_bridge_add(&dsi->bridge);
+@@ -859,15 +861,6 @@ static void dw_mipi_dsi_bridge_post_atomic_disable(struct drm_bridge *bridge,
+ 	 */
+ 	dw_mipi_dsi_set_mode(dsi, 0);
+ 
+-	/*
+-	 * TODO Only way found to call panel-bridge post_disable &
+-	 * panel unprepare before the dsi "final" disable...
+-	 * This needs to be fixed in the drm_bridge framework and the API
+-	 * needs to be updated to manage our own call chains...
+-	 */
+-	if (dsi->panel_bridge->funcs->post_disable)
+-		dsi->panel_bridge->funcs->post_disable(dsi->panel_bridge);
+-
+ 	if (phy_ops->power_off)
+ 		phy_ops->power_off(dsi->plat_data->priv_data);
+ 
+@@ -942,15 +935,25 @@ static void dw_mipi_dsi_mode_set(struct dw_mipi_dsi *dsi,
+ 		phy_ops->power_on(dsi->plat_data->priv_data);
+ }
+ 
++static void dw_mipi_dsi_bridge_atomic_pre_enable(struct drm_bridge *bridge,
++						 struct drm_bridge_state *old_bridge_state)
++{
++	struct dw_mipi_dsi *dsi = bridge_to_dsi(bridge);
++
++	/* Power up the dsi ctl into a command mode */
++	dw_mipi_dsi_mode_set(dsi, &dsi->mode);
++	if (dsi->slave)
++		dw_mipi_dsi_mode_set(dsi->slave, &dsi->mode);
++}
++
+ static void dw_mipi_dsi_bridge_mode_set(struct drm_bridge *bridge,
+ 					const struct drm_display_mode *mode,
+ 					const struct drm_display_mode *adjusted_mode)
+ {
+ 	struct dw_mipi_dsi *dsi = bridge_to_dsi(bridge);
+ 
+-	dw_mipi_dsi_mode_set(dsi, adjusted_mode);
+-	if (dsi->slave)
+-		dw_mipi_dsi_mode_set(dsi->slave, adjusted_mode);
++	/* Store the display mode for later use in pre_enable callback */
++	drm_mode_copy(&dsi->mode, adjusted_mode);
+ }
+ 
+ static void dw_mipi_dsi_bridge_atomic_enable(struct drm_bridge *bridge,
+@@ -1004,6 +1007,7 @@ static const struct drm_bridge_funcs dw_mipi_dsi_bridge_funcs = {
+ 	.atomic_duplicate_state	= drm_atomic_helper_bridge_duplicate_state,
+ 	.atomic_destroy_state	= drm_atomic_helper_bridge_destroy_state,
+ 	.atomic_reset		= drm_atomic_helper_bridge_reset,
++	.atomic_pre_enable	= dw_mipi_dsi_bridge_atomic_pre_enable,
+ 	.atomic_enable		= dw_mipi_dsi_bridge_atomic_enable,
+ 	.atomic_post_disable	= dw_mipi_dsi_bridge_post_atomic_disable,
+ 	.mode_set		= dw_mipi_dsi_bridge_mode_set,
+diff --git a/drivers/gpu/drm/bridge/tc358764.c b/drivers/gpu/drm/bridge/tc358764.c
+index f85654f1b1045..8e938a7480f37 100644
+--- a/drivers/gpu/drm/bridge/tc358764.c
++++ b/drivers/gpu/drm/bridge/tc358764.c
+@@ -176,7 +176,7 @@ static void tc358764_read(struct tc358764 *ctx, u16 addr, u32 *val)
+ 	if (ret >= 0)
+ 		le32_to_cpus(val);
+ 
+-	dev_dbg(ctx->dev, "read: %d, addr: %d\n", addr, *val);
++	dev_dbg(ctx->dev, "read: addr=0x%04x data=0x%08x\n", addr, *val);
+ }
+ 
+ static void tc358764_write(struct tc358764 *ctx, u16 addr, u32 val)
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_dump.c b/drivers/gpu/drm/etnaviv/etnaviv_dump.c
+index 44b5f3c35aabe..898f84a0fc30c 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_dump.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_dump.c
+@@ -130,9 +130,9 @@ void etnaviv_core_dump(struct etnaviv_gem_submit *submit)
+ 		return;
+ 	etnaviv_dump_core = false;
+ 
+-	mutex_lock(&gpu->mmu_context->lock);
++	mutex_lock(&submit->mmu_context->lock);
+ 
+-	mmu_size = etnaviv_iommu_dump_size(gpu->mmu_context);
++	mmu_size = etnaviv_iommu_dump_size(submit->mmu_context);
+ 
+ 	/* We always dump registers, mmu, ring, hanging cmdbuf and end marker */
+ 	n_obj = 5;
+@@ -162,7 +162,7 @@ void etnaviv_core_dump(struct etnaviv_gem_submit *submit)
+ 	iter.start = __vmalloc(file_size, GFP_KERNEL | __GFP_NOWARN |
+ 			__GFP_NORETRY);
+ 	if (!iter.start) {
+-		mutex_unlock(&gpu->mmu_context->lock);
++		mutex_unlock(&submit->mmu_context->lock);
+ 		dev_warn(gpu->dev, "failed to allocate devcoredump file\n");
+ 		return;
+ 	}
+@@ -174,18 +174,18 @@ void etnaviv_core_dump(struct etnaviv_gem_submit *submit)
+ 	memset(iter.hdr, 0, iter.data - iter.start);
+ 
+ 	etnaviv_core_dump_registers(&iter, gpu);
+-	etnaviv_core_dump_mmu(&iter, gpu->mmu_context, mmu_size);
++	etnaviv_core_dump_mmu(&iter, submit->mmu_context, mmu_size);
+ 	etnaviv_core_dump_mem(&iter, ETDUMP_BUF_RING, gpu->buffer.vaddr,
+ 			      gpu->buffer.size,
+ 			      etnaviv_cmdbuf_get_va(&gpu->buffer,
+-					&gpu->mmu_context->cmdbuf_mapping));
++					&submit->mmu_context->cmdbuf_mapping));
+ 
+ 	etnaviv_core_dump_mem(&iter, ETDUMP_BUF_CMD,
+ 			      submit->cmdbuf.vaddr, submit->cmdbuf.size,
+ 			      etnaviv_cmdbuf_get_va(&submit->cmdbuf,
+-					&gpu->mmu_context->cmdbuf_mapping));
++					&submit->mmu_context->cmdbuf_mapping));
+ 
+-	mutex_unlock(&gpu->mmu_context->lock);
++	mutex_unlock(&submit->mmu_context->lock);
+ 
+ 	/* Reserve space for the bomap */
+ 	if (n_bomap_pages) {
+diff --git a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
+index a7d2c92d6c6a0..8026118c6e033 100644
+--- a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
++++ b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
+@@ -7,6 +7,7 @@
+ #include <linux/hyperv.h>
+ #include <linux/module.h>
+ #include <linux/pci.h>
++#include <linux/screen_info.h>
+ 
+ #include <drm/drm_aperture.h>
+ #include <drm/drm_atomic_helper.h>
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
+index c0a38f5217eee..f2f6a5c01a6d2 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
+@@ -426,7 +426,7 @@ static int ovl_adaptor_comp_init(struct device *dev, struct component_match **ma
+ 			continue;
+ 		}
+ 
+-		type = (enum mtk_ovl_adaptor_comp_type)of_id->data;
++		type = (enum mtk_ovl_adaptor_comp_type)(uintptr_t)of_id->data;
+ 		id = ovl_adaptor_comp_get_id(dev, node, type);
+ 		if (id < 0) {
+ 			dev_warn(dev, "Skipping unknown component %pOF\n",
+diff --git a/drivers/gpu/drm/mediatek/mtk_dp.c b/drivers/gpu/drm/mediatek/mtk_dp.c
+index 64eee77452c04..c58b775877a31 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dp.c
++++ b/drivers/gpu/drm/mediatek/mtk_dp.c
+@@ -1588,7 +1588,9 @@ static int mtk_dp_parse_capabilities(struct mtk_dp *mtk_dp)
+ 	u8 val;
+ 	ssize_t ret;
+ 
+-	drm_dp_read_dpcd_caps(&mtk_dp->aux, mtk_dp->rx_cap);
++	ret = drm_dp_read_dpcd_caps(&mtk_dp->aux, mtk_dp->rx_cap);
++	if (ret < 0)
++		return ret;
+ 
+ 	if (drm_dp_tps4_supported(mtk_dp->rx_cap))
+ 		mtk_dp->train_info.channel_eq_pattern = DP_TRAINING_PATTERN_4;
+@@ -1615,10 +1617,13 @@ static int mtk_dp_parse_capabilities(struct mtk_dp *mtk_dp)
+ 			return ret == 0 ? -EIO : ret;
+ 		}
+ 
+-		if (val)
+-			drm_dp_dpcd_writeb(&mtk_dp->aux,
+-					   DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0,
+-					   val);
++		if (val) {
++			ret = drm_dp_dpcd_writeb(&mtk_dp->aux,
++						 DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0,
++						 val);
++			if (ret < 0)
++				return ret;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+index d40142842f85c..8d44f3df116fa 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+@@ -116,10 +116,9 @@ static int mtk_drm_cmdq_pkt_create(struct cmdq_client *client, struct cmdq_pkt *
+ 	dma_addr_t dma_addr;
+ 
+ 	pkt->va_base = kzalloc(size, GFP_KERNEL);
+-	if (!pkt->va_base) {
+-		kfree(pkt);
++	if (!pkt->va_base)
+ 		return -ENOMEM;
+-	}
++
+ 	pkt->buf_size = size;
+ 	pkt->cl = (void *)client;
+ 
+@@ -129,7 +128,6 @@ static int mtk_drm_cmdq_pkt_create(struct cmdq_client *client, struct cmdq_pkt *
+ 	if (dma_mapping_error(dev, dma_addr)) {
+ 		dev_err(dev, "dma map failed, size=%u\n", (u32)(u64)size);
+ 		kfree(pkt->va_base);
+-		kfree(pkt);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -145,7 +143,6 @@ static void mtk_drm_cmdq_pkt_destroy(struct cmdq_pkt *pkt)
+ 	dma_unmap_single(client->chan->mbox->dev, pkt->pa_base, pkt->buf_size,
+ 			 DMA_TO_DEVICE);
+ 	kfree(pkt->va_base);
+-	kfree(pkt);
+ }
+ #endif
+ 
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+index f114da4d36a96..771f4e1733539 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+@@ -563,14 +563,15 @@ int mtk_ddp_comp_init(struct device_node *node, struct mtk_ddp_comp *comp,
+ 	/* Not all drm components have a DTS device node, such as ovl_adaptor,
+ 	 * which is the drm bring up sub driver
+ 	 */
+-	if (node) {
+-		comp_pdev = of_find_device_by_node(node);
+-		if (!comp_pdev) {
+-			DRM_INFO("Waiting for device %s\n", node->full_name);
+-			return -EPROBE_DEFER;
+-		}
+-		comp->dev = &comp_pdev->dev;
++	if (!node)
++		return 0;
++
++	comp_pdev = of_find_device_by_node(node);
++	if (!comp_pdev) {
++		DRM_INFO("Waiting for device %s\n", node->full_name);
++		return -EPROBE_DEFER;
+ 	}
++	comp->dev = &comp_pdev->dev;
+ 
+ 	if (type == MTK_DISP_AAL ||
+ 	    type == MTK_DISP_BLS ||
+@@ -580,7 +581,6 @@ int mtk_ddp_comp_init(struct device_node *node, struct mtk_ddp_comp *comp,
+ 	    type == MTK_DISP_MERGE ||
+ 	    type == MTK_DISP_OVL ||
+ 	    type == MTK_DISP_OVL_2L ||
+-	    type == MTK_DISP_OVL_ADAPTOR ||
+ 	    type == MTK_DISP_PWM ||
+ 	    type == MTK_DISP_RDMA ||
+ 	    type == MTK_DPI ||
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 6dcb4ba2466c0..30d10f21562f4 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -354,7 +354,7 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
+ 	const struct of_device_id *of_id;
+ 	struct device_node *node;
+ 	struct device *drm_dev;
+-	int cnt = 0;
++	unsigned int cnt = 0;
+ 	int i, j;
+ 
+ 	for_each_child_of_node(phandle->parent, node) {
+@@ -375,6 +375,9 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
+ 		all_drm_priv[cnt] = dev_get_drvdata(drm_dev);
+ 		if (all_drm_priv[cnt] && all_drm_priv[cnt]->mtk_drm_bound)
+ 			cnt++;
++
++		if (cnt == MAX_CRTC)
++			break;
+ 	}
+ 
+ 	if (drm_priv->data->mmsys_dev_num == cnt) {
+@@ -829,7 +832,7 @@ static int mtk_drm_probe(struct platform_device *pdev)
+ 			continue;
+ 		}
+ 
+-		comp_type = (enum mtk_ddp_comp_type)of_id->data;
++		comp_type = (enum mtk_ddp_comp_type)(uintptr_t)of_id->data;
+ 
+ 		if (comp_type == MTK_DISP_MUTEX) {
+ 			int id;
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+index a25b28d3ee902..9f364df52478d 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+@@ -247,7 +247,11 @@ int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map)
+ 
+ 	mtk_gem->kvaddr = vmap(mtk_gem->pages, npages, VM_MAP,
+ 			       pgprot_writecombine(PAGE_KERNEL));
+-
++	if (!mtk_gem->kvaddr) {
++		kfree(sgt);
++		kfree(mtk_gem->pages);
++		return -ENOMEM;
++	}
+ out:
+ 	kfree(sgt);
+ 	iosys_map_set_vaddr(map, mtk_gem->kvaddr);
+diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+index c67089a7ebc10..ad4570d60abf2 100644
+--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+@@ -540,6 +540,10 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
+ 	gpu->perfcntrs = perfcntrs;
+ 	gpu->num_perfcntrs = ARRAY_SIZE(perfcntrs);
+ 
++	ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 1);
++	if (ret)
++		goto fail;
++
+ 	if (adreno_is_a20x(adreno_gpu))
+ 		adreno_gpu->registers = a200_registers;
+ 	else if (adreno_is_a225(adreno_gpu))
+@@ -547,10 +551,6 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
+ 	else
+ 		adreno_gpu->registers = a220_registers;
+ 
+-	ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, 1);
+-	if (ret)
+-		goto fail;
+-
+ 	if (!gpu->aspace) {
+ 		dev_err(dev->dev, "No memory protection without MMU\n");
+ 		if (!allow_vram_carveout) {
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index 5deb79924897a..63dde676f4339 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -1435,8 +1435,15 @@ void a6xx_gmu_remove(struct a6xx_gpu *a6xx_gpu)
+ 	struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
+ 	struct platform_device *pdev = to_platform_device(gmu->dev);
+ 
+-	if (!gmu->initialized)
++	mutex_lock(&gmu->lock);
++	if (!gmu->initialized) {
++		mutex_unlock(&gmu->lock);
+ 		return;
++	}
++
++	gmu->initialized = false;
++
++	mutex_unlock(&gmu->lock);
+ 
+ 	pm_runtime_force_suspend(gmu->dev);
+ 
+@@ -1466,8 +1473,6 @@ void a6xx_gmu_remove(struct a6xx_gpu *a6xx_gpu)
+ 
+ 	/* Drop reference taken in of_find_device_by_node */
+ 	put_device(gmu->dev);
+-
+-	gmu->initialized = false;
+ }
+ 
+ static int cxpd_notifier_cb(struct notifier_block *nb,
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index b3ada1e7b598b..a2513f7168238 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -2091,9 +2091,7 @@ static void a6xx_destroy(struct msm_gpu *gpu)
+ 
+ 	a6xx_llc_slices_destroy(a6xx_gpu);
+ 
+-	mutex_lock(&a6xx_gpu->gmu.lock);
+ 	a6xx_gmu_remove(a6xx_gpu);
+-	mutex_unlock(&a6xx_gpu->gmu.lock);
+ 
+ 	adreno_gpu_cleanup(adreno_gpu);
+ 
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
+index ce8d0b2475bf1..6e3c1368c5e15 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_device.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
+@@ -371,7 +371,7 @@ static const struct adreno_info gpulist[] = {
+ 		.rev = ADRENO_REV(6, 9, 0, ANY_ID),
+ 		.fw = {
+ 			[ADRENO_FW_SQE] = "a660_sqe.fw",
+-			[ADRENO_FW_GMU] = "a690_gmu.bin",
++			[ADRENO_FW_GMU] = "a660_gmu.bin",
+ 		},
+ 		.gmem = SZ_4M,
+ 		.inactive_period = DRM_MSM_INACTIVE_PERIOD,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+index 7d0d0e74c3b08..be8e7e54df8af 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+@@ -127,8 +127,13 @@ static const struct dpu_pingpong_cfg msm8998_pp[] = {
+ };
+ 
+ static const struct dpu_dsc_cfg msm8998_dsc[] = {
+-	DSC_BLK("dsc_0", DSC_0, 0x80000, 0),
+-	DSC_BLK("dsc_1", DSC_1, 0x80400, 0),
++	{
++		.name = "dsc_0", .id = DSC_0,
++		.base = 0x80000, .len = 0x140,
++	}, {
++		.name = "dsc_1", .id = DSC_1,
++		.base = 0x80400, .len = 0x140,
++	},
+ };
+ 
+ static const struct dpu_dspp_cfg msm8998_dspp[] = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+index b6098141bb9bb..bd450712e65cd 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+@@ -111,13 +111,13 @@ static const struct dpu_lm_cfg sdm845_lm[] = {
+ 
+ static const struct dpu_dspp_cfg sdm845_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_1", DSPP_1, 0x56000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_2", DSPP_2, 0x58000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_3", DSPP_3, 0x5a000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ };
+ 
+ static const struct dpu_pingpong_cfg sdm845_pp[] = {
+@@ -136,10 +136,19 @@ static const struct dpu_pingpong_cfg sdm845_pp[] = {
+ };
+ 
+ static const struct dpu_dsc_cfg sdm845_dsc[] = {
+-	DSC_BLK("dsc_0", DSC_0, 0x80000, 0),
+-	DSC_BLK("dsc_1", DSC_1, 0x80400, 0),
+-	DSC_BLK("dsc_2", DSC_2, 0x80800, 0),
+-	DSC_BLK("dsc_3", DSC_3, 0x80c00, 0),
++	{
++		.name = "dsc_0", .id = DSC_0,
++		.base = 0x80000, .len = 0x140,
++	}, {
++		.name = "dsc_1", .id = DSC_1,
++		.base = 0x80400, .len = 0x140,
++	}, {
++		.name = "dsc_2", .id = DSC_2,
++		.base = 0x80800, .len = 0x140,
++	}, {
++		.name = "dsc_3", .id = DSC_3,
++		.base = 0x80c00, .len = 0x140,
++	},
+ };
+ 
+ static const struct dpu_intf_cfg sdm845_intf[] = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+index b5f7513542678..4589b7a043990 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+@@ -118,13 +118,13 @@ static const struct dpu_lm_cfg sm8150_lm[] = {
+ 
+ static const struct dpu_dspp_cfg sm8150_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_1", DSPP_1, 0x56000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_2", DSPP_2, 0x58000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_3", DSPP_3, 0x5a000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ };
+ 
+ static const struct dpu_pingpong_cfg sm8150_pp[] = {
+@@ -155,10 +155,23 @@ static const struct dpu_merge_3d_cfg sm8150_merge_3d[] = {
+ };
+ 
+ static const struct dpu_dsc_cfg sm8150_dsc[] = {
+-	DSC_BLK("dsc_0", DSC_0, 0x80000, BIT(DPU_DSC_OUTPUT_CTRL)),
+-	DSC_BLK("dsc_1", DSC_1, 0x80400, BIT(DPU_DSC_OUTPUT_CTRL)),
+-	DSC_BLK("dsc_2", DSC_2, 0x80800, BIT(DPU_DSC_OUTPUT_CTRL)),
+-	DSC_BLK("dsc_3", DSC_3, 0x80c00, BIT(DPU_DSC_OUTPUT_CTRL)),
++	{
++		.name = "dsc_0", .id = DSC_0,
++		.base = 0x80000, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	}, {
++		.name = "dsc_1", .id = DSC_1,
++		.base = 0x80400, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	}, {
++		.name = "dsc_2", .id = DSC_2,
++		.base = 0x80800, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	}, {
++		.name = "dsc_3", .id = DSC_3,
++		.base = 0x80c00, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	},
+ };
+ 
+ static const struct dpu_intf_cfg sm8150_intf[] = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+index 8ed2b263c5ea3..8f5d5d44ccb3d 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+@@ -117,13 +117,13 @@ static const struct dpu_lm_cfg sc8180x_lm[] = {
+ 
+ static const struct dpu_dspp_cfg sc8180x_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_1", DSPP_1, 0x56000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_2", DSPP_2, 0x58000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_3", DSPP_3, 0x5a000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ };
+ 
+ static const struct dpu_pingpong_cfg sc8180x_pp[] = {
+@@ -154,12 +154,31 @@ static const struct dpu_merge_3d_cfg sc8180x_merge_3d[] = {
+ };
+ 
+ static const struct dpu_dsc_cfg sc8180x_dsc[] = {
+-	DSC_BLK("dsc_0", DSC_0, 0x80000, BIT(DPU_DSC_OUTPUT_CTRL)),
+-	DSC_BLK("dsc_1", DSC_1, 0x80400, BIT(DPU_DSC_OUTPUT_CTRL)),
+-	DSC_BLK("dsc_2", DSC_2, 0x80800, BIT(DPU_DSC_OUTPUT_CTRL)),
+-	DSC_BLK("dsc_3", DSC_3, 0x80c00, BIT(DPU_DSC_OUTPUT_CTRL)),
+-	DSC_BLK("dsc_4", DSC_4, 0x81000, BIT(DPU_DSC_OUTPUT_CTRL)),
+-	DSC_BLK("dsc_5", DSC_5, 0x81400, BIT(DPU_DSC_OUTPUT_CTRL)),
++	{
++		.name = "dsc_0", .id = DSC_0,
++		.base = 0x80000, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	}, {
++		.name = "dsc_1", .id = DSC_1,
++		.base = 0x80400, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	}, {
++		.name = "dsc_2", .id = DSC_2,
++		.base = 0x80800, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	}, {
++		.name = "dsc_3", .id = DSC_3,
++		.base = 0x80c00, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	}, {
++		.name = "dsc_4", .id = DSC_4,
++		.base = 0x81000, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	}, {
++		.name = "dsc_5", .id = DSC_5,
++		.base = 0x81400, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	},
+ };
+ 
+ static const struct dpu_intf_cfg sc8180x_intf[] = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
+index daebd21700413..0e17be6ed94f2 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
+@@ -119,13 +119,13 @@ static const struct dpu_lm_cfg sm8250_lm[] = {
+ 
+ static const struct dpu_dspp_cfg sm8250_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_1", DSPP_1, 0x56000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_2", DSPP_2, 0x58000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_3", DSPP_3, 0x5a000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ };
+ 
+ static const struct dpu_pingpong_cfg sm8250_pp[] = {
+@@ -156,10 +156,23 @@ static const struct dpu_merge_3d_cfg sm8250_merge_3d[] = {
+ };
+ 
+ static const struct dpu_dsc_cfg sm8250_dsc[] = {
+-	DSC_BLK("dsc_0", DSC_0, 0x80000, BIT(DPU_DSC_OUTPUT_CTRL)),
+-	DSC_BLK("dsc_1", DSC_1, 0x80400, BIT(DPU_DSC_OUTPUT_CTRL)),
+-	DSC_BLK("dsc_2", DSC_2, 0x80800, BIT(DPU_DSC_OUTPUT_CTRL)),
+-	DSC_BLK("dsc_3", DSC_3, 0x80c00, BIT(DPU_DSC_OUTPUT_CTRL)),
++	{
++		.name = "dsc_0", .id = DSC_0,
++		.base = 0x80000, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	}, {
++		.name = "dsc_1", .id = DSC_1,
++		.base = 0x80400, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	}, {
++		.name = "dsc_2", .id = DSC_2,
++		.base = 0x80800, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	}, {
++		.name = "dsc_3", .id = DSC_3,
++		.base = 0x80c00, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	},
+ };
+ 
+ static const struct dpu_intf_cfg sm8250_intf[] = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
+index 67566b07195a2..a3124661cb65f 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
+@@ -76,7 +76,7 @@ static const struct dpu_lm_cfg sc7180_lm[] = {
+ 
+ static const struct dpu_dspp_cfg sc7180_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ };
+ 
+ static const struct dpu_pingpong_cfg sc7180_pp[] = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h
+index 031fc8dae3c69..04a0dbf96e179 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h
+@@ -56,7 +56,7 @@ static const struct dpu_lm_cfg sm6115_lm[] = {
+ 
+ static const struct dpu_dspp_cfg sm6115_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ };
+ 
+ static const struct dpu_pingpong_cfg sm6115_pp[] = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_4_sm6350.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_4_sm6350.h
+index 06eba23b02364..398ea3749f805 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_4_sm6350.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_4_sm6350.h
+@@ -85,7 +85,7 @@ static const struct dpu_lm_cfg sm6350_lm[] = {
+ 
+ static const struct dpu_dspp_cfg sm6350_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		&sm8150_dspp_sblk),
++		&sdm845_dspp_sblk),
+ };
+ 
+ static struct dpu_pingpong_cfg sm6350_pp[] = {
+@@ -98,7 +98,11 @@ static struct dpu_pingpong_cfg sm6350_pp[] = {
+ };
+ 
+ static const struct dpu_dsc_cfg sm6350_dsc[] = {
+-	DSC_BLK("dsc_0", DSC_0, 0x80000, BIT(DPU_DSC_OUTPUT_CTRL)),
++	{
++		.name = "dsc_0", .id = DSC_0,
++		.base = 0x80000, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	},
+ };
+ 
+ static const struct dpu_intf_cfg sm6350_intf[] = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h
+index f2808098af399..06cf48b55f989 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h
+@@ -53,7 +53,7 @@ static const struct dpu_lm_cfg qcm2290_lm[] = {
+ 
+ static const struct dpu_dspp_cfg qcm2290_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ };
+ 
+ static const struct dpu_pingpong_cfg qcm2290_pp[] = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_9_sm6375.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_9_sm6375.h
+index 241fa6746674d..ec12602896f31 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_9_sm6375.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_9_sm6375.h
+@@ -57,7 +57,7 @@ static const struct dpu_lm_cfg sm6375_lm[] = {
+ 
+ static const struct dpu_dspp_cfg sm6375_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		&sm8150_dspp_sblk),
++		&sdm845_dspp_sblk),
+ };
+ 
+ static const struct dpu_pingpong_cfg sm6375_pp[] = {
+@@ -67,7 +67,11 @@ static const struct dpu_pingpong_cfg sm6375_pp[] = {
+ };
+ 
+ static const struct dpu_dsc_cfg sm6375_dsc[] = {
+-	DSC_BLK("dsc_0", DSC_0, 0x80000, BIT(DPU_DSC_OUTPUT_CTRL)),
++	{
++		.name = "dsc_0", .id = DSC_0,
++		.base = 0x80000, .len = 0x140,
++		.features = BIT(DPU_DSC_OUTPUT_CTRL),
++	},
+ };
+ 
+ static const struct dpu_intf_cfg sm6375_intf[] = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
+index 8da424eaee6a2..66b3d299ffcf7 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
+@@ -117,13 +117,13 @@ static const struct dpu_lm_cfg sm8350_lm[] = {
+ 
+ static const struct dpu_dspp_cfg sm8350_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_1", DSPP_1, 0x56000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_2", DSPP_2, 0x58000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_3", DSPP_3, 0x5a000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ };
+ 
+ static const struct dpu_pingpong_cfg sm8350_pp[] = {
+@@ -159,10 +159,27 @@ static const struct dpu_merge_3d_cfg sm8350_merge_3d[] = {
+  * its own different sub block address.
+  */
+ static const struct dpu_dsc_cfg sm8350_dsc[] = {
+-	DSC_BLK_1_2("dce_0_0", DSC_0, 0x80000, 0x29c, 0, dsc_sblk_0),
+-	DSC_BLK_1_2("dce_0_1", DSC_1, 0x80000, 0x29c, 0, dsc_sblk_1),
+-	DSC_BLK_1_2("dce_1_0", DSC_2, 0x81000, 0x29c, BIT(DPU_DSC_NATIVE_42x_EN), dsc_sblk_0),
+-	DSC_BLK_1_2("dce_1_1", DSC_3, 0x81000, 0x29c, BIT(DPU_DSC_NATIVE_42x_EN), dsc_sblk_1),
++	{
++		.name = "dce_0_0", .id = DSC_0,
++		.base = 0x80000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2),
++		.sblk = &dsc_sblk_0,
++	}, {
++		.name = "dce_0_1", .id = DSC_1,
++		.base = 0x80000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2),
++		.sblk = &dsc_sblk_1,
++	}, {
++		.name = "dce_1_0", .id = DSC_2,
++		.base = 0x81000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2) | BIT(DPU_DSC_NATIVE_42x_EN),
++		.sblk = &dsc_sblk_0,
++	}, {
++		.name = "dce_1_1", .id = DSC_3,
++		.base = 0x81000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2) | BIT(DPU_DSC_NATIVE_42x_EN),
++		.sblk = &dsc_sblk_1,
++	},
+ };
+ 
+ static const struct dpu_intf_cfg sm8350_intf[] = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
+index 900fee410e113..f06ed9a73b071 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
+@@ -84,7 +84,7 @@ static const struct dpu_lm_cfg sc7280_lm[] = {
+ 
+ static const struct dpu_dspp_cfg sc7280_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ };
+ 
+ static const struct dpu_pingpong_cfg sc7280_pp[] = {
+@@ -104,7 +104,12 @@ static const struct dpu_pingpong_cfg sc7280_pp[] = {
+ 
+ /* NOTE: sc7280 only has one DSC hard slice encoder */
+ static const struct dpu_dsc_cfg sc7280_dsc[] = {
+-	DSC_BLK_1_2("dce_0_0", DSC_0, 0x80000, 0x29c, BIT(DPU_DSC_NATIVE_42x_EN), dsc_sblk_0),
++	{
++		.name = "dce_0_0", .id = DSC_0,
++		.base = 0x80000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2) | BIT(DPU_DSC_NATIVE_42x_EN),
++		.sblk = &dsc_sblk_0,
++	},
+ };
+ 
+ static const struct dpu_wb_cfg sc7280_wb[] = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h
+index f6ce6b090f718..ac71cc62f605a 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h
+@@ -112,13 +112,13 @@ static const struct dpu_lm_cfg sc8280xp_lm[] = {
+ 
+ static const struct dpu_dspp_cfg sc8280xp_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_1", DSPP_1, 0x56000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_2", DSPP_2, 0x58000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_3", DSPP_3, 0x5a000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ };
+ 
+ static const struct dpu_pingpong_cfg sc8280xp_pp[] = {
+@@ -148,12 +148,37 @@ static const struct dpu_merge_3d_cfg sc8280xp_merge_3d[] = {
+  * its own different sub block address.
+  */
+ static const struct dpu_dsc_cfg sc8280xp_dsc[] = {
+-	DSC_BLK_1_2("dce_0_0", DSC_0, 0x80000, 0x29c, 0, dsc_sblk_0),
+-	DSC_BLK_1_2("dce_0_1", DSC_1, 0x80000, 0x29c, 0, dsc_sblk_1),
+-	DSC_BLK_1_2("dce_1_0", DSC_2, 0x81000, 0x29c, BIT(DPU_DSC_NATIVE_42x_EN), dsc_sblk_0),
+-	DSC_BLK_1_2("dce_1_1", DSC_3, 0x81000, 0x29c, BIT(DPU_DSC_NATIVE_42x_EN), dsc_sblk_1),
+-	DSC_BLK_1_2("dce_2_0", DSC_4, 0x82000, 0x29c, 0, dsc_sblk_0),
+-	DSC_BLK_1_2("dce_2_1", DSC_5, 0x82000, 0x29c, 0, dsc_sblk_1),
++	{
++		.name = "dce_0_0", .id = DSC_0,
++		.base = 0x80000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2),
++		.sblk = &dsc_sblk_0,
++	}, {
++		.name = "dce_0_1", .id = DSC_1,
++		.base = 0x80000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2),
++		.sblk = &dsc_sblk_1,
++	}, {
++		.name = "dce_1_0", .id = DSC_2,
++		.base = 0x81000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2) | BIT(DPU_DSC_NATIVE_42x_EN),
++		.sblk = &dsc_sblk_0,
++	}, {
++		.name = "dce_1_1", .id = DSC_3,
++		.base = 0x81000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2) | BIT(DPU_DSC_NATIVE_42x_EN),
++		.sblk = &dsc_sblk_1,
++	}, {
++		.name = "dce_2_0", .id = DSC_4,
++		.base = 0x82000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2),
++		.sblk = &dsc_sblk_0,
++	}, {
++		.name = "dce_2_1", .id = DSC_5,
++		.base = 0x82000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2),
++		.sblk = &dsc_sblk_1,
++	},
+ };
+ 
+ /* TODO: INTF 3, 8 and 7 are used for MST, marked as INTF_NONE for now */
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
+index 8d13c369213c0..d7407d471a31e 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
+@@ -118,13 +118,13 @@ static const struct dpu_lm_cfg sm8450_lm[] = {
+ 
+ static const struct dpu_dspp_cfg sm8450_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_1", DSPP_1, 0x56000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_2", DSPP_2, 0x58000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_3", DSPP_3, 0x5a000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ };
+ 
+ static const struct dpu_pingpong_cfg sm8450_pp[] = {
+@@ -167,10 +167,27 @@ static const struct dpu_merge_3d_cfg sm8450_merge_3d[] = {
+  * its own different sub block address.
+  */
+ static const struct dpu_dsc_cfg sm8450_dsc[] = {
+-	DSC_BLK_1_2("dce_0_0", DSC_0, 0x80000, 0x29c, 0, dsc_sblk_0),
+-	DSC_BLK_1_2("dce_0_1", DSC_1, 0x80000, 0x29c, 0, dsc_sblk_1),
+-	DSC_BLK_1_2("dce_1_0", DSC_2, 0x81000, 0x29c, BIT(DPU_DSC_NATIVE_42x_EN), dsc_sblk_0),
+-	DSC_BLK_1_2("dce_1_1", DSC_3, 0x81000, 0x29c, BIT(DPU_DSC_NATIVE_42x_EN), dsc_sblk_1),
++	{
++		.name = "dce_0_0", .id = DSC_0,
++		.base = 0x80000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2),
++		.sblk = &dsc_sblk_0,
++	}, {
++		.name = "dce_0_1", .id = DSC_1,
++		.base = 0x80000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2),
++		.sblk = &dsc_sblk_1,
++	}, {
++		.name = "dce_1_0", .id = DSC_2,
++		.base = 0x81000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2) | BIT(DPU_DSC_NATIVE_42x_EN),
++		.sblk = &dsc_sblk_0,
++	}, {
++		.name = "dce_1_1", .id = DSC_3,
++		.base = 0x81000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2) | BIT(DPU_DSC_NATIVE_42x_EN),
++		.sblk = &dsc_sblk_1,
++	},
+ };
+ 
+ static const struct dpu_intf_cfg sm8450_intf[] = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h
+index f17b9a7fee851..d51c2f8acba0a 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h
+@@ -123,13 +123,13 @@ static const struct dpu_lm_cfg sm8550_lm[] = {
+ 
+ static const struct dpu_dspp_cfg sm8550_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_1", DSPP_1, 0x56000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_2", DSPP_2, 0x58000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ 	DSPP_BLK("dspp_3", DSPP_3, 0x5a000, DSPP_SC7180_MASK,
+-		 &sm8150_dspp_sblk),
++		 &sdm845_dspp_sblk),
+ };
+ static const struct dpu_pingpong_cfg sm8550_pp[] = {
+ 	PP_BLK_DITHER("pingpong_0", PINGPONG_0, 0x69000, MERGE_3D_0, sc7280_pp_sblk,
+@@ -171,10 +171,27 @@ static const struct dpu_merge_3d_cfg sm8550_merge_3d[] = {
+  * its own different sub block address.
+  */
+ static const struct dpu_dsc_cfg sm8550_dsc[] = {
+-	DSC_BLK_1_2("dce_0_0", DSC_0, 0x80000, 0x29c, 0, dsc_sblk_0),
+-	DSC_BLK_1_2("dce_0_1", DSC_1, 0x80000, 0x29c, 0, dsc_sblk_1),
+-	DSC_BLK_1_2("dce_1_0", DSC_2, 0x81000, 0x29c, BIT(DPU_DSC_NATIVE_42x_EN), dsc_sblk_0),
+-	DSC_BLK_1_2("dce_1_1", DSC_3, 0x81000, 0x29c, BIT(DPU_DSC_NATIVE_42x_EN), dsc_sblk_1),
++	{
++		.name = "dce_0_0", .id = DSC_0,
++		.base = 0x80000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2),
++		.sblk = &dsc_sblk_0,
++	}, {
++		.name = "dce_0_1", .id = DSC_1,
++		.base = 0x80000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2),
++		.sblk = &dsc_sblk_1,
++	}, {
++		.name = "dce_1_0", .id = DSC_2,
++		.base = 0x81000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2) | BIT(DPU_DSC_NATIVE_42x_EN),
++		.sblk = &dsc_sblk_0,
++	}, {
++		.name = "dce_1_1", .id = DSC_3,
++		.base = 0x81000, .len = 0x4,
++		.features = BIT(DPU_DSC_HW_REV_1_2) | BIT(DPU_DSC_NATIVE_42x_EN),
++		.sblk = &dsc_sblk_1,
++	},
+ };
+ 
+ static const struct dpu_intf_cfg sm8550_intf[] = {
+@@ -245,8 +262,8 @@ const struct dpu_mdss_cfg dpu_sm8550_cfg = {
+ 	.merge_3d = sm8550_merge_3d,
+ 	.intf_count = ARRAY_SIZE(sm8550_intf),
+ 	.intf = sm8550_intf,
+-	.vbif_count = ARRAY_SIZE(sdm845_vbif),
+-	.vbif = sdm845_vbif,
++	.vbif_count = ARRAY_SIZE(sm8550_vbif),
++	.vbif = sm8550_vbif,
+ 	.perf = &sm8550_perf_data,
+ 	.mdss_irqs = BIT(MDP_SSPP_TOP0_INTR) | \
+ 		     BIT(MDP_SSPP_TOP0_INTR2) | \
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+index a466ff70a4d62..78037a697633b 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+@@ -446,7 +446,8 @@ static int dpu_encoder_phys_wb_wait_for_commit_done(
+ 	wait_info.atomic_cnt = &phys_enc->pending_kickoff_cnt;
+ 	wait_info.timeout_ms = KICKOFF_TIMEOUT_MS;
+ 
+-	ret = dpu_encoder_helper_wait_for_irq(phys_enc, INTR_IDX_WB_DONE,
++	ret = dpu_encoder_helper_wait_for_irq(phys_enc,
++			phys_enc->irq[INTR_IDX_WB_DONE],
+ 			dpu_encoder_phys_wb_done_irq, &wait_info);
+ 	if (ret == -ETIMEDOUT)
+ 		_dpu_encoder_phys_wb_handle_wbdone_timeout(phys_enc);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+index 0de507d4d7b7a..721c18cf9b1eb 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+@@ -444,12 +444,12 @@ static const struct dpu_lm_sub_blks qcm2290_lm_sblk = {
+  * DSPP sub blocks config
+  *************************************************************/
+ static const struct dpu_dspp_sub_blks msm8998_dspp_sblk = {
+-	.pcc = {.id = DPU_DSPP_PCC, .base = 0x1700,
++	.pcc = {.name = "pcc", .id = DPU_DSPP_PCC, .base = 0x1700,
+ 		.len = 0x90, .version = 0x10007},
+ };
+ 
+-static const struct dpu_dspp_sub_blks sm8150_dspp_sblk = {
+-	.pcc = {.id = DPU_DSPP_PCC, .base = 0x1700,
++static const struct dpu_dspp_sub_blks sdm845_dspp_sblk = {
++	.pcc = {.name = "pcc", .id = DPU_DSPP_PCC, .base = 0x1700,
+ 		.len = 0x90, .version = 0x40000},
+ };
+ 
+@@ -465,19 +465,19 @@ static const struct dpu_dspp_sub_blks sm8150_dspp_sblk = {
+  * PINGPONG sub blocks config
+  *************************************************************/
+ static const struct dpu_pingpong_sub_blks sdm845_pp_sblk_te = {
+-	.te2 = {.id = DPU_PINGPONG_TE2, .base = 0x2000, .len = 0x0,
++	.te2 = {.name = "te2", .id = DPU_PINGPONG_TE2, .base = 0x2000, .len = 0x0,
+ 		.version = 0x1},
+-	.dither = {.id = DPU_PINGPONG_DITHER, .base = 0x30e0,
++	.dither = {.name = "dither", .id = DPU_PINGPONG_DITHER, .base = 0x30e0,
+ 		.len = 0x20, .version = 0x10000},
+ };
+ 
+ static const struct dpu_pingpong_sub_blks sdm845_pp_sblk = {
+-	.dither = {.id = DPU_PINGPONG_DITHER, .base = 0x30e0,
++	.dither = {.name = "dither", .id = DPU_PINGPONG_DITHER, .base = 0x30e0,
+ 		.len = 0x20, .version = 0x10000},
+ };
+ 
+ static const struct dpu_pingpong_sub_blks sc7280_pp_sblk = {
+-	.dither = {.id = DPU_PINGPONG_DITHER, .base = 0xe0,
++	.dither = {.name = "dither", .id = DPU_PINGPONG_DITHER, .base = 0xe0,
+ 	.len = 0x20, .version = 0x20000},
+ };
+ 
+@@ -517,30 +517,15 @@ static const struct dpu_pingpong_sub_blks sc7280_pp_sblk = {
+  * DSC sub blocks config
+  *************************************************************/
+ static const struct dpu_dsc_sub_blks dsc_sblk_0 = {
+-	.enc = {.base = 0x100, .len = 0x100},
+-	.ctl = {.base = 0xF00, .len = 0x10},
++	.enc = {.name = "enc", .base = 0x100, .len = 0x9c},
++	.ctl = {.name = "ctl", .base = 0xF00, .len = 0x10},
+ };
+ 
+ static const struct dpu_dsc_sub_blks dsc_sblk_1 = {
+-	.enc = {.base = 0x200, .len = 0x100},
+-	.ctl = {.base = 0xF80, .len = 0x10},
++	.enc = {.name = "enc", .base = 0x200, .len = 0x9c},
++	.ctl = {.name = "ctl", .base = 0xF80, .len = 0x10},
+ };
+ 
+-#define DSC_BLK(_name, _id, _base, _features) \
+-	{\
+-	.name = _name, .id = _id, \
+-	.base = _base, .len = 0x140, \
+-	.features = _features, \
+-	}
+-
+-#define DSC_BLK_1_2(_name, _id, _base, _len, _features, _sblk) \
+-	{\
+-	.name = _name, .id = _id, \
+-	.base = _base, .len = _len, \
+-	.features = BIT(DPU_DSC_HW_REV_1_2) | _features, \
+-	.sblk = &_sblk, \
+-	}
+-
+ /*************************************************************
+  * INTF sub blocks config
+  *************************************************************/
+@@ -663,6 +648,26 @@ static const struct dpu_vbif_cfg sdm845_vbif[] = {
+ 	},
+ };
+ 
++static const struct dpu_vbif_cfg sm8550_vbif[] = {
++	{
++	.name = "vbif_rt", .id = VBIF_RT,
++	.base = 0, .len = 0x1040,
++	.features = BIT(DPU_VBIF_QOS_REMAP),
++	.xin_halt_timeout = 0x4000,
++	.qos_rp_remap_size = 0x40,
++	.qos_rt_tbl = {
++		.npriority_lvl = ARRAY_SIZE(sdm845_rt_pri_lvl),
++		.priority_lvl = sdm845_rt_pri_lvl,
++		},
++	.qos_nrt_tbl = {
++		.npriority_lvl = ARRAY_SIZE(sdm845_nrt_pri_lvl),
++		.priority_lvl = sdm845_nrt_pri_lvl,
++		},
++	.memtype_count = 16,
++	.memtype = {3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3},
++	},
++};
++
+ /*************************************************************
+  * PERF data config
+  *************************************************************/
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+index bd2c4ac456017..0d5ff03cb0910 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+@@ -130,8 +130,7 @@ static void mdp5_plane_destroy_state(struct drm_plane *plane,
+ {
+ 	struct mdp5_plane_state *pstate = to_mdp5_plane_state(state);
+ 
+-	if (state->fb)
+-		drm_framebuffer_put(state->fb);
++	__drm_atomic_helper_plane_destroy_state(state);
+ 
+ 	kfree(pstate);
+ }
+diff --git a/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c b/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
+index acfe1b31e0792..add72bbc28b17 100644
+--- a/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
++++ b/drivers/gpu/drm/msm/disp/msm_disp_snapshot_util.c
+@@ -192,5 +192,5 @@ void msm_disp_snapshot_add_block(struct msm_disp_state *disp_state, u32 len,
+ 	new_blk->base_addr = base_addr;
+ 
+ 	msm_disp_state_dump_regs(&new_blk->state, new_blk->size, base_addr);
+-	list_add(&new_blk->node, &disp_state->blocks);
++	list_add_tail(&new_blk->node, &disp_state->blocks);
+ }
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index b38d0e95cd542..03196fbfa4d79 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -1189,7 +1189,9 @@ static const struct panel_desc auo_t215hvn01 = {
+ 	.delay = {
+ 		.disable = 5,
+ 		.unprepare = 1000,
+-	}
++	},
++	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
++	.connector_type = DRM_MODE_CONNECTOR_LVDS,
+ };
+ 
+ static const struct drm_display_mode avic_tm070ddh03_mode = {
+diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
+index 5819737c21c67..a6f3c811ceb8e 100644
+--- a/drivers/gpu/drm/radeon/cik.c
++++ b/drivers/gpu/drm/radeon/cik.c
+@@ -9534,17 +9534,8 @@ static void cik_pcie_gen3_enable(struct radeon_device *rdev)
+ 			u16 bridge_cfg2, gpu_cfg2;
+ 			u32 max_lw, current_lw, tmp;
+ 
+-			pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-						  &bridge_cfg);
+-			pcie_capability_read_word(rdev->pdev, PCI_EXP_LNKCTL,
+-						  &gpu_cfg);
+-
+-			tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16);
+-
+-			tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(rdev->pdev, PCI_EXP_LNKCTL,
+-						   tmp16);
++			pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
++			pcie_capability_set_word(rdev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
+ 
+ 			tmp = RREG32_PCIE_PORT(PCIE_LC_STATUS1);
+ 			max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT;
+@@ -9591,21 +9582,14 @@ static void cik_pcie_gen3_enable(struct radeon_device *rdev)
+ 				msleep(100);
+ 
+ 				/* linkctl */
+-				pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(root, PCI_EXP_LNKCTL,
+-							   tmp16);
+-
+-				pcie_capability_read_word(rdev->pdev,
+-							  PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(rdev->pdev,
+-							   PCI_EXP_LNKCTL,
+-							   tmp16);
++				pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   bridge_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
++				pcie_capability_clear_and_set_word(rdev->pdev, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   gpu_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
+ 
+ 				/* linkctl2 */
+ 				pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
+diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
+index 8d5e4b25609d5..a91012447b56e 100644
+--- a/drivers/gpu/drm/radeon/si.c
++++ b/drivers/gpu/drm/radeon/si.c
+@@ -7131,17 +7131,8 @@ static void si_pcie_gen3_enable(struct radeon_device *rdev)
+ 			u16 bridge_cfg2, gpu_cfg2;
+ 			u32 max_lw, current_lw, tmp;
+ 
+-			pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-						  &bridge_cfg);
+-			pcie_capability_read_word(rdev->pdev, PCI_EXP_LNKCTL,
+-						  &gpu_cfg);
+-
+-			tmp16 = bridge_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(root, PCI_EXP_LNKCTL, tmp16);
+-
+-			tmp16 = gpu_cfg | PCI_EXP_LNKCTL_HAWD;
+-			pcie_capability_write_word(rdev->pdev, PCI_EXP_LNKCTL,
+-						   tmp16);
++			pcie_capability_set_word(root, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
++			pcie_capability_set_word(rdev->pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_HAWD);
+ 
+ 			tmp = RREG32_PCIE(PCIE_LC_STATUS1);
+ 			max_lw = (tmp & LC_DETECTED_LINK_WIDTH_MASK) >> LC_DETECTED_LINK_WIDTH_SHIFT;
+@@ -7188,22 +7179,14 @@ static void si_pcie_gen3_enable(struct radeon_device *rdev)
+ 				msleep(100);
+ 
+ 				/* linkctl */
+-				pcie_capability_read_word(root, PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (bridge_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(root,
+-							   PCI_EXP_LNKCTL,
+-							   tmp16);
+-
+-				pcie_capability_read_word(rdev->pdev,
+-							  PCI_EXP_LNKCTL,
+-							  &tmp16);
+-				tmp16 &= ~PCI_EXP_LNKCTL_HAWD;
+-				tmp16 |= (gpu_cfg & PCI_EXP_LNKCTL_HAWD);
+-				pcie_capability_write_word(rdev->pdev,
+-							   PCI_EXP_LNKCTL,
+-							   tmp16);
++				pcie_capability_clear_and_set_word(root, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   bridge_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
++				pcie_capability_clear_and_set_word(rdev->pdev, PCI_EXP_LNKCTL,
++								   PCI_EXP_LNKCTL_HAWD,
++								   gpu_cfg &
++								   PCI_EXP_LNKCTL_HAWD);
+ 
+ 				/* linkctl2 */
+ 				pcie_capability_read_word(root, PCI_EXP_LNKCTL2,
+diff --git a/drivers/gpu/drm/tegra/dpaux.c b/drivers/gpu/drm/tegra/dpaux.c
+index 4d2677dcd8315..68ded2e34e1cf 100644
+--- a/drivers/gpu/drm/tegra/dpaux.c
++++ b/drivers/gpu/drm/tegra/dpaux.c
+@@ -468,7 +468,7 @@ static int tegra_dpaux_probe(struct platform_device *pdev)
+ 
+ 	dpaux->irq = platform_get_irq(pdev, 0);
+ 	if (dpaux->irq < 0)
+-		return -ENXIO;
++		return dpaux->irq;
+ 
+ 	if (!pdev->dev.pm_domain) {
+ 		dpaux->rst = devm_reset_control_get(&pdev->dev, "dpaux");
+diff --git a/drivers/gpu/drm/tiny/repaper.c b/drivers/gpu/drm/tiny/repaper.c
+index c2677d081a7b6..13ae148f59b9b 100644
+--- a/drivers/gpu/drm/tiny/repaper.c
++++ b/drivers/gpu/drm/tiny/repaper.c
+@@ -533,7 +533,7 @@ static int repaper_fb_dirty(struct drm_framebuffer *fb)
+ 	DRM_DEBUG("Flushing [FB:%d] st=%ums\n", fb->base.id,
+ 		  epd->factored_stage_time);
+ 
+-	buf = kmalloc_array(fb->width, fb->height, GFP_KERNEL);
++	buf = kmalloc(fb->width * fb->height / 8, GFP_KERNEL);
+ 	if (!buf) {
+ 		ret = -ENOMEM;
+ 		goto out_exit;
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c
+index bab862484d429..068413be65275 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c
+@@ -227,7 +227,9 @@ static int zynqmp_dpsub_probe(struct platform_device *pdev)
+ 	dpsub->dev = &pdev->dev;
+ 	platform_set_drvdata(pdev, dpsub);
+ 
+-	dma_set_mask(dpsub->dev, DMA_BIT_MASK(ZYNQMP_DISP_MAX_DMA_BIT));
++	ret = dma_set_mask(dpsub->dev, DMA_BIT_MASK(ZYNQMP_DISP_MAX_DMA_BIT));
++	if (ret)
++		return ret;
+ 
+ 	/* Try the reserved memory. Proceed if there's none. */
+ 	of_reserved_mem_device_init(&pdev->dev);
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index 851ee86eff32a..40a5645f8fe81 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -988,6 +988,7 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ 			return;
+ 
+ 		case 0x3c: /* Invert */
++			device->quirks &= ~HID_QUIRK_NOINVERT;
+ 			map_key_clear(BTN_TOOL_RUBBER);
+ 			break;
+ 
+@@ -1013,9 +1014,13 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ 		case 0x45: /* ERASER */
+ 			/*
+ 			 * This event is reported when eraser tip touches the surface.
+-			 * Actual eraser (BTN_TOOL_RUBBER) is set by Invert usage when
+-			 * tool gets in proximity.
++			 * Actual eraser (BTN_TOOL_RUBBER) is set and released either
++			 * by Invert if tool reports proximity or by Eraser directly.
+ 			 */
++			if (!test_bit(BTN_TOOL_RUBBER, input->keybit)) {
++				device->quirks |= HID_QUIRK_NOINVERT;
++				set_bit(BTN_TOOL_RUBBER, input->keybit);
++			}
+ 			map_key_clear(BTN_TOUCH);
+ 			break;
+ 
+@@ -1580,6 +1585,15 @@ void hidinput_hid_event(struct hid_device *hid, struct hid_field *field, struct
+ 		else if (report->tool != BTN_TOOL_RUBBER)
+ 			/* value is off, tool is not rubber, ignore */
+ 			return;
++		else if (*quirks & HID_QUIRK_NOINVERT &&
++			 !test_bit(BTN_TOUCH, input->key)) {
++			/*
++			 * There is no invert to release the tool, so let hid_input
++			 * send BTN_TOUCH with the scancode and release the tool afterwards.
++			 */
++			hid_report_release_tool(report, input, BTN_TOOL_RUBBER);
++			return;
++		}
+ 
+ 		/* let hid-input set BTN_TOUCH */
+ 		break;
+diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c
+index 62180414efccd..e6a8b6d8eab70 100644
+--- a/drivers/hid/hid-logitech-dj.c
++++ b/drivers/hid/hid-logitech-dj.c
+@@ -1285,6 +1285,9 @@ static int logi_dj_recv_switch_to_dj_mode(struct dj_receiver_dev *djrcv_dev,
+ 		 * 50 msec should give enough time to the receiver to be ready.
+ 		 */
+ 		msleep(50);
++
++		if (retval)
++			return retval;
+ 	}
+ 
+ 	/*
+@@ -1306,7 +1309,7 @@ static int logi_dj_recv_switch_to_dj_mode(struct dj_receiver_dev *djrcv_dev,
+ 	buf[5] = 0x09;
+ 	buf[6] = 0x00;
+ 
+-	hid_hw_raw_request(hdev, REPORT_ID_HIDPP_SHORT, buf,
++	retval = hid_hw_raw_request(hdev, REPORT_ID_HIDPP_SHORT, buf,
+ 			HIDPP_REPORT_SHORT_LENGTH, HID_OUTPUT_REPORT,
+ 			HID_REQ_SET_REPORT);
+ 
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 129b01be488d2..09ba2086c95ce 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -275,21 +275,22 @@ static int __hidpp_send_report(struct hid_device *hdev,
+ }
+ 
+ /*
+- * hidpp_send_message_sync() returns 0 in case of success, and something else
+- * in case of a failure.
+- * - If ' something else' is positive, that means that an error has been raised
+- *   by the protocol itself.
+- * - If ' something else' is negative, that means that we had a classic error
+- *   (-ENOMEM, -EPIPE, etc...)
++ * Effectively send the message to the device, waiting for its answer.
++ *
++ * Must be called with hidpp->send_mutex locked
++ *
++ * Same return protocol as hidpp_send_message_sync():
++ * - success on 0
++ * - negative error means transport error
++ * - positive value means protocol error
+  */
+-static int hidpp_send_message_sync(struct hidpp_device *hidpp,
++static int __do_hidpp_send_message_sync(struct hidpp_device *hidpp,
+ 	struct hidpp_report *message,
+ 	struct hidpp_report *response)
+ {
+-	int ret = -1;
+-	int max_retries = 3;
++	int ret;
+ 
+-	mutex_lock(&hidpp->send_mutex);
++	__must_hold(&hidpp->send_mutex);
+ 
+ 	hidpp->send_receive_buf = response;
+ 	hidpp->answer_available = false;
+@@ -300,47 +301,74 @@ static int hidpp_send_message_sync(struct hidpp_device *hidpp,
+ 	 */
+ 	*response = *message;
+ 
+-	for (; max_retries != 0 && ret; max_retries--) {
+-		ret = __hidpp_send_report(hidpp->hid_dev, message);
++	ret = __hidpp_send_report(hidpp->hid_dev, message);
++	if (ret) {
++		dbg_hid("__hidpp_send_report returned err: %d\n", ret);
++		memset(response, 0, sizeof(struct hidpp_report));
++		return ret;
++	}
+ 
+-		if (ret) {
+-			dbg_hid("__hidpp_send_report returned err: %d\n", ret);
+-			memset(response, 0, sizeof(struct hidpp_report));
+-			break;
+-		}
++	if (!wait_event_timeout(hidpp->wait, hidpp->answer_available,
++				5*HZ)) {
++		dbg_hid("%s:timeout waiting for response\n", __func__);
++		memset(response, 0, sizeof(struct hidpp_report));
++		return -ETIMEDOUT;
++	}
+ 
+-		if (!wait_event_timeout(hidpp->wait, hidpp->answer_available,
+-					5*HZ)) {
+-			dbg_hid("%s:timeout waiting for response\n", __func__);
+-			memset(response, 0, sizeof(struct hidpp_report));
+-			ret = -ETIMEDOUT;
+-			break;
+-		}
++	if (response->report_id == REPORT_ID_HIDPP_SHORT &&
++	    response->rap.sub_id == HIDPP_ERROR) {
++		ret = response->rap.params[1];
++		dbg_hid("%s:got hidpp error %02X\n", __func__, ret);
++		return ret;
++	}
+ 
+-		if (response->report_id == REPORT_ID_HIDPP_SHORT &&
+-		    response->rap.sub_id == HIDPP_ERROR) {
+-			ret = response->rap.params[1];
+-			dbg_hid("%s:got hidpp error %02X\n", __func__, ret);
++	if ((response->report_id == REPORT_ID_HIDPP_LONG ||
++	     response->report_id == REPORT_ID_HIDPP_VERY_LONG) &&
++	    response->fap.feature_index == HIDPP20_ERROR) {
++		ret = response->fap.params[1];
++		dbg_hid("%s:got hidpp 2.0 error %02X\n", __func__, ret);
++		return ret;
++	}
++
++	return 0;
++}
++
++/*
++ * hidpp_send_message_sync() returns 0 in case of success, and something else
++ * in case of a failure.
++ *
++ * See __do_hidpp_send_message_sync() for a detailed explanation of the returned
++ * value.
++ */
++static int hidpp_send_message_sync(struct hidpp_device *hidpp,
++	struct hidpp_report *message,
++	struct hidpp_report *response)
++{
++	int ret;
++	int max_retries = 3;
++
++	mutex_lock(&hidpp->send_mutex);
++
++	do {
++		ret = __do_hidpp_send_message_sync(hidpp, message, response);
++		if (ret != HIDPP20_ERROR_BUSY)
+ 			break;
+-		}
+ 
+-		if ((response->report_id == REPORT_ID_HIDPP_LONG ||
+-		     response->report_id == REPORT_ID_HIDPP_VERY_LONG) &&
+-		    response->fap.feature_index == HIDPP20_ERROR) {
+-			ret = response->fap.params[1];
+-			if (ret != HIDPP20_ERROR_BUSY) {
+-				dbg_hid("%s:got hidpp 2.0 error %02X\n", __func__, ret);
+-				break;
+-			}
+-			dbg_hid("%s:got busy hidpp 2.0 error %02X, retrying\n", __func__, ret);
+-		}
+-	}
++		dbg_hid("%s:got busy hidpp 2.0 error %02X, retrying\n", __func__, ret);
++	} while (--max_retries);
+ 
+ 	mutex_unlock(&hidpp->send_mutex);
+ 	return ret;
+ 
+ }
+ 
++/*
++ * hidpp_send_fap_command_sync() returns 0 in case of success, and something else
++ * in case of a failure.
++ *
++ * See __do_hidpp_send_message_sync() for a detailed explanation of the returned
++ * value.
++ */
+ static int hidpp_send_fap_command_sync(struct hidpp_device *hidpp,
+ 	u8 feat_index, u8 funcindex_clientid, u8 *params, int param_count,
+ 	struct hidpp_report *response)
+@@ -373,6 +401,13 @@ static int hidpp_send_fap_command_sync(struct hidpp_device *hidpp,
+ 	return ret;
+ }
+ 
++/*
++ * hidpp_send_rap_command_sync() returns 0 in case of success, and something else
++ * in case of a failure.
++ *
++ * See __do_hidpp_send_message_sync() for a detailed explanation of the returned
++ * value.
++ */
+ static int hidpp_send_rap_command_sync(struct hidpp_device *hidpp_dev,
+ 	u8 report_id, u8 sub_id, u8 reg_address, u8 *params, int param_count,
+ 	struct hidpp_report *response)
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index e31be0cb8b850..521b2ffb42449 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1594,7 +1594,6 @@ static void mt_post_parse(struct mt_device *td, struct mt_application *app)
+ static int mt_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ {
+ 	struct mt_device *td = hid_get_drvdata(hdev);
+-	char *name;
+ 	const char *suffix = NULL;
+ 	struct mt_report_data *rdata;
+ 	struct mt_application *mt_application = NULL;
+@@ -1645,15 +1644,9 @@ static int mt_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ 		break;
+ 	}
+ 
+-	if (suffix) {
+-		name = devm_kzalloc(&hi->input->dev,
+-				    strlen(hdev->name) + strlen(suffix) + 2,
+-				    GFP_KERNEL);
+-		if (name) {
+-			sprintf(name, "%s %s", hdev->name, suffix);
+-			hi->input->name = name;
+-		}
+-	}
++	if (suffix)
++		hi->input->name = devm_kasprintf(&hdev->dev, GFP_KERNEL,
++						 "%s %s", hdev->name, suffix);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/hid/hid-nvidia-shield.c b/drivers/hid/hid-nvidia-shield.c
+index a928ad2be62db..9c44974135079 100644
+--- a/drivers/hid/hid-nvidia-shield.c
++++ b/drivers/hid/hid-nvidia-shield.c
+@@ -164,7 +164,7 @@ static struct input_dev *shield_allocate_input_dev(struct hid_device *hdev,
+ 	idev->id.product = hdev->product;
+ 	idev->id.version = hdev->version;
+ 	idev->uniq = hdev->uniq;
+-	idev->name = devm_kasprintf(&idev->dev, GFP_KERNEL, "%s %s", hdev->name,
++	idev->name = devm_kasprintf(&hdev->dev, GFP_KERNEL, "%s %s", hdev->name,
+ 				    name_suffix);
+ 	if (!idev->name)
+ 		goto err_name;
+@@ -513,21 +513,22 @@ static struct shield_device *thunderstrike_create(struct hid_device *hdev)
+ 
+ 	hid_set_drvdata(hdev, shield_dev);
+ 
++	ts->haptics_dev = shield_haptics_create(shield_dev, thunderstrike_play_effect);
++	if (IS_ERR(ts->haptics_dev))
++		return ERR_CAST(ts->haptics_dev);
++
+ 	ret = thunderstrike_led_create(ts);
+ 	if (ret) {
+ 		hid_err(hdev, "Failed to create Thunderstrike LED instance\n");
+-		return ERR_PTR(ret);
+-	}
+-
+-	ts->haptics_dev = shield_haptics_create(shield_dev, thunderstrike_play_effect);
+-	if (IS_ERR(ts->haptics_dev))
+ 		goto err;
++	}
+ 
+ 	hid_info(hdev, "Registered Thunderstrike controller\n");
+ 	return shield_dev;
+ 
+ err:
+-	led_classdev_unregister(&ts->led_dev);
++	if (ts->haptics_dev)
++		input_unregister_device(ts->haptics_dev);
+ 	return ERR_CAST(ts->haptics_dev);
+ }
+ 
+diff --git a/drivers/hid/hid-uclogic-core.c b/drivers/hid/hid-uclogic-core.c
+index f67835f9ed4cc..ad74cbc9a0aa5 100644
+--- a/drivers/hid/hid-uclogic-core.c
++++ b/drivers/hid/hid-uclogic-core.c
+@@ -85,10 +85,8 @@ static int uclogic_input_configured(struct hid_device *hdev,
+ {
+ 	struct uclogic_drvdata *drvdata = hid_get_drvdata(hdev);
+ 	struct uclogic_params *params = &drvdata->params;
+-	char *name;
+ 	const char *suffix = NULL;
+ 	struct hid_field *field;
+-	size_t len;
+ 	size_t i;
+ 	const struct uclogic_params_frame *frame;
+ 
+@@ -146,14 +144,9 @@ static int uclogic_input_configured(struct hid_device *hdev,
+ 		}
+ 	}
+ 
+-	if (suffix) {
+-		len = strlen(hdev->name) + 2 + strlen(suffix);
+-		name = devm_kzalloc(&hi->input->dev, len, GFP_KERNEL);
+-		if (name) {
+-			snprintf(name, len, "%s %s", hdev->name, suffix);
+-			hi->input->name = name;
+-		}
+-	}
++	if (suffix)
++		hi->input->name = devm_kasprintf(&hdev->dev, GFP_KERNEL,
++						 "%s %s", hdev->name, suffix);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index 67f95a29aeca5..edbb38f6956b9 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -2287,7 +2287,8 @@ static int vmbus_acpi_add(struct platform_device *pdev)
+ 	 * Some ancestor of the vmbus acpi device (Gen1 or Gen2
+ 	 * firmware) is the VMOD that has the mmio ranges. Get that.
+ 	 */
+-	for (ancestor = acpi_dev_parent(device); ancestor;
++	for (ancestor = acpi_dev_parent(device);
++	     ancestor && ancestor->handle != ACPI_ROOT_OBJECT;
+ 	     ancestor = acpi_dev_parent(ancestor)) {
+ 		result = acpi_walk_resources(ancestor->handle, METHOD_NAME__CRS,
+ 					     vmbus_walk_resources, NULL);
+diff --git a/drivers/hwmon/asus-ec-sensors.c b/drivers/hwmon/asus-ec-sensors.c
+index f52a539eb33e9..51f9c2db403e7 100644
+--- a/drivers/hwmon/asus-ec-sensors.c
++++ b/drivers/hwmon/asus-ec-sensors.c
+@@ -340,7 +340,7 @@ static const struct ec_board_info board_info_crosshair_x670e_hero = {
+ 	.sensors = SENSOR_TEMP_CPU | SENSOR_TEMP_CPU_PACKAGE |
+ 		SENSOR_TEMP_MB | SENSOR_TEMP_VRM |
+ 		SENSOR_SET_TEMP_WATER,
+-	.mutex_path = ASUS_HW_ACCESS_MUTEX_RMTW_ASMX,
++	.mutex_path = ACPI_GLOBAL_LOCK_PSEUDO_PATH,
+ 	.family = family_amd_600_series,
+ };
+ 
+diff --git a/drivers/hwmon/tmp513.c b/drivers/hwmon/tmp513.c
+index bff10f4b56e19..13f0c08360638 100644
+--- a/drivers/hwmon/tmp513.c
++++ b/drivers/hwmon/tmp513.c
+@@ -434,7 +434,7 @@ static umode_t tmp51x_is_visible(const void *_data,
+ 
+ 	switch (type) {
+ 	case hwmon_temp:
+-		if (data->id == tmp512 && channel == 4)
++		if (data->id == tmp512 && channel == 3)
+ 			return 0;
+ 		switch (attr) {
+ 		case hwmon_temp_input:
+diff --git a/drivers/hwtracing/coresight/coresight-platform.c b/drivers/hwtracing/coresight/coresight-platform.c
+index 3e2e135cb8f6d..dbf508fdd8d16 100644
+--- a/drivers/hwtracing/coresight/coresight-platform.c
++++ b/drivers/hwtracing/coresight/coresight-platform.c
+@@ -494,19 +494,18 @@ static inline bool acpi_validate_dsd_graph(const union acpi_object *graph)
+ 
+ /* acpi_get_dsd_graph	- Find the _DSD Graph property for the given device. */
+ static const union acpi_object *
+-acpi_get_dsd_graph(struct acpi_device *adev)
++acpi_get_dsd_graph(struct acpi_device *adev, struct acpi_buffer *buf)
+ {
+ 	int i;
+-	struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
+ 	acpi_status status;
+ 	const union acpi_object *dsd;
+ 
+ 	status = acpi_evaluate_object_typed(adev->handle, "_DSD", NULL,
+-					    &buf, ACPI_TYPE_PACKAGE);
++					    buf, ACPI_TYPE_PACKAGE);
+ 	if (ACPI_FAILURE(status))
+ 		return NULL;
+ 
+-	dsd = buf.pointer;
++	dsd = buf->pointer;
+ 
+ 	/*
+ 	 * _DSD property consists tuples { Prop_UUID, Package() }
+@@ -557,12 +556,12 @@ acpi_validate_coresight_graph(const union acpi_object *cs_graph)
+  * returns NULL.
+  */
+ static const union acpi_object *
+-acpi_get_coresight_graph(struct acpi_device *adev)
++acpi_get_coresight_graph(struct acpi_device *adev, struct acpi_buffer *buf)
+ {
+ 	const union acpi_object *graph_list, *graph;
+ 	int i, nr_graphs;
+ 
+-	graph_list = acpi_get_dsd_graph(adev);
++	graph_list = acpi_get_dsd_graph(adev, buf);
+ 	if (!graph_list)
+ 		return graph_list;
+ 
+@@ -663,18 +662,24 @@ static int acpi_coresight_parse_graph(struct device *dev,
+ 				      struct acpi_device *adev,
+ 				      struct coresight_platform_data *pdata)
+ {
++	int ret = 0;
+ 	int i, nlinks;
+ 	const union acpi_object *graph;
+ 	struct coresight_connection conn, zero_conn = {};
+ 	struct coresight_connection *new_conn;
++	struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER, NULL };
+ 
+-	graph = acpi_get_coresight_graph(adev);
++	graph = acpi_get_coresight_graph(adev, &buf);
++	/*
++	 * There are no graph connections, which is fine for some components.
++	 * e.g., ETE
++	 */
+ 	if (!graph)
+-		return -ENOENT;
++		goto free;
+ 
+ 	nlinks = graph->package.elements[2].integer.value;
+ 	if (!nlinks)
+-		return 0;
++		goto free;
+ 
+ 	for (i = 0; i < nlinks; i++) {
+ 		const union acpi_object *link = &graph->package.elements[3 + i];
+@@ -682,17 +687,28 @@ static int acpi_coresight_parse_graph(struct device *dev,
+ 
+ 		conn = zero_conn;
+ 		dir = acpi_coresight_parse_link(adev, link, &conn);
+-		if (dir < 0)
+-			return dir;
++		if (dir < 0) {
++			ret = dir;
++			goto free;
++		}
+ 
+ 		if (dir == ACPI_CORESIGHT_LINK_MASTER) {
+ 			new_conn = coresight_add_out_conn(dev, pdata, &conn);
+-			if (IS_ERR(new_conn))
+-				return PTR_ERR(new_conn);
++			if (IS_ERR(new_conn)) {
++				ret = PTR_ERR(new_conn);
++				goto free;
++			}
+ 		}
+ 	}
+ 
+-	return 0;
++free:
++	/*
++	 * When ACPI fails to alloc a buffer, it will free the buffer
++	 * created via ACPI_ALLOCATE_BUFFER and set to NULL.
++	 * ACPI_FREE can handle NULL pointers, so free it directly.
++	 */
++	ACPI_FREE(buf.pointer);
++	return ret;
+ }
+ 
+ /*
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+index 79d8c64eac494..7406b65e2cdda 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
+@@ -452,7 +452,7 @@ static int tmc_set_etf_buffer(struct coresight_device *csdev,
+ 		return -EINVAL;
+ 
+ 	/* wrap head around to the amount of space we have */
+-	head = handle->head & ((buf->nr_pages << PAGE_SHIFT) - 1);
++	head = handle->head & (((unsigned long)buf->nr_pages << PAGE_SHIFT) - 1);
+ 
+ 	/* find the page to write to */
+ 	buf->cur = head / PAGE_SIZE;
+diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c
+index 766325de0e29b..66dc5f97a0098 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc-etr.c
++++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c
+@@ -45,7 +45,8 @@ struct etr_perf_buffer {
+ };
+ 
+ /* Convert the perf index to an offset within the ETR buffer */
+-#define PERF_IDX2OFF(idx, buf)	((idx) % ((buf)->nr_pages << PAGE_SHIFT))
++#define PERF_IDX2OFF(idx, buf)		\
++		((idx) % ((unsigned long)(buf)->nr_pages << PAGE_SHIFT))
+ 
+ /* Lower limit for ETR hardware buffer */
+ #define TMC_ETR_PERF_MIN_BUF_SIZE	SZ_1M
+@@ -1267,7 +1268,7 @@ alloc_etr_buf(struct tmc_drvdata *drvdata, struct perf_event *event,
+ 	 * than the size requested via sysfs.
+ 	 */
+ 	if ((nr_pages << PAGE_SHIFT) > drvdata->size) {
+-		etr_buf = tmc_alloc_etr_buf(drvdata, (nr_pages << PAGE_SHIFT),
++		etr_buf = tmc_alloc_etr_buf(drvdata, ((ssize_t)nr_pages << PAGE_SHIFT),
+ 					    0, node, NULL);
+ 		if (!IS_ERR(etr_buf))
+ 			goto done;
+diff --git a/drivers/hwtracing/coresight/coresight-tmc.h b/drivers/hwtracing/coresight/coresight-tmc.h
+index b97da39652d26..0ee48c5ba764d 100644
+--- a/drivers/hwtracing/coresight/coresight-tmc.h
++++ b/drivers/hwtracing/coresight/coresight-tmc.h
+@@ -325,7 +325,7 @@ ssize_t tmc_sg_table_get_data(struct tmc_sg_table *sg_table,
+ static inline unsigned long
+ tmc_sg_table_buf_size(struct tmc_sg_table *sg_table)
+ {
+-	return sg_table->data_pages.nr_pages << PAGE_SHIFT;
++	return (unsigned long)sg_table->data_pages.nr_pages << PAGE_SHIFT;
+ }
+ 
+ struct coresight_device *tmc_etr_get_catu_device(struct tmc_drvdata *drvdata);
+diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c
+index 7720619909d65..e20c1c6acc731 100644
+--- a/drivers/hwtracing/coresight/coresight-trbe.c
++++ b/drivers/hwtracing/coresight/coresight-trbe.c
+@@ -1225,6 +1225,16 @@ static void arm_trbe_enable_cpu(void *info)
+ 	enable_percpu_irq(drvdata->irq, IRQ_TYPE_NONE);
+ }
+ 
++static void arm_trbe_disable_cpu(void *info)
++{
++	struct trbe_drvdata *drvdata = info;
++	struct trbe_cpudata *cpudata = this_cpu_ptr(drvdata->cpudata);
++
++	disable_percpu_irq(drvdata->irq);
++	trbe_reset_local(cpudata);
++}
++
++
+ static void arm_trbe_register_coresight_cpu(struct trbe_drvdata *drvdata, int cpu)
+ {
+ 	struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu);
+@@ -1244,10 +1254,13 @@ static void arm_trbe_register_coresight_cpu(struct trbe_drvdata *drvdata, int cp
+ 	if (!desc.name)
+ 		goto cpu_clear;
+ 
++	desc.pdata = coresight_get_platform_data(dev);
++	if (IS_ERR(desc.pdata))
++		goto cpu_clear;
++
+ 	desc.type = CORESIGHT_DEV_TYPE_SINK;
+ 	desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_PERCPU_SYSMEM;
+ 	desc.ops = &arm_trbe_cs_ops;
+-	desc.pdata = dev_get_platdata(dev);
+ 	desc.groups = arm_trbe_groups;
+ 	desc.dev = dev;
+ 	trbe_csdev = coresight_register(&desc);
+@@ -1326,18 +1339,12 @@ cpu_clear:
+ 	cpumask_clear_cpu(cpu, &drvdata->supported_cpus);
+ }
+ 
+-static void arm_trbe_remove_coresight_cpu(void *info)
++static void arm_trbe_remove_coresight_cpu(struct trbe_drvdata *drvdata, int cpu)
+ {
+-	int cpu = smp_processor_id();
+-	struct trbe_drvdata *drvdata = info;
+-	struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu);
+ 	struct coresight_device *trbe_csdev = coresight_get_percpu_sink(cpu);
+ 
+-	disable_percpu_irq(drvdata->irq);
+-	trbe_reset_local(cpudata);
+ 	if (trbe_csdev) {
+ 		coresight_unregister(trbe_csdev);
+-		cpudata->drvdata = NULL;
+ 		coresight_set_percpu_sink(cpu, NULL);
+ 	}
+ }
+@@ -1366,8 +1373,10 @@ static int arm_trbe_remove_coresight(struct trbe_drvdata *drvdata)
+ {
+ 	int cpu;
+ 
+-	for_each_cpu(cpu, &drvdata->supported_cpus)
+-		smp_call_function_single(cpu, arm_trbe_remove_coresight_cpu, drvdata, 1);
++	for_each_cpu(cpu, &drvdata->supported_cpus) {
++		smp_call_function_single(cpu, arm_trbe_disable_cpu, drvdata, 1);
++		arm_trbe_remove_coresight_cpu(drvdata, cpu);
++	}
+ 	free_percpu(drvdata->cpudata);
+ 	return 0;
+ }
+@@ -1406,12 +1415,8 @@ static int arm_trbe_cpu_teardown(unsigned int cpu, struct hlist_node *node)
+ {
+ 	struct trbe_drvdata *drvdata = hlist_entry_safe(node, struct trbe_drvdata, hotplug_node);
+ 
+-	if (cpumask_test_cpu(cpu, &drvdata->supported_cpus)) {
+-		struct trbe_cpudata *cpudata = per_cpu_ptr(drvdata->cpudata, cpu);
+-
+-		disable_percpu_irq(drvdata->irq);
+-		trbe_reset_local(cpudata);
+-	}
++	if (cpumask_test_cpu(cpu, &drvdata->supported_cpus))
++		arm_trbe_disable_cpu(drvdata);
+ 	return 0;
+ }
+ 
+@@ -1479,7 +1484,6 @@ static void arm_trbe_remove_irq(struct trbe_drvdata *drvdata)
+ 
+ static int arm_trbe_device_probe(struct platform_device *pdev)
+ {
+-	struct coresight_platform_data *pdata;
+ 	struct trbe_drvdata *drvdata;
+ 	struct device *dev = &pdev->dev;
+ 	int ret;
+@@ -1494,12 +1498,7 @@ static int arm_trbe_device_probe(struct platform_device *pdev)
+ 	if (!drvdata)
+ 		return -ENOMEM;
+ 
+-	pdata = coresight_get_platform_data(dev);
+-	if (IS_ERR(pdata))
+-		return PTR_ERR(pdata);
+-
+ 	dev_set_drvdata(dev, drvdata);
+-	dev->platform_data = pdata;
+ 	drvdata->pdev = pdev;
+ 	ret = arm_trbe_probe_irq(pdev, drvdata);
+ 	if (ret)
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index 0d63b732ef0c8..2fefbe55c1675 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -789,6 +789,10 @@ static int svc_i3c_master_do_daa_locked(struct svc_i3c_master *master,
+ 				 */
+ 				break;
+ 			} else if (SVC_I3C_MSTATUS_NACKED(reg)) {
++				/* No I3C devices attached */
++				if (dev_nb == 0)
++					break;
++
+ 				/*
+ 				 * A slave device nacked the address, this is
+ 				 * allowed only once, DAA will be stopped and
+@@ -1263,11 +1267,17 @@ static int svc_i3c_master_send_ccc_cmd(struct i3c_master_controller *m,
+ {
+ 	struct svc_i3c_master *master = to_svc_i3c_master(m);
+ 	bool broadcast = cmd->id < 0x80;
++	int ret;
+ 
+ 	if (broadcast)
+-		return svc_i3c_master_send_bdcast_ccc_cmd(master, cmd);
++		ret = svc_i3c_master_send_bdcast_ccc_cmd(master, cmd);
+ 	else
+-		return svc_i3c_master_send_direct_ccc_cmd(master, cmd);
++		ret = svc_i3c_master_send_direct_ccc_cmd(master, cmd);
++
++	if (ret)
++		cmd->err = I3C_ERROR_M2;
++
++	return ret;
+ }
+ 
+ static int svc_i3c_master_priv_xfers(struct i3c_dev_desc *dev,
+diff --git a/drivers/iio/accel/adxl313_i2c.c b/drivers/iio/accel/adxl313_i2c.c
+index 524327ea36631..e0a860ab9e58f 100644
+--- a/drivers/iio/accel/adxl313_i2c.c
++++ b/drivers/iio/accel/adxl313_i2c.c
+@@ -40,8 +40,8 @@ static const struct regmap_config adxl31x_i2c_regmap_config[] = {
+ 
+ static const struct i2c_device_id adxl313_i2c_id[] = {
+ 	{ .name = "adxl312", .driver_data = (kernel_ulong_t)&adxl31x_chip_info[ADXL312] },
+-	{ .name = "adxl313", .driver_data = (kernel_ulong_t)&adxl31x_chip_info[ADXL312] },
+-	{ .name = "adxl314", .driver_data = (kernel_ulong_t)&adxl31x_chip_info[ADXL312] },
++	{ .name = "adxl313", .driver_data = (kernel_ulong_t)&adxl31x_chip_info[ADXL313] },
++	{ .name = "adxl314", .driver_data = (kernel_ulong_t)&adxl31x_chip_info[ADXL314] },
+ 	{ }
+ };
+ 
+diff --git a/drivers/infiniband/core/uverbs_std_types_counters.c b/drivers/infiniband/core/uverbs_std_types_counters.c
+index 999da9c798668..381aa57976417 100644
+--- a/drivers/infiniband/core/uverbs_std_types_counters.c
++++ b/drivers/infiniband/core/uverbs_std_types_counters.c
+@@ -107,6 +107,8 @@ static int UVERBS_HANDLER(UVERBS_METHOD_COUNTERS_READ)(
+ 		return ret;
+ 
+ 	uattr = uverbs_attr_get(attrs, UVERBS_ATTR_READ_COUNTERS_BUFF);
++	if (IS_ERR(uattr))
++		return PTR_ERR(uattr);
+ 	read_attr.ncounters = uattr->ptr_attr.len / sizeof(u64);
+ 	read_attr.counters_buff = uverbs_zalloc(
+ 		attrs, array_size(read_attr.ncounters, sizeof(u64)));
+diff --git a/drivers/infiniband/hw/bnxt_re/bnxt_re.h b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
+index ea81b2497511a..5b6d581eb5f41 100644
+--- a/drivers/infiniband/hw/bnxt_re/bnxt_re.h
++++ b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
+@@ -112,12 +112,32 @@ struct bnxt_re_gsi_context {
+ #define BNXT_RE_NQ_IDX			1
+ #define BNXT_RE_GEN_P5_MAX_VF		64
+ 
++struct bnxt_re_pacing {
++	u64 dbr_db_fifo_reg_off;
++	void *dbr_page;
++	u64 dbr_bar_addr;
++	u32 pacing_algo_th;
++	u32 do_pacing_save;
++	u32 dbq_pacing_time; /* ms */
++	u32 dbr_def_do_pacing;
++	bool dbr_pacing;
++};
++
++#define BNXT_RE_DBR_PACING_TIME 5 /* ms */
++#define BNXT_RE_PACING_ALGO_THRESHOLD 250 /* Entries in DB FIFO */
++#define BNXT_RE_PACING_ALARM_TH_MULTIPLE 2 /* Multiple of pacing algo threshold */
++/* Default do_pacing value when there is no congestion */
++#define BNXT_RE_DBR_DO_PACING_NO_CONGESTION 0x7F /* 1 in 512 probability */
++#define BNXT_RE_DB_FIFO_ROOM_MASK 0x1FFF8000
++#define BNXT_RE_MAX_FIFO_DEPTH 0x2c00
++#define BNXT_RE_DB_FIFO_ROOM_SHIFT 15
++#define BNXT_RE_GRC_FIFO_REG_BASE 0x2000
++
+ struct bnxt_re_dev {
+ 	struct ib_device		ibdev;
+ 	struct list_head		list;
+ 	unsigned long			flags;
+ #define BNXT_RE_FLAG_NETDEV_REGISTERED		0
+-#define BNXT_RE_FLAG_GOT_MSIX			2
+ #define BNXT_RE_FLAG_HAVE_L2_REF		3
+ #define BNXT_RE_FLAG_RCFW_CHANNEL_EN		4
+ #define BNXT_RE_FLAG_QOS_WORK_REG		5
+@@ -171,6 +191,7 @@ struct bnxt_re_dev {
+ 	atomic_t nq_alloc_cnt;
+ 	u32 is_virtfn;
+ 	u32 num_vfs;
++	struct bnxt_re_pacing pacing;
+ };
+ 
+ #define to_bnxt_re_dev(ptr, member)	\
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 63e98e2d35962..120e588fb13ba 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -432,9 +432,92 @@ int bnxt_re_hwrm_qcaps(struct bnxt_re_dev *rdev)
+ 		return rc;
+ 	cctx->modes.db_push = le32_to_cpu(resp.flags) & FUNC_QCAPS_RESP_FLAGS_WCB_PUSH_MODE;
+ 
++	cctx->modes.dbr_pacing =
++		le32_to_cpu(resp.flags_ext2) & FUNC_QCAPS_RESP_FLAGS_EXT2_DBR_PACING_EXT_SUPPORTED ?
++		true : false;
+ 	return 0;
+ }
+ 
++static int bnxt_re_hwrm_dbr_pacing_qcfg(struct bnxt_re_dev *rdev)
++{
++	struct hwrm_func_dbr_pacing_qcfg_output resp = {};
++	struct hwrm_func_dbr_pacing_qcfg_input req = {};
++	struct bnxt_en_dev *en_dev = rdev->en_dev;
++	struct bnxt_qplib_chip_ctx *cctx;
++	struct bnxt_fw_msg fw_msg = {};
++	int rc;
++
++	cctx = rdev->chip_ctx;
++	bnxt_re_init_hwrm_hdr((void *)&req, HWRM_FUNC_DBR_PACING_QCFG);
++	bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
++			    sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
++	rc = bnxt_send_msg(en_dev, &fw_msg);
++	if (rc)
++		return rc;
++
++	if ((le32_to_cpu(resp.dbr_stat_db_fifo_reg) &
++	    FUNC_DBR_PACING_QCFG_RESP_DBR_STAT_DB_FIFO_REG_ADDR_SPACE_MASK) ==
++		FUNC_DBR_PACING_QCFG_RESP_DBR_STAT_DB_FIFO_REG_ADDR_SPACE_GRC)
++		cctx->dbr_stat_db_fifo =
++			le32_to_cpu(resp.dbr_stat_db_fifo_reg) &
++			~FUNC_DBR_PACING_QCFG_RESP_DBR_STAT_DB_FIFO_REG_ADDR_SPACE_MASK;
++	return 0;
++}
++
++/* Update the pacing tunable parameters to the default values */
++static void bnxt_re_set_default_pacing_data(struct bnxt_re_dev *rdev)
++{
++	struct bnxt_qplib_db_pacing_data *pacing_data = rdev->qplib_res.pacing_data;
++
++	pacing_data->do_pacing = rdev->pacing.dbr_def_do_pacing;
++	pacing_data->pacing_th = rdev->pacing.pacing_algo_th;
++	pacing_data->alarm_th =
++		pacing_data->pacing_th * BNXT_RE_PACING_ALARM_TH_MULTIPLE;
++}
++
++static int bnxt_re_initialize_dbr_pacing(struct bnxt_re_dev *rdev)
++{
++	if (bnxt_re_hwrm_dbr_pacing_qcfg(rdev))
++		return -EIO;
++
++	/* Allocate a page for app use */
++	rdev->pacing.dbr_page = (void *)__get_free_page(GFP_KERNEL);
++	if (!rdev->pacing.dbr_page)
++		return -ENOMEM;
++
++	memset((u8 *)rdev->pacing.dbr_page, 0, PAGE_SIZE);
++	rdev->qplib_res.pacing_data = (struct bnxt_qplib_db_pacing_data *)rdev->pacing.dbr_page;
++
++	/* MAP HW window 2 for reading db fifo depth */
++	writel(rdev->chip_ctx->dbr_stat_db_fifo & BNXT_GRC_BASE_MASK,
++	       rdev->en_dev->bar0 + BNXT_GRCPF_REG_WINDOW_BASE_OUT + 4);
++	rdev->pacing.dbr_db_fifo_reg_off =
++		(rdev->chip_ctx->dbr_stat_db_fifo & BNXT_GRC_OFFSET_MASK) +
++		 BNXT_RE_GRC_FIFO_REG_BASE;
++	rdev->pacing.dbr_bar_addr =
++		pci_resource_start(rdev->qplib_res.pdev, 0) + rdev->pacing.dbr_db_fifo_reg_off;
++
++	rdev->pacing.pacing_algo_th = BNXT_RE_PACING_ALGO_THRESHOLD;
++	rdev->pacing.dbq_pacing_time = BNXT_RE_DBR_PACING_TIME;
++	rdev->pacing.dbr_def_do_pacing = BNXT_RE_DBR_DO_PACING_NO_CONGESTION;
++	rdev->pacing.do_pacing_save = rdev->pacing.dbr_def_do_pacing;
++	rdev->qplib_res.pacing_data->fifo_max_depth = BNXT_RE_MAX_FIFO_DEPTH;
++	rdev->qplib_res.pacing_data->fifo_room_mask = BNXT_RE_DB_FIFO_ROOM_MASK;
++	rdev->qplib_res.pacing_data->fifo_room_shift = BNXT_RE_DB_FIFO_ROOM_SHIFT;
++	rdev->qplib_res.pacing_data->grc_reg_offset = rdev->pacing.dbr_db_fifo_reg_off;
++	bnxt_re_set_default_pacing_data(rdev);
++	return 0;
++}
++
++static void bnxt_re_deinitialize_dbr_pacing(struct bnxt_re_dev *rdev)
++{
++	if (rdev->pacing.dbr_page)
++		free_page((u64)rdev->pacing.dbr_page);
++
++	rdev->pacing.dbr_page = NULL;
++	rdev->pacing.dbr_pacing = false;
++}
++
+ static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev,
+ 				 u16 fw_ring_id, int type)
+ {
+@@ -942,8 +1025,7 @@ static int bnxt_re_alloc_res(struct bnxt_re_dev *rdev)
+ 
+ 	/* Configure and allocate resources for qplib */
+ 	rdev->qplib_res.rcfw = &rdev->rcfw;
+-	rc = bnxt_qplib_get_dev_attr(&rdev->rcfw, &rdev->dev_attr,
+-				     rdev->is_virtfn);
++	rc = bnxt_qplib_get_dev_attr(&rdev->rcfw, &rdev->dev_attr);
+ 	if (rc)
+ 		goto fail;
+ 
+@@ -1214,8 +1296,11 @@ static void bnxt_re_dev_uninit(struct bnxt_re_dev *rdev)
+ 		bnxt_re_net_ring_free(rdev, rdev->rcfw.creq.ring_id, type);
+ 		bnxt_qplib_free_rcfw_channel(&rdev->rcfw);
+ 	}
+-	if (test_and_clear_bit(BNXT_RE_FLAG_GOT_MSIX, &rdev->flags))
+-		rdev->num_msix = 0;
++
++	rdev->num_msix = 0;
++
++	if (rdev->pacing.dbr_pacing)
++		bnxt_re_deinitialize_dbr_pacing(rdev);
+ 
+ 	bnxt_re_destroy_chip_ctx(rdev);
+ 	if (test_and_clear_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags))
+@@ -1271,7 +1356,6 @@ static int bnxt_re_dev_init(struct bnxt_re_dev *rdev, u8 wqe_mode)
+ 	ibdev_dbg(&rdev->ibdev, "Got %d MSI-X vectors\n",
+ 		  rdev->en_dev->ulp_tbl->msix_requested);
+ 	rdev->num_msix = rdev->en_dev->ulp_tbl->msix_requested;
+-	set_bit(BNXT_RE_FLAG_GOT_MSIX, &rdev->flags);
+ 
+ 	bnxt_re_query_hwrm_intf_version(rdev);
+ 
+@@ -1311,8 +1395,17 @@ static int bnxt_re_dev_init(struct bnxt_re_dev *rdev, u8 wqe_mode)
+ 		goto free_ring;
+ 	}
+ 
+-	rc = bnxt_qplib_get_dev_attr(&rdev->rcfw, &rdev->dev_attr,
+-				     rdev->is_virtfn);
++	if (bnxt_qplib_dbr_pacing_en(rdev->chip_ctx)) {
++		rc = bnxt_re_initialize_dbr_pacing(rdev);
++		if (!rc) {
++			rdev->pacing.dbr_pacing = true;
++		} else {
++			ibdev_err(&rdev->ibdev,
++				  "DBR pacing disabled with error : %d\n", rc);
++			rdev->pacing.dbr_pacing = false;
++		}
++	}
++	rc = bnxt_qplib_get_dev_attr(&rdev->rcfw, &rdev->dev_attr);
+ 	if (rc)
+ 		goto disable_rcfw;
+ 
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.h b/drivers/infiniband/hw/bnxt_re/qplib_res.h
+index d850a553821e3..57161d303c257 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.h
+@@ -48,6 +48,7 @@ extern const struct bnxt_qplib_gid bnxt_qplib_gid_zero;
+ struct bnxt_qplib_drv_modes {
+ 	u8	wqe_mode;
+ 	bool db_push;
++	bool dbr_pacing;
+ };
+ 
+ struct bnxt_qplib_chip_ctx {
+@@ -58,6 +59,17 @@ struct bnxt_qplib_chip_ctx {
+ 	u16	hwrm_cmd_max_timeout;
+ 	struct bnxt_qplib_drv_modes modes;
+ 	u64	hwrm_intf_ver;
++	u32     dbr_stat_db_fifo;
++};
++
++struct bnxt_qplib_db_pacing_data {
++	u32 do_pacing;
++	u32 pacing_th;
++	u32 alarm_th;
++	u32 fifo_max_depth;
++	u32 fifo_room_mask;
++	u32 fifo_room_shift;
++	u32 grc_reg_offset;
+ };
+ 
+ #define BNXT_QPLIB_DBR_PF_DB_OFFSET     0x10000
+@@ -271,6 +283,7 @@ struct bnxt_qplib_res {
+ 	struct mutex                    dpi_tbl_lock;
+ 	bool				prio;
+ 	bool                            is_vf;
++	struct bnxt_qplib_db_pacing_data *pacing_data;
+ };
+ 
+ static inline bool bnxt_qplib_is_chip_gen_p5(struct bnxt_qplib_chip_ctx *cctx)
+@@ -467,4 +480,10 @@ static inline bool _is_ext_stats_supported(u16 dev_cap_flags)
+ 	return dev_cap_flags &
+ 		CREQ_QUERY_FUNC_RESP_SB_EXT_STATS;
+ }
++
++static inline u8 bnxt_qplib_dbr_pacing_en(struct bnxt_qplib_chip_ctx *cctx)
++{
++	return cctx->modes.dbr_pacing;
++}
++
+ #endif /* __BNXT_QPLIB_RES_H__ */
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+index ab45f9d4bb02f..7a244fd506e2a 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+@@ -89,7 +89,7 @@ static void bnxt_qplib_query_version(struct bnxt_qplib_rcfw *rcfw,
+ }
+ 
+ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
+-			    struct bnxt_qplib_dev_attr *attr, bool vf)
++			    struct bnxt_qplib_dev_attr *attr)
+ {
+ 	struct creq_query_func_resp resp = {};
+ 	struct bnxt_qplib_cmdqmsg msg = {};
+@@ -121,9 +121,8 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
+ 
+ 	/* Extract the context from the side buffer */
+ 	attr->max_qp = le32_to_cpu(sb->max_qp);
+-	/* max_qp value reported by FW for PF doesn't include the QP1 for PF */
+-	if (!vf)
+-		attr->max_qp += 1;
++	/* max_qp value reported by FW doesn't include the QP1 */
++	attr->max_qp += 1;
+ 	attr->max_qp_rd_atom =
+ 		sb->max_qp_rd_atom > BNXT_QPLIB_MAX_OUT_RD_ATOM ?
+ 		BNXT_QPLIB_MAX_OUT_RD_ATOM : sb->max_qp_rd_atom;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.h b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
+index 264ef3cedc45b..d33c78b96217a 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
+@@ -322,7 +322,7 @@ int bnxt_qplib_update_sgid(struct bnxt_qplib_sgid_tbl *sgid_tbl,
+ 			   struct bnxt_qplib_gid *gid, u16 gid_idx,
+ 			   const u8 *smac);
+ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
+-			    struct bnxt_qplib_dev_attr *attr, bool vf);
++			    struct bnxt_qplib_dev_attr *attr);
+ int bnxt_qplib_set_func_resources(struct bnxt_qplib_res *res,
+ 				  struct bnxt_qplib_rcfw *rcfw,
+ 				  struct bnxt_qplib_ctx *ctx);
+diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
+index 2a195c4b0f17d..3538d59521e41 100644
+--- a/drivers/infiniband/hw/efa/efa_verbs.c
++++ b/drivers/infiniband/hw/efa/efa_verbs.c
+@@ -449,12 +449,12 @@ int efa_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+ 
+ 	ibdev_dbg(&dev->ibdev, "Destroy qp[%u]\n", ibqp->qp_num);
+ 
+-	efa_qp_user_mmap_entries_remove(qp);
+-
+ 	err = efa_destroy_qp_handle(dev, qp->qp_handle);
+ 	if (err)
+ 		return err;
+ 
++	efa_qp_user_mmap_entries_remove(qp);
++
+ 	if (qp->rq_cpu_addr) {
+ 		ibdev_dbg(&dev->ibdev,
+ 			  "qp->cpu_addr[0x%p] freed: size[%lu], dma[%pad]\n",
+@@ -1013,8 +1013,8 @@ int efa_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
+ 		  "Destroy cq[%d] virt[0x%p] freed: size[%lu], dma[%pad]\n",
+ 		  cq->cq_idx, cq->cpu_addr, cq->size, &cq->dma_addr);
+ 
+-	efa_cq_user_mmap_entries_remove(cq);
+ 	efa_destroy_cq_idx(dev, cq->cq_idx);
++	efa_cq_user_mmap_entries_remove(cq);
+ 	if (cq->eq) {
+ 		xa_erase(&dev->cqs_xa, cq->cq_idx);
+ 		synchronize_irq(cq->eq->irq.irqn);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 84239b907de2a..bb94eb076858c 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -97,6 +97,7 @@
+ #define HNS_ROCE_CQ_BANK_NUM 4
+ 
+ #define CQ_BANKID_SHIFT 2
++#define CQ_BANKID_MASK GENMASK(1, 0)
+ 
+ enum {
+ 	SERV_TYPE_RC,
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 8f7eb11066b43..1d998298e28fc 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -750,7 +750,8 @@ out:
+ 		qp->sq.head += nreq;
+ 		qp->next_sge = sge_idx;
+ 
+-		if (nreq == 1 && (qp->en_flags & HNS_ROCE_QP_CAP_DIRECT_WQE))
++		if (nreq == 1 && !ret &&
++		    (qp->en_flags & HNS_ROCE_QP_CAP_DIRECT_WQE))
+ 			write_dwqe(hr_dev, qp, wqe);
+ 		else
+ 			update_sq_db(hr_dev, qp);
+@@ -6722,14 +6723,14 @@ static int __hns_roce_hw_v2_init_instance(struct hnae3_handle *handle)
+ 	ret = hns_roce_init(hr_dev);
+ 	if (ret) {
+ 		dev_err(hr_dev->dev, "RoCE Engine init failed!\n");
+-		goto error_failed_cfg;
++		goto error_failed_roce_init;
+ 	}
+ 
+ 	if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) {
+ 		ret = free_mr_init(hr_dev);
+ 		if (ret) {
+ 			dev_err(hr_dev->dev, "failed to init free mr!\n");
+-			goto error_failed_roce_init;
++			goto error_failed_free_mr_init;
+ 		}
+ 	}
+ 
+@@ -6737,10 +6738,10 @@ static int __hns_roce_hw_v2_init_instance(struct hnae3_handle *handle)
+ 
+ 	return 0;
+ 
+-error_failed_roce_init:
++error_failed_free_mr_init:
+ 	hns_roce_exit(hr_dev);
+ 
+-error_failed_cfg:
++error_failed_roce_init:
+ 	kfree(hr_dev->priv);
+ 
+ error_failed_kzalloc:
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index 485e110ca4333..9141eadf33d2a 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -219,6 +219,7 @@ static int hns_roce_query_port(struct ib_device *ib_dev, u32 port_num,
+ 	unsigned long flags;
+ 	enum ib_mtu mtu;
+ 	u32 port;
++	int ret;
+ 
+ 	port = port_num - 1;
+ 
+@@ -231,8 +232,10 @@ static int hns_roce_query_port(struct ib_device *ib_dev, u32 port_num,
+ 				IB_PORT_BOOT_MGMT_SUP;
+ 	props->max_msg_sz = HNS_ROCE_MAX_MSG_LEN;
+ 	props->pkey_tbl_len = 1;
+-	props->active_width = IB_WIDTH_4X;
+-	props->active_speed = 1;
++	ret = ib_get_eth_speed(ib_dev, port_num, &props->active_speed,
++			       &props->active_width);
++	if (ret)
++		ibdev_warn(ib_dev, "failed to get speed, ret = %d.\n", ret);
+ 
+ 	spin_lock_irqsave(&hr_dev->iboe.lock, flags);
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index d855a917f4cfa..cdc1c6de43a17 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -170,14 +170,29 @@ static void hns_roce_ib_qp_event(struct hns_roce_qp *hr_qp,
+ 	}
+ }
+ 
+-static u8 get_least_load_bankid_for_qp(struct hns_roce_bank *bank)
++static u8 get_affinity_cq_bank(u8 qp_bank)
+ {
+-	u32 least_load = bank[0].inuse;
++	return (qp_bank >> 1) & CQ_BANKID_MASK;
++}
++
++static u8 get_least_load_bankid_for_qp(struct ib_qp_init_attr *init_attr,
++					struct hns_roce_bank *bank)
++{
++#define INVALID_LOAD_QPNUM 0xFFFFFFFF
++	struct ib_cq *scq = init_attr->send_cq;
++	u32 least_load = INVALID_LOAD_QPNUM;
++	unsigned long cqn = 0;
+ 	u8 bankid = 0;
+ 	u32 bankcnt;
+ 	u8 i;
+ 
+-	for (i = 1; i < HNS_ROCE_QP_BANK_NUM; i++) {
++	if (scq)
++		cqn = to_hr_cq(scq)->cqn;
++
++	for (i = 0; i < HNS_ROCE_QP_BANK_NUM; i++) {
++		if (scq && (get_affinity_cq_bank(i) != (cqn & CQ_BANKID_MASK)))
++			continue;
++
+ 		bankcnt = bank[i].inuse;
+ 		if (bankcnt < least_load) {
+ 			least_load = bankcnt;
+@@ -209,7 +224,8 @@ static int alloc_qpn_with_bankid(struct hns_roce_bank *bank, u8 bankid,
+ 
+ 	return 0;
+ }
+-static int alloc_qpn(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
++static int alloc_qpn(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
++		     struct ib_qp_init_attr *init_attr)
+ {
+ 	struct hns_roce_qp_table *qp_table = &hr_dev->qp_table;
+ 	unsigned long num = 0;
+@@ -220,7 +236,7 @@ static int alloc_qpn(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
+ 		num = 1;
+ 	} else {
+ 		mutex_lock(&qp_table->bank_mutex);
+-		bankid = get_least_load_bankid_for_qp(qp_table->bank);
++		bankid = get_least_load_bankid_for_qp(init_attr, qp_table->bank);
+ 
+ 		ret = alloc_qpn_with_bankid(&qp_table->bank[bankid], bankid,
+ 					    &num);
+@@ -1082,7 +1098,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
+ 		goto err_buf;
+ 	}
+ 
+-	ret = alloc_qpn(hr_dev, hr_qp);
++	ret = alloc_qpn(hr_dev, hr_qp, init_attr);
+ 	if (ret) {
+ 		ibdev_err(ibdev, "failed to alloc QPN, ret = %d.\n", ret);
+ 		goto err_qpn;
+diff --git a/drivers/infiniband/hw/irdma/ctrl.c b/drivers/infiniband/hw/irdma/ctrl.c
+index 45e3344daa048..ef47ec271e19e 100644
+--- a/drivers/infiniband/hw/irdma/ctrl.c
++++ b/drivers/infiniband/hw/irdma/ctrl.c
+@@ -1061,6 +1061,9 @@ static int irdma_sc_alloc_stag(struct irdma_sc_dev *dev,
+ 	u64 hdr;
+ 	enum irdma_page_size page_size;
+ 
++	if (!info->total_len && !info->all_memory)
++		return -EINVAL;
++
+ 	if (info->page_size == 0x40000000)
+ 		page_size = IRDMA_PAGE_SIZE_1G;
+ 	else if (info->page_size == 0x200000)
+@@ -1126,6 +1129,9 @@ static int irdma_sc_mr_reg_non_shared(struct irdma_sc_dev *dev,
+ 	u8 addr_type;
+ 	enum irdma_page_size page_size;
+ 
++	if (!info->total_len && !info->all_memory)
++		return -EINVAL;
++
+ 	if (info->page_size == 0x40000000)
+ 		page_size = IRDMA_PAGE_SIZE_1G;
+ 	else if (info->page_size == 0x200000)
+diff --git a/drivers/infiniband/hw/irdma/main.h b/drivers/infiniband/hw/irdma/main.h
+index 2323962cdeacb..de2f4c0514118 100644
+--- a/drivers/infiniband/hw/irdma/main.h
++++ b/drivers/infiniband/hw/irdma/main.h
+@@ -239,7 +239,7 @@ struct irdma_qv_info {
+ 
+ struct irdma_qvlist_info {
+ 	u32 num_vectors;
+-	struct irdma_qv_info qv_info[1];
++	struct irdma_qv_info qv_info[];
+ };
+ 
+ struct irdma_gen_ops {
+diff --git a/drivers/infiniband/hw/irdma/type.h b/drivers/infiniband/hw/irdma/type.h
+index a20709577ab0a..3b1fa5bc0a585 100644
+--- a/drivers/infiniband/hw/irdma/type.h
++++ b/drivers/infiniband/hw/irdma/type.h
+@@ -971,6 +971,7 @@ struct irdma_allocate_stag_info {
+ 	bool remote_access:1;
+ 	bool use_hmc_fcn_index:1;
+ 	bool use_pf_rid:1;
++	bool all_memory:1;
+ 	u8 hmc_fcn_index;
+ };
+ 
+@@ -998,6 +999,7 @@ struct irdma_reg_ns_stag_info {
+ 	bool use_hmc_fcn_index:1;
+ 	u8 hmc_fcn_index;
+ 	bool use_pf_rid:1;
++	bool all_memory:1;
+ };
+ 
+ struct irdma_fast_reg_stag_info {
+diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
+index 9c4fe4fa90018..377c5bab5f2e0 100644
+--- a/drivers/infiniband/hw/irdma/verbs.c
++++ b/drivers/infiniband/hw/irdma/verbs.c
+@@ -2552,7 +2552,8 @@ static int irdma_hw_alloc_stag(struct irdma_device *iwdev,
+ 			       struct irdma_mr *iwmr)
+ {
+ 	struct irdma_allocate_stag_info *info;
+-	struct irdma_pd *iwpd = to_iwpd(iwmr->ibmr.pd);
++	struct ib_pd *pd = iwmr->ibmr.pd;
++	struct irdma_pd *iwpd = to_iwpd(pd);
+ 	int status;
+ 	struct irdma_cqp_request *cqp_request;
+ 	struct cqp_cmds_info *cqp_info;
+@@ -2568,6 +2569,7 @@ static int irdma_hw_alloc_stag(struct irdma_device *iwdev,
+ 	info->stag_idx = iwmr->stag >> IRDMA_CQPSQ_STAG_IDX_S;
+ 	info->pd_id = iwpd->sc_pd.pd_id;
+ 	info->total_len = iwmr->len;
++	info->all_memory = pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY;
+ 	info->remote_access = true;
+ 	cqp_info->cqp_cmd = IRDMA_OP_ALLOC_STAG;
+ 	cqp_info->post_sq = 1;
+@@ -2615,6 +2617,8 @@ static struct ib_mr *irdma_alloc_mr(struct ib_pd *pd, enum ib_mr_type mr_type,
+ 	iwmr->type = IRDMA_MEMREG_TYPE_MEM;
+ 	palloc = &iwpbl->pble_alloc;
+ 	iwmr->page_cnt = max_num_sg;
++	/* Use system PAGE_SIZE as the sg page sizes are unknown at this point */
++	iwmr->len = max_num_sg * PAGE_SIZE;
+ 	err_code = irdma_get_pble(iwdev->rf->pble_rsrc, palloc, iwmr->page_cnt,
+ 				  false);
+ 	if (err_code)
+@@ -2694,7 +2698,8 @@ static int irdma_hwreg_mr(struct irdma_device *iwdev, struct irdma_mr *iwmr,
+ {
+ 	struct irdma_pbl *iwpbl = &iwmr->iwpbl;
+ 	struct irdma_reg_ns_stag_info *stag_info;
+-	struct irdma_pd *iwpd = to_iwpd(iwmr->ibmr.pd);
++	struct ib_pd *pd = iwmr->ibmr.pd;
++	struct irdma_pd *iwpd = to_iwpd(pd);
+ 	struct irdma_pble_alloc *palloc = &iwpbl->pble_alloc;
+ 	struct irdma_cqp_request *cqp_request;
+ 	struct cqp_cmds_info *cqp_info;
+@@ -2713,6 +2718,7 @@ static int irdma_hwreg_mr(struct irdma_device *iwdev, struct irdma_mr *iwmr,
+ 	stag_info->total_len = iwmr->len;
+ 	stag_info->access_rights = irdma_get_mr_access(access);
+ 	stag_info->pd_id = iwpd->sc_pd.pd_id;
++	stag_info->all_memory = pd->flags & IB_PD_UNSAFE_GLOBAL_RKEY;
+ 	if (stag_info->access_rights & IRDMA_ACCESS_FLAGS_ZERO_BASED)
+ 		stag_info->addr_type = IRDMA_ADDR_TYPE_ZERO_BASED;
+ 	else
+@@ -4424,7 +4430,6 @@ static int irdma_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr)
+ 		ah_attr->grh.traffic_class = ah->sc_ah.ah_info.tc_tos;
+ 		ah_attr->grh.hop_limit = ah->sc_ah.ah_info.hop_ttl;
+ 		ah_attr->grh.sgid_index = ah->sgid_index;
+-		ah_attr->grh.sgid_index = ah->sgid_index;
+ 		memcpy(&ah_attr->grh.dgid, &ah->dgid,
+ 		       sizeof(ah_attr->grh.dgid));
+ 	}
+diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
+index 5111735aafaed..d0bdc2d8adc82 100644
+--- a/drivers/infiniband/sw/rxe/rxe_comp.c
++++ b/drivers/infiniband/sw/rxe/rxe_comp.c
+@@ -597,6 +597,10 @@ static void flush_send_queue(struct rxe_qp *qp, bool notify)
+ 	struct rxe_queue *q = qp->sq.queue;
+ 	int err;
+ 
++	/* send queue never got created. nothing to do. */
++	if (!qp->sq.queue)
++		return;
++
+ 	while ((wqe = queue_head(q, q->type))) {
+ 		if (notify) {
+ 			err = flush_send_wqe(qp, wqe);
+diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
+index 666e06a82bc9e..4d2a8ef52c850 100644
+--- a/drivers/infiniband/sw/rxe/rxe_loc.h
++++ b/drivers/infiniband/sw/rxe/rxe_loc.h
+@@ -136,12 +136,6 @@ static inline int qp_mtu(struct rxe_qp *qp)
+ 		return IB_MTU_4096;
+ }
+ 
+-static inline int rcv_wqe_size(int max_sge)
+-{
+-	return sizeof(struct rxe_recv_wqe) +
+-		max_sge * sizeof(struct ib_sge);
+-}
+-
+ void free_rd_atomic_resource(struct resp_res *res);
+ 
+ static inline void rxe_advance_resp_resource(struct rxe_qp *qp)
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index a569b111a9d2a..28e379c108bce 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -183,13 +183,63 @@ static void rxe_qp_init_misc(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 	atomic_set(&qp->skb_out, 0);
+ }
+ 
++static int rxe_init_sq(struct rxe_qp *qp, struct ib_qp_init_attr *init,
++		       struct ib_udata *udata,
++		       struct rxe_create_qp_resp __user *uresp)
++{
++	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
++	int wqe_size;
++	int err;
++
++	qp->sq.max_wr = init->cap.max_send_wr;
++	wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge),
++			 init->cap.max_inline_data);
++	qp->sq.max_sge = wqe_size / sizeof(struct ib_sge);
++	qp->sq.max_inline = wqe_size;
++	wqe_size += sizeof(struct rxe_send_wqe);
++
++	qp->sq.queue = rxe_queue_init(rxe, &qp->sq.max_wr, wqe_size,
++				      QUEUE_TYPE_FROM_CLIENT);
++	if (!qp->sq.queue) {
++		rxe_err_qp(qp, "Unable to allocate send queue");
++		err = -ENOMEM;
++		goto err_out;
++	}
++
++	/* prepare info for caller to mmap send queue if user space qp */
++	err = do_mmap_info(rxe, uresp ? &uresp->sq_mi : NULL, udata,
++			   qp->sq.queue->buf, qp->sq.queue->buf_size,
++			   &qp->sq.queue->ip);
++	if (err) {
++		rxe_err_qp(qp, "do_mmap_info failed, err = %d", err);
++		goto err_free;
++	}
++
++	/* return actual capabilities to caller which may be larger
++	 * than requested
++	 */
++	init->cap.max_send_wr = qp->sq.max_wr;
++	init->cap.max_send_sge = qp->sq.max_sge;
++	init->cap.max_inline_data = qp->sq.max_inline;
++
++	return 0;
++
++err_free:
++	vfree(qp->sq.queue->buf);
++	kfree(qp->sq.queue);
++	qp->sq.queue = NULL;
++err_out:
++	return err;
++}
++
+ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 			   struct ib_qp_init_attr *init, struct ib_udata *udata,
+ 			   struct rxe_create_qp_resp __user *uresp)
+ {
+ 	int err;
+-	int wqe_size;
+-	enum queue_type type;
++
++	/* if we don't finish qp create make sure queue is valid */
++	skb_queue_head_init(&qp->req_pkts);
+ 
+ 	err = sock_create_kern(&init_net, AF_INET, SOCK_DGRAM, 0, &qp->sk);
+ 	if (err < 0)
+@@ -204,32 +254,10 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 	 * (0xc000 - 0xffff).
+ 	 */
+ 	qp->src_port = RXE_ROCE_V2_SPORT + (hash_32(qp_num(qp), 14) & 0x3fff);
+-	qp->sq.max_wr		= init->cap.max_send_wr;
+-
+-	/* These caps are limited by rxe_qp_chk_cap() done by the caller */
+-	wqe_size = max_t(int, init->cap.max_send_sge * sizeof(struct ib_sge),
+-			 init->cap.max_inline_data);
+-	qp->sq.max_sge = init->cap.max_send_sge =
+-		wqe_size / sizeof(struct ib_sge);
+-	qp->sq.max_inline = init->cap.max_inline_data = wqe_size;
+-	wqe_size += sizeof(struct rxe_send_wqe);
+ 
+-	type = QUEUE_TYPE_FROM_CLIENT;
+-	qp->sq.queue = rxe_queue_init(rxe, &qp->sq.max_wr,
+-				wqe_size, type);
+-	if (!qp->sq.queue)
+-		return -ENOMEM;
+-
+-	err = do_mmap_info(rxe, uresp ? &uresp->sq_mi : NULL, udata,
+-			   qp->sq.queue->buf, qp->sq.queue->buf_size,
+-			   &qp->sq.queue->ip);
+-
+-	if (err) {
+-		vfree(qp->sq.queue->buf);
+-		kfree(qp->sq.queue);
+-		qp->sq.queue = NULL;
++	err = rxe_init_sq(qp, init, udata, uresp);
++	if (err)
+ 		return err;
+-	}
+ 
+ 	qp->req.wqe_index = queue_get_producer(qp->sq.queue,
+ 					       QUEUE_TYPE_FROM_CLIENT);
+@@ -248,36 +276,65 @@ static int rxe_qp_init_req(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 	return 0;
+ }
+ 
++static int rxe_init_rq(struct rxe_qp *qp, struct ib_qp_init_attr *init,
++		       struct ib_udata *udata,
++		       struct rxe_create_qp_resp __user *uresp)
++{
++	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
++	int wqe_size;
++	int err;
++
++	qp->rq.max_wr = init->cap.max_recv_wr;
++	qp->rq.max_sge = init->cap.max_recv_sge;
++	wqe_size = sizeof(struct rxe_recv_wqe) +
++				qp->rq.max_sge*sizeof(struct ib_sge);
++
++	qp->rq.queue = rxe_queue_init(rxe, &qp->rq.max_wr, wqe_size,
++				      QUEUE_TYPE_FROM_CLIENT);
++	if (!qp->rq.queue) {
++		rxe_err_qp(qp, "Unable to allocate recv queue");
++		err = -ENOMEM;
++		goto err_out;
++	}
++
++	/* prepare info for caller to mmap recv queue if user space qp */
++	err = do_mmap_info(rxe, uresp ? &uresp->rq_mi : NULL, udata,
++			   qp->rq.queue->buf, qp->rq.queue->buf_size,
++			   &qp->rq.queue->ip);
++	if (err) {
++		rxe_err_qp(qp, "do_mmap_info failed, err = %d", err);
++		goto err_free;
++	}
++
++	/* return actual capabilities to caller which may be larger
++	 * than requested
++	 */
++	init->cap.max_recv_wr = qp->rq.max_wr;
++
++	return 0;
++
++err_free:
++	vfree(qp->rq.queue->buf);
++	kfree(qp->rq.queue);
++	qp->rq.queue = NULL;
++err_out:
++	return err;
++}
++
+ static int rxe_qp_init_resp(struct rxe_dev *rxe, struct rxe_qp *qp,
+ 			    struct ib_qp_init_attr *init,
+ 			    struct ib_udata *udata,
+ 			    struct rxe_create_qp_resp __user *uresp)
+ {
+ 	int err;
+-	int wqe_size;
+-	enum queue_type type;
++
++	/* if we don't finish qp create make sure queue is valid */
++	skb_queue_head_init(&qp->resp_pkts);
+ 
+ 	if (!qp->srq) {
+-		qp->rq.max_wr		= init->cap.max_recv_wr;
+-		qp->rq.max_sge		= init->cap.max_recv_sge;
+-
+-		wqe_size = rcv_wqe_size(qp->rq.max_sge);
+-
+-		type = QUEUE_TYPE_FROM_CLIENT;
+-		qp->rq.queue = rxe_queue_init(rxe, &qp->rq.max_wr,
+-					wqe_size, type);
+-		if (!qp->rq.queue)
+-			return -ENOMEM;
+-
+-		err = do_mmap_info(rxe, uresp ? &uresp->rq_mi : NULL, udata,
+-				   qp->rq.queue->buf, qp->rq.queue->buf_size,
+-				   &qp->rq.queue->ip);
+-		if (err) {
+-			vfree(qp->rq.queue->buf);
+-			kfree(qp->rq.queue);
+-			qp->rq.queue = NULL;
++		err = rxe_init_rq(qp, init, udata, uresp);
++		if (err)
+ 			return err;
+-		}
+ 	}
+ 
+ 	rxe_init_task(&qp->resp.task, qp, rxe_responder);
+@@ -307,10 +364,10 @@ int rxe_qp_from_init(struct rxe_dev *rxe, struct rxe_qp *qp, struct rxe_pd *pd,
+ 	if (srq)
+ 		rxe_get(srq);
+ 
+-	qp->pd			= pd;
+-	qp->rcq			= rcq;
+-	qp->scq			= scq;
+-	qp->srq			= srq;
++	qp->pd = pd;
++	qp->rcq = rcq;
++	qp->scq = scq;
++	qp->srq = srq;
+ 
+ 	atomic_inc(&rcq->num_wq);
+ 	atomic_inc(&scq->num_wq);
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index 2171f19494bca..d8c41fd626a94 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -578,10 +578,11 @@ static void save_state(struct rxe_send_wqe *wqe,
+ 		       struct rxe_send_wqe *rollback_wqe,
+ 		       u32 *rollback_psn)
+ {
+-	rollback_wqe->state     = wqe->state;
++	rollback_wqe->state = wqe->state;
+ 	rollback_wqe->first_psn = wqe->first_psn;
+-	rollback_wqe->last_psn  = wqe->last_psn;
+-	*rollback_psn		= qp->req.psn;
++	rollback_wqe->last_psn = wqe->last_psn;
++	rollback_wqe->dma = wqe->dma;
++	*rollback_psn = qp->req.psn;
+ }
+ 
+ static void rollback_state(struct rxe_send_wqe *wqe,
+@@ -589,10 +590,11 @@ static void rollback_state(struct rxe_send_wqe *wqe,
+ 			   struct rxe_send_wqe *rollback_wqe,
+ 			   u32 rollback_psn)
+ {
+-	wqe->state     = rollback_wqe->state;
++	wqe->state = rollback_wqe->state;
+ 	wqe->first_psn = rollback_wqe->first_psn;
+-	wqe->last_psn  = rollback_wqe->last_psn;
+-	qp->req.psn    = rollback_psn;
++	wqe->last_psn = rollback_wqe->last_psn;
++	wqe->dma = rollback_wqe->dma;
++	qp->req.psn = rollback_psn;
+ }
+ 
+ static void update_state(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+@@ -797,6 +799,9 @@ int rxe_requester(struct rxe_qp *qp)
+ 	pkt.mask = rxe_opcode[opcode].mask;
+ 	pkt.wqe = wqe;
+ 
++	/* save wqe state before we build and send packet */
++	save_state(wqe, qp, &rollback_wqe, &rollback_psn);
++
+ 	av = rxe_get_av(&pkt, &ah);
+ 	if (unlikely(!av)) {
+ 		rxe_dbg_qp(qp, "Failed no address vector\n");
+@@ -829,29 +834,29 @@ int rxe_requester(struct rxe_qp *qp)
+ 	if (ah)
+ 		rxe_put(ah);
+ 
+-	/*
+-	 * To prevent a race on wqe access between requester and completer,
+-	 * wqe members state and psn need to be set before calling
+-	 * rxe_xmit_packet().
+-	 * Otherwise, completer might initiate an unjustified retry flow.
+-	 */
+-	save_state(wqe, qp, &rollback_wqe, &rollback_psn);
++	/* update wqe state as though we had sent it */
+ 	update_wqe_state(qp, wqe, &pkt);
+ 	update_wqe_psn(qp, wqe, &pkt, payload);
+ 
+ 	err = rxe_xmit_packet(qp, &pkt, skb);
+ 	if (err) {
+-		qp->need_req_skb = 1;
++		if (err != -EAGAIN) {
++			wqe->status = IB_WC_LOC_QP_OP_ERR;
++			goto err;
++		}
+ 
++		/* the packet was dropped so reset wqe to the state
++		 * before we sent it so we can try to resend
++		 */
+ 		rollback_state(wqe, qp, &rollback_wqe, rollback_psn);
+ 
+-		if (err == -EAGAIN) {
+-			rxe_sched_task(&qp->req.task);
+-			goto exit;
+-		}
++		/* force a delay until the dropped packet is freed and
++		 * the send queue is drained below the low water mark
++		 */
++		qp->need_req_skb = 1;
+ 
+-		wqe->status = IB_WC_LOC_QP_OP_ERR;
+-		goto err;
++		rxe_sched_task(&qp->req.task);
++		goto exit;
+ 	}
+ 
+ 	update_state(qp, &pkt);
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index 64c64f5f36a81..da470a925efc7 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -1469,6 +1469,10 @@ static void flush_recv_queue(struct rxe_qp *qp, bool notify)
+ 		return;
+ 	}
+ 
++	/* recv queue not created. nothing to do. */
++	if (!qp->rq.queue)
++		return;
++
+ 	while ((wqe = queue_head(q, q->type))) {
+ 		if (notify) {
+ 			err = flush_recv_wqe(qp, wqe);
+diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
+index 27ca82ec0826b..3661cb627d28a 100644
+--- a/drivers/infiniband/sw/rxe/rxe_srq.c
++++ b/drivers/infiniband/sw/rxe/rxe_srq.c
+@@ -45,40 +45,41 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
+ 		      struct ib_srq_init_attr *init, struct ib_udata *udata,
+ 		      struct rxe_create_srq_resp __user *uresp)
+ {
+-	int err;
+-	int srq_wqe_size;
+ 	struct rxe_queue *q;
+-	enum queue_type type;
++	int wqe_size;
++	int err;
+ 
+-	srq->ibsrq.event_handler	= init->event_handler;
+-	srq->ibsrq.srq_context		= init->srq_context;
+-	srq->limit		= init->attr.srq_limit;
+-	srq->srq_num		= srq->elem.index;
+-	srq->rq.max_wr		= init->attr.max_wr;
+-	srq->rq.max_sge		= init->attr.max_sge;
++	srq->ibsrq.event_handler = init->event_handler;
++	srq->ibsrq.srq_context = init->srq_context;
++	srq->limit = init->attr.srq_limit;
++	srq->srq_num = srq->elem.index;
++	srq->rq.max_wr = init->attr.max_wr;
++	srq->rq.max_sge = init->attr.max_sge;
+ 
+-	srq_wqe_size		= rcv_wqe_size(srq->rq.max_sge);
++	wqe_size = sizeof(struct rxe_recv_wqe) +
++			srq->rq.max_sge*sizeof(struct ib_sge);
+ 
+ 	spin_lock_init(&srq->rq.producer_lock);
+ 	spin_lock_init(&srq->rq.consumer_lock);
+ 
+-	type = QUEUE_TYPE_FROM_CLIENT;
+-	q = rxe_queue_init(rxe, &srq->rq.max_wr, srq_wqe_size, type);
++	q = rxe_queue_init(rxe, &srq->rq.max_wr, wqe_size,
++			   QUEUE_TYPE_FROM_CLIENT);
+ 	if (!q) {
+ 		rxe_dbg_srq(srq, "Unable to allocate queue\n");
+-		return -ENOMEM;
++		err = -ENOMEM;
++		goto err_out;
+ 	}
+ 
+-	srq->rq.queue = q;
+-
+ 	err = do_mmap_info(rxe, uresp ? &uresp->mi : NULL, udata, q->buf,
+ 			   q->buf_size, &q->ip);
+ 	if (err) {
+-		vfree(q->buf);
+-		kfree(q);
+-		return err;
++		rxe_dbg_srq(srq, "Unable to init mmap info for caller\n");
++		goto err_free;
+ 	}
+ 
++	srq->rq.queue = q;
++	init->attr.max_wr = srq->rq.max_wr;
++
+ 	if (uresp) {
+ 		if (copy_to_user(&uresp->srq_num, &srq->srq_num,
+ 				 sizeof(uresp->srq_num))) {
+@@ -88,6 +89,12 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
+ 	}
+ 
+ 	return 0;
++
++err_free:
++	vfree(q->buf);
++	kfree(q);
++err_out:
++	return err;
+ }
+ 
+ int rxe_srq_chk_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
+@@ -145,9 +152,10 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
+ 		      struct ib_srq_attr *attr, enum ib_srq_attr_mask mask,
+ 		      struct rxe_modify_srq_cmd *ucmd, struct ib_udata *udata)
+ {
+-	int err;
+ 	struct rxe_queue *q = srq->rq.queue;
+ 	struct mminfo __user *mi = NULL;
++	int wqe_size;
++	int err;
+ 
+ 	if (mask & IB_SRQ_MAX_WR) {
+ 		/*
+@@ -156,12 +164,16 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
+ 		 */
+ 		mi = u64_to_user_ptr(ucmd->mmap_info_addr);
+ 
+-		err = rxe_queue_resize(q, &attr->max_wr,
+-				       rcv_wqe_size(srq->rq.max_sge), udata, mi,
+-				       &srq->rq.producer_lock,
++		wqe_size = sizeof(struct rxe_recv_wqe) +
++				srq->rq.max_sge*sizeof(struct ib_sge);
++
++		err = rxe_queue_resize(q, &attr->max_wr, wqe_size,
++				       udata, mi, &srq->rq.producer_lock,
+ 				       &srq->rq.consumer_lock);
+ 		if (err)
+-			goto err2;
++			goto err_free;
++
++		srq->rq.max_wr = attr->max_wr;
+ 	}
+ 
+ 	if (mask & IB_SRQ_LIMIT)
+@@ -169,7 +181,7 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
+ 
+ 	return 0;
+ 
+-err2:
++err_free:
+ 	rxe_queue_cleanup(q);
+ 	srq->rq.queue = NULL;
+ 	return err;
+diff --git a/drivers/infiniband/sw/siw/siw.h b/drivers/infiniband/sw/siw/siw.h
+index 2f3a9cda3850f..8b4a710b82bc1 100644
+--- a/drivers/infiniband/sw/siw/siw.h
++++ b/drivers/infiniband/sw/siw/siw.h
+@@ -74,6 +74,7 @@ struct siw_device {
+ 
+ 	u32 vendor_part_id;
+ 	int numa_node;
++	char raw_gid[ETH_ALEN];
+ 
+ 	/* physical port state (only one port per device) */
+ 	enum ib_port_state state;
+diff --git a/drivers/infiniband/sw/siw/siw_cm.c b/drivers/infiniband/sw/siw/siw_cm.c
+index da530c0404da4..a2605178f4eda 100644
+--- a/drivers/infiniband/sw/siw/siw_cm.c
++++ b/drivers/infiniband/sw/siw/siw_cm.c
+@@ -1501,7 +1501,6 @@ error:
+ 
+ 		cep->cm_id = NULL;
+ 		id->rem_ref(id);
+-		siw_cep_put(cep);
+ 
+ 		qp->cep = NULL;
+ 		siw_cep_put(cep);
+diff --git a/drivers/infiniband/sw/siw/siw_main.c b/drivers/infiniband/sw/siw/siw_main.c
+index 65b5cda5457ba..f45600d169ae7 100644
+--- a/drivers/infiniband/sw/siw/siw_main.c
++++ b/drivers/infiniband/sw/siw/siw_main.c
+@@ -75,8 +75,7 @@ static int siw_device_register(struct siw_device *sdev, const char *name)
+ 		return rv;
+ 	}
+ 
+-	siw_dbg(base_dev, "HWaddr=%pM\n", sdev->netdev->dev_addr);
+-
++	siw_dbg(base_dev, "HWaddr=%pM\n", sdev->raw_gid);
+ 	return 0;
+ }
+ 
+@@ -313,24 +312,19 @@ static struct siw_device *siw_device_create(struct net_device *netdev)
+ 		return NULL;
+ 
+ 	base_dev = &sdev->base_dev;
+-
+ 	sdev->netdev = netdev;
+ 
+-	if (netdev->type != ARPHRD_LOOPBACK && netdev->type != ARPHRD_NONE) {
+-		addrconf_addr_eui48((unsigned char *)&base_dev->node_guid,
+-				    netdev->dev_addr);
++	if (netdev->addr_len) {
++		memcpy(sdev->raw_gid, netdev->dev_addr,
++		       min_t(unsigned int, netdev->addr_len, ETH_ALEN));
+ 	} else {
+ 		/*
+-		 * This device does not have a HW address,
+-		 * but connection mangagement lib expects gid != 0
++		 * This device does not have a HW address, but
++		 * connection mangagement requires a unique gid.
+ 		 */
+-		size_t len = min_t(size_t, strlen(base_dev->name), 6);
+-		char addr[6] = { };
+-
+-		memcpy(addr, base_dev->name, len);
+-		addrconf_addr_eui48((unsigned char *)&base_dev->node_guid,
+-				    addr);
++		eth_random_addr(sdev->raw_gid);
+ 	}
++	addrconf_addr_eui48((u8 *)&base_dev->node_guid, sdev->raw_gid);
+ 
+ 	base_dev->uverbs_cmd_mask |= BIT_ULL(IB_USER_VERBS_CMD_POST_SEND);
+ 
+diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
+index 398ec13db6248..10cabc792c68e 100644
+--- a/drivers/infiniband/sw/siw/siw_verbs.c
++++ b/drivers/infiniband/sw/siw/siw_verbs.c
+@@ -157,7 +157,7 @@ int siw_query_device(struct ib_device *base_dev, struct ib_device_attr *attr,
+ 	attr->vendor_part_id = sdev->vendor_part_id;
+ 
+ 	addrconf_addr_eui48((u8 *)&attr->sys_image_guid,
+-			    sdev->netdev->dev_addr);
++			    sdev->raw_gid);
+ 
+ 	return 0;
+ }
+@@ -218,7 +218,7 @@ int siw_query_gid(struct ib_device *base_dev, u32 port, int idx,
+ 
+ 	/* subnet_prefix == interface_id == 0; */
+ 	memset(gid, 0, sizeof(*gid));
+-	memcpy(&gid->raw[0], sdev->netdev->dev_addr, 6);
++	memcpy(gid->raw, sdev->raw_gid, ETH_ALEN);
+ 
+ 	return 0;
+ }
+@@ -1494,7 +1494,7 @@ int siw_map_mr_sg(struct ib_mr *base_mr, struct scatterlist *sl, int num_sle,
+ 
+ 	if (pbl->max_buf < num_sle) {
+ 		siw_dbg_mem(mem, "too many SGE's: %d > %d\n",
+-			    mem->pbl->max_buf, num_sle);
++			    num_sle, pbl->max_buf);
+ 		return -ENOMEM;
+ 	}
+ 	for_each_sg(sl, slp, num_sle, i) {
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index 92e1e7587af8b..00a7303c8cc60 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -2570,6 +2570,8 @@ static void isert_wait_conn(struct iscsit_conn *conn)
+ 	isert_put_unsol_pending_cmds(conn);
+ 	isert_wait4cmds(conn);
+ 	isert_wait4logout(isert_conn);
++
++	queue_work(isert_release_wq, &isert_conn->release_work);
+ }
+ 
+ static void isert_free_conn(struct iscsit_conn *conn)
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 0e513a7e5ac80..1574218764e0a 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -1979,12 +1979,8 @@ static void srp_process_rsp(struct srp_rdma_ch *ch, struct srp_rsp *rsp)
+ 
+ 		if (unlikely(rsp->flags & SRP_RSP_FLAG_DIUNDER))
+ 			scsi_set_resid(scmnd, be32_to_cpu(rsp->data_in_res_cnt));
+-		else if (unlikely(rsp->flags & SRP_RSP_FLAG_DIOVER))
+-			scsi_set_resid(scmnd, -be32_to_cpu(rsp->data_in_res_cnt));
+ 		else if (unlikely(rsp->flags & SRP_RSP_FLAG_DOUNDER))
+ 			scsi_set_resid(scmnd, be32_to_cpu(rsp->data_out_res_cnt));
+-		else if (unlikely(rsp->flags & SRP_RSP_FLAG_DOOVER))
+-			scsi_set_resid(scmnd, -be32_to_cpu(rsp->data_out_res_cnt));
+ 
+ 		srp_free_req(ch, req, scmnd,
+ 			     be32_to_cpu(rsp->req_lim_delta));
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index 028e45bd050bf..1724d6cb8649d 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -1281,6 +1281,13 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ 		.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+ 					SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ 	},
++	/* See comment on TUXEDO InfinityBook S17 Gen6 / Clevo NS70MU above */
++	{
++		.matches = {
++			DMI_MATCH(DMI_BOARD_NAME, "PD5x_7xPNP_PNR_PNN_PNT"),
++		},
++		.driver_data = (void *)(SERIO_QUIRK_NOAUX)
++	},
+ 	{
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "X170SM"),
+diff --git a/drivers/interconnect/qcom/bcm-voter.c b/drivers/interconnect/qcom/bcm-voter.c
+index d5f2a6b5376bd..a2d437a05a11f 100644
+--- a/drivers/interconnect/qcom/bcm-voter.c
++++ b/drivers/interconnect/qcom/bcm-voter.c
+@@ -58,6 +58,36 @@ static u64 bcm_div(u64 num, u32 base)
+ 	return num;
+ }
+ 
++/* BCMs with enable_mask use one-hot-encoding for on/off signaling */
++static void bcm_aggregate_mask(struct qcom_icc_bcm *bcm)
++{
++	struct qcom_icc_node *node;
++	int bucket, i;
++
++	for (bucket = 0; bucket < QCOM_ICC_NUM_BUCKETS; bucket++) {
++		bcm->vote_x[bucket] = 0;
++		bcm->vote_y[bucket] = 0;
++
++		for (i = 0; i < bcm->num_nodes; i++) {
++			node = bcm->nodes[i];
++
++			/* If any vote in this bucket exists, keep the BCM enabled */
++			if (node->sum_avg[bucket] || node->max_peak[bucket]) {
++				bcm->vote_x[bucket] = 0;
++				bcm->vote_y[bucket] = bcm->enable_mask;
++				break;
++			}
++		}
++	}
++
++	if (bcm->keepalive) {
++		bcm->vote_x[QCOM_ICC_BUCKET_AMC] = bcm->enable_mask;
++		bcm->vote_x[QCOM_ICC_BUCKET_WAKE] = bcm->enable_mask;
++		bcm->vote_y[QCOM_ICC_BUCKET_AMC] = bcm->enable_mask;
++		bcm->vote_y[QCOM_ICC_BUCKET_WAKE] = bcm->enable_mask;
++	}
++}
++
+ static void bcm_aggregate(struct qcom_icc_bcm *bcm)
+ {
+ 	struct qcom_icc_node *node;
+@@ -83,11 +113,6 @@ static void bcm_aggregate(struct qcom_icc_bcm *bcm)
+ 
+ 		temp = agg_peak[bucket] * bcm->vote_scale;
+ 		bcm->vote_y[bucket] = bcm_div(temp, bcm->aux_data.unit);
+-
+-		if (bcm->enable_mask && (bcm->vote_x[bucket] || bcm->vote_y[bucket])) {
+-			bcm->vote_x[bucket] = 0;
+-			bcm->vote_y[bucket] = bcm->enable_mask;
+-		}
+ 	}
+ 
+ 	if (bcm->keepalive && bcm->vote_x[QCOM_ICC_BUCKET_AMC] == 0 &&
+@@ -260,8 +285,12 @@ int qcom_icc_bcm_voter_commit(struct bcm_voter *voter)
+ 		return 0;
+ 
+ 	mutex_lock(&voter->lock);
+-	list_for_each_entry(bcm, &voter->commit_list, list)
+-		bcm_aggregate(bcm);
++	list_for_each_entry(bcm, &voter->commit_list, list) {
++		if (bcm->enable_mask)
++			bcm_aggregate_mask(bcm);
++		else
++			bcm_aggregate(bcm);
++	}
+ 
+ 	/*
+ 	 * Pre sort the BCMs based on VCD for ease of generating a command list
+diff --git a/drivers/interconnect/qcom/qcm2290.c b/drivers/interconnect/qcom/qcm2290.c
+index a29cdb4fac03f..82a2698ad66b1 100644
+--- a/drivers/interconnect/qcom/qcm2290.c
++++ b/drivers/interconnect/qcom/qcm2290.c
+@@ -1355,6 +1355,7 @@ static struct platform_driver qcm2290_noc_driver = {
+ 	.driver = {
+ 		.name = "qnoc-qcm2290",
+ 		.of_match_table = qcm2290_noc_of_match,
++		.sync_state = icc_sync_state,
+ 	},
+ };
+ module_platform_driver(qcm2290_noc_driver);
+diff --git a/drivers/interconnect/qcom/sm8450.c b/drivers/interconnect/qcom/sm8450.c
+index e64c214b40209..d6e582a02e628 100644
+--- a/drivers/interconnect/qcom/sm8450.c
++++ b/drivers/interconnect/qcom/sm8450.c
+@@ -1886,6 +1886,7 @@ static struct platform_driver qnoc_driver = {
+ 	.driver = {
+ 		.name = "qnoc-sm8450",
+ 		.of_match_table = qnoc_of_match,
++		.sync_state = icc_sync_state,
+ 	},
+ };
+ 
+diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c
+index 261352a232716..65d78d7e04408 100644
+--- a/drivers/iommu/amd/iommu_v2.c
++++ b/drivers/iommu/amd/iommu_v2.c
+@@ -262,8 +262,8 @@ static void put_pasid_state(struct pasid_state *pasid_state)
+ 
+ static void put_pasid_state_wait(struct pasid_state *pasid_state)
+ {
+-	refcount_dec(&pasid_state->count);
+-	wait_event(pasid_state->wq, !refcount_read(&pasid_state->count));
++	if (!refcount_dec_and_test(&pasid_state->count))
++		wait_event(pasid_state->wq, !refcount_read(&pasid_state->count));
+ 	free_pasid_state(pasid_state);
+ }
+ 
+diff --git a/drivers/iommu/arm/arm-smmu/qcom_iommu.c b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
+index a503ed758ec30..3e551ca6afdb9 100644
+--- a/drivers/iommu/arm/arm-smmu/qcom_iommu.c
++++ b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
+@@ -273,6 +273,13 @@ static int qcom_iommu_init_domain(struct iommu_domain *domain,
+ 			ctx->secure_init = true;
+ 		}
+ 
++		/* Disable context bank before programming */
++		iommu_writel(ctx, ARM_SMMU_CB_SCTLR, 0);
++
++		/* Clear context bank fault address fault status registers */
++		iommu_writel(ctx, ARM_SMMU_CB_FAR, 0);
++		iommu_writel(ctx, ARM_SMMU_CB_FSR, ARM_SMMU_FSR_FAULT);
++
+ 		/* TTBRs */
+ 		iommu_writeq(ctx, ARM_SMMU_CB_TTBR0,
+ 				pgtbl_cfg.arm_lpae_s1_cfg.ttbr |
+diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
+index c5d479770e12e..49fc5a038a145 100644
+--- a/drivers/iommu/intel/pasid.c
++++ b/drivers/iommu/intel/pasid.c
+@@ -129,7 +129,7 @@ int intel_pasid_alloc_table(struct device *dev)
+ 	info->pasid_table = pasid_table;
+ 
+ 	if (!ecap_coherent(info->iommu->ecap))
+-		clflush_cache_range(pasid_table->table, size);
++		clflush_cache_range(pasid_table->table, (1 << order) * PAGE_SIZE);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index caaf563d38ae0..cabeb5bd3e41f 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -3203,7 +3203,7 @@ static void __iommu_release_dma_ownership(struct iommu_group *group)
+ 
+ /**
+  * iommu_group_release_dma_owner() - Release DMA ownership of a group
+- * @dev: The device
++ * @group: The group
+  *
+  * Release the DMA ownership claimed by iommu_group_claim_dma_owner().
+  */
+@@ -3217,7 +3217,7 @@ EXPORT_SYMBOL_GPL(iommu_group_release_dma_owner);
+ 
+ /**
+  * iommu_device_release_dma_owner() - Release DMA ownership of a device
+- * @group: The device.
++ * @dev: The device.
+  *
+  * Release the DMA ownership claimed by iommu_device_claim_dma_owner().
+  */
+diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
+index ed2937a4e196f..2e43ebf1a2b5c 100644
+--- a/drivers/iommu/iommufd/device.c
++++ b/drivers/iommu/iommufd/device.c
+@@ -298,8 +298,8 @@ static int iommufd_device_auto_get_domain(struct iommufd_device *idev,
+ 	}
+ 	hwpt->auto_domain = true;
+ 
+-	mutex_unlock(&ioas->mutex);
+ 	iommufd_object_finalize(idev->ictx, &hwpt->obj);
++	mutex_unlock(&ioas->mutex);
+ 	return 0;
+ out_unlock:
+ 	mutex_unlock(&ioas->mutex);
+diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
+index e93906d6e112e..c2764891a779c 100644
+--- a/drivers/iommu/mtk_iommu.c
++++ b/drivers/iommu/mtk_iommu.c
+@@ -258,6 +258,8 @@ struct mtk_iommu_data {
+ 	struct device			*smicomm_dev;
+ 
+ 	struct mtk_iommu_bank_data	*bank;
++	struct mtk_iommu_domain		*share_dom; /* For 2 HWs share pgtable */
++
+ 	struct regmap			*pericfg;
+ 	struct mutex			mutex; /* Protect m4u_group/m4u_dom above */
+ 
+@@ -620,15 +622,14 @@ static int mtk_iommu_domain_finalise(struct mtk_iommu_domain *dom,
+ 				     struct mtk_iommu_data *data,
+ 				     unsigned int region_id)
+ {
++	struct mtk_iommu_domain	*share_dom = data->share_dom;
+ 	const struct mtk_iommu_iova_region *region;
+-	struct mtk_iommu_domain	*m4u_dom;
+-
+-	/* Always use bank0 in sharing pgtable case */
+-	m4u_dom = data->bank[0].m4u_dom;
+-	if (m4u_dom) {
+-		dom->iop = m4u_dom->iop;
+-		dom->cfg = m4u_dom->cfg;
+-		dom->domain.pgsize_bitmap = m4u_dom->cfg.pgsize_bitmap;
++
++	/* Always use share domain in sharing pgtable case */
++	if (MTK_IOMMU_HAS_FLAG(data->plat_data, SHARE_PGTABLE) && share_dom) {
++		dom->iop = share_dom->iop;
++		dom->cfg = share_dom->cfg;
++		dom->domain.pgsize_bitmap = share_dom->cfg.pgsize_bitmap;
+ 		goto update_iova_region;
+ 	}
+ 
+@@ -658,6 +659,9 @@ static int mtk_iommu_domain_finalise(struct mtk_iommu_domain *dom,
+ 	/* Update our support page sizes bitmap */
+ 	dom->domain.pgsize_bitmap = dom->cfg.pgsize_bitmap;
+ 
++	if (MTK_IOMMU_HAS_FLAG(data->plat_data, SHARE_PGTABLE))
++		data->share_dom = dom;
++
+ update_iova_region:
+ 	/* Update the iova region for this domain */
+ 	region = data->plat_data->iova_region + region_id;
+@@ -708,7 +712,9 @@ static int mtk_iommu_attach_device(struct iommu_domain *domain,
+ 		/* Data is in the frstdata in sharing pgtable case. */
+ 		frstdata = mtk_iommu_get_frst_data(hw_list);
+ 
++		mutex_lock(&frstdata->mutex);
+ 		ret = mtk_iommu_domain_finalise(dom, frstdata, region_id);
++		mutex_unlock(&frstdata->mutex);
+ 		if (ret) {
+ 			mutex_unlock(&dom->mutex);
+ 			return ret;
+diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
+index 4054030c32379..ae42959bc4905 100644
+--- a/drivers/iommu/rockchip-iommu.c
++++ b/drivers/iommu/rockchip-iommu.c
+@@ -98,8 +98,6 @@ struct rk_iommu_ops {
+ 	phys_addr_t (*pt_address)(u32 dte);
+ 	u32 (*mk_dtentries)(dma_addr_t pt_dma);
+ 	u32 (*mk_ptentries)(phys_addr_t page, int prot);
+-	phys_addr_t (*dte_addr_phys)(u32 addr);
+-	u32 (*dma_addr_dte)(dma_addr_t dt_dma);
+ 	u64 dma_bit_mask;
+ };
+ 
+@@ -278,8 +276,8 @@ static u32 rk_mk_pte(phys_addr_t page, int prot)
+ /*
+  * In v2:
+  * 31:12 - Page address bit 31:0
+- *  11:9 - Page address bit 34:32
+- *   8:4 - Page address bit 39:35
++ * 11: 8 - Page address bit 35:32
++ *  7: 4 - Page address bit 39:36
+  *     3 - Security
+  *     2 - Writable
+  *     1 - Readable
+@@ -506,7 +504,7 @@ static int rk_iommu_force_reset(struct rk_iommu *iommu)
+ 
+ 	/*
+ 	 * Check if register DTE_ADDR is working by writing DTE_ADDR_DUMMY
+-	 * and verifying that upper 5 nybbles are read back.
++	 * and verifying that upper 5 (v1) or 7 (v2) nybbles are read back.
+ 	 */
+ 	for (i = 0; i < iommu->num_mmu; i++) {
+ 		dte_addr = rk_ops->pt_address(DTE_ADDR_DUMMY);
+@@ -531,33 +529,6 @@ static int rk_iommu_force_reset(struct rk_iommu *iommu)
+ 	return 0;
+ }
+ 
+-static inline phys_addr_t rk_dte_addr_phys(u32 addr)
+-{
+-	return (phys_addr_t)addr;
+-}
+-
+-static inline u32 rk_dma_addr_dte(dma_addr_t dt_dma)
+-{
+-	return dt_dma;
+-}
+-
+-#define DT_HI_MASK GENMASK_ULL(39, 32)
+-#define DTE_BASE_HI_MASK GENMASK(11, 4)
+-#define DT_SHIFT   28
+-
+-static inline phys_addr_t rk_dte_addr_phys_v2(u32 addr)
+-{
+-	u64 addr64 = addr;
+-	return (phys_addr_t)(addr64 & RK_DTE_PT_ADDRESS_MASK) |
+-	       ((addr64 & DTE_BASE_HI_MASK) << DT_SHIFT);
+-}
+-
+-static inline u32 rk_dma_addr_dte_v2(dma_addr_t dt_dma)
+-{
+-	return (dt_dma & RK_DTE_PT_ADDRESS_MASK) |
+-	       ((dt_dma & DT_HI_MASK) >> DT_SHIFT);
+-}
+-
+ static void log_iova(struct rk_iommu *iommu, int index, dma_addr_t iova)
+ {
+ 	void __iomem *base = iommu->bases[index];
+@@ -577,7 +548,7 @@ static void log_iova(struct rk_iommu *iommu, int index, dma_addr_t iova)
+ 	page_offset = rk_iova_page_offset(iova);
+ 
+ 	mmu_dte_addr = rk_iommu_read(base, RK_MMU_DTE_ADDR);
+-	mmu_dte_addr_phys = rk_ops->dte_addr_phys(mmu_dte_addr);
++	mmu_dte_addr_phys = rk_ops->pt_address(mmu_dte_addr);
+ 
+ 	dte_addr_phys = mmu_dte_addr_phys + (4 * dte_index);
+ 	dte_addr = phys_to_virt(dte_addr_phys);
+@@ -967,7 +938,7 @@ static int rk_iommu_enable(struct rk_iommu *iommu)
+ 
+ 	for (i = 0; i < iommu->num_mmu; i++) {
+ 		rk_iommu_write(iommu->bases[i], RK_MMU_DTE_ADDR,
+-			       rk_ops->dma_addr_dte(rk_domain->dt_dma));
++			       rk_ops->mk_dtentries(rk_domain->dt_dma));
+ 		rk_iommu_base_command(iommu->bases[i], RK_MMU_CMD_ZAP_CACHE);
+ 		rk_iommu_write(iommu->bases[i], RK_MMU_INT_MASK, RK_MMU_IRQ_MASK);
+ 	}
+@@ -1405,8 +1376,6 @@ static struct rk_iommu_ops iommu_data_ops_v1 = {
+ 	.pt_address = &rk_dte_pt_address,
+ 	.mk_dtentries = &rk_mk_dte,
+ 	.mk_ptentries = &rk_mk_pte,
+-	.dte_addr_phys = &rk_dte_addr_phys,
+-	.dma_addr_dte = &rk_dma_addr_dte,
+ 	.dma_bit_mask = DMA_BIT_MASK(32),
+ };
+ 
+@@ -1414,8 +1383,6 @@ static struct rk_iommu_ops iommu_data_ops_v2 = {
+ 	.pt_address = &rk_dte_pt_address_v2,
+ 	.mk_dtentries = &rk_mk_dte_v2,
+ 	.mk_ptentries = &rk_mk_pte_v2,
+-	.dte_addr_phys = &rk_dte_addr_phys_v2,
+-	.dma_addr_dte = &rk_dma_addr_dte_v2,
+ 	.dma_bit_mask = DMA_BIT_MASK(40),
+ };
+ 
+diff --git a/drivers/iommu/sprd-iommu.c b/drivers/iommu/sprd-iommu.c
+index 39e34fdeccda7..eb684d8807cab 100644
+--- a/drivers/iommu/sprd-iommu.c
++++ b/drivers/iommu/sprd-iommu.c
+@@ -148,6 +148,7 @@ static struct iommu_domain *sprd_iommu_domain_alloc(unsigned int domain_type)
+ 
+ 	dom->domain.geometry.aperture_start = 0;
+ 	dom->domain.geometry.aperture_end = SZ_256M - 1;
++	dom->domain.geometry.force_aperture = true;
+ 
+ 	return &dom->domain;
+ }
+diff --git a/drivers/irqchip/irq-loongson-eiointc.c b/drivers/irqchip/irq-loongson-eiointc.c
+index 92d8aa28bdf54..1623cd7791752 100644
+--- a/drivers/irqchip/irq-loongson-eiointc.c
++++ b/drivers/irqchip/irq-loongson-eiointc.c
+@@ -144,7 +144,7 @@ static int eiointc_router_init(unsigned int cpu)
+ 	int i, bit;
+ 	uint32_t data;
+ 	uint32_t node = cpu_to_eio_node(cpu);
+-	uint32_t index = eiointc_index(node);
++	int index = eiointc_index(node);
+ 
+ 	if (index < 0) {
+ 		pr_err("Error: invalid nodemap!\n");
+diff --git a/drivers/leds/led-class-multicolor.c b/drivers/leds/led-class-multicolor.c
+index e317408583df9..ec62a48116135 100644
+--- a/drivers/leds/led-class-multicolor.c
++++ b/drivers/leds/led-class-multicolor.c
+@@ -6,6 +6,7 @@
+ #include <linux/device.h>
+ #include <linux/init.h>
+ #include <linux/led-class-multicolor.h>
++#include <linux/math.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
+ #include <linux/uaccess.h>
+@@ -19,9 +20,10 @@ int led_mc_calc_color_components(struct led_classdev_mc *mcled_cdev,
+ 	int i;
+ 
+ 	for (i = 0; i < mcled_cdev->num_colors; i++)
+-		mcled_cdev->subled_info[i].brightness = brightness *
+-					mcled_cdev->subled_info[i].intensity /
+-					led_cdev->max_brightness;
++		mcled_cdev->subled_info[i].brightness =
++			DIV_ROUND_CLOSEST(brightness *
++					  mcled_cdev->subled_info[i].intensity,
++					  led_cdev->max_brightness);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/leds/led-core.c b/drivers/leds/led-core.c
+index b9b1295833c90..04f9ea675f2ce 100644
+--- a/drivers/leds/led-core.c
++++ b/drivers/leds/led-core.c
+@@ -474,15 +474,15 @@ int led_compose_name(struct device *dev, struct led_init_data *init_data,
+ 	struct fwnode_handle *fwnode = init_data->fwnode;
+ 	const char *devicename = init_data->devicename;
+ 
+-	/* We want to label LEDs that can produce full range of colors
+-	 * as RGB, not multicolor */
+-	BUG_ON(props.color == LED_COLOR_ID_MULTI);
+-
+ 	if (!led_classdev_name)
+ 		return -EINVAL;
+ 
+ 	led_parse_fwnode_props(dev, fwnode, &props);
+ 
++	/* We want to label LEDs that can produce full range of colors
++	 * as RGB, not multicolor */
++	BUG_ON(props.color == LED_COLOR_ID_MULTI);
++
+ 	if (props.label) {
+ 		/*
+ 		 * If init_data.devicename is NULL, then it indicates that
+diff --git a/drivers/leds/leds-aw200xx.c b/drivers/leds/leds-aw200xx.c
+index 96979b8e09b7d..7b996bc01c469 100644
+--- a/drivers/leds/leds-aw200xx.c
++++ b/drivers/leds/leds-aw200xx.c
+@@ -368,7 +368,7 @@ static int aw200xx_probe_fw(struct device *dev, struct aw200xx *chip)
+ 
+ 	if (!chip->display_rows ||
+ 	    chip->display_rows > chip->cdef->display_size_rows_max) {
+-		return dev_err_probe(dev, ret,
++		return dev_err_probe(dev, -EINVAL,
+ 				     "Invalid leds display size %u\n",
+ 				     chip->display_rows);
+ 	}
+diff --git a/drivers/leds/leds-pwm.c b/drivers/leds/leds-pwm.c
+index 29194cc382afb..87c199242f3c8 100644
+--- a/drivers/leds/leds-pwm.c
++++ b/drivers/leds/leds-pwm.c
+@@ -146,7 +146,7 @@ static int led_pwm_create_fwnode(struct device *dev, struct led_pwm_priv *priv)
+ 			led.name = to_of_node(fwnode)->name;
+ 
+ 		if (!led.name) {
+-			ret = EINVAL;
++			ret = -EINVAL;
+ 			goto err_child_out;
+ 		}
+ 
+diff --git a/drivers/leds/simple/Kconfig b/drivers/leds/simple/Kconfig
+index 44fa0f93cb3b3..02443e745ff3b 100644
+--- a/drivers/leds/simple/Kconfig
++++ b/drivers/leds/simple/Kconfig
+@@ -1,6 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ config LEDS_SIEMENS_SIMATIC_IPC
+ 	tristate "LED driver for Siemens Simatic IPCs"
++	depends on LEDS_CLASS
+ 	depends on SIEMENS_SIMATIC_IPC
+ 	help
+ 	  This option enables support for the LEDs of several Industrial PCs
+diff --git a/drivers/leds/trigger/ledtrig-tty.c b/drivers/leds/trigger/ledtrig-tty.c
+index f62db7e520b52..8ae0d2d284aff 100644
+--- a/drivers/leds/trigger/ledtrig-tty.c
++++ b/drivers/leds/trigger/ledtrig-tty.c
+@@ -7,6 +7,8 @@
+ #include <linux/tty.h>
+ #include <uapi/linux/serial.h>
+ 
++#define LEDTRIG_TTY_INTERVAL	50
++
+ struct ledtrig_tty_data {
+ 	struct led_classdev *led_cdev;
+ 	struct delayed_work dwork;
+@@ -122,17 +124,19 @@ static void ledtrig_tty_work(struct work_struct *work)
+ 
+ 	if (icount.rx != trigger_data->rx ||
+ 	    icount.tx != trigger_data->tx) {
+-		led_set_brightness_sync(trigger_data->led_cdev, LED_ON);
++		unsigned long interval = LEDTRIG_TTY_INTERVAL;
++
++		led_blink_set_oneshot(trigger_data->led_cdev, &interval,
++				      &interval, 0);
+ 
+ 		trigger_data->rx = icount.rx;
+ 		trigger_data->tx = icount.tx;
+-	} else {
+-		led_set_brightness_sync(trigger_data->led_cdev, LED_OFF);
+ 	}
+ 
+ out:
+ 	mutex_unlock(&trigger_data->mutex);
+-	schedule_delayed_work(&trigger_data->dwork, msecs_to_jiffies(100));
++	schedule_delayed_work(&trigger_data->dwork,
++			      msecs_to_jiffies(LEDTRIG_TTY_INTERVAL * 2));
+ }
+ 
+ static struct attribute *ledtrig_tty_attrs[] = {
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 1ff712889a3b3..a08bf6b9accb4 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -2542,6 +2542,10 @@ backlog_store(struct mddev *mddev, const char *buf, size_t len)
+ 	if (backlog > COUNTER_MAX)
+ 		return -EINVAL;
+ 
++	rv = mddev_lock(mddev);
++	if (rv)
++		return rv;
++
+ 	/*
+ 	 * Without write mostly device, it doesn't make sense to set
+ 	 * backlog for max_write_behind.
+@@ -2555,6 +2559,7 @@ backlog_store(struct mddev *mddev, const char *buf, size_t len)
+ 	if (!has_write_mostly) {
+ 		pr_warn_ratelimited("%s: can't set backlog, no write mostly device available\n",
+ 				    mdname(mddev));
++		mddev_unlock(mddev);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -2565,13 +2570,13 @@ backlog_store(struct mddev *mddev, const char *buf, size_t len)
+ 			mddev_destroy_serial_pool(mddev, NULL, false);
+ 	} else if (backlog && !mddev->serial_info_pool) {
+ 		/* serial_info_pool is needed since backlog is not zero */
+-		struct md_rdev *rdev;
+-
+ 		rdev_for_each(rdev, mddev)
+ 			mddev_create_serial_pool(mddev, rdev, false);
+ 	}
+ 	if (old_mwb != backlog)
+ 		md_bitmap_update_sb(mddev->bitmap);
++
++	mddev_unlock(mddev);
+ 	return len;
+ }
+ 
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 78be7811a89f5..2a4a3d3039fae 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -465,11 +465,13 @@ EXPORT_SYMBOL_GPL(mddev_suspend);
+ 
+ void mddev_resume(struct mddev *mddev)
+ {
+-	/* entred the memalloc scope from mddev_suspend() */
+-	memalloc_noio_restore(mddev->noio_flag);
+ 	lockdep_assert_held(&mddev->reconfig_mutex);
+ 	if (--mddev->suspended)
+ 		return;
++
++	/* entered the memalloc scope from mddev_suspend() */
++	memalloc_noio_restore(mddev->noio_flag);
++
+ 	percpu_ref_resurrect(&mddev->active_io);
+ 	wake_up(&mddev->sb_wait);
+ 	mddev->pers->quiesce(mddev, 0);
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index d1ac73fcd8529..7c6a0b4437d8f 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -557,54 +557,20 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
+ 	bio_endio(bio);
+ }
+ 
+-static bool raid0_make_request(struct mddev *mddev, struct bio *bio)
++static void raid0_map_submit_bio(struct mddev *mddev, struct bio *bio)
+ {
+ 	struct r0conf *conf = mddev->private;
+ 	struct strip_zone *zone;
+ 	struct md_rdev *tmp_dev;
+-	sector_t bio_sector;
+-	sector_t sector;
+-	sector_t orig_sector;
+-	unsigned chunk_sects;
+-	unsigned sectors;
+-
+-	if (unlikely(bio->bi_opf & REQ_PREFLUSH)
+-	    && md_flush_request(mddev, bio))
+-		return true;
++	sector_t bio_sector = bio->bi_iter.bi_sector;
++	sector_t sector = bio_sector;
+ 
+-	if (unlikely((bio_op(bio) == REQ_OP_DISCARD))) {
+-		raid0_handle_discard(mddev, bio);
+-		return true;
+-	}
++	md_account_bio(mddev, &bio);
+ 
+-	bio_sector = bio->bi_iter.bi_sector;
+-	sector = bio_sector;
+-	chunk_sects = mddev->chunk_sectors;
+-
+-	sectors = chunk_sects -
+-		(likely(is_power_of_2(chunk_sects))
+-		 ? (sector & (chunk_sects-1))
+-		 : sector_div(sector, chunk_sects));
+-
+-	/* Restore due to sector_div */
+-	sector = bio_sector;
+-
+-	if (sectors < bio_sectors(bio)) {
+-		struct bio *split = bio_split(bio, sectors, GFP_NOIO,
+-					      &mddev->bio_set);
+-		bio_chain(split, bio);
+-		submit_bio_noacct(bio);
+-		bio = split;
+-	}
+-
+-	if (bio->bi_pool != &mddev->bio_set)
+-		md_account_bio(mddev, &bio);
+-
+-	orig_sector = sector;
+ 	zone = find_zone(mddev->private, &sector);
+ 	switch (conf->layout) {
+ 	case RAID0_ORIG_LAYOUT:
+-		tmp_dev = map_sector(mddev, zone, orig_sector, &sector);
++		tmp_dev = map_sector(mddev, zone, bio_sector, &sector);
+ 		break;
+ 	case RAID0_ALT_MULTIZONE_LAYOUT:
+ 		tmp_dev = map_sector(mddev, zone, sector, &sector);
+@@ -612,13 +578,13 @@ static bool raid0_make_request(struct mddev *mddev, struct bio *bio)
+ 	default:
+ 		WARN(1, "md/raid0:%s: Invalid layout\n", mdname(mddev));
+ 		bio_io_error(bio);
+-		return true;
++		return;
+ 	}
+ 
+ 	if (unlikely(is_rdev_broken(tmp_dev))) {
+ 		bio_io_error(bio);
+ 		md_error(mddev, tmp_dev);
+-		return true;
++		return;
+ 	}
+ 
+ 	bio_set_dev(bio, tmp_dev->bdev);
+@@ -630,6 +596,40 @@ static bool raid0_make_request(struct mddev *mddev, struct bio *bio)
+ 				      bio_sector);
+ 	mddev_check_write_zeroes(mddev, bio);
+ 	submit_bio_noacct(bio);
++}
++
++static bool raid0_make_request(struct mddev *mddev, struct bio *bio)
++{
++	sector_t sector;
++	unsigned chunk_sects;
++	unsigned sectors;
++
++	if (unlikely(bio->bi_opf & REQ_PREFLUSH)
++	    && md_flush_request(mddev, bio))
++		return true;
++
++	if (unlikely((bio_op(bio) == REQ_OP_DISCARD))) {
++		raid0_handle_discard(mddev, bio);
++		return true;
++	}
++
++	sector = bio->bi_iter.bi_sector;
++	chunk_sects = mddev->chunk_sectors;
++
++	sectors = chunk_sects -
++		(likely(is_power_of_2(chunk_sects))
++		 ? (sector & (chunk_sects-1))
++		 : sector_div(sector, chunk_sects));
++
++	if (sectors < bio_sectors(bio)) {
++		struct bio *split = bio_split(bio, sectors, GFP_NOIO,
++					      &mddev->bio_set);
++		bio_chain(split, bio);
++		raid0_map_submit_bio(mddev, bio);
++		bio = split;
++	}
++
++	raid0_map_submit_bio(mddev, bio);
+ 	return true;
+ }
+ 
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 5051149e27bbe..0578bcda7c6b7 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1322,6 +1322,25 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
+ 	}
+ }
+ 
++static struct md_rdev *dereference_rdev_and_rrdev(struct raid10_info *mirror,
++						  struct md_rdev **prrdev)
++{
++	struct md_rdev *rdev, *rrdev;
++
++	rrdev = rcu_dereference(mirror->replacement);
++	/*
++	 * Read the replacement first to prevent reading both rdev and
++	 * replacement as NULL while the replacement replaces rdev.
++	 */
++	smp_mb();
++	rdev = rcu_dereference(mirror->rdev);
++	if (rdev == rrdev)
++		rrdev = NULL;
++
++	*prrdev = rrdev;
++	return rdev;
++}
++
+ static void wait_blocked_dev(struct mddev *mddev, struct r10bio *r10_bio)
+ {
+ 	int i;
+@@ -1332,11 +1351,9 @@ retry_wait:
+ 	blocked_rdev = NULL;
+ 	rcu_read_lock();
+ 	for (i = 0; i < conf->copies; i++) {
+-		struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
+-		struct md_rdev *rrdev = rcu_dereference(
+-			conf->mirrors[i].replacement);
+-		if (rdev == rrdev)
+-			rrdev = NULL;
++		struct md_rdev *rdev, *rrdev;
++
++		rdev = dereference_rdev_and_rrdev(&conf->mirrors[i], &rrdev);
+ 		if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
+ 			atomic_inc(&rdev->nr_pending);
+ 			blocked_rdev = rdev;
+@@ -1465,15 +1482,7 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
+ 		int d = r10_bio->devs[i].devnum;
+ 		struct md_rdev *rdev, *rrdev;
+ 
+-		rrdev = rcu_dereference(conf->mirrors[d].replacement);
+-		/*
+-		 * Read replacement first to prevent reading both rdev and
+-		 * replacement as NULL during replacement replace rdev.
+-		 */
+-		smp_mb();
+-		rdev = rcu_dereference(conf->mirrors[d].rdev);
+-		if (rdev == rrdev)
+-			rrdev = NULL;
++		rdev = dereference_rdev_and_rrdev(&conf->mirrors[d], &rrdev);
+ 		if (rdev && (test_bit(Faulty, &rdev->flags)))
+ 			rdev = NULL;
+ 		if (rrdev && (test_bit(Faulty, &rrdev->flags)))
+@@ -1780,10 +1789,9 @@ retry_discard:
+ 	 */
+ 	rcu_read_lock();
+ 	for (disk = 0; disk < geo->raid_disks; disk++) {
+-		struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
+-		struct md_rdev *rrdev = rcu_dereference(
+-			conf->mirrors[disk].replacement);
++		struct md_rdev *rdev, *rrdev;
+ 
++		rdev = dereference_rdev_and_rrdev(&conf->mirrors[disk], &rrdev);
+ 		r10_bio->devs[disk].bio = NULL;
+ 		r10_bio->devs[disk].repl_bio = NULL;
+ 
+diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
+index 47ba7d9e81e18..8b3fc484fd758 100644
+--- a/drivers/md/raid5-cache.c
++++ b/drivers/md/raid5-cache.c
+@@ -1260,14 +1260,13 @@ static void r5l_log_flush_endio(struct bio *bio)
+ 
+ 	if (bio->bi_status)
+ 		md_error(log->rdev->mddev, log->rdev);
++	bio_uninit(bio);
+ 
+ 	spin_lock_irqsave(&log->io_list_lock, flags);
+ 	list_for_each_entry(io, &log->flushing_ios, log_sibling)
+ 		r5l_io_run_stripes(io);
+ 	list_splice_tail_init(&log->flushing_ios, &log->finished_ios);
+ 	spin_unlock_irqrestore(&log->io_list_lock, flags);
+-
+-	bio_uninit(bio);
+ }
+ 
+ /*
+@@ -3168,12 +3167,15 @@ void r5l_exit_log(struct r5conf *conf)
+ {
+ 	struct r5l_log *log = conf->log;
+ 
+-	/* Ensure disable_writeback_work wakes up and exits */
+-	wake_up(&conf->mddev->sb_wait);
+-	flush_work(&log->disable_writeback_work);
+ 	md_unregister_thread(&log->reclaim_thread);
+ 
++	/*
++	 * 'reconfig_mutex' is held by the caller, set 'conf->log' to NULL to
++	 * ensure disable_writeback_work wakes up and exits.
++	 */
+ 	conf->log = NULL;
++	wake_up(&conf->mddev->sb_wait);
++	flush_work(&log->disable_writeback_work);
+ 
+ 	mempool_exit(&log->meta_pool);
+ 	bioset_exit(&log->bs);
+diff --git a/drivers/media/cec/core/cec-adap.c b/drivers/media/cec/core/cec-adap.c
+index 241b1621b197c..09ca83c233299 100644
+--- a/drivers/media/cec/core/cec-adap.c
++++ b/drivers/media/cec/core/cec-adap.c
+@@ -385,8 +385,8 @@ static void cec_data_cancel(struct cec_data *data, u8 tx_status, u8 rx_status)
+ 	cec_queue_msg_monitor(adap, &data->msg, 1);
+ 
+ 	if (!data->blocking && data->msg.sequence)
+-		/* Allow drivers to process the message first */
+-		call_op(adap, received, &data->msg);
++		/* Allow drivers to react to a canceled transmit */
++		call_void_op(adap, adap_nb_transmit_canceled, &data->msg);
+ 
+ 	cec_data_completed(data);
+ }
+@@ -1348,7 +1348,7 @@ static void cec_adap_unconfigure(struct cec_adapter *adap)
+ 	cec_flush(adap);
+ 	wake_up_interruptible(&adap->kthread_waitq);
+ 	cec_post_state_event(adap);
+-	call_void_op(adap, adap_configured, false);
++	call_void_op(adap, adap_unconfigured);
+ }
+ 
+ /*
+@@ -1539,7 +1539,7 @@ configured:
+ 	adap->kthread_config = NULL;
+ 	complete(&adap->config_completion);
+ 	mutex_unlock(&adap->lock);
+-	call_void_op(adap, adap_configured, true);
++	call_void_op(adap, configured);
+ 	return 0;
+ 
+ unconfigure:
+diff --git a/drivers/media/dvb-frontends/ascot2e.c b/drivers/media/dvb-frontends/ascot2e.c
+index 9b00b56230b61..cf8e5f1bd1018 100644
+--- a/drivers/media/dvb-frontends/ascot2e.c
++++ b/drivers/media/dvb-frontends/ascot2e.c
+@@ -533,7 +533,7 @@ struct dvb_frontend *ascot2e_attach(struct dvb_frontend *fe,
+ 		priv->i2c_address, priv->i2c);
+ 	return fe;
+ }
+-EXPORT_SYMBOL(ascot2e_attach);
++EXPORT_SYMBOL_GPL(ascot2e_attach);
+ 
+ MODULE_DESCRIPTION("Sony ASCOT2E terr/cab tuner driver");
+ MODULE_AUTHOR("info@netup.ru");
+diff --git a/drivers/media/dvb-frontends/atbm8830.c b/drivers/media/dvb-frontends/atbm8830.c
+index bdd16b9c58244..778c865085bf9 100644
+--- a/drivers/media/dvb-frontends/atbm8830.c
++++ b/drivers/media/dvb-frontends/atbm8830.c
+@@ -489,7 +489,7 @@ error_out:
+ 	return NULL;
+ 
+ }
+-EXPORT_SYMBOL(atbm8830_attach);
++EXPORT_SYMBOL_GPL(atbm8830_attach);
+ 
+ MODULE_DESCRIPTION("AltoBeam ATBM8830/8831 GB20600 demodulator driver");
+ MODULE_AUTHOR("David T. L. Wong <davidtlwong@gmail.com>");
+diff --git a/drivers/media/dvb-frontends/au8522_dig.c b/drivers/media/dvb-frontends/au8522_dig.c
+index 78cafdf279618..230436bf6cbd9 100644
+--- a/drivers/media/dvb-frontends/au8522_dig.c
++++ b/drivers/media/dvb-frontends/au8522_dig.c
+@@ -879,7 +879,7 @@ error:
+ 	au8522_release_state(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(au8522_attach);
++EXPORT_SYMBOL_GPL(au8522_attach);
+ 
+ static const struct dvb_frontend_ops au8522_ops = {
+ 	.delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B },
+diff --git a/drivers/media/dvb-frontends/bcm3510.c b/drivers/media/dvb-frontends/bcm3510.c
+index 68b92b4419cff..b3f5c49accafd 100644
+--- a/drivers/media/dvb-frontends/bcm3510.c
++++ b/drivers/media/dvb-frontends/bcm3510.c
+@@ -835,7 +835,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(bcm3510_attach);
++EXPORT_SYMBOL_GPL(bcm3510_attach);
+ 
+ static const struct dvb_frontend_ops bcm3510_ops = {
+ 	.delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B },
+diff --git a/drivers/media/dvb-frontends/cx22700.c b/drivers/media/dvb-frontends/cx22700.c
+index b39ff516271b2..1d04c0a652b26 100644
+--- a/drivers/media/dvb-frontends/cx22700.c
++++ b/drivers/media/dvb-frontends/cx22700.c
+@@ -432,4 +432,4 @@ MODULE_DESCRIPTION("Conexant CX22700 DVB-T Demodulator driver");
+ MODULE_AUTHOR("Holger Waechtler");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(cx22700_attach);
++EXPORT_SYMBOL_GPL(cx22700_attach);
+diff --git a/drivers/media/dvb-frontends/cx22702.c b/drivers/media/dvb-frontends/cx22702.c
+index cc6acbf6393d4..61ad34b7004b5 100644
+--- a/drivers/media/dvb-frontends/cx22702.c
++++ b/drivers/media/dvb-frontends/cx22702.c
+@@ -604,7 +604,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(cx22702_attach);
++EXPORT_SYMBOL_GPL(cx22702_attach);
+ 
+ static const struct dvb_frontend_ops cx22702_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/cx24110.c b/drivers/media/dvb-frontends/cx24110.c
+index 6f99d6a27be2d..9aeea089756fe 100644
+--- a/drivers/media/dvb-frontends/cx24110.c
++++ b/drivers/media/dvb-frontends/cx24110.c
+@@ -653,4 +653,4 @@ MODULE_DESCRIPTION("Conexant CX24110 DVB-S Demodulator driver");
+ MODULE_AUTHOR("Peter Hettkamp");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(cx24110_attach);
++EXPORT_SYMBOL_GPL(cx24110_attach);
+diff --git a/drivers/media/dvb-frontends/cx24113.c b/drivers/media/dvb-frontends/cx24113.c
+index dd55d314bf9af..203cb6b3f941b 100644
+--- a/drivers/media/dvb-frontends/cx24113.c
++++ b/drivers/media/dvb-frontends/cx24113.c
+@@ -590,7 +590,7 @@ error:
+ 
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(cx24113_attach);
++EXPORT_SYMBOL_GPL(cx24113_attach);
+ 
+ module_param(debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Activates frontend debugging (default:0)");
+diff --git a/drivers/media/dvb-frontends/cx24116.c b/drivers/media/dvb-frontends/cx24116.c
+index ea8264ccbb4e8..8b978a9f74a4e 100644
+--- a/drivers/media/dvb-frontends/cx24116.c
++++ b/drivers/media/dvb-frontends/cx24116.c
+@@ -1133,7 +1133,7 @@ struct dvb_frontend *cx24116_attach(const struct cx24116_config *config,
+ 	state->frontend.demodulator_priv = state;
+ 	return &state->frontend;
+ }
+-EXPORT_SYMBOL(cx24116_attach);
++EXPORT_SYMBOL_GPL(cx24116_attach);
+ 
+ /*
+  * Initialise or wake up device
+diff --git a/drivers/media/dvb-frontends/cx24120.c b/drivers/media/dvb-frontends/cx24120.c
+index d8acd582c7111..44515fdbe91d4 100644
+--- a/drivers/media/dvb-frontends/cx24120.c
++++ b/drivers/media/dvb-frontends/cx24120.c
+@@ -305,7 +305,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(cx24120_attach);
++EXPORT_SYMBOL_GPL(cx24120_attach);
+ 
+ static int cx24120_test_rom(struct cx24120_state *state)
+ {
+@@ -973,7 +973,9 @@ static void cx24120_set_clock_ratios(struct dvb_frontend *fe)
+ 	cmd.arg[8] = (clock_ratios_table[idx].rate >> 8) & 0xff;
+ 	cmd.arg[9] = (clock_ratios_table[idx].rate >> 0) & 0xff;
+ 
+-	cx24120_message_send(state, &cmd);
++	ret = cx24120_message_send(state, &cmd);
++	if (ret != 0)
++		return;
+ 
+ 	/* Calculate ber window rates for stat work */
+ 	cx24120_calculate_ber_window(state, clock_ratios_table[idx].rate);
+diff --git a/drivers/media/dvb-frontends/cx24123.c b/drivers/media/dvb-frontends/cx24123.c
+index 3d84ee17e54c6..539889e638ccc 100644
+--- a/drivers/media/dvb-frontends/cx24123.c
++++ b/drivers/media/dvb-frontends/cx24123.c
+@@ -1096,7 +1096,7 @@ error:
+ 
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(cx24123_attach);
++EXPORT_SYMBOL_GPL(cx24123_attach);
+ 
+ static const struct dvb_frontend_ops cx24123_ops = {
+ 	.delsys = { SYS_DVBS },
+diff --git a/drivers/media/dvb-frontends/cxd2820r_core.c b/drivers/media/dvb-frontends/cxd2820r_core.c
+index d7ee294c68334..7feb08dccfa1c 100644
+--- a/drivers/media/dvb-frontends/cxd2820r_core.c
++++ b/drivers/media/dvb-frontends/cxd2820r_core.c
+@@ -536,7 +536,7 @@ struct dvb_frontend *cxd2820r_attach(const struct cxd2820r_config *config,
+ 
+ 	return pdata.get_dvb_frontend(client);
+ }
+-EXPORT_SYMBOL(cxd2820r_attach);
++EXPORT_SYMBOL_GPL(cxd2820r_attach);
+ 
+ static struct dvb_frontend *cxd2820r_get_dvb_frontend(struct i2c_client *client)
+ {
+diff --git a/drivers/media/dvb-frontends/cxd2841er.c b/drivers/media/dvb-frontends/cxd2841er.c
+index 5431f922f55e4..e9d1eef40c627 100644
+--- a/drivers/media/dvb-frontends/cxd2841er.c
++++ b/drivers/media/dvb-frontends/cxd2841er.c
+@@ -3930,14 +3930,14 @@ struct dvb_frontend *cxd2841er_attach_s(struct cxd2841er_config *cfg,
+ {
+ 	return cxd2841er_attach(cfg, i2c, SYS_DVBS);
+ }
+-EXPORT_SYMBOL(cxd2841er_attach_s);
++EXPORT_SYMBOL_GPL(cxd2841er_attach_s);
+ 
+ struct dvb_frontend *cxd2841er_attach_t_c(struct cxd2841er_config *cfg,
+ 					struct i2c_adapter *i2c)
+ {
+ 	return cxd2841er_attach(cfg, i2c, 0);
+ }
+-EXPORT_SYMBOL(cxd2841er_attach_t_c);
++EXPORT_SYMBOL_GPL(cxd2841er_attach_t_c);
+ 
+ static const struct dvb_frontend_ops cxd2841er_dvbs_s2_ops = {
+ 	.delsys = { SYS_DVBS, SYS_DVBS2 },
+diff --git a/drivers/media/dvb-frontends/cxd2880/cxd2880_top.c b/drivers/media/dvb-frontends/cxd2880/cxd2880_top.c
+index d5b1b3788e392..09d31c368741d 100644
+--- a/drivers/media/dvb-frontends/cxd2880/cxd2880_top.c
++++ b/drivers/media/dvb-frontends/cxd2880/cxd2880_top.c
+@@ -1950,7 +1950,7 @@ struct dvb_frontend *cxd2880_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(cxd2880_attach);
++EXPORT_SYMBOL_GPL(cxd2880_attach);
+ 
+ MODULE_DESCRIPTION("Sony CXD2880 DVB-T2/T tuner + demod driver");
+ MODULE_AUTHOR("Sony Semiconductor Solutions Corporation");
+diff --git a/drivers/media/dvb-frontends/dib0070.c b/drivers/media/dvb-frontends/dib0070.c
+index cafb41dba861c..9a8e7cdd2a247 100644
+--- a/drivers/media/dvb-frontends/dib0070.c
++++ b/drivers/media/dvb-frontends/dib0070.c
+@@ -762,7 +762,7 @@ free_mem:
+ 	fe->tuner_priv = NULL;
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(dib0070_attach);
++EXPORT_SYMBOL_GPL(dib0070_attach);
+ 
+ MODULE_AUTHOR("Patrick Boettcher <patrick.boettcher@posteo.de>");
+ MODULE_DESCRIPTION("Driver for the DiBcom 0070 base-band RF Tuner");
+diff --git a/drivers/media/dvb-frontends/dib0090.c b/drivers/media/dvb-frontends/dib0090.c
+index 903da33642dff..c958bcff026ec 100644
+--- a/drivers/media/dvb-frontends/dib0090.c
++++ b/drivers/media/dvb-frontends/dib0090.c
+@@ -2634,7 +2634,7 @@ struct dvb_frontend *dib0090_register(struct dvb_frontend *fe, struct i2c_adapte
+ 	return NULL;
+ }
+ 
+-EXPORT_SYMBOL(dib0090_register);
++EXPORT_SYMBOL_GPL(dib0090_register);
+ 
+ struct dvb_frontend *dib0090_fw_register(struct dvb_frontend *fe, struct i2c_adapter *i2c, const struct dib0090_config *config)
+ {
+@@ -2660,7 +2660,7 @@ free_mem:
+ 	fe->tuner_priv = NULL;
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(dib0090_fw_register);
++EXPORT_SYMBOL_GPL(dib0090_fw_register);
+ 
+ MODULE_AUTHOR("Patrick Boettcher <patrick.boettcher@posteo.de>");
+ MODULE_AUTHOR("Olivier Grenie <olivier.grenie@parrot.com>");
+diff --git a/drivers/media/dvb-frontends/dib3000mb.c b/drivers/media/dvb-frontends/dib3000mb.c
+index a6c2fc4586eb3..c598b2a633256 100644
+--- a/drivers/media/dvb-frontends/dib3000mb.c
++++ b/drivers/media/dvb-frontends/dib3000mb.c
+@@ -815,4 +815,4 @@ MODULE_AUTHOR(DRIVER_AUTHOR);
+ MODULE_DESCRIPTION(DRIVER_DESC);
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(dib3000mb_attach);
++EXPORT_SYMBOL_GPL(dib3000mb_attach);
+diff --git a/drivers/media/dvb-frontends/dib3000mc.c b/drivers/media/dvb-frontends/dib3000mc.c
+index 2e11a246aae0d..c2fca8289abae 100644
+--- a/drivers/media/dvb-frontends/dib3000mc.c
++++ b/drivers/media/dvb-frontends/dib3000mc.c
+@@ -935,7 +935,7 @@ error:
+ 	kfree(st);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(dib3000mc_attach);
++EXPORT_SYMBOL_GPL(dib3000mc_attach);
+ 
+ static const struct dvb_frontend_ops dib3000mc_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/dib7000m.c b/drivers/media/dvb-frontends/dib7000m.c
+index 97ce97789c9e3..fdb22f32e3a11 100644
+--- a/drivers/media/dvb-frontends/dib7000m.c
++++ b/drivers/media/dvb-frontends/dib7000m.c
+@@ -1434,7 +1434,7 @@ error:
+ 	kfree(st);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(dib7000m_attach);
++EXPORT_SYMBOL_GPL(dib7000m_attach);
+ 
+ static const struct dvb_frontend_ops dib7000m_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/dib7000p.c b/drivers/media/dvb-frontends/dib7000p.c
+index a90d2f51868ff..d1e53de5206ae 100644
+--- a/drivers/media/dvb-frontends/dib7000p.c
++++ b/drivers/media/dvb-frontends/dib7000p.c
+@@ -497,7 +497,7 @@ static int dib7000p_update_pll(struct dvb_frontend *fe, struct dibx000_bandwidth
+ 	prediv = reg_1856 & 0x3f;
+ 	loopdiv = (reg_1856 >> 6) & 0x3f;
+ 
+-	if ((bw != NULL) && (bw->pll_prediv != prediv || bw->pll_ratio != loopdiv)) {
++	if (loopdiv && bw && (bw->pll_prediv != prediv || bw->pll_ratio != loopdiv)) {
+ 		dprintk("Updating pll (prediv: old =  %d new = %d ; loopdiv : old = %d new = %d)\n", prediv, bw->pll_prediv, loopdiv, bw->pll_ratio);
+ 		reg_1856 &= 0xf000;
+ 		reg_1857 = dib7000p_read_word(state, 1857);
+@@ -2822,7 +2822,7 @@ void *dib7000p_attach(struct dib7000p_ops *ops)
+ 
+ 	return ops;
+ }
+-EXPORT_SYMBOL(dib7000p_attach);
++EXPORT_SYMBOL_GPL(dib7000p_attach);
+ 
+ static const struct dvb_frontend_ops dib7000p_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/dib8000.c b/drivers/media/dvb-frontends/dib8000.c
+index fe19d127abb3f..301d8eca7a6f9 100644
+--- a/drivers/media/dvb-frontends/dib8000.c
++++ b/drivers/media/dvb-frontends/dib8000.c
+@@ -4527,7 +4527,7 @@ void *dib8000_attach(struct dib8000_ops *ops)
+ 
+ 	return ops;
+ }
+-EXPORT_SYMBOL(dib8000_attach);
++EXPORT_SYMBOL_GPL(dib8000_attach);
+ 
+ MODULE_AUTHOR("Olivier Grenie <Olivier.Grenie@parrot.com, Patrick Boettcher <patrick.boettcher@posteo.de>");
+ MODULE_DESCRIPTION("Driver for the DiBcom 8000 ISDB-T demodulator");
+diff --git a/drivers/media/dvb-frontends/dib9000.c b/drivers/media/dvb-frontends/dib9000.c
+index 914ca820c174b..6f81890b31eeb 100644
+--- a/drivers/media/dvb-frontends/dib9000.c
++++ b/drivers/media/dvb-frontends/dib9000.c
+@@ -2546,7 +2546,7 @@ error:
+ 	kfree(st);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(dib9000_attach);
++EXPORT_SYMBOL_GPL(dib9000_attach);
+ 
+ static const struct dvb_frontend_ops dib9000_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/drx39xyj/drxj.c b/drivers/media/dvb-frontends/drx39xyj/drxj.c
+index 68f4e8b5a0abb..a738573c8cd7a 100644
+--- a/drivers/media/dvb-frontends/drx39xyj/drxj.c
++++ b/drivers/media/dvb-frontends/drx39xyj/drxj.c
+@@ -12372,7 +12372,7 @@ error:
+ 
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(drx39xxj_attach);
++EXPORT_SYMBOL_GPL(drx39xxj_attach);
+ 
+ static const struct dvb_frontend_ops drx39xxj_ops = {
+ 	.delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B },
+diff --git a/drivers/media/dvb-frontends/drxd_hard.c b/drivers/media/dvb-frontends/drxd_hard.c
+index 9860cae65f1cf..6a531937f4bbb 100644
+--- a/drivers/media/dvb-frontends/drxd_hard.c
++++ b/drivers/media/dvb-frontends/drxd_hard.c
+@@ -2939,7 +2939,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(drxd_attach);
++EXPORT_SYMBOL_GPL(drxd_attach);
+ 
+ MODULE_DESCRIPTION("DRXD driver");
+ MODULE_AUTHOR("Micronas");
+diff --git a/drivers/media/dvb-frontends/drxk_hard.c b/drivers/media/dvb-frontends/drxk_hard.c
+index 3301ef75d4417..1acdd204c25ce 100644
+--- a/drivers/media/dvb-frontends/drxk_hard.c
++++ b/drivers/media/dvb-frontends/drxk_hard.c
+@@ -6833,7 +6833,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(drxk_attach);
++EXPORT_SYMBOL_GPL(drxk_attach);
+ 
+ MODULE_DESCRIPTION("DRX-K driver");
+ MODULE_AUTHOR("Ralph Metzler");
+diff --git a/drivers/media/dvb-frontends/ds3000.c b/drivers/media/dvb-frontends/ds3000.c
+index 20fcf31af1658..515aa7c7baf2a 100644
+--- a/drivers/media/dvb-frontends/ds3000.c
++++ b/drivers/media/dvb-frontends/ds3000.c
+@@ -859,7 +859,7 @@ struct dvb_frontend *ds3000_attach(const struct ds3000_config *config,
+ 	ds3000_set_voltage(&state->frontend, SEC_VOLTAGE_OFF);
+ 	return &state->frontend;
+ }
+-EXPORT_SYMBOL(ds3000_attach);
++EXPORT_SYMBOL_GPL(ds3000_attach);
+ 
+ static int ds3000_set_carrier_offset(struct dvb_frontend *fe,
+ 					s32 carrier_offset_khz)
+diff --git a/drivers/media/dvb-frontends/dvb-pll.c b/drivers/media/dvb-frontends/dvb-pll.c
+index 90cb41eacf98c..ef697ab6bc2e5 100644
+--- a/drivers/media/dvb-frontends/dvb-pll.c
++++ b/drivers/media/dvb-frontends/dvb-pll.c
+@@ -866,7 +866,7 @@ out:
+ 
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(dvb_pll_attach);
++EXPORT_SYMBOL_GPL(dvb_pll_attach);
+ 
+ 
+ static int
+diff --git a/drivers/media/dvb-frontends/ec100.c b/drivers/media/dvb-frontends/ec100.c
+index 03bd80666cf83..2ad0a3c2f7567 100644
+--- a/drivers/media/dvb-frontends/ec100.c
++++ b/drivers/media/dvb-frontends/ec100.c
+@@ -299,7 +299,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(ec100_attach);
++EXPORT_SYMBOL_GPL(ec100_attach);
+ 
+ static const struct dvb_frontend_ops ec100_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/helene.c b/drivers/media/dvb-frontends/helene.c
+index 68c1a3e0e2ba5..f127adee3ebb7 100644
+--- a/drivers/media/dvb-frontends/helene.c
++++ b/drivers/media/dvb-frontends/helene.c
+@@ -1025,7 +1025,7 @@ struct dvb_frontend *helene_attach_s(struct dvb_frontend *fe,
+ 			priv->i2c_address, priv->i2c);
+ 	return fe;
+ }
+-EXPORT_SYMBOL(helene_attach_s);
++EXPORT_SYMBOL_GPL(helene_attach_s);
+ 
+ struct dvb_frontend *helene_attach(struct dvb_frontend *fe,
+ 		const struct helene_config *config,
+@@ -1061,7 +1061,7 @@ struct dvb_frontend *helene_attach(struct dvb_frontend *fe,
+ 			priv->i2c_address, priv->i2c);
+ 	return fe;
+ }
+-EXPORT_SYMBOL(helene_attach);
++EXPORT_SYMBOL_GPL(helene_attach);
+ 
+ static int helene_probe(struct i2c_client *client)
+ {
+diff --git a/drivers/media/dvb-frontends/horus3a.c b/drivers/media/dvb-frontends/horus3a.c
+index 24bf5cbcc1846..0330b78a5b3f2 100644
+--- a/drivers/media/dvb-frontends/horus3a.c
++++ b/drivers/media/dvb-frontends/horus3a.c
+@@ -395,7 +395,7 @@ struct dvb_frontend *horus3a_attach(struct dvb_frontend *fe,
+ 		priv->i2c_address, priv->i2c);
+ 	return fe;
+ }
+-EXPORT_SYMBOL(horus3a_attach);
++EXPORT_SYMBOL_GPL(horus3a_attach);
+ 
+ MODULE_DESCRIPTION("Sony HORUS3A satellite tuner driver");
+ MODULE_AUTHOR("Sergey Kozlov <serjk@netup.ru>");
+diff --git a/drivers/media/dvb-frontends/isl6405.c b/drivers/media/dvb-frontends/isl6405.c
+index 2cd69b4ff82cb..7d28a743f97eb 100644
+--- a/drivers/media/dvb-frontends/isl6405.c
++++ b/drivers/media/dvb-frontends/isl6405.c
+@@ -141,7 +141,7 @@ struct dvb_frontend *isl6405_attach(struct dvb_frontend *fe, struct i2c_adapter
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(isl6405_attach);
++EXPORT_SYMBOL_GPL(isl6405_attach);
+ 
+ MODULE_DESCRIPTION("Driver for lnb supply and control ic isl6405");
+ MODULE_AUTHOR("Hartmut Hackmann & Oliver Endriss");
+diff --git a/drivers/media/dvb-frontends/isl6421.c b/drivers/media/dvb-frontends/isl6421.c
+index 43b0dfc6f453e..2e9f6f12f849e 100644
+--- a/drivers/media/dvb-frontends/isl6421.c
++++ b/drivers/media/dvb-frontends/isl6421.c
+@@ -213,7 +213,7 @@ struct dvb_frontend *isl6421_attach(struct dvb_frontend *fe, struct i2c_adapter
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(isl6421_attach);
++EXPORT_SYMBOL_GPL(isl6421_attach);
+ 
+ MODULE_DESCRIPTION("Driver for lnb supply and control ic isl6421");
+ MODULE_AUTHOR("Andrew de Quincey & Oliver Endriss");
+diff --git a/drivers/media/dvb-frontends/isl6423.c b/drivers/media/dvb-frontends/isl6423.c
+index 8cd1bb88ce6e7..a0d0a38340574 100644
+--- a/drivers/media/dvb-frontends/isl6423.c
++++ b/drivers/media/dvb-frontends/isl6423.c
+@@ -289,7 +289,7 @@ exit:
+ 	fe->sec_priv = NULL;
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(isl6423_attach);
++EXPORT_SYMBOL_GPL(isl6423_attach);
+ 
+ MODULE_DESCRIPTION("ISL6423 SEC");
+ MODULE_AUTHOR("Manu Abraham");
+diff --git a/drivers/media/dvb-frontends/itd1000.c b/drivers/media/dvb-frontends/itd1000.c
+index 1b33478653d16..f8f362f50e78d 100644
+--- a/drivers/media/dvb-frontends/itd1000.c
++++ b/drivers/media/dvb-frontends/itd1000.c
+@@ -389,7 +389,7 @@ struct dvb_frontend *itd1000_attach(struct dvb_frontend *fe, struct i2c_adapter
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(itd1000_attach);
++EXPORT_SYMBOL_GPL(itd1000_attach);
+ 
+ MODULE_AUTHOR("Patrick Boettcher <pb@linuxtv.org>");
+ MODULE_DESCRIPTION("Integrant ITD1000 driver");
+diff --git a/drivers/media/dvb-frontends/ix2505v.c b/drivers/media/dvb-frontends/ix2505v.c
+index 73f27105c139d..3212e333d472b 100644
+--- a/drivers/media/dvb-frontends/ix2505v.c
++++ b/drivers/media/dvb-frontends/ix2505v.c
+@@ -302,7 +302,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(ix2505v_attach);
++EXPORT_SYMBOL_GPL(ix2505v_attach);
+ 
+ module_param_named(debug, ix2505v_debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/l64781.c b/drivers/media/dvb-frontends/l64781.c
+index c5106a1ea1cd0..fe5af2453d559 100644
+--- a/drivers/media/dvb-frontends/l64781.c
++++ b/drivers/media/dvb-frontends/l64781.c
+@@ -593,4 +593,4 @@ MODULE_DESCRIPTION("LSI L64781 DVB-T Demodulator driver");
+ MODULE_AUTHOR("Holger Waechtler, Marko Kohtala");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(l64781_attach);
++EXPORT_SYMBOL_GPL(l64781_attach);
+diff --git a/drivers/media/dvb-frontends/lg2160.c b/drivers/media/dvb-frontends/lg2160.c
+index f343066c297e2..fe700aa56bff3 100644
+--- a/drivers/media/dvb-frontends/lg2160.c
++++ b/drivers/media/dvb-frontends/lg2160.c
+@@ -1426,7 +1426,7 @@ struct dvb_frontend *lg2160_attach(const struct lg2160_config *config,
+ 
+ 	return &state->frontend;
+ }
+-EXPORT_SYMBOL(lg2160_attach);
++EXPORT_SYMBOL_GPL(lg2160_attach);
+ 
+ MODULE_DESCRIPTION("LG Electronics LG216x ATSC/MH Demodulator Driver");
+ MODULE_AUTHOR("Michael Krufky <mkrufky@linuxtv.org>");
+diff --git a/drivers/media/dvb-frontends/lgdt3305.c b/drivers/media/dvb-frontends/lgdt3305.c
+index 62d7439889196..60a97f1cc74e5 100644
+--- a/drivers/media/dvb-frontends/lgdt3305.c
++++ b/drivers/media/dvb-frontends/lgdt3305.c
+@@ -1148,7 +1148,7 @@ fail:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(lgdt3305_attach);
++EXPORT_SYMBOL_GPL(lgdt3305_attach);
+ 
+ static const struct dvb_frontend_ops lgdt3304_ops = {
+ 	.delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B },
+diff --git a/drivers/media/dvb-frontends/lgdt3306a.c b/drivers/media/dvb-frontends/lgdt3306a.c
+index 70258884126b0..2d7750649850c 100644
+--- a/drivers/media/dvb-frontends/lgdt3306a.c
++++ b/drivers/media/dvb-frontends/lgdt3306a.c
+@@ -1859,7 +1859,7 @@ fail:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(lgdt3306a_attach);
++EXPORT_SYMBOL_GPL(lgdt3306a_attach);
+ 
+ #ifdef DBG_DUMP
+ 
+diff --git a/drivers/media/dvb-frontends/lgdt330x.c b/drivers/media/dvb-frontends/lgdt330x.c
+index 83565209c3b1e..f87937f9a6654 100644
+--- a/drivers/media/dvb-frontends/lgdt330x.c
++++ b/drivers/media/dvb-frontends/lgdt330x.c
+@@ -927,7 +927,7 @@ struct dvb_frontend *lgdt330x_attach(const struct lgdt330x_config *_config,
+ 
+ 	return lgdt330x_get_dvb_frontend(client);
+ }
+-EXPORT_SYMBOL(lgdt330x_attach);
++EXPORT_SYMBOL_GPL(lgdt330x_attach);
+ 
+ static const struct dvb_frontend_ops lgdt3302_ops = {
+ 	.delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B },
+diff --git a/drivers/media/dvb-frontends/lgs8gxx.c b/drivers/media/dvb-frontends/lgs8gxx.c
+index 30014979b985b..ffaf60e16ecd4 100644
+--- a/drivers/media/dvb-frontends/lgs8gxx.c
++++ b/drivers/media/dvb-frontends/lgs8gxx.c
+@@ -1043,7 +1043,7 @@ error_out:
+ 	return NULL;
+ 
+ }
+-EXPORT_SYMBOL(lgs8gxx_attach);
++EXPORT_SYMBOL_GPL(lgs8gxx_attach);
+ 
+ MODULE_DESCRIPTION("Legend Silicon LGS8913/LGS8GXX DMB-TH demodulator driver");
+ MODULE_AUTHOR("David T. L. Wong <davidtlwong@gmail.com>");
+diff --git a/drivers/media/dvb-frontends/lnbh25.c b/drivers/media/dvb-frontends/lnbh25.c
+index 9ffe06cd787dd..41bec050642b5 100644
+--- a/drivers/media/dvb-frontends/lnbh25.c
++++ b/drivers/media/dvb-frontends/lnbh25.c
+@@ -173,7 +173,7 @@ struct dvb_frontend *lnbh25_attach(struct dvb_frontend *fe,
+ 		__func__, priv->i2c_address);
+ 	return fe;
+ }
+-EXPORT_SYMBOL(lnbh25_attach);
++EXPORT_SYMBOL_GPL(lnbh25_attach);
+ 
+ MODULE_DESCRIPTION("ST LNBH25 driver");
+ MODULE_AUTHOR("info@netup.ru");
+diff --git a/drivers/media/dvb-frontends/lnbp21.c b/drivers/media/dvb-frontends/lnbp21.c
+index e564974162d65..32593b1f75a38 100644
+--- a/drivers/media/dvb-frontends/lnbp21.c
++++ b/drivers/media/dvb-frontends/lnbp21.c
+@@ -155,7 +155,7 @@ struct dvb_frontend *lnbh24_attach(struct dvb_frontend *fe,
+ 	return lnbx2x_attach(fe, i2c, override_set, override_clear,
+ 							i2c_addr, LNBH24_TTX);
+ }
+-EXPORT_SYMBOL(lnbh24_attach);
++EXPORT_SYMBOL_GPL(lnbh24_attach);
+ 
+ struct dvb_frontend *lnbp21_attach(struct dvb_frontend *fe,
+ 				struct i2c_adapter *i2c, u8 override_set,
+@@ -164,7 +164,7 @@ struct dvb_frontend *lnbp21_attach(struct dvb_frontend *fe,
+ 	return lnbx2x_attach(fe, i2c, override_set, override_clear,
+ 							0x08, LNBP21_ISEL);
+ }
+-EXPORT_SYMBOL(lnbp21_attach);
++EXPORT_SYMBOL_GPL(lnbp21_attach);
+ 
+ MODULE_DESCRIPTION("Driver for lnb supply and control ic lnbp21, lnbh24");
+ MODULE_AUTHOR("Oliver Endriss, Igor M. Liplianin");
+diff --git a/drivers/media/dvb-frontends/lnbp22.c b/drivers/media/dvb-frontends/lnbp22.c
+index b8c7145d4cefe..cb4ea5d3fad4a 100644
+--- a/drivers/media/dvb-frontends/lnbp22.c
++++ b/drivers/media/dvb-frontends/lnbp22.c
+@@ -125,7 +125,7 @@ struct dvb_frontend *lnbp22_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(lnbp22_attach);
++EXPORT_SYMBOL_GPL(lnbp22_attach);
+ 
+ MODULE_DESCRIPTION("Driver for lnb supply and control ic lnbp22");
+ MODULE_AUTHOR("Dominik Kuhlen");
+diff --git a/drivers/media/dvb-frontends/m88ds3103.c b/drivers/media/dvb-frontends/m88ds3103.c
+index cf49ac56a37ed..cf037b61b226b 100644
+--- a/drivers/media/dvb-frontends/m88ds3103.c
++++ b/drivers/media/dvb-frontends/m88ds3103.c
+@@ -1695,7 +1695,7 @@ struct dvb_frontend *m88ds3103_attach(const struct m88ds3103_config *cfg,
+ 	*tuner_i2c_adapter = pdata.get_i2c_adapter(client);
+ 	return pdata.get_dvb_frontend(client);
+ }
+-EXPORT_SYMBOL(m88ds3103_attach);
++EXPORT_SYMBOL_GPL(m88ds3103_attach);
+ 
+ static const struct dvb_frontend_ops m88ds3103_ops = {
+ 	.delsys = {SYS_DVBS, SYS_DVBS2},
+diff --git a/drivers/media/dvb-frontends/m88rs2000.c b/drivers/media/dvb-frontends/m88rs2000.c
+index b294ba87e934f..2aa98203cd659 100644
+--- a/drivers/media/dvb-frontends/m88rs2000.c
++++ b/drivers/media/dvb-frontends/m88rs2000.c
+@@ -808,7 +808,7 @@ error:
+ 
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(m88rs2000_attach);
++EXPORT_SYMBOL_GPL(m88rs2000_attach);
+ 
+ MODULE_DESCRIPTION("M88RS2000 DVB-S Demodulator driver");
+ MODULE_AUTHOR("Malcolm Priestley tvboxspy@gmail.com");
+diff --git a/drivers/media/dvb-frontends/mb86a16.c b/drivers/media/dvb-frontends/mb86a16.c
+index d3e29937cf4cf..460821a986e53 100644
+--- a/drivers/media/dvb-frontends/mb86a16.c
++++ b/drivers/media/dvb-frontends/mb86a16.c
+@@ -1851,6 +1851,6 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(mb86a16_attach);
++EXPORT_SYMBOL_GPL(mb86a16_attach);
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Manu Abraham");
+diff --git a/drivers/media/dvb-frontends/mb86a20s.c b/drivers/media/dvb-frontends/mb86a20s.c
+index 125fed4891ba9..f8e4bbee5bd50 100644
+--- a/drivers/media/dvb-frontends/mb86a20s.c
++++ b/drivers/media/dvb-frontends/mb86a20s.c
+@@ -2078,7 +2078,7 @@ struct dvb_frontend *mb86a20s_attach(const struct mb86a20s_config *config,
+ 	dev_info(&i2c->dev, "Detected a Fujitsu mb86a20s frontend\n");
+ 	return &state->frontend;
+ }
+-EXPORT_SYMBOL(mb86a20s_attach);
++EXPORT_SYMBOL_GPL(mb86a20s_attach);
+ 
+ static const struct dvb_frontend_ops mb86a20s_ops = {
+ 	.delsys = { SYS_ISDBT },
+diff --git a/drivers/media/dvb-frontends/mt312.c b/drivers/media/dvb-frontends/mt312.c
+index d43a67045dbe7..fb867dd8a26be 100644
+--- a/drivers/media/dvb-frontends/mt312.c
++++ b/drivers/media/dvb-frontends/mt312.c
+@@ -827,7 +827,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(mt312_attach);
++EXPORT_SYMBOL_GPL(mt312_attach);
+ 
+ module_param(debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/mt352.c b/drivers/media/dvb-frontends/mt352.c
+index 399d5c519027e..1b2889f5cf67d 100644
+--- a/drivers/media/dvb-frontends/mt352.c
++++ b/drivers/media/dvb-frontends/mt352.c
+@@ -593,4 +593,4 @@ MODULE_DESCRIPTION("Zarlink MT352 DVB-T Demodulator driver");
+ MODULE_AUTHOR("Holger Waechtler, Daniel Mack, Antonio Mancuso");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(mt352_attach);
++EXPORT_SYMBOL_GPL(mt352_attach);
+diff --git a/drivers/media/dvb-frontends/nxt200x.c b/drivers/media/dvb-frontends/nxt200x.c
+index 200b6dbc75f81..1c549ada6ebf9 100644
+--- a/drivers/media/dvb-frontends/nxt200x.c
++++ b/drivers/media/dvb-frontends/nxt200x.c
+@@ -1216,5 +1216,5 @@ MODULE_DESCRIPTION("NXT200X (ATSC 8VSB & ITU-T J.83 AnnexB 64/256 QAM) Demodulat
+ MODULE_AUTHOR("Kirk Lapray, Michael Krufky, Jean-Francois Thibert, and Taylor Jacob");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(nxt200x_attach);
++EXPORT_SYMBOL_GPL(nxt200x_attach);
+ 
+diff --git a/drivers/media/dvb-frontends/nxt6000.c b/drivers/media/dvb-frontends/nxt6000.c
+index 136918f82dda0..e8d4940370ddf 100644
+--- a/drivers/media/dvb-frontends/nxt6000.c
++++ b/drivers/media/dvb-frontends/nxt6000.c
+@@ -621,4 +621,4 @@ MODULE_DESCRIPTION("NxtWave NXT6000 DVB-T demodulator driver");
+ MODULE_AUTHOR("Florian Schirmer");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(nxt6000_attach);
++EXPORT_SYMBOL_GPL(nxt6000_attach);
+diff --git a/drivers/media/dvb-frontends/or51132.c b/drivers/media/dvb-frontends/or51132.c
+index 24de1b1151583..144a1f25dec0a 100644
+--- a/drivers/media/dvb-frontends/or51132.c
++++ b/drivers/media/dvb-frontends/or51132.c
+@@ -605,4 +605,4 @@ MODULE_AUTHOR("Kirk Lapray");
+ MODULE_AUTHOR("Trent Piepho");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(or51132_attach);
++EXPORT_SYMBOL_GPL(or51132_attach);
+diff --git a/drivers/media/dvb-frontends/or51211.c b/drivers/media/dvb-frontends/or51211.c
+index ddcaea5c9941f..dc60482162c54 100644
+--- a/drivers/media/dvb-frontends/or51211.c
++++ b/drivers/media/dvb-frontends/or51211.c
+@@ -551,5 +551,5 @@ MODULE_DESCRIPTION("Oren OR51211 VSB [pcHDTV HD-2000] Demodulator Driver");
+ MODULE_AUTHOR("Kirk Lapray");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(or51211_attach);
++EXPORT_SYMBOL_GPL(or51211_attach);
+ 
+diff --git a/drivers/media/dvb-frontends/s5h1409.c b/drivers/media/dvb-frontends/s5h1409.c
+index 3089cc174a6f5..28b1dca077ead 100644
+--- a/drivers/media/dvb-frontends/s5h1409.c
++++ b/drivers/media/dvb-frontends/s5h1409.c
+@@ -981,7 +981,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(s5h1409_attach);
++EXPORT_SYMBOL_GPL(s5h1409_attach);
+ 
+ static const struct dvb_frontend_ops s5h1409_ops = {
+ 	.delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B },
+diff --git a/drivers/media/dvb-frontends/s5h1411.c b/drivers/media/dvb-frontends/s5h1411.c
+index 2563a72e98b70..fc48e659c2d8a 100644
+--- a/drivers/media/dvb-frontends/s5h1411.c
++++ b/drivers/media/dvb-frontends/s5h1411.c
+@@ -900,7 +900,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(s5h1411_attach);
++EXPORT_SYMBOL_GPL(s5h1411_attach);
+ 
+ static const struct dvb_frontend_ops s5h1411_ops = {
+ 	.delsys = { SYS_ATSC, SYS_DVBC_ANNEX_B },
+diff --git a/drivers/media/dvb-frontends/s5h1420.c b/drivers/media/dvb-frontends/s5h1420.c
+index 6bdec2898bc81..d700de1ea6c24 100644
+--- a/drivers/media/dvb-frontends/s5h1420.c
++++ b/drivers/media/dvb-frontends/s5h1420.c
+@@ -918,7 +918,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(s5h1420_attach);
++EXPORT_SYMBOL_GPL(s5h1420_attach);
+ 
+ static const struct dvb_frontend_ops s5h1420_ops = {
+ 	.delsys = { SYS_DVBS },
+diff --git a/drivers/media/dvb-frontends/s5h1432.c b/drivers/media/dvb-frontends/s5h1432.c
+index 956e8ee4b388e..ff5d3bdf3bc67 100644
+--- a/drivers/media/dvb-frontends/s5h1432.c
++++ b/drivers/media/dvb-frontends/s5h1432.c
+@@ -355,7 +355,7 @@ struct dvb_frontend *s5h1432_attach(const struct s5h1432_config *config,
+ 
+ 	return &state->frontend;
+ }
+-EXPORT_SYMBOL(s5h1432_attach);
++EXPORT_SYMBOL_GPL(s5h1432_attach);
+ 
+ static const struct dvb_frontend_ops s5h1432_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/s921.c b/drivers/media/dvb-frontends/s921.c
+index f118d8e641030..7e461ac159fc1 100644
+--- a/drivers/media/dvb-frontends/s921.c
++++ b/drivers/media/dvb-frontends/s921.c
+@@ -495,7 +495,7 @@ struct dvb_frontend *s921_attach(const struct s921_config *config,
+ 
+ 	return &state->frontend;
+ }
+-EXPORT_SYMBOL(s921_attach);
++EXPORT_SYMBOL_GPL(s921_attach);
+ 
+ static const struct dvb_frontend_ops s921_ops = {
+ 	.delsys = { SYS_ISDBT },
+diff --git a/drivers/media/dvb-frontends/si21xx.c b/drivers/media/dvb-frontends/si21xx.c
+index 2d29d2c4d434c..210ccd356e2bf 100644
+--- a/drivers/media/dvb-frontends/si21xx.c
++++ b/drivers/media/dvb-frontends/si21xx.c
+@@ -937,7 +937,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(si21xx_attach);
++EXPORT_SYMBOL_GPL(si21xx_attach);
+ 
+ module_param(debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/sp887x.c b/drivers/media/dvb-frontends/sp887x.c
+index 146e7f2dd3c5e..f59c0f96416b5 100644
+--- a/drivers/media/dvb-frontends/sp887x.c
++++ b/drivers/media/dvb-frontends/sp887x.c
+@@ -624,4 +624,4 @@ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+ MODULE_DESCRIPTION("Spase sp887x DVB-T demodulator driver");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(sp887x_attach);
++EXPORT_SYMBOL_GPL(sp887x_attach);
+diff --git a/drivers/media/dvb-frontends/stb0899_drv.c b/drivers/media/dvb-frontends/stb0899_drv.c
+index 4ee6c1e1e9f7d..2f4d8fb400cd6 100644
+--- a/drivers/media/dvb-frontends/stb0899_drv.c
++++ b/drivers/media/dvb-frontends/stb0899_drv.c
+@@ -1638,7 +1638,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(stb0899_attach);
++EXPORT_SYMBOL_GPL(stb0899_attach);
+ MODULE_PARM_DESC(verbose, "Set Verbosity level");
+ MODULE_AUTHOR("Manu Abraham");
+ MODULE_DESCRIPTION("STB0899 Multi-Std frontend");
+diff --git a/drivers/media/dvb-frontends/stb6000.c b/drivers/media/dvb-frontends/stb6000.c
+index 8c9800d577e03..d74e34677b925 100644
+--- a/drivers/media/dvb-frontends/stb6000.c
++++ b/drivers/media/dvb-frontends/stb6000.c
+@@ -232,7 +232,7 @@ struct dvb_frontend *stb6000_attach(struct dvb_frontend *fe, int addr,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(stb6000_attach);
++EXPORT_SYMBOL_GPL(stb6000_attach);
+ 
+ module_param(debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/stb6100.c b/drivers/media/dvb-frontends/stb6100.c
+index 698866c4f15a7..c5818a15a0d70 100644
+--- a/drivers/media/dvb-frontends/stb6100.c
++++ b/drivers/media/dvb-frontends/stb6100.c
+@@ -557,7 +557,7 @@ static void stb6100_release(struct dvb_frontend *fe)
+ 	kfree(state);
+ }
+ 
+-EXPORT_SYMBOL(stb6100_attach);
++EXPORT_SYMBOL_GPL(stb6100_attach);
+ MODULE_PARM_DESC(verbose, "Set Verbosity level");
+ 
+ MODULE_AUTHOR("Manu Abraham");
+diff --git a/drivers/media/dvb-frontends/stv0288.c b/drivers/media/dvb-frontends/stv0288.c
+index 3ae1f3a2f1420..a5581bd60f9e8 100644
+--- a/drivers/media/dvb-frontends/stv0288.c
++++ b/drivers/media/dvb-frontends/stv0288.c
+@@ -590,7 +590,7 @@ error:
+ 
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(stv0288_attach);
++EXPORT_SYMBOL_GPL(stv0288_attach);
+ 
+ module_param(debug_legacy_dish_switch, int, 0444);
+ MODULE_PARM_DESC(debug_legacy_dish_switch,
+diff --git a/drivers/media/dvb-frontends/stv0297.c b/drivers/media/dvb-frontends/stv0297.c
+index 6d5962d5697ac..9d4dbd99a5a79 100644
+--- a/drivers/media/dvb-frontends/stv0297.c
++++ b/drivers/media/dvb-frontends/stv0297.c
+@@ -710,4 +710,4 @@ MODULE_DESCRIPTION("ST STV0297 DVB-C Demodulator driver");
+ MODULE_AUTHOR("Dennis Noermann and Andrew de Quincey");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(stv0297_attach);
++EXPORT_SYMBOL_GPL(stv0297_attach);
+diff --git a/drivers/media/dvb-frontends/stv0299.c b/drivers/media/dvb-frontends/stv0299.c
+index b5263a0ee5aa5..da7ff2c2e8e55 100644
+--- a/drivers/media/dvb-frontends/stv0299.c
++++ b/drivers/media/dvb-frontends/stv0299.c
+@@ -752,4 +752,4 @@ MODULE_DESCRIPTION("ST STV0299 DVB Demodulator driver");
+ MODULE_AUTHOR("Ralph Metzler, Holger Waechtler, Peter Schildmann, Felix Domke, Andreas Oberritter, Andrew de Quincey, Kenneth Aafly");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(stv0299_attach);
++EXPORT_SYMBOL_GPL(stv0299_attach);
+diff --git a/drivers/media/dvb-frontends/stv0367.c b/drivers/media/dvb-frontends/stv0367.c
+index 95e376f23506f..04556b77c16c9 100644
+--- a/drivers/media/dvb-frontends/stv0367.c
++++ b/drivers/media/dvb-frontends/stv0367.c
+@@ -1750,7 +1750,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(stv0367ter_attach);
++EXPORT_SYMBOL_GPL(stv0367ter_attach);
+ 
+ static int stv0367cab_gate_ctrl(struct dvb_frontend *fe, int enable)
+ {
+@@ -2919,7 +2919,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(stv0367cab_attach);
++EXPORT_SYMBOL_GPL(stv0367cab_attach);
+ 
+ /*
+  * Functions for operation on Digital Devices hardware
+@@ -3340,7 +3340,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(stv0367ddb_attach);
++EXPORT_SYMBOL_GPL(stv0367ddb_attach);
+ 
+ MODULE_PARM_DESC(debug, "Set debug");
+ MODULE_PARM_DESC(i2c_debug, "Set i2c debug");
+diff --git a/drivers/media/dvb-frontends/stv0900_core.c b/drivers/media/dvb-frontends/stv0900_core.c
+index 212312d20ff62..e7b9b9b11d7df 100644
+--- a/drivers/media/dvb-frontends/stv0900_core.c
++++ b/drivers/media/dvb-frontends/stv0900_core.c
+@@ -1957,7 +1957,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(stv0900_attach);
++EXPORT_SYMBOL_GPL(stv0900_attach);
+ 
+ MODULE_PARM_DESC(debug, "Set debug");
+ 
+diff --git a/drivers/media/dvb-frontends/stv090x.c b/drivers/media/dvb-frontends/stv090x.c
+index a07dc5fdeb3d8..cc45139057ba8 100644
+--- a/drivers/media/dvb-frontends/stv090x.c
++++ b/drivers/media/dvb-frontends/stv090x.c
+@@ -5071,7 +5071,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(stv090x_attach);
++EXPORT_SYMBOL_GPL(stv090x_attach);
+ 
+ static const struct i2c_device_id stv090x_id_table[] = {
+ 	{"stv090x", 0},
+diff --git a/drivers/media/dvb-frontends/stv6110.c b/drivers/media/dvb-frontends/stv6110.c
+index 963f6a896102a..1cf9c095dbff0 100644
+--- a/drivers/media/dvb-frontends/stv6110.c
++++ b/drivers/media/dvb-frontends/stv6110.c
+@@ -427,7 +427,7 @@ struct dvb_frontend *stv6110_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(stv6110_attach);
++EXPORT_SYMBOL_GPL(stv6110_attach);
+ 
+ module_param(debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/stv6110x.c b/drivers/media/dvb-frontends/stv6110x.c
+index 11653f846c123..c678f47d2449c 100644
+--- a/drivers/media/dvb-frontends/stv6110x.c
++++ b/drivers/media/dvb-frontends/stv6110x.c
+@@ -467,7 +467,7 @@ const struct stv6110x_devctl *stv6110x_attach(struct dvb_frontend *fe,
+ 	dev_info(&stv6110x->i2c->dev, "Attaching STV6110x\n");
+ 	return stv6110x->devctl;
+ }
+-EXPORT_SYMBOL(stv6110x_attach);
++EXPORT_SYMBOL_GPL(stv6110x_attach);
+ 
+ static const struct i2c_device_id stv6110x_id_table[] = {
+ 	{"stv6110x", 0},
+diff --git a/drivers/media/dvb-frontends/tda10021.c b/drivers/media/dvb-frontends/tda10021.c
+index faa6e54b33729..462e12ab6bd14 100644
+--- a/drivers/media/dvb-frontends/tda10021.c
++++ b/drivers/media/dvb-frontends/tda10021.c
+@@ -523,4 +523,4 @@ MODULE_DESCRIPTION("Philips TDA10021 DVB-C demodulator driver");
+ MODULE_AUTHOR("Ralph Metzler, Holger Waechtler, Markus Schulz");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(tda10021_attach);
++EXPORT_SYMBOL_GPL(tda10021_attach);
+diff --git a/drivers/media/dvb-frontends/tda10023.c b/drivers/media/dvb-frontends/tda10023.c
+index 8f32edf6b700e..4c2541ecd7433 100644
+--- a/drivers/media/dvb-frontends/tda10023.c
++++ b/drivers/media/dvb-frontends/tda10023.c
+@@ -594,4 +594,4 @@ MODULE_DESCRIPTION("Philips TDA10023 DVB-C demodulator driver");
+ MODULE_AUTHOR("Georg Acher, Hartmut Birr");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(tda10023_attach);
++EXPORT_SYMBOL_GPL(tda10023_attach);
+diff --git a/drivers/media/dvb-frontends/tda10048.c b/drivers/media/dvb-frontends/tda10048.c
+index 0b3f6999515e3..f6d8a64762b99 100644
+--- a/drivers/media/dvb-frontends/tda10048.c
++++ b/drivers/media/dvb-frontends/tda10048.c
+@@ -1138,7 +1138,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(tda10048_attach);
++EXPORT_SYMBOL_GPL(tda10048_attach);
+ 
+ static const struct dvb_frontend_ops tda10048_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/dvb-frontends/tda1004x.c b/drivers/media/dvb-frontends/tda1004x.c
+index 83a798ca9b002..6f306db6c615f 100644
+--- a/drivers/media/dvb-frontends/tda1004x.c
++++ b/drivers/media/dvb-frontends/tda1004x.c
+@@ -1378,5 +1378,5 @@ MODULE_DESCRIPTION("Philips TDA10045H & TDA10046H DVB-T Demodulator");
+ MODULE_AUTHOR("Andrew de Quincey & Robert Schlabbach");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(tda10045_attach);
+-EXPORT_SYMBOL(tda10046_attach);
++EXPORT_SYMBOL_GPL(tda10045_attach);
++EXPORT_SYMBOL_GPL(tda10046_attach);
+diff --git a/drivers/media/dvb-frontends/tda10086.c b/drivers/media/dvb-frontends/tda10086.c
+index cdcf97664bba8..b449514ae5854 100644
+--- a/drivers/media/dvb-frontends/tda10086.c
++++ b/drivers/media/dvb-frontends/tda10086.c
+@@ -764,4 +764,4 @@ MODULE_DESCRIPTION("Philips TDA10086 DVB-S Demodulator");
+ MODULE_AUTHOR("Andrew de Quincey");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(tda10086_attach);
++EXPORT_SYMBOL_GPL(tda10086_attach);
+diff --git a/drivers/media/dvb-frontends/tda665x.c b/drivers/media/dvb-frontends/tda665x.c
+index 13e8969da7f89..346be5011fb73 100644
+--- a/drivers/media/dvb-frontends/tda665x.c
++++ b/drivers/media/dvb-frontends/tda665x.c
+@@ -227,7 +227,7 @@ struct dvb_frontend *tda665x_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(tda665x_attach);
++EXPORT_SYMBOL_GPL(tda665x_attach);
+ 
+ MODULE_DESCRIPTION("TDA665x driver");
+ MODULE_AUTHOR("Manu Abraham");
+diff --git a/drivers/media/dvb-frontends/tda8083.c b/drivers/media/dvb-frontends/tda8083.c
+index e3e1c3db2c856..44f53624557bc 100644
+--- a/drivers/media/dvb-frontends/tda8083.c
++++ b/drivers/media/dvb-frontends/tda8083.c
+@@ -481,4 +481,4 @@ MODULE_DESCRIPTION("Philips TDA8083 DVB-S Demodulator");
+ MODULE_AUTHOR("Ralph Metzler, Holger Waechtler");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(tda8083_attach);
++EXPORT_SYMBOL_GPL(tda8083_attach);
+diff --git a/drivers/media/dvb-frontends/tda8261.c b/drivers/media/dvb-frontends/tda8261.c
+index 0d576d41c67d8..8b06f92745dca 100644
+--- a/drivers/media/dvb-frontends/tda8261.c
++++ b/drivers/media/dvb-frontends/tda8261.c
+@@ -188,7 +188,7 @@ exit:
+ 	return NULL;
+ }
+ 
+-EXPORT_SYMBOL(tda8261_attach);
++EXPORT_SYMBOL_GPL(tda8261_attach);
+ 
+ MODULE_AUTHOR("Manu Abraham");
+ MODULE_DESCRIPTION("TDA8261 8PSK/QPSK Tuner");
+diff --git a/drivers/media/dvb-frontends/tda826x.c b/drivers/media/dvb-frontends/tda826x.c
+index f9703a1dd758c..eafcf5f7da3dc 100644
+--- a/drivers/media/dvb-frontends/tda826x.c
++++ b/drivers/media/dvb-frontends/tda826x.c
+@@ -164,7 +164,7 @@ struct dvb_frontend *tda826x_attach(struct dvb_frontend *fe, int addr, struct i2
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(tda826x_attach);
++EXPORT_SYMBOL_GPL(tda826x_attach);
+ 
+ module_param(debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/ts2020.c b/drivers/media/dvb-frontends/ts2020.c
+index f5b60f8276974..a5ebce57f35e6 100644
+--- a/drivers/media/dvb-frontends/ts2020.c
++++ b/drivers/media/dvb-frontends/ts2020.c
+@@ -525,7 +525,7 @@ struct dvb_frontend *ts2020_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(ts2020_attach);
++EXPORT_SYMBOL_GPL(ts2020_attach);
+ 
+ /*
+  * We implement own regmap locking due to legacy DVB attach which uses frontend
+diff --git a/drivers/media/dvb-frontends/tua6100.c b/drivers/media/dvb-frontends/tua6100.c
+index 2483f614d0e7d..41dd9b6d31908 100644
+--- a/drivers/media/dvb-frontends/tua6100.c
++++ b/drivers/media/dvb-frontends/tua6100.c
+@@ -186,7 +186,7 @@ struct dvb_frontend *tua6100_attach(struct dvb_frontend *fe, int addr, struct i2
+ 	fe->tuner_priv = priv;
+ 	return fe;
+ }
+-EXPORT_SYMBOL(tua6100_attach);
++EXPORT_SYMBOL_GPL(tua6100_attach);
+ 
+ MODULE_DESCRIPTION("DVB tua6100 driver");
+ MODULE_AUTHOR("Andrew de Quincey");
+diff --git a/drivers/media/dvb-frontends/ves1820.c b/drivers/media/dvb-frontends/ves1820.c
+index 9df14d0be1c1a..ee5620e731e9b 100644
+--- a/drivers/media/dvb-frontends/ves1820.c
++++ b/drivers/media/dvb-frontends/ves1820.c
+@@ -434,4 +434,4 @@ MODULE_DESCRIPTION("VLSI VES1820 DVB-C Demodulator driver");
+ MODULE_AUTHOR("Ralph Metzler, Holger Waechtler");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(ves1820_attach);
++EXPORT_SYMBOL_GPL(ves1820_attach);
+diff --git a/drivers/media/dvb-frontends/ves1x93.c b/drivers/media/dvb-frontends/ves1x93.c
+index b747272863025..c60e21d26b881 100644
+--- a/drivers/media/dvb-frontends/ves1x93.c
++++ b/drivers/media/dvb-frontends/ves1x93.c
+@@ -540,4 +540,4 @@ MODULE_DESCRIPTION("VLSI VES1x93 DVB-S Demodulator driver");
+ MODULE_AUTHOR("Ralph Metzler");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(ves1x93_attach);
++EXPORT_SYMBOL_GPL(ves1x93_attach);
+diff --git a/drivers/media/dvb-frontends/zl10036.c b/drivers/media/dvb-frontends/zl10036.c
+index d392c7cce2ce0..7ba575e9c55f4 100644
+--- a/drivers/media/dvb-frontends/zl10036.c
++++ b/drivers/media/dvb-frontends/zl10036.c
+@@ -496,7 +496,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(zl10036_attach);
++EXPORT_SYMBOL_GPL(zl10036_attach);
+ 
+ module_param_named(debug, zl10036_debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/zl10039.c b/drivers/media/dvb-frontends/zl10039.c
+index 1335bf78d5b7f..a3e4d219400ce 100644
+--- a/drivers/media/dvb-frontends/zl10039.c
++++ b/drivers/media/dvb-frontends/zl10039.c
+@@ -295,7 +295,7 @@ error:
+ 	kfree(state);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(zl10039_attach);
++EXPORT_SYMBOL_GPL(zl10039_attach);
+ 
+ module_param(debug, int, 0644);
+ MODULE_PARM_DESC(debug, "Turn on/off frontend debugging (default:off).");
+diff --git a/drivers/media/dvb-frontends/zl10353.c b/drivers/media/dvb-frontends/zl10353.c
+index 2a2cf20a73d61..8849d05475c27 100644
+--- a/drivers/media/dvb-frontends/zl10353.c
++++ b/drivers/media/dvb-frontends/zl10353.c
+@@ -665,4 +665,4 @@ MODULE_DESCRIPTION("Zarlink ZL10353 DVB-T demodulator driver");
+ MODULE_AUTHOR("Chris Pascoe");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(zl10353_attach);
++EXPORT_SYMBOL_GPL(zl10353_attach);
+diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
+index 226454b6a90dd..0669aea3eba35 100644
+--- a/drivers/media/i2c/Kconfig
++++ b/drivers/media/i2c/Kconfig
+@@ -25,8 +25,15 @@ config VIDEO_IR_I2C
+ # V4L2 I2C drivers that are related with Camera support
+ #
+ 
+-menu "Camera sensor devices"
+-	visible if MEDIA_CAMERA_SUPPORT
++menuconfig VIDEO_CAMERA_SENSOR
++	bool "Camera sensor devices"
++	depends on MEDIA_CAMERA_SUPPORT && I2C
++	select MEDIA_CONTROLLER
++	select V4L2_FWNODE
++	select VIDEO_V4L2_SUBDEV_API
++	default y
++
++if VIDEO_CAMERA_SENSOR
+ 
+ config VIDEO_APTINA_PLL
+ 	tristate
+@@ -810,7 +817,7 @@ config VIDEO_ST_VGXY61
+ source "drivers/media/i2c/ccs/Kconfig"
+ source "drivers/media/i2c/et8ek8/Kconfig"
+ 
+-endmenu
++endif
+ 
+ menu "Lens drivers"
+ 	visible if MEDIA_CAMERA_SUPPORT
+diff --git a/drivers/media/i2c/ad5820.c b/drivers/media/i2c/ad5820.c
+index 5f605b9be3b15..1543d24f522c3 100644
+--- a/drivers/media/i2c/ad5820.c
++++ b/drivers/media/i2c/ad5820.c
+@@ -349,7 +349,6 @@ static void ad5820_remove(struct i2c_client *client)
+ static const struct i2c_device_id ad5820_id_table[] = {
+ 	{ "ad5820", 0 },
+ 	{ "ad5821", 0 },
+-	{ "ad5823", 0 },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(i2c, ad5820_id_table);
+@@ -357,7 +356,6 @@ MODULE_DEVICE_TABLE(i2c, ad5820_id_table);
+ static const struct of_device_id ad5820_of_table[] = {
+ 	{ .compatible = "adi,ad5820" },
+ 	{ .compatible = "adi,ad5821" },
+-	{ .compatible = "adi,ad5823" },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(of, ad5820_of_table);
+diff --git a/drivers/media/i2c/ccs/ccs-data.c b/drivers/media/i2c/ccs/ccs-data.c
+index 45f2b2f55ec5c..08400edf77ced 100644
+--- a/drivers/media/i2c/ccs/ccs-data.c
++++ b/drivers/media/i2c/ccs/ccs-data.c
+@@ -464,8 +464,7 @@ static int ccs_data_parse_rules(struct bin_container *bin,
+ 		rule_payload = __rule_type + 1;
+ 		rule_plen2 = rule_plen - sizeof(*__rule_type);
+ 
+-		switch (*__rule_type) {
+-		case CCS_DATA_BLOCK_RULE_ID_IF: {
++		if (*__rule_type == CCS_DATA_BLOCK_RULE_ID_IF) {
+ 			const struct __ccs_data_block_rule_if *__if_rules =
+ 				rule_payload;
+ 			const size_t __num_if_rules =
+@@ -514,49 +513,61 @@ static int ccs_data_parse_rules(struct bin_container *bin,
+ 				rules->if_rules = if_rule;
+ 				rules->num_if_rules = __num_if_rules;
+ 			}
+-			break;
+-		}
+-		case CCS_DATA_BLOCK_RULE_ID_READ_ONLY_REGS:
+-			rval = ccs_data_parse_reg_rules(bin, &rules->read_only_regs,
+-							&rules->num_read_only_regs,
+-							rule_payload,
+-							rule_payload + rule_plen2,
+-							dev);
+-			if (rval)
+-				return rval;
+-			break;
+-		case CCS_DATA_BLOCK_RULE_ID_FFD:
+-			rval = ccs_data_parse_ffd(bin, &rules->frame_format,
+-						  rule_payload,
+-						  rule_payload + rule_plen2,
+-						  dev);
+-			if (rval)
+-				return rval;
+-			break;
+-		case CCS_DATA_BLOCK_RULE_ID_MSR:
+-			rval = ccs_data_parse_reg_rules(bin,
+-							&rules->manufacturer_regs,
+-							&rules->num_manufacturer_regs,
+-							rule_payload,
+-							rule_payload + rule_plen2,
+-							dev);
+-			if (rval)
+-				return rval;
+-			break;
+-		case CCS_DATA_BLOCK_RULE_ID_PDAF_READOUT:
+-			rval = ccs_data_parse_pdaf_readout(bin,
+-							   &rules->pdaf_readout,
+-							   rule_payload,
+-							   rule_payload + rule_plen2,
+-							   dev);
+-			if (rval)
+-				return rval;
+-			break;
+-		default:
+-			dev_dbg(dev,
+-				"Don't know how to handle rule type %u!\n",
+-				*__rule_type);
+-			return -EINVAL;
++		} else {
++			/* Check there was an if rule before any other rules */
++			if (bin->base && !rules)
++				return -EINVAL;
++
++			switch (*__rule_type) {
++			case CCS_DATA_BLOCK_RULE_ID_READ_ONLY_REGS:
++				rval = ccs_data_parse_reg_rules(bin,
++								rules ?
++								&rules->read_only_regs : NULL,
++								rules ?
++								&rules->num_read_only_regs : NULL,
++								rule_payload,
++								rule_payload + rule_plen2,
++								dev);
++				if (rval)
++					return rval;
++				break;
++			case CCS_DATA_BLOCK_RULE_ID_FFD:
++				rval = ccs_data_parse_ffd(bin, rules ?
++							  &rules->frame_format : NULL,
++							  rule_payload,
++							  rule_payload + rule_plen2,
++							  dev);
++				if (rval)
++					return rval;
++				break;
++			case CCS_DATA_BLOCK_RULE_ID_MSR:
++				rval = ccs_data_parse_reg_rules(bin,
++								rules ?
++								&rules->manufacturer_regs : NULL,
++								rules ?
++								&rules->num_manufacturer_regs : NULL,
++								rule_payload,
++								rule_payload + rule_plen2,
++								dev);
++				if (rval)
++					return rval;
++				break;
++			case CCS_DATA_BLOCK_RULE_ID_PDAF_READOUT:
++				rval = ccs_data_parse_pdaf_readout(bin,
++								   rules ?
++								   &rules->pdaf_readout : NULL,
++								   rule_payload,
++								   rule_payload + rule_plen2,
++								   dev);
++				if (rval)
++					return rval;
++				break;
++			default:
++				dev_dbg(dev,
++					"Don't know how to handle rule type %u!\n",
++					*__rule_type);
++				return -EINVAL;
++			}
+ 		}
+ 		__next_rule = __next_rule + rule_hlen + rule_plen;
+ 	}
+diff --git a/drivers/media/i2c/imx290.c b/drivers/media/i2c/imx290.c
+index b3f832e9d7e16..0622a9fcd2e07 100644
+--- a/drivers/media/i2c/imx290.c
++++ b/drivers/media/i2c/imx290.c
+@@ -902,7 +902,6 @@ static const char * const imx290_test_pattern_menu[] = {
+ };
+ 
+ static void imx290_ctrl_update(struct imx290 *imx290,
+-			       const struct v4l2_mbus_framefmt *format,
+ 			       const struct imx290_mode *mode)
+ {
+ 	unsigned int hblank_min = mode->hmax_min - mode->width;
+@@ -1195,7 +1194,7 @@ static int imx290_set_fmt(struct v4l2_subdev *sd,
+ 	if (fmt->which == V4L2_SUBDEV_FORMAT_ACTIVE) {
+ 		imx290->current_mode = mode;
+ 
+-		imx290_ctrl_update(imx290, &fmt->format, mode);
++		imx290_ctrl_update(imx290, mode);
+ 		imx290_exposure_update(imx290, mode);
+ 	}
+ 
+@@ -1300,7 +1299,6 @@ static const struct media_entity_operations imx290_subdev_entity_ops = {
+ static int imx290_subdev_init(struct imx290 *imx290)
+ {
+ 	struct i2c_client *client = to_i2c_client(imx290->dev);
+-	const struct v4l2_mbus_framefmt *format;
+ 	struct v4l2_subdev_state *state;
+ 	int ret;
+ 
+@@ -1335,8 +1333,7 @@ static int imx290_subdev_init(struct imx290 *imx290)
+ 	}
+ 
+ 	state = v4l2_subdev_lock_and_get_active_state(&imx290->sd);
+-	format = v4l2_subdev_get_pad_format(&imx290->sd, state, 0);
+-	imx290_ctrl_update(imx290, format, imx290->current_mode);
++	imx290_ctrl_update(imx290, imx290->current_mode);
+ 	v4l2_subdev_unlock_state(state);
+ 
+ 	return 0;
+diff --git a/drivers/media/i2c/ov2680.c b/drivers/media/i2c/ov2680.c
+index d06e9fc37f770..55fc56ffad31c 100644
+--- a/drivers/media/i2c/ov2680.c
++++ b/drivers/media/i2c/ov2680.c
+@@ -54,6 +54,9 @@
+ #define OV2680_WIDTH_MAX		1600
+ #define OV2680_HEIGHT_MAX		1200
+ 
++#define OV2680_DEFAULT_WIDTH			800
++#define OV2680_DEFAULT_HEIGHT			600
++
+ enum ov2680_mode_id {
+ 	OV2680_MODE_QUXGA_800_600,
+ 	OV2680_MODE_720P_1280_720,
+@@ -85,15 +88,8 @@ struct ov2680_mode_info {
+ 
+ struct ov2680_ctrls {
+ 	struct v4l2_ctrl_handler handler;
+-	struct {
+-		struct v4l2_ctrl *auto_exp;
+-		struct v4l2_ctrl *exposure;
+-	};
+-	struct {
+-		struct v4l2_ctrl *auto_gain;
+-		struct v4l2_ctrl *gain;
+-	};
+-
++	struct v4l2_ctrl *exposure;
++	struct v4l2_ctrl *gain;
+ 	struct v4l2_ctrl *hflip;
+ 	struct v4l2_ctrl *vflip;
+ 	struct v4l2_ctrl *test_pattern;
+@@ -143,6 +139,7 @@ static const struct reg_value ov2680_setting_30fps_QUXGA_800_600[] = {
+ 	{0x380e, 0x02}, {0x380f, 0x84}, {0x3811, 0x04}, {0x3813, 0x04},
+ 	{0x3814, 0x31}, {0x3815, 0x31}, {0x3820, 0xc0}, {0x4008, 0x00},
+ 	{0x4009, 0x03}, {0x4837, 0x1e}, {0x3501, 0x4e}, {0x3502, 0xe0},
++	{0x3503, 0x03},
+ };
+ 
+ static const struct reg_value ov2680_setting_30fps_720P_1280_720[] = {
+@@ -321,70 +318,62 @@ static void ov2680_power_down(struct ov2680_dev *sensor)
+ 	usleep_range(5000, 10000);
+ }
+ 
+-static int ov2680_bayer_order(struct ov2680_dev *sensor)
++static void ov2680_set_bayer_order(struct ov2680_dev *sensor,
++				   struct v4l2_mbus_framefmt *fmt)
+ {
+-	u32 format1;
+-	u32 format2;
+-	u32 hv_flip;
+-	int ret;
+-
+-	ret = ov2680_read_reg(sensor, OV2680_REG_FORMAT1, &format1);
+-	if (ret < 0)
+-		return ret;
+-
+-	ret = ov2680_read_reg(sensor, OV2680_REG_FORMAT2, &format2);
+-	if (ret < 0)
+-		return ret;
++	int hv_flip = 0;
+ 
+-	hv_flip = (format2 & BIT(2)  << 1) | (format1 & BIT(2));
++	if (sensor->ctrls.vflip && sensor->ctrls.vflip->val)
++		hv_flip += 1;
+ 
+-	sensor->fmt.code = ov2680_hv_flip_bayer_order[hv_flip];
++	if (sensor->ctrls.hflip && sensor->ctrls.hflip->val)
++		hv_flip += 2;
+ 
+-	return 0;
++	fmt->code = ov2680_hv_flip_bayer_order[hv_flip];
+ }
+ 
+-static int ov2680_vflip_enable(struct ov2680_dev *sensor)
++static void ov2680_fill_format(struct ov2680_dev *sensor,
++			       struct v4l2_mbus_framefmt *fmt,
++			       unsigned int width, unsigned int height)
+ {
+-	int ret;
+-
+-	ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT1, BIT(2), BIT(2));
+-	if (ret < 0)
+-		return ret;
+-
+-	return ov2680_bayer_order(sensor);
++	memset(fmt, 0, sizeof(*fmt));
++	fmt->width = width;
++	fmt->height = height;
++	fmt->field = V4L2_FIELD_NONE;
++	fmt->colorspace = V4L2_COLORSPACE_SRGB;
++	ov2680_set_bayer_order(sensor, fmt);
+ }
+ 
+-static int ov2680_vflip_disable(struct ov2680_dev *sensor)
++static int ov2680_set_vflip(struct ov2680_dev *sensor, s32 val)
+ {
+ 	int ret;
+ 
+-	ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT1, BIT(2), BIT(0));
+-	if (ret < 0)
+-		return ret;
+-
+-	return ov2680_bayer_order(sensor);
+-}
+-
+-static int ov2680_hflip_enable(struct ov2680_dev *sensor)
+-{
+-	int ret;
++	if (sensor->is_streaming)
++		return -EBUSY;
+ 
+-	ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT2, BIT(2), BIT(2));
++	ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT1,
++			     BIT(2), val ? BIT(2) : 0);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	return ov2680_bayer_order(sensor);
++	ov2680_set_bayer_order(sensor, &sensor->fmt);
++	return 0;
+ }
+ 
+-static int ov2680_hflip_disable(struct ov2680_dev *sensor)
++static int ov2680_set_hflip(struct ov2680_dev *sensor, s32 val)
+ {
+ 	int ret;
+ 
+-	ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT2, BIT(2), BIT(0));
++	if (sensor->is_streaming)
++		return -EBUSY;
++
++	ret = ov2680_mod_reg(sensor, OV2680_REG_FORMAT2,
++			     BIT(2), val ? BIT(2) : 0);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	return ov2680_bayer_order(sensor);
++	ov2680_set_bayer_order(sensor, &sensor->fmt);
++	return 0;
+ }
+ 
+ static int ov2680_test_pattern_set(struct ov2680_dev *sensor, int value)
+@@ -405,69 +394,15 @@ static int ov2680_test_pattern_set(struct ov2680_dev *sensor, int value)
+ 	return 0;
+ }
+ 
+-static int ov2680_gain_set(struct ov2680_dev *sensor, bool auto_gain)
++static int ov2680_gain_set(struct ov2680_dev *sensor, u32 gain)
+ {
+-	struct ov2680_ctrls *ctrls = &sensor->ctrls;
+-	u32 gain;
+-	int ret;
+-
+-	ret = ov2680_mod_reg(sensor, OV2680_REG_R_MANUAL, BIT(1),
+-			     auto_gain ? 0 : BIT(1));
+-	if (ret < 0)
+-		return ret;
+-
+-	if (auto_gain || !ctrls->gain->is_new)
+-		return 0;
+-
+-	gain = ctrls->gain->val;
+-
+-	ret = ov2680_write_reg16(sensor, OV2680_REG_GAIN_PK, gain);
+-
+-	return 0;
+-}
+-
+-static int ov2680_gain_get(struct ov2680_dev *sensor)
+-{
+-	u32 gain;
+-	int ret;
+-
+-	ret = ov2680_read_reg16(sensor, OV2680_REG_GAIN_PK, &gain);
+-	if (ret)
+-		return ret;
+-
+-	return gain;
+-}
+-
+-static int ov2680_exposure_set(struct ov2680_dev *sensor, bool auto_exp)
+-{
+-	struct ov2680_ctrls *ctrls = &sensor->ctrls;
+-	u32 exp;
+-	int ret;
+-
+-	ret = ov2680_mod_reg(sensor, OV2680_REG_R_MANUAL, BIT(0),
+-			     auto_exp ? 0 : BIT(0));
+-	if (ret < 0)
+-		return ret;
+-
+-	if (auto_exp || !ctrls->exposure->is_new)
+-		return 0;
+-
+-	exp = (u32)ctrls->exposure->val;
+-	exp <<= 4;
+-
+-	return ov2680_write_reg24(sensor, OV2680_REG_EXPOSURE_PK_HIGH, exp);
++	return ov2680_write_reg16(sensor, OV2680_REG_GAIN_PK, gain);
+ }
+ 
+-static int ov2680_exposure_get(struct ov2680_dev *sensor)
++static int ov2680_exposure_set(struct ov2680_dev *sensor, u32 exp)
+ {
+-	int ret;
+-	u32 exp;
+-
+-	ret = ov2680_read_reg24(sensor, OV2680_REG_EXPOSURE_PK_HIGH, &exp);
+-	if (ret)
+-		return ret;
+-
+-	return exp >> 4;
++	return ov2680_write_reg24(sensor, OV2680_REG_EXPOSURE_PK_HIGH,
++				  exp << 4);
+ }
+ 
+ static int ov2680_stream_enable(struct ov2680_dev *sensor)
+@@ -482,33 +417,17 @@ static int ov2680_stream_disable(struct ov2680_dev *sensor)
+ 
+ static int ov2680_mode_set(struct ov2680_dev *sensor)
+ {
+-	struct ov2680_ctrls *ctrls = &sensor->ctrls;
+ 	int ret;
+ 
+-	ret = ov2680_gain_set(sensor, false);
+-	if (ret < 0)
+-		return ret;
+-
+-	ret = ov2680_exposure_set(sensor, false);
++	ret = ov2680_load_regs(sensor, sensor->current_mode);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = ov2680_load_regs(sensor, sensor->current_mode);
++	/* Restore value of all ctrls */
++	ret = __v4l2_ctrl_handler_setup(&sensor->ctrls.handler);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	if (ctrls->auto_gain->val) {
+-		ret = ov2680_gain_set(sensor, true);
+-		if (ret < 0)
+-			return ret;
+-	}
+-
+-	if (ctrls->auto_exp->val == V4L2_EXPOSURE_AUTO) {
+-		ret = ov2680_exposure_set(sensor, true);
+-		if (ret < 0)
+-			return ret;
+-	}
+-
+ 	sensor->mode_pending_changes = false;
+ 
+ 	return 0;
+@@ -556,7 +475,7 @@ static int ov2680_power_on(struct ov2680_dev *sensor)
+ 		ret = ov2680_write_reg(sensor, OV2680_REG_SOFT_RESET, 0x01);
+ 		if (ret != 0) {
+ 			dev_err(dev, "sensor soft reset failed\n");
+-			return ret;
++			goto err_disable_regulators;
+ 		}
+ 		usleep_range(1000, 2000);
+ 	} else {
+@@ -566,7 +485,7 @@ static int ov2680_power_on(struct ov2680_dev *sensor)
+ 
+ 	ret = clk_prepare_enable(sensor->xvclk);
+ 	if (ret < 0)
+-		return ret;
++		goto err_disable_regulators;
+ 
+ 	sensor->is_enabled = true;
+ 
+@@ -576,6 +495,10 @@ static int ov2680_power_on(struct ov2680_dev *sensor)
+ 	ov2680_stream_disable(sensor);
+ 
+ 	return 0;
++
++err_disable_regulators:
++	regulator_bulk_disable(OV2680_NUM_SUPPLIES, sensor->supplies);
++	return ret;
+ }
+ 
+ static int ov2680_s_power(struct v4l2_subdev *sd, int on)
+@@ -590,15 +513,10 @@ static int ov2680_s_power(struct v4l2_subdev *sd, int on)
+ 	else
+ 		ret = ov2680_power_off(sensor);
+ 
+-	mutex_unlock(&sensor->lock);
+-
+-	if (on && ret == 0) {
+-		ret = v4l2_ctrl_handler_setup(&sensor->ctrls.handler);
+-		if (ret < 0)
+-			return ret;
+-
++	if (on && ret == 0)
+ 		ret = ov2680_mode_restore(sensor);
+-	}
++
++	mutex_unlock(&sensor->lock);
+ 
+ 	return ret;
+ }
+@@ -664,7 +582,6 @@ static int ov2680_get_fmt(struct v4l2_subdev *sd,
+ {
+ 	struct ov2680_dev *sensor = to_ov2680_dev(sd);
+ 	struct v4l2_mbus_framefmt *fmt = NULL;
+-	int ret = 0;
+ 
+ 	if (format->pad != 0)
+ 		return -EINVAL;
+@@ -672,22 +589,17 @@ static int ov2680_get_fmt(struct v4l2_subdev *sd,
+ 	mutex_lock(&sensor->lock);
+ 
+ 	if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
+-#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
+ 		fmt = v4l2_subdev_get_try_format(&sensor->sd, sd_state,
+ 						 format->pad);
+-#else
+-		ret = -EINVAL;
+-#endif
+ 	} else {
+ 		fmt = &sensor->fmt;
+ 	}
+ 
+-	if (fmt)
+-		format->format = *fmt;
++	format->format = *fmt;
+ 
+ 	mutex_unlock(&sensor->lock);
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int ov2680_set_fmt(struct v4l2_subdev *sd,
+@@ -695,43 +607,35 @@ static int ov2680_set_fmt(struct v4l2_subdev *sd,
+ 			  struct v4l2_subdev_format *format)
+ {
+ 	struct ov2680_dev *sensor = to_ov2680_dev(sd);
+-	struct v4l2_mbus_framefmt *fmt = &format->format;
+-#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
+ 	struct v4l2_mbus_framefmt *try_fmt;
+-#endif
+ 	const struct ov2680_mode_info *mode;
+ 	int ret = 0;
+ 
+ 	if (format->pad != 0)
+ 		return -EINVAL;
+ 
+-	mutex_lock(&sensor->lock);
+-
+-	if (sensor->is_streaming) {
+-		ret = -EBUSY;
+-		goto unlock;
+-	}
+-
+ 	mode = v4l2_find_nearest_size(ov2680_mode_data,
+-				      ARRAY_SIZE(ov2680_mode_data), width,
+-				      height, fmt->width, fmt->height);
+-	if (!mode) {
+-		ret = -EINVAL;
+-		goto unlock;
+-	}
++				      ARRAY_SIZE(ov2680_mode_data),
++				      width, height,
++				      format->format.width,
++				      format->format.height);
++	if (!mode)
++		return -EINVAL;
++
++	ov2680_fill_format(sensor, &format->format, mode->width, mode->height);
+ 
+ 	if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
+-#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
+ 		try_fmt = v4l2_subdev_get_try_format(sd, sd_state, 0);
+-		format->format = *try_fmt;
+-#endif
+-		goto unlock;
++		*try_fmt = format->format;
++		return 0;
+ 	}
+ 
+-	fmt->width = mode->width;
+-	fmt->height = mode->height;
+-	fmt->code = sensor->fmt.code;
+-	fmt->colorspace = sensor->fmt.colorspace;
++	mutex_lock(&sensor->lock);
++
++	if (sensor->is_streaming) {
++		ret = -EBUSY;
++		goto unlock;
++	}
+ 
+ 	sensor->current_mode = mode;
+ 	sensor->fmt = format->format;
+@@ -746,16 +650,11 @@ unlock:
+ static int ov2680_init_cfg(struct v4l2_subdev *sd,
+ 			   struct v4l2_subdev_state *sd_state)
+ {
+-	struct v4l2_subdev_format fmt = {
+-		.which = sd_state ? V4L2_SUBDEV_FORMAT_TRY
+-		: V4L2_SUBDEV_FORMAT_ACTIVE,
+-		.format = {
+-			.width = 800,
+-			.height = 600,
+-		}
+-	};
++	struct ov2680_dev *sensor = to_ov2680_dev(sd);
+ 
+-	return ov2680_set_fmt(sd, sd_state, &fmt);
++	ov2680_fill_format(sensor, &sd_state->pads[0].try_fmt,
++			   OV2680_DEFAULT_WIDTH, OV2680_DEFAULT_HEIGHT);
++	return 0;
+ }
+ 
+ static int ov2680_enum_frame_size(struct v4l2_subdev *sd,
+@@ -794,66 +693,23 @@ static int ov2680_enum_frame_interval(struct v4l2_subdev *sd,
+ 	return 0;
+ }
+ 
+-static int ov2680_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
+-{
+-	struct v4l2_subdev *sd = ctrl_to_sd(ctrl);
+-	struct ov2680_dev *sensor = to_ov2680_dev(sd);
+-	struct ov2680_ctrls *ctrls = &sensor->ctrls;
+-	int val;
+-
+-	if (!sensor->is_enabled)
+-		return 0;
+-
+-	switch (ctrl->id) {
+-	case V4L2_CID_GAIN:
+-		val = ov2680_gain_get(sensor);
+-		if (val < 0)
+-			return val;
+-		ctrls->gain->val = val;
+-		break;
+-	case V4L2_CID_EXPOSURE:
+-		val = ov2680_exposure_get(sensor);
+-		if (val < 0)
+-			return val;
+-		ctrls->exposure->val = val;
+-		break;
+-	}
+-
+-	return 0;
+-}
+-
+ static int ov2680_s_ctrl(struct v4l2_ctrl *ctrl)
+ {
+ 	struct v4l2_subdev *sd = ctrl_to_sd(ctrl);
+ 	struct ov2680_dev *sensor = to_ov2680_dev(sd);
+-	struct ov2680_ctrls *ctrls = &sensor->ctrls;
+ 
+ 	if (!sensor->is_enabled)
+ 		return 0;
+ 
+ 	switch (ctrl->id) {
+-	case V4L2_CID_AUTOGAIN:
+-		return ov2680_gain_set(sensor, !!ctrl->val);
+ 	case V4L2_CID_GAIN:
+-		return ov2680_gain_set(sensor, !!ctrls->auto_gain->val);
+-	case V4L2_CID_EXPOSURE_AUTO:
+-		return ov2680_exposure_set(sensor, !!ctrl->val);
++		return ov2680_gain_set(sensor, ctrl->val);
+ 	case V4L2_CID_EXPOSURE:
+-		return ov2680_exposure_set(sensor, !!ctrls->auto_exp->val);
++		return ov2680_exposure_set(sensor, ctrl->val);
+ 	case V4L2_CID_VFLIP:
+-		if (sensor->is_streaming)
+-			return -EBUSY;
+-		if (ctrl->val)
+-			return ov2680_vflip_enable(sensor);
+-		else
+-			return ov2680_vflip_disable(sensor);
++		return ov2680_set_vflip(sensor, ctrl->val);
+ 	case V4L2_CID_HFLIP:
+-		if (sensor->is_streaming)
+-			return -EBUSY;
+-		if (ctrl->val)
+-			return ov2680_hflip_enable(sensor);
+-		else
+-			return ov2680_hflip_disable(sensor);
++		return ov2680_set_hflip(sensor, ctrl->val);
+ 	case V4L2_CID_TEST_PATTERN:
+ 		return ov2680_test_pattern_set(sensor, ctrl->val);
+ 	default:
+@@ -864,7 +720,6 @@ static int ov2680_s_ctrl(struct v4l2_ctrl *ctrl)
+ }
+ 
+ static const struct v4l2_ctrl_ops ov2680_ctrl_ops = {
+-	.g_volatile_ctrl = ov2680_g_volatile_ctrl,
+ 	.s_ctrl = ov2680_s_ctrl,
+ };
+ 
+@@ -898,11 +753,8 @@ static int ov2680_mode_init(struct ov2680_dev *sensor)
+ 	const struct ov2680_mode_info *init_mode;
+ 
+ 	/* set initial mode */
+-	sensor->fmt.code = MEDIA_BUS_FMT_SBGGR10_1X10;
+-	sensor->fmt.width = 800;
+-	sensor->fmt.height = 600;
+-	sensor->fmt.field = V4L2_FIELD_NONE;
+-	sensor->fmt.colorspace = V4L2_COLORSPACE_SRGB;
++	ov2680_fill_format(sensor, &sensor->fmt,
++			   OV2680_DEFAULT_WIDTH, OV2680_DEFAULT_HEIGHT);
+ 
+ 	sensor->frame_interval.denominator = OV2680_FRAME_RATE;
+ 	sensor->frame_interval.numerator = 1;
+@@ -926,9 +778,7 @@ static int ov2680_v4l2_register(struct ov2680_dev *sensor)
+ 	v4l2_i2c_subdev_init(&sensor->sd, sensor->i2c_client,
+ 			     &ov2680_subdev_ops);
+ 
+-#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
+ 	sensor->sd.flags = V4L2_SUBDEV_FL_HAS_DEVNODE;
+-#endif
+ 	sensor->pad.flags = MEDIA_PAD_FL_SOURCE;
+ 	sensor->sd.entity.function = MEDIA_ENT_F_CAM_SENSOR;
+ 
+@@ -936,7 +786,7 @@ static int ov2680_v4l2_register(struct ov2680_dev *sensor)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	v4l2_ctrl_handler_init(hdl, 7);
++	v4l2_ctrl_handler_init(hdl, 5);
+ 
+ 	hdl->lock = &sensor->lock;
+ 
+@@ -948,16 +798,9 @@ static int ov2680_v4l2_register(struct ov2680_dev *sensor)
+ 					ARRAY_SIZE(test_pattern_menu) - 1,
+ 					0, 0, test_pattern_menu);
+ 
+-	ctrls->auto_exp = v4l2_ctrl_new_std_menu(hdl, ops,
+-						 V4L2_CID_EXPOSURE_AUTO,
+-						 V4L2_EXPOSURE_MANUAL, 0,
+-						 V4L2_EXPOSURE_AUTO);
+-
+ 	ctrls->exposure = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_EXPOSURE,
+ 					    0, 32767, 1, 0);
+ 
+-	ctrls->auto_gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_AUTOGAIN,
+-					     0, 1, 1, 1);
+ 	ctrls->gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_GAIN, 0, 2047, 1, 0);
+ 
+ 	if (hdl->error) {
+@@ -965,14 +808,9 @@ static int ov2680_v4l2_register(struct ov2680_dev *sensor)
+ 		goto cleanup_entity;
+ 	}
+ 
+-	ctrls->gain->flags |= V4L2_CTRL_FLAG_VOLATILE;
+-	ctrls->exposure->flags |= V4L2_CTRL_FLAG_VOLATILE;
+ 	ctrls->vflip->flags |= V4L2_CTRL_FLAG_MODIFY_LAYOUT;
+ 	ctrls->hflip->flags |= V4L2_CTRL_FLAG_MODIFY_LAYOUT;
+ 
+-	v4l2_ctrl_auto_cluster(2, &ctrls->auto_gain, 0, true);
+-	v4l2_ctrl_auto_cluster(2, &ctrls->auto_exp, 1, true);
+-
+ 	sensor->sd.ctrl_handler = hdl;
+ 
+ 	ret = v4l2_async_register_subdev(&sensor->sd);
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index 36b509714c8c7..8b7ff2f3bdda7 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -568,9 +568,7 @@ static const struct reg_value ov5640_init_setting[] = {
+ 	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0}, {0x3000, 0x00, 0, 0},
+ 	{0x3002, 0x1c, 0, 0}, {0x3004, 0xff, 0, 0}, {0x3006, 0xc3, 0, 0},
+ 	{0x302e, 0x08, 0, 0}, {0x4300, 0x3f, 0, 0},
+-	{0x501f, 0x00, 0, 0}, {0x4407, 0x04, 0, 0},
+-	{0x440e, 0x00, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
+-	{0x4837, 0x0a, 0, 0}, {0x3824, 0x02, 0, 0},
++	{0x501f, 0x00, 0, 0}, {0x440e, 0x00, 0, 0}, {0x4837, 0x0a, 0, 0},
+ 	{0x5000, 0xa7, 0, 0}, {0x5001, 0xa3, 0, 0}, {0x5180, 0xff, 0, 0},
+ 	{0x5181, 0xf2, 0, 0}, {0x5182, 0x00, 0, 0}, {0x5183, 0x14, 0, 0},
+ 	{0x5184, 0x25, 0, 0}, {0x5185, 0x24, 0, 0}, {0x5186, 0x09, 0, 0},
+@@ -634,7 +632,8 @@ static const struct reg_value ov5640_setting_low_res[] = {
+ 	{0x3a0a, 0x00, 0, 0}, {0x3a0b, 0xf6, 0, 0}, {0x3a0e, 0x03, 0, 0},
+ 	{0x3a0d, 0x04, 0, 0}, {0x3a14, 0x03, 0, 0}, {0x3a15, 0xd8, 0, 0},
+ 	{0x4001, 0x02, 0, 0}, {0x4004, 0x02, 0, 0},
+-	{0x4407, 0x04, 0, 0}, {0x5001, 0xa3, 0, 0},
++	{0x4407, 0x04, 0, 0}, {0x460b, 0x35, 0, 0}, {0x460c, 0x22, 0, 0},
++	{0x3824, 0x02, 0, 0}, {0x5001, 0xa3, 0, 0},
+ };
+ 
+ static const struct reg_value ov5640_setting_720P_1280_720[] = {
+@@ -2453,16 +2452,13 @@ static void ov5640_power(struct ov5640_dev *sensor, bool enable)
+ static void ov5640_powerup_sequence(struct ov5640_dev *sensor)
+ {
+ 	if (sensor->pwdn_gpio) {
+-		gpiod_set_value_cansleep(sensor->reset_gpio, 0);
++		gpiod_set_value_cansleep(sensor->reset_gpio, 1);
+ 
+ 		/* camera power cycle */
+ 		ov5640_power(sensor, false);
+-		usleep_range(5000, 10000);
++		usleep_range(5000, 10000);	/* t2 */
+ 		ov5640_power(sensor, true);
+-		usleep_range(5000, 10000);
+-
+-		gpiod_set_value_cansleep(sensor->reset_gpio, 1);
+-		usleep_range(1000, 2000);
++		usleep_range(1000, 2000);	/* t3 */
+ 
+ 		gpiod_set_value_cansleep(sensor->reset_gpio, 0);
+ 	} else {
+@@ -2470,7 +2466,7 @@ static void ov5640_powerup_sequence(struct ov5640_dev *sensor)
+ 		ov5640_write_reg(sensor, OV5640_REG_SYS_CTRL0,
+ 				 OV5640_REG_SYS_CTRL0_SW_RST);
+ 	}
+-	usleep_range(20000, 25000);
++	usleep_range(20000, 25000);	/* t4 */
+ 
+ 	/*
+ 	 * software standby: allows registers programming;
+@@ -2543,9 +2539,9 @@ static int ov5640_set_power_mipi(struct ov5640_dev *sensor, bool on)
+ 	 *		  "ov5640_set_stream_mipi()")
+ 	 * [4] = 0	: Power up MIPI HS Tx
+ 	 * [3] = 0	: Power up MIPI LS Rx
+-	 * [2] = 0	: MIPI interface disabled
++	 * [2] = 1	: MIPI interface enabled
+ 	 */
+-	ret = ov5640_write_reg(sensor, OV5640_REG_IO_MIPI_CTRL00, 0x40);
++	ret = ov5640_write_reg(sensor, OV5640_REG_IO_MIPI_CTRL00, 0x44);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/media/i2c/rdacm21.c b/drivers/media/i2c/rdacm21.c
+index 043fec778a5e5..7a71bb30426b5 100644
+--- a/drivers/media/i2c/rdacm21.c
++++ b/drivers/media/i2c/rdacm21.c
+@@ -351,7 +351,7 @@ static void ov10640_power_up(struct rdacm21_device *dev)
+ static int ov10640_check_id(struct rdacm21_device *dev)
+ {
+ 	unsigned int i;
+-	u8 val;
++	u8 val = 0;
+ 
+ 	/* Read OV10640 ID to test communications. */
+ 	for (i = 0; i < OV10640_PID_TIMEOUT; ++i) {
+diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
+index c7fb35ee3f9de..e543b3f7a4d89 100644
+--- a/drivers/media/i2c/tvp5150.c
++++ b/drivers/media/i2c/tvp5150.c
+@@ -2068,6 +2068,10 @@ static int tvp5150_parse_dt(struct tvp5150 *decoder, struct device_node *np)
+ 		tvpc->ent.name = devm_kasprintf(dev, GFP_KERNEL, "%s %s",
+ 						v4l2c->name, v4l2c->label ?
+ 						v4l2c->label : "");
++		if (!tvpc->ent.name) {
++			ret = -ENOMEM;
++			goto err_free;
++		}
+ 	}
+ 
+ 	ep_np = of_graph_get_endpoint_by_regs(np, TVP5150_PAD_VID_OUT, 0);
+diff --git a/drivers/media/pci/Kconfig b/drivers/media/pci/Kconfig
+index 480194543d055..ee095bde0b686 100644
+--- a/drivers/media/pci/Kconfig
++++ b/drivers/media/pci/Kconfig
+@@ -73,7 +73,7 @@ config VIDEO_PCI_SKELETON
+ 	  Enable build of the skeleton PCI driver, used as a reference
+ 	  when developing new drivers.
+ 
+-source "drivers/media/pci/intel/ipu3/Kconfig"
++source "drivers/media/pci/intel/Kconfig"
+ 
+ endif #MEDIA_PCI_SUPPORT
+ endif #PCI
+diff --git a/drivers/media/pci/bt8xx/dst.c b/drivers/media/pci/bt8xx/dst.c
+index 3e52a51982d76..110651e478314 100644
+--- a/drivers/media/pci/bt8xx/dst.c
++++ b/drivers/media/pci/bt8xx/dst.c
+@@ -1722,7 +1722,7 @@ struct dst_state *dst_attach(struct dst_state *state, struct dvb_adapter *dvb_ad
+ 	return state;				/*	Manu (DST is a card not a frontend)	*/
+ }
+ 
+-EXPORT_SYMBOL(dst_attach);
++EXPORT_SYMBOL_GPL(dst_attach);
+ 
+ static const struct dvb_frontend_ops dst_dvbt_ops = {
+ 	.delsys = { SYS_DVBT },
+diff --git a/drivers/media/pci/bt8xx/dst_ca.c b/drivers/media/pci/bt8xx/dst_ca.c
+index d234a0f404d68..a9cc6e7a57f99 100644
+--- a/drivers/media/pci/bt8xx/dst_ca.c
++++ b/drivers/media/pci/bt8xx/dst_ca.c
+@@ -668,7 +668,7 @@ struct dvb_device *dst_ca_attach(struct dst_state *dst, struct dvb_adapter *dvb_
+ 	return NULL;
+ }
+ 
+-EXPORT_SYMBOL(dst_ca_attach);
++EXPORT_SYMBOL_GPL(dst_ca_attach);
+ 
+ MODULE_DESCRIPTION("DST DVB-S/T/C Combo CA driver");
+ MODULE_AUTHOR("Manu Abraham");
+diff --git a/drivers/media/pci/ddbridge/ddbridge-dummy-fe.c b/drivers/media/pci/ddbridge/ddbridge-dummy-fe.c
+index 6868a0c4fc82a..520ebd16b0c44 100644
+--- a/drivers/media/pci/ddbridge/ddbridge-dummy-fe.c
++++ b/drivers/media/pci/ddbridge/ddbridge-dummy-fe.c
+@@ -112,7 +112,7 @@ struct dvb_frontend *ddbridge_dummy_fe_qam_attach(void)
+ 	state->frontend.demodulator_priv = state;
+ 	return &state->frontend;
+ }
+-EXPORT_SYMBOL(ddbridge_dummy_fe_qam_attach);
++EXPORT_SYMBOL_GPL(ddbridge_dummy_fe_qam_attach);
+ 
+ static const struct dvb_frontend_ops ddbridge_dummy_fe_qam_ops = {
+ 	.delsys = { SYS_DVBC_ANNEX_A },
+diff --git a/drivers/media/pci/intel/Kconfig b/drivers/media/pci/intel/Kconfig
+new file mode 100644
+index 0000000000000..51b18fce6a1de
+--- /dev/null
++++ b/drivers/media/pci/intel/Kconfig
+@@ -0,0 +1,10 @@
++# SPDX-License-Identifier: GPL-2.0-only
++config IPU_BRIDGE
++	tristate
++	depends on I2C && ACPI
++	help
++	  This is a helper module for the IPU bridge, which can be
++	  used by ipu3 and other drivers. In order to handle module
++	  dependencies, this is selected by each driver that needs it.
++
++source "drivers/media/pci/intel/ipu3/Kconfig"
+diff --git a/drivers/media/pci/intel/Makefile b/drivers/media/pci/intel/Makefile
+index 0b4236c4db49a..951191a7e4011 100644
+--- a/drivers/media/pci/intel/Makefile
++++ b/drivers/media/pci/intel/Makefile
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ #
+-# Makefile for the IPU3 cio2 and ImGU drivers
++# Makefile for the IPU drivers
+ #
+-
++obj-$(CONFIG_IPU_BRIDGE) += ipu-bridge.o
+ obj-y	+= ipu3/
+diff --git a/drivers/media/pci/intel/ipu-bridge.c b/drivers/media/pci/intel/ipu-bridge.c
+new file mode 100644
+index 0000000000000..c5c44fb43c97a
+--- /dev/null
++++ b/drivers/media/pci/intel/ipu-bridge.c
+@@ -0,0 +1,502 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Author: Dan Scally <djrscally@gmail.com> */
++
++#include <linux/acpi.h>
++#include <linux/device.h>
++#include <linux/i2c.h>
++#include <linux/pci.h>
++#include <linux/property.h>
++#include <media/v4l2-fwnode.h>
++
++#include "ipu-bridge.h"
++
++/*
++ * Extend this array with ACPI Hardware IDs of devices known to be working
++ * plus the number of link-frequencies expected by their drivers, along with
++ * the frequency values in hertz. This is somewhat opportunistic way of adding
++ * support for this for now in the hopes of a better source for the information
++ * (possibly some encoded value in the SSDB buffer that we're unaware of)
++ * becoming apparent in the future.
++ *
++ * Do not add an entry for a sensor that is not actually supported.
++ */
++static const struct ipu_sensor_config ipu_supported_sensors[] = {
++	/* Omnivision OV5693 */
++	IPU_SENSOR_CONFIG("INT33BE", 1, 419200000),
++	/* Omnivision OV8865 */
++	IPU_SENSOR_CONFIG("INT347A", 1, 360000000),
++	/* Omnivision OV7251 */
++	IPU_SENSOR_CONFIG("INT347E", 1, 319200000),
++	/* Omnivision OV2680 */
++	IPU_SENSOR_CONFIG("OVTI2680", 0),
++	/* Omnivision ov8856 */
++	IPU_SENSOR_CONFIG("OVTI8856", 3, 180000000, 360000000, 720000000),
++	/* Omnivision ov2740 */
++	IPU_SENSOR_CONFIG("INT3474", 1, 360000000),
++	/* Hynix hi556 */
++	IPU_SENSOR_CONFIG("INT3537", 1, 437000000),
++	/* Omnivision ov13b10 */
++	IPU_SENSOR_CONFIG("OVTIDB10", 1, 560000000),
++};
++
++static const struct ipu_property_names prop_names = {
++	.clock_frequency = "clock-frequency",
++	.rotation = "rotation",
++	.orientation = "orientation",
++	.bus_type = "bus-type",
++	.data_lanes = "data-lanes",
++	.remote_endpoint = "remote-endpoint",
++	.link_frequencies = "link-frequencies",
++};
++
++static const char * const ipu_vcm_types[] = {
++	"ad5823",
++	"dw9714",
++	"ad5816",
++	"dw9719",
++	"dw9718",
++	"dw9806b",
++	"wv517s",
++	"lc898122xa",
++	"lc898212axb",
++};
++
++static int ipu_bridge_read_acpi_buffer(struct acpi_device *adev, char *id,
++				       void *data, u32 size)
++{
++	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
++	union acpi_object *obj;
++	acpi_status status;
++	int ret = 0;
++
++	status = acpi_evaluate_object(adev->handle, id, NULL, &buffer);
++	if (ACPI_FAILURE(status))
++		return -ENODEV;
++
++	obj = buffer.pointer;
++	if (!obj) {
++		dev_err(&adev->dev, "Couldn't locate ACPI buffer\n");
++		return -ENODEV;
++	}
++
++	if (obj->type != ACPI_TYPE_BUFFER) {
++		dev_err(&adev->dev, "Not an ACPI buffer\n");
++		ret = -ENODEV;
++		goto out_free_buff;
++	}
++
++	if (obj->buffer.length > size) {
++		dev_err(&adev->dev, "Given buffer is too small\n");
++		ret = -EINVAL;
++		goto out_free_buff;
++	}
++
++	memcpy(data, obj->buffer.pointer, obj->buffer.length);
++
++out_free_buff:
++	kfree(buffer.pointer);
++	return ret;
++}
++
++static u32 ipu_bridge_parse_rotation(struct ipu_sensor *sensor)
++{
++	switch (sensor->ssdb.degree) {
++	case IPU_SENSOR_ROTATION_NORMAL:
++		return 0;
++	case IPU_SENSOR_ROTATION_INVERTED:
++		return 180;
++	default:
++		dev_warn(&sensor->adev->dev,
++			 "Unknown rotation %d. Assume 0 degree rotation\n",
++			 sensor->ssdb.degree);
++		return 0;
++	}
++}
++
++static enum v4l2_fwnode_orientation ipu_bridge_parse_orientation(struct ipu_sensor *sensor)
++{
++	switch (sensor->pld->panel) {
++	case ACPI_PLD_PANEL_FRONT:
++		return V4L2_FWNODE_ORIENTATION_FRONT;
++	case ACPI_PLD_PANEL_BACK:
++		return V4L2_FWNODE_ORIENTATION_BACK;
++	case ACPI_PLD_PANEL_TOP:
++	case ACPI_PLD_PANEL_LEFT:
++	case ACPI_PLD_PANEL_RIGHT:
++	case ACPI_PLD_PANEL_UNKNOWN:
++		return V4L2_FWNODE_ORIENTATION_EXTERNAL;
++	default:
++		dev_warn(&sensor->adev->dev, "Unknown _PLD panel value %d\n",
++			 sensor->pld->panel);
++		return V4L2_FWNODE_ORIENTATION_EXTERNAL;
++	}
++}
++
++static void ipu_bridge_create_fwnode_properties(
++	struct ipu_sensor *sensor,
++	struct ipu_bridge *bridge,
++	const struct ipu_sensor_config *cfg)
++{
++	u32 rotation;
++	enum v4l2_fwnode_orientation orientation;
++
++	rotation = ipu_bridge_parse_rotation(sensor);
++	orientation = ipu_bridge_parse_orientation(sensor);
++
++	sensor->prop_names = prop_names;
++
++	sensor->local_ref[0] = SOFTWARE_NODE_REFERENCE(&sensor->swnodes[SWNODE_IPU_ENDPOINT]);
++	sensor->remote_ref[0] = SOFTWARE_NODE_REFERENCE(&sensor->swnodes[SWNODE_SENSOR_ENDPOINT]);
++
++	sensor->dev_properties[0] = PROPERTY_ENTRY_U32(
++					sensor->prop_names.clock_frequency,
++					sensor->ssdb.mclkspeed);
++	sensor->dev_properties[1] = PROPERTY_ENTRY_U32(
++					sensor->prop_names.rotation,
++					rotation);
++	sensor->dev_properties[2] = PROPERTY_ENTRY_U32(
++					sensor->prop_names.orientation,
++					orientation);
++	if (sensor->ssdb.vcmtype) {
++		sensor->vcm_ref[0] =
++			SOFTWARE_NODE_REFERENCE(&sensor->swnodes[SWNODE_VCM]);
++		sensor->dev_properties[3] =
++			PROPERTY_ENTRY_REF_ARRAY("lens-focus", sensor->vcm_ref);
++	}
++
++	sensor->ep_properties[0] = PROPERTY_ENTRY_U32(
++					sensor->prop_names.bus_type,
++					V4L2_FWNODE_BUS_TYPE_CSI2_DPHY);
++	sensor->ep_properties[1] = PROPERTY_ENTRY_U32_ARRAY_LEN(
++					sensor->prop_names.data_lanes,
++					bridge->data_lanes,
++					sensor->ssdb.lanes);
++	sensor->ep_properties[2] = PROPERTY_ENTRY_REF_ARRAY(
++					sensor->prop_names.remote_endpoint,
++					sensor->local_ref);
++
++	if (cfg->nr_link_freqs > 0)
++		sensor->ep_properties[3] = PROPERTY_ENTRY_U64_ARRAY_LEN(
++			sensor->prop_names.link_frequencies,
++			cfg->link_freqs,
++			cfg->nr_link_freqs);
++
++	sensor->ipu_properties[0] = PROPERTY_ENTRY_U32_ARRAY_LEN(
++					sensor->prop_names.data_lanes,
++					bridge->data_lanes,
++					sensor->ssdb.lanes);
++	sensor->ipu_properties[1] = PROPERTY_ENTRY_REF_ARRAY(
++					sensor->prop_names.remote_endpoint,
++					sensor->remote_ref);
++}
++
++static void ipu_bridge_init_swnode_names(struct ipu_sensor *sensor)
++{
++	snprintf(sensor->node_names.remote_port,
++		 sizeof(sensor->node_names.remote_port),
++		 SWNODE_GRAPH_PORT_NAME_FMT, sensor->ssdb.link);
++	snprintf(sensor->node_names.port,
++		 sizeof(sensor->node_names.port),
++		 SWNODE_GRAPH_PORT_NAME_FMT, 0); /* Always port 0 */
++	snprintf(sensor->node_names.endpoint,
++		 sizeof(sensor->node_names.endpoint),
++		 SWNODE_GRAPH_ENDPOINT_NAME_FMT, 0); /* And endpoint 0 */
++}
++
++static void ipu_bridge_init_swnode_group(struct ipu_sensor *sensor)
++{
++	struct software_node *nodes = sensor->swnodes;
++
++	sensor->group[SWNODE_SENSOR_HID] = &nodes[SWNODE_SENSOR_HID];
++	sensor->group[SWNODE_SENSOR_PORT] = &nodes[SWNODE_SENSOR_PORT];
++	sensor->group[SWNODE_SENSOR_ENDPOINT] = &nodes[SWNODE_SENSOR_ENDPOINT];
++	sensor->group[SWNODE_IPU_PORT] = &nodes[SWNODE_IPU_PORT];
++	sensor->group[SWNODE_IPU_ENDPOINT] = &nodes[SWNODE_IPU_ENDPOINT];
++	if (sensor->ssdb.vcmtype)
++		sensor->group[SWNODE_VCM] =  &nodes[SWNODE_VCM];
++}
++
++static void ipu_bridge_create_connection_swnodes(struct ipu_bridge *bridge,
++						 struct ipu_sensor *sensor)
++{
++	struct software_node *nodes = sensor->swnodes;
++
++	ipu_bridge_init_swnode_names(sensor);
++
++	nodes[SWNODE_SENSOR_HID] = NODE_SENSOR(sensor->name,
++					       sensor->dev_properties);
++	nodes[SWNODE_SENSOR_PORT] = NODE_PORT(sensor->node_names.port,
++					      &nodes[SWNODE_SENSOR_HID]);
++	nodes[SWNODE_SENSOR_ENDPOINT] = NODE_ENDPOINT(
++						sensor->node_names.endpoint,
++						&nodes[SWNODE_SENSOR_PORT],
++						sensor->ep_properties);
++	nodes[SWNODE_IPU_PORT] = NODE_PORT(sensor->node_names.remote_port,
++					   &bridge->ipu_hid_node);
++	nodes[SWNODE_IPU_ENDPOINT] = NODE_ENDPOINT(
++						sensor->node_names.endpoint,
++						&nodes[SWNODE_IPU_PORT],
++						sensor->ipu_properties);
++	if (sensor->ssdb.vcmtype) {
++		/* append ssdb.link to distinguish VCM nodes with same HID */
++		snprintf(sensor->node_names.vcm, sizeof(sensor->node_names.vcm),
++			 "%s-%u", ipu_vcm_types[sensor->ssdb.vcmtype - 1],
++			 sensor->ssdb.link);
++		nodes[SWNODE_VCM] = NODE_VCM(sensor->node_names.vcm);
++	}
++
++	ipu_bridge_init_swnode_group(sensor);
++}
++
++static void ipu_bridge_instantiate_vcm_i2c_client(struct ipu_sensor *sensor)
++{
++	struct i2c_board_info board_info = { };
++	char name[16];
++
++	if (!sensor->ssdb.vcmtype)
++		return;
++
++	snprintf(name, sizeof(name), "%s-VCM", acpi_dev_name(sensor->adev));
++	board_info.dev_name = name;
++	strscpy(board_info.type, ipu_vcm_types[sensor->ssdb.vcmtype - 1],
++		ARRAY_SIZE(board_info.type));
++	board_info.swnode = &sensor->swnodes[SWNODE_VCM];
++
++	sensor->vcm_i2c_client =
++		i2c_acpi_new_device_by_fwnode(acpi_fwnode_handle(sensor->adev),
++					      1, &board_info);
++	if (IS_ERR(sensor->vcm_i2c_client)) {
++		dev_warn(&sensor->adev->dev, "Error instantiation VCM i2c-client: %ld\n",
++			 PTR_ERR(sensor->vcm_i2c_client));
++		sensor->vcm_i2c_client = NULL;
++	}
++}
++
++static void ipu_bridge_unregister_sensors(struct ipu_bridge *bridge)
++{
++	struct ipu_sensor *sensor;
++	unsigned int i;
++
++	for (i = 0; i < bridge->n_sensors; i++) {
++		sensor = &bridge->sensors[i];
++		software_node_unregister_node_group(sensor->group);
++		ACPI_FREE(sensor->pld);
++		acpi_dev_put(sensor->adev);
++		i2c_unregister_device(sensor->vcm_i2c_client);
++	}
++}
++
++static int ipu_bridge_connect_sensor(const struct ipu_sensor_config *cfg,
++				     struct ipu_bridge *bridge,
++				     struct pci_dev *ipu)
++{
++	struct fwnode_handle *fwnode, *primary;
++	struct ipu_sensor *sensor;
++	struct acpi_device *adev;
++	acpi_status status;
++	int ret;
++
++	for_each_acpi_dev_match(adev, cfg->hid, NULL, -1) {
++		if (!adev->status.enabled)
++			continue;
++
++		if (bridge->n_sensors >= CIO2_NUM_PORTS) {
++			acpi_dev_put(adev);
++			dev_err(&ipu->dev, "Exceeded available IPU ports\n");
++			return -EINVAL;
++		}
++
++		sensor = &bridge->sensors[bridge->n_sensors];
++		/*
++		 * Borrow our adev ref to the sensor for now, on success
++		 * acpi_dev_get(adev) is done further below.
++		 */
++		sensor->adev = adev;
++
++		ret = ipu_bridge_read_acpi_buffer(adev, "SSDB",
++						  &sensor->ssdb,
++						  sizeof(sensor->ssdb));
++		if (ret)
++			goto err_put_adev;
++
++		snprintf(sensor->name, sizeof(sensor->name), "%s-%u",
++			 cfg->hid, sensor->ssdb.link);
++
++		if (sensor->ssdb.vcmtype > ARRAY_SIZE(ipu_vcm_types)) {
++			dev_warn(&adev->dev, "Unknown VCM type %d\n",
++				 sensor->ssdb.vcmtype);
++			sensor->ssdb.vcmtype = 0;
++		}
++
++		status = acpi_get_physical_device_location(adev->handle, &sensor->pld);
++		if (ACPI_FAILURE(status)) {
++			ret = -ENODEV;
++			goto err_put_adev;
++		}
++
++		if (sensor->ssdb.lanes > IPU_MAX_LANES) {
++			dev_err(&adev->dev,
++				"Number of lanes in SSDB is invalid\n");
++			ret = -EINVAL;
++			goto err_free_pld;
++		}
++
++		ipu_bridge_create_fwnode_properties(sensor, bridge, cfg);
++		ipu_bridge_create_connection_swnodes(bridge, sensor);
++
++		ret = software_node_register_node_group(sensor->group);
++		if (ret)
++			goto err_free_pld;
++
++		fwnode = software_node_fwnode(&sensor->swnodes[
++						      SWNODE_SENSOR_HID]);
++		if (!fwnode) {
++			ret = -ENODEV;
++			goto err_free_swnodes;
++		}
++
++		sensor->adev = acpi_dev_get(adev);
++
++		primary = acpi_fwnode_handle(adev);
++		primary->secondary = fwnode;
++
++		ipu_bridge_instantiate_vcm_i2c_client(sensor);
++
++		dev_info(&ipu->dev, "Found supported sensor %s\n",
++			 acpi_dev_name(adev));
++
++		bridge->n_sensors++;
++	}
++
++	return 0;
++
++err_free_swnodes:
++	software_node_unregister_node_group(sensor->group);
++err_free_pld:
++	ACPI_FREE(sensor->pld);
++err_put_adev:
++	acpi_dev_put(adev);
++	return ret;
++}
++
++static int ipu_bridge_connect_sensors(struct ipu_bridge *bridge,
++				      struct pci_dev *ipu)
++{
++	unsigned int i;
++	int ret;
++
++	for (i = 0; i < ARRAY_SIZE(ipu_supported_sensors); i++) {
++		const struct ipu_sensor_config *cfg =
++			&ipu_supported_sensors[i];
++
++		ret = ipu_bridge_connect_sensor(cfg, bridge, ipu);
++		if (ret)
++			goto err_unregister_sensors;
++	}
++
++	return 0;
++
++err_unregister_sensors:
++	ipu_bridge_unregister_sensors(bridge);
++	return ret;
++}
++
++/*
++ * The VCM cannot be probed until the PMIC is completely setup. We cannot rely
++ * on -EPROBE_DEFER for this, since the consumer<->supplier relations between
++ * the VCM and regulators/clks are not described in ACPI, instead they are
++ * passed as board-data to the PMIC drivers. Since -PROBE_DEFER does not work
++ * for the clks/regulators the VCM i2c-clients must not be instantiated until
++ * the PMIC is fully setup.
++ *
++ * The sensor/VCM ACPI device has an ACPI _DEP on the PMIC, check this using the
++ * acpi_dev_ready_for_enumeration() helper, like the i2c-core-acpi code does
++ * for the sensors.
++ */
++static int ipu_bridge_sensors_are_ready(void)
++{
++	struct acpi_device *adev;
++	bool ready = true;
++	unsigned int i;
++
++	for (i = 0; i < ARRAY_SIZE(ipu_supported_sensors); i++) {
++		const struct ipu_sensor_config *cfg =
++			&ipu_supported_sensors[i];
++
++		for_each_acpi_dev_match(adev, cfg->hid, NULL, -1) {
++			if (!adev->status.enabled)
++				continue;
++
++			if (!acpi_dev_ready_for_enumeration(adev))
++				ready = false;
++		}
++	}
++
++	return ready;
++}
++
++int ipu_bridge_init(struct pci_dev *ipu)
++{
++	struct device *dev = &ipu->dev;
++	struct fwnode_handle *fwnode;
++	struct ipu_bridge *bridge;
++	unsigned int i;
++	int ret;
++
++	if (!ipu_bridge_sensors_are_ready())
++		return -EPROBE_DEFER;
++
++	bridge = kzalloc(sizeof(*bridge), GFP_KERNEL);
++	if (!bridge)
++		return -ENOMEM;
++
++	strscpy(bridge->ipu_node_name, IPU_HID,
++		sizeof(bridge->ipu_node_name));
++	bridge->ipu_hid_node.name = bridge->ipu_node_name;
++
++	ret = software_node_register(&bridge->ipu_hid_node);
++	if (ret < 0) {
++		dev_err(dev, "Failed to register the IPU HID node\n");
++		goto err_free_bridge;
++	}
++
++	/*
++	 * Map the lane arrangement, which is fixed for the IPU3 (meaning we
++	 * only need one, rather than one per sensor). We include it as a
++	 * member of the struct ipu_bridge rather than a global variable so
++	 * that it survives if the module is unloaded along with the rest of
++	 * the struct.
++	 */
++	for (i = 0; i < IPU_MAX_LANES; i++)
++		bridge->data_lanes[i] = i + 1;
++
++	ret = ipu_bridge_connect_sensors(bridge, ipu);
++	if (ret || bridge->n_sensors == 0)
++		goto err_unregister_ipu;
++
++	dev_info(dev, "Connected %d cameras\n", bridge->n_sensors);
++
++	fwnode = software_node_fwnode(&bridge->ipu_hid_node);
++	if (!fwnode) {
++		dev_err(dev, "Error getting fwnode from ipu software_node\n");
++		ret = -ENODEV;
++		goto err_unregister_sensors;
++	}
++
++	set_secondary_fwnode(dev, fwnode);
++
++	return 0;
++
++err_unregister_sensors:
++	ipu_bridge_unregister_sensors(bridge);
++err_unregister_ipu:
++	software_node_unregister(&bridge->ipu_hid_node);
++err_free_bridge:
++	kfree(bridge);
++
++	return ret;
++}
++EXPORT_SYMBOL_NS_GPL(ipu_bridge_init, INTEL_IPU_BRIDGE);
++
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("Intel IPU Sensors Bridge driver");
+diff --git a/drivers/media/pci/intel/ipu-bridge.h b/drivers/media/pci/intel/ipu-bridge.h
+new file mode 100644
+index 0000000000000..1ff0b2d04d929
+--- /dev/null
++++ b/drivers/media/pci/intel/ipu-bridge.h
+@@ -0,0 +1,153 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++/* Author: Dan Scally <djrscally@gmail.com> */
++#ifndef __IPU_BRIDGE_H
++#define __IPU_BRIDGE_H
++
++#include <linux/property.h>
++#include <linux/types.h>
++
++#include "ipu3/ipu3-cio2.h"
++
++struct i2c_client;
++
++#define IPU_HID				"INT343E"
++#define IPU_MAX_LANES				4
++#define MAX_NUM_LINK_FREQS			3
++
++/* Values are educated guesses as we don't have a spec */
++#define IPU_SENSOR_ROTATION_NORMAL		0
++#define IPU_SENSOR_ROTATION_INVERTED		1
++
++#define IPU_SENSOR_CONFIG(_HID, _NR, ...)	\
++	(const struct ipu_sensor_config) {	\
++		.hid = _HID,			\
++		.nr_link_freqs = _NR,		\
++		.link_freqs = { __VA_ARGS__ }	\
++	}
++
++#define NODE_SENSOR(_HID, _PROPS)		\
++	(const struct software_node) {		\
++		.name = _HID,			\
++		.properties = _PROPS,		\
++	}
++
++#define NODE_PORT(_PORT, _SENSOR_NODE)		\
++	(const struct software_node) {		\
++		.name = _PORT,			\
++		.parent = _SENSOR_NODE,		\
++	}
++
++#define NODE_ENDPOINT(_EP, _PORT, _PROPS)	\
++	(const struct software_node) {		\
++		.name = _EP,			\
++		.parent = _PORT,		\
++		.properties = _PROPS,		\
++	}
++
++#define NODE_VCM(_TYPE)				\
++	(const struct software_node) {		\
++		.name = _TYPE,			\
++	}
++
++enum ipu_sensor_swnodes {
++	SWNODE_SENSOR_HID,
++	SWNODE_SENSOR_PORT,
++	SWNODE_SENSOR_ENDPOINT,
++	SWNODE_IPU_PORT,
++	SWNODE_IPU_ENDPOINT,
++	/* Must be last because it is optional / maybe empty */
++	SWNODE_VCM,
++	SWNODE_COUNT
++};
++
++/* Data representation as it is in ACPI SSDB buffer */
++struct ipu_sensor_ssdb {
++	u8 version;
++	u8 sku;
++	u8 guid_csi2[16];
++	u8 devfunction;
++	u8 bus;
++	u32 dphylinkenfuses;
++	u32 clockdiv;
++	u8 link;
++	u8 lanes;
++	u32 csiparams[10];
++	u32 maxlanespeed;
++	u8 sensorcalibfileidx;
++	u8 sensorcalibfileidxInMBZ[3];
++	u8 romtype;
++	u8 vcmtype;
++	u8 platforminfo;
++	u8 platformsubinfo;
++	u8 flash;
++	u8 privacyled;
++	u8 degree;
++	u8 mipilinkdefined;
++	u32 mclkspeed;
++	u8 controllogicid;
++	u8 reserved1[3];
++	u8 mclkport;
++	u8 reserved2[13];
++} __packed;
++
++struct ipu_property_names {
++	char clock_frequency[16];
++	char rotation[9];
++	char orientation[12];
++	char bus_type[9];
++	char data_lanes[11];
++	char remote_endpoint[16];
++	char link_frequencies[17];
++};
++
++struct ipu_node_names {
++	char port[7];
++	char endpoint[11];
++	char remote_port[7];
++	char vcm[16];
++};
++
++struct ipu_sensor_config {
++	const char *hid;
++	const u8 nr_link_freqs;
++	const u64 link_freqs[MAX_NUM_LINK_FREQS];
++};
++
++struct ipu_sensor {
++	/* append ssdb.link(u8) in "-%u" format as suffix of HID */
++	char name[ACPI_ID_LEN + 4];
++	struct acpi_device *adev;
++	struct i2c_client *vcm_i2c_client;
++
++	/* SWNODE_COUNT + 1 for terminating NULL */
++	const struct software_node *group[SWNODE_COUNT + 1];
++	struct software_node swnodes[SWNODE_COUNT];
++	struct ipu_node_names node_names;
++
++	struct ipu_sensor_ssdb ssdb;
++	struct acpi_pld_info *pld;
++
++	struct ipu_property_names prop_names;
++	struct property_entry ep_properties[5];
++	struct property_entry dev_properties[5];
++	struct property_entry ipu_properties[3];
++	struct software_node_ref_args local_ref[1];
++	struct software_node_ref_args remote_ref[1];
++	struct software_node_ref_args vcm_ref[1];
++};
++
++struct ipu_bridge {
++	char ipu_node_name[ACPI_ID_LEN];
++	struct software_node ipu_hid_node;
++	u32 data_lanes[4];
++	unsigned int n_sensors;
++	struct ipu_sensor sensors[CIO2_NUM_PORTS];
++};
++
++#if IS_ENABLED(CONFIG_IPU_BRIDGE)
++int ipu_bridge_init(struct pci_dev *ipu);
++#else
++static inline int ipu_bridge_init(struct pci_dev *ipu) { return 0; }
++#endif
++
++#endif
+diff --git a/drivers/media/pci/intel/ipu3/Kconfig b/drivers/media/pci/intel/ipu3/Kconfig
+index 65b0c1598fbf1..0951545eab21a 100644
+--- a/drivers/media/pci/intel/ipu3/Kconfig
++++ b/drivers/media/pci/intel/ipu3/Kconfig
+@@ -8,6 +8,7 @@ config VIDEO_IPU3_CIO2
+ 	select VIDEO_V4L2_SUBDEV_API
+ 	select V4L2_FWNODE
+ 	select VIDEOBUF2_DMA_SG
++	select IPU_BRIDGE if CIO2_BRIDGE
+ 
+ 	help
+ 	  This is the Intel IPU3 CIO2 CSI-2 receiver unit, found in Intel
+diff --git a/drivers/media/pci/intel/ipu3/Makefile b/drivers/media/pci/intel/ipu3/Makefile
+index 933777e6ea8ab..429d516452e42 100644
+--- a/drivers/media/pci/intel/ipu3/Makefile
++++ b/drivers/media/pci/intel/ipu3/Makefile
+@@ -2,4 +2,3 @@
+ obj-$(CONFIG_VIDEO_IPU3_CIO2) += ipu3-cio2.o
+ 
+ ipu3-cio2-y += ipu3-cio2-main.o
+-ipu3-cio2-$(CONFIG_CIO2_BRIDGE) += cio2-bridge.o
+diff --git a/drivers/media/pci/intel/ipu3/cio2-bridge.c b/drivers/media/pci/intel/ipu3/cio2-bridge.c
+deleted file mode 100644
+index 3c2accfe54551..0000000000000
+--- a/drivers/media/pci/intel/ipu3/cio2-bridge.c
++++ /dev/null
+@@ -1,494 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/* Author: Dan Scally <djrscally@gmail.com> */
+-
+-#include <linux/acpi.h>
+-#include <linux/device.h>
+-#include <linux/i2c.h>
+-#include <linux/pci.h>
+-#include <linux/property.h>
+-#include <media/v4l2-fwnode.h>
+-
+-#include "cio2-bridge.h"
+-
+-/*
+- * Extend this array with ACPI Hardware IDs of devices known to be working
+- * plus the number of link-frequencies expected by their drivers, along with
+- * the frequency values in hertz. This is somewhat opportunistic way of adding
+- * support for this for now in the hopes of a better source for the information
+- * (possibly some encoded value in the SSDB buffer that we're unaware of)
+- * becoming apparent in the future.
+- *
+- * Do not add an entry for a sensor that is not actually supported.
+- */
+-static const struct cio2_sensor_config cio2_supported_sensors[] = {
+-	/* Omnivision OV5693 */
+-	CIO2_SENSOR_CONFIG("INT33BE", 1, 419200000),
+-	/* Omnivision OV8865 */
+-	CIO2_SENSOR_CONFIG("INT347A", 1, 360000000),
+-	/* Omnivision OV7251 */
+-	CIO2_SENSOR_CONFIG("INT347E", 1, 319200000),
+-	/* Omnivision OV2680 */
+-	CIO2_SENSOR_CONFIG("OVTI2680", 0),
+-	/* Omnivision ov8856 */
+-	CIO2_SENSOR_CONFIG("OVTI8856", 3, 180000000, 360000000, 720000000),
+-	/* Omnivision ov2740 */
+-	CIO2_SENSOR_CONFIG("INT3474", 1, 360000000),
+-	/* Hynix hi556 */
+-	CIO2_SENSOR_CONFIG("INT3537", 1, 437000000),
+-	/* Omnivision ov13b10 */
+-	CIO2_SENSOR_CONFIG("OVTIDB10", 1, 560000000),
+-};
+-
+-static const struct cio2_property_names prop_names = {
+-	.clock_frequency = "clock-frequency",
+-	.rotation = "rotation",
+-	.orientation = "orientation",
+-	.bus_type = "bus-type",
+-	.data_lanes = "data-lanes",
+-	.remote_endpoint = "remote-endpoint",
+-	.link_frequencies = "link-frequencies",
+-};
+-
+-static const char * const cio2_vcm_types[] = {
+-	"ad5823",
+-	"dw9714",
+-	"ad5816",
+-	"dw9719",
+-	"dw9718",
+-	"dw9806b",
+-	"wv517s",
+-	"lc898122xa",
+-	"lc898212axb",
+-};
+-
+-static int cio2_bridge_read_acpi_buffer(struct acpi_device *adev, char *id,
+-					void *data, u32 size)
+-{
+-	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+-	union acpi_object *obj;
+-	acpi_status status;
+-	int ret = 0;
+-
+-	status = acpi_evaluate_object(adev->handle, id, NULL, &buffer);
+-	if (ACPI_FAILURE(status))
+-		return -ENODEV;
+-
+-	obj = buffer.pointer;
+-	if (!obj) {
+-		dev_err(&adev->dev, "Couldn't locate ACPI buffer\n");
+-		return -ENODEV;
+-	}
+-
+-	if (obj->type != ACPI_TYPE_BUFFER) {
+-		dev_err(&adev->dev, "Not an ACPI buffer\n");
+-		ret = -ENODEV;
+-		goto out_free_buff;
+-	}
+-
+-	if (obj->buffer.length > size) {
+-		dev_err(&adev->dev, "Given buffer is too small\n");
+-		ret = -EINVAL;
+-		goto out_free_buff;
+-	}
+-
+-	memcpy(data, obj->buffer.pointer, obj->buffer.length);
+-
+-out_free_buff:
+-	kfree(buffer.pointer);
+-	return ret;
+-}
+-
+-static u32 cio2_bridge_parse_rotation(struct cio2_sensor *sensor)
+-{
+-	switch (sensor->ssdb.degree) {
+-	case CIO2_SENSOR_ROTATION_NORMAL:
+-		return 0;
+-	case CIO2_SENSOR_ROTATION_INVERTED:
+-		return 180;
+-	default:
+-		dev_warn(&sensor->adev->dev,
+-			 "Unknown rotation %d. Assume 0 degree rotation\n",
+-			 sensor->ssdb.degree);
+-		return 0;
+-	}
+-}
+-
+-static enum v4l2_fwnode_orientation cio2_bridge_parse_orientation(struct cio2_sensor *sensor)
+-{
+-	switch (sensor->pld->panel) {
+-	case ACPI_PLD_PANEL_FRONT:
+-		return V4L2_FWNODE_ORIENTATION_FRONT;
+-	case ACPI_PLD_PANEL_BACK:
+-		return V4L2_FWNODE_ORIENTATION_BACK;
+-	case ACPI_PLD_PANEL_TOP:
+-	case ACPI_PLD_PANEL_LEFT:
+-	case ACPI_PLD_PANEL_RIGHT:
+-	case ACPI_PLD_PANEL_UNKNOWN:
+-		return V4L2_FWNODE_ORIENTATION_EXTERNAL;
+-	default:
+-		dev_warn(&sensor->adev->dev, "Unknown _PLD panel value %d\n",
+-			 sensor->pld->panel);
+-		return V4L2_FWNODE_ORIENTATION_EXTERNAL;
+-	}
+-}
+-
+-static void cio2_bridge_create_fwnode_properties(
+-	struct cio2_sensor *sensor,
+-	struct cio2_bridge *bridge,
+-	const struct cio2_sensor_config *cfg)
+-{
+-	u32 rotation;
+-	enum v4l2_fwnode_orientation orientation;
+-
+-	rotation = cio2_bridge_parse_rotation(sensor);
+-	orientation = cio2_bridge_parse_orientation(sensor);
+-
+-	sensor->prop_names = prop_names;
+-
+-	sensor->local_ref[0] = SOFTWARE_NODE_REFERENCE(&sensor->swnodes[SWNODE_CIO2_ENDPOINT]);
+-	sensor->remote_ref[0] = SOFTWARE_NODE_REFERENCE(&sensor->swnodes[SWNODE_SENSOR_ENDPOINT]);
+-
+-	sensor->dev_properties[0] = PROPERTY_ENTRY_U32(
+-					sensor->prop_names.clock_frequency,
+-					sensor->ssdb.mclkspeed);
+-	sensor->dev_properties[1] = PROPERTY_ENTRY_U32(
+-					sensor->prop_names.rotation,
+-					rotation);
+-	sensor->dev_properties[2] = PROPERTY_ENTRY_U32(
+-					sensor->prop_names.orientation,
+-					orientation);
+-	if (sensor->ssdb.vcmtype) {
+-		sensor->vcm_ref[0] =
+-			SOFTWARE_NODE_REFERENCE(&sensor->swnodes[SWNODE_VCM]);
+-		sensor->dev_properties[3] =
+-			PROPERTY_ENTRY_REF_ARRAY("lens-focus", sensor->vcm_ref);
+-	}
+-
+-	sensor->ep_properties[0] = PROPERTY_ENTRY_U32(
+-					sensor->prop_names.bus_type,
+-					V4L2_FWNODE_BUS_TYPE_CSI2_DPHY);
+-	sensor->ep_properties[1] = PROPERTY_ENTRY_U32_ARRAY_LEN(
+-					sensor->prop_names.data_lanes,
+-					bridge->data_lanes,
+-					sensor->ssdb.lanes);
+-	sensor->ep_properties[2] = PROPERTY_ENTRY_REF_ARRAY(
+-					sensor->prop_names.remote_endpoint,
+-					sensor->local_ref);
+-
+-	if (cfg->nr_link_freqs > 0)
+-		sensor->ep_properties[3] = PROPERTY_ENTRY_U64_ARRAY_LEN(
+-			sensor->prop_names.link_frequencies,
+-			cfg->link_freqs,
+-			cfg->nr_link_freqs);
+-
+-	sensor->cio2_properties[0] = PROPERTY_ENTRY_U32_ARRAY_LEN(
+-					sensor->prop_names.data_lanes,
+-					bridge->data_lanes,
+-					sensor->ssdb.lanes);
+-	sensor->cio2_properties[1] = PROPERTY_ENTRY_REF_ARRAY(
+-					sensor->prop_names.remote_endpoint,
+-					sensor->remote_ref);
+-}
+-
+-static void cio2_bridge_init_swnode_names(struct cio2_sensor *sensor)
+-{
+-	snprintf(sensor->node_names.remote_port,
+-		 sizeof(sensor->node_names.remote_port),
+-		 SWNODE_GRAPH_PORT_NAME_FMT, sensor->ssdb.link);
+-	snprintf(sensor->node_names.port,
+-		 sizeof(sensor->node_names.port),
+-		 SWNODE_GRAPH_PORT_NAME_FMT, 0); /* Always port 0 */
+-	snprintf(sensor->node_names.endpoint,
+-		 sizeof(sensor->node_names.endpoint),
+-		 SWNODE_GRAPH_ENDPOINT_NAME_FMT, 0); /* And endpoint 0 */
+-}
+-
+-static void cio2_bridge_init_swnode_group(struct cio2_sensor *sensor)
+-{
+-	struct software_node *nodes = sensor->swnodes;
+-
+-	sensor->group[SWNODE_SENSOR_HID] = &nodes[SWNODE_SENSOR_HID];
+-	sensor->group[SWNODE_SENSOR_PORT] = &nodes[SWNODE_SENSOR_PORT];
+-	sensor->group[SWNODE_SENSOR_ENDPOINT] = &nodes[SWNODE_SENSOR_ENDPOINT];
+-	sensor->group[SWNODE_CIO2_PORT] = &nodes[SWNODE_CIO2_PORT];
+-	sensor->group[SWNODE_CIO2_ENDPOINT] = &nodes[SWNODE_CIO2_ENDPOINT];
+-	if (sensor->ssdb.vcmtype)
+-		sensor->group[SWNODE_VCM] =  &nodes[SWNODE_VCM];
+-}
+-
+-static void cio2_bridge_create_connection_swnodes(struct cio2_bridge *bridge,
+-						  struct cio2_sensor *sensor)
+-{
+-	struct software_node *nodes = sensor->swnodes;
+-	char vcm_name[ACPI_ID_LEN + 4];
+-
+-	cio2_bridge_init_swnode_names(sensor);
+-
+-	nodes[SWNODE_SENSOR_HID] = NODE_SENSOR(sensor->name,
+-					       sensor->dev_properties);
+-	nodes[SWNODE_SENSOR_PORT] = NODE_PORT(sensor->node_names.port,
+-					      &nodes[SWNODE_SENSOR_HID]);
+-	nodes[SWNODE_SENSOR_ENDPOINT] = NODE_ENDPOINT(
+-						sensor->node_names.endpoint,
+-						&nodes[SWNODE_SENSOR_PORT],
+-						sensor->ep_properties);
+-	nodes[SWNODE_CIO2_PORT] = NODE_PORT(sensor->node_names.remote_port,
+-					    &bridge->cio2_hid_node);
+-	nodes[SWNODE_CIO2_ENDPOINT] = NODE_ENDPOINT(
+-						sensor->node_names.endpoint,
+-						&nodes[SWNODE_CIO2_PORT],
+-						sensor->cio2_properties);
+-	if (sensor->ssdb.vcmtype) {
+-		/* append ssdb.link to distinguish VCM nodes with same HID */
+-		snprintf(vcm_name, sizeof(vcm_name), "%s-%u",
+-			 cio2_vcm_types[sensor->ssdb.vcmtype - 1],
+-			 sensor->ssdb.link);
+-		nodes[SWNODE_VCM] = NODE_VCM(vcm_name);
+-	}
+-
+-	cio2_bridge_init_swnode_group(sensor);
+-}
+-
+-static void cio2_bridge_instantiate_vcm_i2c_client(struct cio2_sensor *sensor)
+-{
+-	struct i2c_board_info board_info = { };
+-	char name[16];
+-
+-	if (!sensor->ssdb.vcmtype)
+-		return;
+-
+-	snprintf(name, sizeof(name), "%s-VCM", acpi_dev_name(sensor->adev));
+-	board_info.dev_name = name;
+-	strscpy(board_info.type, cio2_vcm_types[sensor->ssdb.vcmtype - 1],
+-		ARRAY_SIZE(board_info.type));
+-	board_info.swnode = &sensor->swnodes[SWNODE_VCM];
+-
+-	sensor->vcm_i2c_client =
+-		i2c_acpi_new_device_by_fwnode(acpi_fwnode_handle(sensor->adev),
+-					      1, &board_info);
+-	if (IS_ERR(sensor->vcm_i2c_client)) {
+-		dev_warn(&sensor->adev->dev, "Error instantiation VCM i2c-client: %ld\n",
+-			 PTR_ERR(sensor->vcm_i2c_client));
+-		sensor->vcm_i2c_client = NULL;
+-	}
+-}
+-
+-static void cio2_bridge_unregister_sensors(struct cio2_bridge *bridge)
+-{
+-	struct cio2_sensor *sensor;
+-	unsigned int i;
+-
+-	for (i = 0; i < bridge->n_sensors; i++) {
+-		sensor = &bridge->sensors[i];
+-		software_node_unregister_node_group(sensor->group);
+-		ACPI_FREE(sensor->pld);
+-		acpi_dev_put(sensor->adev);
+-		i2c_unregister_device(sensor->vcm_i2c_client);
+-	}
+-}
+-
+-static int cio2_bridge_connect_sensor(const struct cio2_sensor_config *cfg,
+-				      struct cio2_bridge *bridge,
+-				      struct pci_dev *cio2)
+-{
+-	struct fwnode_handle *fwnode, *primary;
+-	struct cio2_sensor *sensor;
+-	struct acpi_device *adev;
+-	acpi_status status;
+-	int ret;
+-
+-	for_each_acpi_dev_match(adev, cfg->hid, NULL, -1) {
+-		if (!adev->status.enabled)
+-			continue;
+-
+-		if (bridge->n_sensors >= CIO2_NUM_PORTS) {
+-			acpi_dev_put(adev);
+-			dev_err(&cio2->dev, "Exceeded available CIO2 ports\n");
+-			return -EINVAL;
+-		}
+-
+-		sensor = &bridge->sensors[bridge->n_sensors];
+-
+-		ret = cio2_bridge_read_acpi_buffer(adev, "SSDB",
+-						   &sensor->ssdb,
+-						   sizeof(sensor->ssdb));
+-		if (ret)
+-			goto err_put_adev;
+-
+-		snprintf(sensor->name, sizeof(sensor->name), "%s-%u",
+-			 cfg->hid, sensor->ssdb.link);
+-
+-		if (sensor->ssdb.vcmtype > ARRAY_SIZE(cio2_vcm_types)) {
+-			dev_warn(&adev->dev, "Unknown VCM type %d\n",
+-				 sensor->ssdb.vcmtype);
+-			sensor->ssdb.vcmtype = 0;
+-		}
+-
+-		status = acpi_get_physical_device_location(adev->handle, &sensor->pld);
+-		if (ACPI_FAILURE(status)) {
+-			ret = -ENODEV;
+-			goto err_put_adev;
+-		}
+-
+-		if (sensor->ssdb.lanes > CIO2_MAX_LANES) {
+-			dev_err(&adev->dev,
+-				"Number of lanes in SSDB is invalid\n");
+-			ret = -EINVAL;
+-			goto err_free_pld;
+-		}
+-
+-		cio2_bridge_create_fwnode_properties(sensor, bridge, cfg);
+-		cio2_bridge_create_connection_swnodes(bridge, sensor);
+-
+-		ret = software_node_register_node_group(sensor->group);
+-		if (ret)
+-			goto err_free_pld;
+-
+-		fwnode = software_node_fwnode(&sensor->swnodes[
+-						      SWNODE_SENSOR_HID]);
+-		if (!fwnode) {
+-			ret = -ENODEV;
+-			goto err_free_swnodes;
+-		}
+-
+-		sensor->adev = acpi_dev_get(adev);
+-
+-		primary = acpi_fwnode_handle(adev);
+-		primary->secondary = fwnode;
+-
+-		cio2_bridge_instantiate_vcm_i2c_client(sensor);
+-
+-		dev_info(&cio2->dev, "Found supported sensor %s\n",
+-			 acpi_dev_name(adev));
+-
+-		bridge->n_sensors++;
+-	}
+-
+-	return 0;
+-
+-err_free_swnodes:
+-	software_node_unregister_node_group(sensor->group);
+-err_free_pld:
+-	ACPI_FREE(sensor->pld);
+-err_put_adev:
+-	acpi_dev_put(adev);
+-	return ret;
+-}
+-
+-static int cio2_bridge_connect_sensors(struct cio2_bridge *bridge,
+-				       struct pci_dev *cio2)
+-{
+-	unsigned int i;
+-	int ret;
+-
+-	for (i = 0; i < ARRAY_SIZE(cio2_supported_sensors); i++) {
+-		const struct cio2_sensor_config *cfg =
+-			&cio2_supported_sensors[i];
+-
+-		ret = cio2_bridge_connect_sensor(cfg, bridge, cio2);
+-		if (ret)
+-			goto err_unregister_sensors;
+-	}
+-
+-	return 0;
+-
+-err_unregister_sensors:
+-	cio2_bridge_unregister_sensors(bridge);
+-	return ret;
+-}
+-
+-/*
+- * The VCM cannot be probed until the PMIC is completely setup. We cannot rely
+- * on -EPROBE_DEFER for this, since the consumer<->supplier relations between
+- * the VCM and regulators/clks are not described in ACPI, instead they are
+- * passed as board-data to the PMIC drivers. Since -PROBE_DEFER does not work
+- * for the clks/regulators the VCM i2c-clients must not be instantiated until
+- * the PMIC is fully setup.
+- *
+- * The sensor/VCM ACPI device has an ACPI _DEP on the PMIC, check this using the
+- * acpi_dev_ready_for_enumeration() helper, like the i2c-core-acpi code does
+- * for the sensors.
+- */
+-static int cio2_bridge_sensors_are_ready(void)
+-{
+-	struct acpi_device *adev;
+-	bool ready = true;
+-	unsigned int i;
+-
+-	for (i = 0; i < ARRAY_SIZE(cio2_supported_sensors); i++) {
+-		const struct cio2_sensor_config *cfg =
+-			&cio2_supported_sensors[i];
+-
+-		for_each_acpi_dev_match(adev, cfg->hid, NULL, -1) {
+-			if (!adev->status.enabled)
+-				continue;
+-
+-			if (!acpi_dev_ready_for_enumeration(adev))
+-				ready = false;
+-		}
+-	}
+-
+-	return ready;
+-}
+-
+-int cio2_bridge_init(struct pci_dev *cio2)
+-{
+-	struct device *dev = &cio2->dev;
+-	struct fwnode_handle *fwnode;
+-	struct cio2_bridge *bridge;
+-	unsigned int i;
+-	int ret;
+-
+-	if (!cio2_bridge_sensors_are_ready())
+-		return -EPROBE_DEFER;
+-
+-	bridge = kzalloc(sizeof(*bridge), GFP_KERNEL);
+-	if (!bridge)
+-		return -ENOMEM;
+-
+-	strscpy(bridge->cio2_node_name, CIO2_HID,
+-		sizeof(bridge->cio2_node_name));
+-	bridge->cio2_hid_node.name = bridge->cio2_node_name;
+-
+-	ret = software_node_register(&bridge->cio2_hid_node);
+-	if (ret < 0) {
+-		dev_err(dev, "Failed to register the CIO2 HID node\n");
+-		goto err_free_bridge;
+-	}
+-
+-	/*
+-	 * Map the lane arrangement, which is fixed for the IPU3 (meaning we
+-	 * only need one, rather than one per sensor). We include it as a
+-	 * member of the struct cio2_bridge rather than a global variable so
+-	 * that it survives if the module is unloaded along with the rest of
+-	 * the struct.
+-	 */
+-	for (i = 0; i < CIO2_MAX_LANES; i++)
+-		bridge->data_lanes[i] = i + 1;
+-
+-	ret = cio2_bridge_connect_sensors(bridge, cio2);
+-	if (ret || bridge->n_sensors == 0)
+-		goto err_unregister_cio2;
+-
+-	dev_info(dev, "Connected %d cameras\n", bridge->n_sensors);
+-
+-	fwnode = software_node_fwnode(&bridge->cio2_hid_node);
+-	if (!fwnode) {
+-		dev_err(dev, "Error getting fwnode from cio2 software_node\n");
+-		ret = -ENODEV;
+-		goto err_unregister_sensors;
+-	}
+-
+-	set_secondary_fwnode(dev, fwnode);
+-
+-	return 0;
+-
+-err_unregister_sensors:
+-	cio2_bridge_unregister_sensors(bridge);
+-err_unregister_cio2:
+-	software_node_unregister(&bridge->cio2_hid_node);
+-err_free_bridge:
+-	kfree(bridge);
+-
+-	return ret;
+-}
+diff --git a/drivers/media/pci/intel/ipu3/cio2-bridge.h b/drivers/media/pci/intel/ipu3/cio2-bridge.h
+deleted file mode 100644
+index b76ed8a641e20..0000000000000
+--- a/drivers/media/pci/intel/ipu3/cio2-bridge.h
++++ /dev/null
+@@ -1,146 +0,0 @@
+-/* SPDX-License-Identifier: GPL-2.0 */
+-/* Author: Dan Scally <djrscally@gmail.com> */
+-#ifndef __CIO2_BRIDGE_H
+-#define __CIO2_BRIDGE_H
+-
+-#include <linux/property.h>
+-#include <linux/types.h>
+-
+-#include "ipu3-cio2.h"
+-
+-struct i2c_client;
+-
+-#define CIO2_HID				"INT343E"
+-#define CIO2_MAX_LANES				4
+-#define MAX_NUM_LINK_FREQS			3
+-
+-/* Values are educated guesses as we don't have a spec */
+-#define CIO2_SENSOR_ROTATION_NORMAL		0
+-#define CIO2_SENSOR_ROTATION_INVERTED		1
+-
+-#define CIO2_SENSOR_CONFIG(_HID, _NR, ...)	\
+-	(const struct cio2_sensor_config) {	\
+-		.hid = _HID,			\
+-		.nr_link_freqs = _NR,		\
+-		.link_freqs = { __VA_ARGS__ }	\
+-	}
+-
+-#define NODE_SENSOR(_HID, _PROPS)		\
+-	(const struct software_node) {		\
+-		.name = _HID,			\
+-		.properties = _PROPS,		\
+-	}
+-
+-#define NODE_PORT(_PORT, _SENSOR_NODE)		\
+-	(const struct software_node) {		\
+-		.name = _PORT,			\
+-		.parent = _SENSOR_NODE,		\
+-	}
+-
+-#define NODE_ENDPOINT(_EP, _PORT, _PROPS)	\
+-	(const struct software_node) {		\
+-		.name = _EP,			\
+-		.parent = _PORT,		\
+-		.properties = _PROPS,		\
+-	}
+-
+-#define NODE_VCM(_TYPE)				\
+-	(const struct software_node) {		\
+-		.name = _TYPE,			\
+-	}
+-
+-enum cio2_sensor_swnodes {
+-	SWNODE_SENSOR_HID,
+-	SWNODE_SENSOR_PORT,
+-	SWNODE_SENSOR_ENDPOINT,
+-	SWNODE_CIO2_PORT,
+-	SWNODE_CIO2_ENDPOINT,
+-	/* Must be last because it is optional / maybe empty */
+-	SWNODE_VCM,
+-	SWNODE_COUNT
+-};
+-
+-/* Data representation as it is in ACPI SSDB buffer */
+-struct cio2_sensor_ssdb {
+-	u8 version;
+-	u8 sku;
+-	u8 guid_csi2[16];
+-	u8 devfunction;
+-	u8 bus;
+-	u32 dphylinkenfuses;
+-	u32 clockdiv;
+-	u8 link;
+-	u8 lanes;
+-	u32 csiparams[10];
+-	u32 maxlanespeed;
+-	u8 sensorcalibfileidx;
+-	u8 sensorcalibfileidxInMBZ[3];
+-	u8 romtype;
+-	u8 vcmtype;
+-	u8 platforminfo;
+-	u8 platformsubinfo;
+-	u8 flash;
+-	u8 privacyled;
+-	u8 degree;
+-	u8 mipilinkdefined;
+-	u32 mclkspeed;
+-	u8 controllogicid;
+-	u8 reserved1[3];
+-	u8 mclkport;
+-	u8 reserved2[13];
+-} __packed;
+-
+-struct cio2_property_names {
+-	char clock_frequency[16];
+-	char rotation[9];
+-	char orientation[12];
+-	char bus_type[9];
+-	char data_lanes[11];
+-	char remote_endpoint[16];
+-	char link_frequencies[17];
+-};
+-
+-struct cio2_node_names {
+-	char port[7];
+-	char endpoint[11];
+-	char remote_port[7];
+-};
+-
+-struct cio2_sensor_config {
+-	const char *hid;
+-	const u8 nr_link_freqs;
+-	const u64 link_freqs[MAX_NUM_LINK_FREQS];
+-};
+-
+-struct cio2_sensor {
+-	/* append ssdb.link(u8) in "-%u" format as suffix of HID */
+-	char name[ACPI_ID_LEN + 4];
+-	struct acpi_device *adev;
+-	struct i2c_client *vcm_i2c_client;
+-
+-	/* SWNODE_COUNT + 1 for terminating NULL */
+-	const struct software_node *group[SWNODE_COUNT + 1];
+-	struct software_node swnodes[SWNODE_COUNT];
+-	struct cio2_node_names node_names;
+-
+-	struct cio2_sensor_ssdb ssdb;
+-	struct acpi_pld_info *pld;
+-
+-	struct cio2_property_names prop_names;
+-	struct property_entry ep_properties[5];
+-	struct property_entry dev_properties[5];
+-	struct property_entry cio2_properties[3];
+-	struct software_node_ref_args local_ref[1];
+-	struct software_node_ref_args remote_ref[1];
+-	struct software_node_ref_args vcm_ref[1];
+-};
+-
+-struct cio2_bridge {
+-	char cio2_node_name[ACPI_ID_LEN];
+-	struct software_node cio2_hid_node;
+-	u32 data_lanes[4];
+-	unsigned int n_sensors;
+-	struct cio2_sensor sensors[CIO2_NUM_PORTS];
+-};
+-
+-#endif
+diff --git a/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c b/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
+index 34984a7474ed8..dc09fbdb062b0 100644
+--- a/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
++++ b/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
+@@ -29,6 +29,7 @@
+ #include <media/v4l2-ioctl.h>
+ #include <media/videobuf2-dma-sg.h>
+ 
++#include "../ipu-bridge.h"
+ #include "ipu3-cio2.h"
+ 
+ struct ipu3_cio2_fmt {
+@@ -1724,7 +1725,7 @@ static int cio2_pci_probe(struct pci_dev *pci_dev,
+ 			return -EINVAL;
+ 		}
+ 
+-		r = cio2_bridge_init(pci_dev);
++		r = ipu_bridge_init(pci_dev);
+ 		if (r)
+ 			return r;
+ 	}
+@@ -2057,3 +2058,4 @@ MODULE_AUTHOR("Yuning Pu <yuning.pu@intel.com>");
+ MODULE_AUTHOR("Yong Zhi <yong.zhi@intel.com>");
+ MODULE_LICENSE("GPL v2");
+ MODULE_DESCRIPTION("IPU3 CIO2 driver");
++MODULE_IMPORT_NS(INTEL_IPU_BRIDGE);
+diff --git a/drivers/media/pci/intel/ipu3/ipu3-cio2.h b/drivers/media/pci/intel/ipu3/ipu3-cio2.h
+index 3a1f394e05aa7..d731ce8adbe31 100644
+--- a/drivers/media/pci/intel/ipu3/ipu3-cio2.h
++++ b/drivers/media/pci/intel/ipu3/ipu3-cio2.h
+@@ -459,10 +459,4 @@ static inline struct cio2_queue *vb2q_to_cio2_queue(struct vb2_queue *vq)
+ 	return container_of(vq, struct cio2_queue, vbq);
+ }
+ 
+-#if IS_ENABLED(CONFIG_CIO2_BRIDGE)
+-int cio2_bridge_init(struct pci_dev *cio2);
+-#else
+-static inline int cio2_bridge_init(struct pci_dev *cio2) { return 0; }
+-#endif
+-
+ #endif
+diff --git a/drivers/media/platform/amphion/vdec.c b/drivers/media/platform/amphion/vdec.c
+index 6515f3cdb7a74..133d77d1ea0c3 100644
+--- a/drivers/media/platform/amphion/vdec.c
++++ b/drivers/media/platform/amphion/vdec.c
+@@ -299,7 +299,8 @@ static int vdec_update_state(struct vpu_inst *inst, enum vpu_codec_state state,
+ 		vdec->state = VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE;
+ 
+ 	if (inst->state != pre_state)
+-		vpu_trace(inst->dev, "[%d] %d -> %d\n", inst->id, pre_state, inst->state);
++		vpu_trace(inst->dev, "[%d] %s -> %s\n", inst->id,
++			  vpu_codec_state_name(pre_state), vpu_codec_state_name(inst->state));
+ 
+ 	if (inst->state == VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE)
+ 		vdec_handle_resolution_change(inst);
+@@ -741,6 +742,21 @@ static int vdec_frame_decoded(struct vpu_inst *inst, void *arg)
+ 		dev_info(inst->dev, "[%d] buf[%d] has been decoded\n", inst->id, info->id);
+ 	vpu_set_buffer_state(vbuf, VPU_BUF_STATE_DECODED);
+ 	vdec->decoded_frame_count++;
++	if (vdec->params.display_delay_enable) {
++		struct vpu_format *cur_fmt;
++
++		cur_fmt = vpu_get_format(inst, inst->cap_format.type);
++		vpu_set_buffer_state(vbuf, VPU_BUF_STATE_READY);
++		for (int i = 0; i < vbuf->vb2_buf.num_planes; i++)
++			vb2_set_plane_payload(&vbuf->vb2_buf,
++					      i, vpu_get_fmt_plane_size(cur_fmt, i));
++		vbuf->field = cur_fmt->field;
++		vbuf->sequence = vdec->sequence++;
++		dev_dbg(inst->dev, "[%d][OUTPUT TS]%32lld\n", inst->id, vbuf->vb2_buf.timestamp);
++
++		v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE);
++		vdec->display_frame_count++;
++	}
+ exit:
+ 	vpu_inst_unlock(inst);
+ 
+@@ -768,14 +784,14 @@ static void vdec_buf_done(struct vpu_inst *inst, struct vpu_frame_info *frame)
+ 	struct vpu_format *cur_fmt;
+ 	struct vpu_vb2_buffer *vpu_buf;
+ 	struct vb2_v4l2_buffer *vbuf;
+-	u32 sequence;
+ 	int i;
+ 
+ 	if (!frame)
+ 		return;
+ 
+ 	vpu_inst_lock(inst);
+-	sequence = vdec->sequence++;
++	if (!vdec->params.display_delay_enable)
++		vdec->sequence++;
+ 	vpu_buf = vdec_find_buffer(inst, frame->luma);
+ 	vpu_inst_unlock(inst);
+ 	if (!vpu_buf) {
+@@ -794,13 +810,17 @@ static void vdec_buf_done(struct vpu_inst *inst, struct vpu_frame_info *frame)
+ 		dev_err(inst->dev, "[%d] buffer id(%d, %d) dismatch\n",
+ 			inst->id, vbuf->vb2_buf.index, frame->id);
+ 
++	if (vpu_get_buffer_state(vbuf) == VPU_BUF_STATE_READY && vdec->params.display_delay_enable)
++		return;
++
+ 	if (vpu_get_buffer_state(vbuf) != VPU_BUF_STATE_DECODED)
+ 		dev_err(inst->dev, "[%d] buffer(%d) ready without decoded\n", inst->id, frame->id);
++
+ 	vpu_set_buffer_state(vbuf, VPU_BUF_STATE_READY);
+ 	for (i = 0; i < vbuf->vb2_buf.num_planes; i++)
+ 		vb2_set_plane_payload(&vbuf->vb2_buf, i, vpu_get_fmt_plane_size(cur_fmt, i));
+ 	vbuf->field = cur_fmt->field;
+-	vbuf->sequence = sequence;
++	vbuf->sequence = vdec->sequence;
+ 	dev_dbg(inst->dev, "[%d][OUTPUT TS]%32lld\n", inst->id, vbuf->vb2_buf.timestamp);
+ 
+ 	v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE);
+@@ -999,6 +1019,7 @@ static int vdec_response_frame_abnormal(struct vpu_inst *inst)
+ {
+ 	struct vdec_t *vdec = inst->priv;
+ 	struct vpu_fs_info info;
++	int ret;
+ 
+ 	if (!vdec->req_frame_count)
+ 		return 0;
+@@ -1006,7 +1027,9 @@ static int vdec_response_frame_abnormal(struct vpu_inst *inst)
+ 	memset(&info, 0, sizeof(info));
+ 	info.type = MEM_RES_FRAME;
+ 	info.tag = vdec->seq_tag + 0xf0;
+-	vpu_session_alloc_fs(inst, &info);
++	ret = vpu_session_alloc_fs(inst, &info);
++	if (ret)
++		return ret;
+ 	vdec->req_frame_count--;
+ 
+ 	return 0;
+@@ -1037,8 +1060,8 @@ static int vdec_response_frame(struct vpu_inst *inst, struct vb2_v4l2_buffer *vb
+ 		return -EINVAL;
+ 	}
+ 
+-	dev_dbg(inst->dev, "[%d] state = %d, alloc fs %d, tag = 0x%x\n",
+-		inst->id, inst->state, vbuf->vb2_buf.index, vdec->seq_tag);
++	dev_dbg(inst->dev, "[%d] state = %s, alloc fs %d, tag = 0x%x\n",
++		inst->id, vpu_codec_state_name(inst->state), vbuf->vb2_buf.index, vdec->seq_tag);
+ 	vpu_buf = to_vpu_vb2_buffer(vbuf);
+ 
+ 	memset(&info, 0, sizeof(info));
+@@ -1400,7 +1423,7 @@ static void vdec_abort(struct vpu_inst *inst)
+ 	struct vpu_rpc_buffer_desc desc;
+ 	int ret;
+ 
+-	vpu_trace(inst->dev, "[%d] state = %d\n", inst->id, inst->state);
++	vpu_trace(inst->dev, "[%d] state = %s\n", inst->id, vpu_codec_state_name(inst->state));
+ 
+ 	vdec->aborting = true;
+ 	vpu_iface_add_scode(inst, SCODE_PADDING_ABORT);
+@@ -1453,9 +1476,7 @@ static void vdec_release(struct vpu_inst *inst)
+ {
+ 	if (inst->id != VPU_INST_NULL_ID)
+ 		vpu_trace(inst->dev, "[%d]\n", inst->id);
+-	vpu_inst_lock(inst);
+ 	vdec_stop(inst, true);
+-	vpu_inst_unlock(inst);
+ }
+ 
+ static void vdec_cleanup(struct vpu_inst *inst)
+diff --git a/drivers/media/platform/amphion/venc.c b/drivers/media/platform/amphion/venc.c
+index 58480e2755ec4..4eb57d793a9c0 100644
+--- a/drivers/media/platform/amphion/venc.c
++++ b/drivers/media/platform/amphion/venc.c
+@@ -268,7 +268,7 @@ static int venc_g_parm(struct file *file, void *fh, struct v4l2_streamparm *parm
+ {
+ 	struct vpu_inst *inst = to_inst(file);
+ 	struct venc_t *venc = inst->priv;
+-	struct v4l2_fract *timeperframe = &parm->parm.capture.timeperframe;
++	struct v4l2_fract *timeperframe;
+ 
+ 	if (!parm)
+ 		return -EINVAL;
+@@ -279,6 +279,7 @@ static int venc_g_parm(struct file *file, void *fh, struct v4l2_streamparm *parm
+ 	if (!vpu_helper_check_type(inst, parm->type))
+ 		return -EINVAL;
+ 
++	timeperframe = &parm->parm.capture.timeperframe;
+ 	parm->parm.capture.capability = V4L2_CAP_TIMEPERFRAME;
+ 	parm->parm.capture.readbuffers = 0;
+ 	timeperframe->numerator = venc->params.frame_rate.numerator;
+@@ -291,7 +292,7 @@ static int venc_s_parm(struct file *file, void *fh, struct v4l2_streamparm *parm
+ {
+ 	struct vpu_inst *inst = to_inst(file);
+ 	struct venc_t *venc = inst->priv;
+-	struct v4l2_fract *timeperframe = &parm->parm.capture.timeperframe;
++	struct v4l2_fract *timeperframe;
+ 	unsigned long n, d;
+ 
+ 	if (!parm)
+@@ -303,6 +304,7 @@ static int venc_s_parm(struct file *file, void *fh, struct v4l2_streamparm *parm
+ 	if (!vpu_helper_check_type(inst, parm->type))
+ 		return -EINVAL;
+ 
++	timeperframe = &parm->parm.capture.timeperframe;
+ 	if (!timeperframe->numerator)
+ 		timeperframe->numerator = venc->params.frame_rate.numerator;
+ 	if (!timeperframe->denominator)
+diff --git a/drivers/media/platform/amphion/vpu.h b/drivers/media/platform/amphion/vpu.h
+index 3bfe193722af4..5a701f64289ef 100644
+--- a/drivers/media/platform/amphion/vpu.h
++++ b/drivers/media/platform/amphion/vpu.h
+@@ -355,6 +355,9 @@ void vpu_inst_record_flow(struct vpu_inst *inst, u32 flow);
+ int vpu_core_driver_init(void);
+ void vpu_core_driver_exit(void);
+ 
++const char *vpu_id_name(u32 id);
++const char *vpu_codec_state_name(enum vpu_codec_state state);
++
+ extern bool debug;
+ #define vpu_trace(dev, fmt, arg...)					\
+ 	do {								\
+diff --git a/drivers/media/platform/amphion/vpu_cmds.c b/drivers/media/platform/amphion/vpu_cmds.c
+index fa581ba6bab2d..235b71398d403 100644
+--- a/drivers/media/platform/amphion/vpu_cmds.c
++++ b/drivers/media/platform/amphion/vpu_cmds.c
+@@ -98,7 +98,7 @@ static struct vpu_cmd_t *vpu_alloc_cmd(struct vpu_inst *inst, u32 id, void *data
+ 	cmd->id = id;
+ 	ret = vpu_iface_pack_cmd(inst->core, cmd->pkt, inst->id, id, data);
+ 	if (ret) {
+-		dev_err(inst->dev, "iface pack cmd(%d) fail\n", id);
++		dev_err(inst->dev, "iface pack cmd %s fail\n", vpu_id_name(id));
+ 		vfree(cmd->pkt);
+ 		vfree(cmd);
+ 		return NULL;
+@@ -125,14 +125,14 @@ static int vpu_session_process_cmd(struct vpu_inst *inst, struct vpu_cmd_t *cmd)
+ {
+ 	int ret;
+ 
+-	dev_dbg(inst->dev, "[%d]send cmd(0x%x)\n", inst->id, cmd->id);
++	dev_dbg(inst->dev, "[%d]send cmd %s\n", inst->id, vpu_id_name(cmd->id));
+ 	vpu_iface_pre_send_cmd(inst);
+ 	ret = vpu_cmd_send(inst->core, cmd->pkt);
+ 	if (!ret) {
+ 		vpu_iface_post_send_cmd(inst);
+ 		vpu_inst_record_flow(inst, cmd->id);
+ 	} else {
+-		dev_err(inst->dev, "[%d] iface send cmd(0x%x) fail\n", inst->id, cmd->id);
++		dev_err(inst->dev, "[%d] iface send cmd %s fail\n", inst->id, vpu_id_name(cmd->id));
+ 	}
+ 
+ 	return ret;
+@@ -149,7 +149,8 @@ static void vpu_process_cmd_request(struct vpu_inst *inst)
+ 	list_for_each_entry_safe(cmd, tmp, &inst->cmd_q, list) {
+ 		list_del_init(&cmd->list);
+ 		if (vpu_session_process_cmd(inst, cmd))
+-			dev_err(inst->dev, "[%d] process cmd(%d) fail\n", inst->id, cmd->id);
++			dev_err(inst->dev, "[%d] process cmd %s fail\n",
++				inst->id, vpu_id_name(cmd->id));
+ 		if (cmd->request) {
+ 			inst->pending = (void *)cmd;
+ 			break;
+@@ -305,7 +306,8 @@ static void vpu_core_keep_active(struct vpu_core *core)
+ 
+ 	dev_dbg(core->dev, "try to wake up\n");
+ 	mutex_lock(&core->cmd_lock);
+-	vpu_cmd_send(core, &pkt);
++	if (vpu_cmd_send(core, &pkt))
++		dev_err(core->dev, "fail to keep active\n");
+ 	mutex_unlock(&core->cmd_lock);
+ }
+ 
+@@ -313,7 +315,7 @@ static int vpu_session_send_cmd(struct vpu_inst *inst, u32 id, void *data)
+ {
+ 	unsigned long key;
+ 	int sync = false;
+-	int ret = -EINVAL;
++	int ret;
+ 
+ 	if (inst->id < 0)
+ 		return -EINVAL;
+@@ -339,7 +341,7 @@ static int vpu_session_send_cmd(struct vpu_inst *inst, u32 id, void *data)
+ 
+ exit:
+ 	if (ret)
+-		dev_err(inst->dev, "[%d] send cmd(0x%x) fail\n", inst->id, id);
++		dev_err(inst->dev, "[%d] send cmd %s fail\n", inst->id, vpu_id_name(id));
+ 
+ 	return ret;
+ }
+diff --git a/drivers/media/platform/amphion/vpu_core.c b/drivers/media/platform/amphion/vpu_core.c
+index 7863b7b53494c..2bb9f187e163c 100644
+--- a/drivers/media/platform/amphion/vpu_core.c
++++ b/drivers/media/platform/amphion/vpu_core.c
+@@ -88,6 +88,8 @@ static int vpu_core_boot_done(struct vpu_core *core)
+ 
+ 		core->supported_instance_count = min(core->supported_instance_count, count);
+ 	}
++	if (core->supported_instance_count >= BITS_PER_TYPE(core->instance_mask))
++		core->supported_instance_count = BITS_PER_TYPE(core->instance_mask);
+ 	core->fw_version = fw_version;
+ 	vpu_core_set_state(core, VPU_CORE_ACTIVE);
+ 
+diff --git a/drivers/media/platform/amphion/vpu_dbg.c b/drivers/media/platform/amphion/vpu_dbg.c
+index 44b830ae01d8c..982c2c777484c 100644
+--- a/drivers/media/platform/amphion/vpu_dbg.c
++++ b/drivers/media/platform/amphion/vpu_dbg.c
+@@ -50,6 +50,13 @@ static char *vpu_stat_name[] = {
+ 	[VPU_BUF_STATE_ERROR] = "error",
+ };
+ 
++static inline const char *to_vpu_stat_name(int state)
++{
++	if (state <= VPU_BUF_STATE_ERROR)
++		return vpu_stat_name[state];
++	return "unknown";
++}
++
+ static int vpu_dbg_instance(struct seq_file *s, void *data)
+ {
+ 	struct vpu_inst *inst = s->private;
+@@ -67,7 +74,7 @@ static int vpu_dbg_instance(struct seq_file *s, void *data)
+ 	num = scnprintf(str, sizeof(str), "tgig = %d,pid = %d\n", inst->tgid, inst->pid);
+ 	if (seq_write(s, str, num))
+ 		return 0;
+-	num = scnprintf(str, sizeof(str), "state = %d\n", inst->state);
++	num = scnprintf(str, sizeof(str), "state = %s\n", vpu_codec_state_name(inst->state));
+ 	if (seq_write(s, str, num))
+ 		return 0;
+ 	num = scnprintf(str, sizeof(str),
+@@ -141,7 +148,7 @@ static int vpu_dbg_instance(struct seq_file *s, void *data)
+ 		num = scnprintf(str, sizeof(str),
+ 				"output [%2d] state = %10s, %8s\n",
+ 				i, vb2_stat_name[vb->state],
+-				vpu_stat_name[vpu_get_buffer_state(vbuf)]);
++				to_vpu_stat_name(vpu_get_buffer_state(vbuf)));
+ 		if (seq_write(s, str, num))
+ 			return 0;
+ 	}
+@@ -156,7 +163,7 @@ static int vpu_dbg_instance(struct seq_file *s, void *data)
+ 		num = scnprintf(str, sizeof(str),
+ 				"capture[%2d] state = %10s, %8s\n",
+ 				i, vb2_stat_name[vb->state],
+-				vpu_stat_name[vpu_get_buffer_state(vbuf)]);
++				to_vpu_stat_name(vpu_get_buffer_state(vbuf)));
+ 		if (seq_write(s, str, num))
+ 			return 0;
+ 	}
+@@ -188,9 +195,9 @@ static int vpu_dbg_instance(struct seq_file *s, void *data)
+ 
+ 		if (!inst->flows[idx])
+ 			continue;
+-		num = scnprintf(str, sizeof(str), "\t[%s]0x%x\n",
++		num = scnprintf(str, sizeof(str), "\t[%s] %s\n",
+ 				inst->flows[idx] >= VPU_MSG_ID_NOOP ? "M" : "C",
+-				inst->flows[idx]);
++				vpu_id_name(inst->flows[idx]));
+ 		if (seq_write(s, str, num)) {
+ 			mutex_unlock(&inst->core->cmd_lock);
+ 			return 0;
+diff --git a/drivers/media/platform/amphion/vpu_helpers.c b/drivers/media/platform/amphion/vpu_helpers.c
+index 019c77e84514c..af3b336e5dc32 100644
+--- a/drivers/media/platform/amphion/vpu_helpers.c
++++ b/drivers/media/platform/amphion/vpu_helpers.c
+@@ -11,6 +11,7 @@
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
+ #include "vpu.h"
++#include "vpu_defs.h"
+ #include "vpu_core.h"
+ #include "vpu_rpc.h"
+ #include "vpu_helpers.h"
+@@ -447,3 +448,63 @@ int vpu_find_src_by_dst(struct vpu_pair *pairs, u32 cnt, u32 dst)
+ 
+ 	return -EINVAL;
+ }
++
++const char *vpu_id_name(u32 id)
++{
++	switch (id) {
++	case VPU_CMD_ID_NOOP: return "noop";
++	case VPU_CMD_ID_CONFIGURE_CODEC: return "configure codec";
++	case VPU_CMD_ID_START: return "start";
++	case VPU_CMD_ID_STOP: return "stop";
++	case VPU_CMD_ID_ABORT: return "abort";
++	case VPU_CMD_ID_RST_BUF: return "reset buf";
++	case VPU_CMD_ID_SNAPSHOT: return "snapshot";
++	case VPU_CMD_ID_FIRM_RESET: return "reset firmware";
++	case VPU_CMD_ID_UPDATE_PARAMETER: return "update parameter";
++	case VPU_CMD_ID_FRAME_ENCODE: return "encode frame";
++	case VPU_CMD_ID_SKIP: return "skip";
++	case VPU_CMD_ID_FS_ALLOC: return "alloc fb";
++	case VPU_CMD_ID_FS_RELEASE: return "release fb";
++	case VPU_CMD_ID_TIMESTAMP: return "timestamp";
++	case VPU_CMD_ID_DEBUG: return "debug";
++	case VPU_MSG_ID_RESET_DONE: return "reset done";
++	case VPU_MSG_ID_START_DONE: return "start done";
++	case VPU_MSG_ID_STOP_DONE: return "stop done";
++	case VPU_MSG_ID_ABORT_DONE: return "abort done";
++	case VPU_MSG_ID_BUF_RST: return "buf reset done";
++	case VPU_MSG_ID_MEM_REQUEST: return "mem request";
++	case VPU_MSG_ID_PARAM_UPD_DONE: return "param upd done";
++	case VPU_MSG_ID_FRAME_INPUT_DONE: return "frame input done";
++	case VPU_MSG_ID_ENC_DONE: return "encode done";
++	case VPU_MSG_ID_DEC_DONE: return "frame display";
++	case VPU_MSG_ID_FRAME_REQ: return "fb request";
++	case VPU_MSG_ID_FRAME_RELEASE: return "fb release";
++	case VPU_MSG_ID_SEQ_HDR_FOUND: return "seq hdr found";
++	case VPU_MSG_ID_RES_CHANGE: return "resolution change";
++	case VPU_MSG_ID_PIC_HDR_FOUND: return "pic hdr found";
++	case VPU_MSG_ID_PIC_DECODED: return "picture decoded";
++	case VPU_MSG_ID_PIC_EOS: return "eos";
++	case VPU_MSG_ID_FIFO_LOW: return "fifo low";
++	case VPU_MSG_ID_BS_ERROR: return "bs error";
++	case VPU_MSG_ID_UNSUPPORTED: return "unsupported";
++	case VPU_MSG_ID_FIRMWARE_XCPT: return "exception";
++	case VPU_MSG_ID_PIC_SKIPPED: return "skipped";
++	}
++	return "<unknown>";
++}
++
++const char *vpu_codec_state_name(enum vpu_codec_state state)
++{
++	switch (state) {
++	case VPU_CODEC_STATE_DEINIT: return "initialization";
++	case VPU_CODEC_STATE_CONFIGURED: return "configured";
++	case VPU_CODEC_STATE_START: return "start";
++	case VPU_CODEC_STATE_STARTED: return "started";
++	case VPU_CODEC_STATE_ACTIVE: return "active";
++	case VPU_CODEC_STATE_SEEK: return "seek";
++	case VPU_CODEC_STATE_STOP: return "stop";
++	case VPU_CODEC_STATE_DRAIN: return "drain";
++	case VPU_CODEC_STATE_DYAMIC_RESOLUTION_CHANGE: return "resolution change";
++	}
++	return "<unknown>";
++}
+diff --git a/drivers/media/platform/amphion/vpu_msgs.c b/drivers/media/platform/amphion/vpu_msgs.c
+index 92672a802b492..d0ead051f7d18 100644
+--- a/drivers/media/platform/amphion/vpu_msgs.c
++++ b/drivers/media/platform/amphion/vpu_msgs.c
+@@ -32,7 +32,7 @@ static void vpu_session_handle_start_done(struct vpu_inst *inst, struct vpu_rpc_
+ 
+ static void vpu_session_handle_mem_request(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+ {
+-	struct vpu_pkt_mem_req_data req_data;
++	struct vpu_pkt_mem_req_data req_data = { 0 };
+ 
+ 	vpu_iface_unpack_msg_data(inst->core, pkt, (void *)&req_data);
+ 	vpu_trace(inst->dev, "[%d] %d:%d %d:%d %d:%d\n",
+@@ -80,7 +80,7 @@ static void vpu_session_handle_resolution_change(struct vpu_inst *inst, struct v
+ 
+ static void vpu_session_handle_enc_frame_done(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+ {
+-	struct vpu_enc_pic_info info;
++	struct vpu_enc_pic_info info = { 0 };
+ 
+ 	vpu_iface_unpack_msg_data(inst->core, pkt, (void *)&info);
+ 	dev_dbg(inst->dev, "[%d] frame id = %d, wptr = 0x%x, size = %d\n",
+@@ -90,7 +90,7 @@ static void vpu_session_handle_enc_frame_done(struct vpu_inst *inst, struct vpu_
+ 
+ static void vpu_session_handle_frame_request(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+ {
+-	struct vpu_fs_info fs;
++	struct vpu_fs_info fs = { 0 };
+ 
+ 	vpu_iface_unpack_msg_data(inst->core, pkt, &fs);
+ 	call_void_vop(inst, event_notify, VPU_MSG_ID_FRAME_REQ, &fs);
+@@ -107,7 +107,7 @@ static void vpu_session_handle_frame_release(struct vpu_inst *inst, struct vpu_r
+ 		info.type = inst->out_format.type;
+ 		call_void_vop(inst, buf_done, &info);
+ 	} else if (inst->core->type == VPU_CORE_TYPE_DEC) {
+-		struct vpu_fs_info fs;
++		struct vpu_fs_info fs = { 0 };
+ 
+ 		vpu_iface_unpack_msg_data(inst->core, pkt, &fs);
+ 		call_void_vop(inst, event_notify, VPU_MSG_ID_FRAME_RELEASE, &fs);
+@@ -122,7 +122,7 @@ static void vpu_session_handle_input_done(struct vpu_inst *inst, struct vpu_rpc_
+ 
+ static void vpu_session_handle_pic_decoded(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+ {
+-	struct vpu_dec_pic_info info;
++	struct vpu_dec_pic_info info = { 0 };
+ 
+ 	vpu_iface_unpack_msg_data(inst->core, pkt, (void *)&info);
+ 	call_void_vop(inst, get_one_frame, &info);
+@@ -130,7 +130,7 @@ static void vpu_session_handle_pic_decoded(struct vpu_inst *inst, struct vpu_rpc
+ 
+ static void vpu_session_handle_pic_done(struct vpu_inst *inst, struct vpu_rpc_event *pkt)
+ {
+-	struct vpu_dec_pic_info info;
++	struct vpu_dec_pic_info info = { 0 };
+ 	struct vpu_frame_info frame;
+ 
+ 	memset(&frame, 0, sizeof(frame));
+@@ -210,7 +210,7 @@ static int vpu_session_handle_msg(struct vpu_inst *inst, struct vpu_rpc_event *m
+ 		return -EINVAL;
+ 
+ 	msg_id = ret;
+-	dev_dbg(inst->dev, "[%d] receive event(0x%x)\n", inst->id, msg_id);
++	dev_dbg(inst->dev, "[%d] receive event(%s)\n", inst->id, vpu_id_name(msg_id));
+ 
+ 	for (i = 0; i < ARRAY_SIZE(handlers); i++) {
+ 		if (handlers[i].id == msg_id) {
+diff --git a/drivers/media/platform/amphion/vpu_v4l2.c b/drivers/media/platform/amphion/vpu_v4l2.c
+index 021235e1c1446..0f6e4c666440e 100644
+--- a/drivers/media/platform/amphion/vpu_v4l2.c
++++ b/drivers/media/platform/amphion/vpu_v4l2.c
+@@ -489,6 +489,11 @@ static int vpu_vb2_queue_setup(struct vb2_queue *vq,
+ 	for (i = 0; i < cur_fmt->mem_planes; i++)
+ 		psize[i] = vpu_get_fmt_plane_size(cur_fmt, i);
+ 
++	if (V4L2_TYPE_IS_OUTPUT(vq->type) && inst->state == VPU_CODEC_STATE_SEEK) {
++		vpu_trace(inst->dev, "reinit when VIDIOC_REQBUFS(OUTPUT, 0)\n");
++		call_void_vop(inst, release);
++	}
++
+ 	return 0;
+ }
+ 
+@@ -773,9 +778,9 @@ int vpu_v4l2_close(struct file *file)
+ 		v4l2_m2m_ctx_release(inst->fh.m2m_ctx);
+ 		inst->fh.m2m_ctx = NULL;
+ 	}
++	call_void_vop(inst, release);
+ 	vpu_inst_unlock(inst);
+ 
+-	call_void_vop(inst, release);
+ 	vpu_inst_unregister(inst);
+ 	vpu_inst_put(inst);
+ 
+diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
+index 60425c99a2b8b..7194f88edc0fb 100644
+--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
++++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
+@@ -1403,6 +1403,7 @@ static void mtk_jpeg_remove(struct platform_device *pdev)
+ {
+ 	struct mtk_jpeg_dev *jpeg = platform_get_drvdata(pdev);
+ 
++	cancel_delayed_work_sync(&jpeg->job_timeout_work);
+ 	pm_runtime_disable(&pdev->dev);
+ 	video_unregister_device(jpeg->vdev);
+ 	v4l2_m2m_release(jpeg->m2m_dev);
+diff --git a/drivers/media/platform/mediatek/vcodec/vdec/vdec_av1_req_lat_if.c b/drivers/media/platform/mediatek/vcodec/vdec/vdec_av1_req_lat_if.c
+index 404a1a23fd402..b00b423274b3b 100644
+--- a/drivers/media/platform/mediatek/vcodec/vdec/vdec_av1_req_lat_if.c
++++ b/drivers/media/platform/mediatek/vcodec/vdec/vdec_av1_req_lat_if.c
+@@ -1658,9 +1658,9 @@ static void vdec_av1_slice_setup_tile_buffer(struct vdec_av1_slice_instance *ins
+ 	u32 allow_update_cdf = 0;
+ 	u32 sb_boundary_x_m1 = 0, sb_boundary_y_m1 = 0;
+ 	int tile_info_base;
+-	u32 tile_buf_pa;
++	u64 tile_buf_pa;
+ 	u32 *tile_info_buf = instance->tile.va;
+-	u32 pa = (u32)bs->dma_addr;
++	u64 pa = (u64)bs->dma_addr;
+ 
+ 	if (uh->disable_cdf_update == 0)
+ 		allow_update_cdf = 1;
+@@ -1673,8 +1673,12 @@ static void vdec_av1_slice_setup_tile_buffer(struct vdec_av1_slice_instance *ins
+ 		tile_info_buf[tile_info_base + 0] = (tile_group->tile_size[tile_num] << 3);
+ 		tile_buf_pa = pa + tile_group->tile_start_offset[tile_num];
+ 
+-		tile_info_buf[tile_info_base + 1] = (tile_buf_pa >> 4) << 4;
+-		tile_info_buf[tile_info_base + 2] = (tile_buf_pa % 16) << 3;
++		/* save av1 tile high 4bits(bit 32-35) address in lower 4 bits position
++		 * and clear original for hw requirement.
++		 */
++		tile_info_buf[tile_info_base + 1] = (tile_buf_pa & 0xFFFFFFF0ull) |
++			((tile_buf_pa & 0xF00000000ull) >> 32);
++		tile_info_buf[tile_info_base + 2] = (tile_buf_pa & 0xFull) << 3;
+ 
+ 		sb_boundary_x_m1 =
+ 			(tile->mi_col_starts[tile_col + 1] - tile->mi_col_starts[tile_col] - 1) &
+diff --git a/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_if.c b/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_if.c
+index 70b8383f7c8ec..a27a109d8d144 100644
+--- a/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_if.c
++++ b/drivers/media/platform/mediatek/vcodec/vdec/vdec_vp9_if.c
+@@ -226,10 +226,11 @@ static struct vdec_fb *vp9_rm_from_fb_use_list(struct vdec_vp9_inst
+ 		if (fb->base_y.va == addr) {
+ 			list_move_tail(&node->list,
+ 				       &inst->available_fb_node_list);
+-			break;
++			return fb;
+ 		}
+ 	}
+-	return fb;
++
++	return NULL;
+ }
+ 
+ static void vp9_add_to_fb_free_list(struct vdec_vp9_inst *inst,
+diff --git a/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c b/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c
+index 04e6dc6cfa1de..898f9dbb9f46d 100644
+--- a/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c
++++ b/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c
+@@ -231,6 +231,7 @@ void vdec_msg_queue_deinit(struct vdec_msg_queue *msg_queue,
+ 			mtk_vcodec_mem_free(ctx, mem);
+ 
+ 		kfree(lat_buf->private_data);
++		lat_buf->private_data = NULL;
+ 	}
+ 
+ 	if (msg_queue->wdma_addr.size)
+@@ -307,6 +308,7 @@ int vdec_msg_queue_init(struct vdec_msg_queue *msg_queue,
+ 	err = mtk_vcodec_mem_alloc(ctx, &msg_queue->wdma_addr);
+ 	if (err) {
+ 		mtk_v4l2_err("failed to allocate wdma_addr buf");
++		msg_queue->wdma_addr.size = 0;
+ 		return -ENOMEM;
+ 	}
+ 	msg_queue->wdma_rptr_addr = msg_queue->wdma_addr.dma_addr;
+@@ -338,14 +340,14 @@ int vdec_msg_queue_init(struct vdec_msg_queue *msg_queue,
+ 			err = mtk_vcodec_mem_alloc(ctx, &lat_buf->rd_mv_addr);
+ 			if (err) {
+ 				mtk_v4l2_err("failed to allocate rd_mv_addr buf[%d]", i);
+-				return -ENOMEM;
++				goto mem_alloc_err;
+ 			}
+ 
+ 			lat_buf->tile_addr.size = VDEC_LAT_TILE_SZ;
+ 			err = mtk_vcodec_mem_alloc(ctx, &lat_buf->tile_addr);
+ 			if (err) {
+ 				mtk_v4l2_err("failed to allocate tile_addr buf[%d]", i);
+-				return -ENOMEM;
++				goto mem_alloc_err;
+ 			}
+ 		}
+ 
+diff --git a/drivers/media/platform/nxp/imx8-isi/imx8-isi-crossbar.c b/drivers/media/platform/nxp/imx8-isi/imx8-isi-crossbar.c
+index f7447b2f4d777..9fcfc39257332 100644
+--- a/drivers/media/platform/nxp/imx8-isi/imx8-isi-crossbar.c
++++ b/drivers/media/platform/nxp/imx8-isi/imx8-isi-crossbar.c
+@@ -483,7 +483,7 @@ int mxc_isi_crossbar_init(struct mxc_isi_dev *isi)
+ 
+ 	xbar->inputs = kcalloc(xbar->num_sinks, sizeof(*xbar->inputs),
+ 			       GFP_KERNEL);
+-	if (!xbar->pads) {
++	if (!xbar->inputs) {
+ 		ret = -ENOMEM;
+ 		goto err_free;
+ 	}
+diff --git a/drivers/media/platform/qcom/venus/hfi_venus.c b/drivers/media/platform/qcom/venus/hfi_venus.c
+index f0b46389e8d56..5506a0d196ef9 100644
+--- a/drivers/media/platform/qcom/venus/hfi_venus.c
++++ b/drivers/media/platform/qcom/venus/hfi_venus.c
+@@ -131,7 +131,6 @@ struct venus_hfi_device {
+ 
+ static bool venus_pkt_debug;
+ int venus_fw_debug = HFI_DEBUG_MSG_ERROR | HFI_DEBUG_MSG_FATAL;
+-static bool venus_sys_idle_indicator;
+ static bool venus_fw_low_power_mode = true;
+ static int venus_hw_rsp_timeout = 1000;
+ static bool venus_fw_coverage;
+@@ -454,7 +453,6 @@ static int venus_boot_core(struct venus_hfi_device *hdev)
+ 	void __iomem *wrapper_base = hdev->core->wrapper_base;
+ 	int ret = 0;
+ 
+-	writel(BIT(VIDC_CTRL_INIT_CTRL_SHIFT), cpu_cs_base + VIDC_CTRL_INIT);
+ 	if (IS_V6(hdev->core)) {
+ 		mask_val = readl(wrapper_base + WRAPPER_INTR_MASK);
+ 		mask_val &= ~(WRAPPER_INTR_MASK_A2HWD_BASK_V6 |
+@@ -465,6 +463,7 @@ static int venus_boot_core(struct venus_hfi_device *hdev)
+ 	writel(mask_val, wrapper_base + WRAPPER_INTR_MASK);
+ 	writel(1, cpu_cs_base + CPU_CS_SCIACMDARG3);
+ 
++	writel(BIT(VIDC_CTRL_INIT_CTRL_SHIFT), cpu_cs_base + VIDC_CTRL_INIT);
+ 	while (!ctrl_status && count < max_tries) {
+ 		ctrl_status = readl(cpu_cs_base + CPU_CS_SCIACMDARG0);
+ 		if ((ctrl_status & CPU_CS_SCIACMDARG0_ERROR_STATUS_MASK) == 4) {
+@@ -927,17 +926,12 @@ static int venus_sys_set_default_properties(struct venus_hfi_device *hdev)
+ 	if (ret)
+ 		dev_warn(dev, "setting fw debug msg ON failed (%d)\n", ret);
+ 
+-	/*
+-	 * Idle indicator is disabled by default on some 4xx firmware versions,
+-	 * enable it explicitly in order to make suspend functional by checking
+-	 * WFI (wait-for-interrupt) bit.
+-	 */
+-	if (IS_V4(hdev->core) || IS_V6(hdev->core))
+-		venus_sys_idle_indicator = true;
+-
+-	ret = venus_sys_set_idle_message(hdev, venus_sys_idle_indicator);
+-	if (ret)
+-		dev_warn(dev, "setting idle response ON failed (%d)\n", ret);
++	/* HFI_PROPERTY_SYS_IDLE_INDICATOR is not supported beyond 8916 (HFI V1) */
++	if (IS_V1(hdev->core)) {
++		ret = venus_sys_set_idle_message(hdev, false);
++		if (ret)
++			dev_warn(dev, "setting idle response ON failed (%d)\n", ret);
++	}
+ 
+ 	ret = venus_sys_set_power_control(hdev, venus_fw_low_power_mode);
+ 	if (ret)
+diff --git a/drivers/media/platform/verisilicon/hantro_v4l2.c b/drivers/media/platform/verisilicon/hantro_v4l2.c
+index e871c078dd59e..b3ae037a50f61 100644
+--- a/drivers/media/platform/verisilicon/hantro_v4l2.c
++++ b/drivers/media/platform/verisilicon/hantro_v4l2.c
+@@ -297,6 +297,7 @@ static int hantro_try_fmt(const struct hantro_ctx *ctx,
+ 			  enum v4l2_buf_type type)
+ {
+ 	const struct hantro_fmt *fmt;
++	const struct hantro_fmt *vpu_fmt;
+ 	bool capture = V4L2_TYPE_IS_CAPTURE(type);
+ 	bool coded;
+ 
+@@ -316,19 +317,23 @@ static int hantro_try_fmt(const struct hantro_ctx *ctx,
+ 
+ 	if (coded) {
+ 		pix_mp->num_planes = 1;
+-	} else if (!ctx->is_encoder) {
++		vpu_fmt = fmt;
++	} else if (ctx->is_encoder) {
++		vpu_fmt = hantro_find_format(ctx, ctx->dst_fmt.pixelformat);
++	} else {
+ 		/*
+ 		 * Width/height on the CAPTURE end of a decoder are ignored and
+ 		 * replaced by the OUTPUT ones.
+ 		 */
+ 		pix_mp->width = ctx->src_fmt.width;
+ 		pix_mp->height = ctx->src_fmt.height;
++		vpu_fmt = fmt;
+ 	}
+ 
+ 	pix_mp->field = V4L2_FIELD_NONE;
+ 
+ 	v4l2_apply_frmsize_constraints(&pix_mp->width, &pix_mp->height,
+-				       &fmt->frmsize);
++				       &vpu_fmt->frmsize);
+ 
+ 	if (!coded) {
+ 		/* Fill remaining fields */
+diff --git a/drivers/media/tuners/fc0011.c b/drivers/media/tuners/fc0011.c
+index eaa3bbc903d7e..3d3b54be29557 100644
+--- a/drivers/media/tuners/fc0011.c
++++ b/drivers/media/tuners/fc0011.c
+@@ -499,7 +499,7 @@ struct dvb_frontend *fc0011_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(fc0011_attach);
++EXPORT_SYMBOL_GPL(fc0011_attach);
+ 
+ MODULE_DESCRIPTION("Fitipower FC0011 silicon tuner driver");
+ MODULE_AUTHOR("Michael Buesch <m@bues.ch>");
+diff --git a/drivers/media/tuners/fc0012.c b/drivers/media/tuners/fc0012.c
+index 4429d5e8c5796..81e65acbdb170 100644
+--- a/drivers/media/tuners/fc0012.c
++++ b/drivers/media/tuners/fc0012.c
+@@ -495,7 +495,7 @@ err:
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(fc0012_attach);
++EXPORT_SYMBOL_GPL(fc0012_attach);
+ 
+ MODULE_DESCRIPTION("Fitipower FC0012 silicon tuner driver");
+ MODULE_AUTHOR("Hans-Frieder Vogt <hfvogt@gmx.net>");
+diff --git a/drivers/media/tuners/fc0013.c b/drivers/media/tuners/fc0013.c
+index 29dd9b55ff333..1006a2798eefc 100644
+--- a/drivers/media/tuners/fc0013.c
++++ b/drivers/media/tuners/fc0013.c
+@@ -608,7 +608,7 @@ struct dvb_frontend *fc0013_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(fc0013_attach);
++EXPORT_SYMBOL_GPL(fc0013_attach);
+ 
+ MODULE_DESCRIPTION("Fitipower FC0013 silicon tuner driver");
+ MODULE_AUTHOR("Hans-Frieder Vogt <hfvogt@gmx.net>");
+diff --git a/drivers/media/tuners/max2165.c b/drivers/media/tuners/max2165.c
+index 1c746bed51fee..1575ab94e1c8b 100644
+--- a/drivers/media/tuners/max2165.c
++++ b/drivers/media/tuners/max2165.c
+@@ -410,7 +410,7 @@ struct dvb_frontend *max2165_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(max2165_attach);
++EXPORT_SYMBOL_GPL(max2165_attach);
+ 
+ MODULE_AUTHOR("David T. L. Wong <davidtlwong@gmail.com>");
+ MODULE_DESCRIPTION("Maxim MAX2165 silicon tuner driver");
+diff --git a/drivers/media/tuners/mc44s803.c b/drivers/media/tuners/mc44s803.c
+index 0c9161516abdf..ed8bdf7ebd99d 100644
+--- a/drivers/media/tuners/mc44s803.c
++++ b/drivers/media/tuners/mc44s803.c
+@@ -356,7 +356,7 @@ error:
+ 	kfree(priv);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(mc44s803_attach);
++EXPORT_SYMBOL_GPL(mc44s803_attach);
+ 
+ MODULE_AUTHOR("Jochen Friedrich");
+ MODULE_DESCRIPTION("Freescale MC44S803 silicon tuner driver");
+diff --git a/drivers/media/tuners/mt2060.c b/drivers/media/tuners/mt2060.c
+index 0278a9f0aeefa..4205ed4cf4675 100644
+--- a/drivers/media/tuners/mt2060.c
++++ b/drivers/media/tuners/mt2060.c
+@@ -440,7 +440,7 @@ struct dvb_frontend * mt2060_attach(struct dvb_frontend *fe, struct i2c_adapter
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(mt2060_attach);
++EXPORT_SYMBOL_GPL(mt2060_attach);
+ 
+ static int mt2060_probe(struct i2c_client *client)
+ {
+diff --git a/drivers/media/tuners/mt2131.c b/drivers/media/tuners/mt2131.c
+index 37f50ff6c0bd2..eebc060883414 100644
+--- a/drivers/media/tuners/mt2131.c
++++ b/drivers/media/tuners/mt2131.c
+@@ -274,7 +274,7 @@ struct dvb_frontend * mt2131_attach(struct dvb_frontend *fe,
+ 	fe->tuner_priv = priv;
+ 	return fe;
+ }
+-EXPORT_SYMBOL(mt2131_attach);
++EXPORT_SYMBOL_GPL(mt2131_attach);
+ 
+ MODULE_AUTHOR("Steven Toth");
+ MODULE_DESCRIPTION("Microtune MT2131 silicon tuner driver");
+diff --git a/drivers/media/tuners/mt2266.c b/drivers/media/tuners/mt2266.c
+index 6136f20fa9b7f..2e92885a6bcb9 100644
+--- a/drivers/media/tuners/mt2266.c
++++ b/drivers/media/tuners/mt2266.c
+@@ -336,7 +336,7 @@ struct dvb_frontend * mt2266_attach(struct dvb_frontend *fe, struct i2c_adapter
+ 	mt2266_calibrate(priv);
+ 	return fe;
+ }
+-EXPORT_SYMBOL(mt2266_attach);
++EXPORT_SYMBOL_GPL(mt2266_attach);
+ 
+ MODULE_AUTHOR("Olivier DANET");
+ MODULE_DESCRIPTION("Microtune MT2266 silicon tuner driver");
+diff --git a/drivers/media/tuners/mxl5005s.c b/drivers/media/tuners/mxl5005s.c
+index 06dfab9fb8cbc..d9bfa257a0054 100644
+--- a/drivers/media/tuners/mxl5005s.c
++++ b/drivers/media/tuners/mxl5005s.c
+@@ -4120,7 +4120,7 @@ struct dvb_frontend *mxl5005s_attach(struct dvb_frontend *fe,
+ 	fe->tuner_priv = state;
+ 	return fe;
+ }
+-EXPORT_SYMBOL(mxl5005s_attach);
++EXPORT_SYMBOL_GPL(mxl5005s_attach);
+ 
+ MODULE_DESCRIPTION("MaxLinear MXL5005S silicon tuner driver");
+ MODULE_AUTHOR("Steven Toth");
+diff --git a/drivers/media/tuners/qt1010.c b/drivers/media/tuners/qt1010.c
+index 3853a3d43d4f2..60931367b82ca 100644
+--- a/drivers/media/tuners/qt1010.c
++++ b/drivers/media/tuners/qt1010.c
+@@ -440,7 +440,7 @@ struct dvb_frontend * qt1010_attach(struct dvb_frontend *fe,
+ 	fe->tuner_priv = priv;
+ 	return fe;
+ }
+-EXPORT_SYMBOL(qt1010_attach);
++EXPORT_SYMBOL_GPL(qt1010_attach);
+ 
+ MODULE_DESCRIPTION("Quantek QT1010 silicon tuner driver");
+ MODULE_AUTHOR("Antti Palosaari <crope@iki.fi>");
+diff --git a/drivers/media/tuners/tda18218.c b/drivers/media/tuners/tda18218.c
+index 4ed94646116fa..7d8d84dcb2459 100644
+--- a/drivers/media/tuners/tda18218.c
++++ b/drivers/media/tuners/tda18218.c
+@@ -336,7 +336,7 @@ struct dvb_frontend *tda18218_attach(struct dvb_frontend *fe,
+ 
+ 	return fe;
+ }
+-EXPORT_SYMBOL(tda18218_attach);
++EXPORT_SYMBOL_GPL(tda18218_attach);
+ 
+ MODULE_DESCRIPTION("NXP TDA18218HN silicon tuner driver");
+ MODULE_AUTHOR("Antti Palosaari <crope@iki.fi>");
+diff --git a/drivers/media/tuners/xc2028.c b/drivers/media/tuners/xc2028.c
+index 69c2e1b99bf17..5a967edceca93 100644
+--- a/drivers/media/tuners/xc2028.c
++++ b/drivers/media/tuners/xc2028.c
+@@ -1512,7 +1512,7 @@ fail:
+ 	return NULL;
+ }
+ 
+-EXPORT_SYMBOL(xc2028_attach);
++EXPORT_SYMBOL_GPL(xc2028_attach);
+ 
+ MODULE_DESCRIPTION("Xceive xc2028/xc3028 tuner driver");
+ MODULE_AUTHOR("Michel Ludwig <michel.ludwig@gmail.com>");
+diff --git a/drivers/media/tuners/xc4000.c b/drivers/media/tuners/xc4000.c
+index d59b4ab774302..57ded9ff3f043 100644
+--- a/drivers/media/tuners/xc4000.c
++++ b/drivers/media/tuners/xc4000.c
+@@ -1742,7 +1742,7 @@ fail2:
+ 	xc4000_release(fe);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(xc4000_attach);
++EXPORT_SYMBOL_GPL(xc4000_attach);
+ 
+ MODULE_AUTHOR("Steven Toth, Davide Ferri");
+ MODULE_DESCRIPTION("Xceive xc4000 silicon tuner driver");
+diff --git a/drivers/media/tuners/xc5000.c b/drivers/media/tuners/xc5000.c
+index 7b7d9fe4f9453..2182e5b7b6064 100644
+--- a/drivers/media/tuners/xc5000.c
++++ b/drivers/media/tuners/xc5000.c
+@@ -1460,7 +1460,7 @@ fail:
+ 	xc5000_release(fe);
+ 	return NULL;
+ }
+-EXPORT_SYMBOL(xc5000_attach);
++EXPORT_SYMBOL_GPL(xc5000_attach);
+ 
+ MODULE_AUTHOR("Steven Toth");
+ MODULE_DESCRIPTION("Xceive xc5000 silicon tuner driver");
+diff --git a/drivers/media/usb/dvb-usb/m920x.c b/drivers/media/usb/dvb-usb/m920x.c
+index fea5bcf72a31a..c88a202daf5fc 100644
+--- a/drivers/media/usb/dvb-usb/m920x.c
++++ b/drivers/media/usb/dvb-usb/m920x.c
+@@ -277,7 +277,6 @@ static int m920x_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int nu
+ 			char *read = kmalloc(1, GFP_KERNEL);
+ 			if (!read) {
+ 				ret = -ENOMEM;
+-				kfree(read);
+ 				goto unlock;
+ 			}
+ 
+@@ -288,8 +287,10 @@ static int m920x_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int nu
+ 
+ 				if ((ret = m920x_read(d->udev, M9206_I2C, 0x0,
+ 						      0x20 | stop,
+-						      read, 1)) != 0)
++						      read, 1)) != 0) {
++					kfree(read);
+ 					goto unlock;
++				}
+ 				msg[i].buf[j] = read[0];
+ 			}
+ 
+diff --git a/drivers/media/usb/go7007/go7007-i2c.c b/drivers/media/usb/go7007/go7007-i2c.c
+index 38339dd2f83f7..2880370e45c8b 100644
+--- a/drivers/media/usb/go7007/go7007-i2c.c
++++ b/drivers/media/usb/go7007/go7007-i2c.c
+@@ -165,8 +165,6 @@ static int go7007_i2c_master_xfer(struct i2c_adapter *adapter,
+ 		} else if (msgs[i].len == 3) {
+ 			if (msgs[i].flags & I2C_M_RD)
+ 				return -EIO;
+-			if (msgs[i].len != 3)
+-				return -EIO;
+ 			if (go7007_i2c_xfer(go, msgs[i].addr, 0,
+ 					(msgs[i].buf[0] << 8) | msgs[i].buf[1],
+ 					0x01, &msgs[i].buf[2]) < 0)
+diff --git a/drivers/media/usb/siano/smsusb.c b/drivers/media/usb/siano/smsusb.c
+index 640737d3b8aeb..8a39cac76c585 100644
+--- a/drivers/media/usb/siano/smsusb.c
++++ b/drivers/media/usb/siano/smsusb.c
+@@ -455,12 +455,7 @@ static int smsusb_init_device(struct usb_interface *intf, int board_id)
+ 	rc = smscore_register_device(&params, &dev->coredev, 0, mdev);
+ 	if (rc < 0) {
+ 		pr_err("smscore_register_device(...) failed, rc %d\n", rc);
+-		smsusb_term_device(intf);
+-#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+-		media_device_unregister(mdev);
+-#endif
+-		kfree(mdev);
+-		return rc;
++		goto err_unregister_device;
+ 	}
+ 
+ 	smscore_set_board_id(dev->coredev, board_id);
+@@ -477,8 +472,7 @@ static int smsusb_init_device(struct usb_interface *intf, int board_id)
+ 	rc = smsusb_start_streaming(dev);
+ 	if (rc < 0) {
+ 		pr_err("smsusb_start_streaming(...) failed\n");
+-		smsusb_term_device(intf);
+-		return rc;
++		goto err_unregister_device;
+ 	}
+ 
+ 	dev->state = SMSUSB_ACTIVE;
+@@ -486,13 +480,20 @@ static int smsusb_init_device(struct usb_interface *intf, int board_id)
+ 	rc = smscore_start_device(dev->coredev);
+ 	if (rc < 0) {
+ 		pr_err("smscore_start_device(...) failed\n");
+-		smsusb_term_device(intf);
+-		return rc;
++		goto err_unregister_device;
+ 	}
+ 
+ 	pr_debug("device 0x%p created\n", dev);
+ 
+ 	return rc;
++
++err_unregister_device:
++	smsusb_term_device(intf);
++#ifdef CONFIG_MEDIA_CONTROLLER_DVB
++	media_device_unregister(mdev);
++#endif
++	kfree(mdev);
++	return rc;
+ }
+ 
+ static int smsusb_probe(struct usb_interface *intf,
+diff --git a/drivers/media/v4l2-core/v4l2-fwnode.c b/drivers/media/v4l2-core/v4l2-fwnode.c
+index 049c2f2001eaa..4fa9225aa3d93 100644
+--- a/drivers/media/v4l2-core/v4l2-fwnode.c
++++ b/drivers/media/v4l2-core/v4l2-fwnode.c
+@@ -568,19 +568,29 @@ int v4l2_fwnode_parse_link(struct fwnode_handle *fwnode,
+ 	link->local_id = fwep.id;
+ 	link->local_port = fwep.port;
+ 	link->local_node = fwnode_graph_get_port_parent(fwnode);
++	if (!link->local_node)
++		return -ENOLINK;
+ 
+ 	fwnode = fwnode_graph_get_remote_endpoint(fwnode);
+-	if (!fwnode) {
+-		fwnode_handle_put(fwnode);
+-		return -ENOLINK;
+-	}
++	if (!fwnode)
++		goto err_put_local_node;
+ 
+ 	fwnode_graph_parse_endpoint(fwnode, &fwep);
+ 	link->remote_id = fwep.id;
+ 	link->remote_port = fwep.port;
+ 	link->remote_node = fwnode_graph_get_port_parent(fwnode);
++	if (!link->remote_node)
++		goto err_put_remote_endpoint;
+ 
+ 	return 0;
++
++err_put_remote_endpoint:
++	fwnode_handle_put(fwnode);
++
++err_put_local_node:
++	fwnode_handle_put(link->local_node);
++
++	return -ENOLINK;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_fwnode_parse_link);
+ 
+diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
+index 6f5b259a6d6a0..f6b519eaaa710 100644
+--- a/drivers/mfd/Kconfig
++++ b/drivers/mfd/Kconfig
+@@ -1197,7 +1197,7 @@ config MFD_RC5T583
+ 	  different functionality of the device.
+ 
+ config MFD_RK8XX
+-	bool
++	tristate
+ 	select MFD_CORE
+ 
+ config MFD_RK8XX_I2C
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 9666d28037e18..5a134fa8a174c 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -1322,13 +1322,18 @@ static int fastrpc_init_create_static_process(struct fastrpc_user *fl,
+ 	return 0;
+ err_invoke:
+ 	if (fl->cctx->vmcount) {
+-		struct qcom_scm_vmperm perm;
++		u64 src_perms = 0;
++		struct qcom_scm_vmperm dst_perms;
++		u32 i;
+ 
+-		perm.vmid = QCOM_SCM_VMID_HLOS;
+-		perm.perm = QCOM_SCM_PERM_RWX;
++		for (i = 0; i < fl->cctx->vmcount; i++)
++			src_perms |= BIT(fl->cctx->vmperms[i].vmid);
++
++		dst_perms.vmid = QCOM_SCM_VMID_HLOS;
++		dst_perms.perm = QCOM_SCM_PERM_RWX;
+ 		err = qcom_scm_assign_mem(fl->cctx->remote_heap->phys,
+ 						(u64)fl->cctx->remote_heap->size,
+-						&fl->cctx->perms, &perm, 1);
++						&src_perms, &dst_perms, 1);
+ 		if (err)
+ 			dev_err(fl->sctx->dev, "Failed to assign memory phys 0x%llx size 0x%llx err %d",
+ 				fl->cctx->remote_heap->phys, fl->cctx->remote_heap->size, err);
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index 345934e4f59e6..2d5ef9c37d769 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -1006,6 +1006,8 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		host->sdcard_irq_setbit_mask = TMIO_STAT_ALWAYS_SET_27;
+ 		host->sdcard_irq_mask_all = TMIO_MASK_ALL_RCAR2;
+ 		host->reset = renesas_sdhi_reset;
++	} else {
++		host->sdcard_irq_mask_all = TMIO_MASK_ALL;
+ 	}
+ 
+ 	/* Orginally registers were 16 bit apart, could be 32 or 64 nowadays */
+@@ -1100,9 +1102,7 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		host->ops.hs400_complete = renesas_sdhi_hs400_complete;
+ 	}
+ 
+-	ret = tmio_mmc_host_probe(host);
+-	if (ret < 0)
+-		goto edisclk;
++	sd_ctrl_write32_as_16_and_16(host, CTL_IRQ_MASK, host->sdcard_irq_mask_all);
+ 
+ 	num_irqs = platform_irq_count(pdev);
+ 	if (num_irqs < 0) {
+@@ -1129,6 +1129,10 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 			goto eirq;
+ 	}
+ 
++	ret = tmio_mmc_host_probe(host);
++	if (ret < 0)
++		goto edisclk;
++
+ 	dev_info(&pdev->dev, "%s base at %pa, max clock rate %u MHz\n",
+ 		 mmc_hostname(host->mmc), &res->start, host->mmc->f_max / 1000000);
+ 
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index 2e9c2e2d9c9f7..d8418d7fcc372 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -2612,6 +2612,8 @@ static int brcmnand_setup_dev(struct brcmnand_host *host)
+ 	struct nand_chip *chip = &host->chip;
+ 	const struct nand_ecc_props *requirements =
+ 		nanddev_get_ecc_requirements(&chip->base);
++	struct nand_memory_organization *memorg =
++		nanddev_get_memorg(&chip->base);
+ 	struct brcmnand_controller *ctrl = host->ctrl;
+ 	struct brcmnand_cfg *cfg = &host->hwcfg;
+ 	char msg[128];
+@@ -2633,10 +2635,11 @@ static int brcmnand_setup_dev(struct brcmnand_host *host)
+ 	if (cfg->spare_area_size > ctrl->max_oob)
+ 		cfg->spare_area_size = ctrl->max_oob;
+ 	/*
+-	 * Set oobsize to be consistent with controller's spare_area_size, as
+-	 * the rest is inaccessible.
++	 * Set mtd and memorg oobsize to be consistent with controller's
++	 * spare_area_size, as the rest is inaccessible.
+ 	 */
+ 	mtd->oobsize = cfg->spare_area_size * (mtd->writesize >> FC_SHIFT);
++	memorg->oobsize = mtd->oobsize;
+ 
+ 	cfg->device_size = mtd->size;
+ 	cfg->block_size = mtd->erasesize;
+diff --git a/drivers/mtd/nand/raw/fsmc_nand.c b/drivers/mtd/nand/raw/fsmc_nand.c
+index 7b4742420dfcb..2e33ae77502a0 100644
+--- a/drivers/mtd/nand/raw/fsmc_nand.c
++++ b/drivers/mtd/nand/raw/fsmc_nand.c
+@@ -1200,9 +1200,14 @@ static int fsmc_nand_suspend(struct device *dev)
+ static int fsmc_nand_resume(struct device *dev)
+ {
+ 	struct fsmc_nand_data *host = dev_get_drvdata(dev);
++	int ret;
+ 
+ 	if (host) {
+-		clk_prepare_enable(host->clk);
++		ret = clk_prepare_enable(host->clk);
++		if (ret) {
++			dev_err(dev, "failed to enable clk\n");
++			return ret;
++		}
+ 		if (host->dev_timings)
+ 			fsmc_nand_setup(host, host->dev_timings);
+ 		nand_reset(&host->nand, 0);
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index 5f29fac8669a3..55f4a902b8be9 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -870,21 +870,22 @@ static int spi_nor_write_16bit_sr_and_check(struct spi_nor *nor, u8 sr1)
+ 		ret = spi_nor_read_cr(nor, &sr_cr[1]);
+ 		if (ret)
+ 			return ret;
+-	} else if (nor->params->quad_enable) {
++	} else if (spi_nor_get_protocol_width(nor->read_proto) == 4 &&
++		   spi_nor_get_protocol_width(nor->write_proto) == 4 &&
++		   nor->params->quad_enable) {
+ 		/*
+ 		 * If the Status Register 2 Read command (35h) is not
+ 		 * supported, we should at least be sure we don't
+ 		 * change the value of the SR2 Quad Enable bit.
+ 		 *
+-		 * We can safely assume that when the Quad Enable method is
+-		 * set, the value of the QE bit is one, as a consequence of the
+-		 * nor->params->quad_enable() call.
++		 * When the Quad Enable method is set and the buswidth is 4, we
++		 * can safely assume that the value of the QE bit is one, as a
++		 * consequence of the nor->params->quad_enable() call.
+ 		 *
+-		 * We can safely assume that the Quad Enable bit is present in
+-		 * the Status Register 2 at BIT(1). According to the JESD216
+-		 * revB standard, BFPT DWORDS[15], bits 22:20, the 16-bit
+-		 * Write Status (01h) command is available just for the cases
+-		 * in which the QE bit is described in SR2 at BIT(1).
++		 * According to the JESD216 revB standard, BFPT DWORDS[15],
++		 * bits 22:20, the 16-bit Write Status (01h) command is
++		 * available just for the cases in which the QE bit is
++		 * described in SR2 at BIT(1).
+ 		 */
+ 		sr_cr[1] = SR2_QUAD_EN_BIT1;
+ 	} else {
+diff --git a/drivers/net/arcnet/arcnet.c b/drivers/net/arcnet/arcnet.c
+index 99265667538c3..d9e052c49ba1a 100644
+--- a/drivers/net/arcnet/arcnet.c
++++ b/drivers/net/arcnet/arcnet.c
+@@ -464,7 +464,7 @@ static void arcnet_reply_tasklet(struct tasklet_struct *t)
+ 
+ 	ret = sock_queue_err_skb(sk, ackskb);
+ 	if (ret)
+-		kfree_skb(ackskb);
++		dev_kfree_skb_irq(ackskb);
+ 
+ 	local_irq_enable();
+ };
+diff --git a/drivers/net/can/m_can/tcan4x5x-regmap.c b/drivers/net/can/m_can/tcan4x5x-regmap.c
+index 2b218ce04e9f2..fafa6daa67e69 100644
+--- a/drivers/net/can/m_can/tcan4x5x-regmap.c
++++ b/drivers/net/can/m_can/tcan4x5x-regmap.c
+@@ -95,7 +95,6 @@ static const struct regmap_range tcan4x5x_reg_table_wr_range[] = {
+ 	regmap_reg_range(0x000c, 0x0010),
+ 	/* Device configuration registers and Interrupt Flags*/
+ 	regmap_reg_range(0x0800, 0x080c),
+-	regmap_reg_range(0x0814, 0x0814),
+ 	regmap_reg_range(0x0820, 0x0820),
+ 	regmap_reg_range(0x0830, 0x0830),
+ 	/* M_CAN */
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index bd9eb066ecf15..129ef60a577c8 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -633,6 +633,9 @@ static void gs_usb_receive_bulk_callback(struct urb *urb)
+ 	}
+ 
+ 	if (hf->flags & GS_CAN_FLAG_OVERFLOW) {
++		stats->rx_over_errors++;
++		stats->rx_errors++;
++
+ 		skb = alloc_can_err_skb(netdev, &cf);
+ 		if (!skb)
+ 			goto resubmit_urb;
+@@ -640,8 +643,6 @@ static void gs_usb_receive_bulk_callback(struct urb *urb)
+ 		cf->can_id |= CAN_ERR_CRTL;
+ 		cf->len = CAN_ERR_DLC;
+ 		cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
+-		stats->rx_over_errors++;
+-		stats->rx_errors++;
+ 		netif_rx(skb);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/amd/pds_core/core.c b/drivers/net/ethernet/amd/pds_core/core.c
+index f2c79456d7452..36f9b932b9e2a 100644
+--- a/drivers/net/ethernet/amd/pds_core/core.c
++++ b/drivers/net/ethernet/amd/pds_core/core.c
+@@ -464,7 +464,8 @@ void pdsc_teardown(struct pdsc *pdsc, bool removing)
+ {
+ 	int i;
+ 
+-	pdsc_devcmd_reset(pdsc);
++	if (!pdsc->pdev->is_virtfn)
++		pdsc_devcmd_reset(pdsc);
+ 	pdsc_qcq_free(pdsc, &pdsc->notifyqcq);
+ 	pdsc_qcq_free(pdsc, &pdsc->adminqcq);
+ 
+@@ -524,7 +525,8 @@ static void pdsc_fw_down(struct pdsc *pdsc)
+ 	}
+ 
+ 	/* Notify clients of fw_down */
+-	devlink_health_report(pdsc->fw_reporter, "FW down reported", pdsc);
++	if (pdsc->fw_reporter)
++		devlink_health_report(pdsc->fw_reporter, "FW down reported", pdsc);
+ 	pdsc_notify(PDS_EVENT_RESET, &reset_event);
+ 
+ 	pdsc_stop(pdsc);
+@@ -554,8 +556,9 @@ static void pdsc_fw_up(struct pdsc *pdsc)
+ 
+ 	/* Notify clients of fw_up */
+ 	pdsc->fw_recoveries++;
+-	devlink_health_reporter_state_update(pdsc->fw_reporter,
+-					     DEVLINK_HEALTH_REPORTER_STATE_HEALTHY);
++	if (pdsc->fw_reporter)
++		devlink_health_reporter_state_update(pdsc->fw_reporter,
++						     DEVLINK_HEALTH_REPORTER_STATE_HEALTHY);
+ 	pdsc_notify(PDS_EVENT_RESET, &reset_event);
+ 
+ 	return;
+diff --git a/drivers/net/ethernet/amd/pds_core/dev.c b/drivers/net/ethernet/amd/pds_core/dev.c
+index debe5216fe29e..f77cd9f5a2fda 100644
+--- a/drivers/net/ethernet/amd/pds_core/dev.c
++++ b/drivers/net/ethernet/amd/pds_core/dev.c
+@@ -121,7 +121,7 @@ static const char *pdsc_devcmd_str(int opcode)
+ 	}
+ }
+ 
+-static int pdsc_devcmd_wait(struct pdsc *pdsc, int max_seconds)
++static int pdsc_devcmd_wait(struct pdsc *pdsc, u8 opcode, int max_seconds)
+ {
+ 	struct device *dev = pdsc->dev;
+ 	unsigned long start_time;
+@@ -131,9 +131,6 @@ static int pdsc_devcmd_wait(struct pdsc *pdsc, int max_seconds)
+ 	int done = 0;
+ 	int err = 0;
+ 	int status;
+-	int opcode;
+-
+-	opcode = ioread8(&pdsc->cmd_regs->cmd.opcode);
+ 
+ 	start_time = jiffies;
+ 	max_wait = start_time + (max_seconds * HZ);
+@@ -180,10 +177,10 @@ int pdsc_devcmd_locked(struct pdsc *pdsc, union pds_core_dev_cmd *cmd,
+ 
+ 	memcpy_toio(&pdsc->cmd_regs->cmd, cmd, sizeof(*cmd));
+ 	pdsc_devcmd_dbell(pdsc);
+-	err = pdsc_devcmd_wait(pdsc, max_seconds);
++	err = pdsc_devcmd_wait(pdsc, cmd->opcode, max_seconds);
+ 	memcpy_fromio(comp, &pdsc->cmd_regs->comp, sizeof(*comp));
+ 
+-	if (err == -ENXIO || err == -ETIMEDOUT)
++	if ((err == -ENXIO || err == -ETIMEDOUT) && pdsc->wq)
+ 		queue_work(pdsc->wq, &pdsc->health_work);
+ 
+ 	return err;
+diff --git a/drivers/net/ethernet/amd/pds_core/devlink.c b/drivers/net/ethernet/amd/pds_core/devlink.c
+index 9c6b3653c1c7c..d9607033bbf21 100644
+--- a/drivers/net/ethernet/amd/pds_core/devlink.c
++++ b/drivers/net/ethernet/amd/pds_core/devlink.c
+@@ -10,6 +10,9 @@ pdsc_viftype *pdsc_dl_find_viftype_by_id(struct pdsc *pdsc,
+ {
+ 	int vt;
+ 
++	if (!pdsc->viftype_status)
++		return NULL;
++
+ 	for (vt = 0; vt < PDS_DEV_TYPE_MAX; vt++) {
+ 		if (pdsc->viftype_status[vt].dl_id == dl_id)
+ 			return &pdsc->viftype_status[vt];
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
+index b31de4cf6534b..a2d3a80236c4f 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
+@@ -3721,6 +3721,60 @@ struct hwrm_func_backing_store_qcaps_v2_output {
+ 	u8	valid;
+ };
+ 
++/* hwrm_func_dbr_pacing_qcfg_input (size:128b/16B) */
++struct hwrm_func_dbr_pacing_qcfg_input {
++	__le16  req_type;
++	__le16  cmpl_ring;
++	__le16  seq_id;
++	__le16  target_id;
++	__le64  resp_addr;
++};
++
++/* hwrm_func_dbr_pacing_qcfg_output (size:512b/64B) */
++struct hwrm_func_dbr_pacing_qcfg_output {
++	__le16  error_code;
++	__le16  req_type;
++	__le16  seq_id;
++	__le16  resp_len;
++	u8      flags;
++#define FUNC_DBR_PACING_QCFG_RESP_FLAGS_DBR_NQ_EVENT_ENABLED     0x1UL
++	u8      unused_0[7];
++	__le32  dbr_stat_db_fifo_reg;
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_STAT_DB_FIFO_REG_ADDR_SPACE_MASK    0x3UL
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_STAT_DB_FIFO_REG_ADDR_SPACE_SFT     0
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_STAT_DB_FIFO_REG_ADDR_SPACE_PCIE_CFG  0x0UL
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_STAT_DB_FIFO_REG_ADDR_SPACE_GRC       0x1UL
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_STAT_DB_FIFO_REG_ADDR_SPACE_BAR0      0x2UL
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_STAT_DB_FIFO_REG_ADDR_SPACE_BAR1      0x3UL
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_STAT_DB_FIFO_REG_ADDR_SPACE_LAST     \
++		FUNC_DBR_PACING_QCFG_RESP_DBR_STAT_DB_FIFO_REG_ADDR_SPACE_BAR1
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_STAT_DB_FIFO_REG_ADDR_MASK          0xfffffffcUL
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_STAT_DB_FIFO_REG_ADDR_SFT           2
++	__le32  dbr_stat_db_fifo_reg_watermark_mask;
++	u8      dbr_stat_db_fifo_reg_watermark_shift;
++	u8      unused_1[3];
++	__le32  dbr_stat_db_fifo_reg_fifo_room_mask;
++	u8      dbr_stat_db_fifo_reg_fifo_room_shift;
++	u8      unused_2[3];
++	__le32  dbr_throttling_aeq_arm_reg;
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_THROTTLING_AEQ_ARM_REG_ADDR_SPACE_MASK    0x3UL
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_THROTTLING_AEQ_ARM_REG_ADDR_SPACE_SFT     0
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_THROTTLING_AEQ_ARM_REG_ADDR_SPACE_PCIE_CFG  0x0UL
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_THROTTLING_AEQ_ARM_REG_ADDR_SPACE_GRC       0x1UL
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_THROTTLING_AEQ_ARM_REG_ADDR_SPACE_BAR0      0x2UL
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_THROTTLING_AEQ_ARM_REG_ADDR_SPACE_BAR1      0x3UL
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_THROTTLING_AEQ_ARM_REG_ADDR_SPACE_LAST	\
++		FUNC_DBR_PACING_QCFG_RESP_DBR_THROTTLING_AEQ_ARM_REG_ADDR_SPACE_BAR1
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_THROTTLING_AEQ_ARM_REG_ADDR_MASK          0xfffffffcUL
++#define FUNC_DBR_PACING_QCFG_RESP_DBR_THROTTLING_AEQ_ARM_REG_ADDR_SFT           2
++	u8      dbr_throttling_aeq_arm_reg_val;
++	u8      unused_3[7];
++	__le32  primary_nq_id;
++	__le32  pacing_threshold;
++	u8      unused_4[7];
++	u8      valid;
++};
++
+ /* hwrm_func_drv_if_change_input (size:192b/24B) */
+ struct hwrm_func_drv_if_change_input {
+ 	__le16	req_type;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+index 852eb449ccae2..6ba2b93986333 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+@@ -345,7 +345,7 @@ static void bnxt_set_edev_info(struct bnxt_en_dev *edev, struct bnxt *bp)
+ 	edev->hw_ring_stats_size = bp->hw_ring_stats_size;
+ 	edev->pf_port_id = bp->pf.port_id;
+ 	edev->en_state = bp->state;
+-
++	edev->bar0 = bp->bar0;
+ 	edev->ulp_tbl->msix_requested = bnxt_get_ulp_msix_num(bp);
+ }
+ 
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
+index 80cbc4b6130aa..6ff77f082e6c7 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
+@@ -81,6 +81,7 @@ struct bnxt_en_dev {
+ 							 * mode only. Will be
+ 							 * updated in resume.
+ 							 */
++	void __iomem                    *bar0;
+ };
+ 
+ static inline bool bnxt_ulp_registered(struct bnxt_en_dev *edev)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/Makefile b/drivers/net/ethernet/hisilicon/hns3/Makefile
+index 6efea46628587..e214bfaece1f3 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/Makefile
++++ b/drivers/net/ethernet/hisilicon/hns3/Makefile
+@@ -17,11 +17,11 @@ hns3-$(CONFIG_HNS3_DCB) += hns3_dcbnl.o
+ 
+ obj-$(CONFIG_HNS3_HCLGEVF) += hclgevf.o
+ 
+-hclgevf-objs = hns3vf/hclgevf_main.o hns3vf/hclgevf_mbx.o  hns3vf/hclgevf_devlink.o \
++hclgevf-objs = hns3vf/hclgevf_main.o hns3vf/hclgevf_mbx.o  hns3vf/hclgevf_devlink.o hns3vf/hclgevf_regs.o \
+ 		hns3_common/hclge_comm_cmd.o hns3_common/hclge_comm_rss.o hns3_common/hclge_comm_tqp_stats.o
+ 
+ obj-$(CONFIG_HNS3_HCLGE) += hclge.o
+-hclge-objs = hns3pf/hclge_main.o hns3pf/hclge_mdio.o hns3pf/hclge_tm.o \
++hclge-objs = hns3pf/hclge_main.o hns3pf/hclge_mdio.o hns3pf/hclge_tm.o hns3pf/hclge_regs.o \
+ 		hns3pf/hclge_mbx.o hns3pf/hclge_err.o  hns3pf/hclge_debugfs.o hns3pf/hclge_ptp.o hns3pf/hclge_devlink.o \
+ 		hns3_common/hclge_comm_cmd.o hns3_common/hclge_comm_rss.o hns3_common/hclge_comm_tqp_stats.o
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 514a20bce4f44..a4b43bcd2f0c9 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -382,6 +382,7 @@ struct hnae3_dev_specs {
+ 	u16 umv_size;
+ 	u16 mc_mac_size;
+ 	u32 mac_stats_num;
++	u8 tnl_num;
+ };
+ 
+ struct hnae3_client_ops {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+index 91c173f40701a..d5cfdc4c082d8 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+@@ -826,7 +826,9 @@ struct hclge_dev_specs_1_cmd {
+ 	u8 rsv0[2];
+ 	__le16 umv_size;
+ 	__le16 mc_mac_size;
+-	u8 rsv1[12];
++	u8 rsv1[6];
++	u8 tnl_num;
++	u8 rsv2[5];
+ };
+ 
+ /* mac speed type defined in firmware command */
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
+index 0fb2eaee3e8a0..f01a7a9ee02ca 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_debugfs.c
+@@ -7,6 +7,7 @@
+ #include "hclge_debugfs.h"
+ #include "hclge_err.h"
+ #include "hclge_main.h"
++#include "hclge_regs.h"
+ #include "hclge_tm.h"
+ #include "hnae3.h"
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index a940e35aef29d..2d5a2e1ef664d 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -20,6 +20,7 @@
+ #include "hclge_main.h"
+ #include "hclge_mbx.h"
+ #include "hclge_mdio.h"
++#include "hclge_regs.h"
+ #include "hclge_tm.h"
+ #include "hclge_err.h"
+ #include "hnae3.h"
+@@ -40,20 +41,6 @@
+ #define HCLGE_PF_RESET_SYNC_TIME	20
+ #define HCLGE_PF_RESET_SYNC_CNT		1500
+ 
+-/* Get DFX BD number offset */
+-#define HCLGE_DFX_BIOS_BD_OFFSET        1
+-#define HCLGE_DFX_SSU_0_BD_OFFSET       2
+-#define HCLGE_DFX_SSU_1_BD_OFFSET       3
+-#define HCLGE_DFX_IGU_BD_OFFSET         4
+-#define HCLGE_DFX_RPU_0_BD_OFFSET       5
+-#define HCLGE_DFX_RPU_1_BD_OFFSET       6
+-#define HCLGE_DFX_NCSI_BD_OFFSET        7
+-#define HCLGE_DFX_RTC_BD_OFFSET         8
+-#define HCLGE_DFX_PPP_BD_OFFSET         9
+-#define HCLGE_DFX_RCB_BD_OFFSET         10
+-#define HCLGE_DFX_TQP_BD_OFFSET         11
+-#define HCLGE_DFX_SSU_2_BD_OFFSET       12
+-
+ #define HCLGE_LINK_STATUS_MS	10
+ 
+ static int hclge_set_mac_mtu(struct hclge_dev *hdev, int new_mps);
+@@ -94,62 +81,6 @@ static const struct pci_device_id ae_algo_pci_tbl[] = {
+ 
+ MODULE_DEVICE_TABLE(pci, ae_algo_pci_tbl);
+ 
+-static const u32 cmdq_reg_addr_list[] = {HCLGE_COMM_NIC_CSQ_BASEADDR_L_REG,
+-					 HCLGE_COMM_NIC_CSQ_BASEADDR_H_REG,
+-					 HCLGE_COMM_NIC_CSQ_DEPTH_REG,
+-					 HCLGE_COMM_NIC_CSQ_TAIL_REG,
+-					 HCLGE_COMM_NIC_CSQ_HEAD_REG,
+-					 HCLGE_COMM_NIC_CRQ_BASEADDR_L_REG,
+-					 HCLGE_COMM_NIC_CRQ_BASEADDR_H_REG,
+-					 HCLGE_COMM_NIC_CRQ_DEPTH_REG,
+-					 HCLGE_COMM_NIC_CRQ_TAIL_REG,
+-					 HCLGE_COMM_NIC_CRQ_HEAD_REG,
+-					 HCLGE_COMM_VECTOR0_CMDQ_SRC_REG,
+-					 HCLGE_COMM_CMDQ_INTR_STS_REG,
+-					 HCLGE_COMM_CMDQ_INTR_EN_REG,
+-					 HCLGE_COMM_CMDQ_INTR_GEN_REG};
+-
+-static const u32 common_reg_addr_list[] = {HCLGE_MISC_VECTOR_REG_BASE,
+-					   HCLGE_PF_OTHER_INT_REG,
+-					   HCLGE_MISC_RESET_STS_REG,
+-					   HCLGE_MISC_VECTOR_INT_STS,
+-					   HCLGE_GLOBAL_RESET_REG,
+-					   HCLGE_FUN_RST_ING,
+-					   HCLGE_GRO_EN_REG};
+-
+-static const u32 ring_reg_addr_list[] = {HCLGE_RING_RX_ADDR_L_REG,
+-					 HCLGE_RING_RX_ADDR_H_REG,
+-					 HCLGE_RING_RX_BD_NUM_REG,
+-					 HCLGE_RING_RX_BD_LENGTH_REG,
+-					 HCLGE_RING_RX_MERGE_EN_REG,
+-					 HCLGE_RING_RX_TAIL_REG,
+-					 HCLGE_RING_RX_HEAD_REG,
+-					 HCLGE_RING_RX_FBD_NUM_REG,
+-					 HCLGE_RING_RX_OFFSET_REG,
+-					 HCLGE_RING_RX_FBD_OFFSET_REG,
+-					 HCLGE_RING_RX_STASH_REG,
+-					 HCLGE_RING_RX_BD_ERR_REG,
+-					 HCLGE_RING_TX_ADDR_L_REG,
+-					 HCLGE_RING_TX_ADDR_H_REG,
+-					 HCLGE_RING_TX_BD_NUM_REG,
+-					 HCLGE_RING_TX_PRIORITY_REG,
+-					 HCLGE_RING_TX_TC_REG,
+-					 HCLGE_RING_TX_MERGE_EN_REG,
+-					 HCLGE_RING_TX_TAIL_REG,
+-					 HCLGE_RING_TX_HEAD_REG,
+-					 HCLGE_RING_TX_FBD_NUM_REG,
+-					 HCLGE_RING_TX_OFFSET_REG,
+-					 HCLGE_RING_TX_EBD_NUM_REG,
+-					 HCLGE_RING_TX_EBD_OFFSET_REG,
+-					 HCLGE_RING_TX_BD_ERR_REG,
+-					 HCLGE_RING_EN_REG};
+-
+-static const u32 tqp_intr_reg_addr_list[] = {HCLGE_TQP_INTR_CTRL_REG,
+-					     HCLGE_TQP_INTR_GL0_REG,
+-					     HCLGE_TQP_INTR_GL1_REG,
+-					     HCLGE_TQP_INTR_GL2_REG,
+-					     HCLGE_TQP_INTR_RL_REG};
+-
+ static const char hns3_nic_test_strs[][ETH_GSTRING_LEN] = {
+ 	"External Loopback test",
+ 	"App      Loopback test",
+@@ -375,36 +306,6 @@ static const struct hclge_mac_mgr_tbl_entry_cmd hclge_mgr_table[] = {
+ 	},
+ };
+ 
+-static const u32 hclge_dfx_bd_offset_list[] = {
+-	HCLGE_DFX_BIOS_BD_OFFSET,
+-	HCLGE_DFX_SSU_0_BD_OFFSET,
+-	HCLGE_DFX_SSU_1_BD_OFFSET,
+-	HCLGE_DFX_IGU_BD_OFFSET,
+-	HCLGE_DFX_RPU_0_BD_OFFSET,
+-	HCLGE_DFX_RPU_1_BD_OFFSET,
+-	HCLGE_DFX_NCSI_BD_OFFSET,
+-	HCLGE_DFX_RTC_BD_OFFSET,
+-	HCLGE_DFX_PPP_BD_OFFSET,
+-	HCLGE_DFX_RCB_BD_OFFSET,
+-	HCLGE_DFX_TQP_BD_OFFSET,
+-	HCLGE_DFX_SSU_2_BD_OFFSET
+-};
+-
+-static const enum hclge_opcode_type hclge_dfx_reg_opcode_list[] = {
+-	HCLGE_OPC_DFX_BIOS_COMMON_REG,
+-	HCLGE_OPC_DFX_SSU_REG_0,
+-	HCLGE_OPC_DFX_SSU_REG_1,
+-	HCLGE_OPC_DFX_IGU_EGU_REG,
+-	HCLGE_OPC_DFX_RPU_REG_0,
+-	HCLGE_OPC_DFX_RPU_REG_1,
+-	HCLGE_OPC_DFX_NCSI_REG,
+-	HCLGE_OPC_DFX_RTC_REG,
+-	HCLGE_OPC_DFX_PPP_REG,
+-	HCLGE_OPC_DFX_RCB_REG,
+-	HCLGE_OPC_DFX_TQP_REG,
+-	HCLGE_OPC_DFX_SSU_REG_2
+-};
+-
+ static const struct key_info meta_data_key_info[] = {
+ 	{ PACKET_TYPE_ID, 6 },
+ 	{ IP_FRAGEMENT, 1 },
+@@ -1425,6 +1326,7 @@ static void hclge_set_default_dev_specs(struct hclge_dev *hdev)
+ 	ae_dev->dev_specs.max_frm_size = HCLGE_MAC_MAX_FRAME;
+ 	ae_dev->dev_specs.max_qset_num = HCLGE_MAX_QSET_NUM;
+ 	ae_dev->dev_specs.umv_size = HCLGE_DEFAULT_UMV_SPACE_PER_PF;
++	ae_dev->dev_specs.tnl_num = 0;
+ }
+ 
+ static void hclge_parse_dev_specs(struct hclge_dev *hdev,
+@@ -1448,6 +1350,7 @@ static void hclge_parse_dev_specs(struct hclge_dev *hdev,
+ 	ae_dev->dev_specs.max_frm_size = le16_to_cpu(req1->max_frm_size);
+ 	ae_dev->dev_specs.umv_size = le16_to_cpu(req1->umv_size);
+ 	ae_dev->dev_specs.mc_mac_size = le16_to_cpu(req1->mc_mac_size);
++	ae_dev->dev_specs.tnl_num = req1->tnl_num;
+ }
+ 
+ static void hclge_check_dev_specs(struct hclge_dev *hdev)
+@@ -12383,463 +12286,6 @@ out:
+ 	return ret;
+ }
+ 
+-static int hclge_get_regs_num(struct hclge_dev *hdev, u32 *regs_num_32_bit,
+-			      u32 *regs_num_64_bit)
+-{
+-	struct hclge_desc desc;
+-	u32 total_num;
+-	int ret;
+-
+-	hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_QUERY_REG_NUM, true);
+-	ret = hclge_cmd_send(&hdev->hw, &desc, 1);
+-	if (ret) {
+-		dev_err(&hdev->pdev->dev,
+-			"Query register number cmd failed, ret = %d.\n", ret);
+-		return ret;
+-	}
+-
+-	*regs_num_32_bit = le32_to_cpu(desc.data[0]);
+-	*regs_num_64_bit = le32_to_cpu(desc.data[1]);
+-
+-	total_num = *regs_num_32_bit + *regs_num_64_bit;
+-	if (!total_num)
+-		return -EINVAL;
+-
+-	return 0;
+-}
+-
+-static int hclge_get_32_bit_regs(struct hclge_dev *hdev, u32 regs_num,
+-				 void *data)
+-{
+-#define HCLGE_32_BIT_REG_RTN_DATANUM 8
+-#define HCLGE_32_BIT_DESC_NODATA_LEN 2
+-
+-	struct hclge_desc *desc;
+-	u32 *reg_val = data;
+-	__le32 *desc_data;
+-	int nodata_num;
+-	int cmd_num;
+-	int i, k, n;
+-	int ret;
+-
+-	if (regs_num == 0)
+-		return 0;
+-
+-	nodata_num = HCLGE_32_BIT_DESC_NODATA_LEN;
+-	cmd_num = DIV_ROUND_UP(regs_num + nodata_num,
+-			       HCLGE_32_BIT_REG_RTN_DATANUM);
+-	desc = kcalloc(cmd_num, sizeof(struct hclge_desc), GFP_KERNEL);
+-	if (!desc)
+-		return -ENOMEM;
+-
+-	hclge_cmd_setup_basic_desc(&desc[0], HCLGE_OPC_QUERY_32_BIT_REG, true);
+-	ret = hclge_cmd_send(&hdev->hw, desc, cmd_num);
+-	if (ret) {
+-		dev_err(&hdev->pdev->dev,
+-			"Query 32 bit register cmd failed, ret = %d.\n", ret);
+-		kfree(desc);
+-		return ret;
+-	}
+-
+-	for (i = 0; i < cmd_num; i++) {
+-		if (i == 0) {
+-			desc_data = (__le32 *)(&desc[i].data[0]);
+-			n = HCLGE_32_BIT_REG_RTN_DATANUM - nodata_num;
+-		} else {
+-			desc_data = (__le32 *)(&desc[i]);
+-			n = HCLGE_32_BIT_REG_RTN_DATANUM;
+-		}
+-		for (k = 0; k < n; k++) {
+-			*reg_val++ = le32_to_cpu(*desc_data++);
+-
+-			regs_num--;
+-			if (!regs_num)
+-				break;
+-		}
+-	}
+-
+-	kfree(desc);
+-	return 0;
+-}
+-
+-static int hclge_get_64_bit_regs(struct hclge_dev *hdev, u32 regs_num,
+-				 void *data)
+-{
+-#define HCLGE_64_BIT_REG_RTN_DATANUM 4
+-#define HCLGE_64_BIT_DESC_NODATA_LEN 1
+-
+-	struct hclge_desc *desc;
+-	u64 *reg_val = data;
+-	__le64 *desc_data;
+-	int nodata_len;
+-	int cmd_num;
+-	int i, k, n;
+-	int ret;
+-
+-	if (regs_num == 0)
+-		return 0;
+-
+-	nodata_len = HCLGE_64_BIT_DESC_NODATA_LEN;
+-	cmd_num = DIV_ROUND_UP(regs_num + nodata_len,
+-			       HCLGE_64_BIT_REG_RTN_DATANUM);
+-	desc = kcalloc(cmd_num, sizeof(struct hclge_desc), GFP_KERNEL);
+-	if (!desc)
+-		return -ENOMEM;
+-
+-	hclge_cmd_setup_basic_desc(&desc[0], HCLGE_OPC_QUERY_64_BIT_REG, true);
+-	ret = hclge_cmd_send(&hdev->hw, desc, cmd_num);
+-	if (ret) {
+-		dev_err(&hdev->pdev->dev,
+-			"Query 64 bit register cmd failed, ret = %d.\n", ret);
+-		kfree(desc);
+-		return ret;
+-	}
+-
+-	for (i = 0; i < cmd_num; i++) {
+-		if (i == 0) {
+-			desc_data = (__le64 *)(&desc[i].data[0]);
+-			n = HCLGE_64_BIT_REG_RTN_DATANUM - nodata_len;
+-		} else {
+-			desc_data = (__le64 *)(&desc[i]);
+-			n = HCLGE_64_BIT_REG_RTN_DATANUM;
+-		}
+-		for (k = 0; k < n; k++) {
+-			*reg_val++ = le64_to_cpu(*desc_data++);
+-
+-			regs_num--;
+-			if (!regs_num)
+-				break;
+-		}
+-	}
+-
+-	kfree(desc);
+-	return 0;
+-}
+-
+-#define MAX_SEPARATE_NUM	4
+-#define SEPARATOR_VALUE		0xFDFCFBFA
+-#define REG_NUM_PER_LINE	4
+-#define REG_LEN_PER_LINE	(REG_NUM_PER_LINE * sizeof(u32))
+-#define REG_SEPARATOR_LINE	1
+-#define REG_NUM_REMAIN_MASK	3
+-
+-int hclge_query_bd_num_cmd_send(struct hclge_dev *hdev, struct hclge_desc *desc)
+-{
+-	int i;
+-
+-	/* initialize command BD except the last one */
+-	for (i = 0; i < HCLGE_GET_DFX_REG_TYPE_CNT - 1; i++) {
+-		hclge_cmd_setup_basic_desc(&desc[i], HCLGE_OPC_DFX_BD_NUM,
+-					   true);
+-		desc[i].flag |= cpu_to_le16(HCLGE_COMM_CMD_FLAG_NEXT);
+-	}
+-
+-	/* initialize the last command BD */
+-	hclge_cmd_setup_basic_desc(&desc[i], HCLGE_OPC_DFX_BD_NUM, true);
+-
+-	return hclge_cmd_send(&hdev->hw, desc, HCLGE_GET_DFX_REG_TYPE_CNT);
+-}
+-
+-static int hclge_get_dfx_reg_bd_num(struct hclge_dev *hdev,
+-				    int *bd_num_list,
+-				    u32 type_num)
+-{
+-	u32 entries_per_desc, desc_index, index, offset, i;
+-	struct hclge_desc desc[HCLGE_GET_DFX_REG_TYPE_CNT];
+-	int ret;
+-
+-	ret = hclge_query_bd_num_cmd_send(hdev, desc);
+-	if (ret) {
+-		dev_err(&hdev->pdev->dev,
+-			"Get dfx bd num fail, status is %d.\n", ret);
+-		return ret;
+-	}
+-
+-	entries_per_desc = ARRAY_SIZE(desc[0].data);
+-	for (i = 0; i < type_num; i++) {
+-		offset = hclge_dfx_bd_offset_list[i];
+-		index = offset % entries_per_desc;
+-		desc_index = offset / entries_per_desc;
+-		bd_num_list[i] = le32_to_cpu(desc[desc_index].data[index]);
+-	}
+-
+-	return ret;
+-}
+-
+-static int hclge_dfx_reg_cmd_send(struct hclge_dev *hdev,
+-				  struct hclge_desc *desc_src, int bd_num,
+-				  enum hclge_opcode_type cmd)
+-{
+-	struct hclge_desc *desc = desc_src;
+-	int i, ret;
+-
+-	hclge_cmd_setup_basic_desc(desc, cmd, true);
+-	for (i = 0; i < bd_num - 1; i++) {
+-		desc->flag |= cpu_to_le16(HCLGE_COMM_CMD_FLAG_NEXT);
+-		desc++;
+-		hclge_cmd_setup_basic_desc(desc, cmd, true);
+-	}
+-
+-	desc = desc_src;
+-	ret = hclge_cmd_send(&hdev->hw, desc, bd_num);
+-	if (ret)
+-		dev_err(&hdev->pdev->dev,
+-			"Query dfx reg cmd(0x%x) send fail, status is %d.\n",
+-			cmd, ret);
+-
+-	return ret;
+-}
+-
+-static int hclge_dfx_reg_fetch_data(struct hclge_desc *desc_src, int bd_num,
+-				    void *data)
+-{
+-	int entries_per_desc, reg_num, separator_num, desc_index, index, i;
+-	struct hclge_desc *desc = desc_src;
+-	u32 *reg = data;
+-
+-	entries_per_desc = ARRAY_SIZE(desc->data);
+-	reg_num = entries_per_desc * bd_num;
+-	separator_num = REG_NUM_PER_LINE - (reg_num & REG_NUM_REMAIN_MASK);
+-	for (i = 0; i < reg_num; i++) {
+-		index = i % entries_per_desc;
+-		desc_index = i / entries_per_desc;
+-		*reg++ = le32_to_cpu(desc[desc_index].data[index]);
+-	}
+-	for (i = 0; i < separator_num; i++)
+-		*reg++ = SEPARATOR_VALUE;
+-
+-	return reg_num + separator_num;
+-}
+-
+-static int hclge_get_dfx_reg_len(struct hclge_dev *hdev, int *len)
+-{
+-	u32 dfx_reg_type_num = ARRAY_SIZE(hclge_dfx_bd_offset_list);
+-	int data_len_per_desc, bd_num, i;
+-	int *bd_num_list;
+-	u32 data_len;
+-	int ret;
+-
+-	bd_num_list = kcalloc(dfx_reg_type_num, sizeof(int), GFP_KERNEL);
+-	if (!bd_num_list)
+-		return -ENOMEM;
+-
+-	ret = hclge_get_dfx_reg_bd_num(hdev, bd_num_list, dfx_reg_type_num);
+-	if (ret) {
+-		dev_err(&hdev->pdev->dev,
+-			"Get dfx reg bd num fail, status is %d.\n", ret);
+-		goto out;
+-	}
+-
+-	data_len_per_desc = sizeof_field(struct hclge_desc, data);
+-	*len = 0;
+-	for (i = 0; i < dfx_reg_type_num; i++) {
+-		bd_num = bd_num_list[i];
+-		data_len = data_len_per_desc * bd_num;
+-		*len += (data_len / REG_LEN_PER_LINE + 1) * REG_LEN_PER_LINE;
+-	}
+-
+-out:
+-	kfree(bd_num_list);
+-	return ret;
+-}
+-
+-static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data)
+-{
+-	u32 dfx_reg_type_num = ARRAY_SIZE(hclge_dfx_bd_offset_list);
+-	int bd_num, bd_num_max, buf_len, i;
+-	struct hclge_desc *desc_src;
+-	int *bd_num_list;
+-	u32 *reg = data;
+-	int ret;
+-
+-	bd_num_list = kcalloc(dfx_reg_type_num, sizeof(int), GFP_KERNEL);
+-	if (!bd_num_list)
+-		return -ENOMEM;
+-
+-	ret = hclge_get_dfx_reg_bd_num(hdev, bd_num_list, dfx_reg_type_num);
+-	if (ret) {
+-		dev_err(&hdev->pdev->dev,
+-			"Get dfx reg bd num fail, status is %d.\n", ret);
+-		goto out;
+-	}
+-
+-	bd_num_max = bd_num_list[0];
+-	for (i = 1; i < dfx_reg_type_num; i++)
+-		bd_num_max = max_t(int, bd_num_max, bd_num_list[i]);
+-
+-	buf_len = sizeof(*desc_src) * bd_num_max;
+-	desc_src = kzalloc(buf_len, GFP_KERNEL);
+-	if (!desc_src) {
+-		ret = -ENOMEM;
+-		goto out;
+-	}
+-
+-	for (i = 0; i < dfx_reg_type_num; i++) {
+-		bd_num = bd_num_list[i];
+-		ret = hclge_dfx_reg_cmd_send(hdev, desc_src, bd_num,
+-					     hclge_dfx_reg_opcode_list[i]);
+-		if (ret) {
+-			dev_err(&hdev->pdev->dev,
+-				"Get dfx reg fail, status is %d.\n", ret);
+-			break;
+-		}
+-
+-		reg += hclge_dfx_reg_fetch_data(desc_src, bd_num, reg);
+-	}
+-
+-	kfree(desc_src);
+-out:
+-	kfree(bd_num_list);
+-	return ret;
+-}
+-
+-static int hclge_fetch_pf_reg(struct hclge_dev *hdev, void *data,
+-			      struct hnae3_knic_private_info *kinfo)
+-{
+-#define HCLGE_RING_REG_OFFSET		0x200
+-#define HCLGE_RING_INT_REG_OFFSET	0x4
+-
+-	int i, j, reg_num, separator_num;
+-	int data_num_sum;
+-	u32 *reg = data;
+-
+-	/* fetching per-PF registers valus from PF PCIe register space */
+-	reg_num = ARRAY_SIZE(cmdq_reg_addr_list);
+-	separator_num = MAX_SEPARATE_NUM - (reg_num & REG_NUM_REMAIN_MASK);
+-	for (i = 0; i < reg_num; i++)
+-		*reg++ = hclge_read_dev(&hdev->hw, cmdq_reg_addr_list[i]);
+-	for (i = 0; i < separator_num; i++)
+-		*reg++ = SEPARATOR_VALUE;
+-	data_num_sum = reg_num + separator_num;
+-
+-	reg_num = ARRAY_SIZE(common_reg_addr_list);
+-	separator_num = MAX_SEPARATE_NUM - (reg_num & REG_NUM_REMAIN_MASK);
+-	for (i = 0; i < reg_num; i++)
+-		*reg++ = hclge_read_dev(&hdev->hw, common_reg_addr_list[i]);
+-	for (i = 0; i < separator_num; i++)
+-		*reg++ = SEPARATOR_VALUE;
+-	data_num_sum += reg_num + separator_num;
+-
+-	reg_num = ARRAY_SIZE(ring_reg_addr_list);
+-	separator_num = MAX_SEPARATE_NUM - (reg_num & REG_NUM_REMAIN_MASK);
+-	for (j = 0; j < kinfo->num_tqps; j++) {
+-		for (i = 0; i < reg_num; i++)
+-			*reg++ = hclge_read_dev(&hdev->hw,
+-						ring_reg_addr_list[i] +
+-						HCLGE_RING_REG_OFFSET * j);
+-		for (i = 0; i < separator_num; i++)
+-			*reg++ = SEPARATOR_VALUE;
+-	}
+-	data_num_sum += (reg_num + separator_num) * kinfo->num_tqps;
+-
+-	reg_num = ARRAY_SIZE(tqp_intr_reg_addr_list);
+-	separator_num = MAX_SEPARATE_NUM - (reg_num & REG_NUM_REMAIN_MASK);
+-	for (j = 0; j < hdev->num_msi_used - 1; j++) {
+-		for (i = 0; i < reg_num; i++)
+-			*reg++ = hclge_read_dev(&hdev->hw,
+-						tqp_intr_reg_addr_list[i] +
+-						HCLGE_RING_INT_REG_OFFSET * j);
+-		for (i = 0; i < separator_num; i++)
+-			*reg++ = SEPARATOR_VALUE;
+-	}
+-	data_num_sum += (reg_num + separator_num) * (hdev->num_msi_used - 1);
+-
+-	return data_num_sum;
+-}
+-
+-static int hclge_get_regs_len(struct hnae3_handle *handle)
+-{
+-	int cmdq_lines, common_lines, ring_lines, tqp_intr_lines;
+-	struct hnae3_knic_private_info *kinfo = &handle->kinfo;
+-	struct hclge_vport *vport = hclge_get_vport(handle);
+-	struct hclge_dev *hdev = vport->back;
+-	int regs_num_32_bit, regs_num_64_bit, dfx_regs_len;
+-	int regs_lines_32_bit, regs_lines_64_bit;
+-	int ret;
+-
+-	ret = hclge_get_regs_num(hdev, &regs_num_32_bit, &regs_num_64_bit);
+-	if (ret) {
+-		dev_err(&hdev->pdev->dev,
+-			"Get register number failed, ret = %d.\n", ret);
+-		return ret;
+-	}
+-
+-	ret = hclge_get_dfx_reg_len(hdev, &dfx_regs_len);
+-	if (ret) {
+-		dev_err(&hdev->pdev->dev,
+-			"Get dfx reg len failed, ret = %d.\n", ret);
+-		return ret;
+-	}
+-
+-	cmdq_lines = sizeof(cmdq_reg_addr_list) / REG_LEN_PER_LINE +
+-		REG_SEPARATOR_LINE;
+-	common_lines = sizeof(common_reg_addr_list) / REG_LEN_PER_LINE +
+-		REG_SEPARATOR_LINE;
+-	ring_lines = sizeof(ring_reg_addr_list) / REG_LEN_PER_LINE +
+-		REG_SEPARATOR_LINE;
+-	tqp_intr_lines = sizeof(tqp_intr_reg_addr_list) / REG_LEN_PER_LINE +
+-		REG_SEPARATOR_LINE;
+-	regs_lines_32_bit = regs_num_32_bit * sizeof(u32) / REG_LEN_PER_LINE +
+-		REG_SEPARATOR_LINE;
+-	regs_lines_64_bit = regs_num_64_bit * sizeof(u64) / REG_LEN_PER_LINE +
+-		REG_SEPARATOR_LINE;
+-
+-	return (cmdq_lines + common_lines + ring_lines * kinfo->num_tqps +
+-		tqp_intr_lines * (hdev->num_msi_used - 1) + regs_lines_32_bit +
+-		regs_lines_64_bit) * REG_LEN_PER_LINE + dfx_regs_len;
+-}
+-
+-static void hclge_get_regs(struct hnae3_handle *handle, u32 *version,
+-			   void *data)
+-{
+-	struct hnae3_knic_private_info *kinfo = &handle->kinfo;
+-	struct hclge_vport *vport = hclge_get_vport(handle);
+-	struct hclge_dev *hdev = vport->back;
+-	u32 regs_num_32_bit, regs_num_64_bit;
+-	int i, reg_num, separator_num, ret;
+-	u32 *reg = data;
+-
+-	*version = hdev->fw_version;
+-
+-	ret = hclge_get_regs_num(hdev, &regs_num_32_bit, &regs_num_64_bit);
+-	if (ret) {
+-		dev_err(&hdev->pdev->dev,
+-			"Get register number failed, ret = %d.\n", ret);
+-		return;
+-	}
+-
+-	reg += hclge_fetch_pf_reg(hdev, reg, kinfo);
+-
+-	ret = hclge_get_32_bit_regs(hdev, regs_num_32_bit, reg);
+-	if (ret) {
+-		dev_err(&hdev->pdev->dev,
+-			"Get 32 bit register failed, ret = %d.\n", ret);
+-		return;
+-	}
+-	reg_num = regs_num_32_bit;
+-	reg += reg_num;
+-	separator_num = MAX_SEPARATE_NUM - (reg_num & REG_NUM_REMAIN_MASK);
+-	for (i = 0; i < separator_num; i++)
+-		*reg++ = SEPARATOR_VALUE;
+-
+-	ret = hclge_get_64_bit_regs(hdev, regs_num_64_bit, reg);
+-	if (ret) {
+-		dev_err(&hdev->pdev->dev,
+-			"Get 64 bit register failed, ret = %d.\n", ret);
+-		return;
+-	}
+-	reg_num = regs_num_64_bit * 2;
+-	reg += reg_num;
+-	separator_num = MAX_SEPARATE_NUM - (reg_num & REG_NUM_REMAIN_MASK);
+-	for (i = 0; i < separator_num; i++)
+-		*reg++ = SEPARATOR_VALUE;
+-
+-	ret = hclge_get_dfx_reg(hdev, reg);
+-	if (ret)
+-		dev_err(&hdev->pdev->dev,
+-			"Get dfx register failed, ret = %d.\n", ret);
+-}
+-
+ static int hclge_set_led_status(struct hclge_dev *hdev, u8 locate_led_status)
+ {
+ 	struct hclge_set_led_state_cmd *req;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+index 6a43d1515585f..8f76b568c1bf6 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+@@ -1142,8 +1142,6 @@ int hclge_push_vf_port_base_vlan_info(struct hclge_vport *vport, u8 vfid,
+ 				      u16 state,
+ 				      struct hclge_vlan_info *vlan_info);
+ void hclge_task_schedule(struct hclge_dev *hdev, unsigned long delay_time);
+-int hclge_query_bd_num_cmd_send(struct hclge_dev *hdev,
+-				struct hclge_desc *desc);
+ void hclge_report_hw_error(struct hclge_dev *hdev,
+ 			   enum hnae3_hw_error_type type);
+ void hclge_inform_vf_promisc_info(struct hclge_vport *vport);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.c
+new file mode 100644
+index 0000000000000..43c1c18fa81f8
+--- /dev/null
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.c
+@@ -0,0 +1,668 @@
++// SPDX-License-Identifier: GPL-2.0+
++// Copyright (c) 2023 Hisilicon Limited.
++
++#include "hclge_cmd.h"
++#include "hclge_main.h"
++#include "hclge_regs.h"
++#include "hnae3.h"
++
++static const u32 cmdq_reg_addr_list[] = {HCLGE_COMM_NIC_CSQ_BASEADDR_L_REG,
++					 HCLGE_COMM_NIC_CSQ_BASEADDR_H_REG,
++					 HCLGE_COMM_NIC_CSQ_DEPTH_REG,
++					 HCLGE_COMM_NIC_CSQ_TAIL_REG,
++					 HCLGE_COMM_NIC_CSQ_HEAD_REG,
++					 HCLGE_COMM_NIC_CRQ_BASEADDR_L_REG,
++					 HCLGE_COMM_NIC_CRQ_BASEADDR_H_REG,
++					 HCLGE_COMM_NIC_CRQ_DEPTH_REG,
++					 HCLGE_COMM_NIC_CRQ_TAIL_REG,
++					 HCLGE_COMM_NIC_CRQ_HEAD_REG,
++					 HCLGE_COMM_VECTOR0_CMDQ_SRC_REG,
++					 HCLGE_COMM_CMDQ_INTR_STS_REG,
++					 HCLGE_COMM_CMDQ_INTR_EN_REG,
++					 HCLGE_COMM_CMDQ_INTR_GEN_REG};
++
++static const u32 common_reg_addr_list[] = {HCLGE_MISC_VECTOR_REG_BASE,
++					   HCLGE_PF_OTHER_INT_REG,
++					   HCLGE_MISC_RESET_STS_REG,
++					   HCLGE_MISC_VECTOR_INT_STS,
++					   HCLGE_GLOBAL_RESET_REG,
++					   HCLGE_FUN_RST_ING,
++					   HCLGE_GRO_EN_REG};
++
++static const u32 ring_reg_addr_list[] = {HCLGE_RING_RX_ADDR_L_REG,
++					 HCLGE_RING_RX_ADDR_H_REG,
++					 HCLGE_RING_RX_BD_NUM_REG,
++					 HCLGE_RING_RX_BD_LENGTH_REG,
++					 HCLGE_RING_RX_MERGE_EN_REG,
++					 HCLGE_RING_RX_TAIL_REG,
++					 HCLGE_RING_RX_HEAD_REG,
++					 HCLGE_RING_RX_FBD_NUM_REG,
++					 HCLGE_RING_RX_OFFSET_REG,
++					 HCLGE_RING_RX_FBD_OFFSET_REG,
++					 HCLGE_RING_RX_STASH_REG,
++					 HCLGE_RING_RX_BD_ERR_REG,
++					 HCLGE_RING_TX_ADDR_L_REG,
++					 HCLGE_RING_TX_ADDR_H_REG,
++					 HCLGE_RING_TX_BD_NUM_REG,
++					 HCLGE_RING_TX_PRIORITY_REG,
++					 HCLGE_RING_TX_TC_REG,
++					 HCLGE_RING_TX_MERGE_EN_REG,
++					 HCLGE_RING_TX_TAIL_REG,
++					 HCLGE_RING_TX_HEAD_REG,
++					 HCLGE_RING_TX_FBD_NUM_REG,
++					 HCLGE_RING_TX_OFFSET_REG,
++					 HCLGE_RING_TX_EBD_NUM_REG,
++					 HCLGE_RING_TX_EBD_OFFSET_REG,
++					 HCLGE_RING_TX_BD_ERR_REG,
++					 HCLGE_RING_EN_REG};
++
++static const u32 tqp_intr_reg_addr_list[] = {HCLGE_TQP_INTR_CTRL_REG,
++					     HCLGE_TQP_INTR_GL0_REG,
++					     HCLGE_TQP_INTR_GL1_REG,
++					     HCLGE_TQP_INTR_GL2_REG,
++					     HCLGE_TQP_INTR_RL_REG};
++
++/* Get DFX BD number offset */
++#define HCLGE_DFX_BIOS_BD_OFFSET        1
++#define HCLGE_DFX_SSU_0_BD_OFFSET       2
++#define HCLGE_DFX_SSU_1_BD_OFFSET       3
++#define HCLGE_DFX_IGU_BD_OFFSET         4
++#define HCLGE_DFX_RPU_0_BD_OFFSET       5
++#define HCLGE_DFX_RPU_1_BD_OFFSET       6
++#define HCLGE_DFX_NCSI_BD_OFFSET        7
++#define HCLGE_DFX_RTC_BD_OFFSET         8
++#define HCLGE_DFX_PPP_BD_OFFSET         9
++#define HCLGE_DFX_RCB_BD_OFFSET         10
++#define HCLGE_DFX_TQP_BD_OFFSET         11
++#define HCLGE_DFX_SSU_2_BD_OFFSET       12
++
++static const u32 hclge_dfx_bd_offset_list[] = {
++	HCLGE_DFX_BIOS_BD_OFFSET,
++	HCLGE_DFX_SSU_0_BD_OFFSET,
++	HCLGE_DFX_SSU_1_BD_OFFSET,
++	HCLGE_DFX_IGU_BD_OFFSET,
++	HCLGE_DFX_RPU_0_BD_OFFSET,
++	HCLGE_DFX_RPU_1_BD_OFFSET,
++	HCLGE_DFX_NCSI_BD_OFFSET,
++	HCLGE_DFX_RTC_BD_OFFSET,
++	HCLGE_DFX_PPP_BD_OFFSET,
++	HCLGE_DFX_RCB_BD_OFFSET,
++	HCLGE_DFX_TQP_BD_OFFSET,
++	HCLGE_DFX_SSU_2_BD_OFFSET
++};
++
++static const enum hclge_opcode_type hclge_dfx_reg_opcode_list[] = {
++	HCLGE_OPC_DFX_BIOS_COMMON_REG,
++	HCLGE_OPC_DFX_SSU_REG_0,
++	HCLGE_OPC_DFX_SSU_REG_1,
++	HCLGE_OPC_DFX_IGU_EGU_REG,
++	HCLGE_OPC_DFX_RPU_REG_0,
++	HCLGE_OPC_DFX_RPU_REG_1,
++	HCLGE_OPC_DFX_NCSI_REG,
++	HCLGE_OPC_DFX_RTC_REG,
++	HCLGE_OPC_DFX_PPP_REG,
++	HCLGE_OPC_DFX_RCB_REG,
++	HCLGE_OPC_DFX_TQP_REG,
++	HCLGE_OPC_DFX_SSU_REG_2
++};
++
++enum hclge_reg_tag {
++	HCLGE_REG_TAG_CMDQ = 0,
++	HCLGE_REG_TAG_COMMON,
++	HCLGE_REG_TAG_RING,
++	HCLGE_REG_TAG_TQP_INTR,
++	HCLGE_REG_TAG_QUERY_32_BIT,
++	HCLGE_REG_TAG_QUERY_64_BIT,
++	HCLGE_REG_TAG_DFX_BIOS_COMMON,
++	HCLGE_REG_TAG_DFX_SSU_0,
++	HCLGE_REG_TAG_DFX_SSU_1,
++	HCLGE_REG_TAG_DFX_IGU_EGU,
++	HCLGE_REG_TAG_DFX_RPU_0,
++	HCLGE_REG_TAG_DFX_RPU_1,
++	HCLGE_REG_TAG_DFX_NCSI,
++	HCLGE_REG_TAG_DFX_RTC,
++	HCLGE_REG_TAG_DFX_PPP,
++	HCLGE_REG_TAG_DFX_RCB,
++	HCLGE_REG_TAG_DFX_TQP,
++	HCLGE_REG_TAG_DFX_SSU_2,
++	HCLGE_REG_TAG_RPU_TNL,
++};
++
++#pragma pack(4)
++struct hclge_reg_tlv {
++	u16 tag;
++	u16 len;
++};
++
++struct hclge_reg_header {
++	u64 magic_number;
++	u8 is_vf;
++	u8 rsv[7];
++};
++
++#pragma pack()
++
++#define HCLGE_REG_TLV_SIZE	sizeof(struct hclge_reg_tlv)
++#define HCLGE_REG_HEADER_SIZE	sizeof(struct hclge_reg_header)
++#define HCLGE_REG_TLV_SPACE	(sizeof(struct hclge_reg_tlv) / sizeof(u32))
++#define HCLGE_REG_HEADER_SPACE	(sizeof(struct hclge_reg_header) / sizeof(u32))
++#define HCLGE_REG_MAGIC_NUMBER	0x686e733372656773 /* meaning is hns3regs */
++
++#define HCLGE_REG_RPU_TNL_ID_0	1
++
++static u32 hclge_reg_get_header(void *data)
++{
++	struct hclge_reg_header *header = data;
++
++	header->magic_number = HCLGE_REG_MAGIC_NUMBER;
++	header->is_vf = 0x0;
++
++	return HCLGE_REG_HEADER_SPACE;
++}
++
++static u32 hclge_reg_get_tlv(u32 tag, u32 regs_num, void *data)
++{
++	struct hclge_reg_tlv *tlv = data;
++
++	tlv->tag = tag;
++	tlv->len = regs_num * sizeof(u32) + HCLGE_REG_TLV_SIZE;
++
++	return HCLGE_REG_TLV_SPACE;
++}
++
++static int hclge_get_32_bit_regs(struct hclge_dev *hdev, u32 regs_num,
++				 void *data)
++{
++#define HCLGE_32_BIT_REG_RTN_DATANUM 8
++#define HCLGE_32_BIT_DESC_NODATA_LEN 2
++
++	struct hclge_desc *desc;
++	u32 *reg_val = data;
++	__le32 *desc_data;
++	int nodata_num;
++	int cmd_num;
++	int i, k, n;
++	int ret;
++
++	if (regs_num == 0)
++		return 0;
++
++	nodata_num = HCLGE_32_BIT_DESC_NODATA_LEN;
++	cmd_num = DIV_ROUND_UP(regs_num + nodata_num,
++			       HCLGE_32_BIT_REG_RTN_DATANUM);
++	desc = kcalloc(cmd_num, sizeof(struct hclge_desc), GFP_KERNEL);
++	if (!desc)
++		return -ENOMEM;
++
++	hclge_cmd_setup_basic_desc(&desc[0], HCLGE_OPC_QUERY_32_BIT_REG, true);
++	ret = hclge_cmd_send(&hdev->hw, desc, cmd_num);
++	if (ret) {
++		dev_err(&hdev->pdev->dev,
++			"Query 32 bit register cmd failed, ret = %d.\n", ret);
++		kfree(desc);
++		return ret;
++	}
++
++	for (i = 0; i < cmd_num; i++) {
++		if (i == 0) {
++			desc_data = (__le32 *)(&desc[i].data[0]);
++			n = HCLGE_32_BIT_REG_RTN_DATANUM - nodata_num;
++		} else {
++			desc_data = (__le32 *)(&desc[i]);
++			n = HCLGE_32_BIT_REG_RTN_DATANUM;
++		}
++		for (k = 0; k < n; k++) {
++			*reg_val++ = le32_to_cpu(*desc_data++);
++
++			regs_num--;
++			if (!regs_num)
++				break;
++		}
++	}
++
++	kfree(desc);
++	return 0;
++}
++
++static int hclge_get_64_bit_regs(struct hclge_dev *hdev, u32 regs_num,
++				 void *data)
++{
++#define HCLGE_64_BIT_REG_RTN_DATANUM 4
++#define HCLGE_64_BIT_DESC_NODATA_LEN 1
++
++	struct hclge_desc *desc;
++	u64 *reg_val = data;
++	__le64 *desc_data;
++	int nodata_len;
++	int cmd_num;
++	int i, k, n;
++	int ret;
++
++	if (regs_num == 0)
++		return 0;
++
++	nodata_len = HCLGE_64_BIT_DESC_NODATA_LEN;
++	cmd_num = DIV_ROUND_UP(regs_num + nodata_len,
++			       HCLGE_64_BIT_REG_RTN_DATANUM);
++	desc = kcalloc(cmd_num, sizeof(struct hclge_desc), GFP_KERNEL);
++	if (!desc)
++		return -ENOMEM;
++
++	hclge_cmd_setup_basic_desc(&desc[0], HCLGE_OPC_QUERY_64_BIT_REG, true);
++	ret = hclge_cmd_send(&hdev->hw, desc, cmd_num);
++	if (ret) {
++		dev_err(&hdev->pdev->dev,
++			"Query 64 bit register cmd failed, ret = %d.\n", ret);
++		kfree(desc);
++		return ret;
++	}
++
++	for (i = 0; i < cmd_num; i++) {
++		if (i == 0) {
++			desc_data = (__le64 *)(&desc[i].data[0]);
++			n = HCLGE_64_BIT_REG_RTN_DATANUM - nodata_len;
++		} else {
++			desc_data = (__le64 *)(&desc[i]);
++			n = HCLGE_64_BIT_REG_RTN_DATANUM;
++		}
++		for (k = 0; k < n; k++) {
++			*reg_val++ = le64_to_cpu(*desc_data++);
++
++			regs_num--;
++			if (!regs_num)
++				break;
++		}
++	}
++
++	kfree(desc);
++	return 0;
++}
++
++int hclge_query_bd_num_cmd_send(struct hclge_dev *hdev, struct hclge_desc *desc)
++{
++	int i;
++
++	/* initialize command BD except the last one */
++	for (i = 0; i < HCLGE_GET_DFX_REG_TYPE_CNT - 1; i++) {
++		hclge_cmd_setup_basic_desc(&desc[i], HCLGE_OPC_DFX_BD_NUM,
++					   true);
++		desc[i].flag |= cpu_to_le16(HCLGE_COMM_CMD_FLAG_NEXT);
++	}
++
++	/* initialize the last command BD */
++	hclge_cmd_setup_basic_desc(&desc[i], HCLGE_OPC_DFX_BD_NUM, true);
++
++	return hclge_cmd_send(&hdev->hw, desc, HCLGE_GET_DFX_REG_TYPE_CNT);
++}
++
++static int hclge_get_dfx_reg_bd_num(struct hclge_dev *hdev,
++				    int *bd_num_list,
++				    u32 type_num)
++{
++	u32 entries_per_desc, desc_index, index, offset, i;
++	struct hclge_desc desc[HCLGE_GET_DFX_REG_TYPE_CNT];
++	int ret;
++
++	ret = hclge_query_bd_num_cmd_send(hdev, desc);
++	if (ret) {
++		dev_err(&hdev->pdev->dev,
++			"Get dfx bd num fail, status is %d.\n", ret);
++		return ret;
++	}
++
++	entries_per_desc = ARRAY_SIZE(desc[0].data);
++	for (i = 0; i < type_num; i++) {
++		offset = hclge_dfx_bd_offset_list[i];
++		index = offset % entries_per_desc;
++		desc_index = offset / entries_per_desc;
++		bd_num_list[i] = le32_to_cpu(desc[desc_index].data[index]);
++	}
++
++	return ret;
++}
++
++static int hclge_dfx_reg_cmd_send(struct hclge_dev *hdev,
++				  struct hclge_desc *desc_src, int bd_num,
++				  enum hclge_opcode_type cmd)
++{
++	struct hclge_desc *desc = desc_src;
++	int i, ret;
++
++	hclge_cmd_setup_basic_desc(desc, cmd, true);
++	for (i = 0; i < bd_num - 1; i++) {
++		desc->flag |= cpu_to_le16(HCLGE_COMM_CMD_FLAG_NEXT);
++		desc++;
++		hclge_cmd_setup_basic_desc(desc, cmd, true);
++	}
++
++	desc = desc_src;
++	ret = hclge_cmd_send(&hdev->hw, desc, bd_num);
++	if (ret)
++		dev_err(&hdev->pdev->dev,
++			"Query dfx reg cmd(0x%x) send fail, status is %d.\n",
++			cmd, ret);
++
++	return ret;
++}
++
++/* tnl_id = 0 means get the sum of all tnl registers' values */
++static int hclge_dfx_reg_rpu_tnl_cmd_send(struct hclge_dev *hdev, u32 tnl_id,
++					  struct hclge_desc *desc, int bd_num)
++{
++	int i, ret;
++
++	for (i = 0; i < bd_num; i++) {
++		hclge_cmd_setup_basic_desc(&desc[i], HCLGE_OPC_DFX_RPU_REG_0,
++					   true);
++		if (i != bd_num - 1)
++			desc[i].flag |= cpu_to_le16(HCLGE_COMM_CMD_FLAG_NEXT);
++	}
++
++	desc[0].data[0] = cpu_to_le32(tnl_id);
++	ret = hclge_cmd_send(&hdev->hw, desc, bd_num);
++	if (ret)
++		dev_err(&hdev->pdev->dev,
++			"failed to query dfx rpu tnl reg, ret = %d\n",
++			ret);
++	return ret;
++}
++
++static int hclge_dfx_reg_fetch_data(struct hclge_desc *desc_src, int bd_num,
++				    void *data)
++{
++	int entries_per_desc, reg_num, desc_index, index, i;
++	struct hclge_desc *desc = desc_src;
++	u32 *reg = data;
++
++	entries_per_desc = ARRAY_SIZE(desc->data);
++	reg_num = entries_per_desc * bd_num;
++	for (i = 0; i < reg_num; i++) {
++		index = i % entries_per_desc;
++		desc_index = i / entries_per_desc;
++		*reg++ = le32_to_cpu(desc[desc_index].data[index]);
++	}
++
++	return reg_num;
++}
++
++static int hclge_get_dfx_reg_len(struct hclge_dev *hdev, int *len)
++{
++	u32 dfx_reg_type_num = ARRAY_SIZE(hclge_dfx_bd_offset_list);
++	struct hnae3_ae_dev *ae_dev = pci_get_drvdata(hdev->pdev);
++	int data_len_per_desc;
++	int *bd_num_list;
++	int ret;
++	u32 i;
++
++	bd_num_list = kcalloc(dfx_reg_type_num, sizeof(int), GFP_KERNEL);
++	if (!bd_num_list)
++		return -ENOMEM;
++
++	ret = hclge_get_dfx_reg_bd_num(hdev, bd_num_list, dfx_reg_type_num);
++	if (ret) {
++		dev_err(&hdev->pdev->dev,
++			"Get dfx reg bd num fail, status is %d.\n", ret);
++		goto out;
++	}
++
++	data_len_per_desc = sizeof_field(struct hclge_desc, data);
++	*len = 0;
++	for (i = 0; i < dfx_reg_type_num; i++)
++		*len += bd_num_list[i] * data_len_per_desc + HCLGE_REG_TLV_SIZE;
++
++	/* The BD number of dfx_rpu_0 is reused by each dfx_rpu_tnl.
++	 * The HCLGE_DFX_*_BD_OFFSET values start at 1, but the array
++	 * subscript starts at 0, so the offset needs '- 1'.
++	 */
++	*len += (bd_num_list[HCLGE_DFX_RPU_0_BD_OFFSET - 1] * data_len_per_desc +
++		 HCLGE_REG_TLV_SIZE) * ae_dev->dev_specs.tnl_num;
++
++out:
++	kfree(bd_num_list);
++	return ret;
++}
++
++static int hclge_get_dfx_rpu_tnl_reg(struct hclge_dev *hdev, u32 *reg,
++				     struct hclge_desc *desc_src,
++				     int bd_num)
++{
++	struct hnae3_ae_dev *ae_dev = pci_get_drvdata(hdev->pdev);
++	int ret = 0;
++	u8 i;
++
++	for (i = HCLGE_REG_RPU_TNL_ID_0; i <= ae_dev->dev_specs.tnl_num; i++) {
++		ret = hclge_dfx_reg_rpu_tnl_cmd_send(hdev, i, desc_src, bd_num);
++		if (ret)
++			break;
++
++		reg += hclge_reg_get_tlv(HCLGE_REG_TAG_RPU_TNL,
++					 ARRAY_SIZE(desc_src->data) * bd_num,
++					 reg);
++		reg += hclge_dfx_reg_fetch_data(desc_src, bd_num, reg);
++	}
++
++	return ret;
++}
++
++static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data)
++{
++	u32 dfx_reg_type_num = ARRAY_SIZE(hclge_dfx_bd_offset_list);
++	int bd_num, bd_num_max, buf_len;
++	struct hclge_desc *desc_src;
++	int *bd_num_list;
++	u32 *reg = data;
++	int ret;
++	u32 i;
++
++	bd_num_list = kcalloc(dfx_reg_type_num, sizeof(int), GFP_KERNEL);
++	if (!bd_num_list)
++		return -ENOMEM;
++
++	ret = hclge_get_dfx_reg_bd_num(hdev, bd_num_list, dfx_reg_type_num);
++	if (ret) {
++		dev_err(&hdev->pdev->dev,
++			"Get dfx reg bd num fail, status is %d.\n", ret);
++		goto out;
++	}
++
++	bd_num_max = bd_num_list[0];
++	for (i = 1; i < dfx_reg_type_num; i++)
++		bd_num_max = max_t(int, bd_num_max, bd_num_list[i]);
++
++	buf_len = sizeof(*desc_src) * bd_num_max;
++	desc_src = kzalloc(buf_len, GFP_KERNEL);
++	if (!desc_src) {
++		ret = -ENOMEM;
++		goto out;
++	}
++
++	for (i = 0; i < dfx_reg_type_num; i++) {
++		bd_num = bd_num_list[i];
++		ret = hclge_dfx_reg_cmd_send(hdev, desc_src, bd_num,
++					     hclge_dfx_reg_opcode_list[i]);
++		if (ret) {
++			dev_err(&hdev->pdev->dev,
++				"Get dfx reg fail, status is %d.\n", ret);
++			goto free;
++		}
++
++		reg += hclge_reg_get_tlv(HCLGE_REG_TAG_DFX_BIOS_COMMON + i,
++					 ARRAY_SIZE(desc_src->data) * bd_num,
++					 reg);
++		reg += hclge_dfx_reg_fetch_data(desc_src, bd_num, reg);
++	}
++
++	/* The HCLGE_DFX_*_BD_OFFSET values start at 1, but the array
++	 * subscript starts at 0, so the offset needs '- 1'.
++	 */
++	bd_num = bd_num_list[HCLGE_DFX_RPU_0_BD_OFFSET - 1];
++	ret = hclge_get_dfx_rpu_tnl_reg(hdev, reg, desc_src, bd_num);
++
++free:
++	kfree(desc_src);
++out:
++	kfree(bd_num_list);
++	return ret;
++}
++
++static int hclge_fetch_pf_reg(struct hclge_dev *hdev, void *data,
++			      struct hnae3_knic_private_info *kinfo)
++{
++#define HCLGE_RING_REG_OFFSET		0x200
++#define HCLGE_RING_INT_REG_OFFSET	0x4
++
++	int i, j, reg_num;
++	int data_num_sum;
++	u32 *reg = data;
++
++	/* fetching per-PF register values from PF PCIe register space */
++	reg_num = ARRAY_SIZE(cmdq_reg_addr_list);
++	reg += hclge_reg_get_tlv(HCLGE_REG_TAG_CMDQ, reg_num, reg);
++	for (i = 0; i < reg_num; i++)
++		*reg++ = hclge_read_dev(&hdev->hw, cmdq_reg_addr_list[i]);
++	data_num_sum = reg_num + HCLGE_REG_TLV_SPACE;
++
++	reg_num = ARRAY_SIZE(common_reg_addr_list);
++	reg += hclge_reg_get_tlv(HCLGE_REG_TAG_COMMON, reg_num, reg);
++	for (i = 0; i < reg_num; i++)
++		*reg++ = hclge_read_dev(&hdev->hw, common_reg_addr_list[i]);
++	data_num_sum += reg_num + HCLGE_REG_TLV_SPACE;
++
++	reg_num = ARRAY_SIZE(ring_reg_addr_list);
++	for (j = 0; j < kinfo->num_tqps; j++) {
++		reg += hclge_reg_get_tlv(HCLGE_REG_TAG_RING, reg_num, reg);
++		for (i = 0; i < reg_num; i++)
++			*reg++ = hclge_read_dev(&hdev->hw,
++						ring_reg_addr_list[i] +
++						HCLGE_RING_REG_OFFSET * j);
++	}
++	data_num_sum += (reg_num + HCLGE_REG_TLV_SPACE) * kinfo->num_tqps;
++
++	reg_num = ARRAY_SIZE(tqp_intr_reg_addr_list);
++	for (j = 0; j < hdev->num_msi_used - 1; j++) {
++		reg += hclge_reg_get_tlv(HCLGE_REG_TAG_TQP_INTR, reg_num, reg);
++		for (i = 0; i < reg_num; i++)
++			*reg++ = hclge_read_dev(&hdev->hw,
++						tqp_intr_reg_addr_list[i] +
++						HCLGE_RING_INT_REG_OFFSET * j);
++	}
++	data_num_sum += (reg_num + HCLGE_REG_TLV_SPACE) *
++			(hdev->num_msi_used - 1);
++
++	return data_num_sum;
++}
++
++static int hclge_get_regs_num(struct hclge_dev *hdev, u32 *regs_num_32_bit,
++			      u32 *regs_num_64_bit)
++{
++	struct hclge_desc desc;
++	u32 total_num;
++	int ret;
++
++	hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_QUERY_REG_NUM, true);
++	ret = hclge_cmd_send(&hdev->hw, &desc, 1);
++	if (ret) {
++		dev_err(&hdev->pdev->dev,
++			"Query register number cmd failed, ret = %d.\n", ret);
++		return ret;
++	}
++
++	*regs_num_32_bit = le32_to_cpu(desc.data[0]);
++	*regs_num_64_bit = le32_to_cpu(desc.data[1]);
++
++	total_num = *regs_num_32_bit + *regs_num_64_bit;
++	if (!total_num)
++		return -EINVAL;
++
++	return 0;
++}
++
++int hclge_get_regs_len(struct hnae3_handle *handle)
++{
++	struct hnae3_knic_private_info *kinfo = &handle->kinfo;
++	struct hclge_vport *vport = hclge_get_vport(handle);
++	int regs_num_32_bit, regs_num_64_bit, dfx_regs_len;
++	int cmdq_len, common_len, ring_len, tqp_intr_len;
++	int regs_len_32_bit, regs_len_64_bit;
++	struct hclge_dev *hdev = vport->back;
++	int ret;
++
++	ret = hclge_get_regs_num(hdev, &regs_num_32_bit, &regs_num_64_bit);
++	if (ret) {
++		dev_err(&hdev->pdev->dev,
++			"Get register number failed, ret = %d.\n", ret);
++		return ret;
++	}
++
++	ret = hclge_get_dfx_reg_len(hdev, &dfx_regs_len);
++	if (ret) {
++		dev_err(&hdev->pdev->dev,
++			"Get dfx reg len failed, ret = %d.\n", ret);
++		return ret;
++	}
++
++	cmdq_len = HCLGE_REG_TLV_SIZE + sizeof(cmdq_reg_addr_list);
++	common_len = HCLGE_REG_TLV_SIZE + sizeof(common_reg_addr_list);
++	ring_len = HCLGE_REG_TLV_SIZE + sizeof(ring_reg_addr_list);
++	tqp_intr_len = HCLGE_REG_TLV_SIZE + sizeof(tqp_intr_reg_addr_list);
++	regs_len_32_bit = HCLGE_REG_TLV_SIZE + regs_num_32_bit * sizeof(u32);
++	regs_len_64_bit = HCLGE_REG_TLV_SIZE + regs_num_64_bit * sizeof(u64);
++
++	/* return the total length of all register values */
++	return HCLGE_REG_HEADER_SIZE + cmdq_len + common_len + ring_len *
++		kinfo->num_tqps + tqp_intr_len * (hdev->num_msi_used - 1) +
++		regs_len_32_bit + regs_len_64_bit + dfx_regs_len;
++}
++
++void hclge_get_regs(struct hnae3_handle *handle, u32 *version,
++		    void *data)
++{
++#define HCLGE_REG_64_BIT_SPACE_MULTIPLE		2
++
++	struct hnae3_knic_private_info *kinfo = &handle->kinfo;
++	struct hclge_vport *vport = hclge_get_vport(handle);
++	struct hclge_dev *hdev = vport->back;
++	u32 regs_num_32_bit, regs_num_64_bit;
++	u32 *reg = data;
++	int ret;
++
++	*version = hdev->fw_version;
++
++	ret = hclge_get_regs_num(hdev, &regs_num_32_bit, &regs_num_64_bit);
++	if (ret) {
++		dev_err(&hdev->pdev->dev,
++			"Get register number failed, ret = %d.\n", ret);
++		return;
++	}
++
++	reg += hclge_reg_get_header(reg);
++	reg += hclge_fetch_pf_reg(hdev, reg, kinfo);
++
++	reg += hclge_reg_get_tlv(HCLGE_REG_TAG_QUERY_32_BIT,
++				 regs_num_32_bit, reg);
++	ret = hclge_get_32_bit_regs(hdev, regs_num_32_bit, reg);
++	if (ret) {
++		dev_err(&hdev->pdev->dev,
++			"Get 32 bit register failed, ret = %d.\n", ret);
++		return;
++	}
++	reg += regs_num_32_bit;
++
++	reg += hclge_reg_get_tlv(HCLGE_REG_TAG_QUERY_64_BIT,
++				 regs_num_64_bit *
++				 HCLGE_REG_64_BIT_SPACE_MULTIPLE, reg);
++	ret = hclge_get_64_bit_regs(hdev, regs_num_64_bit, reg);
++	if (ret) {
++		dev_err(&hdev->pdev->dev,
++			"Get 64 bit register failed, ret = %d.\n", ret);
++		return;
++	}
++	reg += regs_num_64_bit * HCLGE_REG_64_BIT_SPACE_MULTIPLE;
++
++	ret = hclge_get_dfx_reg(hdev, reg);
++	if (ret)
++		dev_err(&hdev->pdev->dev,
++			"Get dfx register failed, ret = %d.\n", ret);
++}
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.h
+new file mode 100644
+index 0000000000000..b6bc1ecb8054e
+--- /dev/null
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.h
+@@ -0,0 +1,17 @@
++/* SPDX-License-Identifier: GPL-2.0+ */
++// Copyright (c) 2023 Hisilicon Limited.
++
++#ifndef __HCLGE_REGS_H
++#define __HCLGE_REGS_H
++#include <linux/types.h>
++#include "hclge_comm_cmd.h"
++
++struct hnae3_handle;
++struct hclge_dev;
++
++int hclge_query_bd_num_cmd_send(struct hclge_dev *hdev,
++				struct hclge_desc *desc);
++int hclge_get_regs_len(struct hnae3_handle *handle);
++void hclge_get_regs(struct hnae3_handle *handle, u32 *version,
++		    void *data);
++#endif
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 34f02ca8d1d2d..7a2f9233d6954 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -6,6 +6,7 @@
+ #include <net/rtnetlink.h>
+ #include "hclgevf_cmd.h"
+ #include "hclgevf_main.h"
++#include "hclgevf_regs.h"
+ #include "hclge_mbx.h"
+ #include "hnae3.h"
+ #include "hclgevf_devlink.h"
+@@ -33,58 +34,6 @@ static const struct pci_device_id ae_algovf_pci_tbl[] = {
+ 
+ MODULE_DEVICE_TABLE(pci, ae_algovf_pci_tbl);
+ 
+-static const u32 cmdq_reg_addr_list[] = {HCLGE_COMM_NIC_CSQ_BASEADDR_L_REG,
+-					 HCLGE_COMM_NIC_CSQ_BASEADDR_H_REG,
+-					 HCLGE_COMM_NIC_CSQ_DEPTH_REG,
+-					 HCLGE_COMM_NIC_CSQ_TAIL_REG,
+-					 HCLGE_COMM_NIC_CSQ_HEAD_REG,
+-					 HCLGE_COMM_NIC_CRQ_BASEADDR_L_REG,
+-					 HCLGE_COMM_NIC_CRQ_BASEADDR_H_REG,
+-					 HCLGE_COMM_NIC_CRQ_DEPTH_REG,
+-					 HCLGE_COMM_NIC_CRQ_TAIL_REG,
+-					 HCLGE_COMM_NIC_CRQ_HEAD_REG,
+-					 HCLGE_COMM_VECTOR0_CMDQ_SRC_REG,
+-					 HCLGE_COMM_VECTOR0_CMDQ_STATE_REG,
+-					 HCLGE_COMM_CMDQ_INTR_EN_REG,
+-					 HCLGE_COMM_CMDQ_INTR_GEN_REG};
+-
+-static const u32 common_reg_addr_list[] = {HCLGEVF_MISC_VECTOR_REG_BASE,
+-					   HCLGEVF_RST_ING,
+-					   HCLGEVF_GRO_EN_REG};
+-
+-static const u32 ring_reg_addr_list[] = {HCLGEVF_RING_RX_ADDR_L_REG,
+-					 HCLGEVF_RING_RX_ADDR_H_REG,
+-					 HCLGEVF_RING_RX_BD_NUM_REG,
+-					 HCLGEVF_RING_RX_BD_LENGTH_REG,
+-					 HCLGEVF_RING_RX_MERGE_EN_REG,
+-					 HCLGEVF_RING_RX_TAIL_REG,
+-					 HCLGEVF_RING_RX_HEAD_REG,
+-					 HCLGEVF_RING_RX_FBD_NUM_REG,
+-					 HCLGEVF_RING_RX_OFFSET_REG,
+-					 HCLGEVF_RING_RX_FBD_OFFSET_REG,
+-					 HCLGEVF_RING_RX_STASH_REG,
+-					 HCLGEVF_RING_RX_BD_ERR_REG,
+-					 HCLGEVF_RING_TX_ADDR_L_REG,
+-					 HCLGEVF_RING_TX_ADDR_H_REG,
+-					 HCLGEVF_RING_TX_BD_NUM_REG,
+-					 HCLGEVF_RING_TX_PRIORITY_REG,
+-					 HCLGEVF_RING_TX_TC_REG,
+-					 HCLGEVF_RING_TX_MERGE_EN_REG,
+-					 HCLGEVF_RING_TX_TAIL_REG,
+-					 HCLGEVF_RING_TX_HEAD_REG,
+-					 HCLGEVF_RING_TX_FBD_NUM_REG,
+-					 HCLGEVF_RING_TX_OFFSET_REG,
+-					 HCLGEVF_RING_TX_EBD_NUM_REG,
+-					 HCLGEVF_RING_TX_EBD_OFFSET_REG,
+-					 HCLGEVF_RING_TX_BD_ERR_REG,
+-					 HCLGEVF_RING_EN_REG};
+-
+-static const u32 tqp_intr_reg_addr_list[] = {HCLGEVF_TQP_INTR_CTRL_REG,
+-					     HCLGEVF_TQP_INTR_GL0_REG,
+-					     HCLGEVF_TQP_INTR_GL1_REG,
+-					     HCLGEVF_TQP_INTR_GL2_REG,
+-					     HCLGEVF_TQP_INTR_RL_REG};
+-
+ /* hclgevf_cmd_send - send command to command queue
+  * @hw: pointer to the hw struct
+  * @desc: prefilled descriptor for describing the command
+@@ -111,7 +60,7 @@ void hclgevf_arq_init(struct hclgevf_dev *hdev)
+ 	spin_unlock(&cmdq->crq.lock);
+ }
+ 
+-static struct hclgevf_dev *hclgevf_ae_get_hdev(struct hnae3_handle *handle)
++struct hclgevf_dev *hclgevf_ae_get_hdev(struct hnae3_handle *handle)
+ {
+ 	if (!handle->client)
+ 		return container_of(handle, struct hclgevf_dev, nic);
+@@ -3258,72 +3207,6 @@ static void hclgevf_get_link_mode(struct hnae3_handle *handle,
+ 	*advertising = hdev->hw.mac.advertising;
+ }
+ 
+-#define MAX_SEPARATE_NUM	4
+-#define SEPARATOR_VALUE		0xFDFCFBFA
+-#define REG_NUM_PER_LINE	4
+-#define REG_LEN_PER_LINE	(REG_NUM_PER_LINE * sizeof(u32))
+-
+-static int hclgevf_get_regs_len(struct hnae3_handle *handle)
+-{
+-	int cmdq_lines, common_lines, ring_lines, tqp_intr_lines;
+-	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+-
+-	cmdq_lines = sizeof(cmdq_reg_addr_list) / REG_LEN_PER_LINE + 1;
+-	common_lines = sizeof(common_reg_addr_list) / REG_LEN_PER_LINE + 1;
+-	ring_lines = sizeof(ring_reg_addr_list) / REG_LEN_PER_LINE + 1;
+-	tqp_intr_lines = sizeof(tqp_intr_reg_addr_list) / REG_LEN_PER_LINE + 1;
+-
+-	return (cmdq_lines + common_lines + ring_lines * hdev->num_tqps +
+-		tqp_intr_lines * (hdev->num_msi_used - 1)) * REG_LEN_PER_LINE;
+-}
+-
+-static void hclgevf_get_regs(struct hnae3_handle *handle, u32 *version,
+-			     void *data)
+-{
+-	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+-	int i, j, reg_um, separator_num;
+-	u32 *reg = data;
+-
+-	*version = hdev->fw_version;
+-
+-	/* fetching per-VF registers values from VF PCIe register space */
+-	reg_um = sizeof(cmdq_reg_addr_list) / sizeof(u32);
+-	separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE;
+-	for (i = 0; i < reg_um; i++)
+-		*reg++ = hclgevf_read_dev(&hdev->hw, cmdq_reg_addr_list[i]);
+-	for (i = 0; i < separator_num; i++)
+-		*reg++ = SEPARATOR_VALUE;
+-
+-	reg_um = sizeof(common_reg_addr_list) / sizeof(u32);
+-	separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE;
+-	for (i = 0; i < reg_um; i++)
+-		*reg++ = hclgevf_read_dev(&hdev->hw, common_reg_addr_list[i]);
+-	for (i = 0; i < separator_num; i++)
+-		*reg++ = SEPARATOR_VALUE;
+-
+-	reg_um = sizeof(ring_reg_addr_list) / sizeof(u32);
+-	separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE;
+-	for (j = 0; j < hdev->num_tqps; j++) {
+-		for (i = 0; i < reg_um; i++)
+-			*reg++ = hclgevf_read_dev(&hdev->hw,
+-						  ring_reg_addr_list[i] +
+-						  HCLGEVF_TQP_REG_SIZE * j);
+-		for (i = 0; i < separator_num; i++)
+-			*reg++ = SEPARATOR_VALUE;
+-	}
+-
+-	reg_um = sizeof(tqp_intr_reg_addr_list) / sizeof(u32);
+-	separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE;
+-	for (j = 0; j < hdev->num_msi_used - 1; j++) {
+-		for (i = 0; i < reg_um; i++)
+-			*reg++ = hclgevf_read_dev(&hdev->hw,
+-						  tqp_intr_reg_addr_list[i] +
+-						  4 * j);
+-		for (i = 0; i < separator_num; i++)
+-			*reg++ = SEPARATOR_VALUE;
+-	}
+-}
+-
+ void hclgevf_update_port_base_vlan_info(struct hclgevf_dev *hdev, u16 state,
+ 				struct hclge_mbx_port_base_vlan *port_base_vlan)
+ {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+index 59ca6c794d6db..81c16b8c8da29 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h
+@@ -294,4 +294,5 @@ void hclgevf_reset_task_schedule(struct hclgevf_dev *hdev);
+ void hclgevf_mbx_task_schedule(struct hclgevf_dev *hdev);
+ void hclgevf_update_port_base_vlan_info(struct hclgevf_dev *hdev, u16 state,
+ 			struct hclge_mbx_port_base_vlan *port_base_vlan);
++struct hclgevf_dev *hclgevf_ae_get_hdev(struct hnae3_handle *handle);
+ #endif
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.c
+new file mode 100644
+index 0000000000000..197ab733306b5
+--- /dev/null
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.c
+@@ -0,0 +1,127 @@
++// SPDX-License-Identifier: GPL-2.0+
++// Copyright (c) 2023 Hisilicon Limited.
++
++#include "hclgevf_main.h"
++#include "hclgevf_regs.h"
++#include "hnae3.h"
++
++static const u32 cmdq_reg_addr_list[] = {HCLGE_COMM_NIC_CSQ_BASEADDR_L_REG,
++					 HCLGE_COMM_NIC_CSQ_BASEADDR_H_REG,
++					 HCLGE_COMM_NIC_CSQ_DEPTH_REG,
++					 HCLGE_COMM_NIC_CSQ_TAIL_REG,
++					 HCLGE_COMM_NIC_CSQ_HEAD_REG,
++					 HCLGE_COMM_NIC_CRQ_BASEADDR_L_REG,
++					 HCLGE_COMM_NIC_CRQ_BASEADDR_H_REG,
++					 HCLGE_COMM_NIC_CRQ_DEPTH_REG,
++					 HCLGE_COMM_NIC_CRQ_TAIL_REG,
++					 HCLGE_COMM_NIC_CRQ_HEAD_REG,
++					 HCLGE_COMM_VECTOR0_CMDQ_SRC_REG,
++					 HCLGE_COMM_VECTOR0_CMDQ_STATE_REG,
++					 HCLGE_COMM_CMDQ_INTR_EN_REG,
++					 HCLGE_COMM_CMDQ_INTR_GEN_REG};
++
++static const u32 common_reg_addr_list[] = {HCLGEVF_MISC_VECTOR_REG_BASE,
++					   HCLGEVF_RST_ING,
++					   HCLGEVF_GRO_EN_REG};
++
++static const u32 ring_reg_addr_list[] = {HCLGEVF_RING_RX_ADDR_L_REG,
++					 HCLGEVF_RING_RX_ADDR_H_REG,
++					 HCLGEVF_RING_RX_BD_NUM_REG,
++					 HCLGEVF_RING_RX_BD_LENGTH_REG,
++					 HCLGEVF_RING_RX_MERGE_EN_REG,
++					 HCLGEVF_RING_RX_TAIL_REG,
++					 HCLGEVF_RING_RX_HEAD_REG,
++					 HCLGEVF_RING_RX_FBD_NUM_REG,
++					 HCLGEVF_RING_RX_OFFSET_REG,
++					 HCLGEVF_RING_RX_FBD_OFFSET_REG,
++					 HCLGEVF_RING_RX_STASH_REG,
++					 HCLGEVF_RING_RX_BD_ERR_REG,
++					 HCLGEVF_RING_TX_ADDR_L_REG,
++					 HCLGEVF_RING_TX_ADDR_H_REG,
++					 HCLGEVF_RING_TX_BD_NUM_REG,
++					 HCLGEVF_RING_TX_PRIORITY_REG,
++					 HCLGEVF_RING_TX_TC_REG,
++					 HCLGEVF_RING_TX_MERGE_EN_REG,
++					 HCLGEVF_RING_TX_TAIL_REG,
++					 HCLGEVF_RING_TX_HEAD_REG,
++					 HCLGEVF_RING_TX_FBD_NUM_REG,
++					 HCLGEVF_RING_TX_OFFSET_REG,
++					 HCLGEVF_RING_TX_EBD_NUM_REG,
++					 HCLGEVF_RING_TX_EBD_OFFSET_REG,
++					 HCLGEVF_RING_TX_BD_ERR_REG,
++					 HCLGEVF_RING_EN_REG};
++
++static const u32 tqp_intr_reg_addr_list[] = {HCLGEVF_TQP_INTR_CTRL_REG,
++					     HCLGEVF_TQP_INTR_GL0_REG,
++					     HCLGEVF_TQP_INTR_GL1_REG,
++					     HCLGEVF_TQP_INTR_GL2_REG,
++					     HCLGEVF_TQP_INTR_RL_REG};
++
++#define MAX_SEPARATE_NUM	4
++#define SEPARATOR_VALUE		0xFDFCFBFA
++#define REG_NUM_PER_LINE	4
++#define REG_LEN_PER_LINE	(REG_NUM_PER_LINE * sizeof(u32))
++
++int hclgevf_get_regs_len(struct hnae3_handle *handle)
++{
++	int cmdq_lines, common_lines, ring_lines, tqp_intr_lines;
++	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
++
++	cmdq_lines = sizeof(cmdq_reg_addr_list) / REG_LEN_PER_LINE + 1;
++	common_lines = sizeof(common_reg_addr_list) / REG_LEN_PER_LINE + 1;
++	ring_lines = sizeof(ring_reg_addr_list) / REG_LEN_PER_LINE + 1;
++	tqp_intr_lines = sizeof(tqp_intr_reg_addr_list) / REG_LEN_PER_LINE + 1;
++
++	return (cmdq_lines + common_lines + ring_lines * hdev->num_tqps +
++		tqp_intr_lines * (hdev->num_msi_used - 1)) * REG_LEN_PER_LINE;
++}
++
++void hclgevf_get_regs(struct hnae3_handle *handle, u32 *version,
++		      void *data)
++{
++#define HCLGEVF_RING_REG_OFFSET		0x200
++#define HCLGEVF_RING_INT_REG_OFFSET	0x4
++
++	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
++	int i, j, reg_um, separator_num;
++	u32 *reg = data;
++
++	*version = hdev->fw_version;
++
++	/* fetching per-VF registers values from VF PCIe register space */
++	reg_um = sizeof(cmdq_reg_addr_list) / sizeof(u32);
++	separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE;
++	for (i = 0; i < reg_um; i++)
++		*reg++ = hclgevf_read_dev(&hdev->hw, cmdq_reg_addr_list[i]);
++	for (i = 0; i < separator_num; i++)
++		*reg++ = SEPARATOR_VALUE;
++
++	reg_um = sizeof(common_reg_addr_list) / sizeof(u32);
++	separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE;
++	for (i = 0; i < reg_um; i++)
++		*reg++ = hclgevf_read_dev(&hdev->hw, common_reg_addr_list[i]);
++	for (i = 0; i < separator_num; i++)
++		*reg++ = SEPARATOR_VALUE;
++
++	reg_um = sizeof(ring_reg_addr_list) / sizeof(u32);
++	separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE;
++	for (j = 0; j < hdev->num_tqps; j++) {
++		for (i = 0; i < reg_um; i++)
++			*reg++ = hclgevf_read_dev(&hdev->hw,
++						  ring_reg_addr_list[i] +
++						  HCLGEVF_RING_REG_OFFSET * j);
++		for (i = 0; i < separator_num; i++)
++			*reg++ = SEPARATOR_VALUE;
++	}
++
++	reg_um = sizeof(tqp_intr_reg_addr_list) / sizeof(u32);
++	separator_num = MAX_SEPARATE_NUM - reg_um % REG_NUM_PER_LINE;
++	for (j = 0; j < hdev->num_msi_used - 1; j++) {
++		for (i = 0; i < reg_um; i++)
++			*reg++ = hclgevf_read_dev(&hdev->hw,
++						  tqp_intr_reg_addr_list[i] +
++						  HCLGEVF_RING_INT_REG_OFFSET * j);
++		for (i = 0; i < separator_num; i++)
++			*reg++ = SEPARATOR_VALUE;
++	}
++}
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.h
+new file mode 100644
+index 0000000000000..77bdcf60a1afe
+--- /dev/null
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.h
+@@ -0,0 +1,13 @@
++/* SPDX-License-Identifier: GPL-2.0+ */
++/* Copyright (c) 2023 Hisilicon Limited. */
++
++#ifndef __HCLGEVF_REGS_H
++#define __HCLGEVF_REGS_H
++#include <linux/types.h>
++
++struct hnae3_handle;
++
++int hclgevf_get_regs_len(struct hnae3_handle *handle);
++void hclgevf_get_regs(struct hnae3_handle *handle, u32 *version,
++		      void *data);
++#endif
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index b40dfe6ae3217..c2cdc79308dc1 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -1346,6 +1346,7 @@ int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout,
+ static void ice_aq_check_events(struct ice_pf *pf, u16 opcode,
+ 				struct ice_rq_event_info *event)
+ {
++	struct ice_rq_event_info *task_ev;
+ 	struct ice_aq_task *task;
+ 	bool found = false;
+ 
+@@ -1354,15 +1355,15 @@ static void ice_aq_check_events(struct ice_pf *pf, u16 opcode,
+ 		if (task->state || task->opcode != opcode)
+ 			continue;
+ 
+-		memcpy(&task->event->desc, &event->desc, sizeof(event->desc));
+-		task->event->msg_len = event->msg_len;
++		task_ev = task->event;
++		memcpy(&task_ev->desc, &event->desc, sizeof(event->desc));
++		task_ev->msg_len = event->msg_len;
+ 
+ 		/* Only copy the data buffer if a destination was set */
+-		if (task->event->msg_buf &&
+-		    task->event->buf_len > event->buf_len) {
+-			memcpy(task->event->msg_buf, event->msg_buf,
++		if (task_ev->msg_buf && task_ev->buf_len >= event->buf_len) {
++			memcpy(task_ev->msg_buf, event->msg_buf,
+ 			       event->buf_len);
+-			task->event->buf_len = event->buf_len;
++			task_ev->buf_len = event->buf_len;
+ 		}
+ 
+ 		task->state = ICE_AQ_TASK_COMPLETE;
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+index a38614d21ea8f..de1d83300481d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+@@ -131,6 +131,8 @@ static void ice_ptp_src_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd)
+ 	case READ_TIME:
+ 		cmd_val |= GLTSYN_CMD_READ_TIME;
+ 		break;
++	case ICE_PTP_NOP:
++		break;
+ 	}
+ 
+ 	wr32(hw, GLTSYN_CMD, cmd_val);
+@@ -1226,18 +1228,18 @@ ice_ptp_read_port_capture(struct ice_hw *hw, u8 port, u64 *tx_ts, u64 *rx_ts)
+ }
+ 
+ /**
+- * ice_ptp_one_port_cmd - Prepare a single PHY port for a timer command
++ * ice_ptp_write_port_cmd_e822 - Prepare a single PHY port for a timer command
+  * @hw: pointer to HW struct
+  * @port: Port to which cmd has to be sent
+  * @cmd: Command to be sent to the port
+  *
+  * Prepare the requested port for an upcoming timer sync command.
+  *
+- * Note there is no equivalent of this operation on E810, as that device
+- * always handles all external PHYs internally.
++ * Do not use this function directly. If you want to configure exactly one
++ * port, use ice_ptp_one_port_cmd() instead.
+  */
+ static int
+-ice_ptp_one_port_cmd(struct ice_hw *hw, u8 port, enum ice_ptp_tmr_cmd cmd)
++ice_ptp_write_port_cmd_e822(struct ice_hw *hw, u8 port, enum ice_ptp_tmr_cmd cmd)
+ {
+ 	u32 cmd_val, val;
+ 	u8 tmr_idx;
+@@ -1261,6 +1263,8 @@ ice_ptp_one_port_cmd(struct ice_hw *hw, u8 port, enum ice_ptp_tmr_cmd cmd)
+ 	case ADJ_TIME_AT_TIME:
+ 		cmd_val |= PHY_CMD_ADJ_TIME_AT_TIME;
+ 		break;
++	case ICE_PTP_NOP:
++		break;
+ 	}
+ 
+ 	/* Tx case */
+@@ -1306,6 +1310,39 @@ ice_ptp_one_port_cmd(struct ice_hw *hw, u8 port, enum ice_ptp_tmr_cmd cmd)
+ 	return 0;
+ }
+ 
++/**
++ * ice_ptp_one_port_cmd - Prepare one port for a timer command
++ * @hw: pointer to the HW struct
++ * @configured_port: the port to configure with configured_cmd
++ * @configured_cmd: timer command to prepare on the configured_port
++ *
++ * Prepare the configured_port for the configured_cmd, and prepare all other
++ * ports for ICE_PTP_NOP. This causes the configured_port to execute the
++ * desired command while all other ports perform no operation.
++ */
++static int
++ice_ptp_one_port_cmd(struct ice_hw *hw, u8 configured_port,
++		     enum ice_ptp_tmr_cmd configured_cmd)
++{
++	u8 port;
++
++	for (port = 0; port < ICE_NUM_EXTERNAL_PORTS; port++) {
++		enum ice_ptp_tmr_cmd cmd;
++		int err;
++
++		if (port == configured_port)
++			cmd = configured_cmd;
++		else
++			cmd = ICE_PTP_NOP;
++
++		err = ice_ptp_write_port_cmd_e822(hw, port, cmd);
++		if (err)
++			return err;
++	}
++
++	return 0;
++}
++
+ /**
+  * ice_ptp_port_cmd_e822 - Prepare all ports for a timer command
+  * @hw: pointer to the HW struct
+@@ -1322,7 +1359,7 @@ ice_ptp_port_cmd_e822(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd)
+ 	for (port = 0; port < ICE_NUM_EXTERNAL_PORTS; port++) {
+ 		int err;
+ 
+-		err = ice_ptp_one_port_cmd(hw, port, cmd);
++		err = ice_ptp_write_port_cmd_e822(hw, port, cmd);
+ 		if (err)
+ 			return err;
+ 	}
+@@ -2252,6 +2289,9 @@ static int ice_sync_phy_timer_e822(struct ice_hw *hw, u8 port)
+ 	if (err)
+ 		goto err_unlock;
+ 
++	/* Do not perform any action on the main timer */
++	ice_ptp_src_cmd(hw, ICE_PTP_NOP);
++
+ 	/* Issue the sync to activate the time adjustment */
+ 	ice_ptp_exec_tmr_cmd(hw);
+ 
+@@ -2372,6 +2412,9 @@ int ice_start_phy_timer_e822(struct ice_hw *hw, u8 port)
+ 	if (err)
+ 		return err;
+ 
++	/* Do not perform any action on the main timer */
++	ice_ptp_src_cmd(hw, ICE_PTP_NOP);
++
+ 	ice_ptp_exec_tmr_cmd(hw);
+ 
+ 	err = ice_read_phy_reg_e822(hw, port, P_REG_PS, &val);
+@@ -2847,6 +2890,8 @@ static int ice_ptp_port_cmd_e810(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd)
+ 	case ADJ_TIME_AT_TIME:
+ 		cmd_val = GLTSYN_CMD_ADJ_INIT_TIME;
+ 		break;
++	case ICE_PTP_NOP:
++		return 0;
+ 	}
+ 
+ 	/* Read, modify, write */
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+index 3b68cb91bd819..096685237ca61 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+@@ -9,7 +9,8 @@ enum ice_ptp_tmr_cmd {
+ 	INIT_INCVAL,
+ 	ADJ_TIME,
+ 	ADJ_TIME_AT_TIME,
+-	READ_TIME
++	READ_TIME,
++	ICE_PTP_NOP,
+ };
+ 
+ enum ice_ptp_serdes {
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 9a2561409b06f..08e3df37089fe 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -4814,6 +4814,10 @@ void igb_configure_rx_ring(struct igb_adapter *adapter,
+ static void igb_set_rx_buffer_len(struct igb_adapter *adapter,
+ 				  struct igb_ring *rx_ring)
+ {
++#if (PAGE_SIZE < 8192)
++	struct e1000_hw *hw = &adapter->hw;
++#endif
++
+ 	/* set build_skb and buffer size flags */
+ 	clear_ring_build_skb_enabled(rx_ring);
+ 	clear_ring_uses_large_buffer(rx_ring);
+@@ -4824,10 +4828,9 @@ static void igb_set_rx_buffer_len(struct igb_adapter *adapter,
+ 	set_ring_build_skb_enabled(rx_ring);
+ 
+ #if (PAGE_SIZE < 8192)
+-	if (adapter->max_frame_size <= IGB_MAX_FRAME_BUILD_SKB)
+-		return;
+-
+-	set_ring_uses_large_buffer(rx_ring);
++	if (adapter->max_frame_size > IGB_MAX_FRAME_BUILD_SKB ||
++	    rd32(E1000_RCTL) & E1000_RCTL_SBP)
++		set_ring_uses_large_buffer(rx_ring);
+ #endif
+ }
+ 
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
+index b4fcb20c3f4fd..af21e2030cff2 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
+@@ -355,8 +355,8 @@ int rpm_lmac_enadis_pause_frm(void *rpmd, int lmac_id, u8 tx_pause,
+ 
+ void rpm_lmac_pause_frm_config(void *rpmd, int lmac_id, bool enable)
+ {
++	u64 cfg, pfc_class_mask_cfg;
+ 	rpm_t *rpm = rpmd;
+-	u64 cfg;
+ 
+ 	/* ALL pause frames received are completely ignored */
+ 	cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG);
+@@ -380,9 +380,11 @@ void rpm_lmac_pause_frm_config(void *rpmd, int lmac_id, bool enable)
+ 		rpm_write(rpm, 0, RPMX_CMR_CHAN_MSK_OR, ~0ULL);
+ 
+ 	/* Disable all PFC classes */
+-	cfg = rpm_read(rpm, lmac_id, RPMX_CMRX_PRT_CBFC_CTL);
++	pfc_class_mask_cfg = is_dev_rpm2(rpm) ? RPM2_CMRX_PRT_CBFC_CTL :
++						RPMX_CMRX_PRT_CBFC_CTL;
++	cfg = rpm_read(rpm, lmac_id, pfc_class_mask_cfg);
+ 	cfg = FIELD_SET(RPM_PFC_CLASS_MASK, 0, cfg);
+-	rpm_write(rpm, lmac_id, RPMX_CMRX_PRT_CBFC_CTL, cfg);
++	rpm_write(rpm, lmac_id, pfc_class_mask_cfg, cfg);
+ }
+ 
+ int rpm_get_rx_stats(void *rpmd, int lmac_id, int idx, u64 *rx_stat)
+@@ -605,8 +607,11 @@ int rpm_lmac_pfc_config(void *rpmd, int lmac_id, u8 tx_pause, u8 rx_pause, u16 p
+ 	if (!is_lmac_valid(rpm, lmac_id))
+ 		return -ENODEV;
+ 
++	pfc_class_mask_cfg = is_dev_rpm2(rpm) ? RPM2_CMRX_PRT_CBFC_CTL :
++						RPMX_CMRX_PRT_CBFC_CTL;
++
+ 	cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG);
+-	class_en = rpm_read(rpm, lmac_id, RPMX_CMRX_PRT_CBFC_CTL);
++	class_en = rpm_read(rpm, lmac_id, pfc_class_mask_cfg);
+ 	pfc_en |= FIELD_GET(RPM_PFC_CLASS_MASK, class_en);
+ 
+ 	if (rx_pause) {
+@@ -635,10 +640,6 @@ int rpm_lmac_pfc_config(void *rpmd, int lmac_id, u8 tx_pause, u8 rx_pause, u16 p
+ 		cfg |= RPMX_MTI_MAC100X_COMMAND_CONFIG_PFC_MODE;
+ 
+ 	rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg);
+-
+-	pfc_class_mask_cfg = is_dev_rpm2(rpm) ? RPM2_CMRX_PRT_CBFC_CTL :
+-						RPMX_CMRX_PRT_CBFC_CTL;
+-
+ 	rpm_write(rpm, lmac_id, pfc_class_mask_cfg, class_en);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index 77c8f650f7ac1..b9712040a0bc2 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -804,6 +804,7 @@ void otx2_txschq_free_one(struct otx2_nic *pfvf, u16 lvl, u16 schq)
+ 
+ 	mutex_unlock(&pfvf->mbox.lock);
+ }
++EXPORT_SYMBOL(otx2_txschq_free_one);
+ 
+ void otx2_txschq_stop(struct otx2_nic *pfvf)
+ {
+@@ -1432,7 +1433,7 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
+ 	}
+ 
+ 	pp_params.flags = PP_FLAG_PAGE_FRAG | PP_FLAG_DMA_MAP;
+-	pp_params.pool_size = numptrs;
++	pp_params.pool_size = min(OTX2_PAGE_POOL_SZ, numptrs);
+ 	pp_params.nid = NUMA_NO_NODE;
+ 	pp_params.dev = pfvf->dev;
+ 	pp_params.dma_dir = DMA_FROM_DEVICE;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
+index ccaf97bb1ce03..bfddbff7bcdfb 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
+@@ -70,7 +70,7 @@ static int otx2_pfc_txschq_alloc_one(struct otx2_nic *pfvf, u8 prio)
+ 	 * link config level. These rest of the scheduler can be
+ 	 * same as hw.txschq_list.
+ 	 */
+-	for (lvl = 0; lvl < pfvf->hw.txschq_link_cfg_lvl; lvl++)
++	for (lvl = 0; lvl <= pfvf->hw.txschq_link_cfg_lvl; lvl++)
+ 		req->schq[lvl] = 1;
+ 
+ 	rc = otx2_sync_mbox_msg(&pfvf->mbox);
+@@ -83,7 +83,7 @@ static int otx2_pfc_txschq_alloc_one(struct otx2_nic *pfvf, u8 prio)
+ 		return PTR_ERR(rsp);
+ 
+ 	/* Setup transmit scheduler list */
+-	for (lvl = 0; lvl < pfvf->hw.txschq_link_cfg_lvl; lvl++) {
++	for (lvl = 0; lvl <= pfvf->hw.txschq_link_cfg_lvl; lvl++) {
+ 		if (!rsp->schq[lvl])
+ 			return -ENOSPC;
+ 
+@@ -125,19 +125,12 @@ int otx2_pfc_txschq_alloc(struct otx2_nic *pfvf)
+ 
+ static int otx2_pfc_txschq_stop_one(struct otx2_nic *pfvf, u8 prio)
+ {
+-	struct nix_txsch_free_req *free_req;
++	int lvl;
+ 
+-	mutex_lock(&pfvf->mbox.lock);
+ 	/* free PFC TLx nodes */
+-	free_req = otx2_mbox_alloc_msg_nix_txsch_free(&pfvf->mbox);
+-	if (!free_req) {
+-		mutex_unlock(&pfvf->mbox.lock);
+-		return -ENOMEM;
+-	}
+-
+-	free_req->flags = TXSCHQ_FREE_ALL;
+-	otx2_sync_mbox_msg(&pfvf->mbox);
+-	mutex_unlock(&pfvf->mbox.lock);
++	for (lvl = 0; lvl <= pfvf->hw.txschq_link_cfg_lvl; lvl++)
++		otx2_txschq_free_one(pfvf, lvl,
++				     pfvf->pfc_schq_list[lvl][prio]);
+ 
+ 	pfvf->pfc_alloc_status[prio] = false;
+ 	return 0;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
+index b5d689eeff80b..9e3bfbe5c4809 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
+@@ -23,6 +23,8 @@
+ #define	OTX2_ETH_HLEN		(VLAN_ETH_HLEN + VLAN_HLEN)
+ #define	OTX2_MIN_MTU		60
+ 
++#define OTX2_PAGE_POOL_SZ	2048
++
+ #define OTX2_MAX_GSO_SEGS	255
+ #define OTX2_MAX_FRAGS_IN_SQE	9
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+index 4804990b7f226..99dcbd006357a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+@@ -384,16 +384,11 @@ static int mlx5_pci_link_toggle(struct mlx5_core_dev *dev)
+ 		pci_cfg_access_lock(sdev);
+ 	}
+ 	/* PCI link toggle */
+-	err = pci_read_config_word(bridge, cap + PCI_EXP_LNKCTL, &reg16);
+-	if (err)
+-		return err;
+-	reg16 |= PCI_EXP_LNKCTL_LD;
+-	err = pci_write_config_word(bridge, cap + PCI_EXP_LNKCTL, reg16);
++	err = pcie_capability_set_word(bridge, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_LD);
+ 	if (err)
+ 		return err;
+ 	msleep(500);
+-	reg16 &= ~PCI_EXP_LNKCTL_LD;
+-	err = pci_write_config_word(bridge, cap + PCI_EXP_LNKCTL, reg16);
++	err = pcie_capability_clear_word(bridge, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_LD);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+index 377372f0578ae..aa29f09e83564 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+@@ -32,16 +32,13 @@
+ 
+ #include <linux/clocksource.h>
+ #include <linux/highmem.h>
++#include <linux/log2.h>
+ #include <linux/ptp_clock_kernel.h>
+ #include <rdma/mlx5-abi.h>
+ #include "lib/eq.h"
+ #include "en.h"
+ #include "clock.h"
+ 
+-enum {
+-	MLX5_CYCLES_SHIFT	= 31
+-};
+-
+ enum {
+ 	MLX5_PIN_MODE_IN		= 0x0,
+ 	MLX5_PIN_MODE_OUT		= 0x1,
+@@ -93,6 +90,31 @@ static bool mlx5_modify_mtutc_allowed(struct mlx5_core_dev *mdev)
+ 	return MLX5_CAP_MCAM_FEATURE(mdev, ptpcyc2realtime_modify);
+ }
+ 
++static u32 mlx5_ptp_shift_constant(u32 dev_freq_khz)
++{
++	/* Optimal shift constant leads to corrections above just 1 scaled ppm.
++	 *
++	 * Two sets of equations are needed to derive the optimal shift
++	 * constant for the cyclecounter.
++	 *
++	 *    dev_freq_khz * 1000 / 2^shift_constant = 1 scaled_ppm
++	 *    ppb = scaled_ppm * 1000 / 2^16
++	 *
++	 * Using the two equations together
++	 *
++	 *    dev_freq_khz * 1000 / 1 scaled_ppm = 2^shift_constant
++	 *    dev_freq_khz * 2^16 / 1 ppb = 2^shift_constant
++	 *    dev_freq_khz = 2^(shift_constant - 16)
++	 *
++	 * then yields
++	 *
++	 *    shift_constant = ilog2(dev_freq_khz) + 16
++	 */
++
++	return min(ilog2(dev_freq_khz) + 16,
++		   ilog2((U32_MAX / NSEC_PER_MSEC) * dev_freq_khz));
++}
++
+ static s32 mlx5_ptp_getmaxphase(struct ptp_clock_info *ptp)
+ {
+ 	struct mlx5_clock *clock = container_of(ptp, struct mlx5_clock, ptp_info);
+@@ -909,7 +931,7 @@ static void mlx5_timecounter_init(struct mlx5_core_dev *mdev)
+ 
+ 	dev_freq = MLX5_CAP_GEN(mdev, device_frequency_khz);
+ 	timer->cycles.read = read_internal_timer;
+-	timer->cycles.shift = MLX5_CYCLES_SHIFT;
++	timer->cycles.shift = mlx5_ptp_shift_constant(dev_freq);
+ 	timer->cycles.mult = clocksource_khz2mult(dev_freq,
+ 						  timer->cycles.shift);
+ 	timer->nominal_c_mult = timer->cycles.mult;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c b/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c
+index 70735068cf292..0fd290d776ffe 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c
+@@ -405,7 +405,8 @@ mlxsw_hwmon_module_temp_label_show(struct device *dev,
+ 			container_of(attr, struct mlxsw_hwmon_attr, dev_attr);
+ 
+ 	return sprintf(buf, "front panel %03u\n",
+-		       mlxsw_hwmon_attr->type_index);
++		       mlxsw_hwmon_attr->type_index + 1 -
++		       mlxsw_hwmon_attr->mlxsw_hwmon_dev->sensor_count);
+ }
+ 
+ static ssize_t
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/i2c.c b/drivers/net/ethernet/mellanox/mlxsw/i2c.c
+index 41298835a11e1..d23f293e285cb 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/i2c.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/i2c.c
+@@ -48,6 +48,7 @@
+ #define MLXSW_I2C_MBOX_SIZE_BITS	12
+ #define MLXSW_I2C_ADDR_BUF_SIZE		4
+ #define MLXSW_I2C_BLK_DEF		32
++#define MLXSW_I2C_BLK_MAX		100
+ #define MLXSW_I2C_RETRY			5
+ #define MLXSW_I2C_TIMEOUT_MSECS		5000
+ #define MLXSW_I2C_MAX_DATA_SIZE		256
+@@ -444,7 +445,7 @@ mlxsw_i2c_cmd(struct device *dev, u16 opcode, u32 in_mod, size_t in_mbox_size,
+ 	} else {
+ 		/* No input mailbox is case of initialization query command. */
+ 		reg_size = MLXSW_I2C_MAX_DATA_SIZE;
+-		num = reg_size / mlxsw_i2c->block_size;
++		num = DIV_ROUND_UP(reg_size, mlxsw_i2c->block_size);
+ 
+ 		if (mutex_lock_interruptible(&mlxsw_i2c->cmd.lock) < 0) {
+ 			dev_err(&client->dev, "Could not acquire lock");
+@@ -653,7 +654,7 @@ static int mlxsw_i2c_probe(struct i2c_client *client)
+ 			return -EOPNOTSUPP;
+ 		}
+ 
+-		mlxsw_i2c->block_size = max_t(u16, MLXSW_I2C_BLK_DEF,
++		mlxsw_i2c->block_size = min_t(u16, MLXSW_I2C_BLK_MAX,
+ 					      min_t(u16, quirks->max_read_len,
+ 						    quirks->max_write_len));
+ 	} else {
+diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c b/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c
+index 266a21a2d1246..1da2b1f82ae93 100644
+--- a/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c
++++ b/drivers/net/ethernet/microchip/lan966x/lan966x_ptp.c
+@@ -59,7 +59,7 @@ static int lan966x_ptp_add_trap(struct lan966x_port *port,
+ 	int err;
+ 
+ 	vrule = vcap_get_rule(lan966x->vcap_ctrl, rule_id);
+-	if (vrule) {
++	if (!IS_ERR(vrule)) {
+ 		u32 value, mask;
+ 
+ 		/* Just modify the ingress port mask and exit */
+@@ -106,7 +106,7 @@ static int lan966x_ptp_del_trap(struct lan966x_port *port,
+ 	int err;
+ 
+ 	vrule = vcap_get_rule(lan966x->vcap_ctrl, rule_id);
+-	if (!vrule)
++	if (IS_ERR(vrule))
+ 		return -EEXIST;
+ 
+ 	vcap_rule_get_key_u32(vrule, VCAP_KF_IF_IGR_PORT_MASK, &value, &mask);
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 5eb50b265c0bd..6351a2dc13bce 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -5239,13 +5239,9 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	/* Disable ASPM L1 as that cause random device stop working
+ 	 * problems as well as full system hangs for some PCIe devices users.
+-	 * Chips from RTL8168h partially have issues with L1.2, but seem
+-	 * to work fine with L1 and L1.1.
+ 	 */
+ 	if (rtl_aspm_is_safe(tp))
+ 		rc = 0;
+-	else if (tp->mac_version >= RTL_GIGA_MAC_VER_46)
+-		rc = pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2);
+ 	else
+ 		rc = pci_disable_link_state(pdev, PCIE_LINK_STATE_L1);
+ 	tp->aspm_manageable = !rc;
+diff --git a/drivers/net/ethernet/sfc/ptp.c b/drivers/net/ethernet/sfc/ptp.c
+index 0c40571133cb9..00cf6de3bb2be 100644
+--- a/drivers/net/ethernet/sfc/ptp.c
++++ b/drivers/net/ethernet/sfc/ptp.c
+@@ -1485,7 +1485,9 @@ static int efx_ptp_insert_multicast_filters(struct efx_nic *efx)
+ 			goto fail;
+ 
+ 		rc = efx_ptp_insert_eth_multicast_filter(efx);
+-		if (rc < 0)
++
++		/* Not all firmware variants support this filter */
++		if (rc < 0 && rc != -EPROTONOSUPPORT)
+ 			goto fail;
+ 	}
+ 
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 144ec756c796a..2d64650f4eb3c 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -1341,8 +1341,7 @@ static struct crypto_aead *macsec_alloc_tfm(char *key, int key_len, int icv_len)
+ 	struct crypto_aead *tfm;
+ 	int ret;
+ 
+-	/* Pick a sync gcm(aes) cipher to ensure order is preserved. */
+-	tfm = crypto_alloc_aead("gcm(aes)", 0, CRYPTO_ALG_ASYNC);
++	tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
+ 
+ 	if (IS_ERR(tfm))
+ 		return tfm;
+diff --git a/drivers/net/pcs/pcs-lynx.c b/drivers/net/pcs/pcs-lynx.c
+index 9021b96d4f9df..dc3962b2aa6b0 100644
+--- a/drivers/net/pcs/pcs-lynx.c
++++ b/drivers/net/pcs/pcs-lynx.c
+@@ -216,7 +216,7 @@ static void lynx_pcs_link_up_sgmii(struct mdio_device *pcs,
+ 	/* The PCS needs to be configured manually only
+ 	 * when not operating on in-band mode
+ 	 */
+-	if (neg_mode != PHYLINK_PCS_NEG_INBAND_ENABLED)
++	if (neg_mode == PHYLINK_PCS_NEG_INBAND_ENABLED)
+ 		return;
+ 
+ 	if (duplex == DUPLEX_HALF)
+diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
+index a7f44f6335fb8..9275a672f90cb 100644
+--- a/drivers/net/wireless/ath/ath10k/pci.c
++++ b/drivers/net/wireless/ath/ath10k/pci.c
+@@ -1963,8 +1963,9 @@ static int ath10k_pci_hif_start(struct ath10k *ar)
+ 	ath10k_pci_irq_enable(ar);
+ 	ath10k_pci_rx_post(ar);
+ 
+-	pcie_capability_write_word(ar_pci->pdev, PCI_EXP_LNKCTL,
+-				   ar_pci->link_ctl);
++	pcie_capability_clear_and_set_word(ar_pci->pdev, PCI_EXP_LNKCTL,
++					   PCI_EXP_LNKCTL_ASPMC,
++					   ar_pci->link_ctl & PCI_EXP_LNKCTL_ASPMC);
+ 
+ 	return 0;
+ }
+@@ -2821,8 +2822,8 @@ static int ath10k_pci_hif_power_up(struct ath10k *ar,
+ 
+ 	pcie_capability_read_word(ar_pci->pdev, PCI_EXP_LNKCTL,
+ 				  &ar_pci->link_ctl);
+-	pcie_capability_write_word(ar_pci->pdev, PCI_EXP_LNKCTL,
+-				   ar_pci->link_ctl & ~PCI_EXP_LNKCTL_ASPMC);
++	pcie_capability_clear_word(ar_pci->pdev, PCI_EXP_LNKCTL,
++				   PCI_EXP_LNKCTL_ASPMC);
+ 
+ 	/*
+ 	 * Bring the target up cleanly.
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 5c76664ba0dd9..1e488eed282b5 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -2408,7 +2408,7 @@ static void ath11k_dp_rx_h_ppdu(struct ath11k *ar, struct hal_rx_desc *rx_desc,
+ 		rx_status->freq = center_freq;
+ 	} else if (channel_num >= 1 && channel_num <= 14) {
+ 		rx_status->band = NL80211_BAND_2GHZ;
+-	} else if (channel_num >= 36 && channel_num <= 173) {
++	} else if (channel_num >= 36 && channel_num <= 177) {
+ 		rx_status->band = NL80211_BAND_5GHZ;
+ 	} else {
+ 		spin_lock_bh(&ar->data_lock);
+diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
+index 79e2cbe826384..ec40adc1cb235 100644
+--- a/drivers/net/wireless/ath/ath11k/pci.c
++++ b/drivers/net/wireless/ath/ath11k/pci.c
+@@ -581,8 +581,8 @@ static void ath11k_pci_aspm_disable(struct ath11k_pci *ab_pci)
+ 		   u16_get_bits(ab_pci->link_ctl, PCI_EXP_LNKCTL_ASPM_L1));
+ 
+ 	/* disable L0s and L1 */
+-	pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL,
+-				   ab_pci->link_ctl & ~PCI_EXP_LNKCTL_ASPMC);
++	pcie_capability_clear_word(ab_pci->pdev, PCI_EXP_LNKCTL,
++				   PCI_EXP_LNKCTL_ASPMC);
+ 
+ 	set_bit(ATH11K_PCI_ASPM_RESTORE, &ab_pci->flags);
+ }
+@@ -590,8 +590,10 @@ static void ath11k_pci_aspm_disable(struct ath11k_pci *ab_pci)
+ static void ath11k_pci_aspm_restore(struct ath11k_pci *ab_pci)
+ {
+ 	if (test_and_clear_bit(ATH11K_PCI_ASPM_RESTORE, &ab_pci->flags))
+-		pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL,
+-					   ab_pci->link_ctl);
++		pcie_capability_clear_and_set_word(ab_pci->pdev, PCI_EXP_LNKCTL,
++						   PCI_EXP_LNKCTL_ASPMC,
++						   ab_pci->link_ctl &
++						   PCI_EXP_LNKCTL_ASPMC);
+ }
+ 
+ static int ath11k_pci_power_up(struct ath11k_base *ab)
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 1bb9802ef5696..45d88e35fc2eb 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -1637,9 +1637,9 @@ static void ath12k_peer_assoc_h_he(struct ath12k *ar,
+ 	arg->peer_nss = min(sta->deflink.rx_nss, max_nss);
+ 
+ 	memcpy(&arg->peer_he_cap_macinfo, he_cap->he_cap_elem.mac_cap_info,
+-	       sizeof(arg->peer_he_cap_macinfo));
++	       sizeof(he_cap->he_cap_elem.mac_cap_info));
+ 	memcpy(&arg->peer_he_cap_phyinfo, he_cap->he_cap_elem.phy_cap_info,
+-	       sizeof(arg->peer_he_cap_phyinfo));
++	       sizeof(he_cap->he_cap_elem.phy_cap_info));
+ 	arg->peer_he_ops = vif->bss_conf.he_oper.params;
+ 
+ 	/* the top most byte is used to indicate BSS color info */
+diff --git a/drivers/net/wireless/ath/ath12k/pci.c b/drivers/net/wireless/ath/ath12k/pci.c
+index 5990a55801f0a..e4f08a066ca10 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.c
++++ b/drivers/net/wireless/ath/ath12k/pci.c
+@@ -794,8 +794,8 @@ static void ath12k_pci_aspm_disable(struct ath12k_pci *ab_pci)
+ 		   u16_get_bits(ab_pci->link_ctl, PCI_EXP_LNKCTL_ASPM_L1));
+ 
+ 	/* disable L0s and L1 */
+-	pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL,
+-				   ab_pci->link_ctl & ~PCI_EXP_LNKCTL_ASPMC);
++	pcie_capability_clear_word(ab_pci->pdev, PCI_EXP_LNKCTL,
++				   PCI_EXP_LNKCTL_ASPMC);
+ 
+ 	set_bit(ATH12K_PCI_ASPM_RESTORE, &ab_pci->flags);
+ }
+@@ -803,8 +803,10 @@ static void ath12k_pci_aspm_disable(struct ath12k_pci *ab_pci)
+ static void ath12k_pci_aspm_restore(struct ath12k_pci *ab_pci)
+ {
+ 	if (test_and_clear_bit(ATH12K_PCI_ASPM_RESTORE, &ab_pci->flags))
+-		pcie_capability_write_word(ab_pci->pdev, PCI_EXP_LNKCTL,
+-					   ab_pci->link_ctl);
++		pcie_capability_clear_and_set_word(ab_pci->pdev, PCI_EXP_LNKCTL,
++						   PCI_EXP_LNKCTL_ASPMC,
++						   ab_pci->link_ctl &
++						   PCI_EXP_LNKCTL_ASPMC);
+ }
+ 
+ static void ath12k_pci_kill_tasklets(struct ath12k_base *ab)
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_debug.c b/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
+index b3ed65e5c4da8..c55aab01fff5d 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
+@@ -491,7 +491,7 @@ int ath9k_htc_init_debug(struct ath_hw *ah)
+ 
+ 	priv->debug.debugfs_phy = debugfs_create_dir(KBUILD_MODNAME,
+ 					     priv->hw->wiphy->debugfsdir);
+-	if (!priv->debug.debugfs_phy)
++	if (IS_ERR(priv->debug.debugfs_phy))
+ 		return -ENOMEM;
+ 
+ 	ath9k_cmn_spectral_init_debug(&priv->spec_priv, priv->debug.debugfs_phy);
+diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
+index d652c647d56b5..1476b42b52a91 100644
+--- a/drivers/net/wireless/ath/ath9k/wmi.c
++++ b/drivers/net/wireless/ath/ath9k/wmi.c
+@@ -242,10 +242,10 @@ static void ath9k_wmi_ctrl_rx(void *priv, struct sk_buff *skb,
+ 		spin_unlock_irqrestore(&wmi->wmi_lock, flags);
+ 		goto free_skb;
+ 	}
+-	spin_unlock_irqrestore(&wmi->wmi_lock, flags);
+ 
+ 	/* WMI command response */
+ 	ath9k_wmi_rsp_callback(wmi, skb);
++	spin_unlock_irqrestore(&wmi->wmi_lock, flags);
+ 
+ free_skb:
+ 	kfree_skb(skb);
+@@ -283,7 +283,8 @@ int ath9k_wmi_connect(struct htc_target *htc, struct wmi *wmi,
+ 
+ static int ath9k_wmi_cmd_issue(struct wmi *wmi,
+ 			       struct sk_buff *skb,
+-			       enum wmi_cmd_id cmd, u16 len)
++			       enum wmi_cmd_id cmd, u16 len,
++			       u8 *rsp_buf, u32 rsp_len)
+ {
+ 	struct wmi_cmd_hdr *hdr;
+ 	unsigned long flags;
+@@ -293,6 +294,11 @@ static int ath9k_wmi_cmd_issue(struct wmi *wmi,
+ 	hdr->seq_no = cpu_to_be16(++wmi->tx_seq_id);
+ 
+ 	spin_lock_irqsave(&wmi->wmi_lock, flags);
++
++	/* record the rsp buffer and length */
++	wmi->cmd_rsp_buf = rsp_buf;
++	wmi->cmd_rsp_len = rsp_len;
++
+ 	wmi->last_seq_id = wmi->tx_seq_id;
+ 	spin_unlock_irqrestore(&wmi->wmi_lock, flags);
+ 
+@@ -308,8 +314,8 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
+ 	struct ath_common *common = ath9k_hw_common(ah);
+ 	u16 headroom = sizeof(struct htc_frame_hdr) +
+ 		       sizeof(struct wmi_cmd_hdr);
++	unsigned long time_left, flags;
+ 	struct sk_buff *skb;
+-	unsigned long time_left;
+ 	int ret = 0;
+ 
+ 	if (ah->ah_flags & AH_UNPLUGGED)
+@@ -333,11 +339,7 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
+ 		goto out;
+ 	}
+ 
+-	/* record the rsp buffer and length */
+-	wmi->cmd_rsp_buf = rsp_buf;
+-	wmi->cmd_rsp_len = rsp_len;
+-
+-	ret = ath9k_wmi_cmd_issue(wmi, skb, cmd_id, cmd_len);
++	ret = ath9k_wmi_cmd_issue(wmi, skb, cmd_id, cmd_len, rsp_buf, rsp_len);
+ 	if (ret)
+ 		goto out;
+ 
+@@ -345,7 +347,9 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
+ 	if (!time_left) {
+ 		ath_dbg(common, WMI, "Timeout waiting for WMI command: %s\n",
+ 			wmi_cmd_to_name(cmd_id));
++		spin_lock_irqsave(&wmi->wmi_lock, flags);
+ 		wmi->last_seq_id = 0;
++		spin_unlock_irqrestore(&wmi->wmi_lock, flags);
+ 		mutex_unlock(&wmi->op_mutex);
+ 		return -ETIMEDOUT;
+ 	}
+diff --git a/drivers/net/wireless/marvell/mwifiex/debugfs.c b/drivers/net/wireless/marvell/mwifiex/debugfs.c
+index 52b18f4a774b7..0cdd6c50c1c08 100644
+--- a/drivers/net/wireless/marvell/mwifiex/debugfs.c
++++ b/drivers/net/wireless/marvell/mwifiex/debugfs.c
+@@ -253,8 +253,11 @@ mwifiex_histogram_read(struct file *file, char __user *ubuf,
+ 	if (!p)
+ 		return -ENOMEM;
+ 
+-	if (!priv || !priv->hist_data)
+-		return -EFAULT;
++	if (!priv || !priv->hist_data) {
++		ret = -EFAULT;
++		goto free_and_exit;
++	}
++
+ 	phist_data = priv->hist_data;
+ 
+ 	p += sprintf(p, "\n"
+@@ -309,6 +312,8 @@ mwifiex_histogram_read(struct file *file, char __user *ubuf,
+ 	ret = simple_read_from_buffer(ubuf, count, ppos, (char *)page,
+ 				      (unsigned long)p - page);
+ 
++free_and_exit:
++	free_page(page);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
+index 9a698a16a8f38..6697132ecc977 100644
+--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
++++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
+@@ -189,6 +189,8 @@ static int mwifiex_pcie_probe_of(struct device *dev)
+ }
+ 
+ static void mwifiex_pcie_work(struct work_struct *work);
++static int mwifiex_pcie_delete_rxbd_ring(struct mwifiex_adapter *adapter);
++static int mwifiex_pcie_delete_evtbd_ring(struct mwifiex_adapter *adapter);
+ 
+ static int
+ mwifiex_map_pci_memory(struct mwifiex_adapter *adapter, struct sk_buff *skb,
+@@ -792,14 +794,15 @@ static int mwifiex_init_rxq_ring(struct mwifiex_adapter *adapter)
+ 		if (!skb) {
+ 			mwifiex_dbg(adapter, ERROR,
+ 				    "Unable to allocate skb for RX ring.\n");
+-			kfree(card->rxbd_ring_vbase);
+ 			return -ENOMEM;
+ 		}
+ 
+ 		if (mwifiex_map_pci_memory(adapter, skb,
+ 					   MWIFIEX_RX_DATA_BUF_SIZE,
+-					   DMA_FROM_DEVICE))
+-			return -1;
++					   DMA_FROM_DEVICE)) {
++			kfree_skb(skb);
++			return -ENOMEM;
++		}
+ 
+ 		buf_pa = MWIFIEX_SKB_DMA_ADDR(skb);
+ 
+@@ -849,7 +852,6 @@ static int mwifiex_pcie_init_evt_ring(struct mwifiex_adapter *adapter)
+ 		if (!skb) {
+ 			mwifiex_dbg(adapter, ERROR,
+ 				    "Unable to allocate skb for EVENT buf.\n");
+-			kfree(card->evtbd_ring_vbase);
+ 			return -ENOMEM;
+ 		}
+ 		skb_put(skb, MAX_EVENT_SIZE);
+@@ -857,8 +859,7 @@ static int mwifiex_pcie_init_evt_ring(struct mwifiex_adapter *adapter)
+ 		if (mwifiex_map_pci_memory(adapter, skb, MAX_EVENT_SIZE,
+ 					   DMA_FROM_DEVICE)) {
+ 			kfree_skb(skb);
+-			kfree(card->evtbd_ring_vbase);
+-			return -1;
++			return -ENOMEM;
+ 		}
+ 
+ 		buf_pa = MWIFIEX_SKB_DMA_ADDR(skb);
+@@ -1058,6 +1059,7 @@ static int mwifiex_pcie_delete_txbd_ring(struct mwifiex_adapter *adapter)
+  */
+ static int mwifiex_pcie_create_rxbd_ring(struct mwifiex_adapter *adapter)
+ {
++	int ret;
+ 	struct pcie_service_card *card = adapter->card;
+ 	const struct mwifiex_pcie_card_reg *reg = card->pcie.reg;
+ 
+@@ -1096,7 +1098,10 @@ static int mwifiex_pcie_create_rxbd_ring(struct mwifiex_adapter *adapter)
+ 		    (u32)((u64)card->rxbd_ring_pbase >> 32),
+ 		    card->rxbd_ring_size);
+ 
+-	return mwifiex_init_rxq_ring(adapter);
++	ret = mwifiex_init_rxq_ring(adapter);
++	if (ret)
++		mwifiex_pcie_delete_rxbd_ring(adapter);
++	return ret;
+ }
+ 
+ /*
+@@ -1127,6 +1132,7 @@ static int mwifiex_pcie_delete_rxbd_ring(struct mwifiex_adapter *adapter)
+  */
+ static int mwifiex_pcie_create_evtbd_ring(struct mwifiex_adapter *adapter)
+ {
++	int ret;
+ 	struct pcie_service_card *card = adapter->card;
+ 	const struct mwifiex_pcie_card_reg *reg = card->pcie.reg;
+ 
+@@ -1161,7 +1167,10 @@ static int mwifiex_pcie_create_evtbd_ring(struct mwifiex_adapter *adapter)
+ 		    (u32)((u64)card->evtbd_ring_pbase >> 32),
+ 		    card->evtbd_ring_size);
+ 
+-	return mwifiex_pcie_init_evt_ring(adapter);
++	ret = mwifiex_pcie_init_evt_ring(adapter);
++	if (ret)
++		mwifiex_pcie_delete_evtbd_ring(adapter);
++	return ret;
+ }
+ 
+ /*
+diff --git a/drivers/net/wireless/marvell/mwifiex/sta_rx.c b/drivers/net/wireless/marvell/mwifiex/sta_rx.c
+index 13659b02ba882..65420ad674167 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sta_rx.c
++++ b/drivers/net/wireless/marvell/mwifiex/sta_rx.c
+@@ -86,6 +86,15 @@ int mwifiex_process_rx_packet(struct mwifiex_private *priv,
+ 	rx_pkt_len = le16_to_cpu(local_rx_pd->rx_pkt_length);
+ 	rx_pkt_hdr = (void *)local_rx_pd + rx_pkt_off;
+ 
++	if (sizeof(*rx_pkt_hdr) + rx_pkt_off > skb->len) {
++		mwifiex_dbg(priv->adapter, ERROR,
++			    "wrong rx packet offset: len=%d, rx_pkt_off=%d\n",
++			    skb->len, rx_pkt_off);
++		priv->stats.rx_dropped++;
++		dev_kfree_skb_any(skb);
++		return -1;
++	}
++
+ 	if ((!memcmp(&rx_pkt_hdr->rfc1042_hdr, bridge_tunnel_header,
+ 		     sizeof(bridge_tunnel_header))) ||
+ 	    (!memcmp(&rx_pkt_hdr->rfc1042_hdr, rfc1042_header,
+@@ -194,7 +203,8 @@ int mwifiex_process_sta_rx_packet(struct mwifiex_private *priv,
+ 
+ 	rx_pkt_hdr = (void *)local_rx_pd + rx_pkt_offset;
+ 
+-	if ((rx_pkt_offset + rx_pkt_length) > (u16) skb->len) {
++	if ((rx_pkt_offset + rx_pkt_length) > skb->len ||
++	    sizeof(rx_pkt_hdr->eth803_hdr) + rx_pkt_offset > skb->len) {
+ 		mwifiex_dbg(adapter, ERROR,
+ 			    "wrong rx packet: len=%d, rx_pkt_offset=%d, rx_pkt_length=%d\n",
+ 			    skb->len, rx_pkt_offset, rx_pkt_length);
+diff --git a/drivers/net/wireless/marvell/mwifiex/uap_txrx.c b/drivers/net/wireless/marvell/mwifiex/uap_txrx.c
+index e495f7eaea033..b8b9a0fcb19cd 100644
+--- a/drivers/net/wireless/marvell/mwifiex/uap_txrx.c
++++ b/drivers/net/wireless/marvell/mwifiex/uap_txrx.c
+@@ -103,6 +103,16 @@ static void mwifiex_uap_queue_bridged_pkt(struct mwifiex_private *priv,
+ 		return;
+ 	}
+ 
++	if (sizeof(*rx_pkt_hdr) +
++	    le16_to_cpu(uap_rx_pd->rx_pkt_offset) > skb->len) {
++		mwifiex_dbg(adapter, ERROR,
++			    "wrong rx packet offset: len=%d,rx_pkt_offset=%d\n",
++			    skb->len, le16_to_cpu(uap_rx_pd->rx_pkt_offset));
++		priv->stats.rx_dropped++;
++		dev_kfree_skb_any(skb);
++		return;
++	}
++
+ 	if ((!memcmp(&rx_pkt_hdr->rfc1042_hdr, bridge_tunnel_header,
+ 		     sizeof(bridge_tunnel_header))) ||
+ 	    (!memcmp(&rx_pkt_hdr->rfc1042_hdr, rfc1042_header,
+@@ -243,7 +253,15 @@ int mwifiex_handle_uap_rx_forward(struct mwifiex_private *priv,
+ 
+ 	if (is_multicast_ether_addr(ra)) {
+ 		skb_uap = skb_copy(skb, GFP_ATOMIC);
+-		mwifiex_uap_queue_bridged_pkt(priv, skb_uap);
++		if (likely(skb_uap)) {
++			mwifiex_uap_queue_bridged_pkt(priv, skb_uap);
++		} else {
++			mwifiex_dbg(adapter, ERROR,
++				    "failed to copy skb for uAP\n");
++			priv->stats.rx_dropped++;
++			dev_kfree_skb_any(skb);
++			return -1;
++		}
+ 	} else {
+ 		if (mwifiex_get_sta_entry(priv, ra)) {
+ 			/* Requeue Intra-BSS packet */
+@@ -367,6 +385,16 @@ int mwifiex_process_uap_rx_packet(struct mwifiex_private *priv,
+ 	rx_pkt_type = le16_to_cpu(uap_rx_pd->rx_pkt_type);
+ 	rx_pkt_hdr = (void *)uap_rx_pd + le16_to_cpu(uap_rx_pd->rx_pkt_offset);
+ 
++	if (le16_to_cpu(uap_rx_pd->rx_pkt_offset) +
++	    sizeof(rx_pkt_hdr->eth803_hdr) > skb->len) {
++		mwifiex_dbg(adapter, ERROR,
++			    "wrong rx packet for struct ethhdr: len=%d, offset=%d\n",
++			    skb->len, le16_to_cpu(uap_rx_pd->rx_pkt_offset));
++		priv->stats.rx_dropped++;
++		dev_kfree_skb_any(skb);
++		return 0;
++	}
++
+ 	ether_addr_copy(ta, rx_pkt_hdr->eth803_hdr.h_source);
+ 
+ 	if ((le16_to_cpu(uap_rx_pd->rx_pkt_offset) +
+diff --git a/drivers/net/wireless/marvell/mwifiex/util.c b/drivers/net/wireless/marvell/mwifiex/util.c
+index 94c2d219835da..745b1d925b217 100644
+--- a/drivers/net/wireless/marvell/mwifiex/util.c
++++ b/drivers/net/wireless/marvell/mwifiex/util.c
+@@ -393,11 +393,15 @@ mwifiex_process_mgmt_packet(struct mwifiex_private *priv,
+ 	}
+ 
+ 	rx_pd = (struct rxpd *)skb->data;
++	pkt_len = le16_to_cpu(rx_pd->rx_pkt_length);
++	if (pkt_len < sizeof(struct ieee80211_hdr) + sizeof(pkt_len)) {
++		mwifiex_dbg(priv->adapter, ERROR, "invalid rx_pkt_length");
++		return -1;
++	}
+ 
+ 	skb_pull(skb, le16_to_cpu(rx_pd->rx_pkt_offset));
+ 	skb_pull(skb, sizeof(pkt_len));
+-
+-	pkt_len = le16_to_cpu(rx_pd->rx_pkt_length);
++	pkt_len -= sizeof(pkt_len);
+ 
+ 	ieee_hdr = (void *)skb->data;
+ 	if (ieee80211_is_mgmt(ieee_hdr->frame_control)) {
+@@ -410,7 +414,7 @@ mwifiex_process_mgmt_packet(struct mwifiex_private *priv,
+ 		skb->data + sizeof(struct ieee80211_hdr),
+ 		pkt_len - sizeof(struct ieee80211_hdr));
+ 
+-	pkt_len -= ETH_ALEN + sizeof(pkt_len);
++	pkt_len -= ETH_ALEN;
+ 	rx_pd->rx_pkt_length = cpu_to_le16(pkt_len);
+ 
+ 	cfg80211_rx_mgmt(&priv->wdev, priv->roc_cfg.chan.center_freq,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 6b07b8fafec2f..0e9f4197213a3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -277,7 +277,7 @@ struct mt76_sta_stats {
+ 	u64 tx_mcs[16];		/* mcs idx */
+ 	u64 tx_bytes;
+ 	/* WED TX */
+-	u32 tx_packets;
++	u32 tx_packets;		/* unit: MSDU */
+ 	u32 tx_retries;
+ 	u32 tx_failed;
+ 	/* WED RX */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+index be4d63db5f64a..e415ac5e321f1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+@@ -522,9 +522,9 @@ void mt76_connac2_mac_write_txwi(struct mt76_dev *dev, __le32 *txwi,
+ 		q_idx = wmm_idx * MT76_CONNAC_MAX_WMM_SETS +
+ 			mt76_connac_lmac_mapping(skb_get_queue_mapping(skb));
+ 
+-		/* counting non-offloading skbs */
+-		wcid->stats.tx_bytes += skb->len;
+-		wcid->stats.tx_packets++;
++		/* mt7915 WA only counts WED path */
++		if (is_mt7915(dev) && mtk_wed_device_active(&dev->mmio.wed))
++			wcid->stats.tx_packets++;
+ 	}
+ 
+ 	val = FIELD_PREP(MT_TXD0_TX_BYTES, skb->len + sz_txd) |
+@@ -609,12 +609,11 @@ bool mt76_connac2_mac_fill_txs(struct mt76_dev *dev, struct mt76_wcid *wcid,
+ 	txs = le32_to_cpu(txs_data[0]);
+ 
+ 	/* PPDU based reporting */
+-	if (FIELD_GET(MT_TXS0_TXS_FORMAT, txs) > 1) {
++	if (mtk_wed_device_active(&dev->mmio.wed) &&
++	    FIELD_GET(MT_TXS0_TXS_FORMAT, txs) > 1) {
+ 		stats->tx_bytes +=
+ 			le32_get_bits(txs_data[5], MT_TXS5_MPDU_TX_BYTE) -
+ 			le32_get_bits(txs_data[7], MT_TXS7_MPDU_RETRY_BYTE);
+-		stats->tx_packets +=
+-			le32_get_bits(txs_data[5], MT_TXS5_MPDU_TX_CNT);
+ 		stats->tx_failed +=
+ 			le32_get_bits(txs_data[6], MT_TXS6_MPDU_FAIL_CNT);
+ 		stats->tx_retries +=
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+index ca1ce97a6d2fd..7a52b68491b6e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+@@ -998,6 +998,7 @@ enum {
+ 	MCU_EXT_EVENT_ASSERT_DUMP = 0x23,
+ 	MCU_EXT_EVENT_RDD_REPORT = 0x3a,
+ 	MCU_EXT_EVENT_CSA_NOTIFY = 0x4f,
++	MCU_EXT_EVENT_WA_TX_STAT = 0x74,
+ 	MCU_EXT_EVENT_BCC_NOTIFY = 0x75,
+ 	MCU_EXT_EVENT_MURU_CTRL = 0x9f,
+ };
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index ac2049f49bb38..9defd2b3c2f8d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -414,7 +414,6 @@ mt7915_init_wiphy(struct mt7915_phy *phy)
+ 			if (!dev->dbdc_support)
+ 				vht_cap->cap |=
+ 					IEEE80211_VHT_CAP_SHORT_GI_160 |
+-					IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ |
+ 					FIELD_PREP(IEEE80211_VHT_CAP_EXT_NSS_BW_MASK, 1);
+ 		} else {
+ 			vht_cap->cap |=
+@@ -499,6 +498,12 @@ mt7915_mac_init_band(struct mt7915_dev *dev, u8 band)
+ 	set = FIELD_PREP(MT_WTBLOFF_TOP_RSCR_RCPI_MODE, 0) |
+ 	      FIELD_PREP(MT_WTBLOFF_TOP_RSCR_RCPI_PARAM, 0x3);
+ 	mt76_rmw(dev, MT_WTBLOFF_TOP_RSCR(band), mask, set);
++
++	/* MT_TXD5_TX_STATUS_HOST (MPDU format) has higher priority than
++	 * MT_AGG_ACR_PPDU_TXS2H (PPDU format) even though ACR bit is set.
++	 */
++	if (mtk_wed_device_active(&dev->mt76.mmio.wed))
++		mt76_set(dev, MT_AGG_ACR4(band), MT_AGG_ACR_PPDU_TXS2H);
+ }
+ 
+ static void
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+index 1b361199c0616..42a983e40ade9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+@@ -269,6 +269,7 @@ static int mt7915_add_interface(struct ieee80211_hw *hw,
+ 	vif->offload_flags |= IEEE80211_OFFLOAD_ENCAP_4ADDR;
+ 
+ 	mt7915_init_bitrate_mask(vif);
++	memset(&mvif->cap, -1, sizeof(mvif->cap));
+ 
+ 	mt7915_mcu_add_bss_info(phy, vif, true);
+ 	mt7915_mcu_add_sta(dev, vif, NULL, true);
+@@ -470,7 +471,8 @@ static int mt7915_config(struct ieee80211_hw *hw, u32 changed)
+ 		ieee80211_wake_queues(hw);
+ 	}
+ 
+-	if (changed & IEEE80211_CONF_CHANGE_POWER) {
++	if (changed & (IEEE80211_CONF_CHANGE_POWER |
++		       IEEE80211_CONF_CHANGE_CHANNEL)) {
+ 		ret = mt7915_mcu_set_txpower_sku(phy);
+ 		if (ret)
+ 			return ret;
+@@ -599,6 +601,7 @@ static void mt7915_bss_info_changed(struct ieee80211_hw *hw,
+ {
+ 	struct mt7915_phy *phy = mt7915_hw_phy(hw);
+ 	struct mt7915_dev *dev = mt7915_hw_dev(hw);
++	int set_bss_info = -1, set_sta = -1;
+ 
+ 	mutex_lock(&dev->mt76.mutex);
+ 
+@@ -607,15 +610,18 @@ static void mt7915_bss_info_changed(struct ieee80211_hw *hw,
+ 	 * and then peer references bss_info_rfch to set bandwidth cap.
+ 	 */
+ 	if (changed & BSS_CHANGED_BSSID &&
+-	    vif->type == NL80211_IFTYPE_STATION) {
+-		bool join = !is_zero_ether_addr(info->bssid);
+-
+-		mt7915_mcu_add_bss_info(phy, vif, join);
+-		mt7915_mcu_add_sta(dev, vif, NULL, join);
+-	}
+-
++	    vif->type == NL80211_IFTYPE_STATION)
++		set_bss_info = set_sta = !is_zero_ether_addr(info->bssid);
+ 	if (changed & BSS_CHANGED_ASSOC)
+-		mt7915_mcu_add_bss_info(phy, vif, vif->cfg.assoc);
++		set_bss_info = vif->cfg.assoc;
++	if (changed & BSS_CHANGED_BEACON_ENABLED &&
++	    vif->type != NL80211_IFTYPE_AP)
++		set_bss_info = set_sta = info->enable_beacon;
++
++	if (set_bss_info == 1)
++		mt7915_mcu_add_bss_info(phy, vif, true);
++	if (set_sta == 1)
++		mt7915_mcu_add_sta(dev, vif, NULL, true);
+ 
+ 	if (changed & BSS_CHANGED_ERP_CTS_PROT)
+ 		mt7915_mac_enable_rtscts(dev, vif, info->use_cts_prot);
+@@ -629,11 +635,6 @@ static void mt7915_bss_info_changed(struct ieee80211_hw *hw,
+ 		}
+ 	}
+ 
+-	if (changed & BSS_CHANGED_BEACON_ENABLED && info->enable_beacon) {
+-		mt7915_mcu_add_bss_info(phy, vif, true);
+-		mt7915_mcu_add_sta(dev, vif, NULL, true);
+-	}
+-
+ 	/* ensure that enable txcmd_mode after bss_info */
+ 	if (changed & (BSS_CHANGED_QOS | BSS_CHANGED_BEACON_ENABLED))
+ 		mt7915_mcu_set_tx(dev, vif);
+@@ -650,6 +651,62 @@ static void mt7915_bss_info_changed(struct ieee80211_hw *hw,
+ 		       BSS_CHANGED_FILS_DISCOVERY))
+ 		mt7915_mcu_add_beacon(hw, vif, info->enable_beacon, changed);
+ 
++	if (set_bss_info == 0)
++		mt7915_mcu_add_bss_info(phy, vif, false);
++	if (set_sta == 0)
++		mt7915_mcu_add_sta(dev, vif, NULL, false);
++
++	mutex_unlock(&dev->mt76.mutex);
++}
++
++static void
++mt7915_vif_check_caps(struct mt7915_phy *phy, struct ieee80211_vif *vif)
++{
++	struct mt7915_vif *mvif = (struct mt7915_vif *)vif->drv_priv;
++	struct mt7915_vif_cap *vc = &mvif->cap;
++
++	vc->ht_ldpc = vif->bss_conf.ht_ldpc;
++	vc->vht_ldpc = vif->bss_conf.vht_ldpc;
++	vc->vht_su_ebfer = vif->bss_conf.vht_su_beamformer;
++	vc->vht_su_ebfee = vif->bss_conf.vht_su_beamformee;
++	vc->vht_mu_ebfer = vif->bss_conf.vht_mu_beamformer;
++	vc->vht_mu_ebfee = vif->bss_conf.vht_mu_beamformee;
++	vc->he_ldpc = vif->bss_conf.he_ldpc;
++	vc->he_su_ebfer = vif->bss_conf.he_su_beamformer;
++	vc->he_su_ebfee = vif->bss_conf.he_su_beamformee;
++	vc->he_mu_ebfer = vif->bss_conf.he_mu_beamformer;
++}
++
++static int
++mt7915_start_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
++		struct ieee80211_bss_conf *link_conf)
++{
++	struct mt7915_phy *phy = mt7915_hw_phy(hw);
++	struct mt7915_dev *dev = mt7915_hw_dev(hw);
++	int err;
++
++	mutex_lock(&dev->mt76.mutex);
++
++	mt7915_vif_check_caps(phy, vif);
++
++	err = mt7915_mcu_add_bss_info(phy, vif, true);
++	if (err)
++		goto out;
++	err = mt7915_mcu_add_sta(dev, vif, NULL, true);
++out:
++	mutex_unlock(&dev->mt76.mutex);
++
++	return err;
++}
++
++static void
++mt7915_stop_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
++	       struct ieee80211_bss_conf *link_conf)
++{
++	struct mt7915_dev *dev = mt7915_hw_dev(hw);
++
++	mutex_lock(&dev->mt76.mutex);
++	mt7915_mcu_add_sta(dev, vif, NULL, false);
+ 	mutex_unlock(&dev->mt76.mutex);
+ }
+ 
+@@ -1042,8 +1099,10 @@ static void mt7915_sta_statistics(struct ieee80211_hw *hw,
+ 		sinfo->tx_bytes = msta->wcid.stats.tx_bytes;
+ 		sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BYTES64);
+ 
+-		sinfo->tx_packets = msta->wcid.stats.tx_packets;
+-		sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_PACKETS);
++		if (!mt7915_mcu_wed_wa_tx_stats(phy->dev, msta->wcid.idx)) {
++			sinfo->tx_packets = msta->wcid.stats.tx_packets;
++			sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_PACKETS);
++		}
+ 
+ 		sinfo->tx_failed = msta->wcid.stats.tx_failed;
+ 		sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_FAILED);
+@@ -1526,6 +1585,8 @@ const struct ieee80211_ops mt7915_ops = {
+ 	.conf_tx = mt7915_conf_tx,
+ 	.configure_filter = mt7915_configure_filter,
+ 	.bss_info_changed = mt7915_bss_info_changed,
++	.start_ap = mt7915_start_ap,
++	.stop_ap = mt7915_stop_ap,
+ 	.sta_add = mt7915_sta_add,
+ 	.sta_remove = mt7915_sta_remove,
+ 	.sta_pre_rcu_remove = mt76_sta_pre_rcu_remove,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index 9fcb22fa1f97e..1a8611c6b684d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -164,7 +164,9 @@ mt7915_mcu_parse_response(struct mt76_dev *mdev, int cmd,
+ 	}
+ 
+ 	rxd = (struct mt76_connac2_mcu_rxd *)skb->data;
+-	if (seq != rxd->seq)
++	if (seq != rxd->seq &&
++	    !(rxd->eid == MCU_CMD_EXT_CID &&
++	      rxd->ext_eid == MCU_EXT_EVENT_WA_TX_STAT))
+ 		return -EAGAIN;
+ 
+ 	if (cmd == MCU_CMD(PATCH_SEM_CONTROL)) {
+@@ -274,7 +276,7 @@ mt7915_mcu_rx_radar_detected(struct mt7915_dev *dev, struct sk_buff *skb)
+ 
+ 	r = (struct mt7915_mcu_rdd_report *)skb->data;
+ 
+-	if (r->band_idx > MT_BAND1)
++	if (r->band_idx > MT_RX_SEL2)
+ 		return;
+ 
+ 	if ((r->band_idx && !dev->phy.mt76->band_idx) &&
+@@ -395,12 +397,14 @@ void mt7915_mcu_rx_event(struct mt7915_dev *dev, struct sk_buff *skb)
+ 	struct mt76_connac2_mcu_rxd *rxd;
+ 
+ 	rxd = (struct mt76_connac2_mcu_rxd *)skb->data;
+-	if (rxd->ext_eid == MCU_EXT_EVENT_THERMAL_PROTECT ||
+-	    rxd->ext_eid == MCU_EXT_EVENT_FW_LOG_2_HOST ||
+-	    rxd->ext_eid == MCU_EXT_EVENT_ASSERT_DUMP ||
+-	    rxd->ext_eid == MCU_EXT_EVENT_PS_SYNC ||
+-	    rxd->ext_eid == MCU_EXT_EVENT_BCC_NOTIFY ||
+-	    !rxd->seq)
++	if ((rxd->ext_eid == MCU_EXT_EVENT_THERMAL_PROTECT ||
++	     rxd->ext_eid == MCU_EXT_EVENT_FW_LOG_2_HOST ||
++	     rxd->ext_eid == MCU_EXT_EVENT_ASSERT_DUMP ||
++	     rxd->ext_eid == MCU_EXT_EVENT_PS_SYNC ||
++	     rxd->ext_eid == MCU_EXT_EVENT_BCC_NOTIFY ||
++	     !rxd->seq) &&
++	     !(rxd->eid == MCU_CMD_EXT_CID &&
++	       rxd->ext_eid == MCU_EXT_EVENT_WA_TX_STAT))
+ 		mt7915_mcu_rx_unsolicited_event(dev, skb);
+ 	else
+ 		mt76_mcu_rx_event(&dev->mt76, skb);
+@@ -706,6 +710,7 @@ static void
+ mt7915_mcu_sta_he_tlv(struct sk_buff *skb, struct ieee80211_sta *sta,
+ 		      struct ieee80211_vif *vif)
+ {
++	struct mt7915_vif *mvif = (struct mt7915_vif *)vif->drv_priv;
+ 	struct ieee80211_he_cap_elem *elem = &sta->deflink.he_cap.he_cap_elem;
+ 	struct ieee80211_he_mcs_nss_supp mcs_map;
+ 	struct sta_rec_he *he;
+@@ -739,7 +744,7 @@ mt7915_mcu_sta_he_tlv(struct sk_buff *skb, struct ieee80211_sta *sta,
+ 	     IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_RU_MAPPING_IN_5G))
+ 		cap |= STA_REC_HE_CAP_BW20_RU242_SUPPORT;
+ 
+-	if (vif->bss_conf.he_ldpc &&
++	if (mvif->cap.he_ldpc &&
+ 	    (elem->phy_cap_info[1] &
+ 	     IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD))
+ 		cap |= STA_REC_HE_CAP_LDPC;
+@@ -848,6 +853,7 @@ static void
+ mt7915_mcu_sta_muru_tlv(struct mt7915_dev *dev, struct sk_buff *skb,
+ 			struct ieee80211_sta *sta, struct ieee80211_vif *vif)
+ {
++	struct mt7915_vif *mvif = (struct mt7915_vif *)vif->drv_priv;
+ 	struct ieee80211_he_cap_elem *elem = &sta->deflink.he_cap.he_cap_elem;
+ 	struct sta_rec_muru *muru;
+ 	struct tlv *tlv;
+@@ -860,9 +866,9 @@ mt7915_mcu_sta_muru_tlv(struct mt7915_dev *dev, struct sk_buff *skb,
+ 
+ 	muru = (struct sta_rec_muru *)tlv;
+ 
+-	muru->cfg.mimo_dl_en = vif->bss_conf.he_mu_beamformer ||
+-			       vif->bss_conf.vht_mu_beamformer ||
+-			       vif->bss_conf.vht_mu_beamformee;
++	muru->cfg.mimo_dl_en = mvif->cap.he_mu_ebfer ||
++			       mvif->cap.vht_mu_ebfer ||
++			       mvif->cap.vht_mu_ebfee;
+ 	if (!is_mt7915(&dev->mt76))
+ 		muru->cfg.mimo_ul_en = true;
+ 	muru->cfg.ofdma_dl_en = true;
+@@ -995,8 +1001,8 @@ mt7915_mcu_sta_wtbl_tlv(struct mt7915_dev *dev, struct sk_buff *skb,
+ 	mt76_connac_mcu_wtbl_hdr_trans_tlv(skb, vif, wcid, tlv, wtbl_hdr);
+ 	if (sta)
+ 		mt76_connac_mcu_wtbl_ht_tlv(&dev->mt76, skb, sta, tlv,
+-					    wtbl_hdr, vif->bss_conf.ht_ldpc,
+-					    vif->bss_conf.vht_ldpc);
++					    wtbl_hdr, mvif->cap.ht_ldpc,
++					    mvif->cap.vht_ldpc);
+ 
+ 	return 0;
+ }
+@@ -1005,6 +1011,7 @@ static inline bool
+ mt7915_is_ebf_supported(struct mt7915_phy *phy, struct ieee80211_vif *vif,
+ 			struct ieee80211_sta *sta, bool bfee)
+ {
++	struct mt7915_vif *mvif = (struct mt7915_vif *)vif->drv_priv;
+ 	int tx_ant = hweight8(phy->mt76->chainmask) - 1;
+ 
+ 	if (vif->type != NL80211_IFTYPE_STATION &&
+@@ -1018,10 +1025,10 @@ mt7915_is_ebf_supported(struct mt7915_phy *phy, struct ieee80211_vif *vif,
+ 		struct ieee80211_he_cap_elem *pe = &sta->deflink.he_cap.he_cap_elem;
+ 
+ 		if (bfee)
+-			return vif->bss_conf.he_su_beamformee &&
++			return mvif->cap.he_su_ebfee &&
+ 			       HE_PHY(CAP3_SU_BEAMFORMER, pe->phy_cap_info[3]);
+ 		else
+-			return vif->bss_conf.he_su_beamformer &&
++			return mvif->cap.he_su_ebfer &&
+ 			       HE_PHY(CAP4_SU_BEAMFORMEE, pe->phy_cap_info[4]);
+ 	}
+ 
+@@ -1029,10 +1036,10 @@ mt7915_is_ebf_supported(struct mt7915_phy *phy, struct ieee80211_vif *vif,
+ 		u32 cap = sta->deflink.vht_cap.cap;
+ 
+ 		if (bfee)
+-			return vif->bss_conf.vht_su_beamformee &&
++			return mvif->cap.vht_su_ebfee &&
+ 			       (cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE);
+ 		else
+-			return vif->bss_conf.vht_su_beamformer &&
++			return mvif->cap.vht_su_ebfer &&
+ 			       (cap & IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE);
+ 	}
+ 
+@@ -1527,7 +1534,7 @@ mt7915_mcu_sta_rate_ctrl_tlv(struct sk_buff *skb, struct mt7915_dev *dev,
+ 			cap |= STA_CAP_TX_STBC;
+ 		if (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_RX_STBC)
+ 			cap |= STA_CAP_RX_STBC;
+-		if (vif->bss_conf.ht_ldpc &&
++		if (mvif->cap.ht_ldpc &&
+ 		    (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_LDPC_CODING))
+ 			cap |= STA_CAP_LDPC;
+ 
+@@ -1553,7 +1560,7 @@ mt7915_mcu_sta_rate_ctrl_tlv(struct sk_buff *skb, struct mt7915_dev *dev,
+ 			cap |= STA_CAP_VHT_TX_STBC;
+ 		if (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_1)
+ 			cap |= STA_CAP_VHT_RX_STBC;
+-		if (vif->bss_conf.vht_ldpc &&
++		if (mvif->cap.vht_ldpc &&
+ 		    (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC))
+ 			cap |= STA_CAP_VHT_LDPC;
+ 
+@@ -2993,7 +3000,7 @@ int mt7915_mcu_get_chan_mib_info(struct mt7915_phy *phy, bool chan_switch)
+ 	}
+ 
+ 	ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_EXT_CMD(GET_MIB_INFO),
+-					req, sizeof(req), true, &skb);
++					req, len * sizeof(req[0]), true, &skb);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -3733,6 +3740,62 @@ int mt7915_mcu_twt_agrt_update(struct mt7915_dev *dev,
+ 				 &req, sizeof(req), true);
+ }
+ 
++int mt7915_mcu_wed_wa_tx_stats(struct mt7915_dev *dev, u16 wlan_idx)
++{
++	struct {
++		__le32 cmd;
++		__le32 num;
++		__le32 __rsv;
++		__le16 wlan_idx;
++	} req = {
++		.cmd = cpu_to_le32(0x15),
++		.num = cpu_to_le32(1),
++		.wlan_idx = cpu_to_le16(wlan_idx),
++	};
++	struct mt7915_mcu_wa_tx_stat {
++		__le16 wlan_idx;
++		u8 __rsv[2];
++
++		/* tx_bytes is deprecated since WA byte counter uses u32,
++		 * which easily leads to overflow.
++		 */
++		__le32 tx_bytes;
++		__le32 tx_packets;
++	} *res;
++	struct mt76_wcid *wcid;
++	struct sk_buff *skb;
++	int ret;
++
++	ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_WA_PARAM_CMD(QUERY),
++					&req, sizeof(req), true, &skb);
++	if (ret)
++		return ret;
++
++	if (!is_mt7915(&dev->mt76))
++		skb_pull(skb, 4);
++
++	res = (struct mt7915_mcu_wa_tx_stat *)skb->data;
++
++	if (le16_to_cpu(res->wlan_idx) != wlan_idx) {
++		ret = -EINVAL;
++		goto out;
++	}
++
++	rcu_read_lock();
++
++	wcid = rcu_dereference(dev->mt76.wcid[wlan_idx]);
++	if (wcid)
++		wcid->stats.tx_packets += le32_to_cpu(res->tx_packets);
++	else
++		ret = -EINVAL;
++
++	rcu_read_unlock();
++out:
++	dev_kfree_skb(skb);
++
++	return ret;
++}
++
+ int mt7915_mcu_rf_regval(struct mt7915_dev *dev, u32 regidx, u32 *val, bool set)
+ {
+ 	struct {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+index 45f3558bf31c1..2fa059af23ded 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+@@ -545,8 +545,6 @@ static u32 mt7915_rmw(struct mt76_dev *mdev, u32 offset, u32 mask, u32 val)
+ static int mt7915_mmio_wed_offload_enable(struct mtk_wed_device *wed)
+ {
+ 	struct mt7915_dev *dev;
+-	struct mt7915_phy *phy;
+-	int ret;
+ 
+ 	dev = container_of(wed, struct mt7915_dev, mt76.mmio.wed);
+ 
+@@ -554,43 +552,19 @@ static int mt7915_mmio_wed_offload_enable(struct mtk_wed_device *wed)
+ 	dev->mt76.token_size = wed->wlan.token_start;
+ 	spin_unlock_bh(&dev->mt76.token_lock);
+ 
+-	ret = wait_event_timeout(dev->mt76.tx_wait,
+-				 !dev->mt76.wed_token_count, HZ);
+-	if (!ret)
+-		return -EAGAIN;
+-
+-	phy = &dev->phy;
+-	mt76_set(dev, MT_AGG_ACR4(phy->mt76->band_idx), MT_AGG_ACR_PPDU_TXS2H);
+-
+-	phy = dev->mt76.phys[MT_BAND1] ? dev->mt76.phys[MT_BAND1]->priv : NULL;
+-	if (phy)
+-		mt76_set(dev, MT_AGG_ACR4(phy->mt76->band_idx),
+-			 MT_AGG_ACR_PPDU_TXS2H);
+-
+-	return 0;
++	return !wait_event_timeout(dev->mt76.tx_wait,
++				   !dev->mt76.wed_token_count, HZ);
+ }
+ 
+ static void mt7915_mmio_wed_offload_disable(struct mtk_wed_device *wed)
+ {
+ 	struct mt7915_dev *dev;
+-	struct mt7915_phy *phy;
+ 
+ 	dev = container_of(wed, struct mt7915_dev, mt76.mmio.wed);
+ 
+ 	spin_lock_bh(&dev->mt76.token_lock);
+ 	dev->mt76.token_size = MT7915_TOKEN_SIZE;
+ 	spin_unlock_bh(&dev->mt76.token_lock);
+-
+-	/* MT_TXD5_TX_STATUS_HOST (MPDU format) has higher priority than
+-	 * MT_AGG_ACR_PPDU_TXS2H (PPDU format) even though ACR bit is set.
+-	 */
+-	phy = &dev->phy;
+-	mt76_clear(dev, MT_AGG_ACR4(phy->mt76->band_idx), MT_AGG_ACR_PPDU_TXS2H);
+-
+-	phy = dev->mt76.phys[MT_BAND1] ? dev->mt76.phys[MT_BAND1]->priv : NULL;
+-	if (phy)
+-		mt76_clear(dev, MT_AGG_ACR4(phy->mt76->band_idx),
+-			   MT_AGG_ACR_PPDU_TXS2H);
+ }
+ 
+ static void mt7915_mmio_wed_release_rx_buf(struct mtk_wed_device *wed)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+index b3ead35307406..0f76733c9c1ac 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+@@ -147,9 +147,23 @@ struct mt7915_sta {
+ 	} twt;
+ };
+ 
++struct mt7915_vif_cap {
++	bool ht_ldpc:1;
++	bool vht_ldpc:1;
++	bool he_ldpc:1;
++	bool vht_su_ebfer:1;
++	bool vht_su_ebfee:1;
++	bool vht_mu_ebfer:1;
++	bool vht_mu_ebfee:1;
++	bool he_su_ebfer:1;
++	bool he_su_ebfee:1;
++	bool he_mu_ebfer:1;
++};
++
+ struct mt7915_vif {
+ 	struct mt76_vif mt76; /* must be first */
+ 
++	struct mt7915_vif_cap cap;
+ 	struct mt7915_sta sta;
+ 	struct mt7915_phy *phy;
+ 
+@@ -539,6 +553,7 @@ int mt7915_mcu_get_rx_rate(struct mt7915_phy *phy, struct ieee80211_vif *vif,
+ 			   struct ieee80211_sta *sta, struct rate_info *rate);
+ int mt7915_mcu_rdd_background_enable(struct mt7915_phy *phy,
+ 				     struct cfg80211_chan_def *chandef);
++int mt7915_mcu_wed_wa_tx_stats(struct mt7915_dev *dev, u16 wcid);
+ int mt7915_mcu_rf_regval(struct mt7915_dev *dev, u32 regidx, u32 *val, bool set);
+ int mt7915_mcu_wa_cmd(struct mt7915_dev *dev, int cmd, u32 a1, u32 a2, u32 a3);
+ int mt7915_mcu_fw_log_2_host(struct mt7915_dev *dev, u8 type, u8 ctrl);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+index bf1da9fddfaba..f41975e37d06a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+@@ -113,7 +113,8 @@ mt7921_init_wiphy(struct ieee80211_hw *hw)
+ 	wiphy->max_sched_scan_ssids = MT76_CONNAC_MAX_SCHED_SCAN_SSID;
+ 	wiphy->max_match_sets = MT76_CONNAC_MAX_SCAN_MATCH;
+ 	wiphy->max_sched_scan_reqs = 1;
+-	wiphy->flags |= WIPHY_FLAG_HAS_CHANNEL_SWITCH;
++	wiphy->flags |= WIPHY_FLAG_HAS_CHANNEL_SWITCH |
++			WIPHY_FLAG_SPLIT_SCAN_6GHZ;
+ 	wiphy->reg_notifier = mt7921_regd_notifier;
+ 
+ 	wiphy->features |= NL80211_FEATURE_SCHED_SCAN_RANDOM_MAC_ADDR |
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/dma.c b/drivers/net/wireless/mediatek/mt76/mt7996/dma.c
+index 534143465d9b3..fbedaacffbba5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/dma.c
+@@ -293,7 +293,7 @@ int mt7996_dma_init(struct mt7996_dev *dev)
+ 	/* event from WA */
+ 	ret = mt76_queue_alloc(dev, &dev->mt76.q_rx[MT_RXQ_MCU_WA],
+ 			       MT_RXQ_ID(MT_RXQ_MCU_WA),
+-			       MT7996_RX_MCU_RING_SIZE,
++			       MT7996_RX_MCU_RING_SIZE_WA,
+ 			       MT_RX_BUF_SIZE,
+ 			       MT_RXQ_RING_BASE(MT_RXQ_MCU_WA));
+ 	if (ret)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+index 9b0f6053e0fa6..25c5deb15d213 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+@@ -836,14 +836,19 @@ mt7996_mac_fill_rx(struct mt7996_dev *dev, struct sk_buff *skb)
+ 		skb_pull(skb, hdr_gap);
+ 		if (!hdr_trans && status->amsdu && !(ieee80211_has_a4(fc) && is_mesh)) {
+ 			pad_start = ieee80211_get_hdrlen_from_skb(skb);
+-		} else if (hdr_trans && (rxd2 & MT_RXD2_NORMAL_HDR_TRANS_ERROR) &&
+-			   get_unaligned_be16(skb->data + pad_start) == ETH_P_8021Q) {
++		} else if (hdr_trans && (rxd2 & MT_RXD2_NORMAL_HDR_TRANS_ERROR)) {
+ 			/* When header translation failure is indicated,
+ 			 * the hardware will insert an extra 2-byte field
+ 			 * containing the data length after the protocol
+-			 * type field.
++			 * type field. This happens either when the LLC-SNAP
++			 * pattern did not match, or if a VLAN header was
++			 * detected.
+ 			 */
+-			pad_start = 16;
++			pad_start = 12;
++			if (get_unaligned_be16(skb->data + pad_start) == ETH_P_8021Q)
++				pad_start += 4;
++			else
++				pad_start = 0;
+ 		}
+ 
+ 		if (pad_start) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+index 88e2f9d0e5130..62a02b03d83ba 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+@@ -339,7 +339,11 @@ mt7996_mcu_rx_radar_detected(struct mt7996_dev *dev, struct sk_buff *skb)
+ 	if (r->band_idx >= ARRAY_SIZE(dev->mt76.phys))
+ 		return;
+ 
+-	mphy = dev->mt76.phys[r->band_idx];
++	if (dev->rdd2_phy && r->band_idx == MT_RX_SEL2)
++		mphy = dev->rdd2_phy->mt76;
++	else
++		mphy = dev->mt76.phys[r->band_idx];
++
+ 	if (!mphy)
+ 		return;
+ 
+@@ -712,6 +716,7 @@ mt7996_mcu_bss_basic_tlv(struct sk_buff *skb,
+ 	struct cfg80211_chan_def *chandef = &phy->chandef;
+ 	struct mt76_connac_bss_basic_tlv *bss;
+ 	u32 type = CONNECTION_INFRA_AP;
++	u16 sta_wlan_idx = wlan_idx;
+ 	struct tlv *tlv;
+ 	int idx;
+ 
+@@ -731,7 +736,7 @@ mt7996_mcu_bss_basic_tlv(struct sk_buff *skb,
+ 				struct mt76_wcid *wcid;
+ 
+ 				wcid = (struct mt76_wcid *)sta->drv_priv;
+-				wlan_idx = wcid->idx;
++				sta_wlan_idx = wcid->idx;
+ 			}
+ 			rcu_read_unlock();
+ 		}
+@@ -751,7 +756,7 @@ mt7996_mcu_bss_basic_tlv(struct sk_buff *skb,
+ 	bss->bcn_interval = cpu_to_le16(vif->bss_conf.beacon_int);
+ 	bss->dtim_period = vif->bss_conf.dtim_period;
+ 	bss->bmc_tx_wlan_idx = cpu_to_le16(wlan_idx);
+-	bss->sta_idx = cpu_to_le16(wlan_idx);
++	bss->sta_idx = cpu_to_le16(sta_wlan_idx);
+ 	bss->conn_type = cpu_to_le32(type);
+ 	bss->omac_idx = mvif->omac_idx;
+ 	bss->band_idx = mvif->band_idx;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+index 4d7dcb95a620a..b8bcad717d89f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+@@ -26,6 +26,7 @@
+ 
+ #define MT7996_RX_RING_SIZE		1536
+ #define MT7996_RX_MCU_RING_SIZE		512
++#define MT7996_RX_MCU_RING_SIZE_WA	1024
+ 
+ #define MT7996_FIRMWARE_WA		"mediatek/mt7996/mt7996_wa.bin"
+ #define MT7996_FIRMWARE_WM		"mediatek/mt7996/mt7996_wm.bin"
+diff --git a/drivers/net/wireless/mediatek/mt76/testmode.c b/drivers/net/wireless/mediatek/mt76/testmode.c
+index 0accc71a91c9a..4644dace9bb34 100644
+--- a/drivers/net/wireless/mediatek/mt76/testmode.c
++++ b/drivers/net/wireless/mediatek/mt76/testmode.c
+@@ -8,6 +8,7 @@ const struct nla_policy mt76_tm_policy[NUM_MT76_TM_ATTRS] = {
+ 	[MT76_TM_ATTR_RESET] = { .type = NLA_FLAG },
+ 	[MT76_TM_ATTR_STATE] = { .type = NLA_U8 },
+ 	[MT76_TM_ATTR_TX_COUNT] = { .type = NLA_U32 },
++	[MT76_TM_ATTR_TX_LENGTH] = { .type = NLA_U32 },
+ 	[MT76_TM_ATTR_TX_RATE_MODE] = { .type = NLA_U8 },
+ 	[MT76_TM_ATTR_TX_RATE_NSS] = { .type = NLA_U8 },
+ 	[MT76_TM_ATTR_TX_RATE_IDX] = { .type = NLA_U8 },
+diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c
+index 72b3ec715e47a..e9b9728458a9b 100644
+--- a/drivers/net/wireless/mediatek/mt76/tx.c
++++ b/drivers/net/wireless/mediatek/mt76/tx.c
+@@ -121,6 +121,7 @@ int
+ mt76_tx_status_skb_add(struct mt76_dev *dev, struct mt76_wcid *wcid,
+ 		       struct sk_buff *skb)
+ {
++	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ 	struct mt76_tx_cb *cb = mt76_tx_skb_cb(skb);
+ 	int pid;
+@@ -134,8 +135,14 @@ mt76_tx_status_skb_add(struct mt76_dev *dev, struct mt76_wcid *wcid,
+ 		return MT_PACKET_ID_NO_ACK;
+ 
+ 	if (!(info->flags & (IEEE80211_TX_CTL_REQ_TX_STATUS |
+-			     IEEE80211_TX_CTL_RATE_CTRL_PROBE)))
++			     IEEE80211_TX_CTL_RATE_CTRL_PROBE))) {
++		if (mtk_wed_device_active(&dev->mmio.wed) &&
++		    ((info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP) ||
++		     ieee80211_is_data(hdr->frame_control)))
++			return MT_PACKET_ID_WED;
++
+ 		return MT_PACKET_ID_NO_SKB;
++	}
+ 
+ 	spin_lock_bh(&dev->status_lock);
+ 
+diff --git a/drivers/net/wireless/realtek/rtw89/debug.c b/drivers/net/wireless/realtek/rtw89/debug.c
+index a4bbac916e22b..ce5a9ac081457 100644
+--- a/drivers/net/wireless/realtek/rtw89/debug.c
++++ b/drivers/net/wireless/realtek/rtw89/debug.c
+@@ -3193,12 +3193,14 @@ static ssize_t rtw89_debug_priv_btc_manual_set(struct file *filp,
+ 	struct rtw89_dev *rtwdev = debugfs_priv->rtwdev;
+ 	struct rtw89_btc *btc = &rtwdev->btc;
+ 	bool btc_manual;
++	int ret;
+ 
+-	if (kstrtobool_from_user(user_buf, count, &btc_manual))
+-		goto out;
++	ret = kstrtobool_from_user(user_buf, count, &btc_manual);
++	if (ret)
++		return ret;
+ 
+ 	btc->ctrl.manual = btc_manual;
+-out:
++
+ 	return count;
+ }
+ 
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index 9637f5e48d842..d44628a900465 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -312,31 +312,17 @@ rtw89_early_fw_feature_recognize(struct device *device,
+ 				 struct rtw89_fw_info *early_fw,
+ 				 int *used_fw_format)
+ {
+-	union rtw89_compat_fw_hdr buf = {};
+ 	const struct firmware *firmware;
+-	bool full_req = false;
+ 	char fw_name[64];
+ 	int fw_format;
+ 	u32 ver_code;
+ 	int ret;
+ 
+-	/* If SECURITY_LOADPIN_ENFORCE is enabled, reading partial files will
+-	 * be denied (-EPERM). Then, we don't get right firmware things as
+-	 * expected. So, in this case, we have to request full firmware here.
+-	 */
+-	if (IS_ENABLED(CONFIG_SECURITY_LOADPIN_ENFORCE))
+-		full_req = true;
+-
+ 	for (fw_format = chip->fw_format_max; fw_format >= 0; fw_format--) {
+ 		rtw89_fw_get_filename(fw_name, sizeof(fw_name),
+ 				      chip->fw_basename, fw_format);
+ 
+-		if (full_req)
+-			ret = request_firmware(&firmware, fw_name, device);
+-		else
+-			ret = request_partial_firmware_into_buf(&firmware, fw_name,
+-								device, &buf, sizeof(buf),
+-								0);
++		ret = request_firmware(&firmware, fw_name, device);
+ 		if (!ret) {
+ 			dev_info(device, "loaded firmware %s\n", fw_name);
+ 			*used_fw_format = fw_format;
+@@ -349,10 +335,7 @@ rtw89_early_fw_feature_recognize(struct device *device,
+ 		return NULL;
+ 	}
+ 
+-	if (full_req)
+-		ver_code = rtw89_compat_fw_hdr_ver_code(firmware->data);
+-	else
+-		ver_code = rtw89_compat_fw_hdr_ver_code(&buf);
++	ver_code = rtw89_compat_fw_hdr_ver_code(firmware->data);
+ 
+ 	if (!ver_code)
+ 		goto out;
+@@ -360,11 +343,7 @@ rtw89_early_fw_feature_recognize(struct device *device,
+ 	rtw89_fw_iterate_feature_cfg(early_fw, chip, ver_code);
+ 
+ out:
+-	if (full_req)
+-		return firmware;
+-
+-	release_firmware(firmware);
+-	return NULL;
++	return firmware;
+ }
+ 
+ int rtw89_fw_recognize(struct rtw89_dev *rtwdev)
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852b_rfk.c b/drivers/net/wireless/realtek/rtw89/rtw8852b_rfk.c
+index fa018e1f499b2..259df67836a0e 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852b_rfk.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852b_rfk.c
+@@ -846,7 +846,7 @@ static bool _iqk_one_shot(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx,
+ 	case ID_NBTXK:
+ 		rtw89_phy_write32_mask(rtwdev, R_P0_RFCTM, B_P0_RFCTM_EN, 0x0);
+ 		rtw89_phy_write32_mask(rtwdev, R_IQK_DIF4, B_IQK_DIF4_TXT, 0x011);
+-		iqk_cmd = 0x308 | (1 << (4 + path));
++		iqk_cmd = 0x408 | (1 << (4 + path));
+ 		break;
+ 	case ID_NBRXK:
+ 		rtw89_phy_write32_mask(rtwdev, R_P0_RFCTM, B_P0_RFCTM_EN, 0x1);
+@@ -1078,7 +1078,7 @@ static bool _iqk_nbtxk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx, u8
+ {
+ 	struct rtw89_iqk_info *iqk_info = &rtwdev->iqk;
+ 	bool kfail;
+-	u8 gp = 0x3;
++	u8 gp = 0x2;
+ 
+ 	switch (iqk_info->iqk_band[path]) {
+ 	case RTW89_BAND_2G:
+diff --git a/drivers/ntb/ntb_transport.c b/drivers/ntb/ntb_transport.c
+index 2abd2235bbcab..9532108d2dce1 100644
+--- a/drivers/ntb/ntb_transport.c
++++ b/drivers/ntb/ntb_transport.c
+@@ -909,7 +909,7 @@ static int ntb_set_mw(struct ntb_transport_ctx *nt, int num_mw,
+ 	return 0;
+ }
+ 
+-static void ntb_qp_link_down_reset(struct ntb_transport_qp *qp)
++static void ntb_qp_link_context_reset(struct ntb_transport_qp *qp)
+ {
+ 	qp->link_is_up = false;
+ 	qp->active = false;
+@@ -932,6 +932,13 @@ static void ntb_qp_link_down_reset(struct ntb_transport_qp *qp)
+ 	qp->tx_async = 0;
+ }
+ 
++static void ntb_qp_link_down_reset(struct ntb_transport_qp *qp)
++{
++	ntb_qp_link_context_reset(qp);
++	if (qp->remote_rx_info)
++		qp->remote_rx_info->entry = qp->rx_max_entry - 1;
++}
++
+ static void ntb_qp_link_cleanup(struct ntb_transport_qp *qp)
+ {
+ 	struct ntb_transport_ctx *nt = qp->transport;
+@@ -1174,7 +1181,7 @@ static int ntb_transport_init_queue(struct ntb_transport_ctx *nt,
+ 	qp->ndev = nt->ndev;
+ 	qp->client_ready = false;
+ 	qp->event_handler = NULL;
+-	ntb_qp_link_down_reset(qp);
++	ntb_qp_link_context_reset(qp);
+ 
+ 	if (mw_num < qp_count % mw_count)
+ 		num_qps_mw = qp_count / mw_count + 1;
+@@ -2276,9 +2283,13 @@ int ntb_transport_tx_enqueue(struct ntb_transport_qp *qp, void *cb, void *data,
+ 	struct ntb_queue_entry *entry;
+ 	int rc;
+ 
+-	if (!qp || !qp->link_is_up || !len)
++	if (!qp || !len)
+ 		return -EINVAL;
+ 
++	/* If the qp link is down already, just ignore. */
++	if (!qp->link_is_up)
++		return 0;
++
+ 	entry = ntb_list_rm(&qp->ntb_tx_free_q_lock, &qp->tx_free_q);
+ 	if (!entry) {
+ 		qp->tx_err_no_buf++;
+@@ -2418,7 +2429,7 @@ unsigned int ntb_transport_tx_free_entry(struct ntb_transport_qp *qp)
+ 	unsigned int head = qp->tx_index;
+ 	unsigned int tail = qp->remote_rx_info->entry;
+ 
+-	return tail > head ? tail - head : qp->tx_max_entry + tail - head;
++	return tail >= head ? tail - head : qp->tx_max_entry + tail - head;
+ }
+ EXPORT_SYMBOL_GPL(ntb_transport_tx_free_entry);
+ 
+diff --git a/drivers/nvdimm/nd_perf.c b/drivers/nvdimm/nd_perf.c
+index 433bbb68ae641..2b6dc80d8fb5b 100644
+--- a/drivers/nvdimm/nd_perf.c
++++ b/drivers/nvdimm/nd_perf.c
+@@ -308,8 +308,8 @@ int register_nvdimm_pmu(struct nvdimm_pmu *nd_pmu, struct platform_device *pdev)
+ 
+ 	rc = perf_pmu_register(&nd_pmu->pmu, nd_pmu->pmu.name, -1);
+ 	if (rc) {
+-		kfree(nd_pmu->pmu.attr_groups);
+ 		nvdimm_pmu_free_hotplug_memory(nd_pmu);
++		kfree(nd_pmu->pmu.attr_groups);
+ 		return rc;
+ 	}
+ 
+@@ -324,6 +324,7 @@ void unregister_nvdimm_pmu(struct nvdimm_pmu *nd_pmu)
+ {
+ 	perf_pmu_unregister(&nd_pmu->pmu);
+ 	nvdimm_pmu_free_hotplug_memory(nd_pmu);
++	kfree(nd_pmu->pmu.attr_groups);
+ 	kfree(nd_pmu);
+ }
+ EXPORT_SYMBOL_GPL(unregister_nvdimm_pmu);
+diff --git a/drivers/nvdimm/nd_virtio.c b/drivers/nvdimm/nd_virtio.c
+index c6a648fd8744a..1f8c667c6f1ee 100644
+--- a/drivers/nvdimm/nd_virtio.c
++++ b/drivers/nvdimm/nd_virtio.c
+@@ -105,7 +105,8 @@ int async_pmem_flush(struct nd_region *nd_region, struct bio *bio)
+ 	 * parent bio. Otherwise directly call nd_region flush.
+ 	 */
+ 	if (bio && bio->bi_iter.bi_sector != -1) {
+-		struct bio *child = bio_alloc(bio->bi_bdev, 0, REQ_PREFLUSH,
++		struct bio *child = bio_alloc(bio->bi_bdev, 0,
++					      REQ_OP_WRITE | REQ_PREFLUSH,
+ 					      GFP_ATOMIC);
+ 
+ 		if (!child)
+diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c
+index 7feb643f13707..28b479afd506f 100644
+--- a/drivers/of/overlay.c
++++ b/drivers/of/overlay.c
+@@ -752,8 +752,6 @@ static int init_overlay_changeset(struct overlay_changeset *ovcs)
+ 	if (!of_node_is_root(ovcs->overlay_root))
+ 		pr_debug("%s() ovcs->overlay_root is not root\n", __func__);
+ 
+-	of_changeset_init(&ovcs->cset);
+-
+ 	cnt = 0;
+ 
+ 	/* fragment nodes */
+@@ -1013,6 +1011,7 @@ int of_overlay_fdt_apply(const void *overlay_fdt, u32 overlay_fdt_size,
+ 
+ 	INIT_LIST_HEAD(&ovcs->ovcs_list);
+ 	list_add_tail(&ovcs->ovcs_list, &ovcs_list);
++	of_changeset_init(&ovcs->cset);
+ 
+ 	/*
+ 	 * Must create permanent copy of FDT because of_fdt_unflatten_tree()
+diff --git a/drivers/of/property.c b/drivers/of/property.c
+index ddc75cd50825e..cf8dacf3e3b84 100644
+--- a/drivers/of/property.c
++++ b/drivers/of/property.c
+@@ -1266,6 +1266,7 @@ DEFINE_SIMPLE_PROP(pwms, "pwms", "#pwm-cells")
+ DEFINE_SIMPLE_PROP(resets, "resets", "#reset-cells")
+ DEFINE_SIMPLE_PROP(leds, "leds", NULL)
+ DEFINE_SIMPLE_PROP(backlight, "backlight", NULL)
++DEFINE_SIMPLE_PROP(panel, "panel", NULL)
+ DEFINE_SUFFIX_PROP(regulators, "-supply", NULL)
+ DEFINE_SUFFIX_PROP(gpio, "-gpio", "#gpio-cells")
+ 
+@@ -1354,6 +1355,7 @@ static const struct supplier_bindings of_supplier_bindings[] = {
+ 	{ .parse_prop = parse_resets, },
+ 	{ .parse_prop = parse_leds, },
+ 	{ .parse_prop = parse_backlight, },
++	{ .parse_prop = parse_panel, },
+ 	{ .parse_prop = parse_gpio_compat, },
+ 	{ .parse_prop = parse_interrupts, },
+ 	{ .parse_prop = parse_regulators, },
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index b545fcb22536d..f6784cce8369b 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -77,7 +77,7 @@ static void __init of_unittest_find_node_by_name(void)
+ 
+ 	np = of_find_node_by_path("/testcase-data");
+ 	name = kasprintf(GFP_KERNEL, "%pOF", np);
+-	unittest(np && !strcmp("/testcase-data", name),
++	unittest(np && name && !strcmp("/testcase-data", name),
+ 		"find /testcase-data failed\n");
+ 	of_node_put(np);
+ 	kfree(name);
+@@ -88,14 +88,14 @@ static void __init of_unittest_find_node_by_name(void)
+ 
+ 	np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a");
+ 	name = kasprintf(GFP_KERNEL, "%pOF", np);
+-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
++	unittest(np && name && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
+ 		"find /testcase-data/phandle-tests/consumer-a failed\n");
+ 	of_node_put(np);
+ 	kfree(name);
+ 
+ 	np = of_find_node_by_path("testcase-alias");
+ 	name = kasprintf(GFP_KERNEL, "%pOF", np);
+-	unittest(np && !strcmp("/testcase-data", name),
++	unittest(np && name && !strcmp("/testcase-data", name),
+ 		"find testcase-alias failed\n");
+ 	of_node_put(np);
+ 	kfree(name);
+@@ -106,7 +106,7 @@ static void __init of_unittest_find_node_by_name(void)
+ 
+ 	np = of_find_node_by_path("testcase-alias/phandle-tests/consumer-a");
+ 	name = kasprintf(GFP_KERNEL, "%pOF", np);
+-	unittest(np && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
++	unittest(np && name && !strcmp("/testcase-data/phandle-tests/consumer-a", name),
+ 		"find testcase-alias/phandle-tests/consumer-a failed\n");
+ 	of_node_put(np);
+ 	kfree(name);
+@@ -1533,6 +1533,8 @@ static void attach_node_and_children(struct device_node *np)
+ 	const char *full_name;
+ 
+ 	full_name = kasprintf(GFP_KERNEL, "%pOF", np);
++	if (!full_name)
++		return;
+ 
+ 	if (!strcmp(full_name, "/__local_fixups__") ||
+ 	    !strcmp(full_name, "/__fixups__")) {
+@@ -2180,7 +2182,7 @@ static int __init of_unittest_apply_revert_overlay_check(int overlay_nr,
+ 	of_unittest_untrack_overlay(save_ovcs_id);
+ 
+ 	/* unittest device must be again in before state */
+-	if (of_unittest_device_exists(unittest_nr, PDEV_OVERLAY) != before) {
++	if (of_unittest_device_exists(unittest_nr, ovtype) != before) {
+ 		unittest(0, "%s with device @\"%s\" %s\n",
+ 				overlay_name_from_nr(overlay_nr),
+ 				unittest_path(unittest_nr, ovtype),
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 3f46e499d615f..ae359ed6a1611 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -227,20 +227,18 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_level);
+ unsigned int dev_pm_opp_get_required_pstate(struct dev_pm_opp *opp,
+ 					    unsigned int index)
+ {
+-	struct opp_table *opp_table = opp->opp_table;
+-
+ 	if (IS_ERR_OR_NULL(opp) || !opp->available ||
+-	    index >= opp_table->required_opp_count) {
++	    index >= opp->opp_table->required_opp_count) {
+ 		pr_err("%s: Invalid parameters\n", __func__);
+ 		return 0;
+ 	}
+ 
+ 	/* required-opps not fully initialized yet */
+-	if (lazy_linking_pending(opp_table))
++	if (lazy_linking_pending(opp->opp_table))
+ 		return 0;
+ 
+ 	/* The required OPP table must belong to a genpd */
+-	if (unlikely(!opp_table->required_opp_tables[index]->is_genpd)) {
++	if (unlikely(!opp->opp_table->required_opp_tables[index]->is_genpd)) {
+ 		pr_err("%s: Performance state is only valid for genpds.\n", __func__);
+ 		return 0;
+ 	}
+@@ -2379,7 +2377,7 @@ static int _opp_attach_genpd(struct opp_table *opp_table, struct device *dev,
+ 
+ 		virt_dev = dev_pm_domain_attach_by_name(dev, *name);
+ 		if (IS_ERR_OR_NULL(virt_dev)) {
+-			ret = PTR_ERR(virt_dev) ? : -ENODEV;
++			ret = virt_dev ? PTR_ERR(virt_dev) : -ENODEV;
+ 			dev_err(dev, "Couldn't attach to pm_domain: %d\n", ret);
+ 			goto err;
+ 		}
+diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
+index 9bf652bd002cf..10e846286f4ef 100644
+--- a/drivers/parisc/ccio-dma.c
++++ b/drivers/parisc/ccio-dma.c
+@@ -71,8 +71,6 @@
+ #undef CCIO_COLLECT_STATS
+ #endif
+ 
+-#include <asm/runway.h>		/* for proc_runway_root */
+-
+ #ifdef DEBUG_CCIO_INIT
+ #define DBG_INIT(x...)  printk(x)
+ #else
+@@ -1567,10 +1565,15 @@ static int __init ccio_probe(struct parisc_device *dev)
+ 
+ #ifdef CONFIG_PROC_FS
+ 	if (ioc_count == 0) {
+-		proc_create_single(MODULE_NAME, 0, proc_runway_root,
++		struct proc_dir_entry *runway;
++
++		runway = proc_mkdir("bus/runway", NULL);
++		if (runway) {
++			proc_create_single(MODULE_NAME, 0, runway,
+ 				ccio_proc_info);
+-		proc_create_single(MODULE_NAME"-bitmap", 0, proc_runway_root,
++			proc_create_single(MODULE_NAME"-bitmap", 0, runway,
+ 				ccio_proc_bitmap_info);
++		}
+ 	}
+ #endif
+ 	ioc_count++;
+diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c
+index 8b1dcd537020f..8f28f8696bf32 100644
+--- a/drivers/parisc/sba_iommu.c
++++ b/drivers/parisc/sba_iommu.c
+@@ -121,7 +121,7 @@ module_param(sba_reserve_agpgart, int, 0444);
+ MODULE_PARM_DESC(sba_reserve_agpgart, "Reserve half of IO pdir as AGPGART");
+ #endif
+ 
+-struct proc_dir_entry *proc_runway_root __ro_after_init;
++static struct proc_dir_entry *proc_runway_root __ro_after_init;
+ struct proc_dir_entry *proc_mckinley_root __ro_after_init;
+ 
+ /************************************
+diff --git a/drivers/pci/access.c b/drivers/pci/access.c
+index 3c230ca3de584..0b2e90d2f04f2 100644
+--- a/drivers/pci/access.c
++++ b/drivers/pci/access.c
+@@ -497,8 +497,8 @@ int pcie_capability_write_dword(struct pci_dev *dev, int pos, u32 val)
+ }
+ EXPORT_SYMBOL(pcie_capability_write_dword);
+ 
+-int pcie_capability_clear_and_set_word(struct pci_dev *dev, int pos,
+-				       u16 clear, u16 set)
++int pcie_capability_clear_and_set_word_unlocked(struct pci_dev *dev, int pos,
++						u16 clear, u16 set)
+ {
+ 	int ret;
+ 	u16 val;
+@@ -512,7 +512,21 @@ int pcie_capability_clear_and_set_word(struct pci_dev *dev, int pos,
+ 
+ 	return ret;
+ }
+-EXPORT_SYMBOL(pcie_capability_clear_and_set_word);
++EXPORT_SYMBOL(pcie_capability_clear_and_set_word_unlocked);
++
++int pcie_capability_clear_and_set_word_locked(struct pci_dev *dev, int pos,
++					      u16 clear, u16 set)
++{
++	unsigned long flags;
++	int ret;
++
++	spin_lock_irqsave(&dev->pcie_cap_lock, flags);
++	ret = pcie_capability_clear_and_set_word_unlocked(dev, pos, clear, set);
++	spin_unlock_irqrestore(&dev->pcie_cap_lock, flags);
++
++	return ret;
++}
++EXPORT_SYMBOL(pcie_capability_clear_and_set_word_locked);
+ 
+ int pcie_capability_clear_and_set_dword(struct pci_dev *dev, int pos,
+ 					u32 clear, u32 set)
+diff --git a/drivers/pci/controller/dwc/pci-layerscape-ep.c b/drivers/pci/controller/dwc/pci-layerscape-ep.c
+index de4c1758a6c33..19595e93dd4b6 100644
+--- a/drivers/pci/controller/dwc/pci-layerscape-ep.c
++++ b/drivers/pci/controller/dwc/pci-layerscape-ep.c
+@@ -45,6 +45,7 @@ struct ls_pcie_ep {
+ 	struct pci_epc_features		*ls_epc;
+ 	const struct ls_pcie_ep_drvdata *drvdata;
+ 	int				irq;
++	u32				lnkcap;
+ 	bool				big_endian;
+ };
+ 
+@@ -73,6 +74,7 @@ static irqreturn_t ls_pcie_ep_event_handler(int irq, void *dev_id)
+ 	struct ls_pcie_ep *pcie = dev_id;
+ 	struct dw_pcie *pci = pcie->pci;
+ 	u32 val, cfg;
++	u8 offset;
+ 
+ 	val = ls_lut_readl(pcie, PEX_PF0_PME_MES_DR);
+ 	ls_lut_writel(pcie, PEX_PF0_PME_MES_DR, val);
+@@ -81,6 +83,19 @@ static irqreturn_t ls_pcie_ep_event_handler(int irq, void *dev_id)
+ 		return IRQ_NONE;
+ 
+ 	if (val & PEX_PF0_PME_MES_DR_LUD) {
++
++		offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
++
++		/*
++		 * The values of the Maximum Link Width and Supported Link
++		 * Speed from the Link Capabilities Register will be lost
++		 * during link down or hot reset. Restore initial value
++		 * that configured by the Reset Configuration Word (RCW).
++		 */
++		dw_pcie_dbi_ro_wr_en(pci);
++		dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, pcie->lnkcap);
++		dw_pcie_dbi_ro_wr_dis(pci);
++
+ 		cfg = ls_lut_readl(pcie, PEX_PF0_CONFIG);
+ 		cfg |= PEX_PF0_CFG_READY;
+ 		ls_lut_writel(pcie, PEX_PF0_CONFIG, cfg);
+@@ -215,6 +230,7 @@ static int __init ls_pcie_ep_probe(struct platform_device *pdev)
+ 	struct ls_pcie_ep *pcie;
+ 	struct pci_epc_features *ls_epc;
+ 	struct resource *dbi_base;
++	u8 offset;
+ 	int ret;
+ 
+ 	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
+@@ -251,6 +267,9 @@ static int __init ls_pcie_ep_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, pcie);
+ 
++	offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
++	pcie->lnkcap = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP);
++
+ 	ret = dw_pcie_ep_init(&pci->ep);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/pci/controller/dwc/pcie-qcom-ep.c b/drivers/pci/controller/dwc/pcie-qcom-ep.c
+index 0fe7f06f21026..267e1247d548f 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom-ep.c
++++ b/drivers/pci/controller/dwc/pcie-qcom-ep.c
+@@ -415,7 +415,7 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
+ 	/* Gate Master AXI clock to MHI bus during L1SS */
+ 	val = readl_relaxed(pcie_ep->parf + PARF_MHI_CLOCK_RESET_CTRL);
+ 	val &= ~PARF_MSTR_AXI_CLK_EN;
+-	val = readl_relaxed(pcie_ep->parf + PARF_MHI_CLOCK_RESET_CTRL);
++	writel_relaxed(val, pcie_ep->parf + PARF_MHI_CLOCK_RESET_CTRL);
+ 
+ 	dw_pcie_ep_init_notify(&pcie_ep->pci.ep);
+ 
+diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
+index e1db909f53ec9..ccff8cde5cff6 100644
+--- a/drivers/pci/controller/dwc/pcie-tegra194.c
++++ b/drivers/pci/controller/dwc/pcie-tegra194.c
+@@ -900,11 +900,6 @@ static int tegra_pcie_dw_host_init(struct dw_pcie_rp *pp)
+ 		pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci,
+ 							      PCI_CAP_ID_EXP);
+ 
+-	val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL);
+-	val_16 &= ~PCI_EXP_DEVCTL_PAYLOAD;
+-	val_16 |= PCI_EXP_DEVCTL_PAYLOAD_256B;
+-	dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL, val_16);
+-
+ 	val = dw_pcie_readl_dbi(pci, PCI_IO_BASE);
+ 	val &= ~(IO_BASE_IO_DECODE | IO_BASE_IO_DECODE_BIT8);
+ 	dw_pcie_writel_dbi(pci, PCI_IO_BASE, val);
+@@ -1887,11 +1882,6 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
+ 	pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci,
+ 						      PCI_CAP_ID_EXP);
+ 
+-	val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL);
+-	val_16 &= ~PCI_EXP_DEVCTL_PAYLOAD;
+-	val_16 |= PCI_EXP_DEVCTL_PAYLOAD_256B;
+-	dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL, val_16);
+-
+ 	/* Clear Slot Clock Configuration bit if SRNS configuration */
+ 	if (pcie->enable_srns) {
+ 		val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index 2d93d0c4f10db..bed3cefdaf198 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -3983,6 +3983,9 @@ static int hv_pci_restore_msi_msg(struct pci_dev *pdev, void *arg)
+ 	struct msi_desc *entry;
+ 	int ret = 0;
+ 
++	if (!pdev->msi_enabled && !pdev->msix_enabled)
++		return 0;
++
+ 	msi_lock_descs(&pdev->dev);
+ 	msi_for_each_desc(entry, &pdev->dev, MSI_DESC_ASSOCIATED) {
+ 		irq_data = irq_get_irq_data(entry->irq);
+diff --git a/drivers/pci/controller/pcie-apple.c b/drivers/pci/controller/pcie-apple.c
+index 66f37e403a09c..2340dab6cd5bd 100644
+--- a/drivers/pci/controller/pcie-apple.c
++++ b/drivers/pci/controller/pcie-apple.c
+@@ -783,6 +783,10 @@ static int apple_pcie_init(struct pci_config_window *cfg)
+ 	cfg->priv = pcie;
+ 	INIT_LIST_HEAD(&pcie->ports);
+ 
++	ret = apple_msi_init(pcie);
++	if (ret)
++		return ret;
++
+ 	for_each_child_of_node(dev->of_node, of_port) {
+ 		ret = apple_pcie_setup_port(pcie, of_port);
+ 		if (ret) {
+@@ -792,7 +796,7 @@ static int apple_pcie_init(struct pci_config_window *cfg)
+ 		}
+ 	}
+ 
+-	return apple_msi_init(pcie);
++	return 0;
+ }
+ 
+ static int apple_pcie_probe(struct platform_device *pdev)
+diff --git a/drivers/pci/controller/pcie-microchip-host.c b/drivers/pci/controller/pcie-microchip-host.c
+index 5e710e4854646..dd5245904c874 100644
+--- a/drivers/pci/controller/pcie-microchip-host.c
++++ b/drivers/pci/controller/pcie-microchip-host.c
+@@ -167,12 +167,12 @@
+ #define EVENT_PCIE_DLUP_EXIT			2
+ #define EVENT_SEC_TX_RAM_SEC_ERR		3
+ #define EVENT_SEC_RX_RAM_SEC_ERR		4
+-#define EVENT_SEC_AXI2PCIE_RAM_SEC_ERR		5
+-#define EVENT_SEC_PCIE2AXI_RAM_SEC_ERR		6
++#define EVENT_SEC_PCIE2AXI_RAM_SEC_ERR		5
++#define EVENT_SEC_AXI2PCIE_RAM_SEC_ERR		6
+ #define EVENT_DED_TX_RAM_DED_ERR		7
+ #define EVENT_DED_RX_RAM_DED_ERR		8
+-#define EVENT_DED_AXI2PCIE_RAM_DED_ERR		9
+-#define EVENT_DED_PCIE2AXI_RAM_DED_ERR		10
++#define EVENT_DED_PCIE2AXI_RAM_DED_ERR		9
++#define EVENT_DED_AXI2PCIE_RAM_DED_ERR		10
+ #define EVENT_LOCAL_DMA_END_ENGINE_0		11
+ #define EVENT_LOCAL_DMA_END_ENGINE_1		12
+ #define EVENT_LOCAL_DMA_ERROR_ENGINE_0		13
+diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h
+index fe0333778fd93..6111de35f84ca 100644
+--- a/drivers/pci/controller/pcie-rockchip.h
++++ b/drivers/pci/controller/pcie-rockchip.h
+@@ -158,7 +158,9 @@
+ #define PCIE_RC_CONFIG_THP_CAP		(PCIE_RC_CONFIG_BASE + 0x274)
+ #define   PCIE_RC_CONFIG_THP_CAP_NEXT_MASK	GENMASK(31, 20)
+ 
+-#define PCIE_ADDR_MASK			0xffffff00
++#define MAX_AXI_IB_ROOTPORT_REGION_NUM		3
++#define MIN_AXI_ADDR_BITS_PASSED		8
++#define PCIE_ADDR_MASK			GENMASK_ULL(63, MIN_AXI_ADDR_BITS_PASSED)
+ #define PCIE_CORE_AXI_CONF_BASE		0xc00000
+ #define PCIE_CORE_OB_REGION_ADDR0	(PCIE_CORE_AXI_CONF_BASE + 0x0)
+ #define   PCIE_CORE_OB_REGION_ADDR0_NUM_BITS	0x3f
+@@ -185,8 +187,6 @@
+ #define AXI_WRAPPER_TYPE1_CFG			0xb
+ #define AXI_WRAPPER_NOR_MSG			0xc
+ 
+-#define MAX_AXI_IB_ROOTPORT_REGION_NUM		3
+-#define MIN_AXI_ADDR_BITS_PASSED		8
+ #define PCIE_RC_SEND_PME_OFF			0x11960
+ #define ROCKCHIP_VENDOR_ID			0x1d87
+ #define PCIE_LINK_IS_L2(x) \
+diff --git a/drivers/pci/doe.c b/drivers/pci/doe.c
+index 1b97a5ab71a96..e3aab5edaf706 100644
+--- a/drivers/pci/doe.c
++++ b/drivers/pci/doe.c
+@@ -293,8 +293,8 @@ static int pci_doe_recv_resp(struct pci_doe_mb *doe_mb, struct pci_doe_task *tas
+ static void signal_task_complete(struct pci_doe_task *task, int rv)
+ {
+ 	task->rv = rv;
+-	task->complete(task);
+ 	destroy_work_on_stack(&task->work);
++	task->complete(task);
+ }
+ 
+ static void signal_task_abort(struct pci_doe_task *task, int rv)
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 8711325605f0a..fd713abdfb9f9 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -332,17 +332,11 @@ int pciehp_check_link_status(struct controller *ctrl)
+ static int __pciehp_link_set(struct controller *ctrl, bool enable)
+ {
+ 	struct pci_dev *pdev = ctrl_dev(ctrl);
+-	u16 lnk_ctrl;
+ 
+-	pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &lnk_ctrl);
++	pcie_capability_clear_and_set_word(pdev, PCI_EXP_LNKCTL,
++					   PCI_EXP_LNKCTL_LD,
++					   enable ? 0 : PCI_EXP_LNKCTL_LD);
+ 
+-	if (enable)
+-		lnk_ctrl &= ~PCI_EXP_LNKCTL_LD;
+-	else
+-		lnk_ctrl |= PCI_EXP_LNKCTL_LD;
+-
+-	pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnk_ctrl);
+-	ctrl_dbg(ctrl, "%s: lnk_ctrl = %x\n", __func__, lnk_ctrl);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 60230da957e0c..702fe577089b4 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1226,6 +1226,10 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
+  *
+  * On success, return 0 or 1, depending on whether or not it is necessary to
+  * restore the device's BARs subsequently (1 is returned in that case).
++ *
++ * On failure, return a negative error code.  Always return failure if @dev
++ * lacks a Power Management Capability, even if the platform was able to
++ * put the device in D0 via non-PCI means.
+  */
+ int pci_power_up(struct pci_dev *dev)
+ {
+@@ -1242,9 +1246,6 @@ int pci_power_up(struct pci_dev *dev)
+ 		else
+ 			dev->current_state = state;
+ 
+-		if (state == PCI_D0)
+-			return 0;
+-
+ 		return -EIO;
+ 	}
+ 
+@@ -1302,8 +1303,12 @@ static int pci_set_full_power_state(struct pci_dev *dev)
+ 	int ret;
+ 
+ 	ret = pci_power_up(dev);
+-	if (ret < 0)
++	if (ret < 0) {
++		if (dev->current_state == PCI_D0)
++			return 0;
++
+ 		return ret;
++	}
+ 
+ 	pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
+ 	dev->current_state = pmcsr & PCI_PM_CTRL_STATE_MASK;
+@@ -4927,7 +4932,6 @@ static int pcie_wait_for_link_status(struct pci_dev *pdev,
+ int pcie_retrain_link(struct pci_dev *pdev, bool use_lt)
+ {
+ 	int rc;
+-	u16 lnkctl;
+ 
+ 	/*
+ 	 * Ensure the updated LNKCTL parameters are used during link
+@@ -4939,17 +4943,14 @@ int pcie_retrain_link(struct pci_dev *pdev, bool use_lt)
+ 	if (rc)
+ 		return rc;
+ 
+-	pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &lnkctl);
+-	lnkctl |= PCI_EXP_LNKCTL_RL;
+-	pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnkctl);
++	pcie_capability_set_word(pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_RL);
+ 	if (pdev->clear_retrain_link) {
+ 		/*
+ 		 * Due to an erratum in some devices the Retrain Link bit
+ 		 * needs to be cleared again manually to allow the link
+ 		 * training to succeed.
+ 		 */
+-		lnkctl &= ~PCI_EXP_LNKCTL_RL;
+-		pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnkctl);
++		pcie_capability_clear_word(pdev, PCI_EXP_LNKCTL, PCI_EXP_LNKCTL_RL);
+ 	}
+ 
+ 	return pcie_wait_for_link_status(pdev, use_lt, !use_lt);
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index 3dafba0b5f411..1bf6300592644 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -199,7 +199,7 @@ static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist)
+ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
+ {
+ 	int same_clock = 1;
+-	u16 reg16, parent_reg, child_reg[8];
++	u16 reg16, ccc, parent_old_ccc, child_old_ccc[8];
+ 	struct pci_dev *child, *parent = link->pdev;
+ 	struct pci_bus *linkbus = parent->subordinate;
+ 	/*
+@@ -221,6 +221,7 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
+ 
+ 	/* Port might be already in common clock mode */
+ 	pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16);
++	parent_old_ccc = reg16 & PCI_EXP_LNKCTL_CCC;
+ 	if (same_clock && (reg16 & PCI_EXP_LNKCTL_CCC)) {
+ 		bool consistent = true;
+ 
+@@ -237,34 +238,29 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
+ 		pci_info(parent, "ASPM: current common clock configuration is inconsistent, reconfiguring\n");
+ 	}
+ 
++	ccc = same_clock ? PCI_EXP_LNKCTL_CCC : 0;
+ 	/* Configure downstream component, all functions */
+ 	list_for_each_entry(child, &linkbus->devices, bus_list) {
+ 		pcie_capability_read_word(child, PCI_EXP_LNKCTL, &reg16);
+-		child_reg[PCI_FUNC(child->devfn)] = reg16;
+-		if (same_clock)
+-			reg16 |= PCI_EXP_LNKCTL_CCC;
+-		else
+-			reg16 &= ~PCI_EXP_LNKCTL_CCC;
+-		pcie_capability_write_word(child, PCI_EXP_LNKCTL, reg16);
++		child_old_ccc[PCI_FUNC(child->devfn)] = reg16 & PCI_EXP_LNKCTL_CCC;
++		pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL,
++						   PCI_EXP_LNKCTL_CCC, ccc);
+ 	}
+ 
+ 	/* Configure upstream component */
+-	pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16);
+-	parent_reg = reg16;
+-	if (same_clock)
+-		reg16 |= PCI_EXP_LNKCTL_CCC;
+-	else
+-		reg16 &= ~PCI_EXP_LNKCTL_CCC;
+-	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
++	pcie_capability_clear_and_set_word(parent, PCI_EXP_LNKCTL,
++					   PCI_EXP_LNKCTL_CCC, ccc);
+ 
+ 	if (pcie_retrain_link(link->pdev, true)) {
+ 
+ 		/* Training failed. Restore common clock configurations */
+ 		pci_err(parent, "ASPM: Could not configure common clock\n");
+ 		list_for_each_entry(child, &linkbus->devices, bus_list)
+-			pcie_capability_write_word(child, PCI_EXP_LNKCTL,
+-					   child_reg[PCI_FUNC(child->devfn)]);
+-		pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg);
++			pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL,
++							   PCI_EXP_LNKCTL_CCC,
++							   child_old_ccc[PCI_FUNC(child->devfn)]);
++		pcie_capability_clear_and_set_word(parent, PCI_EXP_LNKCTL,
++						   PCI_EXP_LNKCTL_CCC, parent_old_ccc);
+ 	}
+ }
+ 
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 8bac3ce02609c..24a83cf5ace8c 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -998,6 +998,7 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
+ 		res = window->res;
+ 		if (!res->flags && !res->start && !res->end) {
+ 			release_resource(res);
++			resource_list_destroy_entry(window);
+ 			continue;
+ 		}
+ 
+@@ -2324,6 +2325,7 @@ struct pci_dev *pci_alloc_dev(struct pci_bus *bus)
+ 		.end = -1,
+ 	};
+ 
++	spin_lock_init(&dev->pcie_cap_lock);
+ #ifdef CONFIG_PCI_MSI
+ 	raw_spin_lock_init(&dev->msi_lock);
+ #endif
+diff --git a/drivers/perf/fsl_imx8_ddr_perf.c b/drivers/perf/fsl_imx8_ddr_perf.c
+index 5222ba1e79d0e..c684aab407f86 100644
+--- a/drivers/perf/fsl_imx8_ddr_perf.c
++++ b/drivers/perf/fsl_imx8_ddr_perf.c
+@@ -101,6 +101,7 @@ struct ddr_pmu {
+ 	const struct fsl_ddr_devtype_data *devtype_data;
+ 	int irq;
+ 	int id;
++	int active_counter;
+ };
+ 
+ static ssize_t ddr_perf_identifier_show(struct device *dev,
+@@ -495,6 +496,10 @@ static void ddr_perf_event_start(struct perf_event *event, int flags)
+ 
+ 	ddr_perf_counter_enable(pmu, event->attr.config, counter, true);
+ 
++	if (!pmu->active_counter++)
++		ddr_perf_counter_enable(pmu, EVENT_CYCLES_ID,
++			EVENT_CYCLES_COUNTER, true);
++
+ 	hwc->state = 0;
+ }
+ 
+@@ -548,6 +553,10 @@ static void ddr_perf_event_stop(struct perf_event *event, int flags)
+ 	ddr_perf_counter_enable(pmu, event->attr.config, counter, false);
+ 	ddr_perf_event_update(event);
+ 
++	if (!--pmu->active_counter)
++		ddr_perf_counter_enable(pmu, EVENT_CYCLES_ID,
++			EVENT_CYCLES_COUNTER, false);
++
+ 	hwc->state |= PERF_HES_STOPPED;
+ }
+ 
+@@ -565,25 +574,10 @@ static void ddr_perf_event_del(struct perf_event *event, int flags)
+ 
+ static void ddr_perf_pmu_enable(struct pmu *pmu)
+ {
+-	struct ddr_pmu *ddr_pmu = to_ddr_pmu(pmu);
+-
+-	/* enable cycle counter if cycle is not active event list */
+-	if (ddr_pmu->events[EVENT_CYCLES_COUNTER] == NULL)
+-		ddr_perf_counter_enable(ddr_pmu,
+-				      EVENT_CYCLES_ID,
+-				      EVENT_CYCLES_COUNTER,
+-				      true);
+ }
+ 
+ static void ddr_perf_pmu_disable(struct pmu *pmu)
+ {
+-	struct ddr_pmu *ddr_pmu = to_ddr_pmu(pmu);
+-
+-	if (ddr_pmu->events[EVENT_CYCLES_COUNTER] == NULL)
+-		ddr_perf_counter_enable(ddr_pmu,
+-				      EVENT_CYCLES_ID,
+-				      EVENT_CYCLES_COUNTER,
+-				      false);
+ }
+ 
+ static int ddr_perf_init(struct ddr_pmu *pmu, void __iomem *base,
+diff --git a/drivers/phy/rockchip/phy-rockchip-inno-hdmi.c b/drivers/phy/rockchip/phy-rockchip-inno-hdmi.c
+index 1e1563f5fffc4..fbdc23953b52e 100644
+--- a/drivers/phy/rockchip/phy-rockchip-inno-hdmi.c
++++ b/drivers/phy/rockchip/phy-rockchip-inno-hdmi.c
+@@ -745,10 +745,12 @@ unsigned long inno_hdmi_phy_rk3328_clk_recalc_rate(struct clk_hw *hw,
+ 		do_div(vco, (nd * (no_a == 1 ? no_b : no_a) * no_d * 2));
+ 	}
+ 
+-	inno->pixclock = vco;
+-	dev_dbg(inno->dev, "%s rate %lu\n", __func__, inno->pixclock);
++	inno->pixclock = DIV_ROUND_CLOSEST((unsigned long)vco, 1000) * 1000;
+ 
+-	return vco;
++	dev_dbg(inno->dev, "%s rate %lu vco %llu\n",
++		__func__, inno->pixclock, vco);
++
++	return inno->pixclock;
+ }
+ 
+ static long inno_hdmi_phy_rk3328_clk_round_rate(struct clk_hw *hw,
+@@ -790,8 +792,8 @@ static int inno_hdmi_phy_rk3328_clk_set_rate(struct clk_hw *hw,
+ 			 RK3328_PRE_PLL_POWER_DOWN);
+ 
+ 	/* Configure pre-pll */
+-	inno_update_bits(inno, 0xa0, RK3228_PCLK_VCO_DIV_5_MASK,
+-			 RK3228_PCLK_VCO_DIV_5(cfg->vco_div_5_en));
++	inno_update_bits(inno, 0xa0, RK3328_PCLK_VCO_DIV_5_MASK,
++			 RK3328_PCLK_VCO_DIV_5(cfg->vco_div_5_en));
+ 	inno_write(inno, 0xa1, RK3328_PRE_PLL_PRE_DIV(cfg->prediv));
+ 
+ 	val = RK3328_SPREAD_SPECTRUM_MOD_DISABLE;
+@@ -1021,9 +1023,10 @@ inno_hdmi_phy_rk3328_power_on(struct inno_hdmi_phy *inno,
+ 
+ 	inno_write(inno, 0xac, RK3328_POST_PLL_FB_DIV_7_0(cfg->fbdiv));
+ 	if (cfg->postdiv == 1) {
+-		inno_write(inno, 0xaa, RK3328_POST_PLL_REFCLK_SEL_TMDS);
+ 		inno_write(inno, 0xab, RK3328_POST_PLL_FB_DIV_8(cfg->fbdiv) |
+ 			   RK3328_POST_PLL_PRE_DIV(cfg->prediv));
++		inno_write(inno, 0xaa, RK3328_POST_PLL_REFCLK_SEL_TMDS |
++			   RK3328_POST_PLL_POWER_DOWN);
+ 	} else {
+ 		v = (cfg->postdiv / 2) - 1;
+ 		v &= RK3328_POST_PLL_POST_DIV_MASK;
+@@ -1031,7 +1034,8 @@ inno_hdmi_phy_rk3328_power_on(struct inno_hdmi_phy *inno,
+ 		inno_write(inno, 0xab, RK3328_POST_PLL_FB_DIV_8(cfg->fbdiv) |
+ 			   RK3328_POST_PLL_PRE_DIV(cfg->prediv));
+ 		inno_write(inno, 0xaa, RK3328_POST_PLL_POST_DIV_ENABLE |
+-			   RK3328_POST_PLL_REFCLK_SEL_TMDS);
++			   RK3328_POST_PLL_REFCLK_SEL_TMDS |
++			   RK3328_POST_PLL_POWER_DOWN);
+ 	}
+ 
+ 	for (v = 0; v < 14; v++)
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mt7981.c b/drivers/pinctrl/mediatek/pinctrl-mt7981.c
+index 18abc57800111..0fd2c0c451f95 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mt7981.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mt7981.c
+@@ -457,37 +457,15 @@ static const unsigned int mt7981_pull_type[] = {
+ 	MTK_PULL_PUPD_R1R0_TYPE,/*34*/ MTK_PULL_PUPD_R1R0_TYPE,/*35*/
+ 	MTK_PULL_PUPD_R1R0_TYPE,/*36*/ MTK_PULL_PUPD_R1R0_TYPE,/*37*/
+ 	MTK_PULL_PUPD_R1R0_TYPE,/*38*/ MTK_PULL_PUPD_R1R0_TYPE,/*39*/
+-	MTK_PULL_PUPD_R1R0_TYPE,/*40*/ MTK_PULL_PUPD_R1R0_TYPE,/*41*/
+-	MTK_PULL_PUPD_R1R0_TYPE,/*42*/ MTK_PULL_PUPD_R1R0_TYPE,/*43*/
+-	MTK_PULL_PUPD_R1R0_TYPE,/*44*/ MTK_PULL_PUPD_R1R0_TYPE,/*45*/
+-	MTK_PULL_PUPD_R1R0_TYPE,/*46*/ MTK_PULL_PUPD_R1R0_TYPE,/*47*/
+-	MTK_PULL_PUPD_R1R0_TYPE,/*48*/ MTK_PULL_PUPD_R1R0_TYPE,/*49*/
+-	MTK_PULL_PUPD_R1R0_TYPE,/*50*/ MTK_PULL_PUPD_R1R0_TYPE,/*51*/
+-	MTK_PULL_PUPD_R1R0_TYPE,/*52*/ MTK_PULL_PUPD_R1R0_TYPE,/*53*/
+-	MTK_PULL_PUPD_R1R0_TYPE,/*54*/ MTK_PULL_PUPD_R1R0_TYPE,/*55*/
+-	MTK_PULL_PUPD_R1R0_TYPE,/*56*/ MTK_PULL_PUPD_R1R0_TYPE,/*57*/
+-	MTK_PULL_PUPD_R1R0_TYPE,/*58*/ MTK_PULL_PUPD_R1R0_TYPE,/*59*/
+-	MTK_PULL_PUPD_R1R0_TYPE,/*60*/ MTK_PULL_PUPD_R1R0_TYPE,/*61*/
+-	MTK_PULL_PUPD_R1R0_TYPE,/*62*/ MTK_PULL_PUPD_R1R0_TYPE,/*63*/
+-	MTK_PULL_PUPD_R1R0_TYPE,/*64*/ MTK_PULL_PUPD_R1R0_TYPE,/*65*/
+-	MTK_PULL_PUPD_R1R0_TYPE,/*66*/ MTK_PULL_PUPD_R1R0_TYPE,/*67*/
+-	MTK_PULL_PUPD_R1R0_TYPE,/*68*/ MTK_PULL_PU_PD_TYPE,/*69*/
+-	MTK_PULL_PU_PD_TYPE,/*70*/ MTK_PULL_PU_PD_TYPE,/*71*/
+-	MTK_PULL_PU_PD_TYPE,/*72*/ MTK_PULL_PU_PD_TYPE,/*73*/
+-	MTK_PULL_PU_PD_TYPE,/*74*/ MTK_PULL_PU_PD_TYPE,/*75*/
+-	MTK_PULL_PU_PD_TYPE,/*76*/ MTK_PULL_PU_PD_TYPE,/*77*/
+-	MTK_PULL_PU_PD_TYPE,/*78*/ MTK_PULL_PU_PD_TYPE,/*79*/
+-	MTK_PULL_PU_PD_TYPE,/*80*/ MTK_PULL_PU_PD_TYPE,/*81*/
+-	MTK_PULL_PU_PD_TYPE,/*82*/ MTK_PULL_PU_PD_TYPE,/*83*/
+-	MTK_PULL_PU_PD_TYPE,/*84*/ MTK_PULL_PU_PD_TYPE,/*85*/
+-	MTK_PULL_PU_PD_TYPE,/*86*/ MTK_PULL_PU_PD_TYPE,/*87*/
+-	MTK_PULL_PU_PD_TYPE,/*88*/ MTK_PULL_PU_PD_TYPE,/*89*/
+-	MTK_PULL_PU_PD_TYPE,/*90*/ MTK_PULL_PU_PD_TYPE,/*91*/
+-	MTK_PULL_PU_PD_TYPE,/*92*/ MTK_PULL_PU_PD_TYPE,/*93*/
+-	MTK_PULL_PU_PD_TYPE,/*94*/ MTK_PULL_PU_PD_TYPE,/*95*/
+-	MTK_PULL_PU_PD_TYPE,/*96*/ MTK_PULL_PU_PD_TYPE,/*97*/
+-	MTK_PULL_PU_PD_TYPE,/*98*/ MTK_PULL_PU_PD_TYPE,/*99*/
+-	MTK_PULL_PU_PD_TYPE,/*100*/
++	MTK_PULL_PU_PD_TYPE,/*40*/ MTK_PULL_PU_PD_TYPE,/*41*/
++	MTK_PULL_PU_PD_TYPE,/*42*/ MTK_PULL_PU_PD_TYPE,/*43*/
++	MTK_PULL_PU_PD_TYPE,/*44*/ MTK_PULL_PU_PD_TYPE,/*45*/
++	MTK_PULL_PU_PD_TYPE,/*46*/ MTK_PULL_PU_PD_TYPE,/*47*/
++	MTK_PULL_PU_PD_TYPE,/*48*/ MTK_PULL_PU_PD_TYPE,/*49*/
++	MTK_PULL_PU_PD_TYPE,/*50*/ MTK_PULL_PU_PD_TYPE,/*51*/
++	MTK_PULL_PU_PD_TYPE,/*52*/ MTK_PULL_PU_PD_TYPE,/*53*/
++	MTK_PULL_PU_PD_TYPE,/*54*/ MTK_PULL_PU_PD_TYPE,/*55*/
++	MTK_PULL_PU_PD_TYPE,/*56*/
+ };
+ 
+ static const struct mtk_pin_reg_calc mt7981_reg_cals[] = {
+@@ -1014,6 +992,10 @@ static struct mtk_pin_soc mt7981_data = {
+ 	.ies_present = false,
+ 	.base_names = mt7981_pinctrl_register_base_names,
+ 	.nbase_names = ARRAY_SIZE(mt7981_pinctrl_register_base_names),
++	.bias_disable_set = mtk_pinconf_bias_disable_set,
++	.bias_disable_get = mtk_pinconf_bias_disable_get,
++	.bias_set = mtk_pinconf_bias_set,
++	.bias_get = mtk_pinconf_bias_get,
+ 	.pull_type = mt7981_pull_type,
+ 	.bias_set_combo = mtk_pinconf_bias_set_combo,
+ 	.bias_get_combo = mtk_pinconf_bias_get_combo,
+diff --git a/drivers/pinctrl/mediatek/pinctrl-mt7986.c b/drivers/pinctrl/mediatek/pinctrl-mt7986.c
+index aa0ccd67f4f4e..acaac9b38aa8a 100644
+--- a/drivers/pinctrl/mediatek/pinctrl-mt7986.c
++++ b/drivers/pinctrl/mediatek/pinctrl-mt7986.c
+@@ -922,6 +922,10 @@ static struct mtk_pin_soc mt7986a_data = {
+ 	.ies_present = false,
+ 	.base_names = mt7986_pinctrl_register_base_names,
+ 	.nbase_names = ARRAY_SIZE(mt7986_pinctrl_register_base_names),
++	.bias_disable_set = mtk_pinconf_bias_disable_set,
++	.bias_disable_get = mtk_pinconf_bias_disable_get,
++	.bias_set = mtk_pinconf_bias_set,
++	.bias_get = mtk_pinconf_bias_get,
+ 	.pull_type = mt7986_pull_type,
+ 	.bias_set_combo = mtk_pinconf_bias_set_combo,
+ 	.bias_get_combo = mtk_pinconf_bias_get_combo,
+@@ -944,6 +948,10 @@ static struct mtk_pin_soc mt7986b_data = {
+ 	.ies_present = false,
+ 	.base_names = mt7986_pinctrl_register_base_names,
+ 	.nbase_names = ARRAY_SIZE(mt7986_pinctrl_register_base_names),
++	.bias_disable_set = mtk_pinconf_bias_disable_set,
++	.bias_disable_get = mtk_pinconf_bias_disable_get,
++	.bias_set = mtk_pinconf_bias_set,
++	.bias_get = mtk_pinconf_bias_get,
+ 	.pull_type = mt7986_pull_type,
+ 	.bias_set_combo = mtk_pinconf_bias_set_combo,
+ 	.bias_get_combo = mtk_pinconf_bias_get_combo,
+diff --git a/drivers/pinctrl/pinctrl-mcp23s08_spi.c b/drivers/pinctrl/pinctrl-mcp23s08_spi.c
+index 9ae10318f6f35..ea059b9c5542e 100644
+--- a/drivers/pinctrl/pinctrl-mcp23s08_spi.c
++++ b/drivers/pinctrl/pinctrl-mcp23s08_spi.c
+@@ -91,18 +91,28 @@ static int mcp23s08_spi_regmap_init(struct mcp23s08 *mcp, struct device *dev,
+ 		mcp->reg_shift = 0;
+ 		mcp->chip.ngpio = 8;
+ 		mcp->chip.label = devm_kasprintf(dev, GFP_KERNEL, "mcp23s08.%d", addr);
++		if (!mcp->chip.label)
++			return -ENOMEM;
+ 
+ 		config = &mcp23x08_regmap;
+ 		name = devm_kasprintf(dev, GFP_KERNEL, "%d", addr);
++		if (!name)
++			return -ENOMEM;
++
+ 		break;
+ 
+ 	case MCP_TYPE_S17:
+ 		mcp->reg_shift = 1;
+ 		mcp->chip.ngpio = 16;
+ 		mcp->chip.label = devm_kasprintf(dev, GFP_KERNEL, "mcp23s17.%d", addr);
++		if (!mcp->chip.label)
++			return -ENOMEM;
+ 
+ 		config = &mcp23x17_regmap;
+ 		name = devm_kasprintf(dev, GFP_KERNEL, "%d", addr);
++		if (!name)
++			return -ENOMEM;
++
+ 		break;
+ 
+ 	case MCP_TYPE_S18:
+diff --git a/drivers/platform/chrome/chromeos_acpi.c b/drivers/platform/chrome/chromeos_acpi.c
+index 50d8a4d4352d6..1312aaaa8750b 100644
+--- a/drivers/platform/chrome/chromeos_acpi.c
++++ b/drivers/platform/chrome/chromeos_acpi.c
+@@ -90,7 +90,36 @@ static int chromeos_acpi_handle_package(struct device *dev, union acpi_object *o
+ 	case ACPI_TYPE_STRING:
+ 		return sysfs_emit(buf, "%s\n", element->string.pointer);
+ 	case ACPI_TYPE_BUFFER:
+-		return sysfs_emit(buf, "%s\n", element->buffer.pointer);
++		{
++			int i, r, at, room_left;
++			const int byte_per_line = 16;
++
++			at = 0;
++			room_left = PAGE_SIZE - 1;
++			for (i = 0; i < element->buffer.length && room_left; i += byte_per_line) {
++				r = hex_dump_to_buffer(element->buffer.pointer + i,
++						       element->buffer.length - i,
++						       byte_per_line, 1, buf + at, room_left,
++						       false);
++				if (r > room_left)
++					goto truncating;
++				at += r;
++				room_left -= r;
++
++				r = sysfs_emit_at(buf, at, "\n");
++				if (!r)
++					goto truncating;
++				at += r;
++				room_left -= r;
++			}
++
++			buf[at] = 0;
++			return at;
++truncating:
++			dev_info_once(dev, "truncating sysfs content for %s\n", name);
++			sysfs_emit_at(buf, PAGE_SIZE - 4, "..\n");
++			return PAGE_SIZE - 1;
++		}
+ 	default:
+ 		dev_err(dev, "element type %d not supported\n", element->type);
+ 		return -EINVAL;
+diff --git a/drivers/platform/x86/amd/pmf/core.c b/drivers/platform/x86/amd/pmf/core.c
+index 57bf1a9f0e766..78ed3ee22555d 100644
+--- a/drivers/platform/x86/amd/pmf/core.c
++++ b/drivers/platform/x86/amd/pmf/core.c
+@@ -324,7 +324,8 @@ static void amd_pmf_init_features(struct amd_pmf_dev *dev)
+ 
+ static void amd_pmf_deinit_features(struct amd_pmf_dev *dev)
+ {
+-	if (is_apmf_func_supported(dev, APMF_FUNC_STATIC_SLIDER_GRANULAR)) {
++	if (is_apmf_func_supported(dev, APMF_FUNC_STATIC_SLIDER_GRANULAR) ||
++	    is_apmf_func_supported(dev, APMF_FUNC_OS_POWER_SLIDER_UPDATE)) {
+ 		power_supply_unreg_notifier(&dev->pwr_src_notifier);
+ 		amd_pmf_deinit_sps(dev);
+ 	}
+diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
+index b68dd11cb8924..b929b4f824205 100644
+--- a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
++++ b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c
+@@ -393,6 +393,7 @@ static int init_bios_attributes(int attr_type, const char *guid)
+ 	struct kobject *attr_name_kobj; //individual attribute names
+ 	union acpi_object *obj = NULL;
+ 	union acpi_object *elements;
++	struct kobject *duplicate;
+ 	struct kset *tmp_set;
+ 	int min_elements;
+ 
+@@ -451,9 +452,11 @@ static int init_bios_attributes(int attr_type, const char *guid)
+ 		else
+ 			tmp_set = wmi_priv.main_dir_kset;
+ 
+-		if (kset_find_obj(tmp_set, elements[ATTR_NAME].string.pointer)) {
+-			pr_debug("duplicate attribute name found - %s\n",
+-				elements[ATTR_NAME].string.pointer);
++		duplicate = kset_find_obj(tmp_set, elements[ATTR_NAME].string.pointer);
++		if (duplicate) {
++			pr_debug("Duplicate attribute name found - %s\n",
++				 elements[ATTR_NAME].string.pointer);
++			kobject_put(duplicate);
+ 			goto nextobj;
+ 		}
+ 
+diff --git a/drivers/power/supply/qcom_pmi8998_charger.c b/drivers/power/supply/qcom_pmi8998_charger.c
+index d16c5ee172496..cac89d233c388 100644
+--- a/drivers/power/supply/qcom_pmi8998_charger.c
++++ b/drivers/power/supply/qcom_pmi8998_charger.c
+@@ -556,7 +556,8 @@ static int smb2_set_current_limit(struct smb2_chip *chip, unsigned int val)
+ static void smb2_status_change_work(struct work_struct *work)
+ {
+ 	unsigned int charger_type, current_ua;
+-	int usb_online, count, rc;
++	int usb_online = 0;
++	int count, rc;
+ 	struct smb2_chip *chip;
+ 
+ 	chip = container_of(work, struct smb2_chip, status_change_work.work);
+diff --git a/drivers/powercap/arm_scmi_powercap.c b/drivers/powercap/arm_scmi_powercap.c
+index 5231f6d52ae3a..a081f177e702e 100644
+--- a/drivers/powercap/arm_scmi_powercap.c
++++ b/drivers/powercap/arm_scmi_powercap.c
+@@ -12,6 +12,7 @@
+ #include <linux/module.h>
+ #include <linux/powercap.h>
+ #include <linux/scmi_protocol.h>
++#include <linux/slab.h>
+ 
+ #define to_scmi_powercap_zone(z)		\
+ 	container_of(z, struct scmi_powercap_zone, zone)
+@@ -19,6 +20,8 @@
+ static const struct scmi_powercap_proto_ops *powercap_ops;
+ 
+ struct scmi_powercap_zone {
++	bool registered;
++	bool invalid;
+ 	unsigned int height;
+ 	struct device *dev;
+ 	struct scmi_protocol_handle *ph;
+@@ -32,6 +35,7 @@ struct scmi_powercap_root {
+ 	unsigned int num_zones;
+ 	struct scmi_powercap_zone *spzones;
+ 	struct list_head *registered_zones;
++	struct list_head scmi_zones;
+ };
+ 
+ static struct powercap_control_type *scmi_top_pcntrl;
+@@ -271,12 +275,6 @@ static void scmi_powercap_unregister_all_zones(struct scmi_powercap_root *pr)
+ 	}
+ }
+ 
+-static inline bool
+-scmi_powercap_is_zone_registered(struct scmi_powercap_zone *spz)
+-{
+-	return !list_empty(&spz->node);
+-}
+-
+ static inline unsigned int
+ scmi_powercap_get_zone_height(struct scmi_powercap_zone *spz)
+ {
+@@ -295,11 +293,46 @@ scmi_powercap_get_parent_zone(struct scmi_powercap_zone *spz)
+ 	return &spz->spzones[spz->info->parent_id];
+ }
+ 
++static int scmi_powercap_register_zone(struct scmi_powercap_root *pr,
++				       struct scmi_powercap_zone *spz,
++				       struct scmi_powercap_zone *parent)
++{
++	int ret = 0;
++	struct powercap_zone *z;
++
++	if (spz->invalid) {
++		list_del(&spz->node);
++		return -EINVAL;
++	}
++
++	z = powercap_register_zone(&spz->zone, scmi_top_pcntrl, spz->info->name,
++				   parent ? &parent->zone : NULL,
++				   &zone_ops, 1, &constraint_ops);
++	if (!IS_ERR(z)) {
++		spz->height = scmi_powercap_get_zone_height(spz);
++		spz->registered = true;
++		list_move(&spz->node, &pr->registered_zones[spz->height]);
++		dev_dbg(spz->dev, "Registered node %s - parent %s - height:%d\n",
++			spz->info->name, parent ? parent->info->name : "ROOT",
++			spz->height);
++	} else {
++		list_del(&spz->node);
++		ret = PTR_ERR(z);
++		dev_err(spz->dev,
++			"Error registering node:%s - parent:%s - h:%d - ret:%d\n",
++			spz->info->name,
++			parent ? parent->info->name : "ROOT",
++			spz->height, ret);
++	}
++
++	return ret;
++}
++
+ /**
+- * scmi_powercap_register_zone  - Register an SCMI powercap zone recursively
++ * scmi_zones_register- Register SCMI powercap zones starting from parent zones
+  *
++ * @dev: A reference to the SCMI device
+  * @pr: A reference to the root powercap zones descriptors
+- * @spz: A reference to the SCMI powercap zone to register
+  *
+  * When registering SCMI powercap zones with the powercap framework we should
+  * take care to always register zones starting from the root ones and to
+@@ -309,10 +342,10 @@ scmi_powercap_get_parent_zone(struct scmi_powercap_zone *spz)
+  * zones provided by the SCMI platform firmware is built to comply with such
+  * requirement.
+  *
+- * This function, given an SCMI powercap zone to register, takes care to walk
+- * the SCMI powercap zones tree up to the root looking recursively for
+- * unregistered parent zones before registering the provided zone; at the same
+- * time each registered zone height in such a tree is accounted for and each
++ * This function, given the set of SCMI powercap zones to register, takes care
++ * to walk the SCMI powercap zones trees up to the root registering any
++ * unregistered parent zone before registering the child zones; at the same
++ * time each registered-zone height in such a tree is accounted for and each
+  * zone, once registered, is stored in the @registered_zones array that is
+  * indexed by zone height: this way will be trivial, at unregister time, to walk
+  * the @registered_zones array backward and unregister all the zones starting
+@@ -330,57 +363,55 @@ scmi_powercap_get_parent_zone(struct scmi_powercap_zone *spz)
+  *
+  * Return: 0 on Success
+  */
+-static int scmi_powercap_register_zone(struct scmi_powercap_root *pr,
+-				       struct scmi_powercap_zone *spz)
++static int scmi_zones_register(struct device *dev,
++			       struct scmi_powercap_root *pr)
+ {
+ 	int ret = 0;
+-	struct scmi_powercap_zone *parent;
+-
+-	if (!spz->info)
+-		return ret;
++	unsigned int sp = 0, reg_zones = 0;
++	struct scmi_powercap_zone *spz, **zones_stack;
+ 
+-	parent = scmi_powercap_get_parent_zone(spz);
+-	if (parent && !scmi_powercap_is_zone_registered(parent)) {
+-		/*
+-		 * Bail out if a parent domain was marked as unsupported:
+-		 * only domains participating as leaves can be skipped.
+-		 */
+-		if (!parent->info)
+-			return -ENODEV;
++	zones_stack = kcalloc(pr->num_zones, sizeof(spz), GFP_KERNEL);
++	if (!zones_stack)
++		return -ENOMEM;
+ 
+-		ret = scmi_powercap_register_zone(pr, parent);
+-		if (ret)
+-			return ret;
+-	}
++	spz = list_first_entry_or_null(&pr->scmi_zones,
++				       struct scmi_powercap_zone, node);
++	while (spz) {
++		struct scmi_powercap_zone *parent;
+ 
+-	if (!scmi_powercap_is_zone_registered(spz)) {
+-		struct powercap_zone *z;
+-
+-		z = powercap_register_zone(&spz->zone,
+-					   scmi_top_pcntrl,
+-					   spz->info->name,
+-					   parent ? &parent->zone : NULL,
+-					   &zone_ops, 1, &constraint_ops);
+-		if (!IS_ERR(z)) {
+-			spz->height = scmi_powercap_get_zone_height(spz);
+-			list_add(&spz->node,
+-				 &pr->registered_zones[spz->height]);
+-			dev_dbg(spz->dev,
+-				"Registered node %s - parent %s - height:%d\n",
+-				spz->info->name,
+-				parent ? parent->info->name : "ROOT",
+-				spz->height);
+-			ret = 0;
++		parent = scmi_powercap_get_parent_zone(spz);
++		if (parent && !parent->registered) {
++			zones_stack[sp++] = spz;
++			spz = parent;
+ 		} else {
+-			ret = PTR_ERR(z);
+-			dev_err(spz->dev,
+-				"Error registering node:%s - parent:%s - h:%d - ret:%d\n",
+-				 spz->info->name,
+-				 parent ? parent->info->name : "ROOT",
+-				 spz->height, ret);
++			ret = scmi_powercap_register_zone(pr, spz, parent);
++			if (!ret) {
++				reg_zones++;
++			} else if (sp) {
++				/* Failed to register a non-leaf zone.
++				 * Bail-out.
++				 */
++				dev_err(dev,
++					"Failed to register non-leaf zone - ret:%d\n",
++					ret);
++				scmi_powercap_unregister_all_zones(pr);
++				reg_zones = 0;
++				goto out;
++			}
++			/* Pick next zone to process */
++			if (sp)
++				spz = zones_stack[--sp];
++			else
++				spz = list_first_entry_or_null(&pr->scmi_zones,
++							       struct scmi_powercap_zone,
++							       node);
+ 		}
+ 	}
+ 
++out:
++	kfree(zones_stack);
++	dev_info(dev, "Registered %d SCMI Powercap domains !\n", reg_zones);
++
+ 	return ret;
+ }
+ 
+@@ -424,6 +455,8 @@ static int scmi_powercap_probe(struct scmi_device *sdev)
+ 	if (!pr->registered_zones)
+ 		return -ENOMEM;
+ 
++	INIT_LIST_HEAD(&pr->scmi_zones);
++
+ 	for (i = 0, spz = pr->spzones; i < pr->num_zones; i++, spz++) {
+ 		/*
+ 		 * Powercap domains are validated by the protocol layer, i.e.
+@@ -438,6 +471,7 @@ static int scmi_powercap_probe(struct scmi_device *sdev)
+ 		INIT_LIST_HEAD(&spz->node);
+ 		INIT_LIST_HEAD(&pr->registered_zones[i]);
+ 
++		list_add_tail(&spz->node, &pr->scmi_zones);
+ 		/*
+ 		 * Forcibly skip powercap domains using an abstract scale.
+ 		 * Note that only leaves domains can be skipped, so this could
+@@ -448,7 +482,7 @@ static int scmi_powercap_probe(struct scmi_device *sdev)
+ 			dev_warn(dev,
+ 				 "Abstract power scale not supported. Skip %s.\n",
+ 				 spz->info->name);
+-			spz->info = NULL;
++			spz->invalid = true;
+ 			continue;
+ 		}
+ 	}
+@@ -457,21 +491,12 @@ static int scmi_powercap_probe(struct scmi_device *sdev)
+ 	 * Scan array of retrieved SCMI powercap domains and register them
+ 	 * recursively starting from the root domains.
+ 	 */
+-	for (i = 0, spz = pr->spzones; i < pr->num_zones; i++, spz++) {
+-		ret = scmi_powercap_register_zone(pr, spz);
+-		if (ret) {
+-			dev_err(dev,
+-				"Failed to register powercap zone %s - ret:%d\n",
+-				spz->info->name, ret);
+-			scmi_powercap_unregister_all_zones(pr);
+-			return ret;
+-		}
+-	}
++	ret = scmi_zones_register(dev, pr);
++	if (ret)
++		return ret;
+ 
+ 	dev_set_drvdata(dev, pr);
+ 
+-	dev_info(dev, "Registered %d SCMI Powercap domains !\n", pr->num_zones);
+-
+ 	return ret;
+ }
+ 
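
The kernel-doc above describes the reworked traversal: instead of recursing from each zone up to the root, the driver now keeps a pending list and an explicit stack so that every parent is registered before any of its children. A minimal userspace sketch of that walk, with hypothetical zone/register_one types standing in for the SCMI structures and the powercap framework:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for one powercap zone: parent < 0 means root. */
    struct zone {
        const char *name;
        int parent;        /* index into the zones array, or -1 */
        int registered;
    };

    /* Pretend registration with the framework; parent is already registered. */
    static void register_one(struct zone *z, const struct zone *parent)
    {
        z->registered = 1;
        printf("registered %s (parent %s)\n",
               z->name, parent ? parent->name : "ROOT");
    }

    int main(void)
    {
        /* Small tree: soc is the root, two clusters below it, one cpu below cluster0. */
        struct zone zones[] = {
            { "cpu0",     2, 0 },
            { "cluster1", 3, 0 },
            { "cluster0", 3, 0 },
            { "soc",     -1, 0 },
        };
        size_t n = sizeof(zones) / sizeof(zones[0]);
        struct zone **stack = calloc(n, sizeof(*stack));
        size_t sp = 0, i = 0;

        if (!stack)
            return 1;

        while (i < n) {
            struct zone *z = &zones[i];

            if (z->registered) {    /* already handled as someone's parent */
                i++;
                continue;
            }
            /* Climb towards the root, remembering the path on the stack. */
            while (z->parent >= 0 && !zones[z->parent].registered) {
                stack[sp++] = z;
                z = &zones[z->parent];
            }
            /* Register the deepest unregistered ancestor, then unwind. */
            register_one(z, z->parent >= 0 ? &zones[z->parent] : NULL);
            while (sp) {
                z = stack[--sp];
                register_one(z, &zones[z->parent]);
            }
            i++;
        }
        free(stack);
        return 0;
    }

The kernel version additionally drops zones flagged as invalid and aborts the whole scan if a non-leaf zone fails to register.
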
+diff --git a/drivers/powercap/intel_rapl_common.c b/drivers/powercap/intel_rapl_common.c
+index 8fac57b28f8a3..e618ed5aa8caa 100644
+--- a/drivers/powercap/intel_rapl_common.c
++++ b/drivers/powercap/intel_rapl_common.c
+@@ -658,8 +658,6 @@ static struct rapl_primitive_info rpi_msr[NR_RAPL_PRIMITIVES] = {
+ 			    RAPL_DOMAIN_REG_LIMIT, ARBITRARY_UNIT, 0),
+ 	[PL2_CLAMP] = PRIMITIVE_INFO_INIT(PL2_CLAMP, POWER_LIMIT2_CLAMP, 48,
+ 			    RAPL_DOMAIN_REG_LIMIT, ARBITRARY_UNIT, 0),
+-	[PL4_ENABLE] = PRIMITIVE_INFO_INIT(PL4_ENABLE, POWER_LIMIT4_MASK, 0,
+-				RAPL_DOMAIN_REG_PL4, ARBITRARY_UNIT, 0),
+ 	[TIME_WINDOW1] = PRIMITIVE_INFO_INIT(TIME_WINDOW1, TIME_WINDOW1_MASK, 17,
+ 			    RAPL_DOMAIN_REG_LIMIT, TIME_UNIT, 0),
+ 	[TIME_WINDOW2] = PRIMITIVE_INFO_INIT(TIME_WINDOW2, TIME_WINDOW2_MASK, 49,
+@@ -1458,7 +1456,7 @@ static void rapl_detect_powerlimit(struct rapl_domain *rd)
+ 			}
+ 		}
+ 
+-		if (rapl_read_pl_data(rd, i, PL_ENABLE, false, &val64))
++		if (rapl_read_pl_data(rd, i, PL_LIMIT, false, &val64))
+ 			rd->rpl[i].name = NULL;
+ 	}
+ }
+diff --git a/drivers/pwm/Kconfig b/drivers/pwm/Kconfig
+index 6210babb0741a..8ebcddf91f7b7 100644
+--- a/drivers/pwm/Kconfig
++++ b/drivers/pwm/Kconfig
+@@ -505,7 +505,7 @@ config PWM_ROCKCHIP
+ 
+ config PWM_RZ_MTU3
+ 	tristate "Renesas RZ/G2L MTU3a PWM Timer support"
+-	depends on RZ_MTU3 || COMPILE_TEST
++	depends on RZ_MTU3
+ 	depends on HAS_IOMEM
+ 	help
+ 	  This driver exposes the MTU3a PWM Timer controller found in Renesas
+diff --git a/drivers/remoteproc/stm32_rproc.c b/drivers/remoteproc/stm32_rproc.c
+index cf073bac79f73..48dca3503fa45 100644
+--- a/drivers/remoteproc/stm32_rproc.c
++++ b/drivers/remoteproc/stm32_rproc.c
+@@ -921,7 +921,7 @@ static void stm32_rproc_remove(struct platform_device *pdev)
+ 	rproc_free(rproc);
+ }
+ 
+-static int __maybe_unused stm32_rproc_suspend(struct device *dev)
++static int stm32_rproc_suspend(struct device *dev)
+ {
+ 	struct rproc *rproc = dev_get_drvdata(dev);
+ 	struct stm32_rproc *ddata = rproc->priv;
+@@ -932,7 +932,7 @@ static int __maybe_unused stm32_rproc_suspend(struct device *dev)
+ 	return 0;
+ }
+ 
+-static int __maybe_unused stm32_rproc_resume(struct device *dev)
++static int stm32_rproc_resume(struct device *dev)
+ {
+ 	struct rproc *rproc = dev_get_drvdata(dev);
+ 	struct stm32_rproc *ddata = rproc->priv;
+@@ -943,16 +943,16 @@ static int __maybe_unused stm32_rproc_resume(struct device *dev)
+ 	return 0;
+ }
+ 
+-static SIMPLE_DEV_PM_OPS(stm32_rproc_pm_ops,
+-			 stm32_rproc_suspend, stm32_rproc_resume);
++static DEFINE_SIMPLE_DEV_PM_OPS(stm32_rproc_pm_ops,
++				stm32_rproc_suspend, stm32_rproc_resume);
+ 
+ static struct platform_driver stm32_rproc_driver = {
+ 	.probe = stm32_rproc_probe,
+ 	.remove_new = stm32_rproc_remove,
+ 	.driver = {
+ 		.name = "stm32-rproc",
+-		.pm = &stm32_rproc_pm_ops,
+-		.of_match_table = of_match_ptr(stm32_rproc_match),
++		.pm = pm_ptr(&stm32_rproc_pm_ops),
++		.of_match_table = stm32_rproc_match,
+ 	},
+ };
+ module_platform_driver(stm32_rproc_driver);
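
For background, the DEFINE_SIMPLE_DEV_PM_OPS()/pm_ptr() pairing used above is what lets the suspend/resume callbacks lose their __maybe_unused markers: the ops table always references the callbacks (so no unused-function warnings), and pm_ptr() evaluates to NULL when CONFIG_PM is disabled so the unused ops can be discarded. A skeletal sketch of the pattern under that assumption; the my_drv names are placeholders, not part of the stm32 driver:

    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/pm.h>

    static int my_drv_suspend(struct device *dev)
    {
        /* quiesce the hardware */
        return 0;
    }

    static int my_drv_resume(struct device *dev)
    {
        /* bring the hardware back up */
        return 0;
    }

    /* No __maybe_unused needed: the callbacks stay referenced even for CONFIG_PM=n. */
    static DEFINE_SIMPLE_DEV_PM_OPS(my_drv_pm_ops, my_drv_suspend, my_drv_resume);

    static int my_drv_probe(struct platform_device *pdev)
    {
        return 0;
    }

    static struct platform_driver my_drv = {
        .probe = my_drv_probe,
        .driver = {
            .name = "my-drv",
            .pm = pm_ptr(&my_drv_pm_ops),    /* NULL when CONFIG_PM is disabled */
        },
    };
    module_platform_driver(my_drv);
    MODULE_LICENSE("GPL");
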
+diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
+index 1beb40a1d3df2..e4015db99899d 100644
+--- a/drivers/rpmsg/qcom_glink_native.c
++++ b/drivers/rpmsg/qcom_glink_native.c
+@@ -221,6 +221,10 @@ static struct glink_channel *qcom_glink_alloc_channel(struct qcom_glink *glink,
+ 
+ 	channel->glink = glink;
+ 	channel->name = kstrdup(name, GFP_KERNEL);
++	if (!channel->name) {
++		kfree(channel);
++		return ERR_PTR(-ENOMEM);
++	}
+ 
+ 	init_completion(&channel->open_req);
+ 	init_completion(&channel->open_ack);
+diff --git a/drivers/s390/block/dasd_devmap.c b/drivers/s390/block/dasd_devmap.c
+index 620fab01b710b..c4e36650c4264 100644
+--- a/drivers/s390/block/dasd_devmap.c
++++ b/drivers/s390/block/dasd_devmap.c
+@@ -1378,16 +1378,12 @@ static ssize_t dasd_vendor_show(struct device *dev,
+ 
+ static DEVICE_ATTR(vendor, 0444, dasd_vendor_show, NULL);
+ 
+-#define UID_STRLEN ( /* vendor */ 3 + 1 + /* serial    */ 14 + 1 +\
+-		     /* SSID   */ 4 + 1 + /* unit addr */ 2 + 1 +\
+-		     /* vduit */ 32 + 1)
+-
+ static ssize_t
+ dasd_uid_show(struct device *dev, struct device_attribute *attr, char *buf)
+ {
++	char uid_string[DASD_UID_STRLEN];
+ 	struct dasd_device *device;
+ 	struct dasd_uid uid;
+-	char uid_string[UID_STRLEN];
+ 	char ua_string[3];
+ 
+ 	device = dasd_device_from_cdev(to_ccwdev(dev));
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index 8587e423169ec..bd89b032968a4 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -1079,12 +1079,12 @@ static void dasd_eckd_get_uid_string(struct dasd_conf *conf,
+ 
+ 	create_uid(conf, &uid);
+ 	if (strlen(uid.vduit) > 0)
+-		snprintf(print_uid, sizeof(*print_uid),
++		snprintf(print_uid, DASD_UID_STRLEN,
+ 			 "%s.%s.%04x.%02x.%s",
+ 			 uid.vendor, uid.serial, uid.ssid,
+ 			 uid.real_unit_addr, uid.vduit);
+ 	else
+-		snprintf(print_uid, sizeof(*print_uid),
++		snprintf(print_uid, DASD_UID_STRLEN,
+ 			 "%s.%s.%04x.%02x",
+ 			 uid.vendor, uid.serial, uid.ssid,
+ 			 uid.real_unit_addr);
+@@ -1093,8 +1093,8 @@ static void dasd_eckd_get_uid_string(struct dasd_conf *conf,
+ static int dasd_eckd_check_cabling(struct dasd_device *device,
+ 				   void *conf_data, __u8 lpm)
+ {
++	char print_path_uid[DASD_UID_STRLEN], print_device_uid[DASD_UID_STRLEN];
+ 	struct dasd_eckd_private *private = device->private;
+-	char print_path_uid[60], print_device_uid[60];
+ 	struct dasd_conf path_conf;
+ 
+ 	path_conf.data = conf_data;
+@@ -1293,9 +1293,9 @@ static void dasd_eckd_path_available_action(struct dasd_device *device,
+ 	__u8 path_rcd_buf[DASD_ECKD_RCD_DATA_SIZE];
+ 	__u8 lpm, opm, npm, ppm, epm, hpfpm, cablepm;
+ 	struct dasd_conf_data *conf_data;
++	char print_uid[DASD_UID_STRLEN];
+ 	struct dasd_conf path_conf;
+ 	unsigned long flags;
+-	char print_uid[60];
+ 	int rc, pos;
+ 
+ 	opm = 0;
+@@ -5855,8 +5855,8 @@ static void dasd_eckd_dump_sense(struct dasd_device *device,
+ static int dasd_eckd_reload_device(struct dasd_device *device)
+ {
+ 	struct dasd_eckd_private *private = device->private;
++	char print_uid[DASD_UID_STRLEN];
+ 	int rc, old_base;
+-	char print_uid[60];
+ 	struct dasd_uid uid;
+ 	unsigned long flags;
+ 
+diff --git a/drivers/s390/block/dasd_int.h b/drivers/s390/block/dasd_int.h
+index 0aa56351da720..8a4dbe9d77411 100644
+--- a/drivers/s390/block/dasd_int.h
++++ b/drivers/s390/block/dasd_int.h
+@@ -259,6 +259,10 @@ struct dasd_uid {
+ 	char vduit[33];
+ };
+ 
++#define DASD_UID_STRLEN ( /* vendor */ 3 + 1 + /* serial    */ 14 + 1 +	\
++			  /* SSID   */ 4 + 1 + /* unit addr */ 2 + 1 +	\
++			  /* vduit */ 32 + 1)
++
+ /*
+  * PPRC Status data
+  */
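
The snprintf() fixes above matter because print_uid reaches dasd_eckd_get_uid_string() as a plain char * parameter, so sizeof(*print_uid) is 1 and the formatted UID was being truncated to an empty string; the shared DASD_UID_STRLEN constant now carries the real buffer size. A small userspace illustration of the pitfall (the UID fields are invented):

    #include <stdio.h>

    #define UID_STRLEN 32    /* stand-in for DASD_UID_STRLEN */

    /* Wrong: inside this function 'buf' is just a char *, so sizeof(*buf) == 1
     * and snprintf() can only store the terminating NUL. */
    static void format_uid_broken(char *buf)
    {
        snprintf(buf, sizeof(*buf), "%s.%s.%04x", "IBM", "XY123", 0x4711);
    }

    /* Right: pass (or share via a constant) the real buffer size. */
    static void format_uid_fixed(char *buf, size_t len)
    {
        snprintf(buf, len, "%s.%s.%04x", "IBM", "XY123", 0x4711);
    }

    int main(void)
    {
        char uid[UID_STRLEN];

        format_uid_broken(uid);
        printf("broken: \"%s\"\n", uid);      /* prints "" */

        format_uid_fixed(uid, sizeof(uid));   /* sizeof is fine here: still an array */
        printf("fixed:  \"%s\"\n", uid);      /* prints "IBM.XY123.4711" */
        return 0;
    }
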
+diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c
+index 09acf3853a77e..4abd8f9ec2527 100644
+--- a/drivers/s390/block/dcssblk.c
++++ b/drivers/s390/block/dcssblk.c
+@@ -412,6 +412,7 @@ removeseg:
+ 	}
+ 	list_del(&dev_info->lh);
+ 
++	dax_remove_host(dev_info->gd);
+ 	kill_dax(dev_info->dax_dev);
+ 	put_dax(dev_info->dax_dev);
+ 	del_gendisk(dev_info->gd);
+@@ -707,9 +708,9 @@ dcssblk_add_store(struct device *dev, struct device_attribute *attr, const char
+ 	goto out;
+ 
+ out_dax_host:
++	put_device(&dev_info->dev);
+ 	dax_remove_host(dev_info->gd);
+ out_dax:
+-	put_device(&dev_info->dev);
+ 	kill_dax(dev_info->dax_dev);
+ 	put_dax(dev_info->dax_dev);
+ put_dev:
+@@ -789,6 +790,7 @@ dcssblk_remove_store(struct device *dev, struct device_attribute *attr, const ch
+ 	}
+ 
+ 	list_del(&dev_info->lh);
++	dax_remove_host(dev_info->gd);
+ 	kill_dax(dev_info->dax_dev);
+ 	put_dax(dev_info->dax_dev);
+ 	del_gendisk(dev_info->gd);
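
The dcssblk hunks add the missing dax_remove_host() so that teardown undoes the setup steps in reverse order: the DAX host registration, done last during add, is now removed before the dax device is killed and dropped. A generic userspace sketch of that unwind discipline, using three malloc() steps purely as stand-ins for "acquire a resource":

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical three-step setup with goto-based unwind: each error label
     * releases exactly what was acquired before the failure, newest first. */
    static int setup(void)
    {
        void *a, *b, *c;
        int ret = -1;

        a = malloc(16);            /* step 1 */
        if (!a)
            goto out;
        b = malloc(16);            /* step 2 */
        if (!b)
            goto out_free_a;
        c = malloc(16);            /* step 3 */
        if (!c)
            goto out_free_b;

        puts("all steps succeeded");
        free(c);                   /* normal teardown: reverse order again */
        free(b);
        free(a);
        return 0;

    out_free_b:
        free(b);                   /* undo step 2 */
    out_free_a:
        free(a);                   /* undo step 1 */
    out:
        return ret;
    }

    int main(void)
    {
        return setup() ? EXIT_FAILURE : EXIT_SUCCESS;
    }
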
+diff --git a/drivers/s390/crypto/pkey_api.c b/drivers/s390/crypto/pkey_api.c
+index e58bfd2253231..396a159afdf5b 100644
+--- a/drivers/s390/crypto/pkey_api.c
++++ b/drivers/s390/crypto/pkey_api.c
+@@ -272,7 +272,8 @@ static int pkey_clr2ep11key(const u8 *clrkey, size_t clrkeylen,
+ 		card = apqns[i] >> 16;
+ 		dom = apqns[i] & 0xFFFF;
+ 		rc = ep11_clr2keyblob(card, dom, clrkeylen * 8,
+-				      0, clrkey, keybuf, keybuflen);
++				      0, clrkey, keybuf, keybuflen,
++				      PKEY_TYPE_EP11);
+ 		if (rc == 0)
+ 			break;
+ 	}
+@@ -287,10 +288,9 @@ out:
+ /*
+  * Find card and transform EP11 secure key into protected key.
+  */
+-static int pkey_ep11key2pkey(const u8 *key, u8 *protkey,
+-			     u32 *protkeylen, u32 *protkeytype)
++static int pkey_ep11key2pkey(const u8 *key, size_t keylen,
++			     u8 *protkey, u32 *protkeylen, u32 *protkeytype)
+ {
+-	struct ep11keyblob *kb = (struct ep11keyblob *)key;
+ 	u32 nr_apqns, *apqns = NULL;
+ 	u16 card, dom;
+ 	int i, rc;
+@@ -299,7 +299,8 @@ static int pkey_ep11key2pkey(const u8 *key, u8 *protkey,
+ 
+ 	/* build a list of apqns suitable for this key */
+ 	rc = ep11_findcard2(&apqns, &nr_apqns, 0xFFFF, 0xFFFF,
+-			    ZCRYPT_CEX7, EP11_API_V, kb->wkvp);
++			    ZCRYPT_CEX7, EP11_API_V,
++			    ep11_kb_wkvp(key, keylen));
+ 	if (rc)
+ 		goto out;
+ 
+@@ -307,7 +308,7 @@ static int pkey_ep11key2pkey(const u8 *key, u8 *protkey,
+ 	for (rc = -ENODEV, i = 0; i < nr_apqns; i++) {
+ 		card = apqns[i] >> 16;
+ 		dom = apqns[i] & 0xFFFF;
+-		rc = ep11_kblob2protkey(card, dom, key, kb->head.len,
++		rc = ep11_kblob2protkey(card, dom, key, keylen,
+ 					protkey, protkeylen, protkeytype);
+ 		if (rc == 0)
+ 			break;
+@@ -495,7 +496,7 @@ try_via_ep11:
+ 			      tmpbuf, &tmpbuflen);
+ 	if (rc)
+ 		goto failure;
+-	rc = pkey_ep11key2pkey(tmpbuf,
++	rc = pkey_ep11key2pkey(tmpbuf, tmpbuflen,
+ 			       protkey, protkeylen, protkeytype);
+ 	if (!rc)
+ 		goto out;
+@@ -611,7 +612,7 @@ static int pkey_nonccatok2pkey(const u8 *key, u32 keylen,
+ 		rc = ep11_check_aes_key(debug_info, 3, key, keylen, 1);
+ 		if (rc)
+ 			goto out;
+-		rc = pkey_ep11key2pkey(key,
++		rc = pkey_ep11key2pkey(key, keylen,
+ 				       protkey, protkeylen, protkeytype);
+ 		break;
+ 	}
+@@ -620,7 +621,7 @@ static int pkey_nonccatok2pkey(const u8 *key, u32 keylen,
+ 		rc = ep11_check_aes_key_with_hdr(debug_info, 3, key, keylen, 1);
+ 		if (rc)
+ 			goto out;
+-		rc = pkey_ep11key2pkey(key + sizeof(struct ep11kblob_header),
++		rc = pkey_ep11key2pkey(key, keylen,
+ 				       protkey, protkeylen, protkeytype);
+ 		break;
+ 	default:
+@@ -713,6 +714,11 @@ static int pkey_genseckey2(const struct pkey_apqn *apqns, size_t nr_apqns,
+ 		if (*keybufsize < MINEP11AESKEYBLOBSIZE)
+ 			return -EINVAL;
+ 		break;
++	case PKEY_TYPE_EP11_AES:
++		if (*keybufsize < (sizeof(struct ep11kblob_header) +
++				   MINEP11AESKEYBLOBSIZE))
++			return -EINVAL;
++		break;
+ 	default:
+ 		return -EINVAL;
+ 	}
+@@ -729,9 +735,10 @@ static int pkey_genseckey2(const struct pkey_apqn *apqns, size_t nr_apqns,
+ 	for (i = 0, rc = -ENODEV; i < nr_apqns; i++) {
+ 		card = apqns[i].card;
+ 		dom = apqns[i].domain;
+-		if (ktype == PKEY_TYPE_EP11) {
++		if (ktype == PKEY_TYPE_EP11 ||
++		    ktype == PKEY_TYPE_EP11_AES) {
+ 			rc = ep11_genaeskey(card, dom, ksize, kflags,
+-					    keybuf, keybufsize);
++					    keybuf, keybufsize, ktype);
+ 		} else if (ktype == PKEY_TYPE_CCA_DATA) {
+ 			rc = cca_genseckey(card, dom, ksize, keybuf);
+ 			*keybufsize = (rc ? 0 : SECKEYBLOBSIZE);
+@@ -769,6 +776,11 @@ static int pkey_clr2seckey2(const struct pkey_apqn *apqns, size_t nr_apqns,
+ 		if (*keybufsize < MINEP11AESKEYBLOBSIZE)
+ 			return -EINVAL;
+ 		break;
++	case PKEY_TYPE_EP11_AES:
++		if (*keybufsize < (sizeof(struct ep11kblob_header) +
++				   MINEP11AESKEYBLOBSIZE))
++			return -EINVAL;
++		break;
+ 	default:
+ 		return -EINVAL;
+ 	}
+@@ -787,9 +799,11 @@ static int pkey_clr2seckey2(const struct pkey_apqn *apqns, size_t nr_apqns,
+ 	for (i = 0, rc = -ENODEV; i < nr_apqns; i++) {
+ 		card = apqns[i].card;
+ 		dom = apqns[i].domain;
+-		if (ktype == PKEY_TYPE_EP11) {
++		if (ktype == PKEY_TYPE_EP11 ||
++		    ktype == PKEY_TYPE_EP11_AES) {
+ 			rc = ep11_clr2keyblob(card, dom, ksize, kflags,
+-					      clrkey, keybuf, keybufsize);
++					      clrkey, keybuf, keybufsize,
++					      ktype);
+ 		} else if (ktype == PKEY_TYPE_CCA_DATA) {
+ 			rc = cca_clr2seckey(card, dom, ksize,
+ 					    clrkey, keybuf);
+@@ -895,10 +909,11 @@ static int pkey_verifykey2(const u8 *key, size_t keylen,
+ 		if (ktype)
+ 			*ktype = PKEY_TYPE_EP11;
+ 		if (ksize)
+-			*ksize = kb->head.keybitlen;
++			*ksize = kb->head.bitlen;
+ 
+ 		rc = ep11_findcard2(&_apqns, &_nr_apqns, *cardnr, *domain,
+-				    ZCRYPT_CEX7, EP11_API_V, kb->wkvp);
++				    ZCRYPT_CEX7, EP11_API_V,
++				    ep11_kb_wkvp(key, keylen));
+ 		if (rc)
+ 			goto out;
+ 
+@@ -908,6 +923,30 @@ static int pkey_verifykey2(const u8 *key, size_t keylen,
+ 		*cardnr = ((struct pkey_apqn *)_apqns)->card;
+ 		*domain = ((struct pkey_apqn *)_apqns)->domain;
+ 
++	} else if (hdr->type == TOKTYPE_NON_CCA &&
++		   hdr->version == TOKVER_EP11_AES_WITH_HEADER) {
++		struct ep11kblob_header *kh = (struct ep11kblob_header *)key;
++
++		rc = ep11_check_aes_key_with_hdr(debug_info, 3,
++						 key, keylen, 1);
++		if (rc)
++			goto out;
++		if (ktype)
++			*ktype = PKEY_TYPE_EP11_AES;
++		if (ksize)
++			*ksize = kh->bitlen;
++
++		rc = ep11_findcard2(&_apqns, &_nr_apqns, *cardnr, *domain,
++				    ZCRYPT_CEX7, EP11_API_V,
++				    ep11_kb_wkvp(key, keylen));
++		if (rc)
++			goto out;
++
++		if (flags)
++			*flags = PKEY_FLAGS_MATCH_CUR_MKVP;
++
++		*cardnr = ((struct pkey_apqn *)_apqns)->card;
++		*domain = ((struct pkey_apqn *)_apqns)->domain;
+ 	} else {
+ 		rc = -EINVAL;
+ 	}
+@@ -949,10 +988,12 @@ static int pkey_keyblob2pkey2(const struct pkey_apqn *apqns, size_t nr_apqns,
+ 		}
+ 	} else if (hdr->type == TOKTYPE_NON_CCA) {
+ 		if (hdr->version == TOKVER_EP11_AES) {
+-			if (keylen < sizeof(struct ep11keyblob))
+-				return -EINVAL;
+ 			if (ep11_check_aes_key(debug_info, 3, key, keylen, 1))
+ 				return -EINVAL;
++		} else if (hdr->version == TOKVER_EP11_AES_WITH_HEADER) {
++			if (ep11_check_aes_key_with_hdr(debug_info, 3,
++							key, keylen, 1))
++				return -EINVAL;
+ 		} else {
+ 			return pkey_nonccatok2pkey(key, keylen,
+ 						   protkey, protkeylen,
+@@ -980,10 +1021,7 @@ static int pkey_keyblob2pkey2(const struct pkey_apqn *apqns, size_t nr_apqns,
+ 						protkey, protkeylen,
+ 						protkeytype);
+ 		} else {
+-			/* EP11 AES secure key blob */
+-			struct ep11keyblob *kb = (struct ep11keyblob *)key;
+-
+-			rc = ep11_kblob2protkey(card, dom, key, kb->head.len,
++			rc = ep11_kblob2protkey(card, dom, key, keylen,
+ 						protkey, protkeylen,
+ 						protkeytype);
+ 		}
+@@ -1243,12 +1281,14 @@ static int pkey_keyblob2pkey3(const struct pkey_apqn *apqns, size_t nr_apqns,
+ 		     hdr->version == TOKVER_EP11_ECC_WITH_HEADER) &&
+ 		    is_ep11_keyblob(key + sizeof(struct ep11kblob_header)))
+ 			rc = ep11_kblob2protkey(card, dom, key, hdr->len,
+-						protkey, protkeylen, protkeytype);
++						protkey, protkeylen,
++						protkeytype);
+ 		else if (hdr->type == TOKTYPE_NON_CCA &&
+ 			 hdr->version == TOKVER_EP11_AES &&
+ 			 is_ep11_keyblob(key))
+ 			rc = ep11_kblob2protkey(card, dom, key, hdr->len,
+-						protkey, protkeylen, protkeytype);
++						protkey, protkeylen,
++						protkeytype);
+ 		else if (hdr->type == TOKTYPE_CCA_INTERNAL &&
+ 			 hdr->version == TOKVER_CCA_AES)
+ 			rc = cca_sec2protkey(card, dom, key, protkey,
+@@ -1466,7 +1506,7 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd,
+ 		apqns = _copy_apqns_from_user(kgs.apqns, kgs.apqn_entries);
+ 		if (IS_ERR(apqns))
+ 			return PTR_ERR(apqns);
+-		kkey = kmalloc(klen, GFP_KERNEL);
++		kkey = kzalloc(klen, GFP_KERNEL);
+ 		if (!kkey) {
+ 			kfree(apqns);
+ 			return -ENOMEM;
+@@ -1508,7 +1548,7 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd,
+ 		apqns = _copy_apqns_from_user(kcs.apqns, kcs.apqn_entries);
+ 		if (IS_ERR(apqns))
+ 			return PTR_ERR(apqns);
+-		kkey = kmalloc(klen, GFP_KERNEL);
++		kkey = kzalloc(klen, GFP_KERNEL);
+ 		if (!kkey) {
+ 			kfree(apqns);
+ 			return -ENOMEM;
+@@ -2102,7 +2142,7 @@ static struct attribute_group ccacipher_attr_group = {
+  * (i.e. off != 0 or count < key blob size) -EINVAL is returned.
+  * This function and the sysfs attributes using it provide EP11 key blobs
+  * padded to the upper limit of MAXEP11AESKEYBLOBSIZE which is currently
+- * 320 bytes.
++ * 336 bytes.
+  */
+ static ssize_t pkey_ep11_aes_attr_read(enum pkey_key_size keybits,
+ 				       bool is_xts, char *buf, loff_t off,
+@@ -2130,7 +2170,8 @@ static ssize_t pkey_ep11_aes_attr_read(enum pkey_key_size keybits,
+ 	for (i = 0, rc = -ENODEV; i < nr_apqns; i++) {
+ 		card = apqns[i] >> 16;
+ 		dom = apqns[i] & 0xFFFF;
+-		rc = ep11_genaeskey(card, dom, keybits, 0, buf, &keysize);
++		rc = ep11_genaeskey(card, dom, keybits, 0, buf, &keysize,
++				    PKEY_TYPE_EP11_AES);
+ 		if (rc == 0)
+ 			break;
+ 	}
+@@ -2140,7 +2181,8 @@ static ssize_t pkey_ep11_aes_attr_read(enum pkey_key_size keybits,
+ 	if (is_xts) {
+ 		keysize = MAXEP11AESKEYBLOBSIZE;
+ 		buf += MAXEP11AESKEYBLOBSIZE;
+-		rc = ep11_genaeskey(card, dom, keybits, 0, buf, &keysize);
++		rc = ep11_genaeskey(card, dom, keybits, 0, buf, &keysize,
++				    PKEY_TYPE_EP11_AES);
+ 		if (rc == 0)
+ 			return 2 * MAXEP11AESKEYBLOBSIZE;
+ 	}
+diff --git a/drivers/s390/crypto/zcrypt_ep11misc.c b/drivers/s390/crypto/zcrypt_ep11misc.c
+index 958f5ee47f1b0..669ad6f5d5b07 100644
+--- a/drivers/s390/crypto/zcrypt_ep11misc.c
++++ b/drivers/s390/crypto/zcrypt_ep11misc.c
+@@ -113,6 +113,109 @@ static void __exit card_cache_free(void)
+ 	spin_unlock_bh(&card_list_lock);
+ }
+ 
++static int ep11_kb_split(const u8 *kb, size_t kblen, u32 kbver,
++			 struct ep11kblob_header **kbhdr, size_t *kbhdrsize,
++			 u8 **kbpl, size_t *kbplsize)
++{
++	struct ep11kblob_header *hdr = NULL;
++	size_t hdrsize, plsize = 0;
++	int rc = -EINVAL;
++	u8 *pl = NULL;
++
++	if (kblen < sizeof(struct ep11kblob_header))
++		goto out;
++	hdr = (struct ep11kblob_header *)kb;
++
++	switch (kbver) {
++	case TOKVER_EP11_AES:
++		/* header overlays the payload */
++		hdrsize = 0;
++		break;
++	case TOKVER_EP11_ECC_WITH_HEADER:
++	case TOKVER_EP11_AES_WITH_HEADER:
++		/* payload starts after the header */
++		hdrsize = sizeof(struct ep11kblob_header);
++		break;
++	default:
++		goto out;
++	}
++
++	plsize = kblen - hdrsize;
++	pl = (u8 *)kb + hdrsize;
++
++	if (kbhdr)
++		*kbhdr = hdr;
++	if (kbhdrsize)
++		*kbhdrsize = hdrsize;
++	if (kbpl)
++		*kbpl = pl;
++	if (kbplsize)
++		*kbplsize = plsize;
++
++	rc = 0;
++out:
++	return rc;
++}
++
++static int ep11_kb_decode(const u8 *kb, size_t kblen,
++			  struct ep11kblob_header **kbhdr, size_t *kbhdrsize,
++			  struct ep11keyblob **kbpl, size_t *kbplsize)
++{
++	struct ep11kblob_header *tmph, *hdr = NULL;
++	size_t hdrsize = 0, plsize = 0;
++	struct ep11keyblob *pl = NULL;
++	int rc = -EINVAL;
++	u8 *tmpp;
++
++	if (kblen < sizeof(struct ep11kblob_header))
++		goto out;
++	tmph = (struct ep11kblob_header *)kb;
++
++	if (tmph->type != TOKTYPE_NON_CCA &&
++	    tmph->len > kblen)
++		goto out;
++
++	if (ep11_kb_split(kb, kblen, tmph->version,
++			  &hdr, &hdrsize, &tmpp, &plsize))
++		goto out;
++
++	if (plsize < sizeof(struct ep11keyblob))
++		goto out;
++
++	if (!is_ep11_keyblob(tmpp))
++		goto out;
++
++	pl = (struct ep11keyblob *)tmpp;
++	plsize = hdr->len - hdrsize;
++
++	if (kbhdr)
++		*kbhdr = hdr;
++	if (kbhdrsize)
++		*kbhdrsize = hdrsize;
++	if (kbpl)
++		*kbpl = pl;
++	if (kbplsize)
++		*kbplsize = plsize;
++
++	rc = 0;
++out:
++	return rc;
++}
++
++/*
++ * For valid ep11 keyblobs, returns a reference to the wrapping key verification
++ * pattern. Otherwise NULL.
++ */
++const u8 *ep11_kb_wkvp(const u8 *keyblob, size_t keybloblen)
++{
++	struct ep11keyblob *kb;
++
++	if (ep11_kb_decode(keyblob, keybloblen, NULL, NULL, &kb, NULL))
++		return NULL;
++	return kb->wkvp;
++}
++EXPORT_SYMBOL(ep11_kb_wkvp);
++
+ /*
+  * Simple check if the key blob is a valid EP11 AES key blob with header.
+  */
+@@ -664,8 +767,9 @@ EXPORT_SYMBOL(ep11_get_domain_info);
+  */
+ #define KEY_ATTR_DEFAULTS 0x00200c00
+ 
+-int ep11_genaeskey(u16 card, u16 domain, u32 keybitsize, u32 keygenflags,
+-		   u8 *keybuf, size_t *keybufsize)
++static int _ep11_genaeskey(u16 card, u16 domain,
++			   u32 keybitsize, u32 keygenflags,
++			   u8 *keybuf, size_t *keybufsize)
+ {
+ 	struct keygen_req_pl {
+ 		struct pl_head head;
+@@ -701,7 +805,6 @@ int ep11_genaeskey(u16 card, u16 domain, u32 keybitsize, u32 keygenflags,
+ 	struct ep11_cprb *req = NULL, *rep = NULL;
+ 	struct ep11_target_dev target;
+ 	struct ep11_urb *urb = NULL;
+-	struct ep11keyblob *kb;
+ 	int api, rc = -ENOMEM;
+ 
+ 	switch (keybitsize) {
+@@ -780,14 +883,9 @@ int ep11_genaeskey(u16 card, u16 domain, u32 keybitsize, u32 keygenflags,
+ 		goto out;
+ 	}
+ 
+-	/* copy key blob and set header values */
++	/* copy key blob */
+ 	memcpy(keybuf, rep_pl->data, rep_pl->data_len);
+ 	*keybufsize = rep_pl->data_len;
+-	kb = (struct ep11keyblob *)keybuf;
+-	kb->head.type = TOKTYPE_NON_CCA;
+-	kb->head.len = rep_pl->data_len;
+-	kb->head.version = TOKVER_EP11_AES;
+-	kb->head.keybitlen = keybitsize;
+ 
+ out:
+ 	kfree(req);
+@@ -795,6 +893,43 @@ out:
+ 	kfree(urb);
+ 	return rc;
+ }
++
++int ep11_genaeskey(u16 card, u16 domain, u32 keybitsize, u32 keygenflags,
++		   u8 *keybuf, size_t *keybufsize, u32 keybufver)
++{
++	struct ep11kblob_header *hdr;
++	size_t hdr_size, pl_size;
++	u8 *pl;
++	int rc;
++
++	switch (keybufver) {
++	case TOKVER_EP11_AES:
++	case TOKVER_EP11_AES_WITH_HEADER:
++		break;
++	default:
++		return -EINVAL;
++	}
++
++	rc = ep11_kb_split(keybuf, *keybufsize, keybufver,
++			   &hdr, &hdr_size, &pl, &pl_size);
++	if (rc)
++		return rc;
++
++	rc = _ep11_genaeskey(card, domain, keybitsize, keygenflags,
++			     pl, &pl_size);
++	if (rc)
++		return rc;
++
++	*keybufsize = hdr_size + pl_size;
++
++	/* update header information */
++	hdr->type = TOKTYPE_NON_CCA;
++	hdr->len = *keybufsize;
++	hdr->version = keybufver;
++	hdr->bitlen = keybitsize;
++
++	return 0;
++}
+ EXPORT_SYMBOL(ep11_genaeskey);
+ 
+ static int ep11_cryptsingle(u16 card, u16 domain,
+@@ -924,12 +1059,12 @@ out:
+ 	return rc;
+ }
+ 
+-static int ep11_unwrapkey(u16 card, u16 domain,
+-			  const u8 *kek, size_t keksize,
+-			  const u8 *enckey, size_t enckeysize,
+-			  u32 mech, const u8 *iv,
+-			  u32 keybitsize, u32 keygenflags,
+-			  u8 *keybuf, size_t *keybufsize)
++static int _ep11_unwrapkey(u16 card, u16 domain,
++			   const u8 *kek, size_t keksize,
++			   const u8 *enckey, size_t enckeysize,
++			   u32 mech, const u8 *iv,
++			   u32 keybitsize, u32 keygenflags,
++			   u8 *keybuf, size_t *keybufsize)
+ {
+ 	struct uw_req_pl {
+ 		struct pl_head head;
+@@ -966,7 +1101,6 @@ static int ep11_unwrapkey(u16 card, u16 domain,
+ 	struct ep11_cprb *req = NULL, *rep = NULL;
+ 	struct ep11_target_dev target;
+ 	struct ep11_urb *urb = NULL;
+-	struct ep11keyblob *kb;
+ 	size_t req_pl_size;
+ 	int api, rc = -ENOMEM;
+ 	u8 *p;
+@@ -1048,14 +1182,9 @@ static int ep11_unwrapkey(u16 card, u16 domain,
+ 		goto out;
+ 	}
+ 
+-	/* copy key blob and set header values */
++	/* copy key blob */
+ 	memcpy(keybuf, rep_pl->data, rep_pl->data_len);
+ 	*keybufsize = rep_pl->data_len;
+-	kb = (struct ep11keyblob *)keybuf;
+-	kb->head.type = TOKTYPE_NON_CCA;
+-	kb->head.len = rep_pl->data_len;
+-	kb->head.version = TOKVER_EP11_AES;
+-	kb->head.keybitlen = keybitsize;
+ 
+ out:
+ 	kfree(req);
+@@ -1064,10 +1193,46 @@ out:
+ 	return rc;
+ }
+ 
+-static int ep11_wrapkey(u16 card, u16 domain,
+-			const u8 *key, size_t keysize,
+-			u32 mech, const u8 *iv,
+-			u8 *databuf, size_t *datasize)
++static int ep11_unwrapkey(u16 card, u16 domain,
++			  const u8 *kek, size_t keksize,
++			  const u8 *enckey, size_t enckeysize,
++			  u32 mech, const u8 *iv,
++			  u32 keybitsize, u32 keygenflags,
++			  u8 *keybuf, size_t *keybufsize,
++			  u8 keybufver)
++{
++	struct ep11kblob_header *hdr;
++	size_t hdr_size, pl_size;
++	u8 *pl;
++	int rc;
++
++	rc = ep11_kb_split(keybuf, *keybufsize, keybufver,
++			   &hdr, &hdr_size, &pl, &pl_size);
++	if (rc)
++		return rc;
++
++	rc = _ep11_unwrapkey(card, domain, kek, keksize, enckey, enckeysize,
++			     mech, iv, keybitsize, keygenflags,
++			     pl, &pl_size);
++	if (rc)
++		return rc;
++
++	*keybufsize = hdr_size + pl_size;
++
++	/* update header information */
++	hdr = (struct ep11kblob_header *)keybuf;
++	hdr->type = TOKTYPE_NON_CCA;
++	hdr->len = *keybufsize;
++	hdr->version = keybufver;
++	hdr->bitlen = keybitsize;
++
++	return 0;
++}
++
++static int _ep11_wrapkey(u16 card, u16 domain,
++			 const u8 *key, size_t keysize,
++			 u32 mech, const u8 *iv,
++			 u8 *databuf, size_t *datasize)
+ {
+ 	struct wk_req_pl {
+ 		struct pl_head head;
+@@ -1097,20 +1262,10 @@ static int ep11_wrapkey(u16 card, u16 domain,
+ 	struct ep11_cprb *req = NULL, *rep = NULL;
+ 	struct ep11_target_dev target;
+ 	struct ep11_urb *urb = NULL;
+-	struct ep11keyblob *kb;
+ 	size_t req_pl_size;
+ 	int api, rc = -ENOMEM;
+-	bool has_header = false;
+ 	u8 *p;
+ 
+-	/* maybe the session field holds a header with key info */
+-	kb = (struct ep11keyblob *)key;
+-	if (kb->head.type == TOKTYPE_NON_CCA &&
+-	    kb->head.version == TOKVER_EP11_AES) {
+-		has_header = true;
+-		keysize = min_t(size_t, kb->head.len, keysize);
+-	}
+-
+ 	/* request cprb and payload */
+ 	req_pl_size = sizeof(struct wk_req_pl) + (iv ? 16 : 0)
+ 		+ ASN1TAGLEN(keysize) + 4;
+@@ -1135,11 +1290,6 @@ static int ep11_wrapkey(u16 card, u16 domain,
+ 	}
+ 	/* key blob */
+ 	p += asn1tag_write(p, 0x04, key, keysize);
+-	/* maybe the key argument needs the head data cleaned out */
+-	if (has_header) {
+-		kb = (struct ep11keyblob *)(p - keysize);
+-		memset(&kb->head, 0, sizeof(kb->head));
+-	}
+ 	/* empty kek tag */
+ 	*p++ = 0x04;
+ 	*p++ = 0;
+@@ -1198,10 +1348,10 @@ out:
+ }
+ 
+ int ep11_clr2keyblob(u16 card, u16 domain, u32 keybitsize, u32 keygenflags,
+-		     const u8 *clrkey, u8 *keybuf, size_t *keybufsize)
++		     const u8 *clrkey, u8 *keybuf, size_t *keybufsize,
++		     u32 keytype)
+ {
+ 	int rc;
+-	struct ep11keyblob *kb;
+ 	u8 encbuf[64], *kek = NULL;
+ 	size_t clrkeylen, keklen, encbuflen = sizeof(encbuf);
+ 
+@@ -1223,17 +1373,15 @@ int ep11_clr2keyblob(u16 card, u16 domain, u32 keybitsize, u32 keygenflags,
+ 	}
+ 
+ 	/* Step 1: generate AES 256 bit random kek key */
+-	rc = ep11_genaeskey(card, domain, 256,
+-			    0x00006c00, /* EN/DECRYPT, WRAP/UNWRAP */
+-			    kek, &keklen);
++	rc = _ep11_genaeskey(card, domain, 256,
++			     0x00006c00, /* EN/DECRYPT, WRAP/UNWRAP */
++			     kek, &keklen);
+ 	if (rc) {
+ 		DEBUG_ERR(
+ 			"%s generate kek key failed, rc=%d\n",
+ 			__func__, rc);
+ 		goto out;
+ 	}
+-	kb = (struct ep11keyblob *)kek;
+-	memset(&kb->head, 0, sizeof(kb->head));
+ 
+ 	/* Step 2: encrypt clear key value with the kek key */
+ 	rc = ep11_cryptsingle(card, domain, 0, 0, def_iv, kek, keklen,
+@@ -1248,7 +1396,7 @@ int ep11_clr2keyblob(u16 card, u16 domain, u32 keybitsize, u32 keygenflags,
+ 	/* Step 3: import the encrypted key value as a new key */
+ 	rc = ep11_unwrapkey(card, domain, kek, keklen,
+ 			    encbuf, encbuflen, 0, def_iv,
+-			    keybitsize, 0, keybuf, keybufsize);
++			    keybitsize, 0, keybuf, keybufsize, keytype);
+ 	if (rc) {
+ 		DEBUG_ERR(
+ 			"%s importing key value as new key failed, rc=%d\n",
+@@ -1262,11 +1410,12 @@ out:
+ }
+ EXPORT_SYMBOL(ep11_clr2keyblob);
+ 
+-int ep11_kblob2protkey(u16 card, u16 dom, const u8 *keyblob, size_t keybloblen,
++int ep11_kblob2protkey(u16 card, u16 dom,
++		       const u8 *keyblob, size_t keybloblen,
+ 		       u8 *protkey, u32 *protkeylen, u32 *protkeytype)
+ {
+-	int rc = -EIO;
+-	u8 *wkbuf = NULL;
++	struct ep11kblob_header *hdr;
++	struct ep11keyblob *key;
+ 	size_t wkbuflen, keylen;
+ 	struct wk_info {
+ 		u16 version;
+@@ -1277,31 +1426,17 @@ int ep11_kblob2protkey(u16 card, u16 dom, const u8 *keyblob, size_t keybloblen,
+ 		u8  res2[8];
+ 		u8  pkey[];
+ 	} __packed * wki;
+-	const u8 *key;
+-	struct ep11kblob_header *hdr;
++	u8 *wkbuf = NULL;
++	int rc = -EIO;
+ 
+-	/* key with or without header ? */
+-	hdr = (struct ep11kblob_header *)keyblob;
+-	if (hdr->type == TOKTYPE_NON_CCA &&
+-	    (hdr->version == TOKVER_EP11_AES_WITH_HEADER ||
+-	     hdr->version == TOKVER_EP11_ECC_WITH_HEADER) &&
+-	    is_ep11_keyblob(keyblob + sizeof(struct ep11kblob_header))) {
+-		/* EP11 AES or ECC key with header */
+-		key = keyblob + sizeof(struct ep11kblob_header);
+-		keylen = hdr->len - sizeof(struct ep11kblob_header);
+-	} else if (hdr->type == TOKTYPE_NON_CCA &&
+-		   hdr->version == TOKVER_EP11_AES &&
+-		   is_ep11_keyblob(keyblob)) {
+-		/* EP11 AES key (old style) */
+-		key = keyblob;
+-		keylen = hdr->len;
+-	} else if (is_ep11_keyblob(keyblob)) {
+-		/* raw EP11 key blob */
+-		key = keyblob;
+-		keylen = keybloblen;
+-	} else {
++	if (ep11_kb_decode((u8 *)keyblob, keybloblen, &hdr, NULL, &key, &keylen))
+ 		return -EINVAL;
++
++	if (hdr->version == TOKVER_EP11_AES) {
++		/* wipe overlayed header */
++		memset(hdr, 0, sizeof(*hdr));
+ 	}
++	/* !!! hdr is no longer a valid header !!! */
+ 
+ 	/* alloc temp working buffer */
+ 	wkbuflen = (keylen + AES_BLOCK_SIZE) & (~(AES_BLOCK_SIZE - 1));
+@@ -1310,8 +1445,8 @@ int ep11_kblob2protkey(u16 card, u16 dom, const u8 *keyblob, size_t keybloblen,
+ 		return -ENOMEM;
+ 
+ 	/* ep11 secure key -> protected key + info */
+-	rc = ep11_wrapkey(card, dom, key, keylen,
+-			  0, def_iv, wkbuf, &wkbuflen);
++	rc = _ep11_wrapkey(card, dom, (u8 *)key, keylen,
++			   0, def_iv, wkbuf, &wkbuflen);
+ 	if (rc) {
+ 		DEBUG_ERR(
+ 			"%s rewrapping ep11 key to pkey failed, rc=%d\n",
+diff --git a/drivers/s390/crypto/zcrypt_ep11misc.h b/drivers/s390/crypto/zcrypt_ep11misc.h
+index a3eddf51242da..a0de1cccebbe0 100644
+--- a/drivers/s390/crypto/zcrypt_ep11misc.h
++++ b/drivers/s390/crypto/zcrypt_ep11misc.h
+@@ -29,14 +29,7 @@ struct ep11keyblob {
+ 	union {
+ 		u8 session[32];
+ 		/* only used for PKEY_TYPE_EP11: */
+-		struct {
+-			u8  type;      /* 0x00 (TOKTYPE_NON_CCA) */
+-			u8  res0;      /* unused */
+-			u16 len;       /* total length in bytes of this blob */
+-			u8  version;   /* 0x03 (TOKVER_EP11_AES) */
+-			u8  res1;      /* unused */
+-			u16 keybitlen; /* clear key bit len, 0 for unknown */
+-		} head;
++		struct ep11kblob_header head;
+ 	};
+ 	u8  wkvp[16];  /* wrapping key verification pattern */
+ 	u64 attr;      /* boolean key attributes */
+@@ -55,6 +48,12 @@ static inline bool is_ep11_keyblob(const u8 *key)
+ 	return (kb->version == EP11_STRUCT_MAGIC);
+ }
+ 
++/*
++ * For valid ep11 keyblobs, returns a reference to the wrapping key verification
++ * pattern. Otherwise NULL.
++ */
++const u8 *ep11_kb_wkvp(const u8 *kblob, size_t kbloblen);
++
+ /*
+  * Simple check if the key blob is a valid EP11 AES key blob with header.
+  * If checkcpacfexport is enabled, the key is also checked for the
+@@ -114,13 +113,14 @@ int ep11_get_domain_info(u16 card, u16 domain, struct ep11_domain_info *info);
+  * Generate (random) EP11 AES secure key.
+  */
+ int ep11_genaeskey(u16 card, u16 domain, u32 keybitsize, u32 keygenflags,
+-		   u8 *keybuf, size_t *keybufsize);
++		   u8 *keybuf, size_t *keybufsize, u32 keybufver);
+ 
+ /*
+  * Generate EP11 AES secure key with given clear key value.
+  */
+ int ep11_clr2keyblob(u16 cardnr, u16 domain, u32 keybitsize, u32 keygenflags,
+-		     const u8 *clrkey, u8 *keybuf, size_t *keybufsize);
++		     const u8 *clrkey, u8 *keybuf, size_t *keybufsize,
++		     u32 keytype);
+ 
+ /*
+  * Build a list of ep11 apqns meeting the following constraints:
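
The new ep11_kb_split()/ep11_kb_decode() helpers introduced above centralize the two EP11 blob layouts: for TOKVER_EP11_AES the header overlays the start of the payload, while the *_WITH_HEADER variants carry a separate header in front of it. A simplified userspace sketch of the same split, with a made-up header layout and version numbers rather than the real token constants:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Made-up 8-byte header, loosely modelled on struct ep11kblob_header. */
    struct blob_header {
        uint8_t  type;
        uint8_t  res0;
        uint16_t len;        /* total blob length in bytes */
        uint8_t  version;
        uint8_t  res1;
        uint16_t bitlen;     /* clear key bit length */
    };

    #define VER_OVERLAY     3    /* header overlays the start of the payload */
    #define VER_WITH_HDR    6    /* payload starts right after the header    */

    /* Split a blob into a header copy plus a payload view; 0 on success. */
    static int blob_split(const uint8_t *kb, size_t kblen,
                          struct blob_header *hdr,
                          const uint8_t **pl, size_t *plsize)
    {
        size_t hdrsize;

        if (kblen < sizeof(*hdr))
            return -1;
        memcpy(hdr, kb, sizeof(*hdr));    /* alignment-safe header read */

        switch (hdr->version) {
        case VER_OVERLAY:
            hdrsize = 0;                  /* payload still contains the header bytes */
            break;
        case VER_WITH_HDR:
            hdrsize = sizeof(*hdr);       /* payload follows the header */
            break;
        default:
            return -1;                    /* unknown layout */
        }

        *pl = kb + hdrsize;
        *plsize = kblen - hdrsize;
        return 0;
    }

    int main(void)
    {
        uint8_t blob[64] = { 0 };
        struct blob_header in = { .len = sizeof(blob),
                                  .version = VER_WITH_HDR, .bitlen = 256 };
        struct blob_header hdr;
        const uint8_t *pl;
        size_t plsize;

        memcpy(blob, &in, sizeof(in));
        if (!blob_split(blob, sizeof(blob), &hdr, &pl, &plsize))
            printf("version %d: payload of %zu bytes at offset %td\n",
                   hdr.version, plsize, pl - blob);
        return 0;
    }
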
+diff --git a/drivers/scsi/be2iscsi/be_iscsi.c b/drivers/scsi/be2iscsi/be_iscsi.c
+index 8aeaddc93b167..8d374ae863ba2 100644
+--- a/drivers/scsi/be2iscsi/be_iscsi.c
++++ b/drivers/scsi/be2iscsi/be_iscsi.c
+@@ -450,6 +450,10 @@ int beiscsi_iface_set_param(struct Scsi_Host *shost,
+ 	}
+ 
+ 	nla_for_each_attr(attrib, data, dt_len, rm_len) {
++		/* ignore nla_type as it is never used */
++		if (nla_len(attrib) < sizeof(*iface_param))
++			return -EINVAL;
++
+ 		iface_param = nla_data(attrib);
+ 
+ 		if (iface_param->param_type != ISCSI_NET_PARAM)
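
The added nla_len() check above (and the matching ones in qla4xxx further down) rejects short netlink attributes before nla_data() is dereferenced as a structure. A kernel-style sketch of the idiom, assuming a handler that is given an attribute stream; my_param and handle_params are placeholders:

    #include <linux/errno.h>
    #include <net/netlink.h>

    struct my_param {
        u32 param;
        u32 value;
    };

    /* Walk a stream of netlink attributes, rejecting any attribute whose
     * payload is too short to hold the structure we expect. */
    static int handle_params(struct nlattr *data, int len)
    {
        struct my_param *p;
        struct nlattr *attr;
        int rem;

        nla_for_each_attr(attr, data, len, rem) {
            if (nla_len(attr) < sizeof(*p))
                return -EINVAL;    /* short attribute: do not touch the payload */

            p = nla_data(attr);
            /* ... act on p->param / p->value ... */
        }
        return 0;
    }

nla_for_each_attr() already guarantees that each attribute header fits in the remaining buffer; the explicit nla_len() check is what protects the payload access.
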
+diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
+index 5c8d1ba3f8f3c..19eee108db021 100644
+--- a/drivers/scsi/fcoe/fcoe_ctlr.c
++++ b/drivers/scsi/fcoe/fcoe_ctlr.c
+@@ -319,16 +319,17 @@ static void fcoe_ctlr_announce(struct fcoe_ctlr *fip)
+ {
+ 	struct fcoe_fcf *sel;
+ 	struct fcoe_fcf *fcf;
++	unsigned long flags;
+ 
+ 	mutex_lock(&fip->ctlr_mutex);
+-	spin_lock_bh(&fip->ctlr_lock);
++	spin_lock_irqsave(&fip->ctlr_lock, flags);
+ 
+ 	kfree_skb(fip->flogi_req);
+ 	fip->flogi_req = NULL;
+ 	list_for_each_entry(fcf, &fip->fcfs, list)
+ 		fcf->flogi_sent = 0;
+ 
+-	spin_unlock_bh(&fip->ctlr_lock);
++	spin_unlock_irqrestore(&fip->ctlr_lock, flags);
+ 	sel = fip->sel_fcf;
+ 
+ 	if (sel && ether_addr_equal(sel->fcf_mac, fip->dest_addr))
+@@ -699,6 +700,7 @@ int fcoe_ctlr_els_send(struct fcoe_ctlr *fip, struct fc_lport *lport,
+ {
+ 	struct fc_frame *fp;
+ 	struct fc_frame_header *fh;
++	unsigned long flags;
+ 	u16 old_xid;
+ 	u8 op;
+ 	u8 mac[ETH_ALEN];
+@@ -732,11 +734,11 @@ int fcoe_ctlr_els_send(struct fcoe_ctlr *fip, struct fc_lport *lport,
+ 		op = FIP_DT_FLOGI;
+ 		if (fip->mode == FIP_MODE_VN2VN)
+ 			break;
+-		spin_lock_bh(&fip->ctlr_lock);
++		spin_lock_irqsave(&fip->ctlr_lock, flags);
+ 		kfree_skb(fip->flogi_req);
+ 		fip->flogi_req = skb;
+ 		fip->flogi_req_send = 1;
+-		spin_unlock_bh(&fip->ctlr_lock);
++		spin_unlock_irqrestore(&fip->ctlr_lock, flags);
+ 		schedule_work(&fip->timer_work);
+ 		return -EINPROGRESS;
+ 	case ELS_FDISC:
+@@ -1705,10 +1707,11 @@ static int fcoe_ctlr_flogi_send_locked(struct fcoe_ctlr *fip)
+ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip)
+ {
+ 	struct fcoe_fcf *fcf;
++	unsigned long flags;
+ 	int error;
+ 
+ 	mutex_lock(&fip->ctlr_mutex);
+-	spin_lock_bh(&fip->ctlr_lock);
++	spin_lock_irqsave(&fip->ctlr_lock, flags);
+ 	LIBFCOE_FIP_DBG(fip, "re-sending FLOGI - reselect\n");
+ 	fcf = fcoe_ctlr_select(fip);
+ 	if (!fcf || fcf->flogi_sent) {
+@@ -1719,7 +1722,7 @@ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip)
+ 		fcoe_ctlr_solicit(fip, NULL);
+ 		error = fcoe_ctlr_flogi_send_locked(fip);
+ 	}
+-	spin_unlock_bh(&fip->ctlr_lock);
++	spin_unlock_irqrestore(&fip->ctlr_lock, flags);
+ 	mutex_unlock(&fip->ctlr_mutex);
+ 	return error;
+ }
+@@ -1736,8 +1739,9 @@ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip)
+ static void fcoe_ctlr_flogi_send(struct fcoe_ctlr *fip)
+ {
+ 	struct fcoe_fcf *fcf;
++	unsigned long flags;
+ 
+-	spin_lock_bh(&fip->ctlr_lock);
++	spin_lock_irqsave(&fip->ctlr_lock, flags);
+ 	fcf = fip->sel_fcf;
+ 	if (!fcf || !fip->flogi_req_send)
+ 		goto unlock;
+@@ -1764,7 +1768,7 @@ static void fcoe_ctlr_flogi_send(struct fcoe_ctlr *fip)
+ 	} else /* XXX */
+ 		LIBFCOE_FIP_DBG(fip, "No FCF selected - defer send\n");
+ unlock:
+-	spin_unlock_bh(&fip->ctlr_lock);
++	spin_unlock_irqrestore(&fip->ctlr_lock, flags);
+ }
+ 
+ /**
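
The fcoe_ctlr changes swap spin_lock_bh() for spin_lock_irqsave(), the usual remedy when a lock can also be taken from (hard) interrupt context: the _bh variant only masks softirqs, so an interrupt arriving on the same CPU while the lock is held could try to re-acquire it and deadlock. A generic kernel-style sketch of the irqsave pattern; my_lock and my_irq_handler are placeholders:

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(my_lock);
    static unsigned long my_counter;

    /* Process-context path: must disable and restore local interrupts while
     * holding a lock that the IRQ handler below also takes. */
    static void my_update(void)
    {
        unsigned long flags;

        spin_lock_irqsave(&my_lock, flags);
        my_counter++;
        spin_unlock_irqrestore(&my_lock, flags);
    }

    /* Hard-IRQ context: plain spin_lock() suffices here, since interrupts on
     * this CPU cannot preempt the handler itself. */
    static irqreturn_t my_irq_handler(int irq, void *dev_id)
    {
        spin_lock(&my_lock);
        my_counter++;
        spin_unlock(&my_lock);
        return IRQ_HANDLED;
    }
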
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+index 87d8e408ccd1c..404aa7e179cba 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+@@ -2026,6 +2026,11 @@ static void slot_err_v2_hw(struct hisi_hba *hisi_hba,
+ 	u16 dma_tx_err_type = le16_to_cpu(err_record->dma_tx_err_type);
+ 	u16 sipc_rx_err_type = le16_to_cpu(err_record->sipc_rx_err_type);
+ 	u32 dma_rx_err_type = le32_to_cpu(err_record->dma_rx_err_type);
++	struct hisi_sas_complete_v2_hdr *complete_queue =
++			hisi_hba->complete_hdr[slot->cmplt_queue];
++	struct hisi_sas_complete_v2_hdr *complete_hdr =
++			&complete_queue[slot->cmplt_queue_slot];
++	u32 dw0 = le32_to_cpu(complete_hdr->dw0);
+ 	int error = -1;
+ 
+ 	if (err_phase == 1) {
+@@ -2310,7 +2315,8 @@ static void slot_err_v2_hw(struct hisi_hba *hisi_hba,
+ 			break;
+ 		}
+ 		}
+-		hisi_sas_sata_done(task, slot);
++		if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)
++			hisi_sas_sata_done(task, slot);
+ 	}
+ 		break;
+ 	default:
+@@ -2443,7 +2449,8 @@ static void slot_complete_v2_hw(struct hisi_hba *hisi_hba,
+ 	case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP:
+ 	{
+ 		ts->stat = SAS_SAM_STAT_GOOD;
+-		hisi_sas_sata_done(task, slot);
++		if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)
++			hisi_sas_sata_done(task, slot);
+ 		break;
+ 	}
+ 	default:
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 20e1607c62828..2f33e6b4a92fb 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -2257,7 +2257,8 @@ slot_err_v3_hw(struct hisi_hba *hisi_hba, struct sas_task *task,
+ 			ts->stat = SAS_OPEN_REJECT;
+ 			ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
+ 		}
+-		hisi_sas_sata_done(task, slot);
++		if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)
++			hisi_sas_sata_done(task, slot);
+ 		break;
+ 	case SAS_PROTOCOL_SMP:
+ 		ts->stat = SAS_SAM_STAT_CHECK_CONDITION;
+@@ -2384,7 +2385,8 @@ static void slot_complete_v3_hw(struct hisi_hba *hisi_hba,
+ 	case SAS_PROTOCOL_STP:
+ 	case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP:
+ 		ts->stat = SAS_SAM_STAT_GOOD;
+-		hisi_sas_sata_done(task, slot);
++		if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)
++			hisi_sas_sata_done(task, slot);
+ 		break;
+ 	default:
+ 		ts->stat = SAS_SAM_STAT_CHECK_CONDITION;
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index 198edf03f9297..d7f51b84f3c78 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -537,7 +537,7 @@ EXPORT_SYMBOL(scsi_host_alloc);
+ static int __scsi_host_match(struct device *dev, const void *data)
+ {
+ 	struct Scsi_Host *p;
+-	const unsigned short *hostnum = data;
++	const unsigned int *hostnum = data;
+ 
+ 	p = class_to_shost(dev);
+ 	return p->host_no == *hostnum;
+@@ -554,7 +554,7 @@ static int __scsi_host_match(struct device *dev, const void *data)
+  *	that scsi_host_get() took. The put_device() below dropped
+  *	the reference from class_find_device().
+  **/
+-struct Scsi_Host *scsi_host_lookup(unsigned short hostnum)
++struct Scsi_Host *scsi_host_lookup(unsigned int hostnum)
+ {
+ 	struct device *cdev;
+ 	struct Scsi_Host *shost = NULL;
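
The hosts.c change lines up the lookup helper with the type of Scsi_Host::host_no, which is an unsigned int: reading the caller's value back through an unsigned short only looks at half of the stored bytes, so lookups could match the wrong host (or none) depending on the value and the endianness. A userspace illustration of the mismatch, with an invented host number:

    #include <stdio.h>
    #include <string.h>

    /* The value actually stored by the caller (does not fit in 16 bits). */
    static unsigned int host_no = 65541;    /* 0x10005 */

    /* Wrong: reads the stored unsigned int back through a narrower type,
     * so only 2 of the 4 bytes are considered (which 2 depends on endianness). */
    static int match_as_short(const void *data)
    {
        unsigned short n;

        memcpy(&n, data, sizeof(n));
        return n == 5;
    }

    /* Right: read back exactly the type that was stored. */
    static int match_as_uint(const void *data)
    {
        unsigned int n;

        memcpy(&n, data, sizeof(n));
        return n == 5;
    }

    int main(void)
    {
        /* On little-endian the short version reports a bogus match with "host 5". */
        printf("unsigned short compare: match=%d\n", match_as_short(&host_no));
        printf("unsigned int   compare: match=%d\n", match_as_uint(&host_no));
        return 0;
    }
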
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 53f5492579cb7..5284584e4cd2b 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -138,6 +138,9 @@ _base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc);
+ static void
+ _base_clear_outstanding_commands(struct MPT3SAS_ADAPTER *ioc);
+ 
++static u32
++_base_readl_ext_retry(const volatile void __iomem *addr);
++
+ /**
+  * mpt3sas_base_check_cmd_timeout - Function
+  *		to check timeout and command termination due
+@@ -213,6 +216,20 @@ _base_readl_aero(const volatile void __iomem *addr)
+ 	return ret_val;
+ }
+ 
++static u32
++_base_readl_ext_retry(const volatile void __iomem *addr)
++{
++	u32 i, ret_val;
++
++	for (i = 0 ; i < 30 ; i++) {
++		ret_val = readl(addr);
++		if (ret_val == 0)
++			continue;
++	}
++
++	return ret_val;
++}
++
+ static inline u32
+ _base_readl(const volatile void __iomem *addr)
+ {
+@@ -940,7 +957,7 @@ mpt3sas_halt_firmware(struct MPT3SAS_ADAPTER *ioc)
+ 
+ 	dump_stack();
+ 
+-	doorbell = ioc->base_readl(&ioc->chip->Doorbell);
++	doorbell = ioc->base_readl_ext_retry(&ioc->chip->Doorbell);
+ 	if ((doorbell & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) {
+ 		mpt3sas_print_fault_code(ioc, doorbell &
+ 		    MPI2_DOORBELL_DATA_MASK);
+@@ -6686,7 +6703,7 @@ mpt3sas_base_get_iocstate(struct MPT3SAS_ADAPTER *ioc, int cooked)
+ {
+ 	u32 s, sc;
+ 
+-	s = ioc->base_readl(&ioc->chip->Doorbell);
++	s = ioc->base_readl_ext_retry(&ioc->chip->Doorbell);
+ 	sc = s & MPI2_IOC_STATE_MASK;
+ 	return cooked ? sc : s;
+ }
+@@ -6831,7 +6848,7 @@ _base_wait_for_doorbell_ack(struct MPT3SAS_ADAPTER *ioc, int timeout)
+ 					   __func__, count, timeout));
+ 			return 0;
+ 		} else if (int_status & MPI2_HIS_IOC2SYS_DB_STATUS) {
+-			doorbell = ioc->base_readl(&ioc->chip->Doorbell);
++			doorbell = ioc->base_readl_ext_retry(&ioc->chip->Doorbell);
+ 			if ((doorbell & MPI2_IOC_STATE_MASK) ==
+ 			    MPI2_IOC_STATE_FAULT) {
+ 				mpt3sas_print_fault_code(ioc, doorbell);
+@@ -6871,7 +6888,7 @@ _base_wait_for_doorbell_not_used(struct MPT3SAS_ADAPTER *ioc, int timeout)
+ 	count = 0;
+ 	cntdn = 1000 * timeout;
+ 	do {
+-		doorbell_reg = ioc->base_readl(&ioc->chip->Doorbell);
++		doorbell_reg = ioc->base_readl_ext_retry(&ioc->chip->Doorbell);
+ 		if (!(doorbell_reg & MPI2_DOORBELL_USED)) {
+ 			dhsprintk(ioc,
+ 				  ioc_info(ioc, "%s: successful count(%d), timeout(%d)\n",
+@@ -7019,7 +7036,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
+ 	__le32 *mfp;
+ 
+ 	/* make sure doorbell is not in use */
+-	if ((ioc->base_readl(&ioc->chip->Doorbell) & MPI2_DOORBELL_USED)) {
++	if ((ioc->base_readl_ext_retry(&ioc->chip->Doorbell) & MPI2_DOORBELL_USED)) {
+ 		ioc_err(ioc, "doorbell is in use (line=%d)\n", __LINE__);
+ 		return -EFAULT;
+ 	}
+@@ -7068,7 +7085,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
+ 	}
+ 
+ 	/* read the first two 16-bits, it gives the total length of the reply */
+-	reply[0] = le16_to_cpu(ioc->base_readl(&ioc->chip->Doorbell)
++	reply[0] = le16_to_cpu(ioc->base_readl_ext_retry(&ioc->chip->Doorbell)
+ 	    & MPI2_DOORBELL_DATA_MASK);
+ 	writel(0, &ioc->chip->HostInterruptStatus);
+ 	if ((_base_wait_for_doorbell_int(ioc, 5))) {
+@@ -7076,7 +7093,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
+ 			__LINE__);
+ 		return -EFAULT;
+ 	}
+-	reply[1] = le16_to_cpu(ioc->base_readl(&ioc->chip->Doorbell)
++	reply[1] = le16_to_cpu(ioc->base_readl_ext_retry(&ioc->chip->Doorbell)
+ 	    & MPI2_DOORBELL_DATA_MASK);
+ 	writel(0, &ioc->chip->HostInterruptStatus);
+ 
+@@ -7087,10 +7104,10 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
+ 			return -EFAULT;
+ 		}
+ 		if (i >=  reply_bytes/2) /* overflow case */
+-			ioc->base_readl(&ioc->chip->Doorbell);
++			ioc->base_readl_ext_retry(&ioc->chip->Doorbell);
+ 		else
+ 			reply[i] = le16_to_cpu(
+-			    ioc->base_readl(&ioc->chip->Doorbell)
++			    ioc->base_readl_ext_retry(&ioc->chip->Doorbell)
+ 			    & MPI2_DOORBELL_DATA_MASK);
+ 		writel(0, &ioc->chip->HostInterruptStatus);
+ 	}
+@@ -7949,7 +7966,7 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
+ 			goto out;
+ 		}
+ 
+-		host_diagnostic = ioc->base_readl(&ioc->chip->HostDiagnostic);
++		host_diagnostic = ioc->base_readl_ext_retry(&ioc->chip->HostDiagnostic);
+ 		drsprintk(ioc,
+ 			  ioc_info(ioc, "wrote magic sequence: count(%d), host_diagnostic(0x%08x)\n",
+ 				   count, host_diagnostic));
+@@ -7969,7 +7986,7 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
+ 	for (count = 0; count < (300000000 /
+ 		MPI2_HARD_RESET_PCIE_SECOND_READ_DELAY_MICRO_SEC); count++) {
+ 
+-		host_diagnostic = ioc->base_readl(&ioc->chip->HostDiagnostic);
++		host_diagnostic = ioc->base_readl_ext_retry(&ioc->chip->HostDiagnostic);
+ 
+ 		if (host_diagnostic == 0xFFFFFFFF) {
+ 			ioc_info(ioc,
+@@ -8359,10 +8376,13 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
+ 	ioc->rdpq_array_enable_assigned = 0;
+ 	ioc->use_32bit_dma = false;
+ 	ioc->dma_mask = 64;
+-	if (ioc->is_aero_ioc)
++	if (ioc->is_aero_ioc) {
+ 		ioc->base_readl = &_base_readl_aero;
+-	else
++		ioc->base_readl_ext_retry = &_base_readl_ext_retry;
++	} else {
+ 		ioc->base_readl = &_base_readl;
++		ioc->base_readl_ext_retry = &_base_readl;
++	}
+ 	r = mpt3sas_base_map_resources(ioc);
+ 	if (r)
+ 		goto out_free_resources;
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.h b/drivers/scsi/mpt3sas/mpt3sas_base.h
+index 05364aa15ecdb..10055c7e4a9f7 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.h
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.h
+@@ -1618,6 +1618,7 @@ struct MPT3SAS_ADAPTER {
+ 	u8		diag_trigger_active;
+ 	u8		atomic_desc_capable;
+ 	BASE_READ_REG	base_readl;
++	BASE_READ_REG	base_readl_ext_retry;
+ 	struct SL_WH_MASTER_TRIGGER_T diag_trigger_master;
+ 	struct SL_WH_EVENT_TRIGGERS_T diag_trigger_event;
+ 	struct SL_WH_SCSI_TRIGGERS_T diag_trigger_scsi;
+diff --git a/drivers/scsi/qedf/qedf_dbg.h b/drivers/scsi/qedf/qedf_dbg.h
+index f4d81127239eb..5ec2b817c694a 100644
+--- a/drivers/scsi/qedf/qedf_dbg.h
++++ b/drivers/scsi/qedf/qedf_dbg.h
+@@ -59,6 +59,8 @@ extern uint qedf_debug;
+ #define QEDF_LOG_NOTICE	0x40000000	/* Notice logs */
+ #define QEDF_LOG_WARN		0x80000000	/* Warning logs */
+ 
++#define QEDF_DEBUGFS_LOG_LEN (2 * PAGE_SIZE)
++
+ /* Debug context structure */
+ struct qedf_dbg_ctx {
+ 	unsigned int host_no;
+diff --git a/drivers/scsi/qedf/qedf_debugfs.c b/drivers/scsi/qedf/qedf_debugfs.c
+index a3ed681c8ce3f..451fd236bfd05 100644
+--- a/drivers/scsi/qedf/qedf_debugfs.c
++++ b/drivers/scsi/qedf/qedf_debugfs.c
+@@ -8,6 +8,7 @@
+ #include <linux/uaccess.h>
+ #include <linux/debugfs.h>
+ #include <linux/module.h>
++#include <linux/vmalloc.h>
+ 
+ #include "qedf.h"
+ #include "qedf_dbg.h"
+@@ -98,7 +99,9 @@ static ssize_t
+ qedf_dbg_fp_int_cmd_read(struct file *filp, char __user *buffer, size_t count,
+ 			 loff_t *ppos)
+ {
++	ssize_t ret;
+ 	size_t cnt = 0;
++	char *cbuf;
+ 	int id;
+ 	struct qedf_fastpath *fp = NULL;
+ 	struct qedf_dbg_ctx *qedf_dbg =
+@@ -108,19 +111,25 @@ qedf_dbg_fp_int_cmd_read(struct file *filp, char __user *buffer, size_t count,
+ 
+ 	QEDF_INFO(qedf_dbg, QEDF_LOG_DEBUGFS, "entered\n");
+ 
+-	cnt = sprintf(buffer, "\nFastpath I/O completions\n\n");
++	cbuf = vmalloc(QEDF_DEBUGFS_LOG_LEN);
++	if (!cbuf)
++		return 0;
++
++	cnt += scnprintf(cbuf + cnt, QEDF_DEBUGFS_LOG_LEN - cnt, "\nFastpath I/O completions\n\n");
+ 
+ 	for (id = 0; id < qedf->num_queues; id++) {
+ 		fp = &(qedf->fp_array[id]);
+ 		if (fp->sb_id == QEDF_SB_ID_NULL)
+ 			continue;
+-		cnt += sprintf((buffer + cnt), "#%d: %lu\n", id,
+-			       fp->completions);
++		cnt += scnprintf(cbuf + cnt, QEDF_DEBUGFS_LOG_LEN - cnt,
++				 "#%d: %lu\n", id, fp->completions);
+ 	}
+ 
+-	cnt = min_t(int, count, cnt - *ppos);
+-	*ppos += cnt;
+-	return cnt;
++	ret = simple_read_from_buffer(buffer, count, ppos, cbuf, cnt);
++
++	vfree(cbuf);
++
++	return ret;
+ }
+ 
+ static ssize_t
+@@ -138,15 +147,14 @@ qedf_dbg_debug_cmd_read(struct file *filp, char __user *buffer, size_t count,
+ 			loff_t *ppos)
+ {
+ 	int cnt;
++	char cbuf[32];
+ 	struct qedf_dbg_ctx *qedf_dbg =
+ 				(struct qedf_dbg_ctx *)filp->private_data;
+ 
+ 	QEDF_INFO(qedf_dbg, QEDF_LOG_DEBUGFS, "debug mask=0x%x\n", qedf_debug);
+-	cnt = sprintf(buffer, "debug mask = 0x%x\n", qedf_debug);
++	cnt = scnprintf(cbuf, sizeof(cbuf), "debug mask = 0x%x\n", qedf_debug);
+ 
+-	cnt = min_t(int, count, cnt - *ppos);
+-	*ppos += cnt;
+-	return cnt;
++	return simple_read_from_buffer(buffer, count, ppos, cbuf, cnt);
+ }
+ 
+ static ssize_t
+@@ -185,18 +193,17 @@ qedf_dbg_stop_io_on_error_cmd_read(struct file *filp, char __user *buffer,
+ 				   size_t count, loff_t *ppos)
+ {
+ 	int cnt;
++	char cbuf[7];
+ 	struct qedf_dbg_ctx *qedf_dbg =
+ 				(struct qedf_dbg_ctx *)filp->private_data;
+ 	struct qedf_ctx *qedf = container_of(qedf_dbg,
+ 	    struct qedf_ctx, dbg_ctx);
+ 
+ 	QEDF_INFO(qedf_dbg, QEDF_LOG_DEBUGFS, "entered\n");
+-	cnt = sprintf(buffer, "%s\n",
++	cnt = scnprintf(cbuf, sizeof(cbuf), "%s\n",
+ 	    qedf->stop_io_on_error ? "true" : "false");
+ 
+-	cnt = min_t(int, count, cnt - *ppos);
+-	*ppos += cnt;
+-	return cnt;
++	return simple_read_from_buffer(buffer, count, ppos, cbuf, cnt);
+ }
+ 
+ static ssize_t
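
The qedf debugfs fixes above replace sprintf() directly into the read() buffer, which ignored the user-supplied count and offset and treated a __user pointer as kernel memory, with scnprintf() into a bounded kernel buffer followed by simple_read_from_buffer(). A minimal kernel-style sketch of that pattern; my_value and my_debug_read are placeholders:

    #include <linux/debugfs.h>
    #include <linux/fs.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    static unsigned int my_value = 0x2a;

    static ssize_t my_debug_read(struct file *filp, char __user *buffer,
                                 size_t count, loff_t *ppos)
    {
        char buf[32];
        int len;

        /* Format into a bounded kernel buffer first ... */
        len = scnprintf(buf, sizeof(buf), "value = 0x%x\n", my_value);

        /* ... then let the helper honour count, *ppos and the user copy. */
        return simple_read_from_buffer(buffer, count, ppos, buf, len);
    }

    static const struct file_operations my_debug_fops = {
        .owner = THIS_MODULE,
        .open  = simple_open,
        .read  = my_debug_read,
    };

Such a file would typically be hooked up with debugfs_create_file("my_value", 0444, parent, NULL, &my_debug_fops).
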
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 367fba27fe699..33d4914e19fa6 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -5549,7 +5549,7 @@ static void qla_get_login_template(scsi_qla_host_t *vha)
+ 	__be32 *q;
+ 
+ 	memset(ha->init_cb, 0, ha->init_cb_size);
+-	sz = min_t(int, sizeof(struct fc_els_csp), ha->init_cb_size);
++	sz = min_t(int, sizeof(struct fc_els_flogi), ha->init_cb_size);
+ 	rval = qla24xx_get_port_login_templ(vha, ha->init_cb_dma,
+ 					    ha->init_cb, sz);
+ 	if (rval != QLA_SUCCESS) {
+diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
+index b2a3988e1e159..675332e49a7b0 100644
+--- a/drivers/scsi/qla4xxx/ql4_os.c
++++ b/drivers/scsi/qla4xxx/ql4_os.c
+@@ -968,6 +968,11 @@ static int qla4xxx_set_chap_entry(struct Scsi_Host *shost, void *data, int len)
+ 	memset(&chap_rec, 0, sizeof(chap_rec));
+ 
+ 	nla_for_each_attr(attr, data, len, rem) {
++		if (nla_len(attr) < sizeof(*param_info)) {
++			rc = -EINVAL;
++			goto exit_set_chap;
++		}
++
+ 		param_info = nla_data(attr);
+ 
+ 		switch (param_info->param) {
+@@ -2750,6 +2755,11 @@ qla4xxx_iface_set_param(struct Scsi_Host *shost, void *data, uint32_t len)
+ 	}
+ 
+ 	nla_for_each_attr(attr, data, len, rem) {
++		if (nla_len(attr) < sizeof(*iface_param)) {
++			rval = -EINVAL;
++			goto exit_init_fw_cb;
++		}
++
+ 		iface_param = nla_data(attr);
+ 
+ 		if (iface_param->param_type == ISCSI_NET_PARAM) {
+@@ -8104,6 +8114,11 @@ qla4xxx_sysfs_ddb_set_param(struct iscsi_bus_flash_session *fnode_sess,
+ 
+ 	memset((void *)&chap_tbl, 0, sizeof(chap_tbl));
+ 	nla_for_each_attr(attr, data, len, rem) {
++		if (nla_len(attr) < sizeof(*fnode_param)) {
++			rc = -EINVAL;
++			goto exit_set_param;
++		}
++
+ 		fnode_param = nla_data(attr);
+ 
+ 		switch (fnode_param->param) {
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index e527ece12453a..3075b2ddf7a69 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -3014,14 +3014,15 @@ iscsi_if_destroy_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev
+ }
+ 
+ static int
+-iscsi_if_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev)
++iscsi_if_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev, u32 rlen)
+ {
+ 	char *data = (char*)ev + sizeof(*ev);
+ 	struct iscsi_cls_conn *conn;
+ 	struct iscsi_cls_session *session;
+ 	int err = 0, value = 0, state;
+ 
+-	if (ev->u.set_param.len > PAGE_SIZE)
++	if (ev->u.set_param.len > rlen ||
++	    ev->u.set_param.len > PAGE_SIZE)
+ 		return -EINVAL;
+ 
+ 	session = iscsi_session_lookup(ev->u.set_param.sid);
+@@ -3029,6 +3030,10 @@ iscsi_if_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev)
+ 	if (!conn || !session)
+ 		return -EINVAL;
+ 
++	/* data will be regarded as NULL-ended string, do length check */
++	if (strlen(data) > ev->u.set_param.len)
++		return -EINVAL;
++
+ 	switch (ev->u.set_param.param) {
+ 	case ISCSI_PARAM_SESS_RECOVERY_TMO:
+ 		sscanf(data, "%d", &value);
+@@ -3118,7 +3123,7 @@ put_ep:
+ 
+ static int
+ iscsi_if_transport_ep(struct iscsi_transport *transport,
+-		      struct iscsi_uevent *ev, int msg_type)
++		      struct iscsi_uevent *ev, int msg_type, u32 rlen)
+ {
+ 	struct iscsi_endpoint *ep;
+ 	int rc = 0;
+@@ -3126,7 +3131,10 @@ iscsi_if_transport_ep(struct iscsi_transport *transport,
+ 	switch (msg_type) {
+ 	case ISCSI_UEVENT_TRANSPORT_EP_CONNECT_THROUGH_HOST:
+ 	case ISCSI_UEVENT_TRANSPORT_EP_CONNECT:
+-		rc = iscsi_if_ep_connect(transport, ev, msg_type);
++		if (rlen < sizeof(struct sockaddr))
++			rc = -EINVAL;
++		else
++			rc = iscsi_if_ep_connect(transport, ev, msg_type);
+ 		break;
+ 	case ISCSI_UEVENT_TRANSPORT_EP_POLL:
+ 		if (!transport->ep_poll)
+@@ -3150,12 +3158,15 @@ iscsi_if_transport_ep(struct iscsi_transport *transport,
+ 
+ static int
+ iscsi_tgt_dscvr(struct iscsi_transport *transport,
+-		struct iscsi_uevent *ev)
++		struct iscsi_uevent *ev, u32 rlen)
+ {
+ 	struct Scsi_Host *shost;
+ 	struct sockaddr *dst_addr;
+ 	int err;
+ 
++	if (rlen < sizeof(*dst_addr))
++		return -EINVAL;
++
+ 	if (!transport->tgt_dscvr)
+ 		return -EINVAL;
+ 
+@@ -3176,7 +3187,7 @@ iscsi_tgt_dscvr(struct iscsi_transport *transport,
+ 
+ static int
+ iscsi_set_host_param(struct iscsi_transport *transport,
+-		     struct iscsi_uevent *ev)
++		     struct iscsi_uevent *ev, u32 rlen)
+ {
+ 	char *data = (char*)ev + sizeof(*ev);
+ 	struct Scsi_Host *shost;
+@@ -3185,7 +3196,8 @@ iscsi_set_host_param(struct iscsi_transport *transport,
+ 	if (!transport->set_host_param)
+ 		return -ENOSYS;
+ 
+-	if (ev->u.set_host_param.len > PAGE_SIZE)
++	if (ev->u.set_host_param.len > rlen ||
++	    ev->u.set_host_param.len > PAGE_SIZE)
+ 		return -EINVAL;
+ 
+ 	shost = scsi_host_lookup(ev->u.set_host_param.host_no);
+@@ -3195,6 +3207,10 @@ iscsi_set_host_param(struct iscsi_transport *transport,
+ 		return -ENODEV;
+ 	}
+ 
++	/* see similar check in iscsi_if_set_param() */
++	if (strlen(data) > ev->u.set_host_param.len)
++		return -EINVAL;
++
+ 	err = transport->set_host_param(shost, ev->u.set_host_param.param,
+ 					data, ev->u.set_host_param.len);
+ 	scsi_host_put(shost);
+@@ -3202,12 +3218,15 @@ iscsi_set_host_param(struct iscsi_transport *transport,
+ }
+ 
+ static int
+-iscsi_set_path(struct iscsi_transport *transport, struct iscsi_uevent *ev)
++iscsi_set_path(struct iscsi_transport *transport, struct iscsi_uevent *ev, u32 rlen)
+ {
+ 	struct Scsi_Host *shost;
+ 	struct iscsi_path *params;
+ 	int err;
+ 
++	if (rlen < sizeof(*params))
++		return -EINVAL;
++
+ 	if (!transport->set_path)
+ 		return -ENOSYS;
+ 
+@@ -3267,12 +3286,15 @@ iscsi_set_iface_params(struct iscsi_transport *transport,
+ }
+ 
+ static int
+-iscsi_send_ping(struct iscsi_transport *transport, struct iscsi_uevent *ev)
++iscsi_send_ping(struct iscsi_transport *transport, struct iscsi_uevent *ev, u32 rlen)
+ {
+ 	struct Scsi_Host *shost;
+ 	struct sockaddr *dst_addr;
+ 	int err;
+ 
++	if (rlen < sizeof(*dst_addr))
++		return -EINVAL;
++
+ 	if (!transport->send_ping)
+ 		return -ENOSYS;
+ 
+@@ -3770,13 +3792,12 @@ exit_host_stats:
+ }
+ 
+ static int iscsi_if_transport_conn(struct iscsi_transport *transport,
+-				   struct nlmsghdr *nlh)
++				   struct nlmsghdr *nlh, u32 pdu_len)
+ {
+ 	struct iscsi_uevent *ev = nlmsg_data(nlh);
+ 	struct iscsi_cls_session *session;
+ 	struct iscsi_cls_conn *conn = NULL;
+ 	struct iscsi_endpoint *ep;
+-	uint32_t pdu_len;
+ 	int err = 0;
+ 
+ 	switch (nlh->nlmsg_type) {
+@@ -3861,8 +3882,6 @@ static int iscsi_if_transport_conn(struct iscsi_transport *transport,
+ 
+ 		break;
+ 	case ISCSI_UEVENT_SEND_PDU:
+-		pdu_len = nlh->nlmsg_len - sizeof(*nlh) - sizeof(*ev);
+-
+ 		if ((ev->u.send_pdu.hdr_size > pdu_len) ||
+ 		    (ev->u.send_pdu.data_size > (pdu_len - ev->u.send_pdu.hdr_size))) {
+ 			err = -EINVAL;
+@@ -3892,6 +3911,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 	struct iscsi_internal *priv;
+ 	struct iscsi_cls_session *session;
+ 	struct iscsi_endpoint *ep = NULL;
++	u32 rlen;
+ 
+ 	if (!netlink_capable(skb, CAP_SYS_ADMIN))
+ 		return -EPERM;
+@@ -3911,6 +3931,13 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 
+ 	portid = NETLINK_CB(skb).portid;
+ 
++	/*
++	 * Even though the remaining payload may not be regarded as nlattr,
++	 * (like address or something else), calculate the remaining length
++	 * here to ease following length checks.
++	 */
++	rlen = nlmsg_attrlen(nlh, sizeof(*ev));
++
+ 	switch (nlh->nlmsg_type) {
+ 	case ISCSI_UEVENT_CREATE_SESSION:
+ 		err = iscsi_if_create_session(priv, ep, ev,
+@@ -3967,7 +3994,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 			err = -EINVAL;
+ 		break;
+ 	case ISCSI_UEVENT_SET_PARAM:
+-		err = iscsi_if_set_param(transport, ev);
++		err = iscsi_if_set_param(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_CREATE_CONN:
+ 	case ISCSI_UEVENT_DESTROY_CONN:
+@@ -3975,7 +4002,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 	case ISCSI_UEVENT_START_CONN:
+ 	case ISCSI_UEVENT_BIND_CONN:
+ 	case ISCSI_UEVENT_SEND_PDU:
+-		err = iscsi_if_transport_conn(transport, nlh);
++		err = iscsi_if_transport_conn(transport, nlh, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_GET_STATS:
+ 		err = iscsi_if_get_stats(transport, nlh);
+@@ -3984,23 +4011,22 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 	case ISCSI_UEVENT_TRANSPORT_EP_POLL:
+ 	case ISCSI_UEVENT_TRANSPORT_EP_DISCONNECT:
+ 	case ISCSI_UEVENT_TRANSPORT_EP_CONNECT_THROUGH_HOST:
+-		err = iscsi_if_transport_ep(transport, ev, nlh->nlmsg_type);
++		err = iscsi_if_transport_ep(transport, ev, nlh->nlmsg_type, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_TGT_DSCVR:
+-		err = iscsi_tgt_dscvr(transport, ev);
++		err = iscsi_tgt_dscvr(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_SET_HOST_PARAM:
+-		err = iscsi_set_host_param(transport, ev);
++		err = iscsi_set_host_param(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_PATH_UPDATE:
+-		err = iscsi_set_path(transport, ev);
++		err = iscsi_set_path(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_SET_IFACE_PARAMS:
+-		err = iscsi_set_iface_params(transport, ev,
+-					     nlmsg_attrlen(nlh, sizeof(*ev)));
++		err = iscsi_set_iface_params(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_PING:
+-		err = iscsi_send_ping(transport, ev);
++		err = iscsi_send_ping(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_GET_CHAP:
+ 		err = iscsi_get_chap(transport, nlh);
+@@ -4009,13 +4035,10 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 		err = iscsi_delete_chap(transport, ev);
+ 		break;
+ 	case ISCSI_UEVENT_SET_FLASHNODE_PARAMS:
+-		err = iscsi_set_flashnode_param(transport, ev,
+-						nlmsg_attrlen(nlh,
+-							      sizeof(*ev)));
++		err = iscsi_set_flashnode_param(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_NEW_FLASHNODE:
+-		err = iscsi_new_flashnode(transport, ev,
+-					  nlmsg_attrlen(nlh, sizeof(*ev)));
++		err = iscsi_new_flashnode(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_DEL_FLASHNODE:
+ 		err = iscsi_del_flashnode(transport, ev);
+@@ -4030,8 +4053,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
+ 		err = iscsi_logout_flashnode_sid(transport, ev);
+ 		break;
+ 	case ISCSI_UEVENT_SET_CHAP:
+-		err = iscsi_set_chap(transport, ev,
+-				     nlmsg_attrlen(nlh, sizeof(*ev)));
++		err = iscsi_set_chap(transport, ev, rlen);
+ 		break;
+ 	case ISCSI_UEVENT_GET_HOST_STATS:
+ 		err = iscsi_get_host_stats(transport, nlh);
+diff --git a/drivers/soc/qcom/ocmem.c b/drivers/soc/qcom/ocmem.c
+index aaddc3cc53b7f..ef7c1748242ac 100644
+--- a/drivers/soc/qcom/ocmem.c
++++ b/drivers/soc/qcom/ocmem.c
+@@ -80,8 +80,8 @@ struct ocmem {
+ #define OCMEM_HW_VERSION_MINOR(val)		FIELD_GET(GENMASK(27, 16), val)
+ #define OCMEM_HW_VERSION_STEP(val)		FIELD_GET(GENMASK(15, 0), val)
+ 
+-#define OCMEM_HW_PROFILE_NUM_PORTS(val)		FIELD_PREP(0x0000000f, (val))
+-#define OCMEM_HW_PROFILE_NUM_MACROS(val)	FIELD_PREP(0x00003f00, (val))
++#define OCMEM_HW_PROFILE_NUM_PORTS(val)		FIELD_GET(0x0000000f, (val))
++#define OCMEM_HW_PROFILE_NUM_MACROS(val)	FIELD_GET(0x00003f00, (val))
+ 
+ #define OCMEM_HW_PROFILE_LAST_REGN_HALFSIZE	0x00010000
+ #define OCMEM_HW_PROFILE_INTERLEAVING		0x00020000
+diff --git a/drivers/soc/qcom/smem.c b/drivers/soc/qcom/smem.c
+index b0d59e815c3b7..a516b8b5efac9 100644
+--- a/drivers/soc/qcom/smem.c
++++ b/drivers/soc/qcom/smem.c
+@@ -724,7 +724,7 @@ EXPORT_SYMBOL_GPL(qcom_smem_get_free_space);
+ 
+ static bool addr_in_range(void __iomem *base, size_t size, void *addr)
+ {
+-	return base && (addr >= base && addr < base + size);
++	return base && ((void __iomem *)addr >= base && (void __iomem *)addr < base + size);
+ }
+ 
+ /**
+diff --git a/drivers/spi/spi-mpc512x-psc.c b/drivers/spi/spi-mpc512x-psc.c
+index 99aeef28a4774..5cecca1bef026 100644
+--- a/drivers/spi/spi-mpc512x-psc.c
++++ b/drivers/spi/spi-mpc512x-psc.c
+@@ -53,7 +53,7 @@ struct mpc512x_psc_spi {
+ 	int type;
+ 	void __iomem *psc;
+ 	struct mpc512x_psc_fifo __iomem *fifo;
+-	unsigned int irq;
++	int irq;
+ 	u8 bits_per_word;
+ 	u32 mclk_rate;
+ 
+diff --git a/drivers/spi/spi-tegra20-sflash.c b/drivers/spi/spi-tegra20-sflash.c
+index 4286310628a2b..0c5507473f972 100644
+--- a/drivers/spi/spi-tegra20-sflash.c
++++ b/drivers/spi/spi-tegra20-sflash.c
+@@ -455,7 +455,11 @@ static int tegra_sflash_probe(struct platform_device *pdev)
+ 		goto exit_free_master;
+ 	}
+ 
+-	tsd->irq = platform_get_irq(pdev, 0);
++	ret = platform_get_irq(pdev, 0);
++	if (ret < 0)
++		goto exit_free_master;
++	tsd->irq = ret;
++
+ 	ret = request_irq(tsd->irq, tegra_sflash_isr, 0,
+ 			dev_name(&pdev->dev), tsd);
+ 	if (ret < 0) {
+diff --git a/drivers/staging/media/av7110/sp8870.c b/drivers/staging/media/av7110/sp8870.c
+index 9767159aeb9b2..abf5c72607b64 100644
+--- a/drivers/staging/media/av7110/sp8870.c
++++ b/drivers/staging/media/av7110/sp8870.c
+@@ -606,4 +606,4 @@ MODULE_DESCRIPTION("Spase SP8870 DVB-T Demodulator driver");
+ MODULE_AUTHOR("Juergen Peitz");
+ MODULE_LICENSE("GPL");
+ 
+-EXPORT_SYMBOL(sp8870_attach);
++EXPORT_SYMBOL_GPL(sp8870_attach);
+diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c
+index 134e2b9fa7d9a..84a41792cb4b8 100644
+--- a/drivers/staging/media/rkvdec/rkvdec.c
++++ b/drivers/staging/media/rkvdec/rkvdec.c
+@@ -120,7 +120,7 @@ static const struct rkvdec_coded_fmt_desc rkvdec_coded_fmts[] = {
+ 			.max_width = 4096,
+ 			.step_width = 16,
+ 			.min_height = 48,
+-			.max_height = 2304,
++			.max_height = 2560,
+ 			.step_height = 16,
+ 		},
+ 		.ctrls = &rkvdec_h264_ctrls,
+diff --git a/drivers/thermal/imx8mm_thermal.c b/drivers/thermal/imx8mm_thermal.c
+index d4b40869c7d7b..dd474166ca671 100644
+--- a/drivers/thermal/imx8mm_thermal.c
++++ b/drivers/thermal/imx8mm_thermal.c
+@@ -179,10 +179,8 @@ static int imx8mm_tmu_probe_set_calib_v1(struct platform_device *pdev,
+ 	int ret;
+ 
+ 	ret = nvmem_cell_read_u32(&pdev->dev, "calib", &ana0);
+-	if (ret) {
+-		dev_warn(dev, "Failed to read OCOTP nvmem cell (%d).\n", ret);
+-		return ret;
+-	}
++	if (ret)
++		return dev_err_probe(dev, ret, "Failed to read OCOTP nvmem cell\n");
+ 
+ 	writel(FIELD_PREP(TASR_BUF_VREF_MASK,
+ 			  FIELD_GET(ANA0_BUF_VREF_MASK, ana0)) |
+diff --git a/drivers/thermal/mediatek/lvts_thermal.c b/drivers/thermal/mediatek/lvts_thermal.c
+index b693fac2d6779..b0d71b74a928e 100644
+--- a/drivers/thermal/mediatek/lvts_thermal.c
++++ b/drivers/thermal/mediatek/lvts_thermal.c
+@@ -65,7 +65,12 @@
+ #define LVTS_HW_FILTER				0x2
+ #define LVTS_TSSEL_CONF				0x13121110
+ #define LVTS_CALSCALE_CONF			0x300
+-#define LVTS_MONINT_CONF			0x9FBF7BDE
++#define LVTS_MONINT_CONF			0x8300318C
++
++#define LVTS_MONINT_OFFSET_SENSOR0		0xC
++#define LVTS_MONINT_OFFSET_SENSOR1		0x180
++#define LVTS_MONINT_OFFSET_SENSOR2		0x3000
++#define LVTS_MONINT_OFFSET_SENSOR3		0x3000000
+ 
+ #define LVTS_INT_SENSOR0			0x0009001F
+ #define LVTS_INT_SENSOR1			0x001203E0
+@@ -83,6 +88,8 @@
+ 
+ #define LVTS_HW_SHUTDOWN_MT8195		105000
+ 
++#define LVTS_MINIMUM_THRESHOLD		20000
++
+ static int golden_temp = LVTS_GOLDEN_TEMP_DEFAULT;
+ static int coeff_b = LVTS_COEFF_B;
+ 
+@@ -110,6 +117,8 @@ struct lvts_sensor {
+ 	void __iomem *base;
+ 	int id;
+ 	int dt_id;
++	int low_thresh;
++	int high_thresh;
+ };
+ 
+ struct lvts_ctrl {
+@@ -119,6 +128,8 @@ struct lvts_ctrl {
+ 	int num_lvts_sensor;
+ 	int mode;
+ 	void __iomem *base;
++	int low_thresh;
++	int high_thresh;
+ };
+ 
+ struct lvts_domain {
+@@ -290,32 +301,84 @@ static int lvts_get_temp(struct thermal_zone_device *tz, int *temp)
+ 	return 0;
+ }
+ 
++static void lvts_update_irq_mask(struct lvts_ctrl *lvts_ctrl)
++{
++	u32 masks[] = {
++		LVTS_MONINT_OFFSET_SENSOR0,
++		LVTS_MONINT_OFFSET_SENSOR1,
++		LVTS_MONINT_OFFSET_SENSOR2,
++		LVTS_MONINT_OFFSET_SENSOR3,
++	};
++	u32 value = 0;
++	int i;
++
++	value = readl(LVTS_MONINT(lvts_ctrl->base));
++
++	for (i = 0; i < ARRAY_SIZE(masks); i++) {
++		if (lvts_ctrl->sensors[i].high_thresh == lvts_ctrl->high_thresh
++		    && lvts_ctrl->sensors[i].low_thresh == lvts_ctrl->low_thresh)
++			value |= masks[i];
++		else
++			value &= ~masks[i];
++	}
++
++	writel(value, LVTS_MONINT(lvts_ctrl->base));
++}
++
++static bool lvts_should_update_thresh(struct lvts_ctrl *lvts_ctrl, int high)
++{
++	int i;
++
++	if (high > lvts_ctrl->high_thresh)
++		return true;
++
++	for (i = 0; i < lvts_ctrl->num_lvts_sensor; i++)
++		if (lvts_ctrl->sensors[i].high_thresh == lvts_ctrl->high_thresh
++		    && lvts_ctrl->sensors[i].low_thresh == lvts_ctrl->low_thresh)
++			return false;
++
++	return true;
++}
++
+ static int lvts_set_trips(struct thermal_zone_device *tz, int low, int high)
+ {
+ 	struct lvts_sensor *lvts_sensor = thermal_zone_device_priv(tz);
++	struct lvts_ctrl *lvts_ctrl = container_of(lvts_sensor, struct lvts_ctrl, sensors[lvts_sensor->id]);
+ 	void __iomem *base = lvts_sensor->base;
+-	u32 raw_low = lvts_temp_to_raw(low);
++	u32 raw_low = lvts_temp_to_raw(low != -INT_MAX ? low : LVTS_MINIMUM_THRESHOLD);
+ 	u32 raw_high = lvts_temp_to_raw(high);
++	bool should_update_thresh;
++
++	lvts_sensor->low_thresh = low;
++	lvts_sensor->high_thresh = high;
++
++	should_update_thresh = lvts_should_update_thresh(lvts_ctrl, high);
++	if (should_update_thresh) {
++		lvts_ctrl->high_thresh = high;
++		lvts_ctrl->low_thresh = low;
++	}
++	lvts_update_irq_mask(lvts_ctrl);
++
++	if (!should_update_thresh)
++		return 0;
+ 
+ 	/*
+-	 * Hot to normal temperature threshold
++	 * Low offset temperature threshold
+ 	 *
+-	 * LVTS_H2NTHRE
++	 * LVTS_OFFSETL
+ 	 *
+ 	 * Bits:
+ 	 *
+ 	 * 14-0 : Raw temperature for threshold
+ 	 */
+-	if (low != -INT_MAX) {
+-		pr_debug("%s: Setting low limit temperature interrupt: %d\n",
+-			 thermal_zone_device_type(tz), low);
+-		writel(raw_low, LVTS_H2NTHRE(base));
+-	}
++	pr_debug("%s: Setting low limit temperature interrupt: %d\n",
++		 thermal_zone_device_type(tz), low);
++	writel(raw_low, LVTS_OFFSETL(base));
+ 
+ 	/*
+-	 * Hot temperature threshold
++	 * High offset temperature threshold
+ 	 *
+-	 * LVTS_HTHRE
++	 * LVTS_OFFSETH
+ 	 *
+ 	 * Bits:
+ 	 *
+@@ -323,7 +386,7 @@ static int lvts_set_trips(struct thermal_zone_device *tz, int low, int high)
+ 	 */
+ 	pr_debug("%s: Setting high limit temperature interrupt: %d\n",
+ 		 thermal_zone_device_type(tz), high);
+-	writel(raw_high, LVTS_HTHRE(base));
++	writel(raw_high, LVTS_OFFSETH(base));
+ 
+ 	return 0;
+ }
+@@ -451,7 +514,7 @@ static irqreturn_t lvts_irq_handler(int irq, void *data)
+ 
+ 	for (i = 0; i < lvts_td->num_lvts_ctrl; i++) {
+ 
+-		aux = lvts_ctrl_irq_handler(lvts_td->lvts_ctrl);
++		aux = lvts_ctrl_irq_handler(&lvts_td->lvts_ctrl[i]);
+ 		if (aux != IRQ_HANDLED)
+ 			continue;
+ 
+@@ -521,6 +584,9 @@ static int lvts_sensor_init(struct device *dev, struct lvts_ctrl *lvts_ctrl,
+ 		 */
+ 		lvts_sensor[i].msr = lvts_ctrl_data->mode == LVTS_MSR_IMMEDIATE_MODE ?
+ 			imm_regs[i] : msr_regs[i];
++
++		lvts_sensor[i].low_thresh = INT_MIN;
++		lvts_sensor[i].high_thresh = INT_MIN;
+ 	};
+ 
+ 	lvts_ctrl->num_lvts_sensor = lvts_ctrl_data->num_lvts_sensor;
+@@ -688,6 +754,9 @@ static int lvts_ctrl_init(struct device *dev, struct lvts_domain *lvts_td,
+ 		 */
+ 		lvts_ctrl[i].hw_tshut_raw_temp =
+ 			lvts_temp_to_raw(lvts_data->lvts_ctrl[i].hw_tshut_temp);
++
++		lvts_ctrl[i].low_thresh = INT_MIN;
++		lvts_ctrl[i].high_thresh = INT_MIN;
+ 	}
+ 
+ 	/*
+@@ -896,24 +965,6 @@ static int lvts_ctrl_configure(struct device *dev, struct lvts_ctrl *lvts_ctrl)
+ 			LVTS_HW_FILTER << 3 | LVTS_HW_FILTER;
+ 	writel(value, LVTS_MSRCTL0(lvts_ctrl->base));
+ 
+-	/*
+-	 * LVTS_MSRCTL1 : Measurement control
+-	 *
+-	 * Bits:
+-	 *
+-	 * 9: Ignore MSRCTL0 config and do immediate measurement on sensor3
+-	 * 6: Ignore MSRCTL0 config and do immediate measurement on sensor2
+-	 * 5: Ignore MSRCTL0 config and do immediate measurement on sensor1
+-	 * 4: Ignore MSRCTL0 config and do immediate measurement on sensor0
+-	 *
+-	 * That configuration will ignore the filtering and the delays
+-	 * introduced below in MONCTL1 and MONCTL2
+-	 */
+-	if (lvts_ctrl->mode == LVTS_MSR_IMMEDIATE_MODE) {
+-		value = BIT(9) | BIT(6) | BIT(5) | BIT(4);
+-		writel(value, LVTS_MSRCTL1(lvts_ctrl->base));
+-	}
+-
+ 	/*
+ 	 * LVTS_MONCTL1 : Period unit and group interval configuration
+ 	 *
+@@ -979,6 +1030,15 @@ static int lvts_ctrl_start(struct device *dev, struct lvts_ctrl *lvts_ctrl)
+ 	struct thermal_zone_device *tz;
+ 	u32 sensor_map = 0;
+ 	int i;
++	/*
++	 * Bitmaps to enable each sensor on immediate and filtered modes, as
++	 * described in MSRCTL1 and MONCTL0 registers below, respectively.
++	 */
++	u32 sensor_imm_bitmap[] = { BIT(4), BIT(5), BIT(6), BIT(9) };
++	u32 sensor_filt_bitmap[] = { BIT(0), BIT(1), BIT(2), BIT(3) };
++
++	u32 *sensor_bitmap = lvts_ctrl->mode == LVTS_MSR_IMMEDIATE_MODE ?
++			     sensor_imm_bitmap : sensor_filt_bitmap;
+ 
+ 	for (i = 0; i < lvts_ctrl->num_lvts_sensor; i++) {
+ 
+@@ -1016,20 +1076,38 @@ static int lvts_ctrl_start(struct device *dev, struct lvts_ctrl *lvts_ctrl)
+ 		 * map, so we can enable the temperature monitoring in
+ 		 * the hardware thermal controller.
+ 		 */
+-		sensor_map |= BIT(i);
++		sensor_map |= sensor_bitmap[i];
+ 	}
+ 
+ 	/*
+-	 * Bits:
+-	 *      9: Single point access flow
+-	 *    0-3: Enable sensing point 0-3
+-	 *
+ 	 * The initialization of the thermal zones give us
+ 	 * which sensor point to enable. If any thermal zone
+ 	 * was not described in the device tree, it won't be
+ 	 * enabled here in the sensor map.
+ 	 */
+-	writel(sensor_map | BIT(9), LVTS_MONCTL0(lvts_ctrl->base));
++	if (lvts_ctrl->mode == LVTS_MSR_IMMEDIATE_MODE) {
++		/*
++		 * LVTS_MSRCTL1 : Measurement control
++		 *
++		 * Bits:
++		 *
++		 * 9: Ignore MSRCTL0 config and do immediate measurement on sensor3
++		 * 6: Ignore MSRCTL0 config and do immediate measurement on sensor2
++		 * 5: Ignore MSRCTL0 config and do immediate measurement on sensor1
++		 * 4: Ignore MSRCTL0 config and do immediate measurement on sensor0
++		 *
++		 * That configuration will ignore the filtering and the delays
++		 * introduced in MONCTL1 and MONCTL2
++		 */
++		writel(sensor_map, LVTS_MSRCTL1(lvts_ctrl->base));
++	} else {
++		/*
++		 * Bits:
++		 *      9: Single point access flow
++		 *    0-3: Enable sensing point 0-3
++		 */
++		writel(sensor_map | BIT(9), LVTS_MONCTL0(lvts_ctrl->base));
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
+index bc07ae1c284cf..22272f9c5934a 100644
+--- a/drivers/thermal/thermal_of.c
++++ b/drivers/thermal/thermal_of.c
+@@ -292,13 +292,13 @@ static int __thermal_of_unbind(struct device_node *map_np, int index, int trip_i
+ 	ret = of_parse_phandle_with_args(map_np, "cooling-device", "#cooling-cells",
+ 					 index, &cooling_spec);
+ 
+-	of_node_put(cooling_spec.np);
+-
+ 	if (ret < 0) {
+ 		pr_err("Invalid cooling-device entry\n");
+ 		return ret;
+ 	}
+ 
++	of_node_put(cooling_spec.np);
++
+ 	if (cooling_spec.args_count < 2) {
+ 		pr_err("wrong reference to cooling device, missing limits\n");
+ 		return -EINVAL;
+@@ -325,13 +325,13 @@ static int __thermal_of_bind(struct device_node *map_np, int index, int trip_id,
+ 	ret = of_parse_phandle_with_args(map_np, "cooling-device", "#cooling-cells",
+ 					 index, &cooling_spec);
+ 
+-	of_node_put(cooling_spec.np);
+-
+ 	if (ret < 0) {
+ 		pr_err("Invalid cooling-device entry\n");
+ 		return ret;
+ 	}
+ 
++	of_node_put(cooling_spec.np);
++
+ 	if (cooling_spec.args_count < 2) {
+ 		pr_err("wrong reference to cooling device, missing limits\n");
+ 		return -EINVAL;
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index 54b22cbc0fcef..67484c062edd1 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -592,7 +592,6 @@ static void qcom_geni_serial_stop_tx_dma(struct uart_port *uport)
+ {
+ 	struct qcom_geni_serial_port *port = to_dev_port(uport);
+ 	bool done;
+-	u32 m_irq_en;
+ 
+ 	if (!qcom_geni_serial_main_active(uport))
+ 		return;
+@@ -604,12 +603,10 @@ static void qcom_geni_serial_stop_tx_dma(struct uart_port *uport)
+ 		port->tx_remaining = 0;
+ 	}
+ 
+-	m_irq_en = readl(uport->membase + SE_GENI_M_IRQ_EN);
+-	writel(m_irq_en, uport->membase + SE_GENI_M_IRQ_EN);
+ 	geni_se_cancel_m_cmd(&port->se);
+ 
+-	done = qcom_geni_serial_poll_bit(uport, SE_GENI_S_IRQ_STATUS,
+-					 S_CMD_CANCEL_EN, true);
++	done = qcom_geni_serial_poll_bit(uport, SE_GENI_M_IRQ_STATUS,
++					 M_CMD_CANCEL_EN, true);
+ 	if (!done) {
+ 		geni_se_abort_m_cmd(&port->se);
+ 		done = qcom_geni_serial_poll_bit(uport, SE_GENI_M_IRQ_STATUS,
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index faeb3dc371c05..289ca7d4e5669 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -236,7 +236,8 @@
+ 
+ /* IOControl register bits (Only 750/760) */
+ #define SC16IS7XX_IOCONTROL_LATCH_BIT	(1 << 0) /* Enable input latching */
+-#define SC16IS7XX_IOCONTROL_MODEM_BIT	(1 << 1) /* Enable GPIO[7:4] as modem pins */
++#define SC16IS7XX_IOCONTROL_MODEM_A_BIT	(1 << 1) /* Enable GPIO[7:4] as modem A pins */
++#define SC16IS7XX_IOCONTROL_MODEM_B_BIT	(1 << 2) /* Enable GPIO[3:0] as modem B pins */
+ #define SC16IS7XX_IOCONTROL_SRESET_BIT	(1 << 3) /* Software Reset */
+ 
+ /* EFCR register bits */
+@@ -301,12 +302,12 @@
+ /* Misc definitions */
+ #define SC16IS7XX_FIFO_SIZE		(64)
+ #define SC16IS7XX_REG_SHIFT		2
++#define SC16IS7XX_GPIOS_PER_BANK	4
+ 
+ struct sc16is7xx_devtype {
+ 	char	name[10];
+ 	int	nr_gpio;
+ 	int	nr_uart;
+-	int	has_mctrl;
+ };
+ 
+ #define SC16IS7XX_RECONF_MD		(1 << 0)
+@@ -336,7 +337,9 @@ struct sc16is7xx_port {
+ 	struct clk			*clk;
+ #ifdef CONFIG_GPIOLIB
+ 	struct gpio_chip		gpio;
++	unsigned long			gpio_valid_mask;
+ #endif
++	u8				mctrl_mask;
+ 	unsigned char			buf[SC16IS7XX_FIFO_SIZE];
+ 	struct kthread_worker		kworker;
+ 	struct task_struct		*kworker_task;
+@@ -447,35 +450,30 @@ static const struct sc16is7xx_devtype sc16is74x_devtype = {
+ 	.name		= "SC16IS74X",
+ 	.nr_gpio	= 0,
+ 	.nr_uart	= 1,
+-	.has_mctrl	= 0,
+ };
+ 
+ static const struct sc16is7xx_devtype sc16is750_devtype = {
+ 	.name		= "SC16IS750",
+-	.nr_gpio	= 4,
++	.nr_gpio	= 8,
+ 	.nr_uart	= 1,
+-	.has_mctrl	= 1,
+ };
+ 
+ static const struct sc16is7xx_devtype sc16is752_devtype = {
+ 	.name		= "SC16IS752",
+-	.nr_gpio	= 0,
++	.nr_gpio	= 8,
+ 	.nr_uart	= 2,
+-	.has_mctrl	= 1,
+ };
+ 
+ static const struct sc16is7xx_devtype sc16is760_devtype = {
+ 	.name		= "SC16IS760",
+-	.nr_gpio	= 4,
++	.nr_gpio	= 8,
+ 	.nr_uart	= 1,
+-	.has_mctrl	= 1,
+ };
+ 
+ static const struct sc16is7xx_devtype sc16is762_devtype = {
+ 	.name		= "SC16IS762",
+-	.nr_gpio	= 0,
++	.nr_gpio	= 8,
+ 	.nr_uart	= 2,
+-	.has_mctrl	= 1,
+ };
+ 
+ static bool sc16is7xx_regmap_volatile(struct device *dev, unsigned int reg)
+@@ -1357,8 +1355,98 @@ static int sc16is7xx_gpio_direction_output(struct gpio_chip *chip,
+ 
+ 	return 0;
+ }
++
++static int sc16is7xx_gpio_init_valid_mask(struct gpio_chip *chip,
++					  unsigned long *valid_mask,
++					  unsigned int ngpios)
++{
++	struct sc16is7xx_port *s = gpiochip_get_data(chip);
++
++	*valid_mask = s->gpio_valid_mask;
++
++	return 0;
++}
++
++static int sc16is7xx_setup_gpio_chip(struct sc16is7xx_port *s)
++{
++	struct device *dev = s->p[0].port.dev;
++
++	if (!s->devtype->nr_gpio)
++		return 0;
++
++	switch (s->mctrl_mask) {
++	case 0:
++		s->gpio_valid_mask = GENMASK(7, 0);
++		break;
++	case SC16IS7XX_IOCONTROL_MODEM_A_BIT:
++		s->gpio_valid_mask = GENMASK(3, 0);
++		break;
++	case SC16IS7XX_IOCONTROL_MODEM_B_BIT:
++		s->gpio_valid_mask = GENMASK(7, 4);
++		break;
++	default:
++		break;
++	}
++
++	if (s->gpio_valid_mask == 0)
++		return 0;
++
++	s->gpio.owner		 = THIS_MODULE;
++	s->gpio.parent		 = dev;
++	s->gpio.label		 = dev_name(dev);
++	s->gpio.init_valid_mask	 = sc16is7xx_gpio_init_valid_mask;
++	s->gpio.direction_input	 = sc16is7xx_gpio_direction_input;
++	s->gpio.get		 = sc16is7xx_gpio_get;
++	s->gpio.direction_output = sc16is7xx_gpio_direction_output;
++	s->gpio.set		 = sc16is7xx_gpio_set;
++	s->gpio.base		 = -1;
++	s->gpio.ngpio		 = s->devtype->nr_gpio;
++	s->gpio.can_sleep	 = 1;
++
++	return gpiochip_add_data(&s->gpio, s);
++}
+ #endif
+ 
++/*
++ * Configure ports designated to operate as modem control lines.
++ */
++static int sc16is7xx_setup_mctrl_ports(struct sc16is7xx_port *s)
++{
++	int i;
++	int ret;
++	int count;
++	u32 mctrl_port[2];
++	struct device *dev = s->p[0].port.dev;
++
++	count = device_property_count_u32(dev, "nxp,modem-control-line-ports");
++	if (count < 0 || count > ARRAY_SIZE(mctrl_port))
++		return 0;
++
++	ret = device_property_read_u32_array(dev, "nxp,modem-control-line-ports",
++					     mctrl_port, count);
++	if (ret)
++		return ret;
++
++	s->mctrl_mask = 0;
++
++	for (i = 0; i < count; i++) {
++		/* Use GPIO lines as modem control lines */
++		if (mctrl_port[i] == 0)
++			s->mctrl_mask |= SC16IS7XX_IOCONTROL_MODEM_A_BIT;
++		else if (mctrl_port[i] == 1)
++			s->mctrl_mask |= SC16IS7XX_IOCONTROL_MODEM_B_BIT;
++	}
++
++	if (s->mctrl_mask)
++		regmap_update_bits(
++			s->regmap,
++			SC16IS7XX_IOCONTROL_REG << SC16IS7XX_REG_SHIFT,
++			SC16IS7XX_IOCONTROL_MODEM_A_BIT |
++			SC16IS7XX_IOCONTROL_MODEM_B_BIT, s->mctrl_mask);
++
++	return 0;
++}
++
+ static const struct serial_rs485 sc16is7xx_rs485_supported = {
+ 	.flags = SER_RS485_ENABLED | SER_RS485_RTS_AFTER_SEND,
+ 	.delay_rts_before_send = 1,
+@@ -1471,12 +1559,6 @@ static int sc16is7xx_probe(struct device *dev,
+ 				     SC16IS7XX_EFCR_RXDISABLE_BIT |
+ 				     SC16IS7XX_EFCR_TXDISABLE_BIT);
+ 
+-		/* Use GPIO lines as modem status registers */
+-		if (devtype->has_mctrl)
+-			sc16is7xx_port_write(&s->p[i].port,
+-					     SC16IS7XX_IOCONTROL_REG,
+-					     SC16IS7XX_IOCONTROL_MODEM_BIT);
+-
+ 		/* Initialize kthread work structs */
+ 		kthread_init_work(&s->p[i].tx_work, sc16is7xx_tx_proc);
+ 		kthread_init_work(&s->p[i].reg_work, sc16is7xx_reg_proc);
+@@ -1514,23 +1596,14 @@ static int sc16is7xx_probe(struct device *dev,
+ 				s->p[u].irda_mode = true;
+ 	}
+ 
++	ret = sc16is7xx_setup_mctrl_ports(s);
++	if (ret)
++		goto out_ports;
++
+ #ifdef CONFIG_GPIOLIB
+-	if (devtype->nr_gpio) {
+-		/* Setup GPIO cotroller */
+-		s->gpio.owner		 = THIS_MODULE;
+-		s->gpio.parent		 = dev;
+-		s->gpio.label		 = dev_name(dev);
+-		s->gpio.direction_input	 = sc16is7xx_gpio_direction_input;
+-		s->gpio.get		 = sc16is7xx_gpio_get;
+-		s->gpio.direction_output = sc16is7xx_gpio_direction_output;
+-		s->gpio.set		 = sc16is7xx_gpio_set;
+-		s->gpio.base		 = -1;
+-		s->gpio.ngpio		 = devtype->nr_gpio;
+-		s->gpio.can_sleep	 = 1;
+-		ret = gpiochip_add_data(&s->gpio, s);
+-		if (ret)
+-			goto out_thread;
+-	}
++	ret = sc16is7xx_setup_gpio_chip(s);
++	if (ret)
++		goto out_ports;
+ #endif
+ 
+ 	/*
+@@ -1553,10 +1626,8 @@ static int sc16is7xx_probe(struct device *dev,
+ 		return 0;
+ 
+ #ifdef CONFIG_GPIOLIB
+-	if (devtype->nr_gpio)
++	if (s->gpio_valid_mask)
+ 		gpiochip_remove(&s->gpio);
+-
+-out_thread:
+ #endif
+ 
+ out_ports:
+@@ -1579,7 +1650,7 @@ static void sc16is7xx_remove(struct device *dev)
+ 	int i;
+ 
+ #ifdef CONFIG_GPIOLIB
+-	if (s->devtype->nr_gpio)
++	if (s->gpio_valid_mask)
+ 		gpiochip_remove(&s->gpio);
+ #endif
+ 
+diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
+index 1cf08b33456c9..37e1e05bc87e6 100644
+--- a/drivers/tty/serial/serial-tegra.c
++++ b/drivers/tty/serial/serial-tegra.c
+@@ -998,7 +998,11 @@ static int tegra_uart_hw_init(struct tegra_uart_port *tup)
+ 	tup->ier_shadow = 0;
+ 	tup->current_baud = 0;
+ 
+-	clk_prepare_enable(tup->uart_clk);
++	ret = clk_prepare_enable(tup->uart_clk);
++	if (ret) {
++		dev_err(tup->uport.dev, "could not enable clk\n");
++		return ret;
++	}
+ 
+ 	/* Reset the UART controller to clear all previous status.*/
+ 	reset_control_assert(tup->rst);
+diff --git a/drivers/tty/serial/sprd_serial.c b/drivers/tty/serial/sprd_serial.c
+index b58f51296ace2..99da964e8bd44 100644
+--- a/drivers/tty/serial/sprd_serial.c
++++ b/drivers/tty/serial/sprd_serial.c
+@@ -364,7 +364,7 @@ static void sprd_rx_free_buf(struct sprd_uart_port *sp)
+ 	if (sp->rx_dma.virt)
+ 		dma_free_coherent(sp->port.dev, SPRD_UART_RX_SIZE,
+ 				  sp->rx_dma.virt, sp->rx_dma.phys_addr);
+-
++	sp->rx_dma.virt = NULL;
+ }
+ 
+ static int sprd_rx_dma_config(struct uart_port *port, u32 burst)
+@@ -1106,7 +1106,7 @@ static bool sprd_uart_is_console(struct uart_port *uport)
+ static int sprd_clk_init(struct uart_port *uport)
+ {
+ 	struct clk *clk_uart, *clk_parent;
+-	struct sprd_uart_port *u = sprd_port[uport->line];
++	struct sprd_uart_port *u = container_of(uport, struct sprd_uart_port, port);
+ 
+ 	clk_uart = devm_clk_get(uport->dev, "uart");
+ 	if (IS_ERR(clk_uart)) {
+@@ -1149,22 +1149,22 @@ static int sprd_probe(struct platform_device *pdev)
+ {
+ 	struct resource *res;
+ 	struct uart_port *up;
++	struct sprd_uart_port *sport;
+ 	int irq;
+ 	int index;
+ 	int ret;
+ 
+ 	index = of_alias_get_id(pdev->dev.of_node, "serial");
+-	if (index < 0 || index >= ARRAY_SIZE(sprd_port)) {
++	if (index < 0 || index >= UART_NR_MAX) {
+ 		dev_err(&pdev->dev, "got a wrong serial alias id %d\n", index);
+ 		return -EINVAL;
+ 	}
+ 
+-	sprd_port[index] = devm_kzalloc(&pdev->dev, sizeof(*sprd_port[index]),
+-					GFP_KERNEL);
+-	if (!sprd_port[index])
++	sport = devm_kzalloc(&pdev->dev, sizeof(*sport), GFP_KERNEL);
++	if (!sport)
+ 		return -ENOMEM;
+ 
+-	up = &sprd_port[index]->port;
++	up = &sport->port;
+ 	up->dev = &pdev->dev;
+ 	up->line = index;
+ 	up->type = PORT_SPRD;
+@@ -1195,7 +1195,7 @@ static int sprd_probe(struct platform_device *pdev)
+ 	 * Allocate one dma buffer to prepare for receive transfer, in case
+ 	 * memory allocation failure at runtime.
+ 	 */
+-	ret = sprd_rx_alloc_buf(sprd_port[index]);
++	ret = sprd_rx_alloc_buf(sport);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -1203,17 +1203,27 @@ static int sprd_probe(struct platform_device *pdev)
+ 		ret = uart_register_driver(&sprd_uart_driver);
+ 		if (ret < 0) {
+ 			pr_err("Failed to register SPRD-UART driver\n");
+-			return ret;
++			goto free_rx_buf;
+ 		}
+ 	}
++
+ 	sprd_ports_num++;
++	sprd_port[index] = sport;
+ 
+ 	ret = uart_add_one_port(&sprd_uart_driver, up);
+ 	if (ret)
+-		sprd_remove(pdev);
++		goto clean_port;
+ 
+ 	platform_set_drvdata(pdev, up);
+ 
++	return 0;
++
++clean_port:
++	sprd_port[index] = NULL;
++	if (--sprd_ports_num == 0)
++		uart_unregister_driver(&sprd_uart_driver);
++free_rx_buf:
++	sprd_rx_free_buf(sport);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 1294467757964..fa18806e80b61 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -5251,9 +5251,17 @@ ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp,
+ 	int result = 0;
+ 	int scsi_status;
+ 	enum utp_ocs ocs;
++	u8 upiu_flags;
++	u32 resid;
+ 
+-	scsi_set_resid(lrbp->cmd,
+-		be32_to_cpu(lrbp->ucd_rsp_ptr->sr.residual_transfer_count));
++	upiu_flags = be32_to_cpu(lrbp->ucd_rsp_ptr->header.dword_0) >> 16;
++	resid = be32_to_cpu(lrbp->ucd_rsp_ptr->sr.residual_transfer_count);
++	/*
++	 * Test !overflow instead of underflow to support UFS devices that do
++	 * not set either flag.
++	 */
++	if (resid && !(upiu_flags & UPIU_RSP_FLAG_OVERFLOW))
++		scsi_set_resid(lrbp->cmd, resid);
+ 
+ 	/* overall command status of utrd */
+ 	ocs = ufshcd_get_tr_ocs(lrbp, cqe);
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 8300baedafd20..6af0a31ff1475 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -983,6 +983,7 @@ static int register_root_hub(struct usb_hcd *hcd)
+ {
+ 	struct device *parent_dev = hcd->self.controller;
+ 	struct usb_device *usb_dev = hcd->self.root_hub;
++	struct usb_device_descriptor *descr;
+ 	const int devnum = 1;
+ 	int retval;
+ 
+@@ -994,13 +995,16 @@ static int register_root_hub(struct usb_hcd *hcd)
+ 	mutex_lock(&usb_bus_idr_lock);
+ 
+ 	usb_dev->ep0.desc.wMaxPacketSize = cpu_to_le16(64);
+-	retval = usb_get_device_descriptor(usb_dev, USB_DT_DEVICE_SIZE);
+-	if (retval != sizeof usb_dev->descriptor) {
++	descr = usb_get_device_descriptor(usb_dev);
++	if (IS_ERR(descr)) {
++		retval = PTR_ERR(descr);
+ 		mutex_unlock(&usb_bus_idr_lock);
+ 		dev_dbg (parent_dev, "can't read %s device descriptor %d\n",
+ 				dev_name(&usb_dev->dev), retval);
+-		return (retval < 0) ? retval : -EMSGSIZE;
++		return retval;
+ 	}
++	usb_dev->descriptor = *descr;
++	kfree(descr);
+ 
+ 	if (le16_to_cpu(usb_dev->descriptor.bcdUSB) >= 0x0201) {
+ 		retval = usb_get_bos_descriptor(usb_dev);
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index a739403a9e455..26a27ff504085 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -2671,12 +2671,17 @@ int usb_authorize_device(struct usb_device *usb_dev)
+ 	}
+ 
+ 	if (usb_dev->wusb) {
+-		result = usb_get_device_descriptor(usb_dev, sizeof(usb_dev->descriptor));
+-		if (result < 0) {
++		struct usb_device_descriptor *descr;
++
++		descr = usb_get_device_descriptor(usb_dev);
++		if (IS_ERR(descr)) {
++			result = PTR_ERR(descr);
+ 			dev_err(&usb_dev->dev, "can't re-read device descriptor for "
+ 				"authorization: %d\n", result);
+ 			goto error_device_descriptor;
+ 		}
++		usb_dev->descriptor = *descr;
++		kfree(descr);
+ 	}
+ 
+ 	usb_dev->authorized = 1;
+@@ -4718,6 +4723,67 @@ static int hub_enable_device(struct usb_device *udev)
+ 	return hcd->driver->enable_device(hcd, udev);
+ }
+ 
++/*
++ * Get the bMaxPacketSize0 value during initialization by reading the
++ * device's device descriptor.  Since we don't already know this value,
++ * the transfer is unsafe and it ignores I/O errors, only testing for
++ * reasonable received values.
++ *
++ * For "old scheme" initialization, size will be 8 so we read just the
++ * start of the device descriptor, which should work okay regardless of
++ * the actual bMaxPacketSize0 value.  For "new scheme" initialization,
++ * size will be 64 (and buf will point to a sufficiently large buffer),
++ * which might not be kosher according to the USB spec but it's what
++ * Windows does and what many devices expect.
++ *
++ * Returns: bMaxPacketSize0 or a negative error code.
++ */
++static int get_bMaxPacketSize0(struct usb_device *udev,
++		struct usb_device_descriptor *buf, int size, bool first_time)
++{
++	int i, rc;
++
++	/*
++	 * Retry on all errors; some devices are flakey.
++	 * 255 is for WUSB devices, we actually need to use
++	 * 512 (WUSB1.0[4.8.1]).
++	 */
++	for (i = 0; i < GET_MAXPACKET0_TRIES; ++i) {
++		/* Start with invalid values in case the transfer fails */
++		buf->bDescriptorType = buf->bMaxPacketSize0 = 0;
++		rc = usb_control_msg(udev, usb_rcvaddr0pipe(),
++				USB_REQ_GET_DESCRIPTOR, USB_DIR_IN,
++				USB_DT_DEVICE << 8, 0,
++				buf, size,
++				initial_descriptor_timeout);
++		switch (buf->bMaxPacketSize0) {
++		case 8: case 16: case 32: case 64: case 9:
++			if (buf->bDescriptorType == USB_DT_DEVICE) {
++				rc = buf->bMaxPacketSize0;
++				break;
++			}
++			fallthrough;
++		default:
++			if (rc >= 0)
++				rc = -EPROTO;
++			break;
++		}
++
++		/*
++		 * Some devices time out if they are powered on
++		 * when already connected. They need a second
++		 * reset, so return early. But only on the first
++		 * attempt, lest we get into a time-out/reset loop.
++		 */
++		if (rc > 0 || (rc == -ETIMEDOUT && first_time &&
++				udev->speed > USB_SPEED_FULL))
++			break;
++	}
++	return rc;
++}
++
++#define GET_DESCRIPTOR_BUFSIZE	64
++
+ /* Reset device, (re)assign address, get device descriptor.
+  * Device connection must be stable, no more debouncing needed.
+  * Returns device in USB_STATE_ADDRESS, except on error.
+@@ -4727,10 +4793,17 @@ static int hub_enable_device(struct usb_device *udev)
+  * the port lock.  For a newly detected device that is not accessible
+  * through any global pointers, it's not necessary to lock the device,
+  * but it is still necessary to lock the port.
++ *
++ * For a newly detected device, @dev_descr must be NULL.  The device
++ * descriptor retrieved from the device will then be stored in
++ * @udev->descriptor.  For an already existing device, @dev_descr
++ * must be non-NULL.  The device descriptor will be stored there,
++ * not in @udev->descriptor, because descriptors for registered
++ * devices are meant to be immutable.
+  */
+ static int
+ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+-		int retry_counter)
++		int retry_counter, struct usb_device_descriptor *dev_descr)
+ {
+ 	struct usb_device	*hdev = hub->hdev;
+ 	struct usb_hcd		*hcd = bus_to_hcd(hdev->bus);
+@@ -4742,6 +4815,13 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 	int			devnum = udev->devnum;
+ 	const char		*driver_name;
+ 	bool			do_new_scheme;
++	const bool		initial = !dev_descr;
++	int			maxp0;
++	struct usb_device_descriptor	*buf, *descr;
++
++	buf = kmalloc(GET_DESCRIPTOR_BUFSIZE, GFP_NOIO);
++	if (!buf)
++		return -ENOMEM;
+ 
+ 	/* root hub ports have a slightly longer reset period
+ 	 * (from USB 2.0 spec, section 7.1.7.5)
+@@ -4774,32 +4854,34 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 	}
+ 	oldspeed = udev->speed;
+ 
+-	/* USB 2.0 section 5.5.3 talks about ep0 maxpacket ...
+-	 * it's fixed size except for full speed devices.
+-	 * For Wireless USB devices, ep0 max packet is always 512 (tho
+-	 * reported as 0xff in the device descriptor). WUSB1.0[4.8.1].
+-	 */
+-	switch (udev->speed) {
+-	case USB_SPEED_SUPER_PLUS:
+-	case USB_SPEED_SUPER:
+-	case USB_SPEED_WIRELESS:	/* fixed at 512 */
+-		udev->ep0.desc.wMaxPacketSize = cpu_to_le16(512);
+-		break;
+-	case USB_SPEED_HIGH:		/* fixed at 64 */
+-		udev->ep0.desc.wMaxPacketSize = cpu_to_le16(64);
+-		break;
+-	case USB_SPEED_FULL:		/* 8, 16, 32, or 64 */
+-		/* to determine the ep0 maxpacket size, try to read
+-		 * the device descriptor to get bMaxPacketSize0 and
+-		 * then correct our initial guess.
++	if (initial) {
++		/* USB 2.0 section 5.5.3 talks about ep0 maxpacket ...
++		 * it's fixed size except for full speed devices.
++		 * For Wireless USB devices, ep0 max packet is always 512 (tho
++		 * reported as 0xff in the device descriptor). WUSB1.0[4.8.1].
+ 		 */
+-		udev->ep0.desc.wMaxPacketSize = cpu_to_le16(64);
+-		break;
+-	case USB_SPEED_LOW:		/* fixed at 8 */
+-		udev->ep0.desc.wMaxPacketSize = cpu_to_le16(8);
+-		break;
+-	default:
+-		goto fail;
++		switch (udev->speed) {
++		case USB_SPEED_SUPER_PLUS:
++		case USB_SPEED_SUPER:
++		case USB_SPEED_WIRELESS:	/* fixed at 512 */
++			udev->ep0.desc.wMaxPacketSize = cpu_to_le16(512);
++			break;
++		case USB_SPEED_HIGH:		/* fixed at 64 */
++			udev->ep0.desc.wMaxPacketSize = cpu_to_le16(64);
++			break;
++		case USB_SPEED_FULL:		/* 8, 16, 32, or 64 */
++			/* to determine the ep0 maxpacket size, try to read
++			 * the device descriptor to get bMaxPacketSize0 and
++			 * then correct our initial guess.
++			 */
++			udev->ep0.desc.wMaxPacketSize = cpu_to_le16(64);
++			break;
++		case USB_SPEED_LOW:		/* fixed at 8 */
++			udev->ep0.desc.wMaxPacketSize = cpu_to_le16(8);
++			break;
++		default:
++			goto fail;
++		}
+ 	}
+ 
+ 	if (udev->speed == USB_SPEED_WIRELESS)
+@@ -4822,22 +4904,24 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 	if (udev->speed < USB_SPEED_SUPER)
+ 		dev_info(&udev->dev,
+ 				"%s %s USB device number %d using %s\n",
+-				(udev->config) ? "reset" : "new", speed,
++				(initial ? "new" : "reset"), speed,
+ 				devnum, driver_name);
+ 
+-	/* Set up TT records, if needed  */
+-	if (hdev->tt) {
+-		udev->tt = hdev->tt;
+-		udev->ttport = hdev->ttport;
+-	} else if (udev->speed != USB_SPEED_HIGH
+-			&& hdev->speed == USB_SPEED_HIGH) {
+-		if (!hub->tt.hub) {
+-			dev_err(&udev->dev, "parent hub has no TT\n");
+-			retval = -EINVAL;
+-			goto fail;
++	if (initial) {
++		/* Set up TT records, if needed  */
++		if (hdev->tt) {
++			udev->tt = hdev->tt;
++			udev->ttport = hdev->ttport;
++		} else if (udev->speed != USB_SPEED_HIGH
++				&& hdev->speed == USB_SPEED_HIGH) {
++			if (!hub->tt.hub) {
++				dev_err(&udev->dev, "parent hub has no TT\n");
++				retval = -EINVAL;
++				goto fail;
++			}
++			udev->tt = &hub->tt;
++			udev->ttport = port1;
+ 		}
+-		udev->tt = &hub->tt;
+-		udev->ttport = port1;
+ 	}
+ 
+ 	/* Why interleave GET_DESCRIPTOR and SET_ADDRESS this way?
+@@ -4861,9 +4945,6 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 		}
+ 
+ 		if (do_new_scheme) {
+-			struct usb_device_descriptor *buf;
+-			int r = 0;
+-
+ 			retval = hub_enable_device(udev);
+ 			if (retval < 0) {
+ 				dev_err(&udev->dev,
+@@ -4872,52 +4953,14 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 				goto fail;
+ 			}
+ 
+-#define GET_DESCRIPTOR_BUFSIZE	64
+-			buf = kmalloc(GET_DESCRIPTOR_BUFSIZE, GFP_NOIO);
+-			if (!buf) {
+-				retval = -ENOMEM;
+-				continue;
+-			}
+-
+-			/* Retry on all errors; some devices are flakey.
+-			 * 255 is for WUSB devices, we actually need to use
+-			 * 512 (WUSB1.0[4.8.1]).
+-			 */
+-			for (operations = 0; operations < GET_MAXPACKET0_TRIES;
+-					++operations) {
+-				buf->bMaxPacketSize0 = 0;
+-				r = usb_control_msg(udev, usb_rcvaddr0pipe(),
+-					USB_REQ_GET_DESCRIPTOR, USB_DIR_IN,
+-					USB_DT_DEVICE << 8, 0,
+-					buf, GET_DESCRIPTOR_BUFSIZE,
+-					initial_descriptor_timeout);
+-				switch (buf->bMaxPacketSize0) {
+-				case 8: case 16: case 32: case 64: case 255:
+-					if (buf->bDescriptorType ==
+-							USB_DT_DEVICE) {
+-						r = 0;
+-						break;
+-					}
+-					fallthrough;
+-				default:
+-					if (r == 0)
+-						r = -EPROTO;
+-					break;
+-				}
+-				/*
+-				 * Some devices time out if they are powered on
+-				 * when already connected. They need a second
+-				 * reset. But only on the first attempt,
+-				 * lest we get into a time out/reset loop
+-				 */
+-				if (r == 0 || (r == -ETIMEDOUT &&
+-						retries == 0 &&
+-						udev->speed > USB_SPEED_FULL))
+-					break;
++			maxp0 = get_bMaxPacketSize0(udev, buf,
++					GET_DESCRIPTOR_BUFSIZE, retries == 0);
++			if (maxp0 > 0 && !initial &&
++					maxp0 != udev->descriptor.bMaxPacketSize0) {
++				dev_err(&udev->dev, "device reset changed ep0 maxpacket size!\n");
++				retval = -ENODEV;
++				goto fail;
+ 			}
+-			udev->descriptor.bMaxPacketSize0 =
+-					buf->bMaxPacketSize0;
+-			kfree(buf);
+ 
+ 			retval = hub_port_reset(hub, port1, udev, delay, false);
+ 			if (retval < 0)		/* error or disconnect */
+@@ -4928,14 +4971,13 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 				retval = -ENODEV;
+ 				goto fail;
+ 			}
+-			if (r) {
+-				if (r != -ENODEV)
++			if (maxp0 < 0) {
++				if (maxp0 != -ENODEV)
+ 					dev_err(&udev->dev, "device descriptor read/64, error %d\n",
+-							r);
+-				retval = -EMSGSIZE;
++							maxp0);
++				retval = maxp0;
+ 				continue;
+ 			}
+-#undef GET_DESCRIPTOR_BUFSIZE
+ 		}
+ 
+ 		/*
+@@ -4981,18 +5023,22 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 				break;
+ 		}
+ 
+-		retval = usb_get_device_descriptor(udev, 8);
+-		if (retval < 8) {
++		/* !do_new_scheme || wusb */
++		maxp0 = get_bMaxPacketSize0(udev, buf, 8, retries == 0);
++		if (maxp0 < 0) {
++			retval = maxp0;
+ 			if (retval != -ENODEV)
+ 				dev_err(&udev->dev,
+ 					"device descriptor read/8, error %d\n",
+ 					retval);
+-			if (retval >= 0)
+-				retval = -EMSGSIZE;
+ 		} else {
+ 			u32 delay;
+ 
+-			retval = 0;
++			if (!initial && maxp0 != udev->descriptor.bMaxPacketSize0) {
++				dev_err(&udev->dev, "device reset changed ep0 maxpacket size!\n");
++				retval = -ENODEV;
++				goto fail;
++			}
+ 
+ 			delay = udev->parent->hub_delay;
+ 			udev->hub_delay = min_t(u32, delay,
+@@ -5011,48 +5057,61 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 		goto fail;
+ 
+ 	/*
+-	 * Some superspeed devices have finished the link training process
+-	 * and attached to a superspeed hub port, but the device descriptor
+-	 * got from those devices show they aren't superspeed devices. Warm
+-	 * reset the port attached by the devices can fix them.
++	 * Check the ep0 maxpacket guess and correct it if necessary.
++	 * maxp0 is the value stored in the device descriptor;
++	 * i is the value it encodes (logarithmic for SuperSpeed or greater).
+ 	 */
+-	if ((udev->speed >= USB_SPEED_SUPER) &&
+-			(le16_to_cpu(udev->descriptor.bcdUSB) < 0x0300)) {
+-		dev_err(&udev->dev, "got a wrong device descriptor, "
+-				"warm reset device\n");
+-		hub_port_reset(hub, port1, udev,
+-				HUB_BH_RESET_TIME, true);
+-		retval = -EINVAL;
+-		goto fail;
+-	}
+-
+-	if (udev->descriptor.bMaxPacketSize0 == 0xff ||
+-			udev->speed >= USB_SPEED_SUPER)
+-		i = 512;
+-	else
+-		i = udev->descriptor.bMaxPacketSize0;
+-	if (usb_endpoint_maxp(&udev->ep0.desc) != i) {
+-		if (udev->speed == USB_SPEED_LOW ||
+-				!(i == 8 || i == 16 || i == 32 || i == 64)) {
+-			dev_err(&udev->dev, "Invalid ep0 maxpacket: %d\n", i);
+-			retval = -EMSGSIZE;
+-			goto fail;
+-		}
++	i = maxp0;
++	if (udev->speed >= USB_SPEED_SUPER) {
++		if (maxp0 <= 16)
++			i = 1 << maxp0;
++		else
++			i = 0;		/* Invalid */
++	}
++	if (usb_endpoint_maxp(&udev->ep0.desc) == i) {
++		;	/* Initial ep0 maxpacket guess is right */
++	} else if ((udev->speed == USB_SPEED_FULL ||
++				udev->speed == USB_SPEED_HIGH) &&
++			(i == 8 || i == 16 || i == 32 || i == 64)) {
++		/* Initial guess is wrong; use the descriptor's value */
+ 		if (udev->speed == USB_SPEED_FULL)
+ 			dev_dbg(&udev->dev, "ep0 maxpacket = %d\n", i);
+ 		else
+ 			dev_warn(&udev->dev, "Using ep0 maxpacket: %d\n", i);
+ 		udev->ep0.desc.wMaxPacketSize = cpu_to_le16(i);
+ 		usb_ep0_reinit(udev);
++	} else {
++		/* Initial guess is wrong and descriptor's value is invalid */
++		dev_err(&udev->dev, "Invalid ep0 maxpacket: %d\n", maxp0);
++		retval = -EMSGSIZE;
++		goto fail;
+ 	}
+ 
+-	retval = usb_get_device_descriptor(udev, USB_DT_DEVICE_SIZE);
+-	if (retval < (signed)sizeof(udev->descriptor)) {
++	descr = usb_get_device_descriptor(udev);
++	if (IS_ERR(descr)) {
++		retval = PTR_ERR(descr);
+ 		if (retval != -ENODEV)
+ 			dev_err(&udev->dev, "device descriptor read/all, error %d\n",
+ 					retval);
+-		if (retval >= 0)
+-			retval = -ENOMSG;
++		goto fail;
++	}
++	if (initial)
++		udev->descriptor = *descr;
++	else
++		*dev_descr = *descr;
++	kfree(descr);
++
++	/*
++	 * Some superspeed devices have finished the link training process
++	 * and attached to a superspeed hub port, but the device descriptor
++	 * got from those devices show they aren't superspeed devices. Warm
++	 * reset the port attached by the devices can fix them.
++	 */
++	if ((udev->speed >= USB_SPEED_SUPER) &&
++			(le16_to_cpu(udev->descriptor.bcdUSB) < 0x0300)) {
++		dev_err(&udev->dev, "got a wrong device descriptor, warm reset device\n");
++		hub_port_reset(hub, port1, udev, HUB_BH_RESET_TIME, true);
++		retval = -EINVAL;
+ 		goto fail;
+ 	}
+ 
+@@ -5078,6 +5137,7 @@ fail:
+ 		hub_port_disable(hub, port1, 0);
+ 		update_devnum(udev, devnum);	/* for disconnect processing */
+ 	}
++	kfree(buf);
+ 	return retval;
+ }
+ 
+@@ -5158,7 +5218,7 @@ hub_power_remaining(struct usb_hub *hub)
+ 
+ 
+ static int descriptors_changed(struct usb_device *udev,
+-		struct usb_device_descriptor *old_device_descriptor,
++		struct usb_device_descriptor *new_device_descriptor,
+ 		struct usb_host_bos *old_bos)
+ {
+ 	int		changed = 0;
+@@ -5169,8 +5229,8 @@ static int descriptors_changed(struct usb_device *udev,
+ 	int		length;
+ 	char		*buf;
+ 
+-	if (memcmp(&udev->descriptor, old_device_descriptor,
+-			sizeof(*old_device_descriptor)) != 0)
++	if (memcmp(&udev->descriptor, new_device_descriptor,
++			sizeof(*new_device_descriptor)) != 0)
+ 		return 1;
+ 
+ 	if ((old_bos && !udev->bos) || (!old_bos && udev->bos))
+@@ -5348,7 +5408,7 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
+ 		}
+ 
+ 		/* reset (non-USB 3.0 devices) and get descriptor */
+-		status = hub_port_init(hub, udev, port1, i);
++		status = hub_port_init(hub, udev, port1, i, NULL);
+ 		if (status < 0)
+ 			goto loop;
+ 
+@@ -5495,9 +5555,8 @@ static void hub_port_connect_change(struct usb_hub *hub, int port1,
+ {
+ 	struct usb_port *port_dev = hub->ports[port1 - 1];
+ 	struct usb_device *udev = port_dev->child;
+-	struct usb_device_descriptor descriptor;
++	struct usb_device_descriptor *descr;
+ 	int status = -ENODEV;
+-	int retval;
+ 
+ 	dev_dbg(&port_dev->dev, "status %04x, change %04x, %s\n", portstatus,
+ 			portchange, portspeed(hub, portstatus));
+@@ -5524,23 +5583,20 @@ static void hub_port_connect_change(struct usb_hub *hub, int port1,
+ 			 * changed device descriptors before resuscitating the
+ 			 * device.
+ 			 */
+-			descriptor = udev->descriptor;
+-			retval = usb_get_device_descriptor(udev,
+-					sizeof(udev->descriptor));
+-			if (retval < 0) {
++			descr = usb_get_device_descriptor(udev);
++			if (IS_ERR(descr)) {
+ 				dev_dbg(&udev->dev,
+-						"can't read device descriptor %d\n",
+-						retval);
++						"can't read device descriptor %ld\n",
++						PTR_ERR(descr));
+ 			} else {
+-				if (descriptors_changed(udev, &descriptor,
++				if (descriptors_changed(udev, descr,
+ 						udev->bos)) {
+ 					dev_dbg(&udev->dev,
+ 							"device descriptor has changed\n");
+-					/* for disconnect() calls */
+-					udev->descriptor = descriptor;
+ 				} else {
+ 					status = 0; /* Nothing to do */
+ 				}
++				kfree(descr);
+ 			}
+ #ifdef CONFIG_PM
+ 		} else if (udev->state == USB_STATE_SUSPENDED &&
+@@ -5982,7 +6038,7 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 	struct usb_device		*parent_hdev = udev->parent;
+ 	struct usb_hub			*parent_hub;
+ 	struct usb_hcd			*hcd = bus_to_hcd(udev->bus);
+-	struct usb_device_descriptor	descriptor = udev->descriptor;
++	struct usb_device_descriptor	descriptor;
+ 	struct usb_host_bos		*bos;
+ 	int				i, j, ret = 0;
+ 	int				port1 = udev->portnum;
+@@ -6018,7 +6074,7 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 		/* ep0 maxpacket size may change; let the HCD know about it.
+ 		 * Other endpoints will be handled by re-enumeration. */
+ 		usb_ep0_reinit(udev);
+-		ret = hub_port_init(parent_hub, udev, port1, i);
++		ret = hub_port_init(parent_hub, udev, port1, i, &descriptor);
+ 		if (ret >= 0 || ret == -ENOTCONN || ret == -ENODEV)
+ 			break;
+ 	}
+@@ -6030,7 +6086,6 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 	/* Device might have changed firmware (DFU or similar) */
+ 	if (descriptors_changed(udev, &descriptor, bos)) {
+ 		dev_info(&udev->dev, "device firmware changed\n");
+-		udev->descriptor = descriptor;	/* for disconnect() calls */
+ 		goto re_enumerate;
+ 	}
+ 
+diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
+index b5811620f1de1..1da8e7ff39830 100644
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -1040,40 +1040,35 @@ char *usb_cache_string(struct usb_device *udev, int index)
+ EXPORT_SYMBOL_GPL(usb_cache_string);
+ 
+ /*
+- * usb_get_device_descriptor - (re)reads the device descriptor (usbcore)
+- * @dev: the device whose device descriptor is being updated
+- * @size: how much of the descriptor to read
++ * usb_get_device_descriptor - read the device descriptor
++ * @udev: the device whose device descriptor should be read
+  *
+  * Context: task context, might sleep.
+  *
+- * Updates the copy of the device descriptor stored in the device structure,
+- * which dedicates space for this purpose.
+- *
+  * Not exported, only for use by the core.  If drivers really want to read
+  * the device descriptor directly, they can call usb_get_descriptor() with
+  * type = USB_DT_DEVICE and index = 0.
+  *
+- * This call is synchronous, and may not be used in an interrupt context.
+- *
+- * Return: The number of bytes received on success, or else the status code
+- * returned by the underlying usb_control_msg() call.
++ * Returns: a pointer to a dynamically allocated usb_device_descriptor
++ * structure (which the caller must deallocate), or an ERR_PTR value.
+  */
+-int usb_get_device_descriptor(struct usb_device *dev, unsigned int size)
++struct usb_device_descriptor *usb_get_device_descriptor(struct usb_device *udev)
+ {
+ 	struct usb_device_descriptor *desc;
+ 	int ret;
+ 
+-	if (size > sizeof(*desc))
+-		return -EINVAL;
+ 	desc = kmalloc(sizeof(*desc), GFP_NOIO);
+ 	if (!desc)
+-		return -ENOMEM;
++		return ERR_PTR(-ENOMEM);
++
++	ret = usb_get_descriptor(udev, USB_DT_DEVICE, 0, desc, sizeof(*desc));
++	if (ret == sizeof(*desc))
++		return desc;
+ 
+-	ret = usb_get_descriptor(dev, USB_DT_DEVICE, 0, desc, size);
+ 	if (ret >= 0)
+-		memcpy(&dev->descriptor, desc, size);
++		ret = -EMSGSIZE;
+ 	kfree(desc);
+-	return ret;
++	return ERR_PTR(ret);
+ }
+ 
+ /*
+diff --git a/drivers/usb/core/usb.h b/drivers/usb/core/usb.h
+index ffe3f6818e9cf..4a16d559d3bff 100644
+--- a/drivers/usb/core/usb.h
++++ b/drivers/usb/core/usb.h
+@@ -43,8 +43,8 @@ extern bool usb_endpoint_is_ignored(struct usb_device *udev,
+ 		struct usb_endpoint_descriptor *epd);
+ extern int usb_remove_device(struct usb_device *udev);
+ 
+-extern int usb_get_device_descriptor(struct usb_device *dev,
+-		unsigned int size);
++extern struct usb_device_descriptor *usb_get_device_descriptor(
++		struct usb_device *udev);
+ extern int usb_set_isoch_delay(struct usb_device *dev);
+ extern int usb_get_bos_descriptor(struct usb_device *dev);
+ extern void usb_release_bos_descriptor(struct usb_device *dev);
+diff --git a/drivers/usb/gadget/function/f_mass_storage.c b/drivers/usb/gadget/function/f_mass_storage.c
+index da07e45ae6df5..722a3ab2b3379 100644
+--- a/drivers/usb/gadget/function/f_mass_storage.c
++++ b/drivers/usb/gadget/function/f_mass_storage.c
+@@ -927,7 +927,7 @@ static void invalidate_sub(struct fsg_lun *curlun)
+ {
+ 	struct file	*filp = curlun->filp;
+ 	struct inode	*inode = file_inode(filp);
+-	unsigned long	rc;
++	unsigned long __maybe_unused	rc;
+ 
+ 	rc = invalidate_mapping_pages(inode->i_mapping, 0, -1);
+ 	VLDBG(curlun, "invalidate_mapping_pages -> %ld\n", rc);
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 7d49d8a0b00c2..7166d1117742a 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -40,6 +40,7 @@ static const struct bus_type gadget_bus_type;
+  * @allow_connect: Indicates whether UDC is allowed to be pulled up.
+  * Set/cleared by gadget_(un)bind_driver() after gadget driver is bound or
+  * unbound.
++ * @vbus_work: work routine to handle VBUS status change notifications.
+  * @connect_lock: protects udc->started, gadget->connect,
+  * gadget->allow_connect and gadget->deactivate. The routines
+  * usb_gadget_connect_locked(), usb_gadget_disconnect_locked(),
+diff --git a/drivers/usb/phy/phy-mxs-usb.c b/drivers/usb/phy/phy-mxs-usb.c
+index e1a2b2ea098b5..cceabb9d37e98 100644
+--- a/drivers/usb/phy/phy-mxs-usb.c
++++ b/drivers/usb/phy/phy-mxs-usb.c
+@@ -388,14 +388,8 @@ static void __mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool disconnect)
+ 
+ static bool mxs_phy_is_otg_host(struct mxs_phy *mxs_phy)
+ {
+-	void __iomem *base = mxs_phy->phy.io_priv;
+-	u32 phyctrl = readl(base + HW_USBPHY_CTRL);
+-
+-	if (IS_ENABLED(CONFIG_USB_OTG) &&
+-			!(phyctrl & BM_USBPHY_CTRL_OTG_ID_VALUE))
+-		return true;
+-
+-	return false;
++	return IS_ENABLED(CONFIG_USB_OTG) &&
++		mxs_phy->phy.last_event == USB_EVENT_ID;
+ }
+ 
+ static void mxs_phy_disconnect_line(struct mxs_phy *mxs_phy, bool on)
+diff --git a/drivers/usb/typec/bus.c b/drivers/usb/typec/bus.c
+index fe5b9a2e61f58..e95ec7e382bb7 100644
+--- a/drivers/usb/typec/bus.c
++++ b/drivers/usb/typec/bus.c
+@@ -183,12 +183,20 @@ EXPORT_SYMBOL_GPL(typec_altmode_exit);
+  *
+  * Notifies the partner of @adev about Attention command.
+  */
+-void typec_altmode_attention(struct typec_altmode *adev, u32 vdo)
++int typec_altmode_attention(struct typec_altmode *adev, u32 vdo)
+ {
+-	struct typec_altmode *pdev = &to_altmode(adev)->partner->adev;
++	struct altmode *partner = to_altmode(adev)->partner;
++	struct typec_altmode *pdev;
++
++	if (!partner)
++		return -ENODEV;
++
++	pdev = &partner->adev;
+ 
+ 	if (pdev->ops && pdev->ops->attention)
+ 		pdev->ops->attention(pdev, vdo);
++
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(typec_altmode_attention);
+ 
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index bf97b81ff5b07..1596afee6c86f 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -1877,7 +1877,8 @@ static void tcpm_handle_vdm_request(struct tcpm_port *port,
+ 			}
+ 			break;
+ 		case ADEV_ATTENTION:
+-			typec_altmode_attention(adev, p[1]);
++			if (typec_altmode_attention(adev, p[1]))
++				tcpm_log(port, "typec_altmode_attention no port partner altmode");
+ 			break;
+ 		}
+ 	}
+@@ -3935,6 +3936,29 @@ static enum typec_cc_status tcpm_pwr_opmode_to_rp(enum typec_pwr_opmode opmode)
+ 	}
+ }
+ 
++static void tcpm_set_initial_svdm_version(struct tcpm_port *port)
++{
++	switch (port->negotiated_rev) {
++	case PD_REV30:
++		break;
++	/*
++	 * 6.4.4.2.3 Structured VDM Version
++	 * 2.0 states "At this time, there is only one version (1.0) defined.
++	 * This field Shall be set to zero to indicate Version 1.0."
++	 * 3.0 states "This field Shall be set to 01b to indicate Version 2.0."
++	 * To ensure that we follow the Power Delivery revision we are currently
++	 * operating on, downgrade the SVDM version to the highest one supported
++	 * by the Power Delivery revision.
++	 */
++	case PD_REV20:
++		typec_partner_set_svdm_version(port->partner, SVDM_VER_1_0);
++		break;
++	default:
++		typec_partner_set_svdm_version(port->partner, SVDM_VER_1_0);
++		break;
++	}
++}
++
+ static void run_state_machine(struct tcpm_port *port)
+ {
+ 	int ret;
+@@ -4172,10 +4196,12 @@ static void run_state_machine(struct tcpm_port *port)
+ 		 * For now, this driver only supports SOP for DISCOVER_IDENTITY, thus using
+ 		 * port->explicit_contract to decide whether to send the command.
+ 		 */
+-		if (port->explicit_contract)
++		if (port->explicit_contract) {
++			tcpm_set_initial_svdm_version(port);
+ 			mod_send_discover_delayed_work(port, 0);
+-		else
++		} else {
+ 			port->send_discover = false;
++		}
+ 
+ 		/*
+ 		 * 6.3.5
+@@ -4462,10 +4488,12 @@ static void run_state_machine(struct tcpm_port *port)
+ 		 * For now, this driver only supports SOP for DISCOVER_IDENTITY, thus using
+ 		 * port->explicit_contract.
+ 		 */
+-		if (port->explicit_contract)
++		if (port->explicit_contract) {
++			tcpm_set_initial_svdm_version(port);
+ 			mod_send_discover_delayed_work(port, 0);
+-		else
++		} else {
+ 			port->send_discover = false;
++		}
+ 
+ 		power_supply_changed(port->psy);
+ 		break;
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index ebe0ad31d0b03..d662aa9d1b4b6 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -2732,7 +2732,7 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
+ static int vfio_iommu_migration_build_caps(struct vfio_iommu *iommu,
+ 					   struct vfio_info_cap *caps)
+ {
+-	struct vfio_iommu_type1_info_cap_migration cap_mig;
++	struct vfio_iommu_type1_info_cap_migration cap_mig = {};
+ 
+ 	cap_mig.header.id = VFIO_IOMMU_TYPE1_INFO_CAP_MIGRATION;
+ 	cap_mig.header.version = 1;
+diff --git a/drivers/video/backlight/bd6107.c b/drivers/video/backlight/bd6107.c
+index 7df25faa07a59..40979ee7133aa 100644
+--- a/drivers/video/backlight/bd6107.c
++++ b/drivers/video/backlight/bd6107.c
+@@ -104,7 +104,7 @@ static int bd6107_backlight_check_fb(struct backlight_device *backlight,
+ {
+ 	struct bd6107 *bd = bl_get_data(backlight);
+ 
+-	return bd->pdata->fbdev == NULL || bd->pdata->fbdev == info->dev;
++	return bd->pdata->fbdev == NULL || bd->pdata->fbdev == info->device;
+ }
+ 
+ static const struct backlight_ops bd6107_backlight_ops = {
+diff --git a/drivers/video/backlight/gpio_backlight.c b/drivers/video/backlight/gpio_backlight.c
+index 6f78d928f054a..5c5c99f7979e3 100644
+--- a/drivers/video/backlight/gpio_backlight.c
++++ b/drivers/video/backlight/gpio_backlight.c
+@@ -35,7 +35,7 @@ static int gpio_backlight_check_fb(struct backlight_device *bl,
+ {
+ 	struct gpio_backlight *gbl = bl_get_data(bl);
+ 
+-	return gbl->fbdev == NULL || gbl->fbdev == info->dev;
++	return gbl->fbdev == NULL || gbl->fbdev == info->device;
+ }
+ 
+ static const struct backlight_ops gpio_backlight_ops = {
+diff --git a/drivers/video/backlight/lv5207lp.c b/drivers/video/backlight/lv5207lp.c
+index 56695ce67e480..dce2983315444 100644
+--- a/drivers/video/backlight/lv5207lp.c
++++ b/drivers/video/backlight/lv5207lp.c
+@@ -67,7 +67,7 @@ static int lv5207lp_backlight_check_fb(struct backlight_device *backlight,
+ {
+ 	struct lv5207lp *lv = bl_get_data(backlight);
+ 
+-	return lv->pdata->fbdev == NULL || lv->pdata->fbdev == info->dev;
++	return lv->pdata->fbdev == NULL || lv->pdata->fbdev == info->device;
+ }
+ 
+ static const struct backlight_ops lv5207lp_backlight_ops = {
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index c5310eaf8b468..da1150d127c24 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -1461,7 +1461,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
+ 		}
+ 	}
+ 
+-	if (i < head)
++	if (i <= head)
+ 		vq->packed.avail_wrap_counter ^= 1;
+ 
+ 	/* We're using some buffers from the free list. */
+diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
+index 961161da59000..06ce6d8c2e004 100644
+--- a/drivers/virtio/virtio_vdpa.c
++++ b/drivers/virtio/virtio_vdpa.c
+@@ -366,11 +366,14 @@ static int virtio_vdpa_find_vqs(struct virtio_device *vdev, unsigned int nvqs,
+ 	struct irq_affinity default_affd = { 0 };
+ 	struct cpumask *masks;
+ 	struct vdpa_callback cb;
++	bool has_affinity = desc && ops->set_vq_affinity;
+ 	int i, err, queue_idx = 0;
+ 
+-	masks = create_affinity_masks(nvqs, desc ? desc : &default_affd);
+-	if (!masks)
+-		return -ENOMEM;
++	if (has_affinity) {
++		masks = create_affinity_masks(nvqs, desc ? desc : &default_affd);
++		if (!masks)
++			return -ENOMEM;
++	}
+ 
+ 	for (i = 0; i < nvqs; ++i) {
+ 		if (!names[i]) {
+@@ -386,20 +389,22 @@ static int virtio_vdpa_find_vqs(struct virtio_device *vdev, unsigned int nvqs,
+ 			goto err_setup_vq;
+ 		}
+ 
+-		if (ops->set_vq_affinity)
++		if (has_affinity)
+ 			ops->set_vq_affinity(vdpa, i, &masks[i]);
+ 	}
+ 
+ 	cb.callback = virtio_vdpa_config_cb;
+ 	cb.private = vd_dev;
+ 	ops->set_config_cb(vdpa, &cb);
+-	kfree(masks);
++	if (has_affinity)
++		kfree(masks);
+ 
+ 	return 0;
+ 
+ err_setup_vq:
+ 	virtio_vdpa_del_vqs(vdev);
+-	kfree(masks);
++	if (has_affinity)
++		kfree(masks);
+ 	return err;
+ }
+ 
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 72b90bc19a191..2490301350015 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -1707,10 +1707,21 @@ void btrfs_finish_ordered_zoned(struct btrfs_ordered_extent *ordered)
+ {
+ 	struct btrfs_inode *inode = BTRFS_I(ordered->inode);
+ 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
+-	struct btrfs_ordered_sum *sum =
+-		list_first_entry(&ordered->list, typeof(*sum), list);
+-	u64 logical = sum->logical;
+-	u64 len = sum->len;
++	struct btrfs_ordered_sum *sum;
++	u64 logical, len;
++
++	/*
++	 * Write to pre-allocated region is for the data relocation, and so
++	 * it should use WRITE operation. No split/rewrite are necessary.
++	 */
++	if (test_bit(BTRFS_ORDERED_PREALLOC, &ordered->flags))
++		return;
++
++	ASSERT(!list_empty(&ordered->list));
++	/* The ordered->list can be empty in the above pre-alloc case. */
++	sum = list_first_entry(&ordered->list, struct btrfs_ordered_sum, list);
++	logical = sum->logical;
++	len = sum->len;
+ 
+ 	while (len < ordered->disk_num_bytes) {
+ 		sum = list_next_entry(sum, list);
+diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
+index 70a4752ed913a..fd603e06d07fe 100644
+--- a/fs/dlm/plock.c
++++ b/fs/dlm/plock.c
+@@ -456,7 +456,8 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
+ 		}
+ 	} else {
+ 		list_for_each_entry(iter, &recv_list, list) {
+-			if (!iter->info.wait) {
++			if (!iter->info.wait &&
++			    iter->info.fsid == info.fsid) {
+ 				op = iter;
+ 				break;
+ 			}
+@@ -468,8 +469,7 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
+ 		if (info.wait)
+ 			WARN_ON(op->info.optype != DLM_PLOCK_OP_LOCK);
+ 		else
+-			WARN_ON(op->info.fsid != info.fsid ||
+-				op->info.number != info.number ||
++			WARN_ON(op->info.number != info.number ||
+ 				op->info.owner != info.owner ||
+ 				op->info.optype != info.optype);
+ 
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index 9c9350eb17040..9bfdb4ad7c763 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -1412,7 +1412,10 @@ static void z_erofs_decompress_queue(const struct z_erofs_decompressqueue *io,
+ 		owned = READ_ONCE(be.pcl->next);
+ 
+ 		z_erofs_decompress_pcluster(&be, io->eio ? -EIO : 0);
+-		erofs_workgroup_put(&be.pcl->obj);
++		if (z_erofs_is_inline_pcluster(be.pcl))
++			z_erofs_free_pcluster(be.pcl);
++		else
++			erofs_workgroup_put(&be.pcl->obj);
+ 	}
+ }
+ 
+diff --git a/fs/eventfd.c b/fs/eventfd.c
+index 8aa36cd373516..33a918f9566c3 100644
+--- a/fs/eventfd.c
++++ b/fs/eventfd.c
+@@ -189,7 +189,7 @@ void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt)
+ {
+ 	lockdep_assert_held(&ctx->wqh.lock);
+ 
+-	*cnt = (ctx->flags & EFD_SEMAPHORE) ? 1 : ctx->count;
++	*cnt = ((ctx->flags & EFD_SEMAPHORE) && ctx->count) ? 1 : ctx->count;
+ 	ctx->count -= *cnt;
+ }
+ EXPORT_SYMBOL_GPL(eventfd_ctx_do_read);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 21b903fe546e8..a197ef71b7b02 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -1080,8 +1080,9 @@ static inline int should_optimize_scan(struct ext4_allocation_context *ac)
+  * Return next linear group for allocation. If linear traversal should not be
+  * performed, this function just returns the same group
+  */
+-static int
+-next_linear_group(struct ext4_allocation_context *ac, int group, int ngroups)
++static ext4_group_t
++next_linear_group(struct ext4_allocation_context *ac, ext4_group_t group,
++		  ext4_group_t ngroups)
+ {
+ 	if (!should_optimize_scan(ac))
+ 		goto inc_and_return;
+@@ -2553,7 +2554,7 @@ static bool ext4_mb_good_group(struct ext4_allocation_context *ac,
+ 
+ 	BUG_ON(cr < CR_POWER2_ALIGNED || cr >= EXT4_MB_NUM_CRS);
+ 
+-	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(grp) || !grp))
++	if (unlikely(!grp || EXT4_MB_GRP_BBITMAP_CORRUPT(grp)))
+ 		return false;
+ 
+ 	free = grp->bb_free;
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 0caf6c730ce34..6bcc3770ee19f 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -2799,6 +2799,7 @@ static int ext4_add_nondir(handle_t *handle,
+ 		return err;
+ 	}
+ 	drop_nlink(inode);
++	ext4_mark_inode_dirty(handle, inode);
+ 	ext4_orphan_add(handle, inode);
+ 	unlock_new_inode(inode);
+ 	return err;
+@@ -3436,6 +3437,7 @@ retry:
+ 
+ err_drop_inode:
+ 	clear_nlink(inode);
++	ext4_mark_inode_dirty(handle, inode);
+ 	ext4_orphan_add(handle, inode);
+ 	unlock_new_inode(inode);
+ 	if (handle)
+@@ -4021,6 +4023,7 @@ end_rename:
+ 			ext4_resetent(handle, &old,
+ 				      old.inode->i_ino, old_file_type);
+ 			drop_nlink(whiteout);
++			ext4_mark_inode_dirty(handle, whiteout);
+ 			ext4_orphan_add(handle, whiteout);
+ 		}
+ 		unlock_new_inode(whiteout);
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 8fd3b7f9fb88e..b0597a539fc54 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -1701,9 +1701,9 @@ int f2fs_write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ 	}
+ 
+ 	f2fs_restore_inmem_curseg(sbi);
++	stat_inc_cp_count(sbi);
+ stop:
+ 	unblock_operations(sbi);
+-	stat_inc_cp_count(sbi->stat_info);
+ 
+ 	if (cpc->reason & CP_RECOVERY)
+ 		f2fs_notice(sbi, "checkpoint: version = %llx", ckpt_ver);
+diff --git a/fs/f2fs/debug.c b/fs/f2fs/debug.c
+index 61c35b59126ec..fdbf994f12718 100644
+--- a/fs/f2fs/debug.c
++++ b/fs/f2fs/debug.c
+@@ -215,6 +215,9 @@ static void update_general_status(struct f2fs_sb_info *sbi)
+ 		si->valid_blks[type] += blks;
+ 	}
+ 
++	for (i = 0; i < MAX_CALL_TYPE; i++)
++		si->cp_call_count[i] = atomic_read(&sbi->cp_call_count[i]);
++
+ 	for (i = 0; i < 2; i++) {
+ 		si->segment_count[i] = sbi->segment_count[i];
+ 		si->block_count[i] = sbi->block_count[i];
+@@ -497,7 +500,9 @@ static int stat_show(struct seq_file *s, void *v)
+ 		seq_printf(s, "  - Prefree: %d\n  - Free: %d (%d)\n\n",
+ 			   si->prefree_count, si->free_segs, si->free_secs);
+ 		seq_printf(s, "CP calls: %d (BG: %d)\n",
+-				si->cp_count, si->bg_cp_count);
++			   si->cp_call_count[TOTAL_CALL],
++			   si->cp_call_count[BACKGROUND]);
++		seq_printf(s, "CP count: %d\n", si->cp_count);
+ 		seq_printf(s, "  - cp blocks : %u\n", si->meta_count[META_CP]);
+ 		seq_printf(s, "  - sit blocks : %u\n",
+ 				si->meta_count[META_SIT]);
+@@ -511,12 +516,24 @@ static int stat_show(struct seq_file *s, void *v)
+ 		seq_printf(s, "  - Total : %4d\n", si->nr_total_ckpt);
+ 		seq_printf(s, "  - Cur time : %4d(ms)\n", si->cur_ckpt_time);
+ 		seq_printf(s, "  - Peak time : %4d(ms)\n", si->peak_ckpt_time);
+-		seq_printf(s, "GC calls: %d (BG: %d)\n",
+-			   si->call_count, si->bg_gc);
+-		seq_printf(s, "  - data segments : %d (%d)\n",
+-				si->data_segs, si->bg_data_segs);
+-		seq_printf(s, "  - node segments : %d (%d)\n",
+-				si->node_segs, si->bg_node_segs);
++		seq_printf(s, "GC calls: %d (gc_thread: %d)\n",
++			   si->gc_call_count[BACKGROUND] +
++			   si->gc_call_count[FOREGROUND],
++			   si->gc_call_count[BACKGROUND]);
++		if (__is_large_section(sbi)) {
++			seq_printf(s, "  - data sections : %d (BG: %d)\n",
++					si->gc_secs[DATA][BG_GC] + si->gc_secs[DATA][FG_GC],
++					si->gc_secs[DATA][BG_GC]);
++			seq_printf(s, "  - node sections : %d (BG: %d)\n",
++					si->gc_secs[NODE][BG_GC] + si->gc_secs[NODE][FG_GC],
++					si->gc_secs[NODE][BG_GC]);
++		}
++		seq_printf(s, "  - data segments : %d (BG: %d)\n",
++				si->gc_segs[DATA][BG_GC] + si->gc_segs[DATA][FG_GC],
++				si->gc_segs[DATA][BG_GC]);
++		seq_printf(s, "  - node segments : %d (BG: %d)\n",
++				si->gc_segs[NODE][BG_GC] + si->gc_segs[NODE][FG_GC],
++				si->gc_segs[NODE][BG_GC]);
+ 		seq_puts(s, "  - Reclaimed segs :\n");
+ 		seq_printf(s, "    - Normal : %d\n", sbi->gc_reclaimed_segs[GC_NORMAL]);
+ 		seq_printf(s, "    - Idle CB : %d\n", sbi->gc_reclaimed_segs[GC_IDLE_CB]);
+@@ -687,6 +704,8 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
+ 	atomic_set(&sbi->inplace_count, 0);
+ 	for (i = META_CP; i < META_MAX; i++)
+ 		atomic_set(&sbi->meta_count[i], 0);
++	for (i = 0; i < MAX_CALL_TYPE; i++)
++		atomic_set(&sbi->cp_call_count[i], 0);
+ 
+ 	atomic_set(&sbi->max_aw_cnt, 0);
+ 
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index c7cb2177b2527..c602ff2403b67 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1383,6 +1383,13 @@ enum errors_option {
+ 	MOUNT_ERRORS_PANIC,	/* panic on errors */
+ };
+ 
++enum {
++	BACKGROUND,
++	FOREGROUND,
++	MAX_CALL_TYPE,
++	TOTAL_CALL = FOREGROUND,
++};
++
+ static inline int f2fs_test_bit(unsigned int nr, char *addr);
+ static inline void f2fs_set_bit(unsigned int nr, char *addr);
+ static inline void f2fs_clear_bit(unsigned int nr, char *addr);
+@@ -1695,6 +1702,7 @@ struct f2fs_sb_info {
+ 	unsigned int io_skip_bggc;		/* skip background gc for in-flight IO */
+ 	unsigned int other_skip_bggc;		/* skip background gc for other reasons */
+ 	unsigned int ndirty_inode[NR_INODE_TYPE];	/* # of dirty inodes */
++	atomic_t cp_call_count[MAX_CALL_TYPE];	/* # of cp call */
+ #endif
+ 	spinlock_t stat_lock;			/* lock for stat operations */
+ 
+@@ -3885,7 +3893,7 @@ struct f2fs_stat_info {
+ 	int nats, dirty_nats, sits, dirty_sits;
+ 	int free_nids, avail_nids, alloc_nids;
+ 	int total_count, utilization;
+-	int bg_gc, nr_wb_cp_data, nr_wb_data;
++	int nr_wb_cp_data, nr_wb_data;
+ 	int nr_rd_data, nr_rd_node, nr_rd_meta;
+ 	int nr_dio_read, nr_dio_write;
+ 	unsigned int io_skip_bggc, other_skip_bggc;
+@@ -3905,9 +3913,11 @@ struct f2fs_stat_info {
+ 	int rsvd_segs, overp_segs;
+ 	int dirty_count, node_pages, meta_pages, compress_pages;
+ 	int compress_page_hit;
+-	int prefree_count, call_count, cp_count, bg_cp_count;
+-	int tot_segs, node_segs, data_segs, free_segs, free_secs;
+-	int bg_node_segs, bg_data_segs;
++	int prefree_count, free_segs, free_secs;
++	int cp_call_count[MAX_CALL_TYPE], cp_count;
++	int gc_call_count[MAX_CALL_TYPE];
++	int gc_segs[2][2];
++	int gc_secs[2][2];
+ 	int tot_blks, data_blks, node_blks;
+ 	int bg_data_blks, bg_node_blks;
+ 	int curseg[NR_CURSEG_TYPE];
+@@ -3929,10 +3939,9 @@ static inline struct f2fs_stat_info *F2FS_STAT(struct f2fs_sb_info *sbi)
+ 	return (struct f2fs_stat_info *)sbi->stat_info;
+ }
+ 
+-#define stat_inc_cp_count(si)		((si)->cp_count++)
+-#define stat_inc_bg_cp_count(si)	((si)->bg_cp_count++)
+-#define stat_inc_call_count(si)		((si)->call_count++)
+-#define stat_inc_bggc_count(si)		((si)->bg_gc++)
++#define stat_inc_cp_call_count(sbi, foreground)				\
++		atomic_inc(&sbi->cp_call_count[(foreground)])
++#define stat_inc_cp_count(si)		(F2FS_STAT(sbi)->cp_count++)
+ #define stat_io_skip_bggc_count(sbi)	((sbi)->io_skip_bggc++)
+ #define stat_other_skip_bggc_count(sbi)	((sbi)->other_skip_bggc++)
+ #define stat_inc_dirty_inode(sbi, type)	((sbi)->ndirty_inode[type]++)
+@@ -4017,18 +4026,12 @@ static inline struct f2fs_stat_info *F2FS_STAT(struct f2fs_sb_info *sbi)
+ 		if (cur > max)						\
+ 			atomic_set(&F2FS_I_SB(inode)->max_aw_cnt, cur);	\
+ 	} while (0)
+-#define stat_inc_seg_count(sbi, type, gc_type)				\
+-	do {								\
+-		struct f2fs_stat_info *si = F2FS_STAT(sbi);		\
+-		si->tot_segs++;						\
+-		if ((type) == SUM_TYPE_DATA) {				\
+-			si->data_segs++;				\
+-			si->bg_data_segs += (gc_type == BG_GC) ? 1 : 0;	\
+-		} else {						\
+-			si->node_segs++;				\
+-			si->bg_node_segs += (gc_type == BG_GC) ? 1 : 0;	\
+-		}							\
+-	} while (0)
++#define stat_inc_gc_call_count(sbi, foreground)				\
++		(F2FS_STAT(sbi)->gc_call_count[(foreground)]++)
++#define stat_inc_gc_sec_count(sbi, type, gc_type)			\
++		(F2FS_STAT(sbi)->gc_secs[(type)][(gc_type)]++)
++#define stat_inc_gc_seg_count(sbi, type, gc_type)			\
++		(F2FS_STAT(sbi)->gc_segs[(type)][(gc_type)]++)
+ 
+ #define stat_inc_tot_blk_count(si, blks)				\
+ 	((si)->tot_blks += (blks))
+@@ -4055,10 +4058,8 @@ void __init f2fs_create_root_stats(void);
+ void f2fs_destroy_root_stats(void);
+ void f2fs_update_sit_info(struct f2fs_sb_info *sbi);
+ #else
+-#define stat_inc_cp_count(si)				do { } while (0)
+-#define stat_inc_bg_cp_count(si)			do { } while (0)
+-#define stat_inc_call_count(si)				do { } while (0)
+-#define stat_inc_bggc_count(si)				do { } while (0)
++#define stat_inc_cp_call_count(sbi, foreground)		do { } while (0)
++#define stat_inc_cp_count(sbi)				do { } while (0)
+ #define stat_io_skip_bggc_count(sbi)			do { } while (0)
+ #define stat_other_skip_bggc_count(sbi)			do { } while (0)
+ #define stat_inc_dirty_inode(sbi, type)			do { } while (0)
+@@ -4086,7 +4087,9 @@ void f2fs_update_sit_info(struct f2fs_sb_info *sbi);
+ #define stat_inc_seg_type(sbi, curseg)			do { } while (0)
+ #define stat_inc_block_count(sbi, curseg)		do { } while (0)
+ #define stat_inc_inplace_blocks(sbi)			do { } while (0)
+-#define stat_inc_seg_count(sbi, type, gc_type)		do { } while (0)
++#define stat_inc_gc_call_count(sbi, foreground)		do { } while (0)
++#define stat_inc_gc_sec_count(sbi, type, gc_type)	do { } while (0)
++#define stat_inc_gc_seg_count(sbi, type, gc_type)	do { } while (0)
+ #define stat_inc_tot_blk_count(si, blks)		do { } while (0)
+ #define stat_inc_data_blk_count(sbi, blks, gc_type)	do { } while (0)
+ #define stat_inc_node_blk_count(sbi, blks, gc_type)	do { } while (0)
+@@ -4423,6 +4426,22 @@ static inline bool f2fs_blkz_is_seq(struct f2fs_sb_info *sbi, int devi,
+ }
+ #endif
+ 
++static inline int f2fs_bdev_index(struct f2fs_sb_info *sbi,
++				  struct block_device *bdev)
++{
++	int i;
++
++	if (!f2fs_is_multi_device(sbi))
++		return 0;
++
++	for (i = 0; i < sbi->s_ndevs; i++)
++		if (FDEV(i).bdev == bdev)
++			return i;
++
++	WARN_ON(1);
++	return -1;
++}
++
+ static inline bool f2fs_hw_should_discard(struct f2fs_sb_info *sbi)
+ {
+ 	return f2fs_sb_has_blkzoned(sbi);
+@@ -4483,7 +4502,8 @@ static inline bool f2fs_low_mem_mode(struct f2fs_sb_info *sbi)
+ static inline bool f2fs_may_compress(struct inode *inode)
+ {
+ 	if (IS_SWAPFILE(inode) || f2fs_is_pinned_file(inode) ||
+-		f2fs_is_atomic_file(inode) || f2fs_has_inline_data(inode))
++		f2fs_is_atomic_file(inode) || f2fs_has_inline_data(inode) ||
++		f2fs_is_mmap_file(inode))
+ 		return false;
+ 	return S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode);
+ }
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 093039dee9920..ea4a094c518f9 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -526,7 +526,11 @@ static int f2fs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ 
+ 	file_accessed(file);
+ 	vma->vm_ops = &f2fs_file_vm_ops;
++
++	f2fs_down_read(&F2FS_I(inode)->i_sem);
+ 	set_inode_flag(inode, FI_MMAP_FILE);
++	f2fs_up_read(&F2FS_I(inode)->i_sem);
++
+ 	return 0;
+ }
+ 
+@@ -1724,6 +1728,7 @@ next_alloc:
+ 		if (has_not_enough_free_secs(sbi, 0,
+ 			GET_SEC_FROM_SEG(sbi, overprovision_segments(sbi)))) {
+ 			f2fs_down_write(&sbi->gc_lock);
++			stat_inc_gc_call_count(sbi, FOREGROUND);
+ 			err = f2fs_gc(sbi, &gc_control);
+ 			if (err && err != -ENODATA)
+ 				goto out_err;
+@@ -1919,12 +1924,19 @@ static int f2fs_setflags_common(struct inode *inode, u32 iflags, u32 mask)
+ 			int err = f2fs_convert_inline_inode(inode);
+ 			if (err)
+ 				return err;
+-			if (!f2fs_may_compress(inode))
+-				return -EINVAL;
+-			if (S_ISREG(inode->i_mode) && F2FS_HAS_BLOCKS(inode))
++
++			f2fs_down_write(&F2FS_I(inode)->i_sem);
++			if (!f2fs_may_compress(inode) ||
++					(S_ISREG(inode->i_mode) &&
++					F2FS_HAS_BLOCKS(inode))) {
++				f2fs_up_write(&F2FS_I(inode)->i_sem);
+ 				return -EINVAL;
+-			if (set_compress_context(inode))
+-				return -EOPNOTSUPP;
++			}
++			err = set_compress_context(inode);
++			f2fs_up_write(&F2FS_I(inode)->i_sem);
++
++			if (err)
++				return err;
+ 		}
+ 	}
+ 
+@@ -2465,6 +2477,7 @@ static int f2fs_ioc_gc(struct file *filp, unsigned long arg)
+ 
+ 	gc_control.init_gc_type = sync ? FG_GC : BG_GC;
+ 	gc_control.err_gc_skipped = sync;
++	stat_inc_gc_call_count(sbi, FOREGROUND);
+ 	ret = f2fs_gc(sbi, &gc_control);
+ out:
+ 	mnt_drop_write_file(filp);
+@@ -2508,6 +2521,7 @@ do_more:
+ 	}
+ 
+ 	gc_control.victim_segno = GET_SEGNO(sbi, range->start);
++	stat_inc_gc_call_count(sbi, FOREGROUND);
+ 	ret = f2fs_gc(sbi, &gc_control);
+ 	if (ret) {
+ 		if (ret == -EBUSY)
+@@ -2990,6 +3004,7 @@ static int f2fs_ioc_flush_device(struct file *filp, unsigned long arg)
+ 		sm->last_victim[ALLOC_NEXT] = end_segno + 1;
+ 
+ 		gc_control.victim_segno = start_segno;
++		stat_inc_gc_call_count(sbi, FOREGROUND);
+ 		ret = f2fs_gc(sbi, &gc_control);
+ 		if (ret == -EAGAIN)
+ 			ret = 0;
+@@ -3976,6 +3991,7 @@ static int f2fs_ioc_set_compress_option(struct file *filp, unsigned long arg)
+ 	file_start_write(filp);
+ 	inode_lock(inode);
+ 
++	f2fs_down_write(&F2FS_I(inode)->i_sem);
+ 	if (f2fs_is_mmap_file(inode) || get_dirty_pages(inode)) {
+ 		ret = -EBUSY;
+ 		goto out;
+@@ -3995,6 +4011,7 @@ static int f2fs_ioc_set_compress_option(struct file *filp, unsigned long arg)
+ 		f2fs_warn(sbi, "compression algorithm is successfully set, "
+ 			"but current kernel doesn't support this algorithm.");
+ out:
++	f2fs_up_write(&F2FS_I(inode)->i_sem);
+ 	inode_unlock(inode);
+ 	file_end_write(filp);
+ 
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 01effd3fcb6c7..6690323fff83b 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -121,8 +121,8 @@ static int gc_thread_func(void *data)
+ 		else
+ 			increase_sleep_time(gc_th, &wait_ms);
+ do_gc:
+-		if (!foreground)
+-			stat_inc_bggc_count(sbi->stat_info);
++		stat_inc_gc_call_count(sbi, foreground ?
++					FOREGROUND : BACKGROUND);
+ 
+ 		sync_mode = F2FS_OPTION(sbi).bggc_mode == BGGC_MODE_SYNC;
+ 
+@@ -1685,6 +1685,7 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
+ 	int seg_freed = 0, migrated = 0;
+ 	unsigned char type = IS_DATASEG(get_seg_entry(sbi, segno)->type) ?
+ 						SUM_TYPE_DATA : SUM_TYPE_NODE;
++	unsigned char data_type = (type == SUM_TYPE_DATA) ? DATA : NODE;
+ 	int submitted = 0;
+ 
+ 	if (__is_large_section(sbi))
+@@ -1766,7 +1767,7 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
+ 							segno, gc_type,
+ 							force_migrate);
+ 
+-		stat_inc_seg_count(sbi, type, gc_type);
++		stat_inc_gc_seg_count(sbi, data_type, gc_type);
+ 		sbi->gc_reclaimed_segs[sbi->gc_mode]++;
+ 		migrated++;
+ 
+@@ -1783,12 +1784,12 @@ skip:
+ 	}
+ 
+ 	if (submitted)
+-		f2fs_submit_merged_write(sbi,
+-				(type == SUM_TYPE_NODE) ? NODE : DATA);
++		f2fs_submit_merged_write(sbi, data_type);
+ 
+ 	blk_finish_plug(&plug);
+ 
+-	stat_inc_call_count(sbi->stat_info);
++	if (migrated)
++		stat_inc_gc_sec_count(sbi, data_type, gc_type);
+ 
+ 	return seg_freed;
+ }
+@@ -1839,6 +1840,7 @@ gc_more:
+ 		 * secure free segments which doesn't need fggc any more.
+ 		 */
+ 		if (prefree_segments(sbi)) {
++			stat_inc_cp_call_count(sbi, TOTAL_CALL);
+ 			ret = f2fs_write_checkpoint(sbi, &cpc);
+ 			if (ret)
+ 				goto stop;
+@@ -1887,6 +1889,7 @@ retry:
+ 		round++;
+ 		if (skipped_round > MAX_SKIP_GC_COUNT &&
+ 				skipped_round * 2 >= round) {
++			stat_inc_cp_call_count(sbi, TOTAL_CALL);
+ 			ret = f2fs_write_checkpoint(sbi, &cpc);
+ 			goto stop;
+ 		}
+@@ -1902,6 +1905,7 @@ retry:
+ 	 */
+ 	if (free_sections(sbi) <= upper_secs + NR_GC_CHECKPOINT_SECS &&
+ 				prefree_segments(sbi)) {
++		stat_inc_cp_call_count(sbi, TOTAL_CALL);
+ 		ret = f2fs_write_checkpoint(sbi, &cpc);
+ 		if (ret)
+ 			goto stop;
+@@ -2029,6 +2033,7 @@ static int free_segment_range(struct f2fs_sb_info *sbi,
+ 	if (gc_only)
+ 		goto out;
+ 
++	stat_inc_cp_call_count(sbi, TOTAL_CALL);
+ 	err = f2fs_write_checkpoint(sbi, &cpc);
+ 	if (err)
+ 		goto out;
+@@ -2221,6 +2226,7 @@ out_drop_write:
+ 	clear_sbi_flag(sbi, SBI_IS_RESIZEFS);
+ 	set_sbi_flag(sbi, SBI_IS_DIRTY);
+ 
++	stat_inc_cp_call_count(sbi, TOTAL_CALL);
+ 	err = f2fs_write_checkpoint(sbi, &cpc);
+ 	if (err) {
+ 		update_fs_metadata(sbi, secs);
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 09e986b050c61..e81725c922cd4 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -475,6 +475,12 @@ static int do_read_inode(struct inode *inode)
+ 		fi->i_inline_xattr_size = 0;
+ 	}
+ 
++	if (!sanity_check_inode(inode, node_page)) {
++		f2fs_put_page(node_page, 1);
++		f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE);
++		return -EFSCORRUPTED;
++	}
++
+ 	/* check data exist */
+ 	if (f2fs_has_inline_data(inode) && !f2fs_exist_data(inode))
+ 		__recover_inline_status(inode, node_page);
+@@ -544,12 +550,6 @@ static int do_read_inode(struct inode *inode)
+ 	f2fs_init_read_extent_tree(inode, node_page);
+ 	f2fs_init_age_extent_tree(inode);
+ 
+-	if (!sanity_check_inode(inode, node_page)) {
+-		f2fs_put_page(node_page, 1);
+-		f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE);
+-		return -EFSCORRUPTED;
+-	}
+-
+ 	if (!sanity_check_extent_cache(inode)) {
+ 		f2fs_put_page(node_page, 1);
+ 		f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE);
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index 4e7d4ceeb084c..e91f4619aa5bb 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -924,6 +924,7 @@ skip:
+ 			struct cp_control cpc = {
+ 				.reason = CP_RECOVERY,
+ 			};
++			stat_inc_cp_call_count(sbi, TOTAL_CALL);
+ 			err = f2fs_write_checkpoint(sbi, &cpc);
+ 		}
+ 	}
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 0457d620011f6..be08be6f4bfd6 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -433,6 +433,7 @@ void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need)
+ 			.err_gc_skipped = false,
+ 			.nr_free_secs = 1 };
+ 		f2fs_down_write(&sbi->gc_lock);
++		stat_inc_gc_call_count(sbi, FOREGROUND);
+ 		f2fs_gc(sbi, &gc_control);
+ 	}
+ }
+@@ -510,8 +511,8 @@ do_sync:
+ 
+ 		mutex_unlock(&sbi->flush_lock);
+ 	}
++	stat_inc_cp_call_count(sbi, BACKGROUND);
+ 	f2fs_sync_fs(sbi->sb, 1);
+-	stat_inc_bg_cp_count(sbi->stat_info);
+ }
+ 
+ static int __submit_flush_wait(struct f2fs_sb_info *sbi,
+@@ -1258,8 +1259,16 @@ static int __submit_discard_cmd(struct f2fs_sb_info *sbi,
+ 
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 	if (f2fs_sb_has_blkzoned(sbi) && bdev_is_zoned(bdev)) {
+-		__submit_zone_reset_cmd(sbi, dc, flag, wait_list, issued);
+-		return 0;
++		int devi = f2fs_bdev_index(sbi, bdev);
++
++		if (devi < 0)
++			return -EINVAL;
++
++		if (f2fs_blkz_is_seq(sbi, devi, dc->di.start)) {
++			__submit_zone_reset_cmd(sbi, dc, flag,
++						wait_list, issued);
++			return 0;
++		}
+ 	}
+ #endif
+ 
+@@ -1785,15 +1794,24 @@ static void f2fs_wait_discard_bio(struct f2fs_sb_info *sbi, block_t blkaddr)
+ 	dc = __lookup_discard_cmd(sbi, blkaddr);
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 	if (dc && f2fs_sb_has_blkzoned(sbi) && bdev_is_zoned(dc->bdev)) {
+-		/* force submit zone reset */
+-		if (dc->state == D_PREP)
+-			__submit_zone_reset_cmd(sbi, dc, REQ_SYNC,
+-						&dcc->wait_list, NULL);
+-		dc->ref++;
+-		mutex_unlock(&dcc->cmd_lock);
+-		/* wait zone reset */
+-		__wait_one_discard_bio(sbi, dc);
+-		return;
++		int devi = f2fs_bdev_index(sbi, dc->bdev);
++
++		if (devi < 0) {
++			mutex_unlock(&dcc->cmd_lock);
++			return;
++		}
++
++		if (f2fs_blkz_is_seq(sbi, devi, dc->di.start)) {
++			/* force submit zone reset */
++			if (dc->state == D_PREP)
++				__submit_zone_reset_cmd(sbi, dc, REQ_SYNC,
++							&dcc->wait_list, NULL);
++			dc->ref++;
++			mutex_unlock(&dcc->cmd_lock);
++			/* wait zone reset */
++			__wait_one_discard_bio(sbi, dc);
++			return;
++		}
+ 	}
+ #endif
+ 	if (dc) {
+@@ -2193,7 +2211,7 @@ find_next:
+ 			len = next_pos - cur_pos;
+ 
+ 			if (f2fs_sb_has_blkzoned(sbi) ||
+-					!force || len < cpc->trim_minlen)
++			    (force && len < cpc->trim_minlen))
+ 				goto skip;
+ 
+ 			f2fs_issue_discard(sbi, entry->start_blkaddr + cur_pos,
+@@ -3228,6 +3246,7 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
+ 		goto out;
+ 
+ 	f2fs_down_write(&sbi->gc_lock);
++	stat_inc_cp_call_count(sbi, TOTAL_CALL);
+ 	err = f2fs_write_checkpoint(sbi, &cpc);
+ 	f2fs_up_write(&sbi->gc_lock);
+ 	if (err)
+@@ -4846,17 +4865,17 @@ static int check_zone_write_pointer(struct f2fs_sb_info *sbi,
+ {
+ 	unsigned int wp_segno, wp_blkoff, zone_secno, zone_segno, segno;
+ 	block_t zone_block, wp_block, last_valid_block;
++	unsigned int log_sectors_per_block = sbi->log_blocksize - SECTOR_SHIFT;
+ 	int i, s, b, ret;
+ 	struct seg_entry *se;
+ 
+ 	if (zone->type != BLK_ZONE_TYPE_SEQWRITE_REQ)
+ 		return 0;
+ 
+-	wp_block = fdev->start_blk + (zone->wp >> sbi->log_sectors_per_block);
++	wp_block = fdev->start_blk + (zone->wp >> log_sectors_per_block);
+ 	wp_segno = GET_SEGNO(sbi, wp_block);
+ 	wp_blkoff = wp_block - START_BLOCK(sbi, wp_segno);
+-	zone_block = fdev->start_blk + (zone->start >>
+-						sbi->log_sectors_per_block);
++	zone_block = fdev->start_blk + (zone->start >> log_sectors_per_block);
+ 	zone_segno = GET_SEGNO(sbi, zone_block);
+ 	zone_secno = GET_SEC_FROM_SEG(sbi, zone_segno);
+ 
+@@ -4906,7 +4925,7 @@ static int check_zone_write_pointer(struct f2fs_sb_info *sbi,
+ 			    "pointer. Reset the write pointer: wp[0x%x,0x%x]",
+ 			    wp_segno, wp_blkoff);
+ 		ret = __f2fs_issue_discard_zone(sbi, fdev->bdev, zone_block,
+-				zone->len >> sbi->log_sectors_per_block);
++					zone->len >> log_sectors_per_block);
+ 		if (ret)
+ 			f2fs_err(sbi, "Discard zone failed: %s (errno=%d)",
+ 				 fdev->path, ret);
+@@ -4967,6 +4986,7 @@ static int fix_curseg_write_pointer(struct f2fs_sb_info *sbi, int type)
+ 	struct blk_zone zone;
+ 	unsigned int cs_section, wp_segno, wp_blkoff, wp_sector_off;
+ 	block_t cs_zone_block, wp_block;
++	unsigned int log_sectors_per_block = sbi->log_blocksize - SECTOR_SHIFT;
+ 	sector_t zone_sector;
+ 	int err;
+ 
+@@ -4978,8 +4998,8 @@ static int fix_curseg_write_pointer(struct f2fs_sb_info *sbi, int type)
+ 		return 0;
+ 
+ 	/* report zone for the sector the curseg points to */
+-	zone_sector = (sector_t)(cs_zone_block - zbd->start_blk) <<
+-						sbi->log_sectors_per_block;
++	zone_sector = (sector_t)(cs_zone_block - zbd->start_blk)
++		<< log_sectors_per_block;
+ 	err = blkdev_report_zones(zbd->bdev, zone_sector, 1,
+ 				  report_one_zone_cb, &zone);
+ 	if (err != 1) {
+@@ -4991,10 +5011,10 @@ static int fix_curseg_write_pointer(struct f2fs_sb_info *sbi, int type)
+ 	if (zone.type != BLK_ZONE_TYPE_SEQWRITE_REQ)
+ 		return 0;
+ 
+-	wp_block = zbd->start_blk + (zone.wp >> sbi->log_sectors_per_block);
++	wp_block = zbd->start_blk + (zone.wp >> log_sectors_per_block);
+ 	wp_segno = GET_SEGNO(sbi, wp_block);
+ 	wp_blkoff = wp_block - START_BLOCK(sbi, wp_segno);
+-	wp_sector_off = zone.wp & GENMASK(sbi->log_sectors_per_block - 1, 0);
++	wp_sector_off = zone.wp & GENMASK(log_sectors_per_block - 1, 0);
+ 
+ 	if (cs->segno == wp_segno && cs->next_blkoff == wp_blkoff &&
+ 		wp_sector_off == 0)
+@@ -5021,8 +5041,8 @@ static int fix_curseg_write_pointer(struct f2fs_sb_info *sbi, int type)
+ 	if (!zbd)
+ 		return 0;
+ 
+-	zone_sector = (sector_t)(cs_zone_block - zbd->start_blk) <<
+-						sbi->log_sectors_per_block;
++	zone_sector = (sector_t)(cs_zone_block - zbd->start_blk)
++		<< log_sectors_per_block;
+ 	err = blkdev_report_zones(zbd->bdev, zone_sector, 1,
+ 				  report_one_zone_cb, &zone);
+ 	if (err != 1) {
+@@ -5040,7 +5060,7 @@ static int fix_curseg_write_pointer(struct f2fs_sb_info *sbi, int type)
+ 			    "Reset the zone: curseg[0x%x,0x%x]",
+ 			    type, cs->segno, cs->next_blkoff);
+ 		err = __f2fs_issue_discard_zone(sbi, zbd->bdev,	cs_zone_block,
+-					zone.len >> sbi->log_sectors_per_block);
++					zone.len >> log_sectors_per_block);
+ 		if (err) {
+ 			f2fs_err(sbi, "Discard zone failed: %s (errno=%d)",
+ 				 zbd->path, err);
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index ca31163da00a5..8d9d2ee7f3c7f 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -591,7 +591,7 @@ static int f2fs_set_lz4hc_level(struct f2fs_sb_info *sbi, const char *str)
+ 	unsigned int level;
+ 
+ 	if (strlen(str) == 3) {
+-		F2FS_OPTION(sbi).compress_level = LZ4HC_DEFAULT_CLEVEL;
++		F2FS_OPTION(sbi).compress_level = 0;
+ 		return 0;
+ 	}
+ 
+@@ -862,11 +862,6 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
+ 			if (!name)
+ 				return -ENOMEM;
+ 			if (!strcmp(name, "adaptive")) {
+-				if (f2fs_sb_has_blkzoned(sbi)) {
+-					f2fs_warn(sbi, "adaptive mode is not allowed with zoned block device feature");
+-					kfree(name);
+-					return -EINVAL;
+-				}
+ 				F2FS_OPTION(sbi).fs_mode = FS_MODE_ADAPTIVE;
+ 			} else if (!strcmp(name, "lfs")) {
+ 				F2FS_OPTION(sbi).fs_mode = FS_MODE_LFS;
+@@ -1331,6 +1326,11 @@ default_check:
+ 			F2FS_OPTION(sbi).discard_unit =
+ 					DISCARD_UNIT_SECTION;
+ 		}
++
++		if (F2FS_OPTION(sbi).fs_mode != FS_MODE_LFS) {
++			f2fs_info(sbi, "Only lfs mode is allowed with zoned block device feature");
++			return -EINVAL;
++		}
+ #else
+ 		f2fs_err(sbi, "Zoned block device support is not enabled");
+ 		return -EINVAL;
+@@ -1561,7 +1561,8 @@ static void destroy_device_list(struct f2fs_sb_info *sbi)
+ 	int i;
+ 
+ 	for (i = 0; i < sbi->s_ndevs; i++) {
+-		blkdev_put(FDEV(i).bdev, sbi->sb->s_type);
++		if (i > 0)
++			blkdev_put(FDEV(i).bdev, sbi->sb->s_type);
+ #ifdef CONFIG_BLK_DEV_ZONED
+ 		kvfree(FDEV(i).blkz_seq);
+ #endif
+@@ -1600,6 +1601,7 @@ static void f2fs_put_super(struct super_block *sb)
+ 		struct cp_control cpc = {
+ 			.reason = CP_UMOUNT,
+ 		};
++		stat_inc_cp_call_count(sbi, TOTAL_CALL);
+ 		err = f2fs_write_checkpoint(sbi, &cpc);
+ 	}
+ 
+@@ -1609,6 +1611,7 @@ static void f2fs_put_super(struct super_block *sb)
+ 		struct cp_control cpc = {
+ 			.reason = CP_UMOUNT | CP_TRIMMED,
+ 		};
++		stat_inc_cp_call_count(sbi, TOTAL_CALL);
+ 		err = f2fs_write_checkpoint(sbi, &cpc);
+ 	}
+ 
+@@ -1705,8 +1708,10 @@ int f2fs_sync_fs(struct super_block *sb, int sync)
+ 	if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
+ 		return -EAGAIN;
+ 
+-	if (sync)
++	if (sync) {
++		stat_inc_cp_call_count(sbi, TOTAL_CALL);
+ 		err = f2fs_issue_checkpoint(sbi);
++	}
+ 
+ 	return err;
+ }
+@@ -2205,6 +2210,7 @@ static int f2fs_disable_checkpoint(struct f2fs_sb_info *sbi)
+ 			.nr_free_secs = 1 };
+ 
+ 		f2fs_down_write(&sbi->gc_lock);
++		stat_inc_gc_call_count(sbi, FOREGROUND);
+ 		err = f2fs_gc(sbi, &gc_control);
+ 		if (err == -ENODATA) {
+ 			err = 0;
+@@ -2230,6 +2236,7 @@ skip_gc:
+ 	f2fs_down_write(&sbi->gc_lock);
+ 	cpc.reason = CP_PAUSE;
+ 	set_sbi_flag(sbi, SBI_CP_DISABLED);
++	stat_inc_cp_call_count(sbi, TOTAL_CALL);
+ 	err = f2fs_write_checkpoint(sbi, &cpc);
+ 	if (err)
+ 		goto out_unlock;
+@@ -4190,16 +4197,12 @@ static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
+ 	sbi->aligned_blksize = true;
+ 
+ 	for (i = 0; i < max_devices; i++) {
+-
+-		if (i > 0 && !RDEV(i).path[0])
++		if (i == 0)
++			FDEV(0).bdev = sbi->sb->s_bdev;
++		else if (!RDEV(i).path[0])
+ 			break;
+ 
+-		if (max_devices == 1) {
+-			/* Single zoned block device mount */
+-			FDEV(0).bdev =
+-				blkdev_get_by_dev(sbi->sb->s_bdev->bd_dev, mode,
+-						  sbi->sb->s_type, NULL);
+-		} else {
++		if (max_devices > 1) {
+ 			/* Multi-device mount */
+ 			memcpy(FDEV(i).path, RDEV(i).path, MAX_PATH_LEN);
+ 			FDEV(i).total_segments =
+@@ -4215,10 +4218,9 @@ static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
+ 				FDEV(i).end_blk = FDEV(i).start_blk +
+ 					(FDEV(i).total_segments <<
+ 					sbi->log_blocks_per_seg) - 1;
++				FDEV(i).bdev = blkdev_get_by_path(FDEV(i).path,
++					mode, sbi->sb->s_type, NULL);
+ 			}
+-			FDEV(i).bdev = blkdev_get_by_path(FDEV(i).path, mode,
+-							  sbi->sb->s_type,
+-							  NULL);
+ 		}
+ 		if (IS_ERR(FDEV(i).bdev))
+ 			return PTR_ERR(FDEV(i).bdev);
+@@ -4871,6 +4873,7 @@ static void kill_f2fs_super(struct super_block *sb)
+ 			struct cp_control cpc = {
+ 				.reason = CP_UMOUNT,
+ 			};
++			stat_inc_cp_call_count(sbi, TOTAL_CALL);
+ 			f2fs_write_checkpoint(sbi, &cpc);
+ 		}
+ 
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index 48b7e0073884a..417fae96890f6 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -356,6 +356,16 @@ static ssize_t f2fs_sbi_show(struct f2fs_attr *a,
+ 	if (!strcmp(a->attr.name, "revoked_atomic_block"))
+ 		return sysfs_emit(buf, "%llu\n", sbi->revoked_atomic_block);
+ 
++#ifdef CONFIG_F2FS_STAT_FS
++	if (!strcmp(a->attr.name, "cp_foreground_calls"))
++		return sysfs_emit(buf, "%d\n",
++				atomic_read(&sbi->cp_call_count[TOTAL_CALL]) -
++				atomic_read(&sbi->cp_call_count[BACKGROUND]));
++	if (!strcmp(a->attr.name, "cp_background_calls"))
++		return sysfs_emit(buf, "%d\n",
++				atomic_read(&sbi->cp_call_count[BACKGROUND]));
++#endif
++
+ 	ui = (unsigned int *)(ptr + a->offset);
+ 
+ 	return sysfs_emit(buf, "%u\n", *ui);
+@@ -972,10 +982,10 @@ F2FS_SBI_GENERAL_RO_ATTR(unusable_blocks_per_sec);
+ 
+ /* STAT_INFO ATTR */
+ #ifdef CONFIG_F2FS_STAT_FS
+-STAT_INFO_RO_ATTR(cp_foreground_calls, cp_count);
+-STAT_INFO_RO_ATTR(cp_background_calls, bg_cp_count);
+-STAT_INFO_RO_ATTR(gc_foreground_calls, call_count);
+-STAT_INFO_RO_ATTR(gc_background_calls, bg_gc);
++STAT_INFO_RO_ATTR(cp_foreground_calls, cp_call_count[FOREGROUND]);
++STAT_INFO_RO_ATTR(cp_background_calls, cp_call_count[BACKGROUND]);
++STAT_INFO_RO_ATTR(gc_foreground_calls, gc_call_count[FOREGROUND]);
++STAT_INFO_RO_ATTR(gc_background_calls, gc_call_count[BACKGROUND]);
+ #endif
+ 
+ /* FAULT_INFO ATTR */
+diff --git a/fs/fs_context.c b/fs/fs_context.c
+index 851214d1d013d..375023e40161d 100644
+--- a/fs/fs_context.c
++++ b/fs/fs_context.c
+@@ -315,10 +315,31 @@ struct fs_context *fs_context_for_reconfigure(struct dentry *dentry,
+ }
+ EXPORT_SYMBOL(fs_context_for_reconfigure);
+ 
++/**
++ * fs_context_for_submount: allocate a new fs_context for a submount
++ * @type: file_system_type of the new context
++ * @reference: reference dentry from which to copy relevant info
++ *
++ * Allocate a new fs_context suitable for a submount. This also ensures that
++ * the fc->security object is inherited from @reference (if needed).
++ */
+ struct fs_context *fs_context_for_submount(struct file_system_type *type,
+ 					   struct dentry *reference)
+ {
+-	return alloc_fs_context(type, reference, 0, 0, FS_CONTEXT_FOR_SUBMOUNT);
++	struct fs_context *fc;
++	int ret;
++
++	fc = alloc_fs_context(type, reference, 0, 0, FS_CONTEXT_FOR_SUBMOUNT);
++	if (IS_ERR(fc))
++		return fc;
++
++	ret = security_fs_context_submount(fc, reference->d_sb);
++	if (ret) {
++		put_fs_context(fc);
++		return ERR_PTR(ret);
++	}
++
++	return fc;
+ }
+ EXPORT_SYMBOL(fs_context_for_submount);
+ 
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index bc4115288eec7..1c7599ed90625 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -19,7 +19,6 @@
+ #include <linux/uio.h>
+ #include <linux/fs.h>
+ #include <linux/filelock.h>
+-#include <linux/file.h>
+ 
+ static int fuse_send_open(struct fuse_mount *fm, u64 nodeid,
+ 			  unsigned int open_flags, int opcode,
+@@ -479,36 +478,48 @@ static void fuse_sync_writes(struct inode *inode)
+ 	fuse_release_nowrite(inode);
+ }
+ 
+-struct fuse_flush_args {
+-	struct fuse_args args;
+-	struct fuse_flush_in inarg;
+-	struct work_struct work;
+-	struct file *file;
+-};
+-
+-static int fuse_do_flush(struct fuse_flush_args *fa)
++static int fuse_flush(struct file *file, fl_owner_t id)
+ {
+-	int err;
+-	struct inode *inode = file_inode(fa->file);
++	struct inode *inode = file_inode(file);
+ 	struct fuse_mount *fm = get_fuse_mount(inode);
++	struct fuse_file *ff = file->private_data;
++	struct fuse_flush_in inarg;
++	FUSE_ARGS(args);
++	int err;
++
++	if (fuse_is_bad(inode))
++		return -EIO;
++
++	if (ff->open_flags & FOPEN_NOFLUSH && !fm->fc->writeback_cache)
++		return 0;
+ 
+ 	err = write_inode_now(inode, 1);
+ 	if (err)
+-		goto out;
++		return err;
+ 
+ 	inode_lock(inode);
+ 	fuse_sync_writes(inode);
+ 	inode_unlock(inode);
+ 
+-	err = filemap_check_errors(fa->file->f_mapping);
++	err = filemap_check_errors(file->f_mapping);
+ 	if (err)
+-		goto out;
++		return err;
+ 
+ 	err = 0;
+ 	if (fm->fc->no_flush)
+ 		goto inval_attr_out;
+ 
+-	err = fuse_simple_request(fm, &fa->args);
++	memset(&inarg, 0, sizeof(inarg));
++	inarg.fh = ff->fh;
++	inarg.lock_owner = fuse_lock_owner_id(fm->fc, id);
++	args.opcode = FUSE_FLUSH;
++	args.nodeid = get_node_id(inode);
++	args.in_numargs = 1;
++	args.in_args[0].size = sizeof(inarg);
++	args.in_args[0].value = &inarg;
++	args.force = true;
++
++	err = fuse_simple_request(fm, &args);
+ 	if (err == -ENOSYS) {
+ 		fm->fc->no_flush = 1;
+ 		err = 0;
+@@ -521,57 +532,9 @@ inval_attr_out:
+ 	 */
+ 	if (!err && fm->fc->writeback_cache)
+ 		fuse_invalidate_attr_mask(inode, STATX_BLOCKS);
+-
+-out:
+-	fput(fa->file);
+-	kfree(fa);
+ 	return err;
+ }
+ 
+-static void fuse_flush_async(struct work_struct *work)
+-{
+-	struct fuse_flush_args *fa = container_of(work, typeof(*fa), work);
+-
+-	fuse_do_flush(fa);
+-}
+-
+-static int fuse_flush(struct file *file, fl_owner_t id)
+-{
+-	struct fuse_flush_args *fa;
+-	struct inode *inode = file_inode(file);
+-	struct fuse_mount *fm = get_fuse_mount(inode);
+-	struct fuse_file *ff = file->private_data;
+-
+-	if (fuse_is_bad(inode))
+-		return -EIO;
+-
+-	if (ff->open_flags & FOPEN_NOFLUSH && !fm->fc->writeback_cache)
+-		return 0;
+-
+-	fa = kzalloc(sizeof(*fa), GFP_KERNEL);
+-	if (!fa)
+-		return -ENOMEM;
+-
+-	fa->inarg.fh = ff->fh;
+-	fa->inarg.lock_owner = fuse_lock_owner_id(fm->fc, id);
+-	fa->args.opcode = FUSE_FLUSH;
+-	fa->args.nodeid = get_node_id(inode);
+-	fa->args.in_numargs = 1;
+-	fa->args.in_args[0].size = sizeof(fa->inarg);
+-	fa->args.in_args[0].value = &fa->inarg;
+-	fa->args.force = true;
+-	fa->file = get_file(file);
+-
+-	/* Don't wait if the task is exiting */
+-	if (current->flags & PF_EXITING) {
+-		INIT_WORK(&fa->work, fuse_flush_async);
+-		schedule_work(&fa->work);
+-		return 0;
+-	}
+-
+-	return fuse_do_flush(fa);
+-}
+-
+ int fuse_fsync_common(struct file *file, loff_t start, loff_t end,
+ 		      int datasync, int opcode)
+ {
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index aa8967cca1a31..7d2f70708f37d 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -508,11 +508,6 @@ void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len)
+ 		WARN_ON_ONCE(folio_test_writeback(folio));
+ 		folio_cancel_dirty(folio);
+ 		iomap_page_release(folio);
+-	} else if (folio_test_large(folio)) {
+-		/* Must release the iop so the page can be split */
+-		WARN_ON_ONCE(!folio_test_uptodate(folio) &&
+-			     folio_test_dirty(folio));
+-		iomap_page_release(folio);
+ 	}
+ }
+ EXPORT_SYMBOL_GPL(iomap_invalidate_folio);
+diff --git a/fs/jfs/jfs_extent.c b/fs/jfs/jfs_extent.c
+index ae99a7e232eeb..a82751e6c47f9 100644
+--- a/fs/jfs/jfs_extent.c
++++ b/fs/jfs/jfs_extent.c
+@@ -311,6 +311,11 @@ extBalloc(struct inode *ip, s64 hint, s64 * nblocks, s64 * blkno)
+ 	 * blocks in the map. in that case, we'll start off with the
+ 	 * maximum free.
+ 	 */
++
++	/* give up if no space left */
++	if (bmp->db_maxfreebud == -1)
++		return -ENOSPC;
++
+ 	max = (s64) 1 << bmp->db_maxfreebud;
+ 	if (*nblocks >= max && *nblocks > nbperpage)
+ 		nb = nblks = (max > nbperpage) ? max : nbperpage;
+diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c
+index 1d9488cf05348..87a0f207df0b9 100644
+--- a/fs/lockd/mon.c
++++ b/fs/lockd/mon.c
+@@ -276,6 +276,9 @@ static struct nsm_handle *nsm_create_handle(const struct sockaddr *sap,
+ {
+ 	struct nsm_handle *new;
+ 
++	if (!hostname)
++		return NULL;
++
+ 	new = kzalloc(sizeof(*new) + hostname_len + 1, GFP_KERNEL);
+ 	if (unlikely(new == NULL))
+ 		return NULL;
+diff --git a/fs/namei.c b/fs/namei.c
+index e56ff39a79bc8..2bae29ea52ffa 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -2890,7 +2890,7 @@ int path_pts(struct path *path)
+ 	dput(path->dentry);
+ 	path->dentry = parent;
+ 	child = d_hash_and_lookup(parent, &this);
+-	if (!child)
++	if (IS_ERR_OR_NULL(child))
+ 		return -ENOENT;
+ 
+ 	path->dentry = child;
+diff --git a/fs/nfs/blocklayout/dev.c b/fs/nfs/blocklayout/dev.c
+index 70f5563a8e81c..65cbb5607a5fc 100644
+--- a/fs/nfs/blocklayout/dev.c
++++ b/fs/nfs/blocklayout/dev.c
+@@ -404,7 +404,7 @@ bl_parse_concat(struct nfs_server *server, struct pnfs_block_dev *d,
+ 	int ret, i;
+ 
+ 	d->children = kcalloc(v->concat.volumes_count,
+-			sizeof(struct pnfs_block_dev), GFP_KERNEL);
++			sizeof(struct pnfs_block_dev), gfp_mask);
+ 	if (!d->children)
+ 		return -ENOMEM;
+ 
+@@ -433,7 +433,7 @@ bl_parse_stripe(struct nfs_server *server, struct pnfs_block_dev *d,
+ 	int ret, i;
+ 
+ 	d->children = kcalloc(v->stripe.volumes_count,
+-			sizeof(struct pnfs_block_dev), GFP_KERNEL);
++			sizeof(struct pnfs_block_dev), gfp_mask);
+ 	if (!d->children)
+ 		return -ENOMEM;
+ 
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 913c09806c7f5..41abea340ad84 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -493,6 +493,7 @@ extern const struct nfs_pgio_completion_ops nfs_async_read_completion_ops;
+ extern void nfs_pageio_init_read(struct nfs_pageio_descriptor *pgio,
+ 			struct inode *inode, bool force_mds,
+ 			const struct nfs_pgio_completion_ops *compl_ops);
++extern bool nfs_read_alloc_scratch(struct nfs_pgio_header *hdr, size_t size);
+ extern int nfs_read_add_folio(struct nfs_pageio_descriptor *pgio,
+ 			       struct nfs_open_context *ctx,
+ 			       struct folio *folio);
+diff --git a/fs/nfs/nfs2xdr.c b/fs/nfs/nfs2xdr.c
+index 05c3b4b2b3dd8..c190938142960 100644
+--- a/fs/nfs/nfs2xdr.c
++++ b/fs/nfs/nfs2xdr.c
+@@ -949,7 +949,7 @@ int nfs2_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 
+ 	error = decode_filename_inline(xdr, &entry->name, &entry->len);
+ 	if (unlikely(error))
+-		return -EAGAIN;
++		return error == -ENAMETOOLONG ? -ENAMETOOLONG : -EAGAIN;
+ 
+ 	/*
+ 	 * The type (size and byte order) of nfscookie isn't defined in
+diff --git a/fs/nfs/nfs3xdr.c b/fs/nfs/nfs3xdr.c
+index 3b0b650c9c5ab..60f032be805ae 100644
+--- a/fs/nfs/nfs3xdr.c
++++ b/fs/nfs/nfs3xdr.c
+@@ -1991,7 +1991,7 @@ int nfs3_decode_dirent(struct xdr_stream *xdr, struct nfs_entry *entry,
+ 
+ 	error = decode_inline_filename3(xdr, &entry->name, &entry->len);
+ 	if (unlikely(error))
+-		return -EAGAIN;
++		return error == -ENAMETOOLONG ? -ENAMETOOLONG : -EAGAIN;
+ 
+ 	error = decode_cookie3(xdr, &new_cookie);
+ 	if (unlikely(error))
+diff --git a/fs/nfs/nfs42.h b/fs/nfs/nfs42.h
+index 0fe5aacbcfdf1..b59876b01a1e3 100644
+--- a/fs/nfs/nfs42.h
++++ b/fs/nfs/nfs42.h
+@@ -13,6 +13,7 @@
+  * more? Need to consider not to pre-alloc too much for a compound.
+  */
+ #define PNFS_LAYOUTSTATS_MAXDEV (4)
++#define READ_PLUS_SCRATCH_SIZE (16)
+ 
+ /* nfs4.2proc.c */
+ #ifdef CONFIG_NFS_V4_2
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 49f78e23b34c0..063e00aff87ed 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -471,8 +471,9 @@ ssize_t nfs42_proc_copy(struct file *src, loff_t pos_src,
+ 				continue;
+ 			}
+ 			break;
+-		} else if (err == -NFS4ERR_OFFLOAD_NO_REQS && !args.sync) {
+-			args.sync = true;
++		} else if (err == -NFS4ERR_OFFLOAD_NO_REQS &&
++				args.sync != res.synchronous) {
++			args.sync = res.synchronous;
+ 			dst_exception.retry = 1;
+ 			continue;
+ 		} else if ((err == -ESTALE ||
+diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
+index 95234208dc9ee..9e3ae53e22058 100644
+--- a/fs/nfs/nfs42xdr.c
++++ b/fs/nfs/nfs42xdr.c
+@@ -54,10 +54,16 @@
+ 					(1 /* data_content4 */ + \
+ 					 2 /* data_info4.di_offset */ + \
+ 					 1 /* data_info4.di_length */)
++#define NFS42_READ_PLUS_HOLE_SEGMENT_SIZE \
++					(1 /* data_content4 */ + \
++					 2 /* data_info4.di_offset */ + \
++					 2 /* data_info4.di_length */)
++#define READ_PLUS_SEGMENT_SIZE_DIFF	(NFS42_READ_PLUS_HOLE_SEGMENT_SIZE - \
++					 NFS42_READ_PLUS_DATA_SEGMENT_SIZE)
+ #define decode_read_plus_maxsz		(op_decode_hdr_maxsz + \
+ 					 1 /* rpr_eof */ + \
+ 					 1 /* rpr_contents count */ + \
+-					 NFS42_READ_PLUS_DATA_SEGMENT_SIZE)
++					 NFS42_READ_PLUS_HOLE_SEGMENT_SIZE)
+ #define encode_seek_maxsz		(op_encode_hdr_maxsz + \
+ 					 encode_stateid_maxsz + \
+ 					 2 /* offset */ + \
+@@ -617,8 +623,8 @@ static void nfs4_xdr_enc_read_plus(struct rpc_rqst *req,
+ 	encode_putfh(xdr, args->fh, &hdr);
+ 	encode_read_plus(xdr, args, &hdr);
+ 
+-	rpc_prepare_reply_pages(req, args->pages, args->pgbase,
+-				args->count, hdr.replen);
++	rpc_prepare_reply_pages(req, args->pages, args->pgbase, args->count,
++				hdr.replen - READ_PLUS_SEGMENT_SIZE_DIFF);
+ 	encode_nops(&hdr);
+ }
+ 
+@@ -1056,13 +1062,12 @@ static int decode_read_plus(struct xdr_stream *xdr, struct nfs_pgio_res *res)
+ 	res->eof = be32_to_cpup(p++);
+ 	segments = be32_to_cpup(p++);
+ 	if (segments == 0)
+-		return status;
++		return 0;
+ 
+ 	segs = kmalloc_array(segments, sizeof(*segs), GFP_KERNEL);
+ 	if (!segs)
+ 		return -ENOMEM;
+ 
+-	status = -EIO;
+ 	for (i = 0; i < segments; i++) {
+ 		status = decode_read_plus_segment(xdr, &segs[i]);
+ 		if (status < 0)
+@@ -1428,7 +1433,7 @@ static int nfs4_xdr_dec_read_plus(struct rpc_rqst *rqstp,
+ 	struct compound_hdr hdr;
+ 	int status;
+ 
+-	xdr_set_scratch_buffer(xdr, res->scratch, sizeof(res->scratch));
++	xdr_set_scratch_buffer(xdr, res->scratch, READ_PLUS_SCRATCH_SIZE);
+ 
+ 	status = decode_compound_hdr(xdr, &hdr);
+ 	if (status)
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 832fa226b8f26..3c24c3c99e8ac 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -5438,18 +5438,8 @@ static bool nfs4_read_plus_not_supported(struct rpc_task *task,
+ 	return false;
+ }
+ 
+-static inline void nfs4_read_plus_scratch_free(struct nfs_pgio_header *hdr)
+-{
+-	if (hdr->res.scratch) {
+-		kfree(hdr->res.scratch);
+-		hdr->res.scratch = NULL;
+-	}
+-}
+-
+ static int nfs4_read_done(struct rpc_task *task, struct nfs_pgio_header *hdr)
+ {
+-	nfs4_read_plus_scratch_free(hdr);
+-
+ 	if (!nfs4_sequence_done(task, &hdr->res.seq_res))
+ 		return -EAGAIN;
+ 	if (nfs4_read_stateid_changed(task, &hdr->args))
+@@ -5469,8 +5459,7 @@ static bool nfs42_read_plus_support(struct nfs_pgio_header *hdr,
+ 	/* Note: We don't use READ_PLUS with pNFS yet */
+ 	if (nfs_server_capable(hdr->inode, NFS_CAP_READ_PLUS) && !hdr->ds_clp) {
+ 		msg->rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_READ_PLUS];
+-		hdr->res.scratch = kmalloc(32, GFP_KERNEL);
+-		return hdr->res.scratch != NULL;
++		return nfs_read_alloc_scratch(hdr, READ_PLUS_SCRATCH_SIZE);
+ 	}
+ 	return false;
+ }
+diff --git a/fs/nfs/pnfs_nfs.c b/fs/nfs/pnfs_nfs.c
+index a0112ad4937aa..2e14ce2f82191 100644
+--- a/fs/nfs/pnfs_nfs.c
++++ b/fs/nfs/pnfs_nfs.c
+@@ -943,7 +943,7 @@ static int _nfs4_pnfs_v4_ds_connect(struct nfs_server *mds_srv,
+ 			* Test this address for session trunking and
+ 			* add as an alias
+ 			*/
+-			xprtdata.cred = nfs4_get_clid_cred(clp),
++			xprtdata.cred = nfs4_get_clid_cred(clp);
+ 			rpc_clnt_add_xprt(clp->cl_rpcclient, &xprt_args,
+ 					  rpc_clnt_setup_test_and_add_xprt,
+ 					  &rpcdata);
+diff --git a/fs/nfs/read.c b/fs/nfs/read.c
+index f71eeee67e201..7dc21a48e3e7b 100644
+--- a/fs/nfs/read.c
++++ b/fs/nfs/read.c
+@@ -47,6 +47,8 @@ static struct nfs_pgio_header *nfs_readhdr_alloc(void)
+ 
+ static void nfs_readhdr_free(struct nfs_pgio_header *rhdr)
+ {
++	if (rhdr->res.scratch != NULL)
++		kfree(rhdr->res.scratch);
+ 	kmem_cache_free(nfs_rdata_cachep, rhdr);
+ }
+ 
+@@ -108,6 +110,14 @@ void nfs_pageio_reset_read_mds(struct nfs_pageio_descriptor *pgio)
+ }
+ EXPORT_SYMBOL_GPL(nfs_pageio_reset_read_mds);
+ 
++bool nfs_read_alloc_scratch(struct nfs_pgio_header *hdr, size_t size)
++{
++	WARN_ON(hdr->res.scratch != NULL);
++	hdr->res.scratch = kmalloc(size, GFP_KERNEL);
++	return hdr->res.scratch != NULL;
++}
++EXPORT_SYMBOL_GPL(nfs_read_alloc_scratch);
++
+ static void nfs_readpage_release(struct nfs_page *req, int error)
+ {
+ 	struct folio *folio = nfs_page_to_folio(req);
+diff --git a/fs/nfsd/blocklayoutxdr.c b/fs/nfsd/blocklayoutxdr.c
+index 8e9c1a0f8d380..1ed2f691ebb90 100644
+--- a/fs/nfsd/blocklayoutxdr.c
++++ b/fs/nfsd/blocklayoutxdr.c
+@@ -83,6 +83,15 @@ nfsd4_block_encode_getdeviceinfo(struct xdr_stream *xdr,
+ 	int len = sizeof(__be32), ret, i;
+ 	__be32 *p;
+ 
++	/*
++	 * See paragraph 5 of RFC 8881 S18.40.3.
++	 */
++	if (!gdp->gd_maxcount) {
++		if (xdr_stream_encode_u32(xdr, 0) != XDR_UNIT)
++			return nfserr_resource;
++		return nfs_ok;
++	}
++
+ 	p = xdr_reserve_space(xdr, len + sizeof(__be32));
+ 	if (!p)
+ 		return nfserr_resource;
+diff --git a/fs/nfsd/flexfilelayoutxdr.c b/fs/nfsd/flexfilelayoutxdr.c
+index e81d2a5cf381e..bb205328e043d 100644
+--- a/fs/nfsd/flexfilelayoutxdr.c
++++ b/fs/nfsd/flexfilelayoutxdr.c
+@@ -85,6 +85,15 @@ nfsd4_ff_encode_getdeviceinfo(struct xdr_stream *xdr,
+ 	int addr_len;
+ 	__be32 *p;
+ 
++	/*
++	 * See paragraph 5 of RFC 8881 S18.40.3.
++	 */
++	if (!gdp->gd_maxcount) {
++		if (xdr_stream_encode_u32(xdr, 0) != XDR_UNIT)
++			return nfserr_resource;
++		return nfs_ok;
++	}
++
+ 	/* len + padding for two strings */
+ 	addr_len = 16 + da->netaddr.netid_len + da->netaddr.addr_len;
+ 	ver_len = 20;
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index b30dca7de8cc0..be72628b13376 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -4678,20 +4678,17 @@ nfsd4_encode_getdeviceinfo(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 
+ 	*p++ = cpu_to_be32(gdev->gd_layout_type);
+ 
+-	/* If maxcount is 0 then just update notifications */
+-	if (gdev->gd_maxcount != 0) {
+-		ops = nfsd4_layout_ops[gdev->gd_layout_type];
+-		nfserr = ops->encode_getdeviceinfo(xdr, gdev);
+-		if (nfserr) {
+-			/*
+-			 * We don't bother to burden the layout drivers with
+-			 * enforcing gd_maxcount, just tell the client to
+-			 * come back with a bigger buffer if it's not enough.
+-			 */
+-			if (xdr->buf->len + 4 > gdev->gd_maxcount)
+-				goto toosmall;
+-			return nfserr;
+-		}
++	ops = nfsd4_layout_ops[gdev->gd_layout_type];
++	nfserr = ops->encode_getdeviceinfo(xdr, gdev);
++	if (nfserr) {
++		/*
++		 * We don't bother to burden the layout drivers with
++		 * enforcing gd_maxcount, just tell the client to
++		 * come back with a bigger buffer if it's not enough.
++		 */
++		if (xdr->buf->len + 4 > gdev->gd_maxcount)
++			goto toosmall;
++		return nfserr;
+ 	}
+ 
+ 	if (gdev->gd_notify_types) {
+diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
+index 17c52225b87d4..03bccfd183f3c 100644
+--- a/fs/ocfs2/namei.c
++++ b/fs/ocfs2/namei.c
+@@ -1535,6 +1535,10 @@ static int ocfs2_rename(struct mnt_idmap *idmap,
+ 		status = ocfs2_add_entry(handle, new_dentry, old_inode,
+ 					 OCFS2_I(old_inode)->ip_blkno,
+ 					 new_dir_bh, &target_insert);
++		if (status < 0) {
++			mlog_errno(status);
++			goto bail;
++		}
+ 	}
+ 
+ 	old_inode->i_ctime = current_time(old_inode);
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index 9df3f48396628..ee4b824658a0a 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -3583,7 +3583,8 @@ static int proc_tid_comm_permission(struct mnt_idmap *idmap,
+ }
+ 
+ static const struct inode_operations proc_tid_comm_inode_operations = {
+-		.permission = proc_tid_comm_permission,
++		.setattr	= proc_setattr,
++		.permission	= proc_tid_comm_permission,
+ };
+ 
+ /*
+diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
+index 85aaf0fc6d7d1..eb6df190d7523 100644
+--- a/fs/pstore/ram_core.c
++++ b/fs/pstore/ram_core.c
+@@ -519,7 +519,7 @@ static int persistent_ram_post_init(struct persistent_ram_zone *prz, u32 sig,
+ 	sig ^= PERSISTENT_RAM_SIG;
+ 
+ 	if (prz->buffer->sig == sig) {
+-		if (buffer_size(prz) == 0) {
++		if (buffer_size(prz) == 0 && buffer_start(prz) == 0) {
+ 			pr_debug("found existing empty buffer\n");
+ 			return 0;
+ 		}
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index e3e4f40476579..c7afe433d991a 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -225,13 +225,22 @@ static void put_quota_format(struct quota_format_type *fmt)
+ 
+ /*
+  * Dquot List Management:
+- * The quota code uses four lists for dquot management: the inuse_list,
+- * free_dquots, dqi_dirty_list, and dquot_hash[] array. A single dquot
+- * structure may be on some of those lists, depending on its current state.
++ * The quota code uses five lists for dquot management: the inuse_list,
++ * releasing_dquots, free_dquots, dqi_dirty_list, and dquot_hash[] array.
++ * A single dquot structure may be on some of those lists, depending on
++ * its current state.
+  *
+  * All dquots are placed to the end of inuse_list when first created, and this
+  * list is used for invalidate operation, which must look at every dquot.
+  *
++ * When the last reference to a dquot is dropped, the dquot is added to
++ * releasing_dquots. We then queue a work item which calls
++ * synchronize_srcu() and afterwards performs the final cleanup of all
++ * the dquots on the list. Both releasing_dquots and free_dquots use the
++ * dq_free list_head in the dquot struct. When a dquot is removed from
++ * releasing_dquots, a reference count is always subtracted, and if
++ * dq_count == 0 at that point, the dquot is added to free_dquots.
++ *
+  * Unused dquots (dq_count == 0) are added to the free_dquots list when freed,
+  * and this list is searched whenever we need an available dquot.  Dquots are
+  * removed from the list as soon as they are used again, and
+@@ -250,6 +259,7 @@ static void put_quota_format(struct quota_format_type *fmt)
+ 
+ static LIST_HEAD(inuse_list);
+ static LIST_HEAD(free_dquots);
++static LIST_HEAD(releasing_dquots);
+ static unsigned int dq_hash_bits, dq_hash_mask;
+ static struct hlist_head *dquot_hash;
+ 
+@@ -260,6 +270,9 @@ static qsize_t inode_get_rsv_space(struct inode *inode);
+ static qsize_t __inode_get_rsv_space(struct inode *inode);
+ static int __dquot_initialize(struct inode *inode, int type);
+ 
++static void quota_release_workfn(struct work_struct *work);
++static DECLARE_DELAYED_WORK(quota_release_work, quota_release_workfn);
++
+ static inline unsigned int
+ hashfn(const struct super_block *sb, struct kqid qid)
+ {
+@@ -305,12 +318,18 @@ static inline void put_dquot_last(struct dquot *dquot)
+ 	dqstats_inc(DQST_FREE_DQUOTS);
+ }
+ 
++static inline void put_releasing_dquots(struct dquot *dquot)
++{
++	list_add_tail(&dquot->dq_free, &releasing_dquots);
++}
++
+ static inline void remove_free_dquot(struct dquot *dquot)
+ {
+ 	if (list_empty(&dquot->dq_free))
+ 		return;
+ 	list_del_init(&dquot->dq_free);
+-	dqstats_dec(DQST_FREE_DQUOTS);
++	if (!atomic_read(&dquot->dq_count))
++		dqstats_dec(DQST_FREE_DQUOTS);
+ }
+ 
+ static inline void put_inuse(struct dquot *dquot)
+@@ -336,6 +355,11 @@ static void wait_on_dquot(struct dquot *dquot)
+ 	mutex_unlock(&dquot->dq_lock);
+ }
+ 
++static inline int dquot_active(struct dquot *dquot)
++{
++	return test_bit(DQ_ACTIVE_B, &dquot->dq_flags);
++}
++
+ static inline int dquot_dirty(struct dquot *dquot)
+ {
+ 	return test_bit(DQ_MOD_B, &dquot->dq_flags);
+@@ -351,14 +375,14 @@ int dquot_mark_dquot_dirty(struct dquot *dquot)
+ {
+ 	int ret = 1;
+ 
+-	if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
++	if (!dquot_active(dquot))
+ 		return 0;
+ 
+ 	if (sb_dqopt(dquot->dq_sb)->flags & DQUOT_NOLIST_DIRTY)
+ 		return test_and_set_bit(DQ_MOD_B, &dquot->dq_flags);
+ 
+ 	/* If quota is dirty already, we don't have to acquire dq_list_lock */
+-	if (test_bit(DQ_MOD_B, &dquot->dq_flags))
++	if (dquot_dirty(dquot))
+ 		return 1;
+ 
+ 	spin_lock(&dq_list_lock);
+@@ -440,7 +464,7 @@ int dquot_acquire(struct dquot *dquot)
+ 	smp_mb__before_atomic();
+ 	set_bit(DQ_READ_B, &dquot->dq_flags);
+ 	/* Instantiate dquot if needed */
+-	if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags) && !dquot->dq_off) {
++	if (!dquot_active(dquot) && !dquot->dq_off) {
+ 		ret = dqopt->ops[dquot->dq_id.type]->commit_dqblk(dquot);
+ 		/* Write the info if needed */
+ 		if (info_dirty(&dqopt->info[dquot->dq_id.type])) {
+@@ -482,7 +506,7 @@ int dquot_commit(struct dquot *dquot)
+ 		goto out_lock;
+ 	/* Inactive dquot can be only if there was error during read/init
+ 	 * => we have better not writing it */
+-	if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
++	if (dquot_active(dquot))
+ 		ret = dqopt->ops[dquot->dq_id.type]->commit_dqblk(dquot);
+ 	else
+ 		ret = -EIO;
+@@ -547,6 +571,8 @@ static void invalidate_dquots(struct super_block *sb, int type)
+ 	struct dquot *dquot, *tmp;
+ 
+ restart:
++	flush_delayed_work(&quota_release_work);
++
+ 	spin_lock(&dq_list_lock);
+ 	list_for_each_entry_safe(dquot, tmp, &inuse_list, dq_inuse) {
+ 		if (dquot->dq_sb != sb)
+@@ -555,6 +581,12 @@ restart:
+ 			continue;
+ 		/* Wait for dquot users */
+ 		if (atomic_read(&dquot->dq_count)) {
++			/* dquot in releasing_dquots, flush and retry */
++			if (!list_empty(&dquot->dq_free)) {
++				spin_unlock(&dq_list_lock);
++				goto restart;
++			}
++
+ 			atomic_inc(&dquot->dq_count);
+ 			spin_unlock(&dq_list_lock);
+ 			/*
+@@ -597,7 +629,7 @@ int dquot_scan_active(struct super_block *sb,
+ 
+ 	spin_lock(&dq_list_lock);
+ 	list_for_each_entry(dquot, &inuse_list, dq_inuse) {
+-		if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags))
++		if (!dquot_active(dquot))
+ 			continue;
+ 		if (dquot->dq_sb != sb)
+ 			continue;
+@@ -612,7 +644,7 @@ int dquot_scan_active(struct super_block *sb,
+ 		 * outstanding call and recheck the DQ_ACTIVE_B after that.
+ 		 */
+ 		wait_on_dquot(dquot);
+-		if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) {
++		if (dquot_active(dquot)) {
+ 			ret = fn(dquot, priv);
+ 			if (ret < 0)
+ 				goto out;
+@@ -628,6 +660,18 @@ out:
+ }
+ EXPORT_SYMBOL(dquot_scan_active);
+ 
++static inline int dquot_write_dquot(struct dquot *dquot)
++{
++	int ret = dquot->dq_sb->dq_op->write_dquot(dquot);
++	if (ret < 0) {
++		quota_error(dquot->dq_sb, "Can't write quota structure "
++			    "(error %d). Quota may get out of sync!", ret);
++		/* Clear dirty bit anyway to avoid infinite loop. */
++		clear_dquot_dirty(dquot);
++	}
++	return ret;
++}
++
+ /* Write all dquot structures to quota files */
+ int dquot_writeback_dquots(struct super_block *sb, int type)
+ {
+@@ -651,23 +695,16 @@ int dquot_writeback_dquots(struct super_block *sb, int type)
+ 			dquot = list_first_entry(&dirty, struct dquot,
+ 						 dq_dirty);
+ 
+-			WARN_ON(!test_bit(DQ_ACTIVE_B, &dquot->dq_flags));
++			WARN_ON(!dquot_active(dquot));
+ 
+ 			/* Now we have active dquot from which someone is
+  			 * holding reference so we can safely just increase
+ 			 * use count */
+ 			dqgrab(dquot);
+ 			spin_unlock(&dq_list_lock);
+-			err = sb->dq_op->write_dquot(dquot);
+-			if (err) {
+-				/*
+-				 * Clear dirty bit anyway to avoid infinite
+-				 * loop here.
+-				 */
+-				clear_dquot_dirty(dquot);
+-				if (!ret)
+-					ret = err;
+-			}
++			err = dquot_write_dquot(dquot);
++			if (err && !ret)
++				ret = err;
+ 			dqput(dquot);
+ 			spin_lock(&dq_list_lock);
+ 		}
+@@ -760,13 +797,54 @@ static struct shrinker dqcache_shrinker = {
+ 	.seeks = DEFAULT_SEEKS,
+ };
+ 
++/*
++ * Safely release dquot and put reference to dquot.
++ */
++static void quota_release_workfn(struct work_struct *work)
++{
++	struct dquot *dquot;
++	struct list_head rls_head;
++
++	spin_lock(&dq_list_lock);
++	/* Exchange the list head to avoid livelock. */
++	list_replace_init(&releasing_dquots, &rls_head);
++	spin_unlock(&dq_list_lock);
++
++restart:
++	synchronize_srcu(&dquot_srcu);
++	spin_lock(&dq_list_lock);
++	while (!list_empty(&rls_head)) {
++		dquot = list_first_entry(&rls_head, struct dquot, dq_free);
++		/* Dquot got used again? */
++		if (atomic_read(&dquot->dq_count) > 1) {
++			remove_free_dquot(dquot);
++			atomic_dec(&dquot->dq_count);
++			continue;
++		}
++		if (dquot_dirty(dquot)) {
++			spin_unlock(&dq_list_lock);
++			/* Commit dquot before releasing */
++			dquot_write_dquot(dquot);
++			goto restart;
++		}
++		if (dquot_active(dquot)) {
++			spin_unlock(&dq_list_lock);
++			dquot->dq_sb->dq_op->release_dquot(dquot);
++			goto restart;
++		}
++		/* Dquot is inactive and clean, now move it to free list */
++		remove_free_dquot(dquot);
++		atomic_dec(&dquot->dq_count);
++		put_dquot_last(dquot);
++	}
++	spin_unlock(&dq_list_lock);
++}
++
+ /*
+  * Put reference to dquot
+  */
+ void dqput(struct dquot *dquot)
+ {
+-	int ret;
+-
+ 	if (!dquot)
+ 		return;
+ #ifdef CONFIG_QUOTA_DEBUG
+@@ -778,7 +856,7 @@ void dqput(struct dquot *dquot)
+ 	}
+ #endif
+ 	dqstats_inc(DQST_DROPS);
+-we_slept:
++
+ 	spin_lock(&dq_list_lock);
+ 	if (atomic_read(&dquot->dq_count) > 1) {
+ 		/* We have more than one user... nothing to do */
+@@ -790,35 +868,15 @@ we_slept:
+ 		spin_unlock(&dq_list_lock);
+ 		return;
+ 	}
++
+ 	/* Need to release dquot? */
+-	if (dquot_dirty(dquot)) {
+-		spin_unlock(&dq_list_lock);
+-		/* Commit dquot before releasing */
+-		ret = dquot->dq_sb->dq_op->write_dquot(dquot);
+-		if (ret < 0) {
+-			quota_error(dquot->dq_sb, "Can't write quota structure"
+-				    " (error %d). Quota may get out of sync!",
+-				    ret);
+-			/*
+-			 * We clear dirty bit anyway, so that we avoid
+-			 * infinite loop here
+-			 */
+-			clear_dquot_dirty(dquot);
+-		}
+-		goto we_slept;
+-	}
+-	if (test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) {
+-		spin_unlock(&dq_list_lock);
+-		dquot->dq_sb->dq_op->release_dquot(dquot);
+-		goto we_slept;
+-	}
+-	atomic_dec(&dquot->dq_count);
+ #ifdef CONFIG_QUOTA_DEBUG
+ 	/* sanity check */
+ 	BUG_ON(!list_empty(&dquot->dq_free));
+ #endif
+-	put_dquot_last(dquot);
++	put_releasing_dquots(dquot);
+ 	spin_unlock(&dq_list_lock);
++	queue_delayed_work(system_unbound_wq, &quota_release_work, 1);
+ }
+ EXPORT_SYMBOL(dqput);
+ 
+@@ -908,7 +966,7 @@ we_slept:
+ 	 * already finished or it will be canceled due to dq_count > 1 test */
+ 	wait_on_dquot(dquot);
+ 	/* Read the dquot / allocate space in quota file */
+-	if (!test_bit(DQ_ACTIVE_B, &dquot->dq_flags)) {
++	if (!dquot_active(dquot)) {
+ 		int err;
+ 
+ 		err = sb->dq_op->acquire_dquot(dquot);
+@@ -1425,7 +1483,7 @@ static int info_bdq_free(struct dquot *dquot, qsize_t space)
+ 	return QUOTA_NL_NOWARN;
+ }
+ 
+-static int dquot_active(const struct inode *inode)
++static int inode_quota_active(const struct inode *inode)
+ {
+ 	struct super_block *sb = inode->i_sb;
+ 
+@@ -1448,7 +1506,7 @@ static int __dquot_initialize(struct inode *inode, int type)
+ 	qsize_t rsv;
+ 	int ret = 0;
+ 
+-	if (!dquot_active(inode))
++	if (!inode_quota_active(inode))
+ 		return 0;
+ 
+ 	dquots = i_dquot(inode);
+@@ -1556,7 +1614,7 @@ bool dquot_initialize_needed(struct inode *inode)
+ 	struct dquot **dquots;
+ 	int i;
+ 
+-	if (!dquot_active(inode))
++	if (!inode_quota_active(inode))
+ 		return false;
+ 
+ 	dquots = i_dquot(inode);
+@@ -1667,7 +1725,7 @@ int __dquot_alloc_space(struct inode *inode, qsize_t number, int flags)
+ 	int reserve = flags & DQUOT_SPACE_RESERVE;
+ 	struct dquot **dquots;
+ 
+-	if (!dquot_active(inode)) {
++	if (!inode_quota_active(inode)) {
+ 		if (reserve) {
+ 			spin_lock(&inode->i_lock);
+ 			*inode_reserved_space(inode) += number;
+@@ -1737,7 +1795,7 @@ int dquot_alloc_inode(struct inode *inode)
+ 	struct dquot_warn warn[MAXQUOTAS];
+ 	struct dquot * const *dquots;
+ 
+-	if (!dquot_active(inode))
++	if (!inode_quota_active(inode))
+ 		return 0;
+ 	for (cnt = 0; cnt < MAXQUOTAS; cnt++)
+ 		warn[cnt].w_type = QUOTA_NL_NOWARN;
+@@ -1780,7 +1838,7 @@ int dquot_claim_space_nodirty(struct inode *inode, qsize_t number)
+ 	struct dquot **dquots;
+ 	int cnt, index;
+ 
+-	if (!dquot_active(inode)) {
++	if (!inode_quota_active(inode)) {
+ 		spin_lock(&inode->i_lock);
+ 		*inode_reserved_space(inode) -= number;
+ 		__inode_add_bytes(inode, number);
+@@ -1822,7 +1880,7 @@ void dquot_reclaim_space_nodirty(struct inode *inode, qsize_t number)
+ 	struct dquot **dquots;
+ 	int cnt, index;
+ 
+-	if (!dquot_active(inode)) {
++	if (!inode_quota_active(inode)) {
+ 		spin_lock(&inode->i_lock);
+ 		*inode_reserved_space(inode) += number;
+ 		__inode_sub_bytes(inode, number);
+@@ -1866,7 +1924,7 @@ void __dquot_free_space(struct inode *inode, qsize_t number, int flags)
+ 	struct dquot **dquots;
+ 	int reserve = flags & DQUOT_SPACE_RESERVE, index;
+ 
+-	if (!dquot_active(inode)) {
++	if (!inode_quota_active(inode)) {
+ 		if (reserve) {
+ 			spin_lock(&inode->i_lock);
+ 			*inode_reserved_space(inode) -= number;
+@@ -1921,7 +1979,7 @@ void dquot_free_inode(struct inode *inode)
+ 	struct dquot * const *dquots;
+ 	int index;
+ 
+-	if (!dquot_active(inode))
++	if (!inode_quota_active(inode))
+ 		return;
+ 
+ 	dquots = i_dquot(inode);
+@@ -2093,7 +2151,7 @@ int dquot_transfer(struct mnt_idmap *idmap, struct inode *inode,
+ 	struct super_block *sb = inode->i_sb;
+ 	int ret;
+ 
+-	if (!dquot_active(inode))
++	if (!inode_quota_active(inode))
+ 		return 0;
+ 
+ 	if (i_uid_needs_update(idmap, iattr, inode)) {
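As an aside on the fs/quota/dquot.c change above: dqput() now parks dquots on releasing_dquots, and a delayed work item detaches the whole list under dq_list_lock before doing the slow write-back and release. Below is a minimal userspace C sketch of that detach-then-process pattern, not the kernel code itself; the names (pending_head, defer_release, release_worker) are invented and a pthread mutex stands in for dq_list_lock.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
	int id;
	struct node *next;
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *pending_head;		/* analog of releasing_dquots */

static void defer_release(int id)		/* analog of dqput() parking a dquot */
{
	struct node *n = malloc(sizeof(*n));

	if (!n)
		return;
	n->id = id;
	pthread_mutex_lock(&list_lock);
	n->next = pending_head;
	pending_head = n;
	pthread_mutex_unlock(&list_lock);
}

static void release_worker(void)		/* analog of quota_release_workfn() */
{
	struct node *batch, *n;

	pthread_mutex_lock(&list_lock);
	batch = pending_head;			/* detach the whole list under the lock, */
	pending_head = NULL;			/* like list_replace_init() in the patch  */
	pthread_mutex_unlock(&list_lock);

	while (batch) {				/* slow cleanup runs without the lock */
		n = batch;
		batch = batch->next;
		printf("releasing %d\n", n->id);
		free(n);
	}
}

int main(void)
{
	for (int i = 0; i < 4; i++)
		defer_release(i);
	release_worker();
	return 0;
}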
+diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c
+index 479aa4a57602f..015bfe4e45241 100644
+--- a/fs/reiserfs/journal.c
++++ b/fs/reiserfs/journal.c
+@@ -2326,7 +2326,7 @@ static struct buffer_head *reiserfs_breada(struct block_device *dev,
+ 	int i, j;
+ 
+ 	bh = __getblk(dev, block, bufsize);
+-	if (buffer_uptodate(bh))
++	if (!bh || buffer_uptodate(bh))
+ 		return (bh);
+ 
+ 	if (block + BUFNR > max_block) {
+@@ -2336,6 +2336,8 @@ static struct buffer_head *reiserfs_breada(struct block_device *dev,
+ 	j = 1;
+ 	for (i = 1; i < blocks; i++) {
+ 		bh = __getblk(dev, block + i, bufsize);
++		if (!bh)
++			break;
+ 		if (buffer_uptodate(bh)) {
+ 			brelse(bh);
+ 			break;
+diff --git a/fs/splice.c b/fs/splice.c
+index 3e2a31e1ce6a8..2e4cab57fb2ff 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -1269,10 +1269,8 @@ long do_splice(struct file *in, loff_t *off_in, struct file *out,
+ 		if ((in->f_flags | out->f_flags) & O_NONBLOCK)
+ 			flags |= SPLICE_F_NONBLOCK;
+ 
+-		return splice_pipe_to_pipe(ipipe, opipe, len, flags);
+-	}
+-
+-	if (ipipe) {
++		ret = splice_pipe_to_pipe(ipipe, opipe, len, flags);
++	} else if (ipipe) {
+ 		if (off_in)
+ 			return -ESPIPE;
+ 		if (off_out) {
+@@ -1297,18 +1295,11 @@ long do_splice(struct file *in, loff_t *off_in, struct file *out,
+ 		ret = do_splice_from(ipipe, out, &offset, len, flags);
+ 		file_end_write(out);
+ 
+-		if (ret > 0)
+-			fsnotify_modify(out);
+-
+ 		if (!off_out)
+ 			out->f_pos = offset;
+ 		else
+ 			*off_out = offset;
+-
+-		return ret;
+-	}
+-
+-	if (opipe) {
++	} else if (opipe) {
+ 		if (off_out)
+ 			return -ESPIPE;
+ 		if (off_in) {
+@@ -1324,18 +1315,25 @@ long do_splice(struct file *in, loff_t *off_in, struct file *out,
+ 
+ 		ret = splice_file_to_pipe(in, opipe, &offset, len, flags);
+ 
+-		if (ret > 0)
+-			fsnotify_access(in);
+-
+ 		if (!off_in)
+ 			in->f_pos = offset;
+ 		else
+ 			*off_in = offset;
++	} else {
++		ret = -EINVAL;
++	}
+ 
+-		return ret;
++	if (ret > 0) {
++		/*
++		 * Generate modify out before access in:
++		 * do_splice_from() may've already sent modify out,
++		 * and this ensures the events get merged.
++		 */
++		fsnotify_modify(out);
++		fsnotify_access(in);
+ 	}
+ 
+-	return -EINVAL;
++	return ret;
+ }
+ 
+ static long __do_splice(struct file *in, loff_t __user *off_in,
+@@ -1464,6 +1462,9 @@ static long vmsplice_to_user(struct file *file, struct iov_iter *iter,
+ 		pipe_unlock(pipe);
+ 	}
+ 
++	if (ret > 0)
++		fsnotify_access(file);
++
+ 	return ret;
+ }
+ 
+@@ -1493,8 +1494,10 @@ static long vmsplice_to_pipe(struct file *file, struct iov_iter *iter,
+ 	if (!ret)
+ 		ret = iter_to_pipe(iter, pipe, buf_flag);
+ 	pipe_unlock(pipe);
+-	if (ret > 0)
++	if (ret > 0) {
+ 		wakeup_pipe_readers(pipe);
++		fsnotify_modify(file);
++	}
+ 	return ret;
+ }
+ 
+@@ -1928,6 +1931,11 @@ long do_tee(struct file *in, struct file *out, size_t len, unsigned int flags)
+ 		}
+ 	}
+ 
++	if (ret > 0) {
++		fsnotify_access(in);
++		fsnotify_modify(out);
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/fs/verity/signature.c b/fs/verity/signature.c
+index 72034bc71c9d9..e22fb25d7babe 100644
+--- a/fs/verity/signature.c
++++ b/fs/verity/signature.c
+@@ -62,6 +62,22 @@ int fsverity_verify_signature(const struct fsverity_info *vi,
+ 		return 0;
+ 	}
+ 
++	if (fsverity_keyring->keys.nr_leaves_on_tree == 0) {
++		/*
++		 * The ".fs-verity" keyring is empty, due to builtin signatures
++		 * being supported by the kernel but not actually being used.
++		 * In this case, verify_pkcs7_signature() would always return an
++		 * error, usually ENOKEY.  It could also be EBADMSG if the
++		 * PKCS#7 is malformed, but that isn't very important to
++		 * distinguish.  So, just skip to ENOKEY to avoid the attack
++		 * surface of the PKCS#7 parser, which would otherwise be
++		 * reachable by any task able to execute FS_IOC_ENABLE_VERITY.
++		 */
++		fsverity_err(inode,
++			     "fs-verity keyring is empty, rejecting signed file!");
++		return -ENOKEY;
++	}
++
+ 	d = kzalloc(sizeof(*d) + hash_alg->digest_size, GFP_KERNEL);
+ 	if (!d)
+ 		return -ENOMEM;
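The fs/verity change above rejects signed files outright when the ".fs-verity" keyring is empty, so the PKCS#7 parser never runs on input that could not be verified anyway. A hedged userspace sketch of that fail-closed check follows; the struct, the parser stub and the function names are invented for illustration.

#include <errno.h>
#include <stdio.h>

struct keyring {
	int nr_keys;
};

/* stand-in for the real (large, attack-exposed) PKCS#7 parser */
static int parse_and_verify_pkcs7(const void *sig, int len)
{
	(void)sig;
	(void)len;
	return 0;
}

static int verify_signature(const struct keyring *kr, const void *sig, int len)
{
	if (kr->nr_keys == 0)
		return -ENOKEY;	/* nothing to verify against: fail closed */
	return parse_and_verify_pkcs7(sig, len);
}

int main(void)
{
	struct keyring empty = { .nr_keys = 0 };

	/* prints the negated ENOKEY value */
	printf("%d\n", verify_signature(&empty, "sig", 3));
	return 0;
}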
+diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
+index 6156161b181f1..ca86f4c6ba439 100644
+--- a/include/crypto/algapi.h
++++ b/include/crypto/algapi.h
+@@ -12,6 +12,7 @@
+ #include <linux/cache.h>
+ #include <linux/crypto.h>
+ #include <linux/types.h>
++#include <linux/workqueue.h>
+ 
+ /*
+  * Maximum values for blocksize and alignmask, used to allocate
+@@ -82,6 +83,8 @@ struct crypto_instance {
+ 		struct crypto_spawn *spawns;
+ 	};
+ 
++	struct work_struct free_work;
++
+ 	void *__ctx[] CRYPTO_MINALIGN_ATTR;
+ };
+ 
+diff --git a/include/dt-bindings/clock/qcom,gcc-sc8280xp.h b/include/dt-bindings/clock/qcom,gcc-sc8280xp.h
+index 721105ea4fad8..8454915917849 100644
+--- a/include/dt-bindings/clock/qcom,gcc-sc8280xp.h
++++ b/include/dt-bindings/clock/qcom,gcc-sc8280xp.h
+@@ -494,5 +494,15 @@
+ #define USB30_SEC_GDSC					11
+ #define EMAC_0_GDSC					12
+ #define EMAC_1_GDSC					13
++#define USB4_1_GDSC					14
++#define USB4_GDSC					15
++#define HLOS1_VOTE_MMNOC_MMU_TBU_HF0_GDSC		16
++#define HLOS1_VOTE_MMNOC_MMU_TBU_HF1_GDSC		17
++#define HLOS1_VOTE_MMNOC_MMU_TBU_SF0_GDSC		18
++#define HLOS1_VOTE_MMNOC_MMU_TBU_SF1_GDSC		19
++#define HLOS1_VOTE_TURING_MMU_TBU0_GDSC			20
++#define HLOS1_VOTE_TURING_MMU_TBU1_GDSC			21
++#define HLOS1_VOTE_TURING_MMU_TBU2_GDSC			22
++#define HLOS1_VOTE_TURING_MMU_TBU3_GDSC			23
+ 
+ #endif
+diff --git a/include/linux/arm_sdei.h b/include/linux/arm_sdei.h
+index 14dc461b0e829..255701e1251b4 100644
+--- a/include/linux/arm_sdei.h
++++ b/include/linux/arm_sdei.h
+@@ -47,10 +47,12 @@ int sdei_unregister_ghes(struct ghes *ghes);
+ int sdei_mask_local_cpu(void);
+ int sdei_unmask_local_cpu(void);
+ void __init sdei_init(void);
++void sdei_handler_abort(void);
+ #else
+ static inline int sdei_mask_local_cpu(void) { return 0; }
+ static inline int sdei_unmask_local_cpu(void) { return 0; }
+ static inline void sdei_init(void) { }
++static inline void sdei_handler_abort(void) { }
+ #endif /* CONFIG_ARM_SDE_INTERFACE */
+ 
+ 
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 87d94be7825af..56f7f79137921 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -538,6 +538,7 @@ struct request_queue {
+ #define QUEUE_FLAG_ADD_RANDOM	10	/* Contributes to random pool */
+ #define QUEUE_FLAG_SYNCHRONOUS	11	/* always completes in submit context */
+ #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
++#define QUEUE_FLAG_HW_WC	18	/* Write back caching supported */
+ #define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
+ #define QUEUE_FLAG_STABLE_WRITES 15	/* don't modify blks until WB is done */
+ #define QUEUE_FLAG_POLL		16	/* IO polling enabled if set */
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 39e21e3815ad4..9e8f87800e21a 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -360,6 +360,7 @@ struct hid_item {
+ #define HID_QUIRK_NO_OUTPUT_REPORTS_ON_INTR_EP	BIT(18)
+ #define HID_QUIRK_HAVE_SPECIAL_DRIVER		BIT(19)
+ #define HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE	BIT(20)
++#define HID_QUIRK_NOINVERT			BIT(21)
+ #define HID_QUIRK_FULLSPEED_INTERVAL		BIT(28)
+ #define HID_QUIRK_NO_INIT_REPORTS		BIT(29)
+ #define HID_QUIRK_NO_IGNORE			BIT(30)
+diff --git a/include/linux/if_arp.h b/include/linux/if_arp.h
+index 1ed52441972f9..10a1e81434cb9 100644
+--- a/include/linux/if_arp.h
++++ b/include/linux/if_arp.h
+@@ -53,6 +53,10 @@ static inline bool dev_is_mac_header_xmit(const struct net_device *dev)
+ 	case ARPHRD_NONE:
+ 	case ARPHRD_RAWIP:
+ 	case ARPHRD_PIMREG:
++	/* PPP adds its l2 header automatically in ppp_start_xmit().
++	 * This makes it look like an l3 device to __bpf_redirect() and tcf_mirred_init().
++	 */
++	case ARPHRD_PPP:
+ 		return false;
+ 	default:
+ 		return true;
+diff --git a/include/linux/kernfs.h b/include/linux/kernfs.h
+index 73f5c120def88..2a36f3218b510 100644
+--- a/include/linux/kernfs.h
++++ b/include/linux/kernfs.h
+@@ -550,6 +550,10 @@ static inline int kernfs_setattr(struct kernfs_node *kn,
+ 				 const struct iattr *iattr)
+ { return -ENOSYS; }
+ 
++static inline __poll_t kernfs_generic_poll(struct kernfs_open_file *of,
++					   struct poll_table_struct *pt)
++{ return -ENOSYS; }
++
+ static inline void kernfs_notify(struct kernfs_node *kn) { }
+ 
+ static inline int kernfs_xattr_get(struct kernfs_node *kn, const char *name,
+diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
+index 7308a1a7599b4..af796986baee6 100644
+--- a/include/linux/lsm_hook_defs.h
++++ b/include/linux/lsm_hook_defs.h
+@@ -54,6 +54,7 @@ LSM_HOOK(int, 0, bprm_creds_from_file, struct linux_binprm *bprm, struct file *f
+ LSM_HOOK(int, 0, bprm_check_security, struct linux_binprm *bprm)
+ LSM_HOOK(void, LSM_RET_VOID, bprm_committing_creds, struct linux_binprm *bprm)
+ LSM_HOOK(void, LSM_RET_VOID, bprm_committed_creds, struct linux_binprm *bprm)
++LSM_HOOK(int, 0, fs_context_submount, struct fs_context *fc, struct super_block *reference)
+ LSM_HOOK(int, 0, fs_context_dup, struct fs_context *fc,
+ 	 struct fs_context *src_sc)
+ LSM_HOOK(int, -ENOPARAM, fs_context_parse_param, struct fs_context *fc,
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index 5818af8eca5a5..dbf26bc89dd46 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -284,6 +284,11 @@ struct mem_cgroup {
+ 	atomic_long_t		memory_events[MEMCG_NR_MEMORY_EVENTS];
+ 	atomic_long_t		memory_events_local[MEMCG_NR_MEMORY_EVENTS];
+ 
++	/*
++	 * Hint of reclaim pressure for socket memory management. Note
++	 * that this indicator should NOT be used in legacy cgroup mode
++	 * where socket memory is accounted/charged separately.
++	 */
+ 	unsigned long		socket_pressure;
+ 
+ 	/* Legacy tcp memory accounting */
+@@ -1727,8 +1732,8 @@ void mem_cgroup_sk_alloc(struct sock *sk);
+ void mem_cgroup_sk_free(struct sock *sk);
+ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
+ {
+-	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && memcg->tcpmem_pressure)
+-		return true;
++	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
++		return !!memcg->tcpmem_pressure;
+ 	do {
+ 		if (time_before(jiffies, READ_ONCE(memcg->socket_pressure)))
+ 			return true;
+diff --git a/include/linux/mfd/rz-mtu3.h b/include/linux/mfd/rz-mtu3.h
+index c5173bc062701..8421d49500bf4 100644
+--- a/include/linux/mfd/rz-mtu3.h
++++ b/include/linux/mfd/rz-mtu3.h
+@@ -151,7 +151,6 @@ struct rz_mtu3 {
+ 	void *priv_data;
+ };
+ 
+-#if IS_ENABLED(CONFIG_RZ_MTU3)
+ static inline bool rz_mtu3_request_channel(struct rz_mtu3_channel *ch)
+ {
+ 	mutex_lock(&ch->lock);
+@@ -188,70 +187,5 @@ void rz_mtu3_32bit_ch_write(struct rz_mtu3_channel *ch, u16 off, u32 val);
+ void rz_mtu3_shared_reg_write(struct rz_mtu3_channel *ch, u16 off, u16 val);
+ void rz_mtu3_shared_reg_update_bit(struct rz_mtu3_channel *ch, u16 off,
+ 				   u16 pos, u8 val);
+-#else
+-static inline bool rz_mtu3_request_channel(struct rz_mtu3_channel *ch)
+-{
+-	return false;
+-}
+-
+-static inline void rz_mtu3_release_channel(struct rz_mtu3_channel *ch)
+-{
+-}
+-
+-static inline bool rz_mtu3_is_enabled(struct rz_mtu3_channel *ch)
+-{
+-	return false;
+-}
+-
+-static inline void rz_mtu3_disable(struct rz_mtu3_channel *ch)
+-{
+-}
+-
+-static inline int rz_mtu3_enable(struct rz_mtu3_channel *ch)
+-{
+-	return 0;
+-}
+-
+-static inline u8 rz_mtu3_8bit_ch_read(struct rz_mtu3_channel *ch, u16 off)
+-{
+-	return 0;
+-}
+-
+-static inline u16 rz_mtu3_16bit_ch_read(struct rz_mtu3_channel *ch, u16 off)
+-{
+-	return 0;
+-}
+-
+-static inline u32 rz_mtu3_32bit_ch_read(struct rz_mtu3_channel *ch, u16 off)
+-{
+-	return 0;
+-}
+-
+-static inline u16 rz_mtu3_shared_reg_read(struct rz_mtu3_channel *ch, u16 off)
+-{
+-	return 0;
+-}
+-
+-static inline void rz_mtu3_8bit_ch_write(struct rz_mtu3_channel *ch, u16 off, u8 val)
+-{
+-}
+-
+-static inline void rz_mtu3_16bit_ch_write(struct rz_mtu3_channel *ch, u16 off, u16 val)
+-{
+-}
+-
+-static inline void rz_mtu3_32bit_ch_write(struct rz_mtu3_channel *ch, u16 off, u32 val)
+-{
+-}
+-
+-static inline void rz_mtu3_shared_reg_write(struct rz_mtu3_channel *ch, u16 off, u16 val)
+-{
+-}
+-
+-static inline void rz_mtu3_shared_reg_update_bit(struct rz_mtu3_channel *ch,
+-						 u16 off, u16 pos, u8 val)
+-{
+-}
+-#endif
+ 
+ #endif /* __MFD_RZ_MTU3_H__ */
+diff --git a/include/linux/nmi.h b/include/linux/nmi.h
+index e3e6a64b98e09..e92e378df000f 100644
+--- a/include/linux/nmi.h
++++ b/include/linux/nmi.h
+@@ -157,31 +157,31 @@ static inline void touch_nmi_watchdog(void)
+ #ifdef arch_trigger_cpumask_backtrace
+ static inline bool trigger_all_cpu_backtrace(void)
+ {
+-	arch_trigger_cpumask_backtrace(cpu_online_mask, false);
++	arch_trigger_cpumask_backtrace(cpu_online_mask, -1);
+ 	return true;
+ }
+ 
+-static inline bool trigger_allbutself_cpu_backtrace(void)
++static inline bool trigger_allbutcpu_cpu_backtrace(int exclude_cpu)
+ {
+-	arch_trigger_cpumask_backtrace(cpu_online_mask, true);
++	arch_trigger_cpumask_backtrace(cpu_online_mask, exclude_cpu);
+ 	return true;
+ }
+ 
+ static inline bool trigger_cpumask_backtrace(struct cpumask *mask)
+ {
+-	arch_trigger_cpumask_backtrace(mask, false);
++	arch_trigger_cpumask_backtrace(mask, -1);
+ 	return true;
+ }
+ 
+ static inline bool trigger_single_cpu_backtrace(int cpu)
+ {
+-	arch_trigger_cpumask_backtrace(cpumask_of(cpu), false);
++	arch_trigger_cpumask_backtrace(cpumask_of(cpu), -1);
+ 	return true;
+ }
+ 
+ /* generic implementation */
+ void nmi_trigger_cpumask_backtrace(const cpumask_t *mask,
+-				   bool exclude_self,
++				   int exclude_cpu,
+ 				   void (*raise)(cpumask_t *mask));
+ bool nmi_cpu_backtrace(struct pt_regs *regs);
+ 
+@@ -190,7 +190,7 @@ static inline bool trigger_all_cpu_backtrace(void)
+ {
+ 	return false;
+ }
+-static inline bool trigger_allbutself_cpu_backtrace(void)
++static inline bool trigger_allbutcpu_cpu_backtrace(int exclude_cpu)
+ {
+ 	return false;
+ }
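The nmi.h change above replaces the boolean "exclude self" argument with an explicit CPU number, where -1 means no CPU is excluded. A small userspace sketch of that calling convention, with a plain array standing in for a cpumask and invented names:

#include <stdio.h>

#define NR_CPUS 4

static void backtrace_cpus(const int *online, int exclude_cpu)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!online[cpu] || cpu == exclude_cpu)
			continue;
		printf("would send backtrace IPI to CPU %d\n", cpu);
	}
}

int main(void)
{
	int online[NR_CPUS] = { 1, 1, 1, 1 };

	backtrace_cpus(online, -1);	/* all online CPUs */
	backtrace_cpus(online, 2);	/* all online CPUs except CPU 2 */
	return 0;
}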
+diff --git a/include/linux/nvmem-consumer.h b/include/linux/nvmem-consumer.h
+index fa030d93b768e..27373024856dc 100644
+--- a/include/linux/nvmem-consumer.h
++++ b/include/linux/nvmem-consumer.h
+@@ -256,7 +256,7 @@ static inline struct nvmem_device *of_nvmem_device_get(struct device_node *np,
+ static inline struct device_node *
+ of_nvmem_layout_get_container(struct nvmem_device *nvmem)
+ {
+-	return ERR_PTR(-EOPNOTSUPP);
++	return NULL;
+ }
+ #endif /* CONFIG_NVMEM && CONFIG_OF */
+ 
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index c69a2cc1f4123..7ee498cd1f374 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -467,6 +467,7 @@ struct pci_dev {
+ 	pci_dev_flags_t dev_flags;
+ 	atomic_t	enable_cnt;	/* pci_enable_device has been called */
+ 
++	spinlock_t	pcie_cap_lock;		/* Protects RMW ops in capability accessors */
+ 	u32		saved_config_space[16]; /* Config space saved at suspend time */
+ 	struct hlist_head saved_cap_space;
+ 	int		rom_attr_enabled;	/* Display of ROM attribute enabled? */
+@@ -1217,11 +1218,40 @@ int pcie_capability_read_word(struct pci_dev *dev, int pos, u16 *val);
+ int pcie_capability_read_dword(struct pci_dev *dev, int pos, u32 *val);
+ int pcie_capability_write_word(struct pci_dev *dev, int pos, u16 val);
+ int pcie_capability_write_dword(struct pci_dev *dev, int pos, u32 val);
+-int pcie_capability_clear_and_set_word(struct pci_dev *dev, int pos,
+-				       u16 clear, u16 set);
++int pcie_capability_clear_and_set_word_unlocked(struct pci_dev *dev, int pos,
++						u16 clear, u16 set);
++int pcie_capability_clear_and_set_word_locked(struct pci_dev *dev, int pos,
++					      u16 clear, u16 set);
+ int pcie_capability_clear_and_set_dword(struct pci_dev *dev, int pos,
+ 					u32 clear, u32 set);
+ 
++/**
++ * pcie_capability_clear_and_set_word - RMW accessor for PCI Express Capability Registers
++ * @dev:	PCI device structure of the PCI Express device
++ * @pos:	PCI Express Capability Register
++ * @clear:	Clear bitmask
++ * @set:	Set bitmask
++ *
++ * Perform a Read-Modify-Write (RMW) operation using @clear and @set
++ * bitmasks on PCI Express Capability Register at @pos. Certain PCI Express
++ * Capability Registers are accessed concurrently in RMW fashion, hence
++ * require locking which is handled transparently to the caller.
++ */
++static inline int pcie_capability_clear_and_set_word(struct pci_dev *dev,
++						     int pos,
++						     u16 clear, u16 set)
++{
++	switch (pos) {
++	case PCI_EXP_LNKCTL:
++	case PCI_EXP_RTCTL:
++		return pcie_capability_clear_and_set_word_locked(dev, pos,
++								 clear, set);
++	default:
++		return pcie_capability_clear_and_set_word_unlocked(dev, pos,
++								   clear, set);
++	}
++}
++
+ static inline int pcie_capability_set_word(struct pci_dev *dev, int pos,
+ 					   u16 set)
+ {
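The pci.h hunk above turns pcie_capability_clear_and_set_word() into a dispatcher: read-modify-write of registers known to be touched concurrently (LNKCTL, RTCTL) goes through a locked variant, everything else stays unlocked. The sketch below shows the same dispatch shape in userspace C; the register numbers, array-backed "registers" and pthread mutex are all stand-ins.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define REG_LNKCTL 0x10		/* pretend register that needs locking */
#define REG_OTHER  0x20

static uint16_t regs[0x40];
static pthread_mutex_t rmw_lock = PTHREAD_MUTEX_INITIALIZER;

static void rmw_unlocked(int pos, uint16_t clear, uint16_t set)
{
	regs[pos] = (regs[pos] & ~clear) | set;
}

static void rmw_locked(int pos, uint16_t clear, uint16_t set)
{
	pthread_mutex_lock(&rmw_lock);
	rmw_unlocked(pos, clear, set);
	pthread_mutex_unlock(&rmw_lock);
}

/* dispatcher: pick the locked variant only for contended registers */
static void rmw(int pos, uint16_t clear, uint16_t set)
{
	switch (pos) {
	case REG_LNKCTL:
		rmw_locked(pos, clear, set);
		break;
	default:
		rmw_unlocked(pos, clear, set);
	}
}

int main(void)
{
	rmw(REG_LNKCTL, 0x0003, 0x0001);
	rmw(REG_OTHER, 0x0000, 0x8000);
	printf("LNKCTL=%#x OTHER=%#x\n", regs[REG_LNKCTL], regs[REG_OTHER]);
	return 0;
}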
+diff --git a/include/linux/pid_namespace.h b/include/linux/pid_namespace.h
+index c758809d5bcf3..f9f9931e02d6a 100644
+--- a/include/linux/pid_namespace.h
++++ b/include/linux/pid_namespace.h
+@@ -17,18 +17,10 @@
+ struct fs_pin;
+ 
+ #if defined(CONFIG_SYSCTL) && defined(CONFIG_MEMFD_CREATE)
+-/*
+- * sysctl for vm.memfd_noexec
+- * 0: memfd_create() without MFD_EXEC nor MFD_NOEXEC_SEAL
+- *	acts like MFD_EXEC was set.
+- * 1: memfd_create() without MFD_EXEC nor MFD_NOEXEC_SEAL
+- *	acts like MFD_NOEXEC_SEAL was set.
+- * 2: memfd_create() without MFD_NOEXEC_SEAL will be
+- *	rejected.
+- */
+-#define MEMFD_NOEXEC_SCOPE_EXEC			0
+-#define MEMFD_NOEXEC_SCOPE_NOEXEC_SEAL		1
+-#define MEMFD_NOEXEC_SCOPE_NOEXEC_ENFORCED	2
++/* modes for vm.memfd_noexec sysctl */
++#define MEMFD_NOEXEC_SCOPE_EXEC			0 /* MFD_EXEC implied if unset */
++#define MEMFD_NOEXEC_SCOPE_NOEXEC_SEAL		1 /* MFD_NOEXEC_SEAL implied if unset */
++#define MEMFD_NOEXEC_SCOPE_NOEXEC_ENFORCED	2 /* same as 1, except MFD_EXEC rejected */
+ #endif
+ 
+ struct pid_namespace {
+@@ -47,7 +39,6 @@ struct pid_namespace {
+ 	int reboot;	/* group exit code if this pidns was rebooted */
+ 	struct ns_common ns;
+ #if defined(CONFIG_SYSCTL) && defined(CONFIG_MEMFD_CREATE)
+-	/* sysctl for vm.memfd_noexec */
+ 	int memfd_noexec_scope;
+ #endif
+ } __randomize_layout;
+@@ -64,6 +55,23 @@ static inline struct pid_namespace *get_pid_ns(struct pid_namespace *ns)
+ 	return ns;
+ }
+ 
++#if defined(CONFIG_SYSCTL) && defined(CONFIG_MEMFD_CREATE)
++static inline int pidns_memfd_noexec_scope(struct pid_namespace *ns)
++{
++	int scope = MEMFD_NOEXEC_SCOPE_EXEC;
++
++	for (; ns; ns = ns->parent)
++		scope = max(scope, READ_ONCE(ns->memfd_noexec_scope));
++
++	return scope;
++}
++#else
++static inline int pidns_memfd_noexec_scope(struct pid_namespace *ns)
++{
++	return 0;
++}
++#endif
++
+ extern struct pid_namespace *copy_pid_ns(unsigned long flags,
+ 	struct user_namespace *user_ns, struct pid_namespace *ns);
+ extern void zap_pid_ns_processes(struct pid_namespace *pid_ns);
+@@ -78,6 +86,11 @@ static inline struct pid_namespace *get_pid_ns(struct pid_namespace *ns)
+ 	return ns;
+ }
+ 
++static inline int pidns_memfd_noexec_scope(struct pid_namespace *ns)
++{
++	return 0;
++}
++
+ static inline struct pid_namespace *copy_pid_ns(unsigned long flags,
+ 	struct user_namespace *user_ns, struct pid_namespace *ns)
+ {
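The new pidns_memfd_noexec_scope() above walks the pid namespace chain and keeps the strictest (largest) vm.memfd_noexec setting it finds, so a child namespace can never be more permissive than its ancestors. Here is an illustrative userspace rendition of that walk; the struct and constants are simplified stand-ins, not the kernel definitions.

#include <stdio.h>

struct ns {
	struct ns *parent;
	int memfd_noexec_scope;	/* 0 = exec, 1 = noexec_seal, 2 = enforced */
};

static int effective_scope(const struct ns *ns)
{
	int scope = 0;

	for (; ns; ns = ns->parent)
		if (ns->memfd_noexec_scope > scope)
			scope = ns->memfd_noexec_scope;

	return scope;
}

int main(void)
{
	struct ns root  = { .parent = NULL,  .memfd_noexec_scope = 2 };
	struct ns child = { .parent = &root, .memfd_noexec_scope = 0 };

	/* prints 2: the stricter ancestor setting wins */
	printf("%d\n", effective_scope(&child));
	return 0;
}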
+diff --git a/include/linux/security.h b/include/linux/security.h
+index 32828502f09ea..bac98ea18f78b 100644
+--- a/include/linux/security.h
++++ b/include/linux/security.h
+@@ -293,6 +293,7 @@ int security_bprm_creds_from_file(struct linux_binprm *bprm, struct file *file);
+ int security_bprm_check(struct linux_binprm *bprm);
+ void security_bprm_committing_creds(struct linux_binprm *bprm);
+ void security_bprm_committed_creds(struct linux_binprm *bprm);
++int security_fs_context_submount(struct fs_context *fc, struct super_block *reference);
+ int security_fs_context_dup(struct fs_context *fc, struct fs_context *src_fc);
+ int security_fs_context_parse_param(struct fs_context *fc, struct fs_parameter *param);
+ int security_sb_alloc(struct super_block *sb);
+@@ -629,6 +630,11 @@ static inline void security_bprm_committed_creds(struct linux_binprm *bprm)
+ {
+ }
+ 
++static inline int security_fs_context_submount(struct fs_context *fc,
++					   struct super_block *reference)
++{
++	return 0;
++}
+ static inline int security_fs_context_dup(struct fs_context *fc,
+ 					  struct fs_context *src_fc)
+ {
+diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
+index 1e8bbdb8da905..f99d798093ab3 100644
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -878,7 +878,8 @@ extern int  perf_uprobe_init(struct perf_event *event,
+ extern void perf_uprobe_destroy(struct perf_event *event);
+ extern int bpf_get_uprobe_info(const struct perf_event *event,
+ 			       u32 *fd_type, const char **filename,
+-			       u64 *probe_offset, bool perf_type_tracepoint);
++			       u64 *probe_offset, u64 *probe_addr,
++			       bool perf_type_tracepoint);
+ #endif
+ extern int  ftrace_profile_set_filter(struct perf_event *event, int event_id,
+ 				     char *filter_str);
+diff --git a/include/linux/usb/typec_altmode.h b/include/linux/usb/typec_altmode.h
+index 350d49012659b..28aeef8f9e7b5 100644
+--- a/include/linux/usb/typec_altmode.h
++++ b/include/linux/usb/typec_altmode.h
+@@ -67,7 +67,7 @@ struct typec_altmode_ops {
+ 
+ int typec_altmode_enter(struct typec_altmode *altmode, u32 *vdo);
+ int typec_altmode_exit(struct typec_altmode *altmode);
+-void typec_altmode_attention(struct typec_altmode *altmode, u32 vdo);
++int typec_altmode_attention(struct typec_altmode *altmode, u32 vdo);
+ int typec_altmode_vdm(struct typec_altmode *altmode,
+ 		      const u32 header, const u32 *vdo, int count);
+ int typec_altmode_notify(struct typec_altmode *altmode, unsigned long conf,
+diff --git a/include/media/cec.h b/include/media/cec.h
+index abee41ae02d0e..9c007f83569aa 100644
+--- a/include/media/cec.h
++++ b/include/media/cec.h
+@@ -113,22 +113,25 @@ struct cec_fh {
+ #define CEC_FREE_TIME_TO_USEC(ft)		((ft) * 2400)
+ 
+ struct cec_adap_ops {
+-	/* Low-level callbacks */
++	/* Low-level callbacks, called with adap->lock held */
+ 	int (*adap_enable)(struct cec_adapter *adap, bool enable);
+ 	int (*adap_monitor_all_enable)(struct cec_adapter *adap, bool enable);
+ 	int (*adap_monitor_pin_enable)(struct cec_adapter *adap, bool enable);
+ 	int (*adap_log_addr)(struct cec_adapter *adap, u8 logical_addr);
+-	void (*adap_configured)(struct cec_adapter *adap, bool configured);
++	void (*adap_unconfigured)(struct cec_adapter *adap);
+ 	int (*adap_transmit)(struct cec_adapter *adap, u8 attempts,
+ 			     u32 signal_free_time, struct cec_msg *msg);
++	void (*adap_nb_transmit_canceled)(struct cec_adapter *adap,
++					  const struct cec_msg *msg);
+ 	void (*adap_status)(struct cec_adapter *adap, struct seq_file *file);
+ 	void (*adap_free)(struct cec_adapter *adap);
+ 
+-	/* Error injection callbacks */
++	/* Error injection callbacks, called without adap->lock held */
+ 	int (*error_inj_show)(struct cec_adapter *adap, struct seq_file *sf);
+ 	bool (*error_inj_parse_line)(struct cec_adapter *adap, char *line);
+ 
+-	/* High-level CEC message callback */
++	/* High-level CEC message callback, called without adap->lock held */
++	void (*configured)(struct cec_adapter *adap);
+ 	int (*received)(struct cec_adapter *adap, struct cec_msg *msg);
+ };
+ 
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 872dcb91a540e..3ff822ebb3a47 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -309,6 +309,26 @@ enum {
+ 	 * to support it.
+ 	 */
+ 	HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT,
++
++	/* When this quirk is set, MSFT extension monitor tracking by
++	 * address filter is supported. Since the tracking quantity of
++	 * each pattern is limited, this feature supports tracking
++	 * multiple devices concurrently if the controller supports
++	 * multiple address filters.
++	 *
++	 * This quirk must be set before hci_register_dev is called.
++	 */
++	HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER,
++
++	/*
++	 * When this quirk is set, LE Coded PHY shall not be used. This is
++	 * required for some Intel controllers which erroneously claim to
++	 * support it but it causes problems with extended scanning.
++	 *
++	 * This quirk can be set before hci_register_dev is called or
++	 * during the hdev->setup vendor callback.
++	 */
++	HCI_QUIRK_BROKEN_LE_CODED,
+ };
+ 
+ /* HCI device flags */
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index e01d52cb668c0..c0a87558aea71 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -321,8 +321,8 @@ struct adv_monitor {
+ 
+ #define HCI_MAX_SHORT_NAME_LENGTH	10
+ 
+-#define HCI_CONN_HANDLE_UNSET		0xffff
+ #define HCI_CONN_HANDLE_MAX		0x0eff
++#define HCI_CONN_HANDLE_UNSET(_handle)	(_handle > HCI_CONN_HANDLE_MAX)
+ 
+ /* Min encryption key size to match with SMP */
+ #define HCI_MIN_ENC_KEY_SIZE		7
+@@ -739,6 +739,7 @@ struct hci_conn {
+ 	unsigned long	flags;
+ 
+ 	enum conn_reasons conn_reason;
++	__u8		abort_reason;
+ 
+ 	__u32		clock;
+ 	__u16		clock_accuracy;
+@@ -758,7 +759,6 @@ struct hci_conn {
+ 	struct delayed_work auto_accept_work;
+ 	struct delayed_work idle_work;
+ 	struct delayed_work le_conn_timeout;
+-	struct work_struct  le_scan_cleanup;
+ 
+ 	struct device	dev;
+ 	struct dentry	*debugfs;
+@@ -974,6 +974,10 @@ enum {
+ 	HCI_CONN_SCANNING,
+ 	HCI_CONN_AUTH_FAILURE,
+ 	HCI_CONN_PER_ADV,
++	HCI_CONN_BIG_CREATED,
++	HCI_CONN_CREATE_CIS,
++	HCI_CONN_BIG_SYNC,
++	HCI_CONN_BIG_SYNC_FAILED,
+ };
+ 
+ static inline bool hci_conn_ssp_enabled(struct hci_conn *conn)
+@@ -1115,6 +1119,32 @@ static inline struct hci_conn *hci_conn_hash_lookup_bis(struct hci_dev *hdev,
+ 	return NULL;
+ }
+ 
++static inline struct hci_conn *
++hci_conn_hash_lookup_per_adv_bis(struct hci_dev *hdev,
++				 bdaddr_t *ba,
++				 __u8 big, __u8 bis)
++{
++	struct hci_conn_hash *h = &hdev->conn_hash;
++	struct hci_conn  *c;
++
++	rcu_read_lock();
++
++	list_for_each_entry_rcu(c, &h->list, list) {
++		if (bacmp(&c->dst, ba) || c->type != ISO_LINK ||
++			!test_bit(HCI_CONN_PER_ADV, &c->flags))
++			continue;
++
++		if (c->iso_qos.bcast.big == big &&
++		    c->iso_qos.bcast.bis == bis) {
++			rcu_read_unlock();
++			return c;
++		}
++	}
++	rcu_read_unlock();
++
++	return NULL;
++}
++
+ static inline struct hci_conn *hci_conn_hash_lookup_handle(struct hci_dev *hdev,
+ 								__u16 handle)
+ {
+@@ -1259,6 +1289,29 @@ static inline struct hci_conn *hci_conn_hash_lookup_big(struct hci_dev *hdev,
+ 	return NULL;
+ }
+ 
++static inline struct hci_conn *hci_conn_hash_lookup_big_any_dst(struct hci_dev *hdev,
++							__u8 handle)
++{
++	struct hci_conn_hash *h = &hdev->conn_hash;
++	struct hci_conn  *c;
++
++	rcu_read_lock();
++
++	list_for_each_entry_rcu(c, &h->list, list) {
++		if (c->type != ISO_LINK)
++			continue;
++
++		if (handle == c->iso_qos.bcast.big) {
++			rcu_read_unlock();
++			return c;
++		}
++	}
++
++	rcu_read_unlock();
++
++	return NULL;
++}
++
+ static inline struct hci_conn *hci_conn_hash_lookup_state(struct hci_dev *hdev,
+ 							__u8 type, __u16 state)
+ {
+@@ -1324,7 +1377,8 @@ int hci_disconnect(struct hci_conn *conn, __u8 reason);
+ bool hci_setup_sync(struct hci_conn *conn, __u16 handle);
+ void hci_sco_setup(struct hci_conn *conn, __u8 status);
+ bool hci_iso_setup_path(struct hci_conn *conn);
+-int hci_le_create_cis(struct hci_conn *conn);
++int hci_le_create_cis_pending(struct hci_dev *hdev);
++int hci_conn_check_create_cis(struct hci_conn *conn);
+ 
+ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
+ 			      u8 role);
+@@ -1351,6 +1405,9 @@ struct hci_conn *hci_connect_sco(struct hci_dev *hdev, int type, bdaddr_t *dst,
+ 				 __u16 setting, struct bt_codec *codec);
+ struct hci_conn *hci_bind_cis(struct hci_dev *hdev, bdaddr_t *dst,
+ 			      __u8 dst_type, struct bt_iso_qos *qos);
++struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst,
++			      struct bt_iso_qos *qos,
++			      __u8 base_len, __u8 *base);
+ struct hci_conn *hci_connect_cis(struct hci_dev *hdev, bdaddr_t *dst,
+ 				 __u8 dst_type, struct bt_iso_qos *qos);
+ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
+@@ -1713,7 +1770,9 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ #define scan_2m(dev) (((dev)->le_tx_def_phys & HCI_LE_SET_PHY_2M) || \
+ 		      ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_2M))
+ 
+-#define le_coded_capable(dev) (((dev)->le_features[1] & HCI_LE_PHY_CODED))
++#define le_coded_capable(dev) (((dev)->le_features[1] & HCI_LE_PHY_CODED) && \
++			       !test_bit(HCI_QUIRK_BROKEN_LE_CODED, \
++					 &(dev)->quirks))
+ 
+ #define scan_coded(dev) (((dev)->le_tx_def_phys & HCI_LE_SET_PHY_CODED) || \
+ 			 ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_CODED))
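One detail of the hci_core.h hunk above: HCI_CONN_HANDLE_UNSET changes from a single sentinel constant to a range check, so any handle above the 0x0eff maximum counts as unset/invalid. A tiny standalone sketch of that check (the constant mirrors the header, the rest is illustrative):

#include <stdio.h>

#define CONN_HANDLE_MAX 0x0eff

static int handle_unset(unsigned int handle)
{
	return handle > CONN_HANDLE_MAX;
}

int main(void)
{
	printf("%d %d %d\n",
	       handle_unset(0x0001),	/* valid */
	       handle_unset(0x0eff),	/* still valid */
	       handle_unset(0xffff));	/* unset / invalid */
	return 0;
}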
+diff --git a/include/net/bluetooth/hci_sync.h b/include/net/bluetooth/hci_sync.h
+index 2495be4d8b828..b516a0f4a55b8 100644
+--- a/include/net/bluetooth/hci_sync.h
++++ b/include/net/bluetooth/hci_sync.h
+@@ -124,7 +124,7 @@ int hci_abort_conn_sync(struct hci_dev *hdev, struct hci_conn *conn, u8 reason);
+ 
+ int hci_le_create_conn_sync(struct hci_dev *hdev, struct hci_conn *conn);
+ 
+-int hci_le_create_cis_sync(struct hci_dev *hdev, struct hci_conn *conn);
++int hci_le_create_cis_sync(struct hci_dev *hdev);
+ 
+ int hci_le_remove_cig_sync(struct hci_dev *hdev, u8 handle);
+ 
+diff --git a/include/net/lwtunnel.h b/include/net/lwtunnel.h
+index 6f15e6fa154e6..53bd2d02a4f0d 100644
+--- a/include/net/lwtunnel.h
++++ b/include/net/lwtunnel.h
+@@ -16,9 +16,12 @@
+ #define LWTUNNEL_STATE_INPUT_REDIRECT	BIT(1)
+ #define LWTUNNEL_STATE_XMIT_REDIRECT	BIT(2)
+ 
++/* LWTUNNEL_XMIT_CONTINUE should be distinguishable from dst_output return
++ * values (NET_XMIT_xxx and NETDEV_TX_xxx in linux/netdevice.h) for safety.
++ */
+ enum {
+ 	LWTUNNEL_XMIT_DONE,
+-	LWTUNNEL_XMIT_CONTINUE,
++	LWTUNNEL_XMIT_CONTINUE = 0x100,
+ };
+ 
+ 
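The lwtunnel.h comment above explains the move of LWTUNNEL_XMIT_CONTINUE to 0x100: when two families of status codes share one int return value, keeping their ranges disjoint lets callers tell them apart. A self-contained sketch of that idea with made-up code values:

#include <stdio.h>

enum xmit_status { XMIT_DONE = 0x00, XMIT_DROP = 0x01, XMIT_CN = 0x02 };
enum lwt_status  { LWT_XMIT_DONE = 0x00, LWT_XMIT_CONTINUE = 0x100 };

static int do_xmit(int simulate_continue)
{
	if (simulate_continue)
		return LWT_XMIT_CONTINUE;	/* cannot collide with XMIT_* */
	return XMIT_DROP;
}

int main(void)
{
	int ret = do_xmit(1);

	if (ret == LWT_XMIT_CONTINUE)
		printf("continue on the normal output path\n");
	else
		printf("driver status %#x\n", ret);
	return 0;
}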
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index 2a55ae932c568..ad41581384d9f 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -1192,9 +1192,11 @@ struct ieee80211_tx_info {
+ 			u8 ampdu_ack_len;
+ 			u8 ampdu_len;
+ 			u8 antenna;
++			u8 pad;
+ 			u16 tx_time;
+ 			u8 flags;
+-			void *status_driver_data[18 / sizeof(void *)];
++			u8 pad2;
++			void *status_driver_data[16 / sizeof(void *)];
+ 		} status;
+ 		struct {
+ 			struct ieee80211_tx_rate driver_rates[
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 0ca972ebd3dd0..10fc5c5928f71 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -350,7 +350,6 @@ ssize_t tcp_splice_read(struct socket *sk, loff_t *ppos,
+ struct sk_buff *tcp_stream_alloc_skb(struct sock *sk, gfp_t gfp,
+ 				     bool force_schedule);
+ 
+-void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks);
+ static inline void tcp_dec_quickack_mode(struct sock *sk,
+ 					 const unsigned int pkts)
+ {
+diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
+index 70b7475dcf56b..a2b8d30c4c803 100644
+--- a/include/scsi/scsi_host.h
++++ b/include/scsi/scsi_host.h
+@@ -769,7 +769,7 @@ extern void scsi_remove_host(struct Scsi_Host *);
+ extern struct Scsi_Host *scsi_host_get(struct Scsi_Host *);
+ extern int scsi_host_busy(struct Scsi_Host *shost);
+ extern void scsi_host_put(struct Scsi_Host *t);
+-extern struct Scsi_Host *scsi_host_lookup(unsigned short);
++extern struct Scsi_Host *scsi_host_lookup(unsigned int hostnum);
+ extern const char *scsi_host_state_name(enum scsi_host_state);
+ extern void scsi_host_complete_all_commands(struct Scsi_Host *shost,
+ 					    enum scsi_host_status status);
+diff --git a/include/sound/ump.h b/include/sound/ump.h
+index 44d2c2fd021d2..91238dabe3075 100644
+--- a/include/sound/ump.h
++++ b/include/sound/ump.h
+@@ -45,6 +45,7 @@ struct snd_ump_endpoint {
+ 	spinlock_t legacy_locks[2];
+ 	struct snd_rawmidi *legacy_rmidi;
+ 	struct snd_rawmidi_substream *legacy_substreams[2][SNDRV_UMP_MAX_GROUPS];
++	unsigned char legacy_mapping[SNDRV_UMP_MAX_GROUPS];
+ 
+ 	/* for legacy output; need to open the actual substream unlike input */
+ 	int legacy_out_opens;
+diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
+index e0e1591383312..20e285fdbc463 100644
+--- a/include/uapi/linux/elf.h
++++ b/include/uapi/linux/elf.h
+@@ -443,6 +443,8 @@ typedef struct elf64_shdr {
+ #define NT_MIPS_DSP	0x800		/* MIPS DSP ASE registers */
+ #define NT_MIPS_FP_MODE	0x801		/* MIPS floating-point mode */
+ #define NT_MIPS_MSA	0x802		/* MIPS SIMD registers */
++#define NT_RISCV_CSR	0x900		/* RISC-V Control and Status Registers */
++#define NT_RISCV_VECTOR	0x901		/* RISC-V vector registers */
+ #define NT_LOONGARCH_CPUCFG	0xa00	/* LoongArch CPU config registers */
+ #define NT_LOONGARCH_CSR	0xa01	/* LoongArch control and status registers */
+ #define NT_LOONGARCH_LSX	0xa02	/* LoongArch Loongson SIMD Extension registers */
+diff --git a/include/uapi/linux/ioprio.h b/include/uapi/linux/ioprio.h
+index 99440b2e8c352..bee2bdb0eedbc 100644
+--- a/include/uapi/linux/ioprio.h
++++ b/include/uapi/linux/ioprio.h
+@@ -107,20 +107,21 @@ enum {
+ /*
+  * Return an I/O priority value based on a class, a level and a hint.
+  */
+-static __always_inline __u16 ioprio_value(int class, int level, int hint)
++static __always_inline __u16 ioprio_value(int prioclass, int priolevel,
++					  int priohint)
+ {
+-	if (IOPRIO_BAD_VALUE(class, IOPRIO_NR_CLASSES) ||
+-	    IOPRIO_BAD_VALUE(level, IOPRIO_NR_LEVELS) ||
+-	    IOPRIO_BAD_VALUE(hint, IOPRIO_NR_HINTS))
++	if (IOPRIO_BAD_VALUE(prioclass, IOPRIO_NR_CLASSES) ||
++	    IOPRIO_BAD_VALUE(priolevel, IOPRIO_NR_LEVELS) ||
++	    IOPRIO_BAD_VALUE(priohint, IOPRIO_NR_HINTS))
+ 		return IOPRIO_CLASS_INVALID << IOPRIO_CLASS_SHIFT;
+ 
+-	return (class << IOPRIO_CLASS_SHIFT) |
+-		(hint << IOPRIO_HINT_SHIFT) | level;
++	return (prioclass << IOPRIO_CLASS_SHIFT) |
++		(priohint << IOPRIO_HINT_SHIFT) | priolevel;
+ }
+ 
+-#define IOPRIO_PRIO_VALUE(class, level)			\
+-	ioprio_value(class, level, IOPRIO_HINT_NONE)
+-#define IOPRIO_PRIO_VALUE_HINT(class, level, hint)	\
+-	ioprio_value(class, level, hint)
++#define IOPRIO_PRIO_VALUE(prioclass, priolevel)			\
++	ioprio_value(prioclass, priolevel, IOPRIO_HINT_NONE)
++#define IOPRIO_PRIO_VALUE_HINT(prioclass, priolevel, priohint)	\
++	ioprio_value(prioclass, priolevel, priohint)
+ 
+ #endif /* _UAPI_LINUX_IOPRIO_H */
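The ioprio.h hunk above only renames the ioprio_value() parameters (class/level/hint become prioclass/priolevel/priohint), presumably so the UAPI header keeps compiling when included from C++, where class is a keyword; the packing itself is unchanged. For reference, a standalone sketch of that packing, with the shift widths copied from the header as an assumption:

#include <stdint.h>
#include <stdio.h>

#define CLASS_SHIFT 13		/* assumed IOPRIO_CLASS_SHIFT */
#define HINT_SHIFT   3		/* assumed IOPRIO_HINT_SHIFT */

static uint16_t pack_ioprio(int prioclass, int priolevel, int priohint)
{
	return (uint16_t)((prioclass << CLASS_SHIFT) |
			  (priohint << HINT_SHIFT) | priolevel);
}

int main(void)
{
	/* class 2 (best-effort), level 4, no hint */
	printf("%#06x\n", pack_ioprio(2, 4, 0));
	return 0;
}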
+diff --git a/include/uapi/linux/sync_file.h b/include/uapi/linux/sync_file.h
+index 7e42a5b7558bf..ff0a931833e25 100644
+--- a/include/uapi/linux/sync_file.h
++++ b/include/uapi/linux/sync_file.h
+@@ -56,7 +56,7 @@ struct sync_fence_info {
+  * @name:	name of fence
+  * @status:	status of fence. 1: signaled 0:active <0:error
+  * @flags:	sync_file_info flags
+- * @num_fences	number of fences in the sync_file
++ * @num_fences:	number of fences in the sync_file
+  * @pad:	padding for 64-bit alignment, should always be zero
+  * @sync_fence_info: pointer to array of struct &sync_fence_info with all
+  *		 fences in the sync_file
+diff --git a/include/ufs/ufs.h b/include/ufs/ufs.h
+index 198cb391f9db2..29760d5cb273c 100644
+--- a/include/ufs/ufs.h
++++ b/include/ufs/ufs.h
+@@ -102,6 +102,12 @@ enum {
+ 	UPIU_CMD_FLAGS_READ	= 0x40,
+ };
+ 
++/* UPIU response flags */
++enum {
++	UPIU_RSP_FLAG_UNDERFLOW	= 0x20,
++	UPIU_RSP_FLAG_OVERFLOW	= 0x40,
++};
++
+ /* UPIU Task Attributes */
+ enum {
+ 	UPIU_TASK_ATTR_SIMPLE	= 0x00,
+diff --git a/init/Kconfig b/init/Kconfig
+index f7f65af4ee129..5e7d4885d1bf8 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -629,6 +629,7 @@ config TASK_IO_ACCOUNTING
+ 
+ config PSI
+ 	bool "Pressure stall information tracking"
++	select KERNFS
+ 	help
+ 	  Collect metrics that indicate how overcommitted the CPU, memory,
+ 	  and IO capacity are in the system.
+diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
+index 399e9a15c38d6..2c03bc881edfd 100644
+--- a/io_uring/io-wq.c
++++ b/io_uring/io-wq.c
+@@ -174,6 +174,16 @@ static void io_worker_ref_put(struct io_wq *wq)
+ 		complete(&wq->worker_done);
+ }
+ 
++bool io_wq_worker_stopped(void)
++{
++	struct io_worker *worker = current->worker_private;
++
++	if (WARN_ON_ONCE(!io_wq_current_is_worker()))
++		return true;
++
++	return test_bit(IO_WQ_BIT_EXIT, &worker->wq->state);
++}
++
+ static void io_worker_cancel_cb(struct io_worker *worker)
+ {
+ 	struct io_wq_acct *acct = io_wq_get_acct(worker);
+@@ -1285,13 +1295,16 @@ static int io_wq_cpu_offline(unsigned int cpu, struct hlist_node *node)
+ 	return __io_wq_cpu_online(wq, cpu, false);
+ }
+ 
+-int io_wq_cpu_affinity(struct io_wq *wq, cpumask_var_t mask)
++int io_wq_cpu_affinity(struct io_uring_task *tctx, cpumask_var_t mask)
+ {
++	if (!tctx || !tctx->io_wq)
++		return -EINVAL;
++
+ 	rcu_read_lock();
+ 	if (mask)
+-		cpumask_copy(wq->cpu_mask, mask);
++		cpumask_copy(tctx->io_wq->cpu_mask, mask);
+ 	else
+-		cpumask_copy(wq->cpu_mask, cpu_possible_mask);
++		cpumask_copy(tctx->io_wq->cpu_mask, cpu_possible_mask);
+ 	rcu_read_unlock();
+ 
+ 	return 0;
+diff --git a/io_uring/io-wq.h b/io_uring/io-wq.h
+index 31228426d1924..2b2a6406dd8ee 100644
+--- a/io_uring/io-wq.h
++++ b/io_uring/io-wq.h
+@@ -50,8 +50,9 @@ void io_wq_put_and_exit(struct io_wq *wq);
+ void io_wq_enqueue(struct io_wq *wq, struct io_wq_work *work);
+ void io_wq_hash_work(struct io_wq_work *work, void *val);
+ 
+-int io_wq_cpu_affinity(struct io_wq *wq, cpumask_var_t mask);
++int io_wq_cpu_affinity(struct io_uring_task *tctx, cpumask_var_t mask);
+ int io_wq_max_workers(struct io_wq *wq, int *new_count);
++bool io_wq_worker_stopped(void);
+ 
+ static inline bool io_wq_is_hashed(struct io_wq_work *work)
+ {
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 93db3e4e7b688..4e9217c1eb2e0 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -229,7 +229,6 @@ static inline void req_fail_link_node(struct io_kiocb *req, int res)
+ static inline void io_req_add_to_cache(struct io_kiocb *req, struct io_ring_ctx *ctx)
+ {
+ 	wq_stack_add_head(&req->comp_list, &ctx->submit_state.free_list);
+-	kasan_poison_object_data(req_cachep, req);
+ }
+ 
+ static __cold void io_ring_ctx_ref_free(struct percpu_ref *ref)
+@@ -1674,6 +1673,9 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
+ 			break;
+ 		nr_events += ret;
+ 		ret = 0;
++
++		if (task_sigpending(current))
++			return -EINTR;
+ 	} while (nr_events < min && !need_resched());
+ 
+ 	return ret;
+@@ -1964,6 +1966,8 @@ fail:
+ 		if (!needs_poll) {
+ 			if (!(req->ctx->flags & IORING_SETUP_IOPOLL))
+ 				break;
++			if (io_wq_worker_stopped())
++				break;
+ 			cond_resched();
+ 			continue;
+ 		}
+@@ -2382,7 +2386,9 @@ static bool io_get_sqe(struct io_ring_ctx *ctx, const struct io_uring_sqe **sqe)
+ 	}
+ 
+ 	/* drop invalid entries */
++	spin_lock(&ctx->completion_lock);
+ 	ctx->cq_extra--;
++	spin_unlock(&ctx->completion_lock);
+ 	WRITE_ONCE(ctx->rings->sq_dropped,
+ 		   READ_ONCE(ctx->rings->sq_dropped) + 1);
+ 	return false;
+@@ -4197,16 +4203,28 @@ static int io_register_enable_rings(struct io_ring_ctx *ctx)
+ 	return 0;
+ }
+ 
++static __cold int __io_register_iowq_aff(struct io_ring_ctx *ctx,
++					 cpumask_var_t new_mask)
++{
++	int ret;
++
++	if (!(ctx->flags & IORING_SETUP_SQPOLL)) {
++		ret = io_wq_cpu_affinity(current->io_uring, new_mask);
++	} else {
++		mutex_unlock(&ctx->uring_lock);
++		ret = io_sqpoll_wq_cpu_affinity(ctx, new_mask);
++		mutex_lock(&ctx->uring_lock);
++	}
++
++	return ret;
++}
++
+ static __cold int io_register_iowq_aff(struct io_ring_ctx *ctx,
+ 				       void __user *arg, unsigned len)
+ {
+-	struct io_uring_task *tctx = current->io_uring;
+ 	cpumask_var_t new_mask;
+ 	int ret;
+ 
+-	if (!tctx || !tctx->io_wq)
+-		return -EINVAL;
+-
+ 	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
+ 		return -ENOMEM;
+ 
+@@ -4227,19 +4245,14 @@ static __cold int io_register_iowq_aff(struct io_ring_ctx *ctx,
+ 		return -EFAULT;
+ 	}
+ 
+-	ret = io_wq_cpu_affinity(tctx->io_wq, new_mask);
++	ret = __io_register_iowq_aff(ctx, new_mask);
+ 	free_cpumask_var(new_mask);
+ 	return ret;
+ }
+ 
+ static __cold int io_unregister_iowq_aff(struct io_ring_ctx *ctx)
+ {
+-	struct io_uring_task *tctx = current->io_uring;
+-
+-	if (!tctx || !tctx->io_wq)
+-		return -EINVAL;
+-
+-	return io_wq_cpu_affinity(tctx->io_wq, NULL);
++	return __io_register_iowq_aff(ctx, NULL);
+ }
+ 
+ static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
+diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
+index d3606d30cf6fd..12769bad5cee0 100644
+--- a/io_uring/io_uring.h
++++ b/io_uring/io_uring.h
+@@ -354,7 +354,6 @@ static inline struct io_kiocb *io_extract_req(struct io_ring_ctx *ctx)
+ 	struct io_kiocb *req;
+ 
+ 	req = container_of(ctx->submit_state.free_list.next, struct io_kiocb, comp_list);
+-	kasan_unpoison_object_data(req_cachep, req);
+ 	wq_stack_extract(&ctx->submit_state.free_list);
+ 	return req;
+ }
+diff --git a/io_uring/net.c b/io_uring/net.c
+index eb1f51ddcb232..8c419c01a5dba 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -642,7 +642,7 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
+ 
+ 	if (!mshot_finished) {
+ 		if (io_aux_cqe(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
+-			       *ret, cflags | IORING_CQE_F_MORE, true)) {
++			       *ret, cflags | IORING_CQE_F_MORE, false)) {
+ 			io_recv_prep_retry(req);
+ 			/* Known not-empty or unknown state, retry */
+ 			if (cflags & IORING_CQE_F_SOCK_NONEMPTY ||
+@@ -1367,7 +1367,7 @@ retry:
+ 	if (ret < 0)
+ 		return ret;
+ 	if (io_aux_cqe(req, issue_flags & IO_URING_F_COMPLETE_DEFER, ret,
+-		       IORING_CQE_F_MORE, true))
++		       IORING_CQE_F_MORE, false))
+ 		goto retry;
+ 
+ 	return -ECANCELED;
+diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
+index 5e329e3cd4706..bd6c2c7959a5b 100644
+--- a/io_uring/sqpoll.c
++++ b/io_uring/sqpoll.c
+@@ -421,3 +421,20 @@ err:
+ 	io_sq_thread_finish(ctx);
+ 	return ret;
+ }
++
++__cold int io_sqpoll_wq_cpu_affinity(struct io_ring_ctx *ctx,
++				     cpumask_var_t mask)
++{
++	struct io_sq_data *sqd = ctx->sq_data;
++	int ret = -EINVAL;
++
++	if (sqd) {
++		io_sq_thread_park(sqd);
++		/* Don't set affinity for a dying thread */
++		if (sqd->thread)
++			ret = io_wq_cpu_affinity(sqd->thread->io_uring, mask);
++		io_sq_thread_unpark(sqd);
++	}
++
++	return ret;
++}
+diff --git a/io_uring/sqpoll.h b/io_uring/sqpoll.h
+index e1b8d508d22d1..8df37e8c91493 100644
+--- a/io_uring/sqpoll.h
++++ b/io_uring/sqpoll.h
+@@ -27,3 +27,4 @@ void io_sq_thread_park(struct io_sq_data *sqd);
+ void io_sq_thread_unpark(struct io_sq_data *sqd);
+ void io_put_sq_data(struct io_sq_data *sqd);
+ void io_sqpoll_wait_sq(struct io_ring_ctx *ctx);
++int io_sqpoll_wq_cpu_affinity(struct io_ring_ctx *ctx, cpumask_var_t mask);
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index addeed3df15d3..8dfd581cd5543 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -2456,6 +2456,8 @@ void __audit_inode_child(struct inode *parent,
+ 		}
+ 	}
+ 
++	cond_resched();
++
+ 	/* is there a matching child entry? */
+ 	list_for_each_entry(n, &context->names_list, list) {
+ 		/* can only match entries that have a name */
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 817204d533723..4b38c97990872 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -6133,7 +6133,6 @@ static int btf_struct_walk(struct bpf_verifier_log *log, const struct btf *btf,
+ 	const char *tname, *mname, *tag_value;
+ 	u32 vlen, elem_id, mid;
+ 
+-	*flag = 0;
+ again:
+ 	tname = __btf_name_by_offset(btf, t->name_off);
+ 	if (!btf_type_is_struct(t)) {
+@@ -6142,6 +6141,14 @@ again:
+ 	}
+ 
+ 	vlen = btf_type_vlen(t);
++	if (BTF_INFO_KIND(t->info) == BTF_KIND_UNION && vlen != 1 && !(*flag & PTR_UNTRUSTED))
++		/*
++		 * walking unions yields untrusted pointers
++		 * with exception of __bpf_md_ptr and other
++		 * unions with a single member
++		 */
++		*flag |= PTR_UNTRUSTED;
++
+ 	if (off + size > t->size) {
+ 		/* If the last element is a variable size array, we may
+ 		 * need to relax the rule.
+@@ -6302,15 +6309,6 @@ error:
+ 		 * of this field or inside of this struct
+ 		 */
+ 		if (btf_type_is_struct(mtype)) {
+-			if (BTF_INFO_KIND(mtype->info) == BTF_KIND_UNION &&
+-			    btf_type_vlen(mtype) != 1)
+-				/*
+-				 * walking unions yields untrusted pointers
+-				 * with exception of __bpf_md_ptr and other
+-				 * unions with a single member
+-				 */
+-				*flag |= PTR_UNTRUSTED;
+-
+ 			/* our field must be inside that union or struct */
+ 			t = mtype;
+ 
+@@ -6368,7 +6366,7 @@ error:
+ 		 * that also allows using an array of int as a scratch
+ 		 * space. e.g. skb->cb[].
+ 		 */
+-		if (off + size > mtrue_end) {
++		if (off + size > mtrue_end && !(*flag & PTR_UNTRUSTED)) {
+ 			bpf_log(log,
+ 				"access beyond the end of member %s (mend:%u) in struct %s with off %u size %u\n",
+ 				mname, mtrue_end, tname, off, size);
+@@ -6476,7 +6474,7 @@ bool btf_struct_ids_match(struct bpf_verifier_log *log,
+ 			  bool strict)
+ {
+ 	const struct btf_type *type;
+-	enum bpf_type_flag flag;
++	enum bpf_type_flag flag = 0;
+ 	int err;
+ 
+ 	/* Are we already done? */
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index 9e80efa59a5d6..8812397a5cd96 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -2243,7 +2243,7 @@ __bpf_kfunc void *bpf_dynptr_slice(const struct bpf_dynptr_kern *ptr, u32 offset
+ 	case BPF_DYNPTR_TYPE_XDP:
+ 	{
+ 		void *xdp_ptr = bpf_xdp_pointer(ptr->data, ptr->offset + offset, len);
+-		if (xdp_ptr)
++		if (!IS_ERR_OR_NULL(xdp_ptr))
+ 			return xdp_ptr;
+ 
+ 		if (!buffer__opt)
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 02a021c524ab8..76845dd22cd26 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -4982,20 +4982,22 @@ static int map_kptr_match_type(struct bpf_verifier_env *env,
+ 			       struct bpf_reg_state *reg, u32 regno)
+ {
+ 	const char *targ_name = btf_type_name(kptr_field->kptr.btf, kptr_field->kptr.btf_id);
+-	int perm_flags = PTR_MAYBE_NULL | PTR_TRUSTED | MEM_RCU;
++	int perm_flags;
+ 	const char *reg_name = "";
+ 
+-	/* Only unreferenced case accepts untrusted pointers */
+-	if (kptr_field->type == BPF_KPTR_UNREF)
+-		perm_flags |= PTR_UNTRUSTED;
++	if (btf_is_kernel(reg->btf)) {
++		perm_flags = PTR_MAYBE_NULL | PTR_TRUSTED | MEM_RCU;
++
++		/* Only unreferenced case accepts untrusted pointers */
++		if (kptr_field->type == BPF_KPTR_UNREF)
++			perm_flags |= PTR_UNTRUSTED;
++	} else {
++		perm_flags = PTR_MAYBE_NULL | MEM_ALLOC;
++	}
+ 
+ 	if (base_type(reg->type) != PTR_TO_BTF_ID || (type_flag(reg->type) & ~perm_flags))
+ 		goto bad_type;
+ 
+-	if (!btf_is_kernel(reg->btf)) {
+-		verbose(env, "R%d must point to kernel BTF\n", regno);
+-		return -EINVAL;
+-	}
+ 	/* We need to verify reg->type and reg->btf, before accessing reg->btf */
+ 	reg_name = btf_type_name(reg->btf, reg->btf_id);
+ 
+@@ -5008,7 +5010,7 @@ static int map_kptr_match_type(struct bpf_verifier_env *env,
+ 	if (__check_ptr_off_reg(env, reg, regno, true))
+ 		return -EACCES;
+ 
+-	/* A full type match is needed, as BTF can be vmlinux or module BTF, and
++	/* A full type match is needed, as BTF can be vmlinux, module or prog BTF, and
+ 	 * we also need to take into account the reg->off.
+ 	 *
+ 	 * We want to support cases like:
+@@ -6085,6 +6087,11 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+ 				   type_is_rcu_or_null(env, reg, field_name, btf_id)) {
+ 				/* __rcu tagged pointers can be NULL */
+ 				flag |= MEM_RCU | PTR_MAYBE_NULL;
++
++				/* We always trust them */
++				if (type_is_rcu_or_null(env, reg, field_name, btf_id) &&
++				    flag & PTR_UNTRUSTED)
++					flag &= ~PTR_UNTRUSTED;
+ 			} else if (flag & (MEM_PERCPU | MEM_USER)) {
+ 				/* keep as-is */
+ 			} else {
+@@ -7745,7 +7752,10 @@ found:
+ 			verbose(env, "verifier internal error: unimplemented handling of MEM_ALLOC\n");
+ 			return -EFAULT;
+ 		}
+-		/* Handled by helper specific checks */
++		if (meta->func_id == BPF_FUNC_kptr_xchg) {
++			if (map_kptr_match_type(env, meta->kptr_field, reg, regno))
++				return -EACCES;
++		}
+ 		break;
+ 	case PTR_TO_BTF_ID | MEM_PERCPU:
+ 	case PTR_TO_BTF_ID | MEM_PERCPU | PTR_TRUSTED:
+@@ -7797,17 +7807,6 @@ int check_func_arg_reg_off(struct bpf_verifier_env *env,
+ 		if (arg_type_is_dynptr(arg_type) && type == PTR_TO_STACK)
+ 			return 0;
+ 
+-		if ((type_is_ptr_alloc_obj(type) || type_is_non_owning_ref(type)) && reg->off) {
+-			if (reg_find_field_offset(reg, reg->off, BPF_GRAPH_NODE_OR_ROOT))
+-				return __check_ptr_off_reg(env, reg, regno, true);
+-
+-			verbose(env, "R%d must have zero offset when passed to release func\n",
+-				regno);
+-			verbose(env, "No graph node or root found at R%d type:%s off:%d\n", regno,
+-				btf_type_name(reg->btf, reg->btf_id), reg->off);
+-			return -EINVAL;
+-		}
+-
+ 		/* Doing check_ptr_off_reg check for the offset will catch this
+ 		 * because fixed_off_ok is false, but checking here allows us
+ 		 * to give the user a better error message.
+@@ -13817,6 +13816,12 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+ 		return -EINVAL;
+ 	}
+ 
++	/* check src2 operand */
++	err = check_reg_arg(env, insn->dst_reg, SRC_OP);
++	if (err)
++		return err;
++
++	dst_reg = &regs[insn->dst_reg];
+ 	if (BPF_SRC(insn->code) == BPF_X) {
+ 		if (insn->imm != 0) {
+ 			verbose(env, "BPF_JMP/JMP32 uses reserved fields\n");
+@@ -13828,12 +13833,13 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+ 		if (err)
+ 			return err;
+ 
+-		if (is_pointer_value(env, insn->src_reg)) {
++		src_reg = &regs[insn->src_reg];
++		if (!(reg_is_pkt_pointer_any(dst_reg) && reg_is_pkt_pointer_any(src_reg)) &&
++		    is_pointer_value(env, insn->src_reg)) {
+ 			verbose(env, "R%d pointer comparison prohibited\n",
+ 				insn->src_reg);
+ 			return -EACCES;
+ 		}
+-		src_reg = &regs[insn->src_reg];
+ 	} else {
+ 		if (insn->src_reg != BPF_REG_0) {
+ 			verbose(env, "BPF_JMP/JMP32 uses reserved fields\n");
+@@ -13841,12 +13847,6 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
+ 		}
+ 	}
+ 
+-	/* check src2 operand */
+-	err = check_reg_arg(env, insn->dst_reg, SRC_OP);
+-	if (err)
+-		return err;
+-
+-	dst_reg = &regs[insn->dst_reg];
+ 	is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;
+ 
+ 	if (BPF_SRC(insn->code) == BPF_K) {
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 58e6f18f01c1b..170e342b07e3d 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -1588,11 +1588,16 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
+ 		}
+ 
+ 		/*
+-		 * Skip the whole subtree if the cpumask remains the same
+-		 * and has no partition root state and force flag not set.
++		 * Skip the whole subtree if
++		 * 1) the cpumask remains the same,
++		 * 2) has no partition root state,
++		 * 3) force flag not set, and
++		 * 4) for v2 load balance state same as its parent.
+ 		 */
+ 		if (!cp->partition_root_state && !force &&
+-		    cpumask_equal(tmp->new_cpus, cp->effective_cpus)) {
++		    cpumask_equal(tmp->new_cpus, cp->effective_cpus) &&
++		    (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) ||
++		    (is_sched_load_balance(parent) == is_sched_load_balance(cp)))) {
+ 			pos_css = css_rightmost_descendant(pos_css);
+ 			continue;
+ 		}
+@@ -1675,6 +1680,20 @@ update_parent_subparts:
+ 
+ 		update_tasks_cpumask(cp, tmp->new_cpus);
+ 
++		/*
++		 * On default hierarchy, inherit the CS_SCHED_LOAD_BALANCE
++		 * from parent if current cpuset isn't a valid partition root
++		 * and their load balance states differ.
++		 */
++		if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys) &&
++		    !is_partition_valid(cp) &&
++		    (is_sched_load_balance(parent) != is_sched_load_balance(cp))) {
++			if (is_sched_load_balance(parent))
++				set_bit(CS_SCHED_LOAD_BALANCE, &cp->flags);
++			else
++				clear_bit(CS_SCHED_LOAD_BALANCE, &cp->flags);
++		}
++
+ 		/*
+ 		 * On legacy hierarchy, if the effective cpumask of any non-
+ 		 * empty cpuset is changed, we need to rebuild sched domains.
+@@ -3222,6 +3241,14 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
+ 		cs->use_parent_ecpus = true;
+ 		parent->child_ecpus_count++;
+ 	}
++
++	/*
++	 * For v2, clear CS_SCHED_LOAD_BALANCE if parent is isolated
++	 */
++	if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys) &&
++	    !is_sched_load_balance(parent))
++		clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
++
+ 	spin_unlock_irq(&callback_lock);
+ 
+ 	if (!test_bit(CGRP_CPUSET_CLONE_CHILDREN, &css->cgroup->flags))
+diff --git a/kernel/cgroup/namespace.c b/kernel/cgroup/namespace.c
+index 0d5c29879a50b..144a464e45c66 100644
+--- a/kernel/cgroup/namespace.c
++++ b/kernel/cgroup/namespace.c
+@@ -149,9 +149,3 @@ const struct proc_ns_operations cgroupns_operations = {
+ 	.install	= cgroupns_install,
+ 	.owner		= cgroupns_owner,
+ };
+-
+-static __init int cgroup_namespaces_init(void)
+-{
+-	return 0;
+-}
+-subsys_initcall(cgroup_namespaces_init);
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 88a7ede322bd5..9628ae3c2825b 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -1467,8 +1467,22 @@ out:
+ 	return ret;
+ }
+ 
++struct cpu_down_work {
++	unsigned int		cpu;
++	enum cpuhp_state	target;
++};
++
++static long __cpu_down_maps_locked(void *arg)
++{
++	struct cpu_down_work *work = arg;
++
++	return _cpu_down(work->cpu, 0, work->target);
++}
++
+ static int cpu_down_maps_locked(unsigned int cpu, enum cpuhp_state target)
+ {
++	struct cpu_down_work work = { .cpu = cpu, .target = target, };
++
+ 	/*
+ 	 * If the platform does not support hotplug, report it explicitly to
+ 	 * differentiate it from a transient offlining failure.
+@@ -1477,7 +1491,15 @@ static int cpu_down_maps_locked(unsigned int cpu, enum cpuhp_state target)
+ 		return -EOPNOTSUPP;
+ 	if (cpu_hotplug_disabled)
+ 		return -EBUSY;
+-	return _cpu_down(cpu, 0, target);
++
++	/*
++	 * Ensure that the control task does not run on the to be offlined
++	 * CPU to prevent a deadlock against cfs_b->period_timer.
++	 */
++	cpu = cpumask_any_but(cpu_online_mask, cpu);
++	if (cpu >= nr_cpu_ids)
++		return -EBUSY;
++	return work_on_cpu(cpu, __cpu_down_maps_locked, &work);
+ }
+ 
+ static int cpu_down(unsigned int cpu, enum cpuhp_state target)
+diff --git a/kernel/pid.c b/kernel/pid.c
+index 6a1d23a11026c..fee14a4486a31 100644
+--- a/kernel/pid.c
++++ b/kernel/pid.c
+@@ -83,6 +83,9 @@ struct pid_namespace init_pid_ns = {
+ #ifdef CONFIG_PID_NS
+ 	.ns.ops = &pidns_operations,
+ #endif
++#if defined(CONFIG_SYSCTL) && defined(CONFIG_MEMFD_CREATE)
++	.memfd_noexec_scope = MEMFD_NOEXEC_SCOPE_EXEC,
++#endif
+ };
+ EXPORT_SYMBOL_GPL(init_pid_ns);
+ 
+diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c
+index 0bf44afe04dd1..619972c78774f 100644
+--- a/kernel/pid_namespace.c
++++ b/kernel/pid_namespace.c
+@@ -110,9 +110,9 @@ static struct pid_namespace *create_pid_namespace(struct user_namespace *user_ns
+ 	ns->user_ns = get_user_ns(user_ns);
+ 	ns->ucounts = ucounts;
+ 	ns->pid_allocated = PIDNS_ADDING;
+-
+-	initialize_memfd_noexec_scope(ns);
+-
++#if defined(CONFIG_SYSCTL) && defined(CONFIG_MEMFD_CREATE)
++	ns->memfd_noexec_scope = pidns_memfd_noexec_scope(parent_pid_ns);
++#endif
+ 	return ns;
+ 
+ out_free_idr:
+diff --git a/kernel/pid_sysctl.h b/kernel/pid_sysctl.h
+index b26e027fc9cd4..2ee41a3a1dfde 100644
+--- a/kernel/pid_sysctl.h
++++ b/kernel/pid_sysctl.h
+@@ -5,33 +5,30 @@
+ #include <linux/pid_namespace.h>
+ 
+ #if defined(CONFIG_SYSCTL) && defined(CONFIG_MEMFD_CREATE)
+-static inline void initialize_memfd_noexec_scope(struct pid_namespace *ns)
+-{
+-	ns->memfd_noexec_scope =
+-		task_active_pid_ns(current)->memfd_noexec_scope;
+-}
+-
+ static int pid_mfd_noexec_dointvec_minmax(struct ctl_table *table,
+ 	int write, void *buf, size_t *lenp, loff_t *ppos)
+ {
+ 	struct pid_namespace *ns = task_active_pid_ns(current);
+ 	struct ctl_table table_copy;
++	int err, scope, parent_scope;
+ 
+ 	if (write && !ns_capable(ns->user_ns, CAP_SYS_ADMIN))
+ 		return -EPERM;
+ 
+ 	table_copy = *table;
+-	if (ns != &init_pid_ns)
+-		table_copy.data = &ns->memfd_noexec_scope;
+ 
+-	/*
+-	 * set minimum to current value, the effect is only bigger
+-	 * value is accepted.
+-	 */
+-	if (*(int *)table_copy.data > *(int *)table_copy.extra1)
+-		table_copy.extra1 = table_copy.data;
++	/* You cannot set a lower enforcement value than your parent. */
++	parent_scope = pidns_memfd_noexec_scope(ns->parent);
++	/* Equivalent to pidns_memfd_noexec_scope(ns). */
++	scope = max(READ_ONCE(ns->memfd_noexec_scope), parent_scope);
++
++	table_copy.data = &scope;
++	table_copy.extra1 = &parent_scope;
+ 
+-	return proc_dointvec_minmax(&table_copy, write, buf, lenp, ppos);
++	err = proc_dointvec_minmax(&table_copy, write, buf, lenp, ppos);
++	if (!err && write)
++		WRITE_ONCE(ns->memfd_noexec_scope, scope);
++	return err;
+ }
+ 
+ static struct ctl_table pid_ns_ctl_table_vm[] = {
+@@ -51,7 +48,6 @@ static inline void register_pid_ns_sysctl_table_vm(void)
+ 	register_sysctl("vm", pid_ns_ctl_table_vm);
+ }
+ #else
+-static inline void initialize_memfd_noexec_scope(struct pid_namespace *ns) {}
+ static inline void register_pid_ns_sysctl_table_vm(void) {}
+ #endif
+ 
+diff --git a/kernel/printk/printk_ringbuffer.c b/kernel/printk/printk_ringbuffer.c
+index 2dc4d5a1f1ff8..fde338606ce83 100644
+--- a/kernel/printk/printk_ringbuffer.c
++++ b/kernel/printk/printk_ringbuffer.c
+@@ -1735,7 +1735,7 @@ static bool copy_data(struct prb_data_ring *data_ring,
+ 	if (!buf || !buf_size)
+ 		return true;
+ 
+-	data_size = min_t(u16, buf_size, len);
++	data_size = min_t(unsigned int, buf_size, len);
+ 
+ 	memcpy(&buf[0], data, data_size); /* LMM(copy_data:A) */
+ 	return true;
+diff --git a/kernel/rcu/refscale.c b/kernel/rcu/refscale.c
+index 1970ce5f22d40..71d138573856f 100644
+--- a/kernel/rcu/refscale.c
++++ b/kernel/rcu/refscale.c
+@@ -1107,12 +1107,11 @@ ref_scale_init(void)
+ 	VERBOSE_SCALEOUT("Starting %d reader threads", nreaders);
+ 
+ 	for (i = 0; i < nreaders; i++) {
++		init_waitqueue_head(&reader_tasks[i].wq);
+ 		firsterr = torture_create_kthread(ref_scale_reader, (void *)i,
+ 						  reader_tasks[i].task);
+ 		if (torture_init_error(firsterr))
+ 			goto unwind;
+-
+-		init_waitqueue_head(&(reader_tasks[i].wq));
+ 	}
+ 
+ 	// Main Task
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index b3e25be58e2b7..1d9c2482c5a35 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -7289,9 +7289,6 @@ cpu_util(int cpu, struct task_struct *p, int dst_cpu, int boost)
+ 
+ 		util_est = READ_ONCE(cfs_rq->avg.util_est.enqueued);
+ 
+-		if (boost)
+-			util_est = max(util_est, runnable);
+-
+ 		/*
+ 		 * During wake-up @p isn't enqueued yet and doesn't contribute
+ 		 * to any cpu_rq(cpu)->cfs.avg.util_est.enqueued.
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index 00e0e50741153..185d3d749f6b6 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -25,7 +25,7 @@ unsigned int sysctl_sched_rt_period = 1000000;
+ int sysctl_sched_rt_runtime = 950000;
+ 
+ #ifdef CONFIG_SYSCTL
+-static int sysctl_sched_rr_timeslice = (MSEC_PER_SEC / HZ) * RR_TIMESLICE;
++static int sysctl_sched_rr_timeslice = (MSEC_PER_SEC * RR_TIMESLICE) / HZ;
+ static int sched_rt_handler(struct ctl_table *table, int write, void *buffer,
+ 		size_t *lenp, loff_t *ppos);
+ static int sched_rr_handler(struct ctl_table *table, int write, void *buffer,
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index 88cbc1181b239..c108ed8a9804a 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -473,8 +473,8 @@ static void clocksource_watchdog(struct timer_list *unused)
+ 		/* Check the deviation from the watchdog clocksource. */
+ 		md = cs->uncertainty_margin + watchdog->uncertainty_margin;
+ 		if (abs(cs_nsec - wd_nsec) > md) {
+-			u64 cs_wd_msec;
+-			u64 wd_msec;
++			s64 cs_wd_msec;
++			s64 wd_msec;
+ 			u32 wd_rem;
+ 
+ 			pr_warn("timekeeping watchdog on CPU%d: Marking clocksource '%s' as unstable because the skew is too large:\n",
+@@ -483,8 +483,8 @@ static void clocksource_watchdog(struct timer_list *unused)
+ 				watchdog->name, wd_nsec, wdnow, wdlast, watchdog->mask);
+ 			pr_warn("                      '%s' cs_nsec: %lld cs_now: %llx cs_last: %llx mask: %llx\n",
+ 				cs->name, cs_nsec, csnow, cslast, cs->mask);
+-			cs_wd_msec = div_u64_rem(cs_nsec - wd_nsec, 1000U * 1000U, &wd_rem);
+-			wd_msec = div_u64_rem(wd_nsec, 1000U * 1000U, &wd_rem);
++			cs_wd_msec = div_s64_rem(cs_nsec - wd_nsec, 1000 * 1000, &wd_rem);
++			wd_msec = div_s64_rem(wd_nsec, 1000 * 1000, &wd_rem);
+ 			pr_warn("                      Clocksource '%s' skewed %lld ns (%lld ms) over watchdog '%s' interval of %lld ns (%lld ms)\n",
+ 				cs->name, cs_nsec - wd_nsec, cs_wd_msec, watchdog->name, wd_nsec, wd_msec);
+ 			if (curr_clocksource == cs)
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index 4df14db4da490..87015e9deacc9 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -1045,7 +1045,7 @@ static bool report_idle_softirq(void)
+ 		return false;
+ 
+ 	/* On RT, softirqs handling may be waiting on some lock */
+-	if (!local_bh_blocked())
++	if (local_bh_blocked())
+ 		return false;
+ 
+ 	pr_warn("NOHZ tick-stop error: local softirq work is pending, handler #%02x!!!\n",
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index bd1a42b23f3ff..30d8db47c1e2f 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -2391,7 +2391,7 @@ int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id,
+ #ifdef CONFIG_UPROBE_EVENTS
+ 		if (flags & TRACE_EVENT_FL_UPROBE)
+ 			err = bpf_get_uprobe_info(event, fd_type, buf,
+-						  probe_offset,
++						  probe_offset, probe_addr,
+ 						  event->attr.type == PERF_TYPE_TRACEPOINT);
+ #endif
+ 	}
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 2656ca3b9b39c..745332d10b3e1 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -7618,6 +7618,11 @@ out:
+ 	return ret;
+ }
+ 
++static void tracing_swap_cpu_buffer(void *tr)
++{
++	update_max_tr_single((struct trace_array *)tr, current, smp_processor_id());
++}
++
+ static ssize_t
+ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ 		       loff_t *ppos)
+@@ -7676,13 +7681,15 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ 			ret = tracing_alloc_snapshot_instance(tr);
+ 		if (ret < 0)
+ 			break;
+-		local_irq_disable();
+ 		/* Now, we're going to swap */
+-		if (iter->cpu_file == RING_BUFFER_ALL_CPUS)
++		if (iter->cpu_file == RING_BUFFER_ALL_CPUS) {
++			local_irq_disable();
+ 			update_max_tr(tr, current, smp_processor_id(), NULL);
+-		else
+-			update_max_tr_single(tr, current, iter->cpu_file);
+-		local_irq_enable();
++			local_irq_enable();
++		} else {
++			smp_call_function_single(iter->cpu_file, tracing_swap_cpu_buffer,
++						 (void *)tr, 1);
++		}
+ 		break;
+ 	default:
+ 		if (tr->allocated_snapshot) {
+diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c
+index 2f37a6e68aa9f..b791524a6536a 100644
+--- a/kernel/trace/trace_hwlat.c
++++ b/kernel/trace/trace_hwlat.c
+@@ -635,7 +635,7 @@ static int s_mode_show(struct seq_file *s, void *v)
+ 	else
+ 		seq_printf(s, "%s", thread_mode_str[mode]);
+ 
+-	if (mode != MODE_MAX)
++	if (mode < MODE_MAX - 1) /* if mode is any but last */
+ 		seq_puts(s, " ");
+ 
+ 	return 0;
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index 688bf579f2f1e..555c223c32321 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -1418,7 +1418,7 @@ static void uretprobe_perf_func(struct trace_uprobe *tu, unsigned long func,
+ 
+ int bpf_get_uprobe_info(const struct perf_event *event, u32 *fd_type,
+ 			const char **filename, u64 *probe_offset,
+-			bool perf_type_tracepoint)
++			u64 *probe_addr, bool perf_type_tracepoint)
+ {
+ 	const char *pevent = trace_event_name(event->tp_event);
+ 	const char *group = event->tp_event->class->system;
+@@ -1435,6 +1435,7 @@ int bpf_get_uprobe_info(const struct perf_event *event, u32 *fd_type,
+ 				    : BPF_FD_TYPE_UPROBE;
+ 	*filename = tu->filename;
+ 	*probe_offset = tu->offset;
++	*probe_addr = 0;
+ 	return 0;
+ }
+ #endif	/* CONFIG_PERF_EVENTS */
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index be38276a365f3..d145305d95fe8 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -151,9 +151,6 @@ void watchdog_hardlockup_check(unsigned int cpu, struct pt_regs *regs)
+ 	 */
+ 	if (is_hardlockup(cpu)) {
+ 		unsigned int this_cpu = smp_processor_id();
+-		struct cpumask backtrace_mask;
+-
+-		cpumask_copy(&backtrace_mask, cpu_online_mask);
+ 
+ 		/* Only print hardlockups once. */
+ 		if (per_cpu(watchdog_hardlockup_warned, cpu))
+@@ -167,10 +164,8 @@ void watchdog_hardlockup_check(unsigned int cpu, struct pt_regs *regs)
+ 				show_regs(regs);
+ 			else
+ 				dump_stack();
+-			cpumask_clear_cpu(cpu, &backtrace_mask);
+ 		} else {
+-			if (trigger_single_cpu_backtrace(cpu))
+-				cpumask_clear_cpu(cpu, &backtrace_mask);
++			trigger_single_cpu_backtrace(cpu);
+ 		}
+ 
+ 		/*
+@@ -179,7 +174,7 @@ void watchdog_hardlockup_check(unsigned int cpu, struct pt_regs *regs)
+ 		 */
+ 		if (sysctl_hardlockup_all_cpu_backtrace &&
+ 		    !test_and_set_bit(0, &watchdog_hardlockup_all_cpu_dumped))
+-			trigger_cpumask_backtrace(&backtrace_mask);
++			trigger_allbutcpu_cpu_backtrace(cpu);
+ 
+ 		if (hardlockup_panic)
+ 			nmi_panic(regs, "Hard LOCKUP");
+@@ -523,7 +518,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
+ 			dump_stack();
+ 
+ 		if (softlockup_all_cpu_backtrace) {
+-			trigger_allbutself_cpu_backtrace();
++			trigger_allbutcpu_cpu_backtrace(smp_processor_id());
+ 			clear_bit_unlock(0, &soft_lockup_nmi_warn);
+ 		}
+ 
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 800b4208dba9a..e51ab3d4765eb 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -2569,6 +2569,7 @@ __acquires(&pool->lock)
+ 	 */
+ 	set_work_pool_and_clear_pending(work, pool->id);
+ 
++	pwq->stats[PWQ_STAT_STARTED]++;
+ 	raw_spin_unlock_irq(&pool->lock);
+ 
+ 	lock_map_acquire(&pwq->wq->lockdep_map);
+@@ -2595,7 +2596,6 @@ __acquires(&pool->lock)
+ 	 * workqueues), so hiding them isn't a problem.
+ 	 */
+ 	lockdep_invariant_state(true);
+-	pwq->stats[PWQ_STAT_STARTED]++;
+ 	trace_workqueue_execute_start(work);
+ 	worker->current_func(work);
+ 	/*
+diff --git a/lib/checksum_kunit.c b/lib/checksum_kunit.c
+index ace3c4799fe15..0eed92b77ba37 100644
+--- a/lib/checksum_kunit.c
++++ b/lib/checksum_kunit.c
+@@ -10,7 +10,8 @@
+ #define MAX_ALIGN 64
+ #define TEST_BUFLEN (MAX_LEN + MAX_ALIGN)
+ 
+-static const __wsum random_init_sum = 0x2847aab;
++/* Values for a little endian CPU. Byte swap each half on big endian CPU. */
++static const u32 random_init_sum = 0x2847aab;
+ static const u8 random_buf[] = {
+ 	0xac, 0xd7, 0x76, 0x69, 0x6e, 0xf2, 0x93, 0x2c, 0x1f, 0xe0, 0xde, 0x86,
+ 	0x8f, 0x54, 0x33, 0x90, 0x95, 0xbf, 0xff, 0xb9, 0xea, 0x62, 0x6e, 0xb5,
+@@ -56,7 +57,9 @@ static const u8 random_buf[] = {
+ 	0xe1, 0xdf, 0x4b, 0xe1, 0x81, 0xe2, 0x17, 0x02, 0x7b, 0x58, 0x8b, 0x92,
+ 	0x1a, 0xac, 0x46, 0xdd, 0x2e, 0xce, 0x40, 0x09
+ };
+-static const __sum16 expected_results[] = {
++
++/* Values for a little endian CPU. Byte swap on big endian CPU. */
++static const u16 expected_results[] = {
+ 	0x82d0, 0x8224, 0xab23, 0xaaad, 0x41ad, 0x413f, 0x4f3e, 0x4eab, 0x22ab,
+ 	0x228c, 0x428b, 0x41ad, 0xbbac, 0xbb1d, 0x671d, 0x66ea, 0xd6e9, 0xd654,
+ 	0x1754, 0x1655, 0x5d54, 0x5c6a, 0xfa69, 0xf9fb, 0x44fb, 0x4428, 0xf527,
+@@ -115,7 +118,9 @@ static const __sum16 expected_results[] = {
+ 	0x1d47, 0x3c46, 0x3bc5, 0x59c4, 0x59ad, 0x57ad, 0x5732, 0xff31, 0xfea6,
+ 	0x6ca6, 0x6c8c, 0xc08b, 0xc045, 0xe344, 0xe316, 0x1516, 0x14d6,
+ };
+-static const __wsum init_sums_no_overflow[] = {
++
++/* Values for a little endian CPU. Byte swap each half on big endian CPU. */
++static const u32 init_sums_no_overflow[] = {
+ 	0xffffffff, 0xfffffffb, 0xfffffbfb, 0xfffffbf7, 0xfffff7f7, 0xfffff7f3,
+ 	0xfffff3f3, 0xfffff3ef, 0xffffefef, 0xffffefeb, 0xffffebeb, 0xffffebe7,
+ 	0xffffe7e7, 0xffffe7e3, 0xffffe3e3, 0xffffe3df, 0xffffdfdf, 0xffffdfdb,
+@@ -208,7 +213,21 @@ static u8 tmp_buf[TEST_BUFLEN];
+ 
+ #define full_csum(buff, len, sum) csum_fold(csum_partial(buff, len, sum))
+ 
+-#define CHECK_EQ(lhs, rhs) KUNIT_ASSERT_EQ(test, lhs, rhs)
++#define CHECK_EQ(lhs, rhs) KUNIT_ASSERT_EQ(test, (__force u64)lhs, (__force u64)rhs)
++
++static __sum16 to_sum16(u16 x)
++{
++	return (__force __sum16)le16_to_cpu((__force __le16)x);
++}
++
++/* This function swaps the bytes inside each half of a __wsum */
++static __wsum to_wsum(u32 x)
++{
++	u16 hi = le16_to_cpu((__force __le16)(x >> 16));
++	u16 lo = le16_to_cpu((__force __le16)x);
++
++	return (__force __wsum)((hi << 16) | lo);
++}
+ 
+ static void assert_setup_correct(struct kunit *test)
+ {
+@@ -226,7 +245,8 @@ static void assert_setup_correct(struct kunit *test)
+ static void test_csum_fixed_random_inputs(struct kunit *test)
+ {
+ 	int len, align;
+-	__wsum result, expec, sum;
++	__wsum sum;
++	__sum16 result, expec;
+ 
+ 	assert_setup_correct(test);
+ 	for (align = 0; align < TEST_BUFLEN; ++align) {
+@@ -237,9 +257,9 @@ static void test_csum_fixed_random_inputs(struct kunit *test)
+ 			/*
+ 			 * Test the precomputed random input.
+ 			 */
+-			sum = random_init_sum;
++			sum = to_wsum(random_init_sum);
+ 			result = full_csum(&tmp_buf[align], len, sum);
+-			expec = expected_results[len];
++			expec = to_sum16(expected_results[len]);
+ 			CHECK_EQ(result, expec);
+ 		}
+ 	}
+@@ -251,7 +271,8 @@ static void test_csum_fixed_random_inputs(struct kunit *test)
+ static void test_csum_all_carry_inputs(struct kunit *test)
+ {
+ 	int len, align;
+-	__wsum result, expec, sum;
++	__wsum sum;
++	__sum16 result, expec;
+ 
+ 	assert_setup_correct(test);
+ 	memset(tmp_buf, 0xff, TEST_BUFLEN);
+@@ -261,9 +282,9 @@ static void test_csum_all_carry_inputs(struct kunit *test)
+ 			/*
+ 			 * All carries from input and initial sum.
+ 			 */
+-			sum = 0xffffffff;
++			sum = to_wsum(0xffffffff);
+ 			result = full_csum(&tmp_buf[align], len, sum);
+-			expec = (len & 1) ? 0xff00 : 0;
++			expec = to_sum16((len & 1) ? 0xff00 : 0);
+ 			CHECK_EQ(result, expec);
+ 
+ 			/*
+@@ -272,11 +293,11 @@ static void test_csum_all_carry_inputs(struct kunit *test)
+ 			sum = 0;
+ 			result = full_csum(&tmp_buf[align], len, sum);
+ 			if (len & 1)
+-				expec = 0xff00;
++				expec = to_sum16(0xff00);
+ 			else if (len)
+ 				expec = 0;
+ 			else
+-				expec = 0xffff;
++				expec = to_sum16(0xffff);
+ 			CHECK_EQ(result, expec);
+ 		}
+ 	}
+@@ -290,7 +311,8 @@ static void test_csum_all_carry_inputs(struct kunit *test)
+ static void test_csum_no_carry_inputs(struct kunit *test)
+ {
+ 	int len, align;
+-	__wsum result, expec, sum;
++	__wsum sum;
++	__sum16 result, expec;
+ 
+ 	assert_setup_correct(test);
+ 	memset(tmp_buf, 0x4, TEST_BUFLEN);
+@@ -300,7 +322,7 @@ static void test_csum_no_carry_inputs(struct kunit *test)
+ 			/*
+ 			 * Expect no carries.
+ 			 */
+-			sum = init_sums_no_overflow[len];
++			sum = to_wsum(init_sums_no_overflow[len]);
+ 			result = full_csum(&tmp_buf[align], len, sum);
+ 			expec = 0;
+ 			CHECK_EQ(result, expec);
+@@ -308,9 +330,9 @@ static void test_csum_no_carry_inputs(struct kunit *test)
+ 			/*
+ 			 * Expect one carry.
+ 			 */
+-			sum = init_sums_no_overflow[len] + 1;
++			sum = to_wsum(init_sums_no_overflow[len] + 1);
+ 			result = full_csum(&tmp_buf[align], len, sum);
+-			expec = len ? 0xfffe : 0xffff;
++			expec = to_sum16(len ? 0xfffe : 0xffff);
+ 			CHECK_EQ(result, expec);
+ 		}
+ 	}
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index e4dc809d10754..37f78d7b3d323 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -1640,14 +1640,14 @@ static ssize_t iov_iter_extract_bvec_pages(struct iov_iter *i,
+ 					   size_t *offset0)
+ {
+ 	struct page **p, *page;
+-	size_t skip = i->iov_offset, offset;
++	size_t skip = i->iov_offset, offset, size;
+ 	int k;
+ 
+ 	for (;;) {
+ 		if (i->nr_segs == 0)
+ 			return 0;
+-		maxsize = min(maxsize, i->bvec->bv_len - skip);
+-		if (maxsize)
++		size = min(maxsize, i->bvec->bv_len - skip);
++		if (size)
+ 			break;
+ 		i->iov_offset = 0;
+ 		i->nr_segs--;
+@@ -1660,16 +1660,16 @@ static ssize_t iov_iter_extract_bvec_pages(struct iov_iter *i,
+ 	offset = skip % PAGE_SIZE;
+ 	*offset0 = offset;
+ 
+-	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
++	maxpages = want_pages_array(pages, size, offset, maxpages);
+ 	if (!maxpages)
+ 		return -ENOMEM;
+ 	p = *pages;
+ 	for (k = 0; k < maxpages; k++)
+ 		p[k] = page + k;
+ 
+-	maxsize = min_t(size_t, maxsize, maxpages * PAGE_SIZE - offset);
+-	iov_iter_advance(i, maxsize);
+-	return maxsize;
++	size = min_t(size_t, size, maxpages * PAGE_SIZE - offset);
++	iov_iter_advance(i, size);
++	return size;
+ }
+ 
+ /*
+@@ -1684,14 +1684,14 @@ static ssize_t iov_iter_extract_kvec_pages(struct iov_iter *i,
+ {
+ 	struct page **p, *page;
+ 	const void *kaddr;
+-	size_t skip = i->iov_offset, offset, len;
++	size_t skip = i->iov_offset, offset, len, size;
+ 	int k;
+ 
+ 	for (;;) {
+ 		if (i->nr_segs == 0)
+ 			return 0;
+-		maxsize = min(maxsize, i->kvec->iov_len - skip);
+-		if (maxsize)
++		size = min(maxsize, i->kvec->iov_len - skip);
++		if (size)
+ 			break;
+ 		i->iov_offset = 0;
+ 		i->nr_segs--;
+@@ -1703,13 +1703,13 @@ static ssize_t iov_iter_extract_kvec_pages(struct iov_iter *i,
+ 	offset = (unsigned long)kaddr & ~PAGE_MASK;
+ 	*offset0 = offset;
+ 
+-	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
++	maxpages = want_pages_array(pages, size, offset, maxpages);
+ 	if (!maxpages)
+ 		return -ENOMEM;
+ 	p = *pages;
+ 
+ 	kaddr -= offset;
+-	len = offset + maxsize;
++	len = offset + size;
+ 	for (k = 0; k < maxpages; k++) {
+ 		size_t seg = min_t(size_t, len, PAGE_SIZE);
+ 
+@@ -1723,9 +1723,9 @@ static ssize_t iov_iter_extract_kvec_pages(struct iov_iter *i,
+ 		kaddr += PAGE_SIZE;
+ 	}
+ 
+-	maxsize = min_t(size_t, maxsize, maxpages * PAGE_SIZE - offset);
+-	iov_iter_advance(i, maxsize);
+-	return maxsize;
++	size = min_t(size_t, size, maxpages * PAGE_SIZE - offset);
++	iov_iter_advance(i, size);
++	return size;
+ }
+ 
+ /*
+diff --git a/lib/nmi_backtrace.c b/lib/nmi_backtrace.c
+index 5274bbb026d79..33c154264bfe2 100644
+--- a/lib/nmi_backtrace.c
++++ b/lib/nmi_backtrace.c
+@@ -34,7 +34,7 @@ static unsigned long backtrace_flag;
+  * they are passed being updated as a side effect of this call.
+  */
+ void nmi_trigger_cpumask_backtrace(const cpumask_t *mask,
+-				   bool exclude_self,
++				   int exclude_cpu,
+ 				   void (*raise)(cpumask_t *mask))
+ {
+ 	int i, this_cpu = get_cpu();
+@@ -49,8 +49,8 @@ void nmi_trigger_cpumask_backtrace(const cpumask_t *mask,
+ 	}
+ 
+ 	cpumask_copy(to_cpumask(backtrace_mask), mask);
+-	if (exclude_self)
+-		cpumask_clear_cpu(this_cpu, to_cpumask(backtrace_mask));
++	if (exclude_cpu != -1)
++		cpumask_clear_cpu(exclude_cpu, to_cpumask(backtrace_mask));
+ 
+ 	/*
+ 	 * Don't try to send an NMI to this cpu; it may work on some
+diff --git a/lib/xarray.c b/lib/xarray.c
+index 2071a3718f4ed..142e36f9dfda1 100644
+--- a/lib/xarray.c
++++ b/lib/xarray.c
+@@ -206,7 +206,7 @@ static void *xas_descend(struct xa_state *xas, struct xa_node *node)
+ 	void *entry = xa_entry(xas->xa, node, offset);
+ 
+ 	xas->xa_node = node;
+-	if (xa_is_sibling(entry)) {
++	while (xa_is_sibling(entry)) {
+ 		offset = xa_to_sibling(entry);
+ 		entry = xa_entry(xas->xa, node, offset);
+ 		if (node->shift && xa_is_node(entry))
+diff --git a/mm/memfd.c b/mm/memfd.c
+index e763e76f11064..2dba2cb6f0d0f 100644
+--- a/mm/memfd.c
++++ b/mm/memfd.c
+@@ -268,11 +268,33 @@ long memfd_fcntl(struct file *file, unsigned int cmd, unsigned int arg)
+ 
+ #define MFD_ALL_FLAGS (MFD_CLOEXEC | MFD_ALLOW_SEALING | MFD_HUGETLB | MFD_NOEXEC_SEAL | MFD_EXEC)
+ 
++static int check_sysctl_memfd_noexec(unsigned int *flags)
++{
++#ifdef CONFIG_SYSCTL
++	struct pid_namespace *ns = task_active_pid_ns(current);
++	int sysctl = pidns_memfd_noexec_scope(ns);
++
++	if (!(*flags & (MFD_EXEC | MFD_NOEXEC_SEAL))) {
++		if (sysctl >= MEMFD_NOEXEC_SCOPE_NOEXEC_SEAL)
++			*flags |= MFD_NOEXEC_SEAL;
++		else
++			*flags |= MFD_EXEC;
++	}
++
++	if (!(*flags & MFD_NOEXEC_SEAL) && sysctl >= MEMFD_NOEXEC_SCOPE_NOEXEC_ENFORCED) {
++		pr_err_ratelimited(
++			"%s[%d]: memfd_create() requires MFD_NOEXEC_SEAL with vm.memfd_noexec=%d\n",
++			current->comm, task_pid_nr(current), sysctl);
++		return -EACCES;
++	}
++#endif
++	return 0;
++}
++
+ SYSCALL_DEFINE2(memfd_create,
+ 		const char __user *, uname,
+ 		unsigned int, flags)
+ {
+-	char comm[TASK_COMM_LEN];
+ 	unsigned int *file_seals;
+ 	struct file *file;
+ 	int fd, error;
+@@ -294,35 +316,15 @@ SYSCALL_DEFINE2(memfd_create,
+ 		return -EINVAL;
+ 
+ 	if (!(flags & (MFD_EXEC | MFD_NOEXEC_SEAL))) {
+-#ifdef CONFIG_SYSCTL
+-		int sysctl = MEMFD_NOEXEC_SCOPE_EXEC;
+-		struct pid_namespace *ns;
+-
+-		ns = task_active_pid_ns(current);
+-		if (ns)
+-			sysctl = ns->memfd_noexec_scope;
+-
+-		switch (sysctl) {
+-		case MEMFD_NOEXEC_SCOPE_EXEC:
+-			flags |= MFD_EXEC;
+-			break;
+-		case MEMFD_NOEXEC_SCOPE_NOEXEC_SEAL:
+-			flags |= MFD_NOEXEC_SEAL;
+-			break;
+-		default:
+-			pr_warn_once(
+-				"memfd_create(): MFD_NOEXEC_SEAL is enforced, pid=%d '%s'\n",
+-				task_pid_nr(current), get_task_comm(comm, current));
+-			return -EINVAL;
+-		}
+-#else
+-		flags |= MFD_EXEC;
+-#endif
+ 		pr_warn_once(
+-			"memfd_create() without MFD_EXEC nor MFD_NOEXEC_SEAL, pid=%d '%s'\n",
+-			task_pid_nr(current), get_task_comm(comm, current));
++			"%s[%d]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set\n",
++			current->comm, task_pid_nr(current));
+ 	}
+ 
++	error = check_sysctl_memfd_noexec(&flags);
++	if (error < 0)
++		return error;
++
+ 	/* length includes terminating zero */
+ 	len = strnlen_user(uname, MFD_NAME_MAX_LEN + 1);
+ 	if (len <= 0)
+diff --git a/mm/pagewalk.c b/mm/pagewalk.c
+index 9b2d23fbf4d35..b7d7e4fcfad7a 100644
+--- a/mm/pagewalk.c
++++ b/mm/pagewalk.c
+@@ -58,7 +58,7 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
+ 			pte = pte_offset_map(pmd, addr);
+ 		if (pte) {
+ 			err = walk_pte_range_inner(pte, addr, end, walk);
+-			if (walk->mm != &init_mm)
++			if (walk->mm != &init_mm && addr < TASK_SIZE)
+ 				pte_unmap(pte);
+ 		}
+ 	} else {
+diff --git a/mm/shmem.c b/mm/shmem.c
+index d963c747dabca..79a998b38ac85 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -3641,6 +3641,8 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
+ 	unsigned long long size;
+ 	char *rest;
+ 	int opt;
++	kuid_t kuid;
++	kgid_t kgid;
+ 
+ 	opt = fs_parse(fc, shmem_fs_parameters, param, &result);
+ 	if (opt < 0)
+@@ -3676,14 +3678,32 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
+ 		ctx->mode = result.uint_32 & 07777;
+ 		break;
+ 	case Opt_uid:
+-		ctx->uid = make_kuid(current_user_ns(), result.uint_32);
+-		if (!uid_valid(ctx->uid))
++		kuid = make_kuid(current_user_ns(), result.uint_32);
++		if (!uid_valid(kuid))
+ 			goto bad_value;
++
++		/*
++		 * The requested uid must be representable in the
++		 * filesystem's idmapping.
++		 */
++		if (!kuid_has_mapping(fc->user_ns, kuid))
++			goto bad_value;
++
++		ctx->uid = kuid;
+ 		break;
+ 	case Opt_gid:
+-		ctx->gid = make_kgid(current_user_ns(), result.uint_32);
+-		if (!gid_valid(ctx->gid))
++		kgid = make_kgid(current_user_ns(), result.uint_32);
++		if (!gid_valid(kgid))
+ 			goto bad_value;
++
++		/*
++		 * The requested gid must be representable in the
++		 * filesystem's idmapping.
++		 */
++		if (!kgid_has_mapping(fc->user_ns, kgid))
++			goto bad_value;
++
++		ctx->gid = kgid;
+ 		break;
+ 	case Opt_huge:
+ 		ctx->huge = result.uint_32;
+diff --git a/mm/util.c b/mm/util.c
+index dd12b9531ac4c..406634f26918c 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -1071,7 +1071,9 @@ void mem_dump_obj(void *object)
+ 	if (vmalloc_dump_obj(object))
+ 		return;
+ 
+-	if (virt_addr_valid(object))
++	if (is_vmalloc_addr(object))
++		type = "vmalloc memory";
++	else if (virt_addr_valid(object))
+ 		type = "non-slab/vmalloc memory";
+ 	else if (object == NULL)
+ 		type = "NULL pointer";
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 228a4a5312f22..ef8599d394fd0 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -4278,14 +4278,32 @@ void pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
+ #ifdef CONFIG_PRINTK
+ bool vmalloc_dump_obj(void *object)
+ {
+-	struct vm_struct *vm;
+ 	void *objp = (void *)PAGE_ALIGN((unsigned long)object);
++	const void *caller;
++	struct vm_struct *vm;
++	struct vmap_area *va;
++	unsigned long addr;
++	unsigned int nr_pages;
+ 
+-	vm = find_vm_area(objp);
+-	if (!vm)
++	if (!spin_trylock(&vmap_area_lock))
++		return false;
++	va = __find_vmap_area((unsigned long)objp, &vmap_area_root);
++	if (!va) {
++		spin_unlock(&vmap_area_lock);
+ 		return false;
++	}
++
++	vm = va->vm;
++	if (!vm) {
++		spin_unlock(&vmap_area_lock);
++		return false;
++	}
++	addr = (unsigned long)vm->addr;
++	caller = vm->caller;
++	nr_pages = vm->nr_pages;
++	spin_unlock(&vmap_area_lock);
+ 	pr_cont(" %u-page vmalloc region starting at %#lx allocated at %pS\n",
+-		vm->nr_pages, (unsigned long)vm->addr, vm->caller);
++		nr_pages, addr, caller);
+ 	return true;
+ }
+ #endif
+diff --git a/mm/vmpressure.c b/mm/vmpressure.c
+index b52644771cc43..22c6689d93027 100644
+--- a/mm/vmpressure.c
++++ b/mm/vmpressure.c
+@@ -244,6 +244,14 @@ void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
+ 	if (mem_cgroup_disabled())
+ 		return;
+ 
++	/*
++	 * The in-kernel users only care about the reclaim efficiency
++	 * for this @memcg rather than the whole subtree, and there
++	 * isn't and won't be any in-kernel user in a legacy cgroup.
++	 */
++	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && !tree)
++		return;
++
+ 	vmpr = memcg_to_vmpressure(memcg);
+ 
+ 	/*
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 2fe4a11d63f44..5be64834a8527 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -4891,7 +4891,8 @@ static int lru_gen_memcg_seg(struct lruvec *lruvec)
+  *                          the eviction
+  ******************************************************************************/
+ 
+-static bool sort_folio(struct lruvec *lruvec, struct folio *folio, int tier_idx)
++static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_control *sc,
++		       int tier_idx)
+ {
+ 	bool success;
+ 	int gen = folio_lru_gen(folio);
+@@ -4941,6 +4942,13 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, int tier_idx)
+ 		return true;
+ 	}
+ 
++	/* ineligible */
++	if (zone > sc->reclaim_idx) {
++		gen = folio_inc_gen(lruvec, folio, false);
++		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
++		return true;
++	}
++
+ 	/* waiting for writeback */
+ 	if (folio_test_locked(folio) || folio_test_writeback(folio) ||
+ 	    (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
+@@ -4989,7 +4997,8 @@ static bool isolate_folio(struct lruvec *lruvec, struct folio *folio, struct sca
+ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
+ 		       int type, int tier, struct list_head *list)
+ {
+-	int gen, zone;
++	int i;
++	int gen;
+ 	enum vm_event_item item;
+ 	int sorted = 0;
+ 	int scanned = 0;
+@@ -5005,9 +5014,10 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
+ 
+ 	gen = lru_gen_from_seq(lrugen->min_seq[type]);
+ 
+-	for (zone = sc->reclaim_idx; zone >= 0; zone--) {
++	for (i = MAX_NR_ZONES; i > 0; i--) {
+ 		LIST_HEAD(moved);
+ 		int skipped = 0;
++		int zone = (sc->reclaim_idx + i) % MAX_NR_ZONES;
+ 		struct list_head *head = &lrugen->folios[gen][type][zone];
+ 
+ 		while (!list_empty(head)) {
+@@ -5021,7 +5031,7 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
+ 
+ 			scanned += delta;
+ 
+-			if (sort_folio(lruvec, folio, tier))
++			if (sort_folio(lruvec, folio, sc, tier))
+ 				sorted += delta;
+ 			else if (isolate_folio(lruvec, folio, sc)) {
+ 				list_add(&folio->lru, list);
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 76222565e2df0..ce76931d11d86 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -178,57 +178,6 @@ static void hci_conn_cleanup(struct hci_conn *conn)
+ 	hci_conn_put(conn);
+ }
+ 
+-static void le_scan_cleanup(struct work_struct *work)
+-{
+-	struct hci_conn *conn = container_of(work, struct hci_conn,
+-					     le_scan_cleanup);
+-	struct hci_dev *hdev = conn->hdev;
+-	struct hci_conn *c = NULL;
+-
+-	BT_DBG("%s hcon %p", hdev->name, conn);
+-
+-	hci_dev_lock(hdev);
+-
+-	/* Check that the hci_conn is still around */
+-	rcu_read_lock();
+-	list_for_each_entry_rcu(c, &hdev->conn_hash.list, list) {
+-		if (c == conn)
+-			break;
+-	}
+-	rcu_read_unlock();
+-
+-	if (c == conn) {
+-		hci_connect_le_scan_cleanup(conn, 0x00);
+-		hci_conn_cleanup(conn);
+-	}
+-
+-	hci_dev_unlock(hdev);
+-	hci_dev_put(hdev);
+-	hci_conn_put(conn);
+-}
+-
+-static void hci_connect_le_scan_remove(struct hci_conn *conn)
+-{
+-	BT_DBG("%s hcon %p", conn->hdev->name, conn);
+-
+-	/* We can't call hci_conn_del/hci_conn_cleanup here since that
+-	 * could deadlock with another hci_conn_del() call that's holding
+-	 * hci_dev_lock and doing cancel_delayed_work_sync(&conn->disc_work).
+-	 * Instead, grab temporary extra references to the hci_dev and
+-	 * hci_conn and perform the necessary cleanup in a separate work
+-	 * callback.
+-	 */
+-
+-	hci_dev_hold(conn->hdev);
+-	hci_conn_get(conn);
+-
+-	/* Even though we hold a reference to the hdev, many other
+-	 * things might get cleaned up meanwhile, including the hdev's
+-	 * own workqueue, so we can't use that for scheduling.
+-	 */
+-	schedule_work(&conn->le_scan_cleanup);
+-}
+-
+ static void hci_acl_create_connection(struct hci_conn *conn)
+ {
+ 	struct hci_dev *hdev = conn->hdev;
+@@ -679,13 +628,6 @@ static void hci_conn_timeout(struct work_struct *work)
+ 	if (refcnt > 0)
+ 		return;
+ 
+-	/* LE connections in scanning state need special handling */
+-	if (conn->state == BT_CONNECT && conn->type == LE_LINK &&
+-	    test_bit(HCI_CONN_SCANNING, &conn->flags)) {
+-		hci_connect_le_scan_remove(conn);
+-		return;
+-	}
+-
+ 	hci_abort_conn(conn, hci_proto_disconn_ind(conn));
+ }
+ 
+@@ -791,7 +733,8 @@ struct iso_list_data {
+ 		u16 sync_handle;
+ 	};
+ 	int count;
+-	struct iso_cig_params pdu;
++	bool big_term;
++	bool big_sync_term;
+ };
+ 
+ static void bis_list(struct hci_conn *conn, void *data)
+@@ -809,17 +752,6 @@ static void bis_list(struct hci_conn *conn, void *data)
+ 	d->count++;
+ }
+ 
+-static void find_bis(struct hci_conn *conn, void *data)
+-{
+-	struct iso_list_data *d = data;
+-
+-	/* Ignore unicast */
+-	if (bacmp(&conn->dst, BDADDR_ANY))
+-		return;
+-
+-	d->count++;
+-}
+-
+ static int terminate_big_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct iso_list_data *d = data;
+@@ -828,11 +760,8 @@ static int terminate_big_sync(struct hci_dev *hdev, void *data)
+ 
+ 	hci_remove_ext_adv_instance_sync(hdev, d->bis, NULL);
+ 
+-	/* Check if ISO connection is a BIS and terminate BIG if there are
+-	 * no other connections using it.
+-	 */
+-	hci_conn_hash_list_state(hdev, find_bis, ISO_LINK, BT_CONNECTED, d);
+-	if (d->count)
++	/* Only terminate BIG if it has been created */
++	if (!d->big_term)
+ 		return 0;
+ 
+ 	return hci_le_terminate_big_sync(hdev, d->big,
+@@ -844,19 +773,21 @@ static void terminate_big_destroy(struct hci_dev *hdev, void *data, int err)
+ 	kfree(data);
+ }
+ 
+-static int hci_le_terminate_big(struct hci_dev *hdev, u8 big, u8 bis)
++static int hci_le_terminate_big(struct hci_dev *hdev, struct hci_conn *conn)
+ {
+ 	struct iso_list_data *d;
+ 	int ret;
+ 
+-	bt_dev_dbg(hdev, "big 0x%2.2x bis 0x%2.2x", big, bis);
++	bt_dev_dbg(hdev, "big 0x%2.2x bis 0x%2.2x", conn->iso_qos.bcast.big,
++		   conn->iso_qos.bcast.bis);
+ 
+ 	d = kzalloc(sizeof(*d), GFP_KERNEL);
+ 	if (!d)
+ 		return -ENOMEM;
+ 
+-	d->big = big;
+-	d->bis = bis;
++	d->big = conn->iso_qos.bcast.big;
++	d->bis = conn->iso_qos.bcast.bis;
++	d->big_term = test_and_clear_bit(HCI_CONN_BIG_CREATED, &conn->flags);
+ 
+ 	ret = hci_cmd_sync_queue(hdev, terminate_big_sync, d,
+ 				 terminate_big_destroy);
+@@ -873,31 +804,26 @@ static int big_terminate_sync(struct hci_dev *hdev, void *data)
+ 	bt_dev_dbg(hdev, "big 0x%2.2x sync_handle 0x%4.4x", d->big,
+ 		   d->sync_handle);
+ 
+-	/* Check if ISO connection is a BIS and terminate BIG if there are
+-	 * no other connections using it.
+-	 */
+-	hci_conn_hash_list_state(hdev, find_bis, ISO_LINK, BT_CONNECTED, d);
+-	if (d->count)
+-		return 0;
+-
+-	hci_le_big_terminate_sync(hdev, d->big);
++	if (d->big_sync_term)
++		hci_le_big_terminate_sync(hdev, d->big);
+ 
+ 	return hci_le_pa_terminate_sync(hdev, d->sync_handle);
+ }
+ 
+-static int hci_le_big_terminate(struct hci_dev *hdev, u8 big, u16 sync_handle)
++static int hci_le_big_terminate(struct hci_dev *hdev, u8 big, struct hci_conn *conn)
+ {
+ 	struct iso_list_data *d;
+ 	int ret;
+ 
+-	bt_dev_dbg(hdev, "big 0x%2.2x sync_handle 0x%4.4x", big, sync_handle);
++	bt_dev_dbg(hdev, "big 0x%2.2x sync_handle 0x%4.4x", big, conn->sync_handle);
+ 
+ 	d = kzalloc(sizeof(*d), GFP_KERNEL);
+ 	if (!d)
+ 		return -ENOMEM;
+ 
+ 	d->big = big;
+-	d->sync_handle = sync_handle;
++	d->sync_handle = conn->sync_handle;
++	d->big_sync_term = test_and_clear_bit(HCI_CONN_BIG_SYNC, &conn->flags);
+ 
+ 	ret = hci_cmd_sync_queue(hdev, big_terminate_sync, d,
+ 				 terminate_big_destroy);
+@@ -916,6 +842,7 @@ static int hci_le_big_terminate(struct hci_dev *hdev, u8 big, u16 sync_handle)
+ static void bis_cleanup(struct hci_conn *conn)
+ {
+ 	struct hci_dev *hdev = conn->hdev;
++	struct hci_conn *bis;
+ 
+ 	bt_dev_dbg(hdev, "conn %p", conn);
+ 
+@@ -923,11 +850,25 @@ static void bis_cleanup(struct hci_conn *conn)
+ 		if (!test_and_clear_bit(HCI_CONN_PER_ADV, &conn->flags))
+ 			return;
+ 
+-		hci_le_terminate_big(hdev, conn->iso_qos.bcast.big,
+-				     conn->iso_qos.bcast.bis);
++		/* Check if ISO connection is a BIS and terminate advertising
++		 * set and BIG if there are no other connections using it.
++		 */
++		bis = hci_conn_hash_lookup_bis(hdev, BDADDR_ANY,
++					       conn->iso_qos.bcast.big,
++					       conn->iso_qos.bcast.bis);
++		if (bis)
++			return;
++
++		hci_le_terminate_big(hdev, conn);
+ 	} else {
++		bis = hci_conn_hash_lookup_big_any_dst(hdev,
++						       conn->iso_qos.bcast.big);
++
++		if (bis)
++			return;
++
+ 		hci_le_big_terminate(hdev, conn->iso_qos.bcast.big,
+-				     conn->sync_handle);
++				     conn);
+ 	}
+ }
+ 
+@@ -983,6 +924,25 @@ static void cis_cleanup(struct hci_conn *conn)
+ 	hci_le_remove_cig(hdev, conn->iso_qos.ucast.cig);
+ }
+ 
++static u16 hci_conn_hash_alloc_unset(struct hci_dev *hdev)
++{
++	struct hci_conn_hash *h = &hdev->conn_hash;
++	struct hci_conn  *c;
++	u16 handle = HCI_CONN_HANDLE_MAX + 1;
++
++	rcu_read_lock();
++
++	list_for_each_entry_rcu(c, &h->list, list) {
++		/* Find the first unused handle */
++		if (handle == 0xffff || c->handle != handle)
++			break;
++		handle++;
++	}
++	rcu_read_unlock();
++
++	return handle;
++}
++
+ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
+ 			      u8 role)
+ {
+@@ -996,7 +956,7 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
+ 
+ 	bacpy(&conn->dst, dst);
+ 	bacpy(&conn->src, &hdev->bdaddr);
+-	conn->handle = HCI_CONN_HANDLE_UNSET;
++	conn->handle = hci_conn_hash_alloc_unset(hdev);
+ 	conn->hdev  = hdev;
+ 	conn->type  = type;
+ 	conn->role  = role;
+@@ -1059,7 +1019,6 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
+ 	INIT_DELAYED_WORK(&conn->auto_accept_work, hci_conn_auto_accept);
+ 	INIT_DELAYED_WORK(&conn->idle_work, hci_conn_idle);
+ 	INIT_DELAYED_WORK(&conn->le_conn_timeout, le_conn_timeout);
+-	INIT_WORK(&conn->le_scan_cleanup, le_scan_cleanup);
+ 
+ 	atomic_set(&conn->refcnt, 0);
+ 
+@@ -1081,6 +1040,29 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
+ 	return conn;
+ }
+ 
++static void hci_conn_cleanup_child(struct hci_conn *conn, u8 reason)
++{
++	if (!reason)
++		reason = HCI_ERROR_REMOTE_USER_TERM;
++
++	/* Due to race, SCO/ISO conn might be not established yet at this point,
++	 * and nothing else will clean it up. In other cases it is done via HCI
++	 * events.
++	 */
++	switch (conn->type) {
++	case SCO_LINK:
++	case ESCO_LINK:
++		if (HCI_CONN_HANDLE_UNSET(conn->handle))
++			hci_conn_failed(conn, reason);
++		break;
++	case ISO_LINK:
++		if (conn->state != BT_CONNECTED &&
++		    !test_bit(HCI_CONN_CREATE_CIS, &conn->flags))
++			hci_conn_failed(conn, reason);
++		break;
++	}
++}
++
+ static void hci_conn_unlink(struct hci_conn *conn)
+ {
+ 	struct hci_dev *hdev = conn->hdev;
+@@ -1103,14 +1085,7 @@ static void hci_conn_unlink(struct hci_conn *conn)
+ 			if (!test_bit(HCI_UP, &hdev->flags))
+ 				continue;
+ 
+-			/* Due to race, SCO connection might be not established
+-			 * yet at this point. Delete it now, otherwise it is
+-			 * possible for it to be stuck and can't be deleted.
+-			 */
+-			if ((child->type == SCO_LINK ||
+-			     child->type == ESCO_LINK) &&
+-			    child->handle == HCI_CONN_HANDLE_UNSET)
+-				hci_conn_del(child);
++			hci_conn_cleanup_child(child, conn->abort_reason);
+ 		}
+ 
+ 		return;
+@@ -1495,10 +1470,10 @@ static int qos_set_bis(struct hci_dev *hdev, struct bt_iso_qos *qos)
+ 
+ /* This function requires the caller holds hdev->lock */
+ static struct hci_conn *hci_add_bis(struct hci_dev *hdev, bdaddr_t *dst,
+-				    struct bt_iso_qos *qos)
++				    struct bt_iso_qos *qos, __u8 base_len,
++				    __u8 *base)
+ {
+ 	struct hci_conn *conn;
+-	struct iso_list_data data;
+ 	int err;
+ 
+ 	/* Let's make sure that le is enabled.*/
+@@ -1516,24 +1491,27 @@ static struct hci_conn *hci_add_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ 	if (err)
+ 		return ERR_PTR(err);
+ 
+-	data.big = qos->bcast.big;
+-	data.bis = qos->bcast.bis;
+-	data.count = 0;
+-
+-	/* Check if there is already a matching BIG/BIS */
+-	hci_conn_hash_list_state(hdev, bis_list, ISO_LINK, BT_BOUND, &data);
+-	if (data.count)
++	/* Check if the LE Create BIG command has already been sent */
++	conn = hci_conn_hash_lookup_per_adv_bis(hdev, dst, qos->bcast.big,
++						qos->bcast.big);
++	if (conn)
+ 		return ERR_PTR(-EADDRINUSE);
+ 
+-	conn = hci_conn_hash_lookup_bis(hdev, dst, qos->bcast.big, qos->bcast.bis);
+-	if (conn)
++	/* Check BIS settings against other bound BISes, since all
++	 * BISes in a BIG must have the same value for all parameters
++	 */
++	conn = hci_conn_hash_lookup_bis(hdev, dst, qos->bcast.big,
++					qos->bcast.bis);
++
++	if (conn && (memcmp(qos, &conn->iso_qos, sizeof(*qos)) ||
++		     base_len != conn->le_per_adv_data_len ||
++		     memcmp(conn->le_per_adv_data, base, base_len)))
+ 		return ERR_PTR(-EADDRINUSE);
+ 
+ 	conn = hci_conn_add(hdev, ISO_LINK, dst, HCI_ROLE_MASTER);
+ 	if (!conn)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	set_bit(HCI_CONN_PER_ADV, &conn->flags);
+ 	conn->state = BT_CONNECT;
+ 
+ 	hci_conn_hold(conn);
+@@ -1707,52 +1685,25 @@ struct hci_conn *hci_connect_sco(struct hci_dev *hdev, int type, bdaddr_t *dst,
+ 	return sco;
+ }
+ 
+-static void cis_add(struct iso_list_data *d, struct bt_iso_qos *qos)
+-{
+-	struct hci_cis_params *cis = &d->pdu.cis[d->pdu.cp.num_cis];
+-
+-	cis->cis_id = qos->ucast.cis;
+-	cis->c_sdu  = cpu_to_le16(qos->ucast.out.sdu);
+-	cis->p_sdu  = cpu_to_le16(qos->ucast.in.sdu);
+-	cis->c_phy  = qos->ucast.out.phy ? qos->ucast.out.phy : qos->ucast.in.phy;
+-	cis->p_phy  = qos->ucast.in.phy ? qos->ucast.in.phy : qos->ucast.out.phy;
+-	cis->c_rtn  = qos->ucast.out.rtn;
+-	cis->p_rtn  = qos->ucast.in.rtn;
+-
+-	d->pdu.cp.num_cis++;
+-}
+-
+-static void cis_list(struct hci_conn *conn, void *data)
+-{
+-	struct iso_list_data *d = data;
+-
+-	/* Skip if broadcast/ANY address */
+-	if (!bacmp(&conn->dst, BDADDR_ANY))
+-		return;
+-
+-	if (d->cig != conn->iso_qos.ucast.cig || d->cis == BT_ISO_QOS_CIS_UNSET ||
+-	    d->cis != conn->iso_qos.ucast.cis)
+-		return;
+-
+-	d->count++;
+-
+-	if (d->pdu.cp.cig_id == BT_ISO_QOS_CIG_UNSET ||
+-	    d->count >= ARRAY_SIZE(d->pdu.cis))
+-		return;
+-
+-	cis_add(d, &conn->iso_qos);
+-}
+-
+ static int hci_le_create_big(struct hci_conn *conn, struct bt_iso_qos *qos)
+ {
+ 	struct hci_dev *hdev = conn->hdev;
+ 	struct hci_cp_le_create_big cp;
++	struct iso_list_data data;
+ 
+ 	memset(&cp, 0, sizeof(cp));
+ 
++	data.big = qos->bcast.big;
++	data.bis = qos->bcast.bis;
++	data.count = 0;
++
++	/* Create a BIS for each bound connection */
++	hci_conn_hash_list_state(hdev, bis_list, ISO_LINK,
++				 BT_BOUND, &data);
++
+ 	cp.handle = qos->bcast.big;
+ 	cp.adv_handle = qos->bcast.bis;
+-	cp.num_bis  = 0x01;
++	cp.num_bis  = data.count;
+ 	hci_cpu_to_le24(qos->bcast.out.interval, cp.bis.sdu_interval);
+ 	cp.bis.sdu = cpu_to_le16(qos->bcast.out.sdu);
+ 	cp.bis.latency =  cpu_to_le16(qos->bcast.out.latency);
+@@ -1766,25 +1717,62 @@ static int hci_le_create_big(struct hci_conn *conn, struct bt_iso_qos *qos)
+ 	return hci_send_cmd(hdev, HCI_OP_LE_CREATE_BIG, sizeof(cp), &cp);
+ }
+ 
+-static void set_cig_params_complete(struct hci_dev *hdev, void *data, int err)
++static int set_cig_params_sync(struct hci_dev *hdev, void *data)
+ {
+-	struct iso_cig_params *pdu = data;
++	u8 cig_id = PTR_ERR(data);
++	struct hci_conn *conn;
++	struct bt_iso_qos *qos;
++	struct iso_cig_params pdu;
++	u8 cis_id;
+ 
+-	bt_dev_dbg(hdev, "");
++	conn = hci_conn_hash_lookup_cig(hdev, cig_id);
++	if (!conn)
++		return 0;
+ 
+-	if (err)
+-		bt_dev_err(hdev, "Unable to set CIG parameters: %d", err);
++	memset(&pdu, 0, sizeof(pdu));
+ 
+-	kfree(pdu);
+-}
++	qos = &conn->iso_qos;
++	pdu.cp.cig_id = cig_id;
++	hci_cpu_to_le24(qos->ucast.out.interval, pdu.cp.c_interval);
++	hci_cpu_to_le24(qos->ucast.in.interval, pdu.cp.p_interval);
++	pdu.cp.sca = qos->ucast.sca;
++	pdu.cp.packing = qos->ucast.packing;
++	pdu.cp.framing = qos->ucast.framing;
++	pdu.cp.c_latency = cpu_to_le16(qos->ucast.out.latency);
++	pdu.cp.p_latency = cpu_to_le16(qos->ucast.in.latency);
++
++	/* Reprogram all CIS(s) with the same CIG, valid range are:
++	 * num_cis: 0x00 to 0x1F
++	 * cis_id: 0x00 to 0xEF
++	 */
++	for (cis_id = 0x00; cis_id < 0xf0 &&
++	     pdu.cp.num_cis < ARRAY_SIZE(pdu.cis); cis_id++) {
++		struct hci_cis_params *cis;
+ 
+-static int set_cig_params_sync(struct hci_dev *hdev, void *data)
+-{
+-	struct iso_cig_params *pdu = data;
+-	u32 plen;
++		conn = hci_conn_hash_lookup_cis(hdev, NULL, 0, cig_id, cis_id);
++		if (!conn)
++			continue;
++
++		qos = &conn->iso_qos;
++
++		cis = &pdu.cis[pdu.cp.num_cis++];
++		cis->cis_id = cis_id;
++		cis->c_sdu  = cpu_to_le16(conn->iso_qos.ucast.out.sdu);
++		cis->p_sdu  = cpu_to_le16(conn->iso_qos.ucast.in.sdu);
++		cis->c_phy  = qos->ucast.out.phy ? qos->ucast.out.phy :
++			      qos->ucast.in.phy;
++		cis->p_phy  = qos->ucast.in.phy ? qos->ucast.in.phy :
++			      qos->ucast.out.phy;
++		cis->c_rtn  = qos->ucast.out.rtn;
++		cis->p_rtn  = qos->ucast.in.rtn;
++	}
++
++	if (!pdu.cp.num_cis)
++		return 0;
+ 
+-	plen = sizeof(pdu->cp) + pdu->cp.num_cis * sizeof(pdu->cis[0]);
+-	return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_CIG_PARAMS, plen, pdu,
++	return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_CIG_PARAMS,
++				     sizeof(pdu.cp) +
++				     pdu.cp.num_cis * sizeof(pdu.cis[0]), &pdu,
+ 				     HCI_CMD_TIMEOUT);
+ }
+ 
+@@ -1792,7 +1780,6 @@ static bool hci_le_set_cig_params(struct hci_conn *conn, struct bt_iso_qos *qos)
+ {
+ 	struct hci_dev *hdev = conn->hdev;
+ 	struct iso_list_data data;
+-	struct iso_cig_params *pdu;
+ 
+ 	memset(&data, 0, sizeof(data));
+ 
+@@ -1819,58 +1806,31 @@ static bool hci_le_set_cig_params(struct hci_conn *conn, struct bt_iso_qos *qos)
+ 		qos->ucast.cig = data.cig;
+ 	}
+ 
+-	data.pdu.cp.cig_id = qos->ucast.cig;
+-	hci_cpu_to_le24(qos->ucast.out.interval, data.pdu.cp.c_interval);
+-	hci_cpu_to_le24(qos->ucast.in.interval, data.pdu.cp.p_interval);
+-	data.pdu.cp.sca = qos->ucast.sca;
+-	data.pdu.cp.packing = qos->ucast.packing;
+-	data.pdu.cp.framing = qos->ucast.framing;
+-	data.pdu.cp.c_latency = cpu_to_le16(qos->ucast.out.latency);
+-	data.pdu.cp.p_latency = cpu_to_le16(qos->ucast.in.latency);
+-
+ 	if (qos->ucast.cis != BT_ISO_QOS_CIS_UNSET) {
+-		data.count = 0;
+-		data.cig = qos->ucast.cig;
+-		data.cis = qos->ucast.cis;
+-
+-		hci_conn_hash_list_state(hdev, cis_list, ISO_LINK, BT_BOUND,
+-					 &data);
+-		if (data.count)
++		if (hci_conn_hash_lookup_cis(hdev, NULL, 0, qos->ucast.cig,
++					     qos->ucast.cis))
+ 			return false;
+-
+-		cis_add(&data, qos);
++		goto done;
+ 	}
+ 
+-	/* Reprogram all CIS(s) with the same CIG */
+-	for (data.cig = qos->ucast.cig, data.cis = 0x00; data.cis < 0x11;
++	/* Allocate first available CIS if not set */
++	for (data.cig = qos->ucast.cig, data.cis = 0x00; data.cis < 0xf0;
+ 	     data.cis++) {
+-		data.count = 0;
+-
+-		hci_conn_hash_list_state(hdev, cis_list, ISO_LINK, BT_BOUND,
+-					 &data);
+-		if (data.count)
+-			continue;
+-
+-		/* Allocate a CIS if not set */
+-		if (qos->ucast.cis == BT_ISO_QOS_CIS_UNSET) {
++		if (!hci_conn_hash_lookup_cis(hdev, NULL, 0, data.cig,
++					      data.cis)) {
+ 			/* Update CIS */
+ 			qos->ucast.cis = data.cis;
+-			cis_add(&data, qos);
++			break;
+ 		}
+ 	}
+ 
+-	if (qos->ucast.cis == BT_ISO_QOS_CIS_UNSET || !data.pdu.cp.num_cis)
++	if (qos->ucast.cis == BT_ISO_QOS_CIS_UNSET)
+ 		return false;
+ 
+-	pdu = kmemdup(&data.pdu, sizeof(*pdu), GFP_KERNEL);
+-	if (!pdu)
+-		return false;
+-
+-	if (hci_cmd_sync_queue(hdev, set_cig_params_sync, pdu,
+-			       set_cig_params_complete) < 0) {
+-		kfree(pdu);
++done:
++	if (hci_cmd_sync_queue(hdev, set_cig_params_sync,
++			       ERR_PTR(qos->ucast.cig), NULL) < 0)
+ 		return false;
+-	}
+ 
+ 	return true;
+ }
+@@ -1969,59 +1929,47 @@ bool hci_iso_setup_path(struct hci_conn *conn)
+ 	return true;
+ }
+ 
+-static int hci_create_cis_sync(struct hci_dev *hdev, void *data)
++int hci_conn_check_create_cis(struct hci_conn *conn)
+ {
+-	return hci_le_create_cis_sync(hdev, data);
+-}
++	if (conn->type != ISO_LINK || !bacmp(&conn->dst, BDADDR_ANY))
++		return -EINVAL;
+ 
+-int hci_le_create_cis(struct hci_conn *conn)
+-{
+-	struct hci_conn *cis;
+-	struct hci_link *link, *t;
+-	struct hci_dev *hdev = conn->hdev;
+-	int err;
++	if (!conn->parent || conn->parent->state != BT_CONNECTED ||
++	    conn->state != BT_CONNECT || HCI_CONN_HANDLE_UNSET(conn->handle))
++		return 1;
+ 
+-	bt_dev_dbg(hdev, "hcon %p", conn);
++	return 0;
++}
+ 
+-	switch (conn->type) {
+-	case LE_LINK:
+-		if (conn->state != BT_CONNECTED || list_empty(&conn->link_list))
+-			return -EINVAL;
++static int hci_create_cis_sync(struct hci_dev *hdev, void *data)
++{
++	return hci_le_create_cis_sync(hdev);
++}
+ 
+-		cis = NULL;
++int hci_le_create_cis_pending(struct hci_dev *hdev)
++{
++	struct hci_conn *conn;
++	bool pending = false;
+ 
+-		/* hci_conn_link uses list_add_tail_rcu so the list is in
+-		 * the same order as the connections are requested.
+-		 */
+-		list_for_each_entry_safe(link, t, &conn->link_list, list) {
+-			if (link->conn->state == BT_BOUND) {
+-				err = hci_le_create_cis(link->conn);
+-				if (err)
+-					return err;
++	rcu_read_lock();
+ 
+-				cis = link->conn;
+-			}
++	list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++		if (test_bit(HCI_CONN_CREATE_CIS, &conn->flags)) {
++			rcu_read_unlock();
++			return -EBUSY;
+ 		}
+ 
+-		return cis ? 0 : -EINVAL;
+-	case ISO_LINK:
+-		cis = conn;
+-		break;
+-	default:
+-		return -EINVAL;
++		if (!hci_conn_check_create_cis(conn))
++			pending = true;
+ 	}
+ 
+-	if (cis->state == BT_CONNECT)
++	rcu_read_unlock();
++
++	if (!pending)
+ 		return 0;
+ 
+ 	/* Queue Create CIS */
+-	err = hci_cmd_sync_queue(hdev, hci_create_cis_sync, cis, NULL);
+-	if (err)
+-		return err;
+-
+-	cis->state = BT_CONNECT;
+-
+-	return 0;
++	return hci_cmd_sync_queue(hdev, hci_create_cis_sync, NULL, NULL);
+ }
+ 
+ static void hci_iso_qos_setup(struct hci_dev *hdev, struct hci_conn *conn,
+@@ -2051,16 +1999,6 @@ static void hci_iso_qos_setup(struct hci_dev *hdev, struct hci_conn *conn,
+ 		qos->latency = conn->le_conn_latency;
+ }
+ 
+-static void hci_bind_bis(struct hci_conn *conn,
+-			 struct bt_iso_qos *qos)
+-{
+-	/* Update LINK PHYs according to QoS preference */
+-	conn->le_tx_phy = qos->bcast.out.phy;
+-	conn->le_tx_phy = qos->bcast.out.phy;
+-	conn->iso_qos = *qos;
+-	conn->state = BT_BOUND;
+-}
+-
+ static int create_big_sync(struct hci_dev *hdev, void *data)
+ {
+ 	struct hci_conn *conn = data;
+@@ -2183,27 +2121,80 @@ static void create_big_complete(struct hci_dev *hdev, void *data, int err)
+ 	}
+ }
+ 
+-struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
+-				 __u8 dst_type, struct bt_iso_qos *qos,
+-				 __u8 base_len, __u8 *base)
++struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst,
++			      struct bt_iso_qos *qos,
++			      __u8 base_len, __u8 *base)
+ {
+ 	struct hci_conn *conn;
+-	int err;
++	__u8 eir[HCI_MAX_PER_AD_LENGTH];
++
++	if (base_len && base)
++		base_len = eir_append_service_data(eir, 0,  0x1851,
++						   base, base_len);
+ 
+ 	/* We need hci_conn object using the BDADDR_ANY as dst */
+-	conn = hci_add_bis(hdev, dst, qos);
++	conn = hci_add_bis(hdev, dst, qos, base_len, eir);
+ 	if (IS_ERR(conn))
+ 		return conn;
+ 
+-	hci_bind_bis(conn, qos);
++	/* Update LINK PHYs according to QoS preference */
++	conn->le_tx_phy = qos->bcast.out.phy;
++	conn->le_tx_phy = qos->bcast.out.phy;
+ 
+ 	/* Add Basic Announcement into Periodic Adv Data if BASE is set */
+ 	if (base_len && base) {
+-		base_len = eir_append_service_data(conn->le_per_adv_data, 0,
+-						   0x1851, base, base_len);
++		memcpy(conn->le_per_adv_data,  eir, sizeof(eir));
+ 		conn->le_per_adv_data_len = base_len;
+ 	}
+ 
++	hci_iso_qos_setup(hdev, conn, &qos->bcast.out,
++			  conn->le_tx_phy ? conn->le_tx_phy :
++			  hdev->le_tx_def_phys);
++
++	conn->iso_qos = *qos;
++	conn->state = BT_BOUND;
++
++	return conn;
++}
++
++static void bis_mark_per_adv(struct hci_conn *conn, void *data)
++{
++	struct iso_list_data *d = data;
++
++	/* Skip if not broadcast/ANY address */
++	if (bacmp(&conn->dst, BDADDR_ANY))
++		return;
++
++	if (d->big != conn->iso_qos.bcast.big ||
++	    d->bis == BT_ISO_QOS_BIS_UNSET ||
++	    d->bis != conn->iso_qos.bcast.bis)
++		return;
++
++	set_bit(HCI_CONN_PER_ADV, &conn->flags);
++}
++
++struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
++				 __u8 dst_type, struct bt_iso_qos *qos,
++				 __u8 base_len, __u8 *base)
++{
++	struct hci_conn *conn;
++	int err;
++	struct iso_list_data data;
++
++	conn = hci_bind_bis(hdev, dst, qos, base_len, base);
++	if (IS_ERR(conn))
++		return conn;
++
++	data.big = qos->bcast.big;
++	data.bis = qos->bcast.bis;
++
++	/* Set HCI_CONN_PER_ADV for all bound connections, to mark that
++	 * the start periodic advertising and create BIG commands have
++	 * been queued
++	 */
++	hci_conn_hash_list_state(hdev, bis_mark_per_adv, ISO_LINK,
++				 BT_BOUND, &data);
++
+ 	/* Queue start periodic advertising and create BIG */
+ 	err = hci_cmd_sync_queue(hdev, create_big_sync, conn,
+ 				 create_big_complete);
+@@ -2212,10 +2203,6 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ 		return ERR_PTR(err);
+ 	}
+ 
+-	hci_iso_qos_setup(hdev, conn, &qos->bcast.out,
+-			  conn->le_tx_phy ? conn->le_tx_phy :
+-			  hdev->le_tx_def_phys);
+-
+ 	return conn;
+ }
+ 
+@@ -2257,11 +2244,9 @@ struct hci_conn *hci_connect_cis(struct hci_dev *hdev, bdaddr_t *dst,
+ 		return ERR_PTR(-ENOLINK);
+ 	}
+ 
+-	/* If LE is already connected and CIS handle is already set proceed to
+-	 * Create CIS immediately.
+-	 */
+-	if (le->state == BT_CONNECTED && cis->handle != HCI_CONN_HANDLE_UNSET)
+-		hci_le_create_cis(cis);
++	cis->state = BT_CONNECT;
++
++	hci_le_create_cis_pending(hdev);
+ 
+ 	return cis;
+ }
+@@ -2848,81 +2833,46 @@ u32 hci_conn_get_phy(struct hci_conn *conn)
+ 	return phys;
+ }
+ 
+-int hci_abort_conn(struct hci_conn *conn, u8 reason)
++static int abort_conn_sync(struct hci_dev *hdev, void *data)
+ {
+-	int r = 0;
++	struct hci_conn *conn;
++	u16 handle = PTR_ERR(data);
+ 
+-	if (test_and_set_bit(HCI_CONN_CANCEL, &conn->flags))
++	conn = hci_conn_hash_lookup_handle(hdev, handle);
++	if (!conn)
+ 		return 0;
+ 
+-	switch (conn->state) {
+-	case BT_CONNECTED:
+-	case BT_CONFIG:
+-		if (conn->type == AMP_LINK) {
+-			struct hci_cp_disconn_phy_link cp;
++	return hci_abort_conn_sync(hdev, conn, conn->abort_reason);
++}
+ 
+-			cp.phy_handle = HCI_PHY_HANDLE(conn->handle);
+-			cp.reason = reason;
+-			r = hci_send_cmd(conn->hdev, HCI_OP_DISCONN_PHY_LINK,
+-					 sizeof(cp), &cp);
+-		} else {
+-			struct hci_cp_disconnect dc;
++int hci_abort_conn(struct hci_conn *conn, u8 reason)
++{
++	struct hci_dev *hdev = conn->hdev;
+ 
+-			dc.handle = cpu_to_le16(conn->handle);
+-			dc.reason = reason;
+-			r = hci_send_cmd(conn->hdev, HCI_OP_DISCONNECT,
+-					 sizeof(dc), &dc);
+-		}
++	/* If abort_reason has already been set it means the connection is
++	 * already being aborted so don't attempt to overwrite it.
++	 */
++	if (conn->abort_reason)
++		return 0;
+ 
+-		conn->state = BT_DISCONN;
++	bt_dev_dbg(hdev, "handle 0x%2.2x reason 0x%2.2x", conn->handle, reason);
+ 
+-		break;
+-	case BT_CONNECT:
+-		if (conn->type == LE_LINK) {
+-			if (test_bit(HCI_CONN_SCANNING, &conn->flags))
+-				break;
+-			r = hci_send_cmd(conn->hdev,
+-					 HCI_OP_LE_CREATE_CONN_CANCEL, 0, NULL);
+-		} else if (conn->type == ACL_LINK) {
+-			if (conn->hdev->hci_ver < BLUETOOTH_VER_1_2)
+-				break;
+-			r = hci_send_cmd(conn->hdev,
+-					 HCI_OP_CREATE_CONN_CANCEL,
+-					 6, &conn->dst);
+-		}
+-		break;
+-	case BT_CONNECT2:
+-		if (conn->type == ACL_LINK) {
+-			struct hci_cp_reject_conn_req rej;
+-
+-			bacpy(&rej.bdaddr, &conn->dst);
+-			rej.reason = reason;
+-
+-			r = hci_send_cmd(conn->hdev,
+-					 HCI_OP_REJECT_CONN_REQ,
+-					 sizeof(rej), &rej);
+-		} else if (conn->type == SCO_LINK || conn->type == ESCO_LINK) {
+-			struct hci_cp_reject_sync_conn_req rej;
+-
+-			bacpy(&rej.bdaddr, &conn->dst);
+-
+-			/* SCO rejection has its own limited set of
+-			 * allowed error values (0x0D-0x0F) which isn't
+-			 * compatible with most values passed to this
+-			 * function. To be safe hard-code one of the
+-			 * values that's suitable for SCO.
+-			 */
+-			rej.reason = HCI_ERROR_REJ_LIMITED_RESOURCES;
++	conn->abort_reason = reason;
+ 
+-			r = hci_send_cmd(conn->hdev,
+-					 HCI_OP_REJECT_SYNC_CONN_REQ,
+-					 sizeof(rej), &rej);
++	/* If the connection is pending check the command opcode since that
++	 * might be blocking on hci_cmd_sync_work while waiting its respective
++	 * event so we need to hci_cmd_sync_cancel to cancel it.
++	 */
++	if (conn->state == BT_CONNECT && hdev->req_status == HCI_REQ_PEND) {
++		switch (hci_skb_event(hdev->sent_cmd)) {
++		case HCI_EV_LE_CONN_COMPLETE:
++		case HCI_EV_LE_ENHANCED_CONN_COMPLETE:
++		case HCI_EVT_LE_CIS_ESTABLISHED:
++			hci_cmd_sync_cancel(hdev, -ECANCELED);
++			break;
+ 		}
+-		break;
+-	default:
+-		conn->state = BT_CLOSED;
+-		break;
+ 	}
+ 
+-	return r;
++	return hci_cmd_sync_queue(hdev, abort_conn_sync, ERR_PTR(conn->handle),
++				  NULL);
+ }
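A side note on the two callbacks above: abort_conn_sync() and set_cig_params_sync() only receive a void * cookie, so the patch smuggles a small integer (a connection handle or CIG id) through it with ERR_PTR()/PTR_ERR() and re-resolves the object inside the callback. A minimal userspace sketch of that encode/decode idiom follows; int_to_ptr(), ptr_to_int() and queued_cb() are illustrative names, not kernel APIs.

	#include <stdint.h>
	#include <stdio.h>

	/* Carry a small non-negative integer through a void * callback
	 * argument instead of allocating a heap object for it, in the
	 * spirit of the ERR_PTR()/PTR_ERR() trick used above.
	 */
	static inline void *int_to_ptr(long v)
	{
		return (void *)(intptr_t)v;
	}

	static inline long ptr_to_int(void *p)
	{
		return (long)(intptr_t)p;
	}

	static int queued_cb(void *data)
	{
		long handle = ptr_to_int(data);

		/* The real callbacks look the object up again here; if it
		 * is already gone they simply return success.
		 */
		printf("callback ran for handle %ld\n", handle);
		return 0;
	}

	int main(void)
	{
		return queued_cb(int_to_ptr(0x002a));
	}

The upside of this approach is that nothing has to be freed if the queued work never runs, which is exactly why the kfree()-based completion handler could be dropped.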
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 1ec83985f1ab0..2c845c9a26be0 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -1074,9 +1074,9 @@ void hci_uuids_clear(struct hci_dev *hdev)
+ 
+ void hci_link_keys_clear(struct hci_dev *hdev)
+ {
+-	struct link_key *key;
++	struct link_key *key, *tmp;
+ 
+-	list_for_each_entry(key, &hdev->link_keys, list) {
++	list_for_each_entry_safe(key, tmp, &hdev->link_keys, list) {
+ 		list_del_rcu(&key->list);
+ 		kfree_rcu(key, rcu);
+ 	}
+@@ -1084,9 +1084,9 @@ void hci_link_keys_clear(struct hci_dev *hdev)
+ 
+ void hci_smp_ltks_clear(struct hci_dev *hdev)
+ {
+-	struct smp_ltk *k;
++	struct smp_ltk *k, *tmp;
+ 
+-	list_for_each_entry(k, &hdev->long_term_keys, list) {
++	list_for_each_entry_safe(k, tmp, &hdev->long_term_keys, list) {
+ 		list_del_rcu(&k->list);
+ 		kfree_rcu(k, rcu);
+ 	}
+@@ -1094,9 +1094,9 @@ void hci_smp_ltks_clear(struct hci_dev *hdev)
+ 
+ void hci_smp_irks_clear(struct hci_dev *hdev)
+ {
+-	struct smp_irk *k;
++	struct smp_irk *k, *tmp;
+ 
+-	list_for_each_entry(k, &hdev->identity_resolving_keys, list) {
++	list_for_each_entry_safe(k, tmp, &hdev->identity_resolving_keys, list) {
+ 		list_del_rcu(&k->list);
+ 		kfree_rcu(k, rcu);
+ 	}
+@@ -1104,9 +1104,9 @@ void hci_smp_irks_clear(struct hci_dev *hdev)
+ 
+ void hci_blocked_keys_clear(struct hci_dev *hdev)
+ {
+-	struct blocked_key *b;
++	struct blocked_key *b, *tmp;
+ 
+-	list_for_each_entry(b, &hdev->blocked_keys, list) {
++	list_for_each_entry_safe(b, tmp, &hdev->blocked_keys, list) {
+ 		list_del_rcu(&b->list);
+ 		kfree_rcu(b, rcu);
+ 	}
+@@ -1949,15 +1949,15 @@ int hci_add_adv_monitor(struct hci_dev *hdev, struct adv_monitor *monitor)
+ 
+ 	switch (hci_get_adv_monitor_offload_ext(hdev)) {
+ 	case HCI_ADV_MONITOR_EXT_NONE:
+-		bt_dev_dbg(hdev, "%s add monitor %d status %d", hdev->name,
++		bt_dev_dbg(hdev, "add monitor %d status %d",
+ 			   monitor->handle, status);
+ 		/* Message was not forwarded to controller - not an error */
+ 		break;
+ 
+ 	case HCI_ADV_MONITOR_EXT_MSFT:
+ 		status = msft_add_monitor_pattern(hdev, monitor);
+-		bt_dev_dbg(hdev, "%s add monitor %d msft status %d", hdev->name,
+-			   monitor->handle, status);
++		bt_dev_dbg(hdev, "add monitor %d msft status %d",
++			   handle, status);
+ 		break;
+ 	}
+ 
+@@ -1976,15 +1976,15 @@ static int hci_remove_adv_monitor(struct hci_dev *hdev,
+ 
+ 	switch (hci_get_adv_monitor_offload_ext(hdev)) {
+ 	case HCI_ADV_MONITOR_EXT_NONE: /* also goes here when powered off */
+-		bt_dev_dbg(hdev, "%s remove monitor %d status %d", hdev->name,
++		bt_dev_dbg(hdev, "remove monitor %d status %d",
+ 			   monitor->handle, status);
+ 		goto free_monitor;
+ 
+ 	case HCI_ADV_MONITOR_EXT_MSFT:
+ 		handle = monitor->handle;
+ 		status = msft_remove_monitor(hdev, monitor);
+-		bt_dev_dbg(hdev, "%s remove monitor %d msft status %d",
+-			   hdev->name, handle, status);
++		bt_dev_dbg(hdev, "remove monitor %d msft status %d",
++			   handle, status);
+ 		break;
+ 	}
+ 
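The hci_core.c hunks above all switch from list_for_each_entry() to list_for_each_entry_safe() because the loop body deletes and frees the current node; the safe variant caches the next pointer before the body runs. A plain userspace analogue with a minimal singly linked list (struct node and free_all() are made up for illustration):

	#include <stdio.h>
	#include <stdlib.h>

	struct node {
		int val;
		struct node *next;
	};

	static void free_all(struct node *head)
	{
		struct node *cur = head;

		while (cur) {
			struct node *next = cur->next;	/* cache before freeing */

			free(cur);
			cur = next;			/* never touch cur again */
		}
	}

	int main(void)
	{
		struct node *head = NULL;

		for (int i = 0; i < 3; i++) {
			struct node *n = malloc(sizeof(*n));

			if (!n)
				return 1;
			n->val = i;
			n->next = head;
			head = n;
		}
		free_all(head);
		return 0;
	}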
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 31ca320ce38d3..2358c1835d475 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3173,7 +3173,7 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, void *data,
+ 	 * As the connection handle is set here for the first time, it indicates
+ 	 * whether the connection is already set up.
+ 	 */
+-	if (conn->handle != HCI_CONN_HANDLE_UNSET) {
++	if (!HCI_CONN_HANDLE_UNSET(conn->handle)) {
+ 		bt_dev_err(hdev, "Ignoring HCI_Connection_Complete for existing connection");
+ 		goto unlock;
+ 	}
+@@ -3803,6 +3803,22 @@ static u8 hci_cc_le_read_buffer_size_v2(struct hci_dev *hdev, void *data,
+ 	return rp->status;
+ }
+ 
++static void hci_unbound_cis_failed(struct hci_dev *hdev, u8 cig, u8 status)
++{
++	struct hci_conn *conn, *tmp;
++
++	lockdep_assert_held(&hdev->lock);
++
++	list_for_each_entry_safe(conn, tmp, &hdev->conn_hash.list, list) {
++		if (conn->type != ISO_LINK || !bacmp(&conn->dst, BDADDR_ANY) ||
++		    conn->state == BT_OPEN || conn->iso_qos.ucast.cig != cig)
++			continue;
++
++		if (HCI_CONN_HANDLE_UNSET(conn->handle))
++			hci_conn_failed(conn, status);
++	}
++}
++
+ static u8 hci_cc_le_set_cig_params(struct hci_dev *hdev, void *data,
+ 				   struct sk_buff *skb)
+ {
+@@ -3810,6 +3826,7 @@ static u8 hci_cc_le_set_cig_params(struct hci_dev *hdev, void *data,
+ 	struct hci_cp_le_set_cig_params *cp;
+ 	struct hci_conn *conn;
+ 	u8 status = rp->status;
++	bool pending = false;
+ 	int i;
+ 
+ 	bt_dev_dbg(hdev, "status 0x%2.2x", rp->status);
+@@ -3823,12 +3840,15 @@ static u8 hci_cc_le_set_cig_params(struct hci_dev *hdev, void *data,
+ 
+ 	hci_dev_lock(hdev);
+ 
++	/* BLUETOOTH CORE SPECIFICATION Version 5.4 | Vol 4, Part E page 2554
++	 *
++	 * If the Status return parameter is non-zero, then the state of the CIG
++	 * and its CIS configurations shall not be changed by the command. If
++	 * the CIG did not already exist, it shall not be created.
++	 */
+ 	if (status) {
+-		while ((conn = hci_conn_hash_lookup_cig(hdev, rp->cig_id))) {
+-			conn->state = BT_CLOSED;
+-			hci_connect_cfm(conn, status);
+-			hci_conn_del(conn);
+-		}
++		/* Keep current configuration, fail only the unbound CIS */
++		hci_unbound_cis_failed(hdev, rp->cig_id, status);
+ 		goto unlock;
+ 	}
+ 
+@@ -3852,13 +3872,15 @@ static u8 hci_cc_le_set_cig_params(struct hci_dev *hdev, void *data,
+ 
+ 		bt_dev_dbg(hdev, "%p handle 0x%4.4x parent %p", conn,
+ 			   conn->handle, conn->parent);
+-
+-		/* Create CIS if LE is already connected */
+-		if (conn->parent && conn->parent->state == BT_CONNECTED)
+-			hci_le_create_cis(conn);
++
++		if (conn->state == BT_CONNECT)
++			pending = true;
+ 	}
+ 
+ unlock:
++	if (pending)
++		hci_le_create_cis_pending(hdev);
++
+ 	hci_dev_unlock(hdev);
+ 
+ 	return rp->status;
+@@ -4224,6 +4246,7 @@ static void hci_cmd_complete_evt(struct hci_dev *hdev, void *data,
+ static void hci_cs_le_create_cis(struct hci_dev *hdev, u8 status)
+ {
+ 	struct hci_cp_le_create_cis *cp;
++	bool pending = false;
+ 	int i;
+ 
+ 	bt_dev_dbg(hdev, "status 0x%2.2x", status);
+@@ -4246,12 +4269,18 @@ static void hci_cs_le_create_cis(struct hci_dev *hdev, u8 status)
+ 
+ 		conn = hci_conn_hash_lookup_handle(hdev, handle);
+ 		if (conn) {
++			if (test_and_clear_bit(HCI_CONN_CREATE_CIS,
++					       &conn->flags))
++				pending = true;
+ 			conn->state = BT_CLOSED;
+ 			hci_connect_cfm(conn, status);
+ 			hci_conn_del(conn);
+ 		}
+ 	}
+ 
++	if (pending)
++		hci_le_create_cis_pending(hdev);
++
+ 	hci_dev_unlock(hdev);
+ }
+ 
+@@ -4999,7 +5028,7 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev, void *data,
+ 	 * As the connection handle is set here for the first time, it indicates
+ 	 * whether the connection is already set up.
+ 	 */
+-	if (conn->handle != HCI_CONN_HANDLE_UNSET) {
++	if (!HCI_CONN_HANDLE_UNSET(conn->handle)) {
+ 		bt_dev_err(hdev, "Ignoring HCI_Sync_Conn_Complete event for existing connection");
+ 		goto unlock;
+ 	}
+@@ -5863,7 +5892,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ 	 * As the connection handle is set here for the first time, it indicates
+ 	 * whether the connection is already set up.
+ 	 */
+-	if (conn->handle != HCI_CONN_HANDLE_UNSET) {
++	if (!HCI_CONN_HANDLE_UNSET(conn->handle)) {
+ 		bt_dev_err(hdev, "Ignoring HCI_Connection_Complete for existing connection");
+ 		goto unlock;
+ 	}
+@@ -6790,6 +6819,7 @@ static void hci_le_cis_estabilished_evt(struct hci_dev *hdev, void *data,
+ 	struct hci_evt_le_cis_established *ev = data;
+ 	struct hci_conn *conn;
+ 	struct bt_iso_qos *qos;
++	bool pending = false;
+ 	u16 handle = __le16_to_cpu(ev->handle);
+ 
+ 	bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
+@@ -6813,6 +6843,8 @@ static void hci_le_cis_estabilished_evt(struct hci_dev *hdev, void *data,
+ 
+ 	qos = &conn->iso_qos;
+ 
++	pending = test_and_clear_bit(HCI_CONN_CREATE_CIS, &conn->flags);
++
+ 	/* Convert ISO Interval (1.25 ms slots) to SDU Interval (us) */
+ 	qos->ucast.in.interval = le16_to_cpu(ev->interval) * 1250;
+ 	qos->ucast.out.interval = qos->ucast.in.interval;
+@@ -6854,10 +6886,14 @@ static void hci_le_cis_estabilished_evt(struct hci_dev *hdev, void *data,
+ 		goto unlock;
+ 	}
+ 
++	conn->state = BT_CLOSED;
+ 	hci_connect_cfm(conn, ev->status);
+ 	hci_conn_del(conn);
+ 
+ unlock:
++	if (pending)
++		hci_le_create_cis_pending(hdev);
++
+ 	hci_dev_unlock(hdev);
+ }
+ 
+@@ -6936,6 +6972,7 @@ static void hci_le_create_big_complete_evt(struct hci_dev *hdev, void *data,
+ {
+ 	struct hci_evt_le_create_big_complete *ev = data;
+ 	struct hci_conn *conn;
++	__u8 bis_idx = 0;
+ 
+ 	BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
+ 
+@@ -6944,33 +6981,44 @@ static void hci_le_create_big_complete_evt(struct hci_dev *hdev, void *data,
+ 		return;
+ 
+ 	hci_dev_lock(hdev);
++	rcu_read_lock();
+ 
+-	conn = hci_conn_hash_lookup_big(hdev, ev->handle);
+-	if (!conn)
+-		goto unlock;
++	/* Connect all BISes that are bound to the BIG */
++	list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++		if (bacmp(&conn->dst, BDADDR_ANY) ||
++		    conn->type != ISO_LINK ||
++		    conn->iso_qos.bcast.big != ev->handle)
++			continue;
+ 
+-	if (conn->type != ISO_LINK) {
+-		bt_dev_err(hdev,
+-			   "Invalid connection link type handle 0x%2.2x",
+-			   ev->handle);
+-		goto unlock;
+-	}
++		conn->handle = __le16_to_cpu(ev->bis_handle[bis_idx++]);
+ 
+-	if (ev->num_bis)
+-		conn->handle = __le16_to_cpu(ev->bis_handle[0]);
++		if (!ev->status) {
++			conn->state = BT_CONNECTED;
++			set_bit(HCI_CONN_BIG_CREATED, &conn->flags);
++			rcu_read_unlock();
++			hci_debugfs_create_conn(conn);
++			hci_conn_add_sysfs(conn);
++			hci_iso_setup_path(conn);
++			rcu_read_lock();
++			continue;
++		}
+ 
+-	if (!ev->status) {
+-		conn->state = BT_CONNECTED;
+-		hci_debugfs_create_conn(conn);
+-		hci_conn_add_sysfs(conn);
+-		hci_iso_setup_path(conn);
+-		goto unlock;
++		hci_connect_cfm(conn, ev->status);
++		rcu_read_unlock();
++		hci_conn_del(conn);
++		rcu_read_lock();
+ 	}
+ 
+-	hci_connect_cfm(conn, ev->status);
+-	hci_conn_del(conn);
++	if (!ev->status && !bis_idx)
++		/* If no BISes have been connected for the BIG,
++		 * terminate. This is in case all bound connections
++		 * have been closed before the BIG creation
++		 * has completed.
++		 */
++		hci_le_terminate_big_sync(hdev, ev->handle,
++					  HCI_ERROR_LOCAL_HOST_TERM);
+ 
+-unlock:
++	rcu_read_unlock();
+ 	hci_dev_unlock(hdev);
+ }
+ 
+@@ -6987,9 +7035,6 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ 				flex_array_size(ev, bis, ev->num_bis)))
+ 		return;
+ 
+-	if (ev->status)
+-		return;
+-
+ 	hci_dev_lock(hdev);
+ 
+ 	for (i = 0; i < ev->num_bis; i++) {
+@@ -7013,9 +7058,25 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ 		bis->iso_qos.bcast.in.latency = le16_to_cpu(ev->interval) * 125 / 100;
+ 		bis->iso_qos.bcast.in.sdu = le16_to_cpu(ev->max_pdu);
+ 
+-		hci_iso_setup_path(bis);
++		if (!ev->status) {
++			set_bit(HCI_CONN_BIG_SYNC, &bis->flags);
++			hci_iso_setup_path(bis);
++		}
+ 	}
+ 
++	/* In case BIG sync failed, notify each failed connection to
++	 * the user after all hci connections have been added
++	 */
++	if (ev->status)
++		for (i = 0; i < ev->num_bis; i++) {
++			u16 handle = le16_to_cpu(ev->bis[i]);
++
++			bis = hci_conn_hash_lookup_handle(hdev, handle);
++
++			set_bit(HCI_CONN_BIG_SYNC_FAILED, &bis->flags);
++			hci_connect_cfm(bis, ev->status);
++		}
++
+ 	hci_dev_unlock(hdev);
+ }
+ 
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 4d1e32bb6a9c6..402b8522c2228 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -4684,7 +4684,10 @@ static const struct {
+ 			 "advertised, but not supported."),
+ 	HCI_QUIRK_BROKEN(SET_RPA_TIMEOUT,
+ 			 "HCI LE Set Random Private Address Timeout command is "
+-			 "advertised, but not supported.")
++			 "advertised, but not supported."),
++	HCI_QUIRK_BROKEN(LE_CODED,
++			 "HCI LE Coded PHY feature bit is set, "
++			 "but its usage is not supported.")
+ };
+ 
+ /* This function handles hdev setup stage:
+@@ -5269,22 +5272,27 @@ static int hci_disconnect_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ }
+ 
+ static int hci_le_connect_cancel_sync(struct hci_dev *hdev,
+-				      struct hci_conn *conn)
++				      struct hci_conn *conn, u8 reason)
+ {
++	/* Return reason if scanning since the connection shall probably be
++	 * cleaned up directly.
++	 */
+ 	if (test_bit(HCI_CONN_SCANNING, &conn->flags))
+-		return 0;
++		return reason;
+ 
+-	if (test_and_set_bit(HCI_CONN_CANCEL, &conn->flags))
++	if (conn->role == HCI_ROLE_SLAVE ||
++	    test_and_set_bit(HCI_CONN_CANCEL, &conn->flags))
+ 		return 0;
+ 
+ 	return __hci_cmd_sync_status(hdev, HCI_OP_LE_CREATE_CONN_CANCEL,
+ 				     0, NULL, HCI_CMD_TIMEOUT);
+ }
+ 
+-static int hci_connect_cancel_sync(struct hci_dev *hdev, struct hci_conn *conn)
++static int hci_connect_cancel_sync(struct hci_dev *hdev, struct hci_conn *conn,
++				   u8 reason)
+ {
+ 	if (conn->type == LE_LINK)
+-		return hci_le_connect_cancel_sync(hdev, conn);
++		return hci_le_connect_cancel_sync(hdev, conn, reason);
+ 
+ 	if (hdev->hci_ver < BLUETOOTH_VER_1_2)
+ 		return 0;
+@@ -5330,43 +5338,81 @@ static int hci_reject_conn_sync(struct hci_dev *hdev, struct hci_conn *conn,
+ 
+ int hci_abort_conn_sync(struct hci_dev *hdev, struct hci_conn *conn, u8 reason)
+ {
+-	int err;
++	int err = 0;
++	u16 handle = conn->handle;
++	struct hci_conn *c;
+ 
+ 	switch (conn->state) {
+ 	case BT_CONNECTED:
+ 	case BT_CONFIG:
+-		return hci_disconnect_sync(hdev, conn, reason);
++		err = hci_disconnect_sync(hdev, conn, reason);
++		break;
+ 	case BT_CONNECT:
+-		err = hci_connect_cancel_sync(hdev, conn);
+-		/* Cleanup hci_conn object if it cannot be cancelled as it
+-		 * likelly means the controller and host stack are out of sync.
+-		 */
+-		if (err) {
++		err = hci_connect_cancel_sync(hdev, conn, reason);
++		break;
++	case BT_CONNECT2:
++		err = hci_reject_conn_sync(hdev, conn, reason);
++		break;
++	case BT_OPEN:
++		/* Clean up BISes that failed to be established */
++		if (test_and_clear_bit(HCI_CONN_BIG_SYNC_FAILED, &conn->flags)) {
+ 			hci_dev_lock(hdev);
+-			hci_conn_failed(conn, err);
++			hci_conn_failed(conn, reason);
+ 			hci_dev_unlock(hdev);
+ 		}
+-		return err;
+-	case BT_CONNECT2:
+-		return hci_reject_conn_sync(hdev, conn, reason);
++		break;
+ 	default:
++		hci_dev_lock(hdev);
+ 		conn->state = BT_CLOSED;
+-		break;
++		hci_disconn_cfm(conn, reason);
++		hci_conn_del(conn);
++		hci_dev_unlock(hdev);
++		return 0;
+ 	}
+ 
+-	return 0;
++	hci_dev_lock(hdev);
++
++	/* Check if the connection hasn't been cleaned up while waiting
++	 * for commands to complete.
++	 */
++	c = hci_conn_hash_lookup_handle(hdev, handle);
++	if (!c || c != conn) {
++		err = 0;
++		goto unlock;
++	}
++
++	/* Clean up the hci_conn object if it cannot be cancelled, as that
++	 * likely means the controller and host stack are out of sync,
++	 * or in the case of LE it was still scanning, so it can be
++	 * cleaned up safely.
++	 */
++	hci_conn_failed(conn, reason);
++
++unlock:
++	hci_dev_unlock(hdev);
++	return err;
+ }
+ 
+ static int hci_disconnect_all_sync(struct hci_dev *hdev, u8 reason)
+ {
+-	struct hci_conn *conn, *tmp;
+-	int err;
++	struct list_head *head = &hdev->conn_hash.list;
++	struct hci_conn *conn;
+ 
+-	list_for_each_entry_safe(conn, tmp, &hdev->conn_hash.list, list) {
+-		err = hci_abort_conn_sync(hdev, conn, reason);
+-		if (err)
+-			return err;
++	rcu_read_lock();
++	while ((conn = list_first_or_null_rcu(head, struct hci_conn, list))) {
++		/* Make sure the connection is not freed while unlocking */
++		conn = hci_conn_get(conn);
++		rcu_read_unlock();
++		/* Disregard possible errors since hci_conn_del shall have been
++		 * called even if errors have occurred, since that would
++		 * then cause hci_conn_failed to be called, which calls
++		 * hci_conn_del internally.
++		 */
++		hci_abort_conn_sync(hdev, conn, reason);
++		hci_conn_put(conn);
++		rcu_read_lock();
+ 	}
++	rcu_read_unlock();
+ 
+ 	return 0;
+ }
+@@ -6253,63 +6299,99 @@ int hci_le_create_conn_sync(struct hci_dev *hdev, struct hci_conn *conn)
+ 
+ done:
+ 	if (err == -ETIMEDOUT)
+-		hci_le_connect_cancel_sync(hdev, conn);
++		hci_le_connect_cancel_sync(hdev, conn, 0x00);
+ 
+ 	/* Re-enable advertising after the connection attempt is finished. */
+ 	hci_resume_advertising_sync(hdev);
+ 	return err;
+ }
+ 
+-int hci_le_create_cis_sync(struct hci_dev *hdev, struct hci_conn *conn)
++int hci_le_create_cis_sync(struct hci_dev *hdev)
+ {
+ 	struct {
+ 		struct hci_cp_le_create_cis cp;
+ 		struct hci_cis cis[0x1f];
+ 	} cmd;
+-	u8 cig;
+-	struct hci_conn *hcon = conn;
++	struct hci_conn *conn;
++	u8 cig = BT_ISO_QOS_CIG_UNSET;
++
++	/* The spec allows only one pending LE Create CIS command at a time. If
++	 * the command is pending now, don't do anything. We check for pending
++	 * connections after each CIS Established event.
++	 *
++	 * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
++	 * page 2566:
++	 *
++	 * If the Host issues this command before all the
++	 * HCI_LE_CIS_Established events from the previous use of the
++	 * command have been generated, the Controller shall return the
++	 * error code Command Disallowed (0x0C).
++	 *
++	 * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
++	 * page 2567:
++	 *
++	 * When the Controller receives the HCI_LE_Create_CIS command, the
++	 * Controller sends the HCI_Command_Status event to the Host. An
++	 * HCI_LE_CIS_Established event will be generated for each CIS when it
++	 * is established or if it is disconnected or considered lost before
++	 * being established; until all the events are generated, the command
++	 * remains pending.
++	 */
+ 
+ 	memset(&cmd, 0, sizeof(cmd));
+-	cmd.cis[0].acl_handle = cpu_to_le16(conn->parent->handle);
+-	cmd.cis[0].cis_handle = cpu_to_le16(conn->handle);
+-	cmd.cp.num_cis++;
+-	cig = conn->iso_qos.ucast.cig;
+ 
+ 	hci_dev_lock(hdev);
+ 
+ 	rcu_read_lock();
+ 
++	/* Wait until previous Create CIS has completed */
+ 	list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
+-		struct hci_cis *cis = &cmd.cis[cmd.cp.num_cis];
++		if (test_bit(HCI_CONN_CREATE_CIS, &conn->flags))
++			goto done;
++	}
+ 
+-		if (conn == hcon || conn->type != ISO_LINK ||
+-		    conn->state == BT_CONNECTED ||
+-		    conn->iso_qos.ucast.cig != cig)
++	/* Find CIG with all CIS ready */
++	list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++		struct hci_conn *link;
++
++		if (hci_conn_check_create_cis(conn))
+ 			continue;
+ 
+-		/* Check if all CIS(s) belonging to a CIG are ready */
+-		if (!conn->parent || conn->parent->state != BT_CONNECTED ||
+-		    conn->state != BT_CONNECT) {
+-			cmd.cp.num_cis = 0;
+-			break;
++		cig = conn->iso_qos.ucast.cig;
++
++		list_for_each_entry_rcu(link, &hdev->conn_hash.list, list) {
++			if (hci_conn_check_create_cis(link) > 0 &&
++			    link->iso_qos.ucast.cig == cig &&
++			    link->state != BT_CONNECTED) {
++				cig = BT_ISO_QOS_CIG_UNSET;
++				break;
++			}
+ 		}
+ 
+-		/* Group all CIS with state BT_CONNECT since the spec don't
+-		 * allow to send them individually:
+-		 *
+-		 * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
+-		 * page 2566:
+-		 *
+-		 * If the Host issues this command before all the
+-		 * HCI_LE_CIS_Established events from the previous use of the
+-		 * command have been generated, the Controller shall return the
+-		 * error code Command Disallowed (0x0C).
+-		 */
++		if (cig != BT_ISO_QOS_CIG_UNSET)
++			break;
++	}
++
++	if (cig == BT_ISO_QOS_CIG_UNSET)
++		goto done;
++
++	list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++		struct hci_cis *cis = &cmd.cis[cmd.cp.num_cis];
++
++		if (hci_conn_check_create_cis(conn) ||
++		    conn->iso_qos.ucast.cig != cig)
++			continue;
++
++		set_bit(HCI_CONN_CREATE_CIS, &conn->flags);
+ 		cis->acl_handle = cpu_to_le16(conn->parent->handle);
+ 		cis->cis_handle = cpu_to_le16(conn->handle);
+ 		cmd.cp.num_cis++;
++
++		if (cmd.cp.num_cis >= ARRAY_SIZE(cmd.cis))
++			break;
+ 	}
+ 
++done:
+ 	rcu_read_unlock();
+ 
+ 	hci_dev_unlock(hdev);
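hci_le_create_cis_sync() above now gathers every ready CIS of one CIG into a single HCI command, stopping as soon as the fixed-size array is full. A short sketch of that cap-at-ARRAY_SIZE batching; the ready() predicate, struct cis_entry and the handle values are illustrative only:

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

	struct cis_entry {
		unsigned short acl_handle;
		unsigned short cis_handle;
	};

	/* Illustrative stand-in for "this CIS is ready to be created". */
	static bool ready(int i)
	{
		return i % 2 == 0;
	}

	int main(void)
	{
		struct cis_entry batch[0x1f];
		size_t num = 0;

		for (int i = 0; i < 100 && num < ARRAY_SIZE(batch); i++) {
			if (!ready(i))
				continue;

			batch[num].acl_handle = 0x0100 + i;
			batch[num].cis_handle = 0x0200 + i;
			num++;
		}

		/* One command carries the whole batch, like HCI_OP_LE_CREATE_CIS. */
		printf("would send %zu CIS entries in one command\n", num);
		return 0;
	}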
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 505d622472688..9b6a7eb2015f0 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -48,6 +48,11 @@ static void iso_sock_kill(struct sock *sk);
+ #define EIR_SERVICE_DATA_LENGTH 4
+ #define BASE_MAX_LENGTH (HCI_MAX_PER_AD_LENGTH - EIR_SERVICE_DATA_LENGTH)
+ 
++/* iso_pinfo flags values */
++enum {
++	BT_SK_BIG_SYNC,
++};
++
+ struct iso_pinfo {
+ 	struct bt_sock		bt;
+ 	bdaddr_t		src;
+@@ -58,7 +63,7 @@ struct iso_pinfo {
+ 	__u8			bc_num_bis;
+ 	__u8			bc_bis[ISO_MAX_NUM_BIS];
+ 	__u16			sync_handle;
+-	__u32			flags;
++	unsigned long		flags;
+ 	struct bt_iso_qos	qos;
+ 	bool			qos_user_set;
+ 	__u8			base_len;
+@@ -287,13 +292,24 @@ static int iso_connect_bis(struct sock *sk)
+ 		goto unlock;
+ 	}
+ 
+-	hcon = hci_connect_bis(hdev, &iso_pi(sk)->dst,
+-			       le_addr_type(iso_pi(sk)->dst_type),
+-			       &iso_pi(sk)->qos, iso_pi(sk)->base_len,
+-			       iso_pi(sk)->base);
+-	if (IS_ERR(hcon)) {
+-		err = PTR_ERR(hcon);
+-		goto unlock;
++	/* Just bind if DEFER_SETUP has been set */
++	if (test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) {
++		hcon = hci_bind_bis(hdev, &iso_pi(sk)->dst,
++				    &iso_pi(sk)->qos, iso_pi(sk)->base_len,
++				    iso_pi(sk)->base);
++		if (IS_ERR(hcon)) {
++			err = PTR_ERR(hcon);
++			goto unlock;
++		}
++	} else {
++		hcon = hci_connect_bis(hdev, &iso_pi(sk)->dst,
++				       le_addr_type(iso_pi(sk)->dst_type),
++				       &iso_pi(sk)->qos, iso_pi(sk)->base_len,
++				       iso_pi(sk)->base);
++		if (IS_ERR(hcon)) {
++			err = PTR_ERR(hcon);
++			goto unlock;
++		}
+ 	}
+ 
+ 	conn = iso_conn_add(hcon);
+@@ -317,6 +333,9 @@ static int iso_connect_bis(struct sock *sk)
+ 	if (hcon->state == BT_CONNECTED) {
+ 		iso_sock_clear_timer(sk);
+ 		sk->sk_state = BT_CONNECTED;
++	} else if (test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) {
++		iso_sock_clear_timer(sk);
++		sk->sk_state = BT_CONNECT;
+ 	} else {
+ 		sk->sk_state = BT_CONNECT;
+ 		iso_sock_set_timer(sk, sk->sk_sndtimeo);
+@@ -1202,6 +1221,12 @@ static bool check_io_qos(struct bt_iso_io_qos *qos)
+ 
+ static bool check_ucast_qos(struct bt_iso_qos *qos)
+ {
++	if (qos->ucast.cig > 0xef && qos->ucast.cig != BT_ISO_QOS_CIG_UNSET)
++		return false;
++
++	if (qos->ucast.cis > 0xef && qos->ucast.cis != BT_ISO_QOS_CIS_UNSET)
++		return false;
++
+ 	if (qos->ucast.sca > 0x07)
+ 		return false;
+ 
+@@ -1466,7 +1491,7 @@ static int iso_sock_release(struct socket *sock)
+ 
+ 	iso_sock_close(sk);
+ 
+-	if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime &&
++	if (sock_flag(sk, SOCK_LINGER) && READ_ONCE(sk->sk_lingertime) &&
+ 	    !(current->flags & PF_EXITING)) {
+ 		lock_sock(sk);
+ 		err = bt_sock_wait_state(sk, BT_CLOSED, sk->sk_lingertime);
+@@ -1563,6 +1588,12 @@ static void iso_conn_ready(struct iso_conn *conn)
+ 		hci_conn_hold(hcon);
+ 		iso_chan_add(conn, sk, parent);
+ 
++		if (ev && ((struct hci_evt_le_big_sync_estabilished *)ev)->status) {
++			/* Trigger error signal on child socket */
++			sk->sk_err = ECONNREFUSED;
++			sk->sk_error_report(sk);
++		}
++
+ 		if (test_bit(BT_SK_DEFER_SETUP, &bt_sk(parent)->flags))
+ 			sk->sk_state = BT_CONNECT2;
+ 		else
+@@ -1631,15 +1662,17 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+ 			if (ev2->num_bis < iso_pi(sk)->bc_num_bis)
+ 				iso_pi(sk)->bc_num_bis = ev2->num_bis;
+ 
+-			err = hci_le_big_create_sync(hdev,
+-						     &iso_pi(sk)->qos,
+-						     iso_pi(sk)->sync_handle,
+-						     iso_pi(sk)->bc_num_bis,
+-						     iso_pi(sk)->bc_bis);
+-			if (err) {
+-				bt_dev_err(hdev, "hci_le_big_create_sync: %d",
+-					   err);
+-				sk = NULL;
++			if (!test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) {
++				err = hci_le_big_create_sync(hdev,
++							     &iso_pi(sk)->qos,
++							     iso_pi(sk)->sync_handle,
++							     iso_pi(sk)->bc_num_bis,
++							     iso_pi(sk)->bc_bis);
++				if (err) {
++					bt_dev_err(hdev, "hci_le_big_create_sync: %d",
++						   err);
++					sk = NULL;
++				}
+ 			}
+ 		}
+ 	} else {
+@@ -1676,13 +1709,18 @@ static void iso_connect_cfm(struct hci_conn *hcon, __u8 status)
+ 		}
+ 
+ 		/* Create CIS if pending */
+-		hci_le_create_cis(hcon);
++		hci_le_create_cis_pending(hcon->hdev);
+ 		return;
+ 	}
+ 
+ 	BT_DBG("hcon %p bdaddr %pMR status %d", hcon, &hcon->dst, status);
+ 
+-	if (!status) {
++	/* Similar to the success case, if HCI_CONN_BIG_SYNC_FAILED is set,
++	 * queue the failed bis connection into the accept queue of the
++	 * listening socket and wake up userspace, to inform the user about
++	 * the BIG sync failed event.
++	 */
++	if (!status || test_bit(HCI_CONN_BIG_SYNC_FAILED, &hcon->flags)) {
+ 		struct iso_conn *conn;
+ 
+ 		conn = iso_conn_add(hcon);
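iso_connect_ind() above now guards hci_le_big_create_sync() with test_and_set_bit(BT_SK_BIG_SYNC, ...) so repeated periodic advertising reports cannot queue the BIG sync twice for the same socket. A hedged userspace analogue of that run-once guard using a C11 atomic flag (the names big_sync_requested and request_big_sync_once() are made up):

	#include <stdatomic.h>
	#include <stdio.h>

	static atomic_flag big_sync_requested = ATOMIC_FLAG_INIT;

	/* Returns 1 only for the first caller; later callers find the flag
	 * already set, mirroring test_and_set_bit(BT_SK_BIG_SYNC, ...).
	 */
	static int request_big_sync_once(void)
	{
		if (atomic_flag_test_and_set(&big_sync_requested))
			return 0;	/* already requested, do nothing */

		printf("issuing BIG create sync\n");
		return 1;
	}

	int main(void)
	{
		request_big_sync_once();	/* fires */
		request_big_sync_once();	/* silently skipped */
		return 0;
	}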
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index d4498037fadc6..6240b20f020a8 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -3580,18 +3580,6 @@ unlock:
+ 	return err;
+ }
+ 
+-static int abort_conn_sync(struct hci_dev *hdev, void *data)
+-{
+-	struct hci_conn *conn;
+-	u16 handle = PTR_ERR(data);
+-
+-	conn = hci_conn_hash_lookup_handle(hdev, handle);
+-	if (!conn)
+-		return 0;
+-
+-	return hci_abort_conn_sync(hdev, conn, HCI_ERROR_REMOTE_USER_TERM);
+-}
+-
+ static int cancel_pair_device(struct sock *sk, struct hci_dev *hdev, void *data,
+ 			      u16 len)
+ {
+@@ -3642,8 +3630,7 @@ static int cancel_pair_device(struct sock *sk, struct hci_dev *hdev, void *data,
+ 					      le_addr_type(addr->type));
+ 
+ 	if (conn->conn_reason == CONN_REASON_PAIR_DEVICE)
+-		hci_cmd_sync_queue(hdev, abort_conn_sync, ERR_PTR(conn->handle),
+-				   NULL);
++		hci_abort_conn(conn, HCI_ERROR_REMOTE_USER_TERM);
+ 
+ unlock:
+ 	hci_dev_unlock(hdev);
+diff --git a/net/bluetooth/msft.c b/net/bluetooth/msft.c
+index bf5cee48916c7..b80a2162a5c33 100644
+--- a/net/bluetooth/msft.c
++++ b/net/bluetooth/msft.c
+@@ -91,6 +91,33 @@ struct msft_ev_le_monitor_device {
+ struct msft_monitor_advertisement_handle_data {
+ 	__u8  msft_handle;
+ 	__u16 mgmt_handle;
++	__s8 rssi_high;
++	__s8 rssi_low;
++	__u8 rssi_low_interval;
++	__u8 rssi_sampling_period;
++	__u8 cond_type;
++	struct list_head list;
++};
++
++enum monitor_addr_filter_state {
++	AF_STATE_IDLE,
++	AF_STATE_ADDING,
++	AF_STATE_ADDED,
++	AF_STATE_REMOVING,
++};
++
++#define MSFT_MONITOR_ADVERTISEMENT_TYPE_ADDR	0x04
++struct msft_monitor_addr_filter_data {
++	__u8     msft_handle;
++	__u8     pattern_handle; /* pattern monitor this filter pertains to */
++	__u16    mgmt_handle;
++	int      state;
++	__s8     rssi_high;
++	__s8     rssi_low;
++	__u8     rssi_low_interval;
++	__u8     rssi_sampling_period;
++	__u8     addr_type;
++	bdaddr_t bdaddr;
+ 	struct list_head list;
+ };
+ 
+@@ -99,9 +126,12 @@ struct msft_data {
+ 	__u8  evt_prefix_len;
+ 	__u8  *evt_prefix;
+ 	struct list_head handle_map;
++	struct list_head address_filters;
+ 	__u8 resuming;
+ 	__u8 suspending;
+ 	__u8 filter_enabled;
++	/* To synchronize add/remove address filter and monitor device event.*/
++	struct mutex filter_lock;
+ };
+ 
+ bool msft_monitor_supported(struct hci_dev *hdev)
+@@ -180,6 +210,24 @@ static struct msft_monitor_advertisement_handle_data *msft_find_handle_data
+ 	return NULL;
+ }
+ 
++/* This function requires the caller holds msft->filter_lock */
++static struct msft_monitor_addr_filter_data *msft_find_address_data
++			(struct hci_dev *hdev, u8 addr_type, bdaddr_t *addr,
++			 u8 pattern_handle)
++{
++	struct msft_monitor_addr_filter_data *entry;
++	struct msft_data *msft = hdev->msft_data;
++
++	list_for_each_entry(entry, &msft->address_filters, list) {
++		if (entry->pattern_handle == pattern_handle &&
++		    addr_type == entry->addr_type &&
++		    !bacmp(addr, &entry->bdaddr))
++			return entry;
++	}
++
++	return NULL;
++}
++
+ /* This function requires the caller holds hdev->lock */
+ static int msft_monitor_device_del(struct hci_dev *hdev, __u16 mgmt_handle,
+ 				   bdaddr_t *bdaddr, __u8 addr_type,
+@@ -240,6 +288,7 @@ static int msft_le_monitor_advertisement_cb(struct hci_dev *hdev, u16 opcode,
+ 
+ 	handle_data->mgmt_handle = monitor->handle;
+ 	handle_data->msft_handle = rp->handle;
++	handle_data->cond_type   = MSFT_MONITOR_ADVERTISEMENT_TYPE_PATTERN;
+ 	INIT_LIST_HEAD(&handle_data->list);
+ 	list_add(&handle_data->list, &msft->handle_map);
+ 
+@@ -254,6 +303,70 @@ unlock:
+ 	return status;
+ }
+ 
++/* This function requires the caller holds hci_req_sync_lock */
++static void msft_remove_addr_filters_sync(struct hci_dev *hdev, u8 handle)
++{
++	struct msft_monitor_addr_filter_data *address_filter, *n;
++	struct msft_cp_le_cancel_monitor_advertisement cp;
++	struct msft_data *msft = hdev->msft_data;
++	struct list_head head;
++	struct sk_buff *skb;
++
++	INIT_LIST_HEAD(&head);
++
++	/* Cancel all corresponding address monitors */
++	mutex_lock(&msft->filter_lock);
++
++	list_for_each_entry_safe(address_filter, n, &msft->address_filters,
++				 list) {
++		if (address_filter->pattern_handle != handle)
++			continue;
++
++		list_del(&address_filter->list);
++
++		/* Keep the address filter and let
++		 * msft_add_address_filter_sync() remove and free the address
++		 * filter.
++		 */
++		if (address_filter->state == AF_STATE_ADDING) {
++			address_filter->state = AF_STATE_REMOVING;
++			continue;
++		}
++
++		/* Keep the address filter and let
++		 * msft_cancel_address_filter_sync() remove and free the address
++		 * filter
++		 */
++		if (address_filter->state == AF_STATE_REMOVING)
++			continue;
++
++		list_add_tail(&address_filter->list, &head);
++	}
++
++	mutex_unlock(&msft->filter_lock);
++
++	list_for_each_entry_safe(address_filter, n, &head, list) {
++		list_del(&address_filter->list);
++
++		cp.sub_opcode = MSFT_OP_LE_CANCEL_MONITOR_ADVERTISEMENT;
++		cp.handle = address_filter->msft_handle;
++
++		skb = __hci_cmd_sync(hdev, hdev->msft_opcode, sizeof(cp), &cp,
++				     HCI_CMD_TIMEOUT);
++		if (IS_ERR_OR_NULL(skb)) {
++			kfree(address_filter);
++			continue;
++		}
++
++		kfree_skb(skb);
++
++		bt_dev_dbg(hdev, "MSFT: Canceled device %pMR address filter",
++			   &address_filter->bdaddr);
++
++		kfree(address_filter);
++	}
++}
++
+ static int msft_le_cancel_monitor_advertisement_cb(struct hci_dev *hdev,
+ 						   u16 opcode,
+ 						   struct adv_monitor *monitor,
+@@ -263,6 +376,7 @@ static int msft_le_cancel_monitor_advertisement_cb(struct hci_dev *hdev,
+ 	struct msft_monitor_advertisement_handle_data *handle_data;
+ 	struct msft_data *msft = hdev->msft_data;
+ 	int status = 0;
++	u8 msft_handle;
+ 
+ 	rp = (struct msft_rp_le_cancel_monitor_advertisement *)skb->data;
+ 	if (skb->len < sizeof(*rp)) {
+@@ -293,11 +407,17 @@ static int msft_le_cancel_monitor_advertisement_cb(struct hci_dev *hdev,
+ 						NULL, 0, false);
+ 		}
+ 
++		msft_handle = handle_data->msft_handle;
++
+ 		list_del(&handle_data->list);
+ 		kfree(handle_data);
+-	}
+ 
+-	hci_dev_unlock(hdev);
++		hci_dev_unlock(hdev);
++
++		msft_remove_addr_filters_sync(hdev, msft_handle);
++	} else {
++		hci_dev_unlock(hdev);
++	}
+ 
+ done:
+ 	return status;
+@@ -394,12 +514,14 @@ static int msft_add_monitor_sync(struct hci_dev *hdev,
+ {
+ 	struct msft_cp_le_monitor_advertisement *cp;
+ 	struct msft_le_monitor_advertisement_pattern_data *pattern_data;
++	struct msft_monitor_advertisement_handle_data *handle_data;
+ 	struct msft_le_monitor_advertisement_pattern *pattern;
+ 	struct adv_pattern *entry;
+ 	size_t total_size = sizeof(*cp) + sizeof(*pattern_data);
+ 	ptrdiff_t offset = 0;
+ 	u8 pattern_count = 0;
+ 	struct sk_buff *skb;
++	int err;
+ 
+ 	if (!msft_monitor_pattern_valid(monitor))
+ 		return -EINVAL;
+@@ -436,16 +558,31 @@ static int msft_add_monitor_sync(struct hci_dev *hdev,
+ 
+ 	skb = __hci_cmd_sync(hdev, hdev->msft_opcode, total_size, cp,
+ 			     HCI_CMD_TIMEOUT);
+-	kfree(cp);
+ 
+ 	if (IS_ERR_OR_NULL(skb)) {
+-		if (!skb)
+-			return -EIO;
+-		return PTR_ERR(skb);
++		err = PTR_ERR(skb);
++		goto out_free;
+ 	}
+ 
+-	return msft_le_monitor_advertisement_cb(hdev, hdev->msft_opcode,
+-						monitor, skb);
++	err = msft_le_monitor_advertisement_cb(hdev, hdev->msft_opcode,
++					       monitor, skb);
++	if (err)
++		goto out_free;
++
++	handle_data = msft_find_handle_data(hdev, monitor->handle, true);
++	if (!handle_data) {
++		err = -ENODATA;
++		goto out_free;
++	}
++
++	handle_data->rssi_high	= cp->rssi_high;
++	handle_data->rssi_low	= cp->rssi_low;
++	handle_data->rssi_low_interval	  = cp->rssi_low_interval;
++	handle_data->rssi_sampling_period = cp->rssi_sampling_period;
++
++out_free:
++	kfree(cp);
++	return err;
+ }
+ 
+ /* This function requires the caller holds hci_req_sync_lock */
+@@ -538,6 +675,7 @@ void msft_do_close(struct hci_dev *hdev)
+ {
+ 	struct msft_data *msft = hdev->msft_data;
+ 	struct msft_monitor_advertisement_handle_data *handle_data, *tmp;
++	struct msft_monitor_addr_filter_data *address_filter, *n;
+ 	struct adv_monitor *monitor;
+ 
+ 	if (!msft)
+@@ -559,6 +697,14 @@ void msft_do_close(struct hci_dev *hdev)
+ 		kfree(handle_data);
+ 	}
+ 
++	mutex_lock(&msft->filter_lock);
++	list_for_each_entry_safe(address_filter, n, &msft->address_filters,
++				 list) {
++		list_del(&address_filter->list);
++		kfree(address_filter);
++	}
++	mutex_unlock(&msft->filter_lock);
++
+ 	hci_dev_lock(hdev);
+ 
+ 	/* Clear any devices that are being monitored and notify device lost */
+@@ -568,6 +714,49 @@ void msft_do_close(struct hci_dev *hdev)
+ 	hci_dev_unlock(hdev);
+ }
+ 
++static int msft_cancel_address_filter_sync(struct hci_dev *hdev, void *data)
++{
++	struct msft_monitor_addr_filter_data *address_filter = data;
++	struct msft_cp_le_cancel_monitor_advertisement cp;
++	struct msft_data *msft = hdev->msft_data;
++	struct sk_buff *skb;
++	int err = 0;
++
++	if (!msft) {
++		bt_dev_err(hdev, "MSFT: msft data is freed");
++		return -EINVAL;
++	}
++
++	/* The address filter has been removed by hci dev close */
++	if (!test_bit(HCI_UP, &hdev->flags))
++		return 0;
++
++	mutex_lock(&msft->filter_lock);
++	list_del(&address_filter->list);
++	mutex_unlock(&msft->filter_lock);
++
++	cp.sub_opcode = MSFT_OP_LE_CANCEL_MONITOR_ADVERTISEMENT;
++	cp.handle = address_filter->msft_handle;
++
++	skb = __hci_cmd_sync(hdev, hdev->msft_opcode, sizeof(cp), &cp,
++			     HCI_CMD_TIMEOUT);
++	if (IS_ERR_OR_NULL(skb)) {
++		bt_dev_err(hdev, "MSFT: Failed to cancel address (%pMR) filter",
++			   &address_filter->bdaddr);
++		err = EIO;
++		goto done;
++	}
++	kfree_skb(skb);
++
++	bt_dev_dbg(hdev, "MSFT: Canceled device %pMR address filter",
++		   &address_filter->bdaddr);
++
++done:
++	kfree(address_filter);
++
++	return err;
++}
++
+ void msft_register(struct hci_dev *hdev)
+ {
+ 	struct msft_data *msft = NULL;
+@@ -581,7 +770,9 @@ void msft_register(struct hci_dev *hdev)
+ 	}
+ 
+ 	INIT_LIST_HEAD(&msft->handle_map);
++	INIT_LIST_HEAD(&msft->address_filters);
+ 	hdev->msft_data = msft;
++	mutex_init(&msft->filter_lock);
+ }
+ 
+ void msft_unregister(struct hci_dev *hdev)
+@@ -596,6 +787,7 @@ void msft_unregister(struct hci_dev *hdev)
+ 	hdev->msft_data = NULL;
+ 
+ 	kfree(msft->evt_prefix);
++	mutex_destroy(&msft->filter_lock);
+ 	kfree(msft);
+ }
+ 
+@@ -645,11 +837,149 @@ static void *msft_skb_pull(struct hci_dev *hdev, struct sk_buff *skb,
+ 	return data;
+ }
+ 
++static int msft_add_address_filter_sync(struct hci_dev *hdev, void *data)
++{
++	struct msft_monitor_addr_filter_data *address_filter = data;
++	struct msft_rp_le_monitor_advertisement *rp;
++	struct msft_cp_le_monitor_advertisement *cp;
++	struct msft_data *msft = hdev->msft_data;
++	struct sk_buff *skb = NULL;
++	bool remove = false;
++	size_t size;
++
++	if (!msft) {
++		bt_dev_err(hdev, "MSFT: msft data is freed");
++		return -EINVAL;
++	}
++
++	/* The address filter has been removed by hci dev close */
++	if (!test_bit(HCI_UP, &hdev->flags))
++		return -ENODEV;
++
++	/* We are safe to use the address filter from now on.
++	 * msft_monitor_device_evt() won't delete this filter because it has
++	 * not been added yet.
++	 * All other functions that require hci_req_sync_lock won't touch
++	 * this filter before this function completes because it is protected
++	 * by hci_req_sync_lock.
++	 */
++
++	if (address_filter->state == AF_STATE_REMOVING) {
++		mutex_lock(&msft->filter_lock);
++		list_del(&address_filter->list);
++		mutex_unlock(&msft->filter_lock);
++		kfree(address_filter);
++		return 0;
++	}
++
++	size = sizeof(*cp) +
++	       sizeof(address_filter->addr_type) +
++	       sizeof(address_filter->bdaddr);
++	cp = kzalloc(size, GFP_KERNEL);
++	if (!cp) {
++		bt_dev_err(hdev, "MSFT: Alloc cmd param err");
++		remove = true;
++		goto done;
++	}
++	cp->sub_opcode           = MSFT_OP_LE_MONITOR_ADVERTISEMENT;
++	cp->rssi_high		 = address_filter->rssi_high;
++	cp->rssi_low		 = address_filter->rssi_low;
++	cp->rssi_low_interval    = address_filter->rssi_low_interval;
++	cp->rssi_sampling_period = address_filter->rssi_sampling_period;
++	cp->cond_type            = MSFT_MONITOR_ADVERTISEMENT_TYPE_ADDR;
++	cp->data[0]              = address_filter->addr_type;
++	memcpy(&cp->data[1], &address_filter->bdaddr,
++	       sizeof(address_filter->bdaddr));
++
++	skb = __hci_cmd_sync(hdev, hdev->msft_opcode, size, cp,
++			     HCI_CMD_TIMEOUT);
++	if (IS_ERR_OR_NULL(skb)) {
++		bt_dev_err(hdev, "Failed to enable address %pMR filter",
++			   &address_filter->bdaddr);
++		skb = NULL;
++		remove = true;
++		goto done;
++	}
++
++	rp = skb_pull_data(skb, sizeof(*rp));
++	if (!rp || rp->sub_opcode != MSFT_OP_LE_MONITOR_ADVERTISEMENT ||
++	    rp->status)
++		remove = true;
++
++done:
++	mutex_lock(&msft->filter_lock);
++
++	if (remove) {
++		bt_dev_warn(hdev, "MSFT: Remove address (%pMR) filter",
++			    &address_filter->bdaddr);
++		list_del(&address_filter->list);
++		kfree(address_filter);
++	} else {
++		address_filter->state = AF_STATE_ADDED;
++		address_filter->msft_handle = rp->handle;
++		bt_dev_dbg(hdev, "MSFT: Address %pMR filter enabled",
++			   &address_filter->bdaddr);
++	}
++	mutex_unlock(&msft->filter_lock);
++
++	kfree_skb(skb);
++
++	return 0;
++}
++
++/* This function requires the caller holds msft->filter_lock */
++static struct msft_monitor_addr_filter_data *msft_add_address_filter
++		(struct hci_dev *hdev, u8 addr_type, bdaddr_t *bdaddr,
++		 struct msft_monitor_advertisement_handle_data *handle_data)
++{
++	struct msft_monitor_addr_filter_data *address_filter = NULL;
++	struct msft_data *msft = hdev->msft_data;
++	int err;
++
++	address_filter = kzalloc(sizeof(*address_filter), GFP_KERNEL);
++	if (!address_filter)
++		return NULL;
++
++	address_filter->state             = AF_STATE_ADDING;
++	address_filter->msft_handle       = 0xff;
++	address_filter->pattern_handle    = handle_data->msft_handle;
++	address_filter->mgmt_handle       = handle_data->mgmt_handle;
++	address_filter->rssi_high         = handle_data->rssi_high;
++	address_filter->rssi_low          = handle_data->rssi_low;
++	address_filter->rssi_low_interval = handle_data->rssi_low_interval;
++	address_filter->rssi_sampling_period = handle_data->rssi_sampling_period;
++	address_filter->addr_type            = addr_type;
++	bacpy(&address_filter->bdaddr, bdaddr);
++
++	/* With the above AF_STATE_ADDING, duplicate address filters can be
++	 * avoided when monitor device events (found/lost) are received
++	 * frequently for the same device.
++	 */
++	list_add_tail(&address_filter->list, &msft->address_filters);
++
++	err = hci_cmd_sync_queue(hdev, msft_add_address_filter_sync,
++				 address_filter, NULL);
++	if (err < 0) {
++		bt_dev_err(hdev, "MSFT: Add address %pMR filter err", bdaddr);
++		list_del(&address_filter->list);
++		kfree(address_filter);
++		return NULL;
++	}
++
++	bt_dev_dbg(hdev, "MSFT: Add device %pMR address filter",
++		   &address_filter->bdaddr);
++
++	return address_filter;
++}
++
+ /* This function requires the caller holds hdev->lock */
+ static void msft_monitor_device_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ {
++	struct msft_monitor_addr_filter_data *n, *address_filter = NULL;
+ 	struct msft_ev_le_monitor_device *ev;
+ 	struct msft_monitor_advertisement_handle_data *handle_data;
++	struct msft_data *msft = hdev->msft_data;
++	u16 mgmt_handle = 0xffff;
+ 	u8 addr_type;
+ 
+ 	ev = msft_skb_pull(hdev, skb, MSFT_EV_LE_MONITOR_DEVICE, sizeof(*ev));
+@@ -662,9 +992,53 @@ static void msft_monitor_device_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		   ev->monitor_state, &ev->bdaddr);
+ 
+ 	handle_data = msft_find_handle_data(hdev, ev->monitor_handle, false);
+-	if (!handle_data)
++
++	if (!test_bit(HCI_QUIRK_USE_MSFT_EXT_ADDRESS_FILTER, &hdev->quirks)) {
++		if (!handle_data)
++			return;
++		mgmt_handle = handle_data->mgmt_handle;
++		goto report_state;
++	}
++
++	if (handle_data) {
++		/* Don't report any device found/lost event from pattern
++		 * monitors. Pattern monitor always has its address filters for
++		 * tracking devices.
++		 */
++
++		address_filter = msft_find_address_data(hdev, ev->addr_type,
++							&ev->bdaddr,
++							handle_data->msft_handle);
++		if (address_filter)
++			return;
++
++		if (ev->monitor_state && handle_data->cond_type ==
++				MSFT_MONITOR_ADVERTISEMENT_TYPE_PATTERN)
++			msft_add_address_filter(hdev, ev->addr_type,
++						&ev->bdaddr, handle_data);
++
+ 		return;
++	}
+ 
++	/* This device event is not from pattern monitor.
++	 * Report it if there is a corresponding address_filter for it.
++	 */
++	list_for_each_entry(n, &msft->address_filters, list) {
++		if (n->state == AF_STATE_ADDED &&
++		    n->msft_handle == ev->monitor_handle) {
++			mgmt_handle = n->mgmt_handle;
++			address_filter = n;
++			break;
++		}
++	}
++
++	if (!address_filter) {
++		bt_dev_warn(hdev, "MSFT: Unexpected device event %pMR, %u, %u",
++			    &ev->bdaddr, ev->monitor_handle, ev->monitor_state);
++		return;
++	}
++
++report_state:
+ 	switch (ev->addr_type) {
+ 	case ADDR_LE_DEV_PUBLIC:
+ 		addr_type = BDADDR_LE_PUBLIC;
+@@ -681,12 +1055,18 @@ static void msft_monitor_device_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ 		return;
+ 	}
+ 
+-	if (ev->monitor_state)
+-		msft_device_found(hdev, &ev->bdaddr, addr_type,
+-				  handle_data->mgmt_handle);
+-	else
+-		msft_device_lost(hdev, &ev->bdaddr, addr_type,
+-				 handle_data->mgmt_handle);
++	if (ev->monitor_state) {
++		msft_device_found(hdev, &ev->bdaddr, addr_type, mgmt_handle);
++	} else {
++		if (address_filter && address_filter->state == AF_STATE_ADDED) {
++			address_filter->state = AF_STATE_REMOVING;
++			hci_cmd_sync_queue(hdev,
++					   msft_cancel_address_filter_sync,
++					   address_filter,
++					   NULL);
++		}
++		msft_device_lost(hdev, &ev->bdaddr, addr_type, mgmt_handle);
++	}
+ }
+ 
+ void msft_vendor_evt(struct hci_dev *hdev, void *data, struct sk_buff *skb)
+@@ -724,7 +1104,9 @@ void msft_vendor_evt(struct hci_dev *hdev, void *data, struct sk_buff *skb)
+ 
+ 	switch (*evt) {
+ 	case MSFT_EV_LE_MONITOR_DEVICE:
++		mutex_lock(&msft->filter_lock);
+ 		msft_monitor_device_evt(hdev, skb);
++		mutex_unlock(&msft->filter_lock);
+ 		break;
+ 
+ 	default:
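The MSFT address filters above carry a small per-entry state machine (AF_STATE_ADDING/ADDED/REMOVING) so that a remove racing with a still-pending add is handed off instead of double-freed. A simplified sketch of that hand-off decision; the enum mirrors the one in the hunk, but handle_remove() and its return strings are illustrative, not the driver's API:

	#include <stdio.h>

	enum af_state {
		AF_STATE_IDLE,
		AF_STATE_ADDING,
		AF_STATE_ADDED,
		AF_STATE_REMOVING,
	};

	/* What a remove request does depends on where the add is:
	 * - still ADDING: mark REMOVING and let the add path free the entry,
	 * - ADDED: the remove path issues the cancel and frees the entry,
	 * - already REMOVING: nothing left to do.
	 */
	static const char *handle_remove(enum af_state *state)
	{
		switch (*state) {
		case AF_STATE_ADDING:
			*state = AF_STATE_REMOVING;
			return "deferred to the add path";
		case AF_STATE_ADDED:
			*state = AF_STATE_REMOVING;
			return "cancel command issued here";
		case AF_STATE_REMOVING:
			return "already being removed";
		default:
			return "nothing to remove";
		}
	}

	int main(void)
	{
		enum af_state s = AF_STATE_ADDING;

		printf("%s\n", handle_remove(&s));
		return 0;
	}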
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 7762604ddfc05..99b149261949a 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -1267,7 +1267,7 @@ static int sco_sock_release(struct socket *sock)
+ 
+ 	sco_sock_close(sk);
+ 
+-	if (sock_flag(sk, SOCK_LINGER) && sk->sk_lingertime &&
++	if (sock_flag(sk, SOCK_LINGER) && READ_ONCE(sk->sk_lingertime) &&
+ 	    !(current->flags & PF_EXITING)) {
+ 		lock_sock(sk);
+ 		err = bt_sock_wait_state(sk, BT_CLOSED, sk->sk_lingertime);
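Both the ISO and SCO release paths above switch to READ_ONCE(sk->sk_lingertime) so the lockless read of the field is done exactly once and cannot be torn. A rough userspace analogue using a C11 relaxed atomic load; this approximates the guarantee READ_ONCE gives, it is not a definition of the kernel macro, and lingertime/release_path() are made-up names:

	#include <stdatomic.h>
	#include <stdio.h>

	/* Field written by one thread, read locklessly by another. */
	static _Atomic unsigned long lingertime = 5;

	static void release_path(void)
	{
		/* Single, non-torn read, comparable in spirit to
		 * READ_ONCE(sk->sk_lingertime) in the hunks above.
		 */
		unsigned long t = atomic_load_explicit(&lingertime,
						       memory_order_relaxed);

		if (t)
			printf("would linger for %lu\n", t);
	}

	int main(void)
	{
		release_path();
		return 0;
	}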
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 28a59596987a9..f1a5775400658 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -7352,6 +7352,8 @@ BPF_CALL_3(bpf_sk_assign, struct sk_buff *, skb, struct sock *, sk, u64, flags)
+ 		return -ENETUNREACH;
+ 	if (unlikely(sk_fullsock(sk) && sk->sk_reuseport))
+ 		return -ESOCKTNOSUPPORT;
++	if (sk_unhashed(sk))
++		return -EOPNOTSUPP;
+ 	if (sk_is_refcounted(sk) &&
+ 	    unlikely(!refcount_inc_not_zero(&sk->sk_refcnt)))
+ 		return -ENOENT;
+diff --git a/net/core/lwt_bpf.c b/net/core/lwt_bpf.c
+index 8b6b5e72b2179..4a0797f0a154b 100644
+--- a/net/core/lwt_bpf.c
++++ b/net/core/lwt_bpf.c
+@@ -60,9 +60,8 @@ static int run_lwt_bpf(struct sk_buff *skb, struct bpf_lwt_prog *lwt,
+ 			ret = BPF_OK;
+ 		} else {
+ 			skb_reset_mac_header(skb);
+-			ret = skb_do_redirect(skb);
+-			if (ret == 0)
+-				ret = BPF_REDIRECT;
++			skb_do_redirect(skb);
++			ret = BPF_REDIRECT;
+ 		}
+ 		break;
+ 
+@@ -255,7 +254,7 @@ static int bpf_lwt_xmit_reroute(struct sk_buff *skb)
+ 
+ 	err = dst_output(dev_net(skb_dst(skb)->dev), skb->sk, skb);
+ 	if (unlikely(err))
+-		return err;
++		return net_xmit_errno(err);
+ 
+ 	/* ip[6]_finish_output2 understand LWTUNNEL_XMIT_DONE */
+ 	return LWTUNNEL_XMIT_DONE;
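bpf_lwt_xmit_reroute() above now runs the dst_output() return value through net_xmit_errno() instead of leaking a raw positive NET_XMIT_* status to the caller. A hedged stand-in for that translation; the exact mapping lives in the kernel headers, and this sketch only assumes the usual convention that congestion notification is not treated as a failure:

	#include <errno.h>
	#include <stdio.h>

	#define NET_XMIT_SUCCESS 0x00
	#define NET_XMIT_DROP    0x01
	#define NET_XMIT_CN      0x02	/* congestion notification, not a failure */

	/* Approximate userspace stand-in for net_xmit_errno(): anything other
	 * than success or congestion notification becomes -ENOBUFS.
	 */
	static int xmit_status_to_errno(int status)
	{
		if (status == NET_XMIT_SUCCESS || status == NET_XMIT_CN)
			return 0;
		return -ENOBUFS;
	}

	int main(void)
	{
		printf("%d %d %d\n",
		       xmit_status_to_errno(NET_XMIT_SUCCESS),
		       xmit_status_to_errno(NET_XMIT_CN),
		       xmit_status_to_errno(NET_XMIT_DROP));
		return 0;
	}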
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index a298992060e6e..acdf94bb54c80 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -550,7 +550,7 @@ static void *kmalloc_reserve(unsigned int *size, gfp_t flags, int node,
+ 			     bool *pfmemalloc)
+ {
+ 	bool ret_pfmemalloc = false;
+-	unsigned int obj_size;
++	size_t obj_size;
+ 	void *obj;
+ 
+ 	obj_size = SKB_HEAD_ALIGN(*size);
+@@ -567,7 +567,13 @@ static void *kmalloc_reserve(unsigned int *size, gfp_t flags, int node,
+ 		obj = kmem_cache_alloc_node(skb_small_head_cache, flags, node);
+ 		goto out;
+ 	}
+-	*size = obj_size = kmalloc_size_roundup(obj_size);
++
++	obj_size = kmalloc_size_roundup(obj_size);
++	/* The following cast might truncate high-order bits of obj_size, this
++	 * is harmless because kmalloc(obj_size >= 2^32) will fail anyway.
++	 */
++	*size = (unsigned int)obj_size;
++
+ 	/*
+ 	 * Try a regular allocation, when that fails and we're not entitled
+ 	 * to the reserves, fail.
+@@ -4354,21 +4360,20 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
+ 	struct sk_buff *segs = NULL;
+ 	struct sk_buff *tail = NULL;
+ 	struct sk_buff *list_skb = skb_shinfo(head_skb)->frag_list;
+-	skb_frag_t *frag = skb_shinfo(head_skb)->frags;
+ 	unsigned int mss = skb_shinfo(head_skb)->gso_size;
+ 	unsigned int doffset = head_skb->data - skb_mac_header(head_skb);
+-	struct sk_buff *frag_skb = head_skb;
+ 	unsigned int offset = doffset;
+ 	unsigned int tnl_hlen = skb_tnl_header_len(head_skb);
+ 	unsigned int partial_segs = 0;
+ 	unsigned int headroom;
+ 	unsigned int len = head_skb->len;
++	struct sk_buff *frag_skb;
++	skb_frag_t *frag;
+ 	__be16 proto;
+ 	bool csum, sg;
+-	int nfrags = skb_shinfo(head_skb)->nr_frags;
+ 	int err = -ENOMEM;
+ 	int i = 0;
+-	int pos;
++	int nfrags, pos;
+ 
+ 	if ((skb_shinfo(head_skb)->gso_type & SKB_GSO_DODGY) &&
+ 	    mss != GSO_BY_FRAGS && mss != skb_headlen(head_skb)) {
+@@ -4445,6 +4450,13 @@ normal:
+ 	headroom = skb_headroom(head_skb);
+ 	pos = skb_headlen(head_skb);
+ 
++	if (skb_orphan_frags(head_skb, GFP_ATOMIC))
++		return ERR_PTR(-ENOMEM);
++
++	nfrags = skb_shinfo(head_skb)->nr_frags;
++	frag = skb_shinfo(head_skb)->frags;
++	frag_skb = head_skb;
++
+ 	do {
+ 		struct sk_buff *nskb;
+ 		skb_frag_t *nskb_frag;
+@@ -4465,6 +4477,10 @@ normal:
+ 		    (skb_headlen(list_skb) == len || sg)) {
+ 			BUG_ON(skb_headlen(list_skb) > len);
+ 
++			nskb = skb_clone(list_skb, GFP_ATOMIC);
++			if (unlikely(!nskb))
++				goto err;
++
+ 			i = 0;
+ 			nfrags = skb_shinfo(list_skb)->nr_frags;
+ 			frag = skb_shinfo(list_skb)->frags;
+@@ -4483,12 +4499,8 @@ normal:
+ 				frag++;
+ 			}
+ 
+-			nskb = skb_clone(list_skb, GFP_ATOMIC);
+ 			list_skb = list_skb->next;
+ 
+-			if (unlikely(!nskb))
+-				goto err;
+-
+ 			if (unlikely(pskb_trim(nskb, len))) {
+ 				kfree_skb(nskb);
+ 				goto err;
+@@ -4564,12 +4576,16 @@ normal:
+ 		skb_shinfo(nskb)->flags |= skb_shinfo(head_skb)->flags &
+ 					   SKBFL_SHARED_FRAG;
+ 
+-		if (skb_orphan_frags(frag_skb, GFP_ATOMIC) ||
+-		    skb_zerocopy_clone(nskb, frag_skb, GFP_ATOMIC))
++		if (skb_zerocopy_clone(nskb, frag_skb, GFP_ATOMIC))
+ 			goto err;
+ 
+ 		while (pos < offset + len) {
+ 			if (i >= nfrags) {
++				if (skb_orphan_frags(list_skb, GFP_ATOMIC) ||
++				    skb_zerocopy_clone(nskb, list_skb,
++						       GFP_ATOMIC))
++					goto err;
++
+ 				i = 0;
+ 				nfrags = skb_shinfo(list_skb)->nr_frags;
+ 				frag = skb_shinfo(list_skb)->frags;
+@@ -4583,10 +4599,6 @@ normal:
+ 					i--;
+ 					frag--;
+ 				}
+-				if (skb_orphan_frags(frag_skb, GFP_ATOMIC) ||
+-				    skb_zerocopy_clone(nskb, frag_skb,
+-						       GFP_ATOMIC))
+-					goto err;
+ 
+ 				list_skb = list_skb->next;
+ 			}
+diff --git a/net/core/sock.c b/net/core/sock.c
+index c9cffb7acbeae..1c5c01b116e6f 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -797,7 +797,7 @@ EXPORT_SYMBOL(sock_set_reuseport);
+ void sock_no_linger(struct sock *sk)
+ {
+ 	lock_sock(sk);
+-	sk->sk_lingertime = 0;
++	WRITE_ONCE(sk->sk_lingertime, 0);
+ 	sock_set_flag(sk, SOCK_LINGER);
+ 	release_sock(sk);
+ }
+@@ -1230,15 +1230,15 @@ set_sndbuf:
+ 			ret = -EFAULT;
+ 			break;
+ 		}
+-		if (!ling.l_onoff)
++		if (!ling.l_onoff) {
+ 			sock_reset_flag(sk, SOCK_LINGER);
+-		else {
+-#if (BITS_PER_LONG == 32)
+-			if ((unsigned int)ling.l_linger >= MAX_SCHEDULE_TIMEOUT/HZ)
+-				sk->sk_lingertime = MAX_SCHEDULE_TIMEOUT;
++		} else {
++			unsigned long t_sec = ling.l_linger;
++
++			if (t_sec >= MAX_SCHEDULE_TIMEOUT / HZ)
++				WRITE_ONCE(sk->sk_lingertime, MAX_SCHEDULE_TIMEOUT);
+ 			else
+-#endif
+-				sk->sk_lingertime = (unsigned int)ling.l_linger * HZ;
++				WRITE_ONCE(sk->sk_lingertime, t_sec * HZ);
+ 			sock_set_flag(sk, SOCK_LINGER);
+ 		}
+ 		break;
+@@ -1691,7 +1691,7 @@ int sk_getsockopt(struct sock *sk, int level, int optname,
+ 	case SO_LINGER:
+ 		lv		= sizeof(v.ling);
+ 		v.ling.l_onoff	= sock_flag(sk, SOCK_LINGER);
+-		v.ling.l_linger	= sk->sk_lingertime / HZ;
++		v.ling.l_linger	= READ_ONCE(sk->sk_lingertime) / HZ;
+ 		break;
+ 
+ 	case SO_BSDCOMPAT:
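
Note (not part of the patch): the net/core/sock.c hunks above wrap every load and store of sk->sk_lingertime in READ_ONCE()/WRITE_ONCE() so lockless readers see a consistent value (matching conversions appear in the sco.c, em_meta.c and af_smc.c hunks of this patch), and they drop the BITS_PER_LONG == 32 special case by doing the seconds-to-jiffies conversion in an unsigned long. The self-contained userspace sketch below only illustrates the arithmetic side, with made-up stand-ins for HZ and MAX_SCHEDULE_TIMEOUT; it is an illustration, not kernel code.

#include <limits.h>
#include <stdio.h>

/* Stand-ins for the kernel constants; the values are illustrative only. */
#define HZ                   1000UL
#define MAX_SCHEDULE_TIMEOUT LONG_MAX

/*
 * Convert a user-supplied linger time in seconds into a clamped jiffies
 * value.  Widening to unsigned long before comparing means the clamp is
 * applied before the multiplication could overflow, on 32-bit and 64-bit
 * alike, which is why a BITS_PER_LONG == 32 special case is unnecessary.
 */
static unsigned long linger_to_jiffies(int l_linger)
{
    unsigned long t_sec = l_linger;

    if (t_sec >= MAX_SCHEDULE_TIMEOUT / HZ)
        return MAX_SCHEDULE_TIMEOUT;
    return t_sec * HZ;
}

int main(void)
{
    printf("10s     -> %lu jiffies\n", linger_to_jiffies(10));
    printf("INT_MAX -> %lu jiffies\n", linger_to_jiffies(INT_MAX));
    printf("-1      -> %lu jiffies (clamped)\n", linger_to_jiffies(-1));
    return 0;
}
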
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index a545ad71201c8..a5361fb7a415b 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -255,12 +255,17 @@ static int dccp_v4_err(struct sk_buff *skb, u32 info)
+ 	int err;
+ 	struct net *net = dev_net(skb->dev);
+ 
+-	/* Only need dccph_dport & dccph_sport which are the first
+-	 * 4 bytes in dccp header.
++	/* For the first __dccp_basic_hdr_len() check, we only need dh->dccph_x,
++	 * which is in byte 7 of the dccp header.
+ 	 * Our caller (icmp_socket_deliver()) already pulled 8 bytes for us.
++	 *
++	 * Later on, we want to access the sequence number fields, which are
++	 * beyond 8 bytes, so we have to pskb_may_pull() ourselves.
+ 	 */
+-	BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_sport) > 8);
+-	BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_dport) > 8);
++	dh = (struct dccp_hdr *)(skb->data + offset);
++	if (!pskb_may_pull(skb, offset + __dccp_basic_hdr_len(dh)))
++		return -EINVAL;
++	iph = (struct iphdr *)skb->data;
+ 	dh = (struct dccp_hdr *)(skb->data + offset);
+ 
+ 	sk = __inet_lookup_established(net, &dccp_hashinfo,
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index 686090bc59451..33f6ccf6ba77b 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -74,7 +74,7 @@ static inline __u64 dccp_v6_init_sequence(struct sk_buff *skb)
+ static int dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 			u8 type, u8 code, int offset, __be32 info)
+ {
+-	const struct ipv6hdr *hdr = (const struct ipv6hdr *)skb->data;
++	const struct ipv6hdr *hdr;
+ 	const struct dccp_hdr *dh;
+ 	struct dccp_sock *dp;
+ 	struct ipv6_pinfo *np;
+@@ -83,12 +83,17 @@ static int dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 	__u64 seq;
+ 	struct net *net = dev_net(skb->dev);
+ 
+-	/* Only need dccph_dport & dccph_sport which are the first
+-	 * 4 bytes in dccp header.
++	/* For the first __dccp_basic_hdr_len() check, we only need dh->dccph_x,
++	 * which is in byte 7 of the dccp header.
+ 	 * Our caller (icmpv6_notify()) already pulled 8 bytes for us.
++	 *
++	 * Later on, we want to access the sequence number fields, which are
++	 * beyond 8 bytes, so we have to pskb_may_pull() ourselves.
+ 	 */
+-	BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_sport) > 8);
+-	BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_dport) > 8);
++	dh = (struct dccp_hdr *)(skb->data + offset);
++	if (!pskb_may_pull(skb, offset + __dccp_basic_hdr_len(dh)))
++		return -EINVAL;
++	hdr = (const struct ipv6hdr *)skb->data;
+ 	dh = (struct dccp_hdr *)(skb->data + offset);
+ 
+ 	sk = __inet6_lookup_established(net, &dccp_hashinfo,
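
Note (not part of the patch): both dccp_v4_err() and dccp_v6_err() above replace the old BUILD_BUG_ON() assumptions, which only covered the first 8 bytes holding the ports, with an explicit pskb_may_pull() so the variable-length part of the DCCP header is actually present before the sequence-number fields are read. The userspace fragment below shows the same defensive pattern on a plain byte buffer: check the claimed header length against what was actually received before touching fields past the guaranteed prefix. The struct layout is invented for the example and is not the real DCCP header.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy header: an 8-byte fixed prefix plus an optional 8-byte extension
 * selected by the 'x' flag -- loosely mirroring how dccph_x selects the
 * long sequence-number form.
 */
struct toy_hdr {
    uint16_t sport;
    uint16_t dport;
    uint8_t  x;        /* 1: extended header with 64-bit seq */
    uint8_t  pad[3];
    uint64_t seq;      /* only valid when x == 1 */
};

static size_t toy_hdr_len(const struct toy_hdr *h)
{
    return h->x ? sizeof(*h) : offsetof(struct toy_hdr, seq);
}

/* Returns 0 on success, -1 if the packet is too short for the header it
 * claims to carry -- the equivalent of the added pskb_may_pull() check.
 */
static int parse(const uint8_t *pkt, size_t len)
{
    struct toy_hdr h = {0};

    if (len < offsetof(struct toy_hdr, seq))
        return -1;                       /* not even the fixed part */
    memcpy(&h, pkt, offsetof(struct toy_hdr, seq));
    if (len < toy_hdr_len(&h))
        return -1;                       /* extension claimed but missing */
    if (h.x)
        memcpy(&h.seq, pkt + offsetof(struct toy_hdr, seq), sizeof(h.seq));
    printf("sport=%u dport=%u seq=%llu\n", (unsigned)h.sport,
           (unsigned)h.dport, (unsigned long long)(h.x ? h.seq : 0));
    return 0;
}

int main(void)
{
    uint8_t short_pkt[8] = {0};          /* x == 0, fixed part only: fits */

    return parse(short_pkt, sizeof(short_pkt));
}
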
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 48ff5f13e7979..193d8362efe2e 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -353,8 +353,9 @@ static struct sk_buff *igmpv3_newpack(struct net_device *dev, unsigned int mtu)
+ 	struct flowi4 fl4;
+ 	int hlen = LL_RESERVED_SPACE(dev);
+ 	int tlen = dev->needed_tailroom;
+-	unsigned int size = mtu;
++	unsigned int size;
+ 
++	size = min(mtu, IP_MAX_MTU);
+ 	while (1) {
+ 		skb = alloc_skb(size + hlen + tlen,
+ 				GFP_ATOMIC | __GFP_NOWARN);
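
Note (not part of the patch): igmpv3_newpack() above clamps the allocation size to IP_MAX_MTU before entering the retry loop, so an oversized device MTU cannot drive an absurd skb allocation. The one-liner pattern, clamping an externally influenced size before allocating, is all the tiny sketch below shows; the limit value is a stand-in.

#include <stdio.h>
#include <stdlib.h>

#define MAX_PKT 65535u   /* stand-in for IP_MAX_MTU */

static void *alloc_pkt(unsigned int mtu)
{
    unsigned int size = mtu < MAX_PKT ? mtu : MAX_PKT;   /* min(mtu, MAX) */

    return malloc(size);
}

int main(void)
{
    void *p = alloc_pkt(1u << 30);   /* bogus, over-large "MTU" */

    printf("allocated with clamped size: %s\n", p ? "yes" : "no");
    free(p);
    return 0;
}
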
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 6ba1a0fafbaab..a6e4c82615d7e 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -216,7 +216,7 @@ static int ip_finish_output2(struct net *net, struct sock *sk, struct sk_buff *s
+ 	if (lwtunnel_xmit_redirect(dst->lwtstate)) {
+ 		int res = lwtunnel_xmit(skb);
+ 
+-		if (res < 0 || res == LWTUNNEL_XMIT_DONE)
++		if (res != LWTUNNEL_XMIT_CONTINUE)
+ 			return res;
+ 	}
+ 
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 57c8af1859c16..48c2b96b08435 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -287,7 +287,7 @@ static void tcp_incr_quickack(struct sock *sk, unsigned int max_quickacks)
+ 		icsk->icsk_ack.quick = quickacks;
+ }
+ 
+-void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks)
++static void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks)
+ {
+ 	struct inet_connection_sock *icsk = inet_csk(sk);
+ 
+@@ -295,7 +295,6 @@ void tcp_enter_quickack_mode(struct sock *sk, unsigned int max_quickacks)
+ 	inet_csk_exit_pingpong_mode(sk);
+ 	icsk->icsk_ack.ato = TCP_ATO_MIN;
+ }
+-EXPORT_SYMBOL(tcp_enter_quickack_mode);
+ 
+ /* Send ACKs quickly, if "quick" count is not exhausted
+  * and the session is not interactive.
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index 206418b6d7c48..a9f6200f12f15 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -446,6 +446,22 @@ static void tcp_fastopen_synack_timer(struct sock *sk, struct request_sock *req)
+ 			  req->timeout << req->num_timeout, TCP_RTO_MAX);
+ }
+ 
++static bool tcp_rtx_probe0_timed_out(const struct sock *sk,
++				     const struct sk_buff *skb)
++{
++	const struct tcp_sock *tp = tcp_sk(sk);
++	const int timeout = TCP_RTO_MAX * 2;
++	u32 rcv_delta, rtx_delta;
++
++	rcv_delta = inet_csk(sk)->icsk_timeout - tp->rcv_tstamp;
++	if (rcv_delta <= timeout)
++		return false;
++
++	rtx_delta = (u32)msecs_to_jiffies(tcp_time_stamp(tp) -
++			(tp->retrans_stamp ?: tcp_skb_timestamp(skb)));
++
++	return rtx_delta > timeout;
++}
+ 
+ /**
+  *  tcp_retransmit_timer() - The TCP retransmit timeout handler
+@@ -511,7 +527,7 @@ void tcp_retransmit_timer(struct sock *sk)
+ 					    tp->snd_una, tp->snd_nxt);
+ 		}
+ #endif
+-		if (tcp_jiffies32 - tp->rcv_tstamp > TCP_RTO_MAX) {
++		if (tcp_rtx_probe0_timed_out(sk, skb)) {
+ 			tcp_write_err(sk);
+ 			goto out;
+ 		}
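
Note (not part of the patch): tcp_rtx_probe0_timed_out() above replaces the single rcv_tstamp comparison with two elapsed-time checks against 2 * TCP_RTO_MAX, and it performs them with u32 subtraction so jiffies/timestamp wraparound is handled naturally. The small demo below shows why unsigned 32-bit subtraction gives the right elapsed value across a wrap; the constants are arbitrary.

#include <stdint.h>
#include <stdio.h>

/* Elapsed ticks between 'then' and 'now', correct even if the 32-bit
 * counter wrapped in between (as jiffies and TCP timestamps do).
 */
static uint32_t elapsed(uint32_t now, uint32_t then)
{
    return now - then;   /* modulo-2^32 arithmetic */
}

int main(void)
{
    uint32_t then = 0xFFFFFFF0u;   /* shortly before the wrap */
    uint32_t now  = 0x00000010u;   /* shortly after the wrap */

    /* Prints 32 ticks, even though now < then numerically. */
    printf("elapsed = %u ticks\n", elapsed(now, then));

    /* A timeout check then becomes a plain comparison: */
    printf("timed out (>16)? %s\n", elapsed(now, then) > 16 ? "yes" : "no");
    return 0;
}
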
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index abfa860367aa9..b3aa68ea29de2 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -452,14 +452,24 @@ static struct sock *udp4_lib_lookup2(struct net *net,
+ 		score = compute_score(sk, net, saddr, sport,
+ 				      daddr, hnum, dif, sdif);
+ 		if (score > badness) {
+-			result = lookup_reuseport(net, sk, skb,
+-						  saddr, sport, daddr, hnum);
++			badness = score;
++			result = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum);
++			if (!result) {
++				result = sk;
++				continue;
++			}
++
+ 			/* Fall back to scoring if group has connections */
+-			if (result && !reuseport_has_conns(sk))
++			if (!reuseport_has_conns(sk))
+ 				return result;
+ 
+-			result = result ? : sk;
+-			badness = score;
++			/* Reuseport logic returned an error, keep original score. */
++			if (IS_ERR(result))
++				continue;
++
++			badness = compute_score(result, net, saddr, sport,
++						daddr, hnum, dif, sdif);
++
+ 		}
+ 	}
+ 	return result;
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 1e8c90e976080..016b0a513259f 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -113,7 +113,7 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
+ 	if (lwtunnel_xmit_redirect(dst->lwtstate)) {
+ 		int res = lwtunnel_xmit(skb);
+ 
+-		if (res < 0 || res == LWTUNNEL_XMIT_DONE)
++		if (res != LWTUNNEL_XMIT_CONTINUE)
+ 			return res;
+ 	}
+ 
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 486d893b8e3ca..3ffca158d3e11 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -195,14 +195,23 @@ static struct sock *udp6_lib_lookup2(struct net *net,
+ 		score = compute_score(sk, net, saddr, sport,
+ 				      daddr, hnum, dif, sdif);
+ 		if (score > badness) {
+-			result = lookup_reuseport(net, sk, skb,
+-						  saddr, sport, daddr, hnum);
++			badness = score;
++			result = lookup_reuseport(net, sk, skb, saddr, sport, daddr, hnum);
++			if (!result) {
++				result = sk;
++				continue;
++			}
++
+ 			/* Fall back to scoring if group has connections */
+-			if (result && !reuseport_has_conns(sk))
++			if (!reuseport_has_conns(sk))
+ 				return result;
+ 
+-			result = result ? : sk;
+-			badness = score;
++			/* Reuseport logic returned an error, keep original score. */
++			if (IS_ERR(result))
++				continue;
++
++			badness = compute_score(sk, net, saddr, sport,
++						daddr, hnum, dif, sdif);
+ 		}
+ 	}
+ 	return result;
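
Note (not part of the patch): the udp4_lib_lookup2() and udp6_lib_lookup2() hunks above restructure the loop so that when a SO_REUSEPORT group contains connected sockets, the socket chosen by the reuseport logic is fed back through compute_score() instead of being returned immediately, and an error from the reuseport logic keeps the previously best candidate. The sketch below is a heavily simplified userspace model of that control flow over an array of fake sockets with an invented scoring field; it exists only to make the loop structure easier to follow and is not the kernel lookup.

#include <stdbool.h>
#include <stdio.h>

struct fake_sk {
    int  base_score;   /* what compute_score() would say for this socket */
    bool has_conns;    /* its reuseport group has connected members */
    int  reuse_pick;   /* index the reuseport hash would select, -1 = none */
};

static struct fake_sk tbl[] = {
    { .base_score = 3, .has_conns = true,  .reuse_pick = 2 },
    { .base_score = 5, .has_conns = false, .reuse_pick = -1 },
    { .base_score = 4, .has_conns = true,  .reuse_pick = 0 },
};

static int lookup(void)
{
    int result = -1, badness = -1;
    int n = (int)(sizeof(tbl) / sizeof(tbl[0]));

    for (int i = 0; i < n; i++) {
        int score = tbl[i].base_score;

        if (score <= badness)
            continue;
        badness = score;

        int pick = tbl[i].reuse_pick;
        if (pick < 0) {            /* no reuseport selection: keep candidate */
            result = i;
            continue;
        }
        if (!tbl[i].has_conns)     /* fully unconnected group: done */
            return pick;

        /*
         * Connected sockets exist in the group, so the picked socket is
         * re-scored instead of returned; a later, better-matching
         * candidate can still replace it.
         */
        result = pick;
        badness = tbl[pick].base_score;
    }
    return result;
}

int main(void)
{
    printf("selected socket index: %d\n", lookup());
    return 0;
}
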
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index e7ac246038925..d354b32a20f8f 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -3648,12 +3648,6 @@ static int __ieee80211_csa_finalize(struct ieee80211_sub_if_data *sdata)
+ 	lockdep_assert_held(&local->mtx);
+ 	lockdep_assert_held(&local->chanctx_mtx);
+ 
+-	if (sdata->vif.bss_conf.eht_puncturing != sdata->vif.bss_conf.csa_punct_bitmap) {
+-		sdata->vif.bss_conf.eht_puncturing =
+-					sdata->vif.bss_conf.csa_punct_bitmap;
+-		changed |= BSS_CHANGED_EHT_PUNCTURING;
+-	}
+-
+ 	/*
+ 	 * using reservation isn't immediate as it may be deferred until later
+ 	 * with multi-vif. once reservation is complete it will re-schedule the
+@@ -3683,6 +3677,12 @@ static int __ieee80211_csa_finalize(struct ieee80211_sub_if_data *sdata)
+ 	if (err)
+ 		return err;
+ 
++	if (sdata->vif.bss_conf.eht_puncturing != sdata->vif.bss_conf.csa_punct_bitmap) {
++		sdata->vif.bss_conf.eht_puncturing =
++					sdata->vif.bss_conf.csa_punct_bitmap;
++		changed |= BSS_CHANGED_EHT_PUNCTURING;
++	}
++
+ 	ieee80211_link_info_change_notify(sdata, &sdata->deflink, changed);
+ 
+ 	if (sdata->deflink.csa_block_tx) {
+diff --git a/net/netfilter/ipset/ip_set_hash_netportnet.c b/net/netfilter/ipset/ip_set_hash_netportnet.c
+index 005a7ce87217e..bf4f91b78e1dc 100644
+--- a/net/netfilter/ipset/ip_set_hash_netportnet.c
++++ b/net/netfilter/ipset/ip_set_hash_netportnet.c
+@@ -36,6 +36,7 @@ MODULE_ALIAS("ip_set_hash:net,port,net");
+ #define IP_SET_HASH_WITH_PROTO
+ #define IP_SET_HASH_WITH_NETS
+ #define IPSET_NET_COUNT 2
++#define IP_SET_HASH_WITH_NET0
+ 
+ /* IPv4 variant */
+ 
+diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c
+index 7f856ceb3a668..a9844eefedebc 100644
+--- a/net/netfilter/nft_exthdr.c
++++ b/net/netfilter/nft_exthdr.c
+@@ -238,7 +238,12 @@ static void nft_exthdr_tcp_set_eval(const struct nft_expr *expr,
+ 	if (!tcph)
+ 		goto err;
+ 
++	if (skb_ensure_writable(pkt->skb, nft_thoff(pkt) + tcphdr_len))
++		goto err;
++
++	tcph = (struct tcphdr *)(pkt->skb->data + nft_thoff(pkt));
+ 	opt = (u8 *)tcph;
++
+ 	for (i = sizeof(*tcph); i < tcphdr_len - 1; i += optl) {
+ 		union {
+ 			__be16 v16;
+@@ -253,15 +258,6 @@ static void nft_exthdr_tcp_set_eval(const struct nft_expr *expr,
+ 		if (i + optl > tcphdr_len || priv->len + priv->offset > optl)
+ 			goto err;
+ 
+-		if (skb_ensure_writable(pkt->skb,
+-					nft_thoff(pkt) + i + priv->len))
+-			goto err;
+-
+-		tcph = nft_tcp_header_pointer(pkt, sizeof(buff), buff,
+-					      &tcphdr_len);
+-		if (!tcph)
+-			goto err;
+-
+ 		offset = i + priv->offset;
+ 
+ 		switch (priv->len) {
+@@ -325,9 +321,9 @@ static void nft_exthdr_tcp_strip_eval(const struct nft_expr *expr,
+ 	if (skb_ensure_writable(pkt->skb, nft_thoff(pkt) + tcphdr_len))
+ 		goto drop;
+ 
+-	opt = (u8 *)nft_tcp_header_pointer(pkt, sizeof(buff), buff, &tcphdr_len);
+-	if (!opt)
+-		goto err;
++	tcph = (struct tcphdr *)(pkt->skb->data + nft_thoff(pkt));
++	opt = (u8 *)tcph;
++
+ 	for (i = sizeof(*tcph); i < tcphdr_len - 1; i += optl) {
+ 		unsigned int j;
+ 
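
Note (not part of the patch): in nft_exthdr_tcp_set_eval()/strip_eval() above, the skb is made writable once for the whole TCP header up front and the tcph/opt pointers are then recomputed from pkt->skb->data, since skb_ensure_writable() may have to copy the data and leave pointers taken earlier dangling. The same hazard exists in plain C whenever a buffer may be reallocated, as the short example below shows with realloc(); the remedy is the same: do the operation that may move the data first, then derive pointers.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(16);
    if (!buf)
        return 1;
    strcpy(buf, "options");

    char *opt = buf;                 /* pointer into the buffer */
    (void)opt;

    /* Growing the buffer may move it -- after this, the old 'opt' may be
     * stale, just like a tcph pointer taken before skb_ensure_writable().
     */
    char *bigger = realloc(buf, 4096);
    if (!bigger) {
        free(buf);
        return 1;
    }
    buf = bigger;

    opt = buf;                       /* recompute after the possible move */
    printf("%s\n", opt);

    free(buf);
    return 0;
}
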
+diff --git a/net/netfilter/xt_sctp.c b/net/netfilter/xt_sctp.c
+index e8961094a2822..b46a6a5120583 100644
+--- a/net/netfilter/xt_sctp.c
++++ b/net/netfilter/xt_sctp.c
+@@ -149,6 +149,8 @@ static int sctp_mt_check(const struct xt_mtchk_param *par)
+ {
+ 	const struct xt_sctp_info *info = par->matchinfo;
+ 
++	if (info->flag_count > ARRAY_SIZE(info->flag_info))
++		return -EINVAL;
+ 	if (info->flags & ~XT_SCTP_VALID_FLAGS)
+ 		return -EINVAL;
+ 	if (info->invflags & ~XT_SCTP_VALID_FLAGS)
+diff --git a/net/netfilter/xt_u32.c b/net/netfilter/xt_u32.c
+index 177b40d08098b..117d4615d6684 100644
+--- a/net/netfilter/xt_u32.c
++++ b/net/netfilter/xt_u32.c
+@@ -96,11 +96,32 @@ static bool u32_mt(const struct sk_buff *skb, struct xt_action_param *par)
+ 	return ret ^ data->invert;
+ }
+ 
++static int u32_mt_checkentry(const struct xt_mtchk_param *par)
++{
++	const struct xt_u32 *data = par->matchinfo;
++	const struct xt_u32_test *ct;
++	unsigned int i;
++
++	if (data->ntests > ARRAY_SIZE(data->tests))
++		return -EINVAL;
++
++	for (i = 0; i < data->ntests; ++i) {
++		ct = &data->tests[i];
++
++		if (ct->nnums > ARRAY_SIZE(ct->location) ||
++		    ct->nvalues > ARRAY_SIZE(ct->value))
++			return -EINVAL;
++	}
++
++	return 0;
++}
++
+ static struct xt_match xt_u32_mt_reg __read_mostly = {
+ 	.name       = "u32",
+ 	.revision   = 0,
+ 	.family     = NFPROTO_UNSPEC,
+ 	.match      = u32_mt,
++	.checkentry = u32_mt_checkentry,
+ 	.matchsize  = sizeof(struct xt_u32),
+ 	.me         = THIS_MODULE,
+ };
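
Note (not part of the patch): the xt_sctp and xt_u32 checkentry hunks above both validate user-supplied element counts against the fixed-size arrays they index (flag_count, ntests, nnums, nvalues) before the match functions ever loop over them. A stripped-down userspace version of that validation pattern, with an invented rule structure, looks like this:

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Invented example of a rule blob copied in from an untrusted source. */
struct rule {
    unsigned int ntests;
    struct { unsigned int nvalues; int value[4]; } tests[8];
};

static int rule_check(const struct rule *r)
{
    if (r->ntests > ARRAY_SIZE(r->tests))
        return -1;
    for (unsigned int i = 0; i < r->ntests; i++)
        if (r->tests[i].nvalues > ARRAY_SIZE(r->tests[i].value))
            return -1;
    return 0;
}

int main(void)
{
    struct rule bad = { .ntests = 99 };                         /* too many */
    struct rule ok  = { .ntests = 1, .tests = { { .nvalues = 2 } } };

    printf("bad: %d, ok: %d\n", rule_check(&bad), rule_check(&ok));
    return 0;
}
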
+diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
+index eb8ccbd58df74..96e91ab71573c 100644
+--- a/net/netrom/af_netrom.c
++++ b/net/netrom/af_netrom.c
+@@ -660,6 +660,11 @@ static int nr_connect(struct socket *sock, struct sockaddr *uaddr,
+ 		goto out_release;
+ 	}
+ 
++	if (sock->state == SS_CONNECTING) {
++		err = -EALREADY;
++		goto out_release;
++	}
++
+ 	sk->sk_state   = TCP_CLOSE;
+ 	sock->state = SS_UNCONNECTED;
+ 
+diff --git a/net/sched/em_meta.c b/net/sched/em_meta.c
+index 6fdba069f6bfd..da34fd4c92695 100644
+--- a/net/sched/em_meta.c
++++ b/net/sched/em_meta.c
+@@ -502,7 +502,7 @@ META_COLLECTOR(int_sk_lingertime)
+ 		*err = -1;
+ 		return;
+ 	}
+-	dst->value = sk->sk_lingertime / HZ;
++	dst->value = READ_ONCE(sk->sk_lingertime) / HZ;
+ }
+ 
+ META_COLLECTOR(int_sk_err_qlen)
+diff --git a/net/sched/sch_hfsc.c b/net/sched/sch_hfsc.c
+index 70b0c5873d326..61d52594ff6d8 100644
+--- a/net/sched/sch_hfsc.c
++++ b/net/sched/sch_hfsc.c
+@@ -1012,6 +1012,10 @@ hfsc_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ 		if (parent == NULL)
+ 			return -ENOENT;
+ 	}
++	if (!(parent->cl_flags & HFSC_FSC) && parent != &q->root) {
++		NL_SET_ERR_MSG(extack, "Invalid parent - parent class must have FSC");
++		return -EINVAL;
++	}
+ 
+ 	if (classid == 0 || TC_H_MAJ(classid ^ sch->handle) != 0)
+ 		return -EINVAL;
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index f5834af5fad53..7c77565c39d19 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1820,7 +1820,7 @@ void smc_close_non_accepted(struct sock *sk)
+ 	lock_sock(sk);
+ 	if (!sk->sk_lingertime)
+ 		/* wait for peer closing */
+-		sk->sk_lingertime = SMC_MAX_STREAM_WAIT_TIMEOUT;
++		WRITE_ONCE(sk->sk_lingertime, SMC_MAX_STREAM_WAIT_TIMEOUT);
+ 	__smc_release(smc);
+ 	release_sock(sk);
+ 	sock_put(sk); /* sock_hold above */
+diff --git a/net/socket.c b/net/socket.c
+index 2b0e54b2405c8..f49edb9b49185 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -3519,7 +3519,11 @@ EXPORT_SYMBOL(kernel_accept);
+ int kernel_connect(struct socket *sock, struct sockaddr *addr, int addrlen,
+ 		   int flags)
+ {
+-	return sock->ops->connect(sock, addr, addrlen, flags);
++	struct sockaddr_storage address;
++
++	memcpy(&address, addr, addrlen);
++
++	return sock->ops->connect(sock, (struct sockaddr *)&address, addrlen, flags);
+ }
+ EXPORT_SYMBOL(kernel_connect);
+ 
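
Note (not part of the patch): kernel_connect() above now copies the caller's address into an on-stack sockaddr_storage and hands the copy to ->connect(), so any in-place rewriting of the address inside the connect path is confined to the local copy rather than the caller's buffer. The same snapshot-then-use pattern in ordinary userspace socket code is sketched below, using the standard BSD socket API; the loopback address and discard port are chosen arbitrarily.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int connect_copy(int fd, const struct sockaddr *addr, socklen_t len)
{
    struct sockaddr_storage address;

    if (len > sizeof(address))
        return -1;
    /* Work on a private copy; caller and callee no longer share storage. */
    memcpy(&address, addr, len);
    return connect(fd, (struct sockaddr *)&address, len);
}

int main(void)
{
    struct sockaddr_in sin = {
        .sin_family = AF_INET,
        .sin_port   = htons(9),          /* discard port, arbitrary */
    };
    inet_pton(AF_INET, "127.0.0.1", &sin.sin_addr);

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return 1;
    int ret = connect_copy(fd, (struct sockaddr *)&sin, sizeof(sin));
    printf("connect: %d\n", ret);
    close(fd);
    return 0;
}
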
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 2eb8df44f894d..589020ed909dc 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1244,8 +1244,10 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr,
+ 	if (ret != head->iov_len)
+ 		goto out;
+ 
+-	if (xdr_buf_pagecount(xdr))
++	if (xdr_buf_pagecount(xdr)) {
+ 		xdr->bvec[0].bv_offset = offset_in_page(xdr->page_base);
++		xdr->bvec[0].bv_len -= offset_in_page(xdr->page_base);
++	}
+ 
+ 	msg.msg_flags = MSG_SPLICE_PAGES;
+ 	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec,
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 8bcf8e293308e..4dcbc40d07c85 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -323,6 +323,7 @@ nl80211_pmsr_ftm_req_attr_policy[NL80211_PMSR_FTM_REQ_ATTR_MAX + 1] = {
+ 	[NL80211_PMSR_FTM_REQ_ATTR_TRIGGER_BASED] = { .type = NLA_FLAG },
+ 	[NL80211_PMSR_FTM_REQ_ATTR_NON_TRIGGER_BASED] = { .type = NLA_FLAG },
+ 	[NL80211_PMSR_FTM_REQ_ATTR_LMR_FEEDBACK] = { .type = NLA_FLAG },
++	[NL80211_PMSR_FTM_REQ_ATTR_BSS_COLOR] = { .type = NLA_U8 },
+ };
+ 
+ static const struct nla_policy
+diff --git a/samples/bpf/tracex3_kern.c b/samples/bpf/tracex3_kern.c
+index bde6591cb20c5..af235bd6615b1 100644
+--- a/samples/bpf/tracex3_kern.c
++++ b/samples/bpf/tracex3_kern.c
+@@ -11,6 +11,12 @@
+ #include <bpf/bpf_helpers.h>
+ #include <bpf/bpf_tracing.h>
+ 
++struct start_key {
++	dev_t dev;
++	u32 _pad;
++	sector_t sector;
++};
++
+ struct {
+ 	__uint(type, BPF_MAP_TYPE_HASH);
+ 	__type(key, long);
+@@ -18,16 +24,17 @@ struct {
+ 	__uint(max_entries, 4096);
+ } my_map SEC(".maps");
+ 
+-/* kprobe is NOT a stable ABI. If kernel internals change this bpf+kprobe
+- * example will no longer be meaningful
+- */
+-SEC("kprobe/blk_mq_start_request")
+-int bpf_prog1(struct pt_regs *ctx)
++/* from /sys/kernel/tracing/events/block/block_io_start/format */
++SEC("tracepoint/block/block_io_start")
++int bpf_prog1(struct trace_event_raw_block_rq *ctx)
+ {
+-	long rq = PT_REGS_PARM1(ctx);
+ 	u64 val = bpf_ktime_get_ns();
++	struct start_key key = {
++		.dev = ctx->dev,
++		.sector = ctx->sector
++	};
+ 
+-	bpf_map_update_elem(&my_map, &rq, &val, BPF_ANY);
++	bpf_map_update_elem(&my_map, &key, &val, BPF_ANY);
+ 	return 0;
+ }
+ 
+@@ -49,21 +56,26 @@ struct {
+ 	__uint(max_entries, SLOTS);
+ } lat_map SEC(".maps");
+ 
+-SEC("kprobe/__blk_account_io_done")
+-int bpf_prog2(struct pt_regs *ctx)
++/* from /sys/kernel/tracing/events/block/block_io_done/format */
++SEC("tracepoint/block/block_io_done")
++int bpf_prog2(struct trace_event_raw_block_rq *ctx)
+ {
+-	long rq = PT_REGS_PARM1(ctx);
++	struct start_key key = {
++		.dev = ctx->dev,
++		.sector = ctx->sector
++	};
++
+ 	u64 *value, l, base;
+ 	u32 index;
+ 
+-	value = bpf_map_lookup_elem(&my_map, &rq);
++	value = bpf_map_lookup_elem(&my_map, &key);
+ 	if (!value)
+ 		return 0;
+ 
+ 	u64 cur_time = bpf_ktime_get_ns();
+ 	u64 delta = cur_time - *value;
+ 
+-	bpf_map_delete_elem(&my_map, &rq);
++	bpf_map_delete_elem(&my_map, &key);
+ 
+ 	/* the lines below are computing index = log10(delta)*10
+ 	 * using integer arithmetic
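
Note (not part of the patch): tracex3 above moves from kprobes on block-layer internals to the block_io_start/block_io_done tracepoints and keys the latency map on a {dev, sector} pair; the explicit u32 _pad member keeps the struct free of compiler-inserted padding, which matters because BPF hash map keys are compared byte-wise. The userspace demo below illustrates that point with memcmp() standing in for the map's key comparison; it is not BPF code.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Without an explicit pad, the compiler inserts 4 hidden bytes between
 * 'dev' and 'sector' (on common ABIs) whose contents are not defined by
 * the initializer, so two logically equal keys may not be byte-identical.
 */
struct key_implicit {
    uint32_t dev;
    uint64_t sector;
};

/* Explicit pad: every byte of the struct belongs to a named member and is
 * zeroed by the initializer, so byte-wise comparison/hashing is reliable.
 */
struct key_explicit {
    uint32_t dev;
    uint32_t _pad;
    uint64_t sector;
};

int main(void)
{
    struct key_explicit a = { .dev = 8, .sector = 2048 };
    struct key_explicit b = { .dev = 8, .sector = 2048 };

    printf("sizeof(implicit)=%zu sizeof(explicit)=%zu\n",
           sizeof(struct key_implicit), sizeof(struct key_explicit));
    printf("explicit keys byte-equal: %s\n",
           memcmp(&a, &b, sizeof(a)) == 0 ? "yes" : "no");
    return 0;
}
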
+diff --git a/samples/bpf/tracex6_kern.c b/samples/bpf/tracex6_kern.c
+index acad5712d8b4f..fd602c2774b8b 100644
+--- a/samples/bpf/tracex6_kern.c
++++ b/samples/bpf/tracex6_kern.c
+@@ -2,6 +2,8 @@
+ #include <linux/version.h>
+ #include <uapi/linux/bpf.h>
+ #include <bpf/bpf_helpers.h>
++#include <bpf/bpf_tracing.h>
++#include <bpf/bpf_core_read.h>
+ 
+ struct {
+ 	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
+@@ -45,13 +47,24 @@ int bpf_prog1(struct pt_regs *ctx)
+ 	return 0;
+ }
+ 
+-SEC("kprobe/htab_map_lookup_elem")
+-int bpf_prog2(struct pt_regs *ctx)
++/*
++ * Since *_map_lookup_elem can't be expected to trigger bpf programs
++ * due to potential deadlocks (bpf_disable_instrumentation), this bpf
++ * program will be attached to bpf_map_copy_value (which is called
++ * from map_lookup_elem) and will only filter the hashtable type.
++ */
++SEC("kprobe/bpf_map_copy_value")
++int BPF_KPROBE(bpf_prog2, struct bpf_map *map)
+ {
+ 	u32 key = bpf_get_smp_processor_id();
+ 	struct bpf_perf_event_value *val, buf;
++	enum bpf_map_type type;
+ 	int error;
+ 
++	type = BPF_CORE_READ(map, map_type);
++	if (type != BPF_MAP_TYPE_HASH)
++		return 0;
++
+ 	error = bpf_perf_event_read_value(&counters, key, &buf, sizeof(buf));
+ 	if (error)
+ 		return 0;
+diff --git a/scripts/gdb/linux/constants.py.in b/scripts/gdb/linux/constants.py.in
+index 50a92c4e9984e..fab74ca9df6fc 100644
+--- a/scripts/gdb/linux/constants.py.in
++++ b/scripts/gdb/linux/constants.py.in
+@@ -64,6 +64,9 @@ LX_GDBPARSED(IRQ_HIDDEN)
+ 
+ /* linux/module.h */
+ LX_GDBPARSED(MOD_TEXT)
++LX_GDBPARSED(MOD_DATA)
++LX_GDBPARSED(MOD_RODATA)
++LX_GDBPARSED(MOD_RO_AFTER_INIT)
+ 
+ /* linux/mount.h */
+ LX_VALUE(MNT_NOSUID)
+diff --git a/scripts/gdb/linux/modules.py b/scripts/gdb/linux/modules.py
+index 261f28640f4cd..f76a43bfa15fc 100644
+--- a/scripts/gdb/linux/modules.py
++++ b/scripts/gdb/linux/modules.py
+@@ -73,11 +73,17 @@ class LxLsmod(gdb.Command):
+                 "        " if utils.get_long_type().sizeof == 8 else ""))
+ 
+         for module in module_list():
+-            layout = module['mem'][constants.LX_MOD_TEXT]
++            text = module['mem'][constants.LX_MOD_TEXT]
++            text_addr = str(text['base']).split()[0]
++            total_size = 0
++
++            for i in range(constants.LX_MOD_TEXT, constants.LX_MOD_RO_AFTER_INIT + 1):
++                total_size += module['mem'][i]['size']
++
+             gdb.write("{address} {name:<19} {size:>8}  {ref}".format(
+-                address=str(layout['base']).split()[0],
++                address=text_addr,
+                 name=module['name'].string(),
+-                size=str(layout['size']),
++                size=str(total_size),
+                 ref=str(module['refcnt']['counter'] - 1)))
+ 
+             t = self._module_use_type.get_type().pointer()
+diff --git a/scripts/rust_is_available.sh b/scripts/rust_is_available.sh
+index aebbf19139709..7a925d2b20fc7 100755
+--- a/scripts/rust_is_available.sh
++++ b/scripts/rust_is_available.sh
+@@ -2,8 +2,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ #
+ # Tests whether a suitable Rust toolchain is available.
+-#
+-# Pass `-v` for human output and more checks (as warnings).
+ 
+ set -e
+ 
+@@ -23,21 +21,17 @@ get_canonical_version()
+ 
+ # Check that the Rust compiler exists.
+ if ! command -v "$RUSTC" >/dev/null; then
+-	if [ "$1" = -v ]; then
+-		echo >&2 "***"
+-		echo >&2 "*** Rust compiler '$RUSTC' could not be found."
+-		echo >&2 "***"
+-	fi
++	echo >&2 "***"
++	echo >&2 "*** Rust compiler '$RUSTC' could not be found."
++	echo >&2 "***"
+ 	exit 1
+ fi
+ 
+ # Check that the Rust bindings generator exists.
+ if ! command -v "$BINDGEN" >/dev/null; then
+-	if [ "$1" = -v ]; then
+-		echo >&2 "***"
+-		echo >&2 "*** Rust bindings generator '$BINDGEN' could not be found."
+-		echo >&2 "***"
+-	fi
++	echo >&2 "***"
++	echo >&2 "*** Rust bindings generator '$BINDGEN' could not be found."
++	echo >&2 "***"
+ 	exit 1
+ fi
+ 
+@@ -53,16 +47,14 @@ rust_compiler_min_version=$($min_tool_version rustc)
+ rust_compiler_cversion=$(get_canonical_version $rust_compiler_version)
+ rust_compiler_min_cversion=$(get_canonical_version $rust_compiler_min_version)
+ if [ "$rust_compiler_cversion" -lt "$rust_compiler_min_cversion" ]; then
+-	if [ "$1" = -v ]; then
+-		echo >&2 "***"
+-		echo >&2 "*** Rust compiler '$RUSTC' is too old."
+-		echo >&2 "***   Your version:    $rust_compiler_version"
+-		echo >&2 "***   Minimum version: $rust_compiler_min_version"
+-		echo >&2 "***"
+-	fi
++	echo >&2 "***"
++	echo >&2 "*** Rust compiler '$RUSTC' is too old."
++	echo >&2 "***   Your version:    $rust_compiler_version"
++	echo >&2 "***   Minimum version: $rust_compiler_min_version"
++	echo >&2 "***"
+ 	exit 1
+ fi
+-if [ "$1" = -v ] && [ "$rust_compiler_cversion" -gt "$rust_compiler_min_cversion" ]; then
++if [ "$rust_compiler_cversion" -gt "$rust_compiler_min_cversion" ]; then
+ 	echo >&2 "***"
+ 	echo >&2 "*** Rust compiler '$RUSTC' is too new. This may or may not work."
+ 	echo >&2 "***   Your version:     $rust_compiler_version"
+@@ -82,16 +74,14 @@ rust_bindings_generator_min_version=$($min_tool_version bindgen)
+ rust_bindings_generator_cversion=$(get_canonical_version $rust_bindings_generator_version)
+ rust_bindings_generator_min_cversion=$(get_canonical_version $rust_bindings_generator_min_version)
+ if [ "$rust_bindings_generator_cversion" -lt "$rust_bindings_generator_min_cversion" ]; then
+-	if [ "$1" = -v ]; then
+-		echo >&2 "***"
+-		echo >&2 "*** Rust bindings generator '$BINDGEN' is too old."
+-		echo >&2 "***   Your version:    $rust_bindings_generator_version"
+-		echo >&2 "***   Minimum version: $rust_bindings_generator_min_version"
+-		echo >&2 "***"
+-	fi
++	echo >&2 "***"
++	echo >&2 "*** Rust bindings generator '$BINDGEN' is too old."
++	echo >&2 "***   Your version:    $rust_bindings_generator_version"
++	echo >&2 "***   Minimum version: $rust_bindings_generator_min_version"
++	echo >&2 "***"
+ 	exit 1
+ fi
+-if [ "$1" = -v ] && [ "$rust_bindings_generator_cversion" -gt "$rust_bindings_generator_min_cversion" ]; then
++if [ "$rust_bindings_generator_cversion" -gt "$rust_bindings_generator_min_cversion" ]; then
+ 	echo >&2 "***"
+ 	echo >&2 "*** Rust bindings generator '$BINDGEN' is too new. This may or may not work."
+ 	echo >&2 "***   Your version:     $rust_bindings_generator_version"
+@@ -100,23 +90,39 @@ if [ "$1" = -v ] && [ "$rust_bindings_generator_cversion" -gt "$rust_bindings_ge
+ fi
+ 
+ # Check that the `libclang` used by the Rust bindings generator is suitable.
++#
++# In order to do that, first invoke `bindgen` to get the `libclang` version
++# found by `bindgen`. This step may already fail if, for instance, `libclang`
++# is not found, thus inform the user in such a case.
++bindgen_libclang_output=$( \
++	LC_ALL=C "$BINDGEN" $(dirname $0)/rust_is_available_bindgen_libclang.h 2>&1 >/dev/null
++) || bindgen_libclang_code=$?
++if [ -n "$bindgen_libclang_code" ]; then
++	echo >&2 "***"
++	echo >&2 "*** Running '$BINDGEN' to check the libclang version (used by the Rust"
++	echo >&2 "*** bindings generator) failed with code $bindgen_libclang_code. This may be caused by"
++	echo >&2 "*** a failure to locate libclang. See output and docs below for details:"
++	echo >&2 "***"
++	echo >&2 "$bindgen_libclang_output"
++	echo >&2 "***"
++	exit 1
++fi
++
++# `bindgen` returned successfully, thus use the output to check that the version
++# of the `libclang` found by the Rust bindings generator is suitable.
+ bindgen_libclang_version=$( \
+-	LC_ALL=C "$BINDGEN" $(dirname $0)/rust_is_available_bindgen_libclang.h 2>&1 >/dev/null \
+-		| grep -F 'clang version ' \
+-		| grep -oE '[0-9]+\.[0-9]+\.[0-9]+' \
+-		| head -n 1 \
++	echo "$bindgen_libclang_output" \
++		| sed -nE 's:.*clang version ([0-9]+\.[0-9]+\.[0-9]+).*:\1:p'
+ )
+ bindgen_libclang_min_version=$($min_tool_version llvm)
+ bindgen_libclang_cversion=$(get_canonical_version $bindgen_libclang_version)
+ bindgen_libclang_min_cversion=$(get_canonical_version $bindgen_libclang_min_version)
+ if [ "$bindgen_libclang_cversion" -lt "$bindgen_libclang_min_cversion" ]; then
+-	if [ "$1" = -v ]; then
+-		echo >&2 "***"
+-		echo >&2 "*** libclang (used by the Rust bindings generator '$BINDGEN') is too old."
+-		echo >&2 "***   Your version:    $bindgen_libclang_version"
+-		echo >&2 "***   Minimum version: $bindgen_libclang_min_version"
+-		echo >&2 "***"
+-	fi
++	echo >&2 "***"
++	echo >&2 "*** libclang (used by the Rust bindings generator '$BINDGEN') is too old."
++	echo >&2 "***   Your version:    $bindgen_libclang_version"
++	echo >&2 "***   Minimum version: $bindgen_libclang_min_version"
++	echo >&2 "***"
+ 	exit 1
+ fi
+ 
+@@ -125,21 +131,19 @@ fi
+ #
+ # In the future, we might be able to perform a full version check, see
+ # https://github.com/rust-lang/rust-bindgen/issues/2138.
+-if [ "$1" = -v ]; then
+-	cc_name=$($(dirname $0)/cc-version.sh "$CC" | cut -f1 -d' ')
+-	if [ "$cc_name" = Clang ]; then
+-		clang_version=$( \
+-			LC_ALL=C "$CC" --version 2>/dev/null \
+-				| sed -nE '1s:.*version ([0-9]+\.[0-9]+\.[0-9]+).*:\1:p'
+-		)
+-		if [ "$clang_version" != "$bindgen_libclang_version" ]; then
+-			echo >&2 "***"
+-			echo >&2 "*** libclang (used by the Rust bindings generator '$BINDGEN')"
+-			echo >&2 "*** version does not match Clang's. This may be a problem."
+-			echo >&2 "***   libclang version: $bindgen_libclang_version"
+-			echo >&2 "***   Clang version:    $clang_version"
+-			echo >&2 "***"
+-		fi
++cc_name=$($(dirname $0)/cc-version.sh $CC | cut -f1 -d' ')
++if [ "$cc_name" = Clang ]; then
++	clang_version=$( \
++		LC_ALL=C $CC --version 2>/dev/null \
++			| sed -nE '1s:.*version ([0-9]+\.[0-9]+\.[0-9]+).*:\1:p'
++	)
++	if [ "$clang_version" != "$bindgen_libclang_version" ]; then
++		echo >&2 "***"
++		echo >&2 "*** libclang (used by the Rust bindings generator '$BINDGEN')"
++		echo >&2 "*** version does not match Clang's. This may be a problem."
++		echo >&2 "***   libclang version: $bindgen_libclang_version"
++		echo >&2 "***   Clang version:    $clang_version"
++		echo >&2 "***"
+ 	fi
+ fi
+ 
+@@ -150,11 +154,9 @@ rustc_sysroot=$("$RUSTC" $KRUSTFLAGS --print sysroot)
+ rustc_src=${RUST_LIB_SRC:-"$rustc_sysroot/lib/rustlib/src/rust/library"}
+ rustc_src_core="$rustc_src/core/src/lib.rs"
+ if [ ! -e "$rustc_src_core" ]; then
+-	if [ "$1" = -v ]; then
+-		echo >&2 "***"
+-		echo >&2 "*** Source code for the 'core' standard library could not be found"
+-		echo >&2 "*** at '$rustc_src_core'."
+-		echo >&2 "***"
+-	fi
++	echo >&2 "***"
++	echo >&2 "*** Source code for the 'core' standard library could not be found"
++	echo >&2 "*** at '$rustc_src_core'."
++	echo >&2 "***"
+ 	exit 1
+ fi
+diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig
+index 60a511c6b583e..c17660bf5f347 100644
+--- a/security/integrity/ima/Kconfig
++++ b/security/integrity/ima/Kconfig
+@@ -248,18 +248,6 @@ config IMA_APPRAISE_MODSIG
+ 	   The modsig keyword can be used in the IMA policy to allow a hook
+ 	   to accept such signatures.
+ 
+-config IMA_TRUSTED_KEYRING
+-	bool "Require all keys on the .ima keyring be signed (deprecated)"
+-	depends on IMA_APPRAISE && SYSTEM_TRUSTED_KEYRING
+-	depends on INTEGRITY_ASYMMETRIC_KEYS
+-	select INTEGRITY_TRUSTED_KEYRING
+-	default y
+-	help
+-	   This option requires that all keys added to the .ima
+-	   keyring be signed by a key on the system trusted keyring.
+-
+-	   This option is deprecated in favor of INTEGRITY_TRUSTED_KEYRING
+-
+ config IMA_KEYRINGS_PERMIT_SIGNED_BY_BUILTIN_OR_SECONDARY
+ 	bool "Permit keys validly signed by a built-in or secondary CA cert (EXPERIMENTAL)"
+ 	depends on SYSTEM_TRUSTED_KEYRING
+diff --git a/security/security.c b/security/security.c
+index b720424ca37d9..549104a447e36 100644
+--- a/security/security.c
++++ b/security/security.c
+@@ -1138,6 +1138,20 @@ void security_bprm_committed_creds(struct linux_binprm *bprm)
+ 	call_void_hook(bprm_committed_creds, bprm);
+ }
+ 
++/**
++ * security_fs_context_submount() - Initialise fc->security
++ * @fc: new filesystem context
++ * @reference: dentry reference for submount/remount
++ *
++ * Fill out the ->security field for a new fs_context.
++ *
++ * Return: Returns 0 on success or negative error code on failure.
++ */
++int security_fs_context_submount(struct fs_context *fc, struct super_block *reference)
++{
++	return call_int_hook(fs_context_submount, 0, fc, reference);
++}
++
+ /**
+  * security_fs_context_dup() - Duplicate a fs_context LSM blob
+  * @fc: destination filesystem context
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index d06e350fedee5..afd6637440418 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -2745,6 +2745,27 @@ static int selinux_umount(struct vfsmount *mnt, int flags)
+ 				   FILESYSTEM__UNMOUNT, NULL);
+ }
+ 
++static int selinux_fs_context_submount(struct fs_context *fc,
++				   struct super_block *reference)
++{
++	const struct superblock_security_struct *sbsec;
++	struct selinux_mnt_opts *opts;
++
++	opts = kzalloc(sizeof(*opts), GFP_KERNEL);
++	if (!opts)
++		return -ENOMEM;
++
++	sbsec = selinux_superblock(reference);
++	if (sbsec->flags & FSCONTEXT_MNT)
++		opts->fscontext_sid = sbsec->sid;
++	if (sbsec->flags & CONTEXT_MNT)
++		opts->context_sid = sbsec->mntpoint_sid;
++	if (sbsec->flags & DEFCONTEXT_MNT)
++		opts->defcontext_sid = sbsec->def_sid;
++	fc->security = opts;
++	return 0;
++}
++
+ static int selinux_fs_context_dup(struct fs_context *fc,
+ 				  struct fs_context *src_fc)
+ {
+@@ -7182,6 +7203,7 @@ static struct security_hook_list selinux_hooks[] __ro_after_init = {
+ 	/*
+ 	 * PUT "CLONING" (ACCESSING + ALLOCATING) HOOKS HERE
+ 	 */
++	LSM_HOOK_INIT(fs_context_submount, selinux_fs_context_submount),
+ 	LSM_HOOK_INIT(fs_context_dup, selinux_fs_context_dup),
+ 	LSM_HOOK_INIT(fs_context_parse_param, selinux_fs_context_parse_param),
+ 	LSM_HOOK_INIT(sb_eat_lsm_opts, selinux_sb_eat_lsm_opts),
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 6e270cf3fd30c..a8201cf22f20b 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -614,6 +614,56 @@ out_opt_err:
+ 	return -EINVAL;
+ }
+ 
++/**
++ * smack_fs_context_submount - Initialise security data for a filesystem context
++ * @fc: The filesystem context.
++ * @reference: reference superblock
++ *
++ * Returns 0 on success or -ENOMEM on error.
++ */
++static int smack_fs_context_submount(struct fs_context *fc,
++				 struct super_block *reference)
++{
++	struct superblock_smack *sbsp;
++	struct smack_mnt_opts *ctx;
++	struct inode_smack *isp;
++
++	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
++	if (!ctx)
++		return -ENOMEM;
++	fc->security = ctx;
++
++	sbsp = smack_superblock(reference);
++	isp = smack_inode(reference->s_root->d_inode);
++
++	if (sbsp->smk_default) {
++		ctx->fsdefault = kstrdup(sbsp->smk_default->smk_known, GFP_KERNEL);
++		if (!ctx->fsdefault)
++			return -ENOMEM;
++	}
++
++	if (sbsp->smk_floor) {
++		ctx->fsfloor = kstrdup(sbsp->smk_floor->smk_known, GFP_KERNEL);
++		if (!ctx->fsfloor)
++			return -ENOMEM;
++	}
++
++	if (sbsp->smk_hat) {
++		ctx->fshat = kstrdup(sbsp->smk_hat->smk_known, GFP_KERNEL);
++		if (!ctx->fshat)
++			return -ENOMEM;
++	}
++
++	if (isp->smk_flags & SMK_INODE_TRANSMUTE) {
++		if (sbsp->smk_root) {
++			ctx->fstransmute = kstrdup(sbsp->smk_root->smk_known, GFP_KERNEL);
++			if (!ctx->fstransmute)
++				return -ENOMEM;
++		}
++	}
++	return 0;
++}
++
+ /**
+  * smack_fs_context_dup - Duplicate the security data on fs_context duplication
+  * @fc: The new filesystem context.
+@@ -4876,6 +4926,7 @@ static struct security_hook_list smack_hooks[] __ro_after_init = {
+ 	LSM_HOOK_INIT(ptrace_traceme, smack_ptrace_traceme),
+ 	LSM_HOOK_INIT(syslog, smack_syslog),
+ 
++	LSM_HOOK_INIT(fs_context_submount, smack_fs_context_submount),
+ 	LSM_HOOK_INIT(fs_context_dup, smack_fs_context_dup),
+ 	LSM_HOOK_INIT(fs_context_parse_param, smack_fs_context_parse_param),
+ 
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index 5590eaad241bb..25f67d1b5c73e 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -896,7 +896,7 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+ 	}
+ 
+ 	ret = sscanf(rule, "%d", &catlen);
+-	if (ret != 1 || catlen > SMACK_CIPSO_MAXCATNUM)
++	if (ret != 1 || catlen < 0 || catlen > SMACK_CIPSO_MAXCATNUM)
+ 		goto out;
+ 
+ 	if (format == SMK_FIXED24_FMT &&
+diff --git a/sound/Kconfig b/sound/Kconfig
+index 0ddfb717b81dc..466e848689bd1 100644
+--- a/sound/Kconfig
++++ b/sound/Kconfig
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ menuconfig SOUND
+ 	tristate "Sound card support"
+-	depends on HAS_IOMEM
++	depends on HAS_IOMEM || UML
+ 	help
+ 	  If you have a sound card in your computer, i.e. if it can say more
+ 	  than an occasional beep, say Y.
+diff --git a/sound/core/pcm_compat.c b/sound/core/pcm_compat.c
+index 42c2ada8e8887..c96483091f30a 100644
+--- a/sound/core/pcm_compat.c
++++ b/sound/core/pcm_compat.c
+@@ -253,10 +253,14 @@ static int snd_pcm_ioctl_hw_params_compat(struct snd_pcm_substream *substream,
+ 		goto error;
+ 	}
+ 
+-	if (refine)
++	if (refine) {
+ 		err = snd_pcm_hw_refine(substream, data);
+-	else
++		if (err < 0)
++			goto error;
++		err = fixup_unreferenced_params(substream, data);
++	} else {
+ 		err = snd_pcm_hw_params(substream, data);
++	}
+ 	if (err < 0)
+ 		goto error;
+ 	if (copy_to_user(data32, data, sizeof(*data32)) ||
+diff --git a/sound/core/seq/seq_memory.c b/sound/core/seq/seq_memory.c
+index 174585bf59d29..b603bb93f8960 100644
+--- a/sound/core/seq/seq_memory.c
++++ b/sound/core/seq/seq_memory.c
+@@ -187,8 +187,13 @@ int snd_seq_expand_var_event(const struct snd_seq_event *event, int count, char
+ 	err = expand_var_event(event, 0, len, buf, in_kernel);
+ 	if (err < 0)
+ 		return err;
+-	if (len != newlen)
+-		memset(buf + len, 0, newlen - len);
++	if (len != newlen) {
++		if (in_kernel)
++			memset(buf + len, 0, newlen - len);
++		else if (clear_user((__force void __user *)buf + len,
++				    newlen - len))
++			return -EFAULT;
++	}
+ 	return newlen;
+ }
+ EXPORT_SYMBOL(snd_seq_expand_var_event);
+diff --git a/sound/core/ump.c b/sound/core/ump.c
+index 246348766ec16..1e4e1e428b205 100644
+--- a/sound/core/ump.c
++++ b/sound/core/ump.c
+@@ -984,7 +984,7 @@ static int snd_ump_legacy_open(struct snd_rawmidi_substream *substream)
+ {
+ 	struct snd_ump_endpoint *ump = substream->rmidi->private_data;
+ 	int dir = substream->stream;
+-	int group = substream->number;
++	int group = ump->legacy_mapping[substream->number];
+ 	int err;
+ 
+ 	mutex_lock(&ump->open_mutex);
+@@ -1016,7 +1016,7 @@ static int snd_ump_legacy_close(struct snd_rawmidi_substream *substream)
+ {
+ 	struct snd_ump_endpoint *ump = substream->rmidi->private_data;
+ 	int dir = substream->stream;
+-	int group = substream->number;
++	int group = ump->legacy_mapping[substream->number];
+ 
+ 	mutex_lock(&ump->open_mutex);
+ 	spin_lock_irq(&ump->legacy_locks[dir]);
+@@ -1123,21 +1123,62 @@ static void process_legacy_input(struct snd_ump_endpoint *ump, const u32 *src,
+ 	spin_unlock_irqrestore(&ump->legacy_locks[dir], flags);
+ }
+ 
++/* Fill ump->legacy_mapping[] for groups to be used for legacy rawmidi */
++static int fill_legacy_mapping(struct snd_ump_endpoint *ump)
++{
++	struct snd_ump_block *fb;
++	unsigned int group_maps = 0;
++	int i, num;
++
++	if (ump->info.flags & SNDRV_UMP_EP_INFO_STATIC_BLOCKS) {
++		list_for_each_entry(fb, &ump->block_list, list) {
++			for (i = 0; i < fb->info.num_groups; i++)
++				group_maps |= 1U << (fb->info.first_group + i);
++		}
++		if (!group_maps)
++			ump_info(ump, "No UMP Group is found in FB\n");
++	}
++
++	/* use all groups for non-static case */
++	if (!group_maps)
++		group_maps = (1U << SNDRV_UMP_MAX_GROUPS) - 1;
++
++	num = 0;
++	for (i = 0; i < SNDRV_UMP_MAX_GROUPS; i++)
++		if (group_maps & (1U << i))
++			ump->legacy_mapping[num++] = i;
++
++	return num;
++}
++
++static void fill_substream_names(struct snd_ump_endpoint *ump,
++				 struct snd_rawmidi *rmidi, int dir)
++{
++	struct snd_rawmidi_substream *s;
++
++	list_for_each_entry(s, &rmidi->streams[dir].substreams, list)
++		snprintf(s->name, sizeof(s->name), "Group %d (%.16s)",
++			 ump->legacy_mapping[s->number] + 1, ump->info.name);
++}
++
+ int snd_ump_attach_legacy_rawmidi(struct snd_ump_endpoint *ump,
+ 				  char *id, int device)
+ {
+ 	struct snd_rawmidi *rmidi;
+ 	bool input, output;
+-	int err;
++	int err, num;
+ 
+-	ump->out_cvts = kcalloc(16, sizeof(*ump->out_cvts), GFP_KERNEL);
++	ump->out_cvts = kcalloc(SNDRV_UMP_MAX_GROUPS,
++				sizeof(*ump->out_cvts), GFP_KERNEL);
+ 	if (!ump->out_cvts)
+ 		return -ENOMEM;
+ 
++	num = fill_legacy_mapping(ump);
++
+ 	input = ump->core.info_flags & SNDRV_RAWMIDI_INFO_INPUT;
+ 	output = ump->core.info_flags & SNDRV_RAWMIDI_INFO_OUTPUT;
+ 	err = snd_rawmidi_new(ump->core.card, id, device,
+-			      output ? 16 : 0, input ? 16 : 0,
++			      output ? num : 0, input ? num : 0,
+ 			      &rmidi);
+ 	if (err < 0) {
+ 		kfree(ump->out_cvts);
+@@ -1150,10 +1191,17 @@ int snd_ump_attach_legacy_rawmidi(struct snd_ump_endpoint *ump,
+ 	if (output)
+ 		snd_rawmidi_set_ops(rmidi, SNDRV_RAWMIDI_STREAM_OUTPUT,
+ 				    &snd_ump_legacy_output_ops);
++	snprintf(rmidi->name, sizeof(rmidi->name), "%.68s (MIDI 1.0)",
++		 ump->info.name);
+ 	rmidi->info_flags = ump->core.info_flags & ~SNDRV_RAWMIDI_INFO_UMP;
+ 	rmidi->ops = &snd_ump_legacy_ops;
+ 	rmidi->private_data = ump;
+ 	ump->legacy_rmidi = rmidi;
++	if (input)
++		fill_substream_names(ump, rmidi, SNDRV_RAWMIDI_STREAM_INPUT);
++	if (output)
++		fill_substream_names(ump, rmidi, SNDRV_RAWMIDI_STREAM_OUTPUT);
++
+ 	ump_dbg(ump, "Created a legacy rawmidi #%d (%s)\n", device, id);
+ 	return 0;
+ }
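
Note (not part of the patch): fill_legacy_mapping() above builds legacy_mapping[] from the UMP groups claimed by the function blocks (falling back to all groups when the block list is not static), so legacy rawmidi substream numbers map onto active groups only. Stripped of the ALSA types, the bitmask-to-index-array step is just the loop below; the group count of 16 mirrors SNDRV_UMP_MAX_GROUPS.

#include <stdio.h>

#define MAX_GROUPS 16   /* stand-in for SNDRV_UMP_MAX_GROUPS */

/* Turn a bitmask of active groups into a dense mapping table:
 * mapping[substream_number] = group index.  Returns the number of entries.
 */
static int fill_mapping(unsigned int group_maps, unsigned char *mapping)
{
    int num = 0;

    if (!group_maps)                         /* nothing claimed: use all */
        group_maps = (1U << MAX_GROUPS) - 1;

    for (int i = 0; i < MAX_GROUPS; i++)
        if (group_maps & (1U << i))
            mapping[num++] = (unsigned char)i;
    return num;
}

int main(void)
{
    unsigned char mapping[MAX_GROUPS];
    int num = fill_mapping(0x0D, mapping);   /* groups 0, 2, 3 active */

    for (int i = 0; i < num; i++)
        printf("substream %d -> group %d\n", i, mapping[i]);
    return 0;
}
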
+diff --git a/sound/pci/ac97/ac97_codec.c b/sound/pci/ac97/ac97_codec.c
+index 80a65b8ad7b9b..25f93e56cfc7a 100644
+--- a/sound/pci/ac97/ac97_codec.c
++++ b/sound/pci/ac97/ac97_codec.c
+@@ -2069,10 +2069,9 @@ int snd_ac97_mixer(struct snd_ac97_bus *bus, struct snd_ac97_template *template,
+ 		.dev_disconnect =	snd_ac97_dev_disconnect,
+ 	};
+ 
+-	if (!rac97)
+-		return -EINVAL;
+-	if (snd_BUG_ON(!bus || !template))
++	if (snd_BUG_ON(!bus || !template || !rac97))
+ 		return -EINVAL;
++	*rac97 = NULL;
+ 	if (snd_BUG_ON(template->num >= 4))
+ 		return -EINVAL;
+ 	if (bus->codec[template->num])
+diff --git a/sound/pci/hda/patch_cs8409.c b/sound/pci/hda/patch_cs8409.c
+index 0ba1fbcbb21e4..627899959ffe8 100644
+--- a/sound/pci/hda/patch_cs8409.c
++++ b/sound/pci/hda/patch_cs8409.c
+@@ -888,7 +888,7 @@ static void cs42l42_resume(struct sub_codec *cs42l42)
+ 
+ 	/* Initialize CS42L42 companion codec */
+ 	cs8409_i2c_bulk_write(cs42l42, cs42l42->init_seq, cs42l42->init_seq_num);
+-	usleep_range(30000, 35000);
++	msleep(CS42L42_INIT_TIMEOUT_MS);
+ 
+ 	/* Clear interrupts, by reading interrupt status registers */
+ 	cs8409_i2c_bulk_read(cs42l42, irq_regs, ARRAY_SIZE(irq_regs));
+diff --git a/sound/pci/hda/patch_cs8409.h b/sound/pci/hda/patch_cs8409.h
+index 2a8dfb4ff046b..937e9387abdc7 100644
+--- a/sound/pci/hda/patch_cs8409.h
++++ b/sound/pci/hda/patch_cs8409.h
+@@ -229,6 +229,7 @@ enum cs8409_coefficient_index_registers {
+ #define CS42L42_I2C_SLEEP_US			(2000)
+ #define CS42L42_PDN_TIMEOUT_US			(250000)
+ #define CS42L42_PDN_SLEEP_US			(2000)
++#define CS42L42_INIT_TIMEOUT_MS			(45)
+ #define CS42L42_FULL_SCALE_VOL_MASK		(2)
+ #define CS42L42_FULL_SCALE_VOL_0DB		(1)
+ #define CS42L42_FULL_SCALE_VOL_MINUS6DB		(0)
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index c2de4ee721836..947473d2da7d2 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -1708,6 +1708,7 @@ config SND_SOC_STA529
+ config SND_SOC_STAC9766
+ 	tristate
+ 	depends on SND_SOC_AC97_BUS
++	select REGMAP_AC97
+ 
+ config SND_SOC_STI_SAS
+ 	tristate "codec Audio support for STI SAS codec"
+diff --git a/sound/soc/codecs/cs43130.h b/sound/soc/codecs/cs43130.h
+index 1dd8936743132..90e8895275e77 100644
+--- a/sound/soc/codecs/cs43130.h
++++ b/sound/soc/codecs/cs43130.h
+@@ -381,88 +381,88 @@ struct cs43130_clk_gen {
+ 
+ /* frm_size = 16 */
+ static const struct cs43130_clk_gen cs43130_16_clk_gen[] = {
+-	{ 22579200,	32000,		.v = { 441,	10, }, },
+-	{ 22579200,	44100,		.v = { 32,	1, }, },
+-	{ 22579200,	48000,		.v = { 147,	5, }, },
+-	{ 22579200,	88200,		.v = { 16,	1, }, },
+-	{ 22579200,	96000,		.v = { 147,	10, }, },
+-	{ 22579200,	176400,		.v = { 8,	1, }, },
+-	{ 22579200,	192000,		.v = { 147,	20, }, },
+-	{ 22579200,	352800,		.v = { 4,	1, }, },
+-	{ 22579200,	384000,		.v = { 147,	40, }, },
+-	{ 24576000,	32000,		.v = { 48,	1, }, },
+-	{ 24576000,	44100,		.v = { 5120,	147, }, },
+-	{ 24576000,	48000,		.v = { 32,	1, }, },
+-	{ 24576000,	88200,		.v = { 2560,	147, }, },
+-	{ 24576000,	96000,		.v = { 16,	1, }, },
+-	{ 24576000,	176400,		.v = { 1280,	147, }, },
+-	{ 24576000,	192000,		.v = { 8,	1, }, },
+-	{ 24576000,	352800,		.v = { 640,	147, }, },
+-	{ 24576000,	384000,		.v = { 4,	1, }, },
++	{ 22579200,	32000,		.v = { 10,	441, }, },
++	{ 22579200,	44100,		.v = { 1,	32, }, },
++	{ 22579200,	48000,		.v = { 5,	147, }, },
++	{ 22579200,	88200,		.v = { 1,	16, }, },
++	{ 22579200,	96000,		.v = { 10,	147, }, },
++	{ 22579200,	176400,		.v = { 1,	8, }, },
++	{ 22579200,	192000,		.v = { 20,	147, }, },
++	{ 22579200,	352800,		.v = { 1,	4, }, },
++	{ 22579200,	384000,		.v = { 40,	147, }, },
++	{ 24576000,	32000,		.v = { 1,	48, }, },
++	{ 24576000,	44100,		.v = { 147,	5120, }, },
++	{ 24576000,	48000,		.v = { 1,	32, }, },
++	{ 24576000,	88200,		.v = { 147,	2560, }, },
++	{ 24576000,	96000,		.v = { 1,	16, }, },
++	{ 24576000,	176400,		.v = { 147,	1280, }, },
++	{ 24576000,	192000,		.v = { 1,	8, }, },
++	{ 24576000,	352800,		.v = { 147,	640, }, },
++	{ 24576000,	384000,		.v = { 1,	4, }, },
+ };
+ 
+ /* frm_size = 32 */
+ static const struct cs43130_clk_gen cs43130_32_clk_gen[] = {
+-	{ 22579200,	32000,		.v = { 441,	20, }, },
+-	{ 22579200,	44100,		.v = { 16,	1, }, },
+-	{ 22579200,	48000,		.v = { 147,	10, }, },
+-	{ 22579200,	88200,		.v = { 8,	1, }, },
+-	{ 22579200,	96000,		.v = { 147,	20, }, },
+-	{ 22579200,	176400,		.v = { 4,	1, }, },
+-	{ 22579200,	192000,		.v = { 147,	40, }, },
+-	{ 22579200,	352800,		.v = { 2,	1, }, },
+-	{ 22579200,	384000,		.v = { 147,	80, }, },
+-	{ 24576000,	32000,		.v = { 24,	1, }, },
+-	{ 24576000,	44100,		.v = { 2560,	147, }, },
+-	{ 24576000,	48000,		.v = { 16,	1, }, },
+-	{ 24576000,	88200,		.v = { 1280,	147, }, },
+-	{ 24576000,	96000,		.v = { 8,	1, }, },
+-	{ 24576000,	176400,		.v = { 640,	147, }, },
+-	{ 24576000,	192000,		.v = { 4,	1, }, },
+-	{ 24576000,	352800,		.v = { 320,	147, }, },
+-	{ 24576000,	384000,		.v = { 2,	1, }, },
++	{ 22579200,	32000,		.v = { 20,	441, }, },
++	{ 22579200,	44100,		.v = { 1,	16, }, },
++	{ 22579200,	48000,		.v = { 10,	147, }, },
++	{ 22579200,	88200,		.v = { 1,	8, }, },
++	{ 22579200,	96000,		.v = { 20,	147, }, },
++	{ 22579200,	176400,		.v = { 1,	4, }, },
++	{ 22579200,	192000,		.v = { 40,	147, }, },
++	{ 22579200,	352800,		.v = { 1,	2, }, },
++	{ 22579200,	384000,		.v = { 80,	147, }, },
++	{ 24576000,	32000,		.v = { 1,	24, }, },
++	{ 24576000,	44100,		.v = { 147,	2560, }, },
++	{ 24576000,	48000,		.v = { 1,	16, }, },
++	{ 24576000,	88200,		.v = { 147,	1280, }, },
++	{ 24576000,	96000,		.v = { 1,	8, }, },
++	{ 24576000,	176400,		.v = { 147,	640, }, },
++	{ 24576000,	192000,		.v = { 1,	4, }, },
++	{ 24576000,	352800,		.v = { 147,	320, }, },
++	{ 24576000,	384000,		.v = { 1,	2, }, },
+ };
+ 
+ /* frm_size = 48 */
+ static const struct cs43130_clk_gen cs43130_48_clk_gen[] = {
+-	{ 22579200,	32000,		.v = { 147,	100, }, },
+-	{ 22579200,	44100,		.v = { 32,	3, }, },
+-	{ 22579200,	48000,		.v = { 49,	5, }, },
+-	{ 22579200,	88200,		.v = { 16,	3, }, },
+-	{ 22579200,	96000,		.v = { 49,	10, }, },
+-	{ 22579200,	176400,		.v = { 8,	3, }, },
+-	{ 22579200,	192000,		.v = { 49,	20, }, },
+-	{ 22579200,	352800,		.v = { 4,	3, }, },
+-	{ 22579200,	384000,		.v = { 49,	40, }, },
+-	{ 24576000,	32000,		.v = { 16,	1, }, },
+-	{ 24576000,	44100,		.v = { 5120,	441, }, },
+-	{ 24576000,	48000,		.v = { 32,	3, }, },
+-	{ 24576000,	88200,		.v = { 2560,	441, }, },
+-	{ 24576000,	96000,		.v = { 16,	3, }, },
+-	{ 24576000,	176400,		.v = { 1280,	441, }, },
+-	{ 24576000,	192000,		.v = { 8,	3, }, },
+-	{ 24576000,	352800,		.v = { 640,	441, }, },
+-	{ 24576000,	384000,		.v = { 4,	3, }, },
++	{ 22579200,	32000,		.v = { 100,	147, }, },
++	{ 22579200,	44100,		.v = { 3,	32, }, },
++	{ 22579200,	48000,		.v = { 5,	49, }, },
++	{ 22579200,	88200,		.v = { 3,	16, }, },
++	{ 22579200,	96000,		.v = { 10,	49, }, },
++	{ 22579200,	176400,		.v = { 3,	8, }, },
++	{ 22579200,	192000,		.v = { 20,	49, }, },
++	{ 22579200,	352800,		.v = { 3,	4, }, },
++	{ 22579200,	384000,		.v = { 40,	49, }, },
++	{ 24576000,	32000,		.v = { 1,	16, }, },
++	{ 24576000,	44100,		.v = { 441,	5120, }, },
++	{ 24576000,	48000,		.v = { 3,	32, }, },
++	{ 24576000,	88200,		.v = { 441,	2560, }, },
++	{ 24576000,	96000,		.v = { 3,	16, }, },
++	{ 24576000,	176400,		.v = { 441,	1280, }, },
++	{ 24576000,	192000,		.v = { 3,	8, }, },
++	{ 24576000,	352800,		.v = { 441,	640, }, },
++	{ 24576000,	384000,		.v = { 3,	4, }, },
+ };
+ 
+ /* frm_size = 64 */
+ static const struct cs43130_clk_gen cs43130_64_clk_gen[] = {
+-	{ 22579200,	32000,		.v = { 441,	40, }, },
+-	{ 22579200,	44100,		.v = { 8,	1, }, },
+-	{ 22579200,	48000,		.v = { 147,	20, }, },
+-	{ 22579200,	88200,		.v = { 4,	1, }, },
+-	{ 22579200,	96000,		.v = { 147,	40, }, },
+-	{ 22579200,	176400,		.v = { 2,	1, }, },
+-	{ 22579200,	192000,		.v = { 147,	80, }, },
++	{ 22579200,	32000,		.v = { 40,	441, }, },
++	{ 22579200,	44100,		.v = { 1,	8, }, },
++	{ 22579200,	48000,		.v = { 20,	147, }, },
++	{ 22579200,	88200,		.v = { 1,	4, }, },
++	{ 22579200,	96000,		.v = { 40,	147, }, },
++	{ 22579200,	176400,		.v = { 1,	2, }, },
++	{ 22579200,	192000,		.v = { 80,	147, }, },
+ 	{ 22579200,	352800,		.v = { 1,	1, }, },
+-	{ 24576000,	32000,		.v = { 12,	1, }, },
+-	{ 24576000,	44100,		.v = { 1280,	147, }, },
+-	{ 24576000,	48000,		.v = { 8,	1, }, },
+-	{ 24576000,	88200,		.v = { 640,	147, }, },
+-	{ 24576000,	96000,		.v = { 4,	1, }, },
+-	{ 24576000,	176400,		.v = { 320,	147, }, },
+-	{ 24576000,	192000,		.v = { 2,	1, }, },
+-	{ 24576000,	352800,		.v = { 160,	147, }, },
++	{ 24576000,	32000,		.v = { 1,	12, }, },
++	{ 24576000,	44100,		.v = { 147,	1280, }, },
++	{ 24576000,	48000,		.v = { 1,	8, }, },
++	{ 24576000,	88200,		.v = { 147,	640, }, },
++	{ 24576000,	96000,		.v = { 1,	4, }, },
++	{ 24576000,	176400,		.v = { 147,	320, }, },
++	{ 24576000,	192000,		.v = { 1,	2, }, },
++	{ 24576000,	352800,		.v = { 147,	160, }, },
+ 	{ 24576000,	384000,		.v = { 1,	1, }, },
+ };
+ 
+diff --git a/sound/soc/fsl/fsl_qmc_audio.c b/sound/soc/fsl/fsl_qmc_audio.c
+index 7cbb8e4758ccc..56d6b0b039a2e 100644
+--- a/sound/soc/fsl/fsl_qmc_audio.c
++++ b/sound/soc/fsl/fsl_qmc_audio.c
+@@ -372,8 +372,8 @@ static int qmc_dai_hw_rule_format_by_channels(struct qmc_dai *qmc_dai,
+ 	struct snd_mask *f_old = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT);
+ 	unsigned int channels = params_channels(params);
+ 	unsigned int slot_width;
++	snd_pcm_format_t format;
+ 	struct snd_mask f_new;
+-	unsigned int i;
+ 
+ 	if (!channels || channels > nb_ts) {
+ 		dev_err(qmc_dai->dev, "channels %u not supported\n",
+@@ -384,10 +384,10 @@ static int qmc_dai_hw_rule_format_by_channels(struct qmc_dai *qmc_dai,
+ 	slot_width = (nb_ts / channels) * 8;
+ 
+ 	snd_mask_none(&f_new);
+-	for (i = 0; i <= SNDRV_PCM_FORMAT_LAST; i++) {
+-		if (snd_mask_test(f_old, i)) {
+-			if (snd_pcm_format_physical_width(i) <= slot_width)
+-				snd_mask_set(&f_new, i);
++	pcm_for_each_format(format) {
++		if (snd_mask_test_format(f_old, format)) {
++			if (snd_pcm_format_physical_width(format) <= slot_width)
++				snd_mask_set_format(&f_new, format);
+ 		}
+ 	}
+ 
+@@ -551,26 +551,26 @@ static const struct snd_soc_dai_ops qmc_dai_ops = {
+ 
+ static u64 qmc_audio_formats(u8 nb_ts)
+ {
+-	u64 formats;
+-	unsigned int chan_width;
+ 	unsigned int format_width;
+-	int i;
++	unsigned int chan_width;
++	snd_pcm_format_t format;
++	u64 formats_mask;
+ 
+ 	if (!nb_ts)
+ 		return 0;
+ 
+-	formats = 0;
++	formats_mask = 0;
+ 	chan_width = nb_ts * 8;
+-	for (i = 0; i <= SNDRV_PCM_FORMAT_LAST; i++) {
++	pcm_for_each_format(format) {
+ 		/*
+ 		 * Support format other than little-endian (ie big-endian or
+ 		 * without endianness such as 8bit formats)
+ 		 */
+-		if (snd_pcm_format_little_endian(i) == 1)
++		if (snd_pcm_format_little_endian(format) == 1)
+ 			continue;
+ 
+ 		/* Support physical width multiple of 8bit */
+-		format_width = snd_pcm_format_physical_width(i);
++		format_width = snd_pcm_format_physical_width(format);
+ 		if (format_width == 0 || format_width % 8)
+ 			continue;
+ 
+@@ -581,9 +581,9 @@ static u64 qmc_audio_formats(u8 nb_ts)
+ 		if (format_width > chan_width || chan_width % format_width)
+ 			continue;
+ 
+-		formats |= (1ULL << i);
++		formats_mask |= pcm_format_to_bits(format);
+ 	}
+-	return formats;
++	return formats_mask;
+ }
+ 
+ static int qmc_audio_dai_parse(struct qmc_audio *qmc_audio, struct device_node *np,
+diff --git a/sound/soc/loongson/loongson_card.c b/sound/soc/loongson/loongson_card.c
+index 9ded163297477..406ee8db1a3c5 100644
+--- a/sound/soc/loongson/loongson_card.c
++++ b/sound/soc/loongson/loongson_card.c
+@@ -208,7 +208,7 @@ static struct platform_driver loongson_audio_driver = {
+ 	.driver = {
+ 		.name = "loongson-asoc-card",
+ 		.pm = &snd_soc_pm_ops,
+-		.of_match_table = of_match_ptr(loongson_asoc_dt_ids),
++		.of_match_table = loongson_asoc_dt_ids,
+ 	},
+ };
+ module_platform_driver(loongson_audio_driver);
+diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
+index 02fdb683f75f3..b58921e7921f8 100644
+--- a/sound/soc/soc-compress.c
++++ b/sound/soc/soc-compress.c
+@@ -193,6 +193,7 @@ open_err:
+ 	snd_soc_dai_compr_shutdown(cpu_dai, cstream, 1);
+ out:
+ 	dpcm_path_put(&list);
++	snd_soc_dpcm_mutex_unlock(fe);
+ be_err:
+ 	fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
+ 	snd_soc_card_mutex_unlock(fe->card);
+diff --git a/sound/soc/sof/amd/acp.c b/sound/soc/sof/amd/acp.c
+index afb505461ea17..83cf08c4cf5f6 100644
+--- a/sound/soc/sof/amd/acp.c
++++ b/sound/soc/sof/amd/acp.c
+@@ -355,9 +355,9 @@ static irqreturn_t acp_irq_handler(int irq, void *dev_id)
+ 	unsigned int val;
+ 
+ 	val = snd_sof_dsp_read(sdev, ACP_DSP_BAR, base + DSP_SW_INTR_STAT_OFFSET);
+-	if (val) {
+-		val |= ACP_DSP_TO_HOST_IRQ;
+-		snd_sof_dsp_write(sdev, ACP_DSP_BAR, base + DSP_SW_INTR_STAT_OFFSET, val);
++	if (val & ACP_DSP_TO_HOST_IRQ) {
++		snd_sof_dsp_write(sdev, ACP_DSP_BAR, base + DSP_SW_INTR_STAT_OFFSET,
++				  ACP_DSP_TO_HOST_IRQ);
+ 		return IRQ_WAKE_THREAD;
+ 	}
+ 
+diff --git a/sound/soc/sof/intel/hda-mlink.c b/sound/soc/sof/intel/hda-mlink.c
+index b7cbf66badf5b..df87b3791c23e 100644
+--- a/sound/soc/sof/intel/hda-mlink.c
++++ b/sound/soc/sof/intel/hda-mlink.c
+@@ -331,14 +331,14 @@ static bool hdaml_link_check_cmdsync(u32 __iomem *lsync, u32 cmdsync_mask)
+ 	return !!(val & cmdsync_mask);
+ }
+ 
+-static void hdaml_link_set_lsdiid(u32 __iomem *lsdiid, int dev_num)
++static void hdaml_link_set_lsdiid(u16 __iomem *lsdiid, int dev_num)
+ {
+-	u32 val;
++	u16 val;
+ 
+-	val = readl(lsdiid);
++	val = readw(lsdiid);
+ 	val |= BIT(dev_num);
+ 
+-	writel(val, lsdiid);
++	writew(val, lsdiid);
+ }
+ 
+ static void hdaml_shim_map_stream_ch(u16 __iomem *pcmsycm, int lchan, int hchan,
+@@ -781,6 +781,8 @@ int hdac_bus_eml_sdw_map_stream_ch(struct hdac_bus *bus, int sublink, int y,
+ {
+ 	struct hdac_ext2_link *h2link;
+ 	u16 __iomem *pcmsycm;
++	int hchan;
++	int lchan;
+ 	u16 val;
+ 
+ 	h2link = find_ext2_link(bus, true, AZX_REG_ML_LEPTR_ID_SDW);
+@@ -791,9 +793,17 @@ int hdac_bus_eml_sdw_map_stream_ch(struct hdac_bus *bus, int sublink, int y,
+ 		h2link->instance_offset * sublink +
+ 		AZX_REG_SDW_SHIM_PCMSyCM(y);
+ 
++	if (channel_mask) {
++		hchan = __fls(channel_mask);
++		lchan = __ffs(channel_mask);
++	} else {
++		hchan = 0;
++		lchan = 0;
++	}
++
+ 	mutex_lock(&h2link->eml_lock);
+ 
+-	hdaml_shim_map_stream_ch(pcmsycm, 0, hweight32(channel_mask),
++	hdaml_shim_map_stream_ch(pcmsycm, lchan, hchan,
+ 				 stream_id, dir);
+ 
+ 	mutex_unlock(&h2link->eml_lock);
+diff --git a/sound/usb/midi2.c b/sound/usb/midi2.c
+index ee28357414795..1ec177fe284ed 100644
+--- a/sound/usb/midi2.c
++++ b/sound/usb/midi2.c
+@@ -265,7 +265,7 @@ static void free_midi_urbs(struct snd_usb_midi2_endpoint *ep)
+ 
+ 	if (!ep)
+ 		return;
+-	for (i = 0; i < ep->num_urbs; ++i) {
++	for (i = 0; i < NUM_URBS; ++i) {
+ 		ctx = &ep->urbs[i];
+ 		if (!ctx->urb)
+ 			break;
+@@ -279,6 +279,7 @@ static void free_midi_urbs(struct snd_usb_midi2_endpoint *ep)
+ }
+ 
+ /* allocate URBs for an EP */
++/* the callers should handle allocation errors via free_midi_urbs() */
+ static int alloc_midi_urbs(struct snd_usb_midi2_endpoint *ep)
+ {
+ 	struct snd_usb_midi2_urb *ctx;
+@@ -351,8 +352,10 @@ static int snd_usb_midi_v2_open(struct snd_ump_endpoint *ump, int dir)
+ 		return -EIO;
+ 	if (ep->direction == STR_OUT) {
+ 		err = alloc_midi_urbs(ep);
+-		if (err)
++		if (err) {
++			free_midi_urbs(ep);
+ 			return err;
++		}
+ 	}
+ 	return 0;
+ }
+@@ -990,7 +993,7 @@ static int parse_midi_2_0(struct snd_usb_midi2_interface *umidi)
+ 		}
+ 	}
+ 
+-	return attach_legacy_rawmidi(umidi);
++	return 0;
+ }
+ 
+ /* is the given interface for MIDI 2.0? */
+@@ -1059,12 +1062,6 @@ static void set_fallback_rawmidi_names(struct snd_usb_midi2_interface *umidi)
+ 			usb_string(dev, dev->descriptor.iSerialNumber,
+ 				   ump->info.product_id,
+ 				   sizeof(ump->info.product_id));
+-#if IS_ENABLED(CONFIG_SND_UMP_LEGACY_RAWMIDI)
+-		if (ump->legacy_rmidi && !*ump->legacy_rmidi->name)
+-			snprintf(ump->legacy_rmidi->name,
+-				 sizeof(ump->legacy_rmidi->name),
+-				 "%s (MIDI 1.0)", ump->info.name);
+-#endif
+ 	}
+ }
+ 
+@@ -1157,6 +1154,13 @@ int snd_usb_midi_v2_create(struct snd_usb_audio *chip,
+ 	}
+ 
+ 	set_fallback_rawmidi_names(umidi);
++
++	err = attach_legacy_rawmidi(umidi);
++	if (err < 0) {
++		usb_audio_err(chip, "Failed to create legacy rawmidi\n");
++		goto error;
++	}
++
+ 	return 0;
+ 
+  error:
+diff --git a/tools/bpf/bpftool/skeleton/pid_iter.bpf.c b/tools/bpf/bpftool/skeleton/pid_iter.bpf.c
+index eb05ea53afb12..26004f0c5a6ae 100644
+--- a/tools/bpf/bpftool/skeleton/pid_iter.bpf.c
++++ b/tools/bpf/bpftool/skeleton/pid_iter.bpf.c
+@@ -15,6 +15,19 @@ enum bpf_obj_type {
+ 	BPF_OBJ_BTF,
+ };
+ 
++struct bpf_perf_link___local {
++	struct bpf_link link;
++	struct file *perf_file;
++} __attribute__((preserve_access_index));
++
++struct perf_event___local {
++	u64 bpf_cookie;
++} __attribute__((preserve_access_index));
++
++enum bpf_link_type___local {
++	BPF_LINK_TYPE_PERF_EVENT___local = 7,
++};
++
+ extern const void bpf_link_fops __ksym;
+ extern const void bpf_map_fops __ksym;
+ extern const void bpf_prog_fops __ksym;
+@@ -41,10 +54,10 @@ static __always_inline __u32 get_obj_id(void *ent, enum bpf_obj_type type)
+ /* could be used only with BPF_LINK_TYPE_PERF_EVENT links */
+ static __u64 get_bpf_cookie(struct bpf_link *link)
+ {
+-	struct bpf_perf_link *perf_link;
+-	struct perf_event *event;
++	struct bpf_perf_link___local *perf_link;
++	struct perf_event___local *event;
+ 
+-	perf_link = container_of(link, struct bpf_perf_link, link);
++	perf_link = container_of(link, struct bpf_perf_link___local, link);
+ 	event = BPF_CORE_READ(perf_link, perf_file, private_data);
+ 	return BPF_CORE_READ(event, bpf_cookie);
+ }
+@@ -84,10 +97,13 @@ int iter(struct bpf_iter__task_file *ctx)
+ 	e.pid = task->tgid;
+ 	e.id = get_obj_id(file->private_data, obj_type);
+ 
+-	if (obj_type == BPF_OBJ_LINK) {
++	if (obj_type == BPF_OBJ_LINK &&
++	    bpf_core_enum_value_exists(enum bpf_link_type___local,
++				       BPF_LINK_TYPE_PERF_EVENT___local)) {
+ 		struct bpf_link *link = (struct bpf_link *) file->private_data;
+ 
+-		if (BPF_CORE_READ(link, type) == BPF_LINK_TYPE_PERF_EVENT) {
++		if (link->type == bpf_core_enum_value(enum bpf_link_type___local,
++						      BPF_LINK_TYPE_PERF_EVENT___local)) {
+ 			e.has_bpf_cookie = true;
+ 			e.bpf_cookie = get_bpf_cookie(link);
+ 		}
+diff --git a/tools/bpf/bpftool/skeleton/profiler.bpf.c b/tools/bpf/bpftool/skeleton/profiler.bpf.c
+index ce5b65e07ab10..2f80edc682f11 100644
+--- a/tools/bpf/bpftool/skeleton/profiler.bpf.c
++++ b/tools/bpf/bpftool/skeleton/profiler.bpf.c
+@@ -4,6 +4,12 @@
+ #include <bpf/bpf_helpers.h>
+ #include <bpf/bpf_tracing.h>
+ 
++struct bpf_perf_event_value___local {
++	__u64 counter;
++	__u64 enabled;
++	__u64 running;
++} __attribute__((preserve_access_index));
++
+ /* map of perf event fds, num_cpu * num_metric entries */
+ struct {
+ 	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
+@@ -15,14 +21,14 @@ struct {
+ struct {
+ 	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+ 	__uint(key_size, sizeof(u32));
+-	__uint(value_size, sizeof(struct bpf_perf_event_value));
++	__uint(value_size, sizeof(struct bpf_perf_event_value___local));
+ } fentry_readings SEC(".maps");
+ 
+ /* accumulated readings */
+ struct {
+ 	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+ 	__uint(key_size, sizeof(u32));
+-	__uint(value_size, sizeof(struct bpf_perf_event_value));
++	__uint(value_size, sizeof(struct bpf_perf_event_value___local));
+ } accum_readings SEC(".maps");
+ 
+ /* sample counts, one per cpu */
+@@ -39,7 +45,7 @@ const volatile __u32 num_metric = 1;
+ SEC("fentry/XXX")
+ int BPF_PROG(fentry_XXX)
+ {
+-	struct bpf_perf_event_value *ptrs[MAX_NUM_MATRICS];
++	struct bpf_perf_event_value___local *ptrs[MAX_NUM_MATRICS];
+ 	u32 key = bpf_get_smp_processor_id();
+ 	u32 i;
+ 
+@@ -53,10 +59,10 @@ int BPF_PROG(fentry_XXX)
+ 	}
+ 
+ 	for (i = 0; i < num_metric && i < MAX_NUM_MATRICS; i++) {
+-		struct bpf_perf_event_value reading;
++		struct bpf_perf_event_value___local reading;
+ 		int err;
+ 
+-		err = bpf_perf_event_read_value(&events, key, &reading,
++		err = bpf_perf_event_read_value(&events, key, (void *)&reading,
+ 						sizeof(reading));
+ 		if (err)
+ 			return 0;
+@@ -68,14 +74,14 @@ int BPF_PROG(fentry_XXX)
+ }
+ 
+ static inline void
+-fexit_update_maps(u32 id, struct bpf_perf_event_value *after)
++fexit_update_maps(u32 id, struct bpf_perf_event_value___local *after)
+ {
+-	struct bpf_perf_event_value *before, diff;
++	struct bpf_perf_event_value___local *before, diff;
+ 
+ 	before = bpf_map_lookup_elem(&fentry_readings, &id);
+ 	/* only account samples with a valid fentry_reading */
+ 	if (before && before->counter) {
+-		struct bpf_perf_event_value *accum;
++		struct bpf_perf_event_value___local *accum;
+ 
+ 		diff.counter = after->counter - before->counter;
+ 		diff.enabled = after->enabled - before->enabled;
+@@ -93,7 +99,7 @@ fexit_update_maps(u32 id, struct bpf_perf_event_value *after)
+ SEC("fexit/XXX")
+ int BPF_PROG(fexit_XXX)
+ {
+-	struct bpf_perf_event_value readings[MAX_NUM_MATRICS];
++	struct bpf_perf_event_value___local readings[MAX_NUM_MATRICS];
+ 	u32 cpu = bpf_get_smp_processor_id();
+ 	u32 i, zero = 0;
+ 	int err;
+@@ -102,7 +108,8 @@ int BPF_PROG(fexit_XXX)
+ 	/* read all events before updating the maps, to reduce error */
+ 	for (i = 0; i < num_metric && i < MAX_NUM_MATRICS; i++) {
+ 		err = bpf_perf_event_read_value(&events, cpu + i * num_cpu,
+-						readings + i, sizeof(*readings));
++						(void *)(readings + i),
++						sizeof(*readings));
+ 		if (err)
+ 			return 0;
+ 	}
+diff --git a/tools/include/nolibc/arch-aarch64.h b/tools/include/nolibc/arch-aarch64.h
+index 11f294a406b7c..b8c7b14c4ca85 100644
+--- a/tools/include/nolibc/arch-aarch64.h
++++ b/tools/include/nolibc/arch-aarch64.h
+@@ -175,7 +175,7 @@ char **environ __attribute__((weak));
+ const unsigned long *_auxv __attribute__((weak));
+ 
+ /* startup code */
+-void __attribute__((weak,noreturn,optimize("omit-frame-pointer"))) __no_stack_protector _start(void)
++void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector _start(void)
+ {
+ 	__asm__ volatile (
+ #ifdef _NOLIBC_STACKPROTECTOR
+diff --git a/tools/include/nolibc/arch-arm.h b/tools/include/nolibc/arch-arm.h
+index ca4c669874973..bd8bf2ebd43bf 100644
+--- a/tools/include/nolibc/arch-arm.h
++++ b/tools/include/nolibc/arch-arm.h
+@@ -225,7 +225,7 @@ char **environ __attribute__((weak));
+ const unsigned long *_auxv __attribute__((weak));
+ 
+ /* startup code */
+-void __attribute__((weak,noreturn,optimize("omit-frame-pointer"))) __no_stack_protector _start(void)
++void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector _start(void)
+ {
+ 	__asm__ volatile (
+ #ifdef _NOLIBC_STACKPROTECTOR
+diff --git a/tools/include/nolibc/arch-i386.h b/tools/include/nolibc/arch-i386.h
+index 3d672d925e9e2..1a86f86eab5c5 100644
+--- a/tools/include/nolibc/arch-i386.h
++++ b/tools/include/nolibc/arch-i386.h
+@@ -190,7 +190,7 @@ const unsigned long *_auxv __attribute__((weak));
+  * 2) The deepest stack frame should be set to zero
+  *
+  */
+-void __attribute__((weak,noreturn,optimize("omit-frame-pointer"))) __no_stack_protector _start(void)
++void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector _start(void)
+ {
+ 	__asm__ volatile (
+ #ifdef _NOLIBC_STACKPROTECTOR
+diff --git a/tools/include/nolibc/arch-loongarch.h b/tools/include/nolibc/arch-loongarch.h
+index ad3f266e70930..b0279b9411785 100644
+--- a/tools/include/nolibc/arch-loongarch.h
++++ b/tools/include/nolibc/arch-loongarch.h
+@@ -172,7 +172,7 @@ const unsigned long *_auxv __attribute__((weak));
+ #endif
+ 
+ /* startup code */
+-void __attribute__((weak,noreturn,optimize("omit-frame-pointer"))) __no_stack_protector _start(void)
++void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector _start(void)
+ {
+ 	__asm__ volatile (
+ #ifdef _NOLIBC_STACKPROTECTOR
+diff --git a/tools/include/nolibc/arch-mips.h b/tools/include/nolibc/arch-mips.h
+index db24e0837a39b..67c5d79971107 100644
+--- a/tools/include/nolibc/arch-mips.h
++++ b/tools/include/nolibc/arch-mips.h
+@@ -182,7 +182,7 @@ char **environ __attribute__((weak));
+ const unsigned long *_auxv __attribute__((weak));
+ 
+ /* startup code, note that it's called __start on MIPS */
+-void __attribute__((weak,noreturn,optimize("omit-frame-pointer"))) __no_stack_protector __start(void)
++void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector __start(void)
+ {
+ 	__asm__ volatile (
+ 		/*".set nomips16\n"*/
+diff --git a/tools/include/nolibc/arch-riscv.h b/tools/include/nolibc/arch-riscv.h
+index a2e8564e66d6a..cefefc2e93f18 100644
+--- a/tools/include/nolibc/arch-riscv.h
++++ b/tools/include/nolibc/arch-riscv.h
+@@ -180,7 +180,7 @@ char **environ __attribute__((weak));
+ const unsigned long *_auxv __attribute__((weak));
+ 
+ /* startup code */
+-void __attribute__((weak,noreturn,optimize("omit-frame-pointer"))) __no_stack_protector _start(void)
++void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector _start(void)
+ {
+ 	__asm__ volatile (
+ 		".option push\n"
+diff --git a/tools/include/nolibc/arch-s390.h b/tools/include/nolibc/arch-s390.h
+index 516dff5bff8bc..ed2c33b2de68b 100644
+--- a/tools/include/nolibc/arch-s390.h
++++ b/tools/include/nolibc/arch-s390.h
+@@ -166,7 +166,7 @@ char **environ __attribute__((weak));
+ const unsigned long *_auxv __attribute__((weak));
+ 
+ /* startup code */
+-void __attribute__((weak,noreturn,optimize("omit-frame-pointer"))) __no_stack_protector _start(void)
++void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector _start(void)
+ {
+ 	__asm__ volatile (
+ 		"lg	%r2,0(%r15)\n"		/* argument count */
+diff --git a/tools/include/nolibc/arch-x86_64.h b/tools/include/nolibc/arch-x86_64.h
+index 6fc4d83927429..1bbd95f652330 100644
+--- a/tools/include/nolibc/arch-x86_64.h
++++ b/tools/include/nolibc/arch-x86_64.h
+@@ -190,7 +190,7 @@ const unsigned long *_auxv __attribute__((weak));
+  * 2) The deepest stack frame should be zero (the %rbp).
+  *
+  */
+-void __attribute__((weak,noreturn,optimize("omit-frame-pointer"))) __no_stack_protector _start(void)
++void __attribute__((weak, noreturn, optimize("Os", "omit-frame-pointer"))) __no_stack_protector _start(void)
+ {
+ 	__asm__ volatile (
+ #ifdef _NOLIBC_STACKPROTECTOR
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 214f828ece6bf..e07dff7eba600 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -1975,9 +1975,9 @@ static int bpf_object__read_kconfig_file(struct bpf_object *obj, void *data)
+ 		return -ENAMETOOLONG;
+ 
+ 	/* gzopen also accepts uncompressed files. */
+-	file = gzopen(buf, "r");
++	file = gzopen(buf, "re");
+ 	if (!file)
+-		file = gzopen("/proc/config.gz", "r");
++		file = gzopen("/proc/config.gz", "re");
+ 
+ 	if (!file) {
+ 		pr_warn("failed to open system Kconfig\n");
+@@ -6157,7 +6157,11 @@ static int append_subprog_relos(struct bpf_program *main_prog, struct bpf_progra
+ 	if (main_prog == subprog)
+ 		return 0;
+ 	relos = libbpf_reallocarray(main_prog->reloc_desc, new_cnt, sizeof(*relos));
+-	if (!relos)
++	/* if new count is zero, reallocarray can return a valid NULL result;
++	 * in this case the previous pointer will be freed, so we *have to*
++	 * reassign old pointer to the new value (even if it's NULL)
++	 */
++	if (!relos && new_cnt)
+ 		return -ENOMEM;
+ 	if (subprog->nr_reloc)
+ 		memcpy(relos + main_prog->nr_reloc, subprog->reloc_desc,
+@@ -8528,7 +8532,8 @@ int bpf_program__set_insns(struct bpf_program *prog,
+ 		return -EBUSY;
+ 
+ 	insns = libbpf_reallocarray(prog->insns, new_insn_cnt, sizeof(*insns));
+-	if (!insns) {
++	/* NULL is a valid return from reallocarray if the new count is zero */
++	if (!insns && new_insn_cnt) {
+ 		pr_warn("prog '%s': failed to realloc prog code\n", prog->name);
+ 		return -ENOMEM;
+ 	}
+@@ -8558,13 +8563,31 @@ enum bpf_prog_type bpf_program__type(const struct bpf_program *prog)
+ 	return prog->type;
+ }
+ 
++static size_t custom_sec_def_cnt;
++static struct bpf_sec_def *custom_sec_defs;
++static struct bpf_sec_def custom_fallback_def;
++static bool has_custom_fallback_def;
++static int last_custom_sec_def_handler_id;
++
+ int bpf_program__set_type(struct bpf_program *prog, enum bpf_prog_type type)
+ {
+ 	if (prog->obj->loaded)
+ 		return libbpf_err(-EBUSY);
+ 
++	/* if type is not changed, do nothing */
++	if (prog->type == type)
++		return 0;
++
+ 	prog->type = type;
+-	prog->sec_def = NULL;
++
++	/* If a program type was changed, we need to reset associated SEC()
++	 * handler, as it will be invalid now. The only exception is a generic
++	 * fallback handler, which by definition is program type-agnostic and
++	 * is a catch-all custom handler, optionally set by the application,
++	 * so should be able to handle any type of BPF program.
++	 */
++	if (prog->sec_def != &custom_fallback_def)
++		prog->sec_def = NULL;
+ 	return 0;
+ }
+ 
+@@ -8740,13 +8763,6 @@ static const struct bpf_sec_def section_defs[] = {
+ 	SEC_DEF("netfilter",		NETFILTER, BPF_NETFILTER, SEC_NONE),
+ };
+ 
+-static size_t custom_sec_def_cnt;
+-static struct bpf_sec_def *custom_sec_defs;
+-static struct bpf_sec_def custom_fallback_def;
+-static bool has_custom_fallback_def;
+-
+-static int last_custom_sec_def_handler_id;
+-
+ int libbpf_register_prog_handler(const char *sec,
+ 				 enum bpf_prog_type prog_type,
+ 				 enum bpf_attach_type exp_attach_type,
+@@ -8826,7 +8842,11 @@ int libbpf_unregister_prog_handler(int handler_id)
+ 
+ 	/* try to shrink the array, but it's ok if we couldn't */
+ 	sec_defs = libbpf_reallocarray(custom_sec_defs, custom_sec_def_cnt, sizeof(*sec_defs));
+-	if (sec_defs)
++	/* if new count is zero, reallocarray can return a valid NULL result;
++	 * in this case the previous pointer will be freed, so we *have to*
++	 * reassign old pointer to the new value (even if it's NULL)
++	 */
++	if (sec_defs || custom_sec_def_cnt == 0)
+ 		custom_sec_defs = sec_defs;
+ 
+ 	return 0;
+diff --git a/tools/lib/bpf/usdt.c b/tools/lib/bpf/usdt.c
+index f1a141555f084..37455d00b239c 100644
+--- a/tools/lib/bpf/usdt.c
++++ b/tools/lib/bpf/usdt.c
+@@ -852,8 +852,11 @@ static int bpf_link_usdt_detach(struct bpf_link *link)
+ 		 * system is so exhausted on memory, it's the least of user's
+ 		 * concerns, probably.
+ 		 * So just do our best here to return those IDs to usdt_manager.
++		 * Another edge case when we can legitimately get NULL is when
++		 * new_cnt is zero, which can happen in some edge cases, so we
++		 * need to be careful about that.
+ 		 */
+-		if (new_free_ids) {
++		if (new_free_ids || new_cnt == 0) {
+ 			memcpy(new_free_ids + man->free_spec_cnt, usdt_link->spec_ids,
+ 			       usdt_link->spec_cnt * sizeof(*usdt_link->spec_ids));
+ 			man->free_spec_ids = new_free_ids;
+diff --git a/tools/testing/radix-tree/multiorder.c b/tools/testing/radix-tree/multiorder.c
+index e00520cc63498..cffaf2245d4f1 100644
+--- a/tools/testing/radix-tree/multiorder.c
++++ b/tools/testing/radix-tree/multiorder.c
+@@ -159,7 +159,7 @@ void multiorder_tagged_iteration(struct xarray *xa)
+ 	item_kill_tree(xa);
+ }
+ 
+-bool stop_iteration = false;
++bool stop_iteration;
+ 
+ static void *creator_func(void *ptr)
+ {
+@@ -201,6 +201,7 @@ static void multiorder_iteration_race(struct xarray *xa)
+ 	pthread_t worker_thread[num_threads];
+ 	int i;
+ 
++	stop_iteration = false;
+ 	pthread_create(&worker_thread[0], NULL, &creator_func, xa);
+ 	for (i = 1; i < num_threads; i++)
+ 		pthread_create(&worker_thread[i], NULL, &iterator_func, xa);
+@@ -211,6 +212,61 @@ static void multiorder_iteration_race(struct xarray *xa)
+ 	item_kill_tree(xa);
+ }
+ 
++static void *load_creator(void *ptr)
++{
++	/* 'order' is set up to ensure we have sibling entries */
++	unsigned int order;
++	struct radix_tree_root *tree = ptr;
++	int i;
++
++	rcu_register_thread();
++	item_insert_order(tree, 3 << RADIX_TREE_MAP_SHIFT, 0);
++	item_insert_order(tree, 2 << RADIX_TREE_MAP_SHIFT, 0);
++	for (i = 0; i < 10000; i++) {
++		for (order = 1; order < RADIX_TREE_MAP_SHIFT; order++) {
++			unsigned long index = (3 << RADIX_TREE_MAP_SHIFT) -
++						(1 << order);
++			item_insert_order(tree, index, order);
++			item_delete_rcu(tree, index);
++		}
++	}
++	rcu_unregister_thread();
++
++	stop_iteration = true;
++	return NULL;
++}
++
++static void *load_worker(void *ptr)
++{
++	unsigned long index = (3 << RADIX_TREE_MAP_SHIFT) - 1;
++
++	rcu_register_thread();
++	while (!stop_iteration) {
++		struct item *item = xa_load(ptr, index);
++		assert(!xa_is_internal(item));
++	}
++	rcu_unregister_thread();
++
++	return NULL;
++}
++
++static void load_race(struct xarray *xa)
++{
++	const int num_threads = sysconf(_SC_NPROCESSORS_ONLN) * 4;
++	pthread_t worker_thread[num_threads];
++	int i;
++
++	stop_iteration = false;
++	pthread_create(&worker_thread[0], NULL, &load_creator, xa);
++	for (i = 1; i < num_threads; i++)
++		pthread_create(&worker_thread[i], NULL, &load_worker, xa);
++
++	for (i = 0; i < num_threads; i++)
++		pthread_join(worker_thread[i], NULL);
++
++	item_kill_tree(xa);
++}
++
+ static DEFINE_XARRAY(array);
+ 
+ void multiorder_checks(void)
+@@ -218,12 +274,20 @@ void multiorder_checks(void)
+ 	multiorder_iteration(&array);
+ 	multiorder_tagged_iteration(&array);
+ 	multiorder_iteration_race(&array);
++	load_race(&array);
+ 
+ 	radix_tree_cpu_dead(0);
+ }
+ 
+-int __weak main(void)
++int __weak main(int argc, char **argv)
+ {
++	int opt;
++
++	while ((opt = getopt(argc, argv, "ls:v")) != -1) {
++		if (opt == 'v')
++			test_verbose++;
++	}
++
+ 	rcu_register_thread();
+ 	radix_tree_init();
+ 	multiorder_checks();
+diff --git a/tools/testing/selftests/bpf/benchs/run_bench_rename.sh b/tools/testing/selftests/bpf/benchs/run_bench_rename.sh
+index 16f774b1cdbed..7b281dbe41656 100755
+--- a/tools/testing/selftests/bpf/benchs/run_bench_rename.sh
++++ b/tools/testing/selftests/bpf/benchs/run_bench_rename.sh
+@@ -2,7 +2,7 @@
+ 
+ set -eufo pipefail
+ 
+-for i in base kprobe kretprobe rawtp fentry fexit fmodret
++for i in base kprobe kretprobe rawtp fentry fexit
+ do
+ 	summary=$(sudo ./bench -w2 -d5 -a rename-$i | tail -n1 | cut -d'(' -f1 | cut -d' ' -f3-)
+ 	printf "%-10s: %s\n" $i "$summary"
+diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_nf.c b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
+index c8ba4009e4ab9..b30ff6b3b81ae 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
++++ b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
+@@ -123,12 +123,13 @@ static void test_bpf_nf_ct(int mode)
+ 	ASSERT_EQ(skel->data->test_snat_addr, 0, "Test for source natting");
+ 	ASSERT_EQ(skel->data->test_dnat_addr, 0, "Test for destination natting");
+ end:
+-	if (srv_client_fd != -1)
+-		close(srv_client_fd);
+ 	if (client_fd != -1)
+ 		close(client_fd);
++	if (srv_client_fd != -1)
++		close(srv_client_fd);
+ 	if (srv_fd != -1)
+ 		close(srv_fd);
++
+ 	snprintf(cmd, sizeof(cmd), iptables, "-D");
+ 	system(cmd);
+ 	test_bpf_nf__destroy(skel);
+diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
+index a543742cd7bd1..2eb71559713c9 100644
+--- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
++++ b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
+@@ -173,8 +173,8 @@ static void verify_fail(struct kfunc_test_params *param)
+ 	case tc_test:
+ 		topts.data_in = &pkt_v4;
+ 		topts.data_size_in = sizeof(pkt_v4);
+-		break;
+ 		topts.repeat = 1;
++		break;
+ 	}
+ 
+ 	skel = kfunc_call_fail__open_opts(&opts);
+diff --git a/tools/testing/selftests/bpf/progs/test_cls_redirect.h b/tools/testing/selftests/bpf/progs/test_cls_redirect.h
+index 76eab0aacba0c..233b089d1fbac 100644
+--- a/tools/testing/selftests/bpf/progs/test_cls_redirect.h
++++ b/tools/testing/selftests/bpf/progs/test_cls_redirect.h
+@@ -12,6 +12,15 @@
+ #include <linux/ipv6.h>
+ #include <linux/udp.h>
+ 
++/* offsetof() is used in static asserts, and the libbpf-redefined CO-RE
++ * friendly version breaks compilation for older clang versions <= 15
++ * when invoked in a static assert.  Restore original here.
++ */
++#ifdef offsetof
++#undef offsetof
++#define offsetof(type, member) __builtin_offsetof(type, member)
++#endif
++
+ struct gre_base_hdr {
+ 	uint16_t flags;
+ 	uint16_t protocol;
+diff --git a/tools/testing/selftests/futex/functional/futex_wait_timeout.c b/tools/testing/selftests/futex/functional/futex_wait_timeout.c
+index 3651ce17beeb9..d183f878360bc 100644
+--- a/tools/testing/selftests/futex/functional/futex_wait_timeout.c
++++ b/tools/testing/selftests/futex/functional/futex_wait_timeout.c
+@@ -24,6 +24,7 @@
+ 
+ static long timeout_ns = 100000;	/* 100us default timeout */
+ static futex_t futex_pi;
++static pthread_barrier_t barrier;
+ 
+ void usage(char *prog)
+ {
+@@ -48,6 +49,8 @@ void *get_pi_lock(void *arg)
+ 	if (ret != 0)
+ 		error("futex_lock_pi failed\n", ret);
+ 
++	pthread_barrier_wait(&barrier);
++
+ 	/* Blocks forever */
+ 	ret = futex_wait(&lock, 0, NULL, 0);
+ 	error("futex_wait failed\n", ret);
+@@ -130,6 +133,7 @@ int main(int argc, char *argv[])
+ 	       basename(argv[0]));
+ 	ksft_print_msg("\tArguments: timeout=%ldns\n", timeout_ns);
+ 
++	pthread_barrier_init(&barrier, NULL, 2);
+ 	pthread_create(&thread, NULL, get_pi_lock, NULL);
+ 
+ 	/* initialize relative timeout */
+@@ -163,6 +167,9 @@ int main(int argc, char *argv[])
+ 	res = futex_wait_requeue_pi(&f1, f1, &futex_pi, &to, 0);
+ 	test_timeout(res, &ret, "futex_wait_requeue_pi monotonic", ETIMEDOUT);
+ 
++	/* Wait until the other thread calls futex_lock_pi() */
++	pthread_barrier_wait(&barrier);
++	pthread_barrier_destroy(&barrier);
+ 	/*
+ 	 * FUTEX_LOCK_PI with CLOCK_REALTIME
+ 	 * Due to historical reasons, FUTEX_LOCK_PI supports only realtime
+diff --git a/tools/testing/selftests/kselftest_harness.h b/tools/testing/selftests/kselftest_harness.h
+index 5fd49ad0c696f..e05ac82610467 100644
+--- a/tools/testing/selftests/kselftest_harness.h
++++ b/tools/testing/selftests/kselftest_harness.h
+@@ -938,7 +938,11 @@ void __wait_for_test(struct __test_metadata *t)
+ 		fprintf(TH_LOG_STREAM,
+ 			"# %s: Test terminated by timeout\n", t->name);
+ 	} else if (WIFEXITED(status)) {
+-		if (t->termsig != -1) {
++		if (WEXITSTATUS(status) == 255) {
++			/* SKIP */
++			t->passed = 1;
++			t->skip = 1;
++		} else if (t->termsig != -1) {
+ 			t->passed = 0;
+ 			fprintf(TH_LOG_STREAM,
+ 				"# %s: Test exited normally instead of by signal (code: %d)\n",
+@@ -950,11 +954,6 @@ void __wait_for_test(struct __test_metadata *t)
+ 			case 0:
+ 				t->passed = 1;
+ 				break;
+-			/* SKIP */
+-			case 255:
+-				t->passed = 1;
+-				t->skip = 1;
+-				break;
+ 			/* Other failure, assume step report. */
+ 			default:
+ 				t->passed = 0;
+diff --git a/tools/testing/selftests/landlock/fs_test.c b/tools/testing/selftests/landlock/fs_test.c
+index 83d5655695126..251594306d409 100644
+--- a/tools/testing/selftests/landlock/fs_test.c
++++ b/tools/testing/selftests/landlock/fs_test.c
+@@ -113,7 +113,7 @@ static bool supports_filesystem(const char *const filesystem)
+ {
+ 	char str[32];
+ 	int len;
+-	bool res;
++	bool res = true;
+ 	FILE *const inf = fopen("/proc/filesystems", "r");
+ 
+ 	/*
+@@ -125,14 +125,16 @@ static bool supports_filesystem(const char *const filesystem)
+ 
+ 	/* filesystem can be null for bind mounts. */
+ 	if (!filesystem)
+-		return true;
++		goto out;
+ 
+ 	len = snprintf(str, sizeof(str), "nodev\t%s\n", filesystem);
+ 	if (len >= sizeof(str))
+ 		/* Ignores too-long filesystem names. */
+-		return true;
++		goto out;
+ 
+ 	res = fgrep(inf, str);
++
++out:
+ 	fclose(inf);
+ 	return res;
+ }
+diff --git a/tools/testing/selftests/memfd/memfd_test.c b/tools/testing/selftests/memfd/memfd_test.c
+index dba0e8ba002f8..8b7390ad81d11 100644
+--- a/tools/testing/selftests/memfd/memfd_test.c
++++ b/tools/testing/selftests/memfd/memfd_test.c
+@@ -1145,8 +1145,25 @@ static void test_sysctl_child(void)
+ 
+ 	printf("%s sysctl 2\n", memfd_str);
+ 	sysctl_assert_write("2");
+-	mfd_fail_new("kern_memfd_sysctl_2",
+-		MFD_CLOEXEC | MFD_ALLOW_SEALING);
++	mfd_fail_new("kern_memfd_sysctl_2_exec",
++		     MFD_EXEC | MFD_CLOEXEC | MFD_ALLOW_SEALING);
++
++	fd = mfd_assert_new("kern_memfd_sysctl_2_dfl",
++			    mfd_def_size,
++			    MFD_CLOEXEC | MFD_ALLOW_SEALING);
++	mfd_assert_mode(fd, 0666);
++	mfd_assert_has_seals(fd, F_SEAL_EXEC);
++	mfd_fail_chmod(fd, 0777);
++	close(fd);
++
++	fd = mfd_assert_new("kern_memfd_sysctl_2_noexec_seal",
++			    mfd_def_size,
++			    MFD_NOEXEC_SEAL | MFD_CLOEXEC | MFD_ALLOW_SEALING);
++	mfd_assert_mode(fd, 0666);
++	mfd_assert_has_seals(fd, F_SEAL_EXEC);
++	mfd_fail_chmod(fd, 0777);
++	close(fd);
++
+ 	sysctl_fail_write("0");
+ 	sysctl_fail_write("1");
+ }
+@@ -1202,7 +1219,24 @@ static pid_t spawn_newpid_thread(unsigned int flags, int (*fn)(void *))
+ 
+ static void join_newpid_thread(pid_t pid)
+ {
+-	waitpid(pid, NULL, 0);
++	int wstatus;
++
++	if (waitpid(pid, &wstatus, 0) < 0) {
++		printf("newpid thread: waitpid() failed: %m\n");
++		abort();
++	}
++
++	if (WIFEXITED(wstatus) && WEXITSTATUS(wstatus) != 0) {
++		printf("newpid thread: exited with non-zero error code %d\n",
++		       WEXITSTATUS(wstatus));
++		abort();
++	}
++
++	if (WIFSIGNALED(wstatus)) {
++		printf("newpid thread: killed by signal %d\n",
++		       WTERMSIG(wstatus));
++		abort();
++	}
+ }
+ 
+ /*
+diff --git a/tools/testing/selftests/resctrl/Makefile b/tools/testing/selftests/resctrl/Makefile
+index 73d53257df42f..5073dbc961258 100644
+--- a/tools/testing/selftests/resctrl/Makefile
++++ b/tools/testing/selftests/resctrl/Makefile
+@@ -7,4 +7,4 @@ TEST_GEN_PROGS := resctrl_tests
+ 
+ include ../lib.mk
+ 
+-$(OUTPUT)/resctrl_tests: $(wildcard *.c)
++$(OUTPUT)/resctrl_tests: $(wildcard *.[ch])
+diff --git a/tools/testing/selftests/resctrl/cache.c b/tools/testing/selftests/resctrl/cache.c
+index 8a4fe8693be63..289b619116fec 100644
+--- a/tools/testing/selftests/resctrl/cache.c
++++ b/tools/testing/selftests/resctrl/cache.c
+@@ -87,21 +87,19 @@ static int reset_enable_llc_perf(pid_t pid, int cpu_no)
+ static int get_llc_perf(unsigned long *llc_perf_miss)
+ {
+ 	__u64 total_misses;
++	int ret;
+ 
+ 	/* Stop counters after one span to get miss rate */
+ 
+ 	ioctl(fd_lm, PERF_EVENT_IOC_DISABLE, 0);
+ 
+-	if (read(fd_lm, &rf_cqm, sizeof(struct read_format)) == -1) {
++	ret = read(fd_lm, &rf_cqm, sizeof(struct read_format));
++	if (ret == -1) {
+ 		perror("Could not get llc misses through perf");
+-
+ 		return -1;
+ 	}
+ 
+ 	total_misses = rf_cqm.values[0].value;
+-
+-	close(fd_lm);
+-
+ 	*llc_perf_miss = total_misses;
+ 
+ 	return 0;
+@@ -253,19 +251,25 @@ int cat_val(struct resctrl_val_param *param)
+ 					 memflush, operation, resctrl_val)) {
+ 				fprintf(stderr, "Error-running fill buffer\n");
+ 				ret = -1;
+-				break;
++				goto pe_close;
+ 			}
+ 
+ 			sleep(1);
+ 			ret = measure_cache_vals(param, bm_pid);
+ 			if (ret)
+-				break;
++				goto pe_close;
++
++			close(fd_lm);
+ 		} else {
+ 			break;
+ 		}
+ 	}
+ 
+ 	return ret;
++
++pe_close:
++	close(fd_lm);
++	return ret;
+ }
+ 
+ /*
+diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
+index 341cc93ca84c4..3b328c8448964 100644
+--- a/tools/testing/selftests/resctrl/fill_buf.c
++++ b/tools/testing/selftests/resctrl/fill_buf.c
+@@ -177,12 +177,13 @@ fill_cache(unsigned long long buf_size, int malloc_and_init, int memflush,
+ 	else
+ 		ret = fill_cache_write(start_ptr, end_ptr, resctrl_val);
+ 
++	free(startptr);
++
+ 	if (ret) {
+ 		printf("\n Error in fill cache read/write...\n");
+ 		return -1;
+ 	}
+ 
+-	free(startptr);
+ 
+ 	return 0;
+ }
+diff --git a/tools/testing/selftests/resctrl/resctrl.h b/tools/testing/selftests/resctrl/resctrl.h
+index 87e39456dee08..f455f0b7e314b 100644
+--- a/tools/testing/selftests/resctrl/resctrl.h
++++ b/tools/testing/selftests/resctrl/resctrl.h
+@@ -43,6 +43,7 @@
+ 	do {					\
+ 		perror(err_msg);		\
+ 		kill(ppid, SIGKILL);		\
++		umount_resctrlfs();		\
+ 		exit(EXIT_FAILURE);		\
+ 	} while (0)
+ 
+diff --git a/virt/kvm/vfio.c b/virt/kvm/vfio.c
+index 9584eb57e0eda..365d30779768a 100644
+--- a/virt/kvm/vfio.c
++++ b/virt/kvm/vfio.c
+@@ -21,7 +21,7 @@
+ #include <asm/kvm_ppc.h>
+ #endif
+ 
+-struct kvm_vfio_group {
++struct kvm_vfio_file {
+ 	struct list_head node;
+ 	struct file *file;
+ #ifdef CONFIG_SPAPR_TCE_IOMMU
+@@ -30,7 +30,7 @@ struct kvm_vfio_group {
+ };
+ 
+ struct kvm_vfio {
+-	struct list_head group_list;
++	struct list_head file_list;
+ 	struct mutex lock;
+ 	bool noncoherent;
+ };
+@@ -98,34 +98,35 @@ static struct iommu_group *kvm_vfio_file_iommu_group(struct file *file)
+ }
+ 
+ static void kvm_spapr_tce_release_vfio_group(struct kvm *kvm,
+-					     struct kvm_vfio_group *kvg)
++					     struct kvm_vfio_file *kvf)
+ {
+-	if (WARN_ON_ONCE(!kvg->iommu_group))
++	if (WARN_ON_ONCE(!kvf->iommu_group))
+ 		return;
+ 
+-	kvm_spapr_tce_release_iommu_group(kvm, kvg->iommu_group);
+-	iommu_group_put(kvg->iommu_group);
+-	kvg->iommu_group = NULL;
++	kvm_spapr_tce_release_iommu_group(kvm, kvf->iommu_group);
++	iommu_group_put(kvf->iommu_group);
++	kvf->iommu_group = NULL;
+ }
+ #endif
+ 
+ /*
+- * Groups can use the same or different IOMMU domains.  If the same then
+- * adding a new group may change the coherency of groups we've previously
+- * been told about.  We don't want to care about any of that so we retest
+- * each group and bail as soon as we find one that's noncoherent.  This
+- * means we only ever [un]register_noncoherent_dma once for the whole device.
++ * Groups/devices can use the same or different IOMMU domains. If the same
++ * then adding a new group/device may change the coherency of groups/devices
++ * we've previously been told about. We don't want to care about any of
++ * that so we retest each group/device and bail as soon as we find one that's
++ * noncoherent.  This means we only ever [un]register_noncoherent_dma once
++ * for the whole device.
+  */
+ static void kvm_vfio_update_coherency(struct kvm_device *dev)
+ {
+ 	struct kvm_vfio *kv = dev->private;
+ 	bool noncoherent = false;
+-	struct kvm_vfio_group *kvg;
++	struct kvm_vfio_file *kvf;
+ 
+ 	mutex_lock(&kv->lock);
+ 
+-	list_for_each_entry(kvg, &kv->group_list, node) {
+-		if (!kvm_vfio_file_enforced_coherent(kvg->file)) {
++	list_for_each_entry(kvf, &kv->file_list, node) {
++		if (!kvm_vfio_file_enforced_coherent(kvf->file)) {
+ 			noncoherent = true;
+ 			break;
+ 		}
+@@ -143,10 +144,10 @@ static void kvm_vfio_update_coherency(struct kvm_device *dev)
+ 	mutex_unlock(&kv->lock);
+ }
+ 
+-static int kvm_vfio_group_add(struct kvm_device *dev, unsigned int fd)
++static int kvm_vfio_file_add(struct kvm_device *dev, unsigned int fd)
+ {
+ 	struct kvm_vfio *kv = dev->private;
+-	struct kvm_vfio_group *kvg;
++	struct kvm_vfio_file *kvf;
+ 	struct file *filp;
+ 	int ret;
+ 
+@@ -162,27 +163,27 @@ static int kvm_vfio_group_add(struct kvm_device *dev, unsigned int fd)
+ 
+ 	mutex_lock(&kv->lock);
+ 
+-	list_for_each_entry(kvg, &kv->group_list, node) {
+-		if (kvg->file == filp) {
++	list_for_each_entry(kvf, &kv->file_list, node) {
++		if (kvf->file == filp) {
+ 			ret = -EEXIST;
+ 			goto err_unlock;
+ 		}
+ 	}
+ 
+-	kvg = kzalloc(sizeof(*kvg), GFP_KERNEL_ACCOUNT);
+-	if (!kvg) {
++	kvf = kzalloc(sizeof(*kvf), GFP_KERNEL_ACCOUNT);
++	if (!kvf) {
+ 		ret = -ENOMEM;
+ 		goto err_unlock;
+ 	}
+ 
+-	kvg->file = filp;
+-	list_add_tail(&kvg->node, &kv->group_list);
++	kvf->file = filp;
++	list_add_tail(&kvf->node, &kv->file_list);
+ 
+ 	kvm_arch_start_assignment(dev->kvm);
++	kvm_vfio_file_set_kvm(kvf->file, dev->kvm);
+ 
+ 	mutex_unlock(&kv->lock);
+ 
+-	kvm_vfio_file_set_kvm(kvg->file, dev->kvm);
+ 	kvm_vfio_update_coherency(dev);
+ 
+ 	return 0;
+@@ -193,10 +194,10 @@ err_fput:
+ 	return ret;
+ }
+ 
+-static int kvm_vfio_group_del(struct kvm_device *dev, unsigned int fd)
++static int kvm_vfio_file_del(struct kvm_device *dev, unsigned int fd)
+ {
+ 	struct kvm_vfio *kv = dev->private;
+-	struct kvm_vfio_group *kvg;
++	struct kvm_vfio_file *kvf;
+ 	struct fd f;
+ 	int ret;
+ 
+@@ -208,18 +209,18 @@ static int kvm_vfio_group_del(struct kvm_device *dev, unsigned int fd)
+ 
+ 	mutex_lock(&kv->lock);
+ 
+-	list_for_each_entry(kvg, &kv->group_list, node) {
+-		if (kvg->file != f.file)
++	list_for_each_entry(kvf, &kv->file_list, node) {
++		if (kvf->file != f.file)
+ 			continue;
+ 
+-		list_del(&kvg->node);
++		list_del(&kvf->node);
+ 		kvm_arch_end_assignment(dev->kvm);
+ #ifdef CONFIG_SPAPR_TCE_IOMMU
+-		kvm_spapr_tce_release_vfio_group(dev->kvm, kvg);
++		kvm_spapr_tce_release_vfio_group(dev->kvm, kvf);
+ #endif
+-		kvm_vfio_file_set_kvm(kvg->file, NULL);
+-		fput(kvg->file);
+-		kfree(kvg);
++		kvm_vfio_file_set_kvm(kvf->file, NULL);
++		fput(kvf->file);
++		kfree(kvf);
+ 		ret = 0;
+ 		break;
+ 	}
+@@ -234,12 +235,12 @@ static int kvm_vfio_group_del(struct kvm_device *dev, unsigned int fd)
+ }
+ 
+ #ifdef CONFIG_SPAPR_TCE_IOMMU
+-static int kvm_vfio_group_set_spapr_tce(struct kvm_device *dev,
+-					void __user *arg)
++static int kvm_vfio_file_set_spapr_tce(struct kvm_device *dev,
++				       void __user *arg)
+ {
+ 	struct kvm_vfio_spapr_tce param;
+ 	struct kvm_vfio *kv = dev->private;
+-	struct kvm_vfio_group *kvg;
++	struct kvm_vfio_file *kvf;
+ 	struct fd f;
+ 	int ret;
+ 
+@@ -254,20 +255,20 @@ static int kvm_vfio_group_set_spapr_tce(struct kvm_device *dev,
+ 
+ 	mutex_lock(&kv->lock);
+ 
+-	list_for_each_entry(kvg, &kv->group_list, node) {
+-		if (kvg->file != f.file)
++	list_for_each_entry(kvf, &kv->file_list, node) {
++		if (kvf->file != f.file)
+ 			continue;
+ 
+-		if (!kvg->iommu_group) {
+-			kvg->iommu_group = kvm_vfio_file_iommu_group(kvg->file);
+-			if (WARN_ON_ONCE(!kvg->iommu_group)) {
++		if (!kvf->iommu_group) {
++			kvf->iommu_group = kvm_vfio_file_iommu_group(kvf->file);
++			if (WARN_ON_ONCE(!kvf->iommu_group)) {
+ 				ret = -EIO;
+ 				goto err_fdput;
+ 			}
+ 		}
+ 
+ 		ret = kvm_spapr_tce_attach_iommu_group(dev->kvm, param.tablefd,
+-						       kvg->iommu_group);
++						       kvf->iommu_group);
+ 		break;
+ 	}
+ 
+@@ -278,8 +279,8 @@ err_fdput:
+ }
+ #endif
+ 
+-static int kvm_vfio_set_group(struct kvm_device *dev, long attr,
+-			      void __user *arg)
++static int kvm_vfio_set_file(struct kvm_device *dev, long attr,
++			     void __user *arg)
+ {
+ 	int32_t __user *argp = arg;
+ 	int32_t fd;
+@@ -288,16 +289,16 @@ static int kvm_vfio_set_group(struct kvm_device *dev, long attr,
+ 	case KVM_DEV_VFIO_GROUP_ADD:
+ 		if (get_user(fd, argp))
+ 			return -EFAULT;
+-		return kvm_vfio_group_add(dev, fd);
++		return kvm_vfio_file_add(dev, fd);
+ 
+ 	case KVM_DEV_VFIO_GROUP_DEL:
+ 		if (get_user(fd, argp))
+ 			return -EFAULT;
+-		return kvm_vfio_group_del(dev, fd);
++		return kvm_vfio_file_del(dev, fd);
+ 
+ #ifdef CONFIG_SPAPR_TCE_IOMMU
+ 	case KVM_DEV_VFIO_GROUP_SET_SPAPR_TCE:
+-		return kvm_vfio_group_set_spapr_tce(dev, arg);
++		return kvm_vfio_file_set_spapr_tce(dev, arg);
+ #endif
+ 	}
+ 
+@@ -309,8 +310,8 @@ static int kvm_vfio_set_attr(struct kvm_device *dev,
+ {
+ 	switch (attr->group) {
+ 	case KVM_DEV_VFIO_GROUP:
+-		return kvm_vfio_set_group(dev, attr->attr,
+-					  u64_to_user_ptr(attr->addr));
++		return kvm_vfio_set_file(dev, attr->attr,
++					 u64_to_user_ptr(attr->addr));
+ 	}
+ 
+ 	return -ENXIO;
+@@ -339,16 +340,16 @@ static int kvm_vfio_has_attr(struct kvm_device *dev,
+ static void kvm_vfio_release(struct kvm_device *dev)
+ {
+ 	struct kvm_vfio *kv = dev->private;
+-	struct kvm_vfio_group *kvg, *tmp;
++	struct kvm_vfio_file *kvf, *tmp;
+ 
+-	list_for_each_entry_safe(kvg, tmp, &kv->group_list, node) {
++	list_for_each_entry_safe(kvf, tmp, &kv->file_list, node) {
+ #ifdef CONFIG_SPAPR_TCE_IOMMU
+-		kvm_spapr_tce_release_vfio_group(dev->kvm, kvg);
++		kvm_spapr_tce_release_vfio_group(dev->kvm, kvf);
+ #endif
+-		kvm_vfio_file_set_kvm(kvg->file, NULL);
+-		fput(kvg->file);
+-		list_del(&kvg->node);
+-		kfree(kvg);
++		kvm_vfio_file_set_kvm(kvf->file, NULL);
++		fput(kvf->file);
++		list_del(&kvf->node);
++		kfree(kvf);
+ 		kvm_arch_end_assignment(dev->kvm);
+ 	}
+ 
+@@ -382,7 +383,7 @@ static int kvm_vfio_create(struct kvm_device *dev, u32 type)
+ 	if (!kv)
+ 		return -ENOMEM;
+ 
+-	INIT_LIST_HEAD(&kv->group_list);
++	INIT_LIST_HEAD(&kv->file_list);
+ 	mutex_init(&kv->lock);
+ 
+ 	dev->private = kv;

