From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:6.4 commit in: /
Date: Wed, 19 Jul 2023 17:04:28 +0000 (UTC)
Message-ID: <1689786251.cad5fd068ed2bfce525b0b8bf3b60cc4610d8839.mpagano@gentoo>

commit:     cad5fd068ed2bfce525b0b8bf3b60cc4610d8839
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 19 17:04:11 2023 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 19 17:04:11 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cad5fd06

Linux patch 6.4.4

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |     4 +
 1003_linux-6.4.4.patch | 39171 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 39175 insertions(+)

diff --git a/0000_README b/0000_README
index 42e15858..2532d9e5 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-6.4.3.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.4.3
 
+Patch:  1003_linux-6.4.4.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.4.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-6.4.4.patch b/1003_linux-6.4.4.patch
new file mode 100644
index 00000000..bd7fd5f3
--- /dev/null
+++ b/1003_linux-6.4.4.patch
@@ -0,0 +1,39171 @@
+diff --git a/Documentation/ABI/testing/sysfs-driver-eud b/Documentation/ABI/testing/sysfs-driver-eud
+index 83f3872182a40..2bab0db2d2f0f 100644
+--- a/Documentation/ABI/testing/sysfs-driver-eud
++++ b/Documentation/ABI/testing/sysfs-driver-eud
+@@ -1,4 +1,4 @@
+-What:		/sys/bus/platform/drivers/eud/.../enable
++What:		/sys/bus/platform/drivers/qcom_eud/.../enable
+ Date:           February 2022
+ Contact:        Souradeep Chowdhury <quic_schowdhu@quicinc.com>
+ Description:
+diff --git a/Documentation/devicetree/bindings/crypto/qcom-qce.yaml b/Documentation/devicetree/bindings/crypto/qcom-qce.yaml
+index e375bd9813009..90ddf98a6df92 100644
+--- a/Documentation/devicetree/bindings/crypto/qcom-qce.yaml
++++ b/Documentation/devicetree/bindings/crypto/qcom-qce.yaml
+@@ -24,6 +24,12 @@ properties:
+         deprecated: true
+         description: Kept only for ABI backward compatibility
+ 
++      - items:
++          - enum:
++              - qcom,ipq4019-qce
++              - qcom,sm8150-qce
++          - const: qcom,qce
++
+       - items:
+           - enum:
+               - qcom,ipq6018-qce
+diff --git a/Documentation/devicetree/bindings/iio/adc/adi,ad7192.yaml b/Documentation/devicetree/bindings/iio/adc/adi,ad7192.yaml
+index d521d516088be..16def2985ab4f 100644
+--- a/Documentation/devicetree/bindings/iio/adc/adi,ad7192.yaml
++++ b/Documentation/devicetree/bindings/iio/adc/adi,ad7192.yaml
+@@ -47,6 +47,9 @@ properties:
+   avdd-supply:
+     description: AVdd voltage supply
+ 
++  vref-supply:
++    description: VRef voltage supply
++
+   adi,rejection-60-Hz-enable:
+     description: |
+       This bit enables a notch at 60 Hz when the first notch of the sinc
+@@ -89,6 +92,7 @@ required:
+   - interrupts
+   - dvdd-supply
+   - avdd-supply
++  - vref-supply
+   - spi-cpol
+   - spi-cpha
+ 
+@@ -115,6 +119,7 @@ examples:
+             interrupt-parent = <&gpio>;
+             dvdd-supply = <&dvdd>;
+             avdd-supply = <&avdd>;
++            vref-supply = <&vref>;
+ 
+             adi,refin2-pins-enable;
+             adi,rejection-60-Hz-enable;
+diff --git a/Documentation/devicetree/bindings/iommu/arm,smmu.yaml b/Documentation/devicetree/bindings/iommu/arm,smmu.yaml
+index ba677d401e240..6cb04f35642aa 100644
+--- a/Documentation/devicetree/bindings/iommu/arm,smmu.yaml
++++ b/Documentation/devicetree/bindings/iommu/arm,smmu.yaml
+@@ -80,6 +80,7 @@ properties:
+         items:
+           - enum:
+               - qcom,sc7280-smmu-500
++              - qcom,sc8280xp-smmu-500
+               - qcom,sm6115-smmu-500
+               - qcom,sm6125-smmu-500
+               - qcom,sm8150-smmu-500
+@@ -331,7 +332,9 @@ allOf:
+       properties:
+         compatible:
+           contains:
+-            const: qcom,sc7280-smmu-500
++            enum:
++              - qcom,sc7280-smmu-500
++              - qcom,sc8280xp-smmu-500
+     then:
+       properties:
+         clock-names:
+@@ -416,7 +419,6 @@ allOf:
+               - qcom,sa8775p-smmu-500
+               - qcom,sc7180-smmu-500
+               - qcom,sc8180x-smmu-500
+-              - qcom,sc8280xp-smmu-500
+               - qcom,sdm670-smmu-500
+               - qcom,sdm845-smmu-500
+               - qcom,sdx55-smmu-500
+diff --git a/Documentation/devicetree/bindings/power/reset/qcom,pon.yaml b/Documentation/devicetree/bindings/power/reset/qcom,pon.yaml
+index d96170eecbd22..0b1eca734d3b1 100644
+--- a/Documentation/devicetree/bindings/power/reset/qcom,pon.yaml
++++ b/Documentation/devicetree/bindings/power/reset/qcom,pon.yaml
+@@ -56,7 +56,6 @@ required:
+ unevaluatedProperties: false
+ 
+ allOf:
+-  - $ref: reboot-mode.yaml#
+   - if:
+       properties:
+         compatible:
+@@ -66,6 +65,9 @@ allOf:
+               - qcom,pms405-pon
+               - qcom,pm8998-pon
+     then:
++      allOf:
++        - $ref: reboot-mode.yaml#
++
+       properties:
+         reg:
+           maxItems: 1
+diff --git a/Documentation/devicetree/bindings/sound/mediatek,mt8188-afe.yaml b/Documentation/devicetree/bindings/sound/mediatek,mt8188-afe.yaml
+index 82ccb32f08f27..9e877f0d19fbb 100644
+--- a/Documentation/devicetree/bindings/sound/mediatek,mt8188-afe.yaml
++++ b/Documentation/devicetree/bindings/sound/mediatek,mt8188-afe.yaml
+@@ -63,15 +63,15 @@ properties:
+       - const: apll12_div2
+       - const: apll12_div3
+       - const: apll12_div9
+-      - const: a1sys_hp_sel
+-      - const: aud_intbus_sel
+-      - const: audio_h_sel
+-      - const: audio_local_bus_sel
+-      - const: dptx_m_sel
+-      - const: i2so1_m_sel
+-      - const: i2so2_m_sel
+-      - const: i2si1_m_sel
+-      - const: i2si2_m_sel
++      - const: top_a1sys_hp
++      - const: top_aud_intbus
++      - const: top_audio_h
++      - const: top_audio_local_bus
++      - const: top_dptx
++      - const: top_i2so1
++      - const: top_i2so2
++      - const: top_i2si1
++      - const: top_i2si2
+       - const: adsp_audio_26m
+ 
+   mediatek,etdm-in1-cowork-source:
+@@ -193,15 +193,15 @@ examples:
+                       "apll12_div2",
+                       "apll12_div3",
+                       "apll12_div9",
+-                      "a1sys_hp_sel",
+-                      "aud_intbus_sel",
+-                      "audio_h_sel",
+-                      "audio_local_bus_sel",
+-                      "dptx_m_sel",
+-                      "i2so1_m_sel",
+-                      "i2so2_m_sel",
+-                      "i2si1_m_sel",
+-                      "i2si2_m_sel",
++                      "top_a1sys_hp",
++                      "top_aud_intbus",
++                      "top_audio_h",
++                      "top_audio_local_bus",
++                      "top_dptx",
++                      "top_i2so1",
++                      "top_i2so2",
++                      "top_i2si1",
++                      "top_i2si2",
+                       "adsp_audio_26m";
+     };
+ 
+diff --git a/Documentation/fault-injection/provoke-crashes.rst b/Documentation/fault-injection/provoke-crashes.rst
+index 3abe842256139..1f087e502ca6d 100644
+--- a/Documentation/fault-injection/provoke-crashes.rst
++++ b/Documentation/fault-injection/provoke-crashes.rst
+@@ -29,7 +29,7 @@ recur_count
+ cpoint_name
+ 	Where in the kernel to trigger the action. It can be
+ 	one of INT_HARDWARE_ENTRY, INT_HW_IRQ_EN, INT_TASKLET_ENTRY,
+-	FS_DEVRW, MEM_SWAPOUT, TIMERADD, SCSI_QUEUE_RQ, or DIRECT.
++	FS_SUBMIT_BH, MEM_SWAPOUT, TIMERADD, SCSI_QUEUE_RQ, or DIRECT.
+ 
+ cpoint_type
+ 	Indicates the action to be taken on hitting the crash point.
+diff --git a/Documentation/filesystems/autofs-mount-control.rst b/Documentation/filesystems/autofs-mount-control.rst
+index bf4b511cdbe85..b5a379d25c40b 100644
+--- a/Documentation/filesystems/autofs-mount-control.rst
++++ b/Documentation/filesystems/autofs-mount-control.rst
+@@ -196,7 +196,7 @@ information and return operation results::
+ 		    struct args_ismountpoint	ismountpoint;
+ 	    };
+ 
+-	    char path[0];
++	    char path[];
+     };
+ 
+ The ioctlfd field is a mount point file descriptor of an autofs mount
+diff --git a/Documentation/filesystems/autofs.rst b/Documentation/filesystems/autofs.rst
+index 4f490278d22fc..3b6e38e646cd8 100644
+--- a/Documentation/filesystems/autofs.rst
++++ b/Documentation/filesystems/autofs.rst
+@@ -467,7 +467,7 @@ Each ioctl is passed a pointer to an `autofs_dev_ioctl` structure::
+ 			struct args_ismountpoint	ismountpoint;
+ 		};
+ 
+-                char path[0];
++                char path[];
+         };
+ 
+ For the **OPEN_MOUNT** and **IS_MOUNTPOINT** commands, the target
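
[The `char path[0]` -> `char path[]` updates above track the kernel-wide move from GNU zero-length arrays to C99 flexible array members. A minimal userspace sketch of how such a trailing array is allocated and filled — illustrative only; `ioctl_msg` is a made-up name here, not the real autofs structure:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct ioctl_msg {
	unsigned int ioctlfd;
	char path[];		/* flexible array member, replaces path[0] */
};

int main(void)
{
	const char *p = "/mnt/autofs";
	/* One allocation covers the header plus the variable-length path. */
	struct ioctl_msg *msg = malloc(sizeof(*msg) + strlen(p) + 1);

	if (!msg)
		return 1;
	msg->ioctlfd = 0;
	strcpy(msg->path, p);
	printf("%s\n", msg->path);
	free(msg);
	return 0;
}
]
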
+diff --git a/Documentation/filesystems/directory-locking.rst b/Documentation/filesystems/directory-locking.rst
+index 504ba940c36c1..dccd61c7c5c3b 100644
+--- a/Documentation/filesystems/directory-locking.rst
++++ b/Documentation/filesystems/directory-locking.rst
+@@ -22,12 +22,11 @@ exclusive.
+ 3) object removal.  Locking rules: caller locks parent, finds victim,
+ locks victim and calls the method.  Locks are exclusive.
+ 
+-4) rename() that is _not_ cross-directory.  Locking rules: caller locks
+-the parent and finds source and target.  In case of exchange (with
+-RENAME_EXCHANGE in flags argument) lock both.  In any case,
+-if the target already exists, lock it.  If the source is a non-directory,
+-lock it.  If we need to lock both, lock them in inode pointer order.
+-Then call the method.  All locks are exclusive.
++4) rename() that is _not_ cross-directory.  Locking rules: caller locks the
++parent and finds source and target.  We lock both (provided they exist).  If we
++need to lock two inodes of different type (dir vs non-dir), we lock directory
++first.  If we need to lock two inodes of the same type, lock them in inode
++pointer order.  Then call the method.  All locks are exclusive.
+ NB: we might get away with locking the source (and target in exchange
+ case) shared.
+ 
+@@ -44,15 +43,17 @@ All locks are exclusive.
+ rules:
+ 
+ 	* lock the filesystem
+-	* lock parents in "ancestors first" order.
++	* lock parents in "ancestors first" order. If one is not ancestor of
++	  the other, lock them in inode pointer order.
+ 	* find source and target.
+ 	* if old parent is equal to or is a descendent of target
+ 	  fail with -ENOTEMPTY
+ 	* if new parent is equal to or is a descendent of source
+ 	  fail with -ELOOP
+-	* If it's an exchange, lock both the source and the target.
+-	* If the target exists, lock it.  If the source is a non-directory,
+-	  lock it.  If we need to lock both, do so in inode pointer order.
++	* Lock both the source and the target provided they exist. If we
++	  need to lock two inodes of different type (dir vs non-dir), we lock
++	  the directory first. If we need to lock two inodes of the same type,
++	  lock them in inode pointer order.
+ 	* call the method.
+ 
+ All ->i_rwsem are taken exclusive.  Again, we might get away with locking
+@@ -66,8 +67,9 @@ If no directory is its own ancestor, the scheme above is deadlock-free.
+ 
+ Proof:
+ 
+-	First of all, at any moment we have a partial ordering of the
+-	objects - A < B iff A is an ancestor of B.
++	First of all, at any moment we have a linear ordering of the
++	objects - A < B iff (A is an ancestor of B) or (B is not an ancestor
++        of A and ptr(A) < ptr(B)).
+ 
+ 	That ordering can change.  However, the following is true:
+ 
+diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
+index c57745375edbc..9359978a5af26 100644
+--- a/Documentation/filesystems/f2fs.rst
++++ b/Documentation/filesystems/f2fs.rst
+@@ -351,6 +351,22 @@ age_extent_cache	 Enable an age extent cache based on rb-tree. It records
+ 			 data block update frequency of the extent per inode, in
+ 			 order to provide better temperature hints for data block
+ 			 allocation.
++errors=%s		 Specify f2fs behavior on critical errors. This supports modes:
++			 "panic", "continue" and "remount-ro", respectively, trigger
++			 panic immediately, continue without doing anything, and remount
++			 the partition in read-only mode. By default it uses "continue"
++			 mode.
++			 ====================== =============== =============== ========
++			 mode			continue	remount-ro	panic
++			 ====================== =============== =============== ========
++			 access ops		normal		noraml		N/A
++			 syscall errors		-EIO		-EROFS		N/A
++			 mount option		rw		ro		N/A
++			 pending dir write	keep		keep		N/A
++			 pending non-dir write	drop		keep		N/A
++			 pending node write	drop		keep		N/A
++			 pending meta write	keep		keep		N/A
++			 ====================== =============== =============== ========
+ ======================== ============================================================
+ 
+ Debugfs Entries
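
[A short sketch of selecting the new errors= behavior from userspace via mount(2) — illustrative only; the device and mount point are placeholders:

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* Remount read-only on a critical error instead of continuing. */
	if (mount("/dev/sdb1", "/mnt/data", "f2fs", 0,
		  "errors=remount-ro") != 0) {
		perror("mount");
		return 1;
	}
	return 0;
}
]
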
+diff --git a/Documentation/networking/af_xdp.rst b/Documentation/networking/af_xdp.rst
+index 247c6c4127e94..1cc35de336a41 100644
+--- a/Documentation/networking/af_xdp.rst
++++ b/Documentation/networking/af_xdp.rst
+@@ -433,6 +433,15 @@ start N bytes into the buffer leaving the first N bytes for the
+ application to use. The final option is the flags field, but it will
+ be dealt with in separate sections for each UMEM flag.
+ 
++SO_BINDTODEVICE setsockopt
++--------------------------
++
++This is a generic SOL_SOCKET option that can be used to tie AF_XDP
++socket to a particular network interface.  It is useful when a socket
++is created by a privileged process and passed to a non-privileged one.
++Once the option is set, kernel will refuse attempts to bind that socket
++to a different interface.  Updating the value requires CAP_NET_RAW.
++
+ XDP_STATISTICS getsockopt
+ -------------------------
+ 
+diff --git a/Makefile b/Makefile
+index 56abbcac061d4..d5041f7daf689 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 4
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arc/include/asm/linkage.h b/arch/arc/include/asm/linkage.h
+index c9434ff3aa4ce..8a3fb71e9cfad 100644
+--- a/arch/arc/include/asm/linkage.h
++++ b/arch/arc/include/asm/linkage.h
+@@ -8,6 +8,10 @@
+ 
+ #include <asm/dwarf.h>
+ 
++#define ASM_NL		 `	/* use '`' to mark new line in macro */
++#define __ALIGN		.align 4
++#define __ALIGN_STR	__stringify(__ALIGN)
++
+ #ifdef __ASSEMBLY__
+ 
+ .macro ST2 e, o, off
+@@ -28,10 +32,6 @@
+ #endif
+ .endm
+ 
+-#define ASM_NL		 `	/* use '`' to mark new line in macro */
+-#define __ALIGN		.align 4
+-#define __ALIGN_STR	__stringify(__ALIGN)
+-
+ /* annotation for data we want in DCCM - if enabled in .config */
+ .macro ARCFP_DATA nm
+ #ifdef CONFIG_ARC_HAS_DCCM
+diff --git a/arch/arm/boot/dts/bcm53015-meraki-mr26.dts b/arch/arm/boot/dts/bcm53015-meraki-mr26.dts
+index 14f58033efeb9..ca2266b936ee2 100644
+--- a/arch/arm/boot/dts/bcm53015-meraki-mr26.dts
++++ b/arch/arm/boot/dts/bcm53015-meraki-mr26.dts
+@@ -128,7 +128,7 @@
+ 
+ 			fixed-link {
+ 				speed = <1000>;
+-				duplex-full;
++				full-duplex;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm53016-meraki-mr32.dts b/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
+index 46c2c93b01d88..a34e1746a6c59 100644
+--- a/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
++++ b/arch/arm/boot/dts/bcm53016-meraki-mr32.dts
+@@ -187,7 +187,7 @@
+ 
+ 			fixed-link {
+ 				speed = <1000>;
+-				duplex-full;
++				full-duplex;
+ 			};
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi
+index 5fc1b847f4aa5..787a0dd8216b7 100644
+--- a/arch/arm/boot/dts/bcm5301x.dtsi
++++ b/arch/arm/boot/dts/bcm5301x.dtsi
+@@ -542,7 +542,6 @@
+ 				  "spi_lr_session_done",
+ 				  "spi_lr_overread";
+ 		clocks = <&iprocmed>;
+-		clock-names = "iprocmed";
+ 		num-cs = <2>;
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+diff --git a/arch/arm/boot/dts/iwg20d-q7-common.dtsi b/arch/arm/boot/dts/iwg20d-q7-common.dtsi
+index 03caea6fc6ffa..4351c5a02fa59 100644
+--- a/arch/arm/boot/dts/iwg20d-q7-common.dtsi
++++ b/arch/arm/boot/dts/iwg20d-q7-common.dtsi
+@@ -49,7 +49,7 @@
+ 	lcd_backlight: backlight {
+ 		compatible = "pwm-backlight";
+ 
+-		pwms = <&pwm3 0 5000000 0>;
++		pwms = <&pwm3 0 5000000>;
+ 		brightness-levels = <0 4 8 16 32 64 128 255>;
+ 		default-brightness-level = <7>;
+ 		enable-gpios = <&gpio5 14 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm/boot/dts/lan966x-kontron-kswitch-d10-mmt.dtsi b/arch/arm/boot/dts/lan966x-kontron-kswitch-d10-mmt.dtsi
+index 0097e72e3fb22..f4df4cc1dfa5e 100644
+--- a/arch/arm/boot/dts/lan966x-kontron-kswitch-d10-mmt.dtsi
++++ b/arch/arm/boot/dts/lan966x-kontron-kswitch-d10-mmt.dtsi
+@@ -18,6 +18,8 @@
+ 
+ 	gpio-restart {
+ 		compatible = "gpio-restart";
++		pinctrl-0 = <&reset_pins>;
++		pinctrl-names = "default";
+ 		gpios = <&gpio 56 GPIO_ACTIVE_LOW>;
+ 		priority = <200>;
+ 	};
+@@ -39,7 +41,7 @@
+ 	status = "okay";
+ 
+ 	spi3: spi@400 {
+-		pinctrl-0 = <&fc3_b_pins>;
++		pinctrl-0 = <&fc3_b_pins>, <&spi3_cs_pins>;
+ 		pinctrl-names = "default";
+ 		status = "okay";
+ 		cs-gpios = <&gpio 46 GPIO_ACTIVE_LOW>;
+@@ -59,6 +61,12 @@
+ 		function = "miim_c";
+ 	};
+ 
++	reset_pins: reset-pins {
++		/* SYS_RST# */
++		pins = "GPIO_56";
++		function = "gpio";
++	};
++
+ 	sgpio_a_pins: sgpio-a-pins {
+ 		/* SCK, D0, D1 */
+ 		pins = "GPIO_32", "GPIO_33", "GPIO_34";
+@@ -71,6 +79,12 @@
+ 		function = "sgpio_b";
+ 	};
+ 
++	spi3_cs_pins: spi3-cs-pins {
++		/* CS# */
++		pins = "GPIO_46";
++		function = "gpio";
++	};
++
+ 	usart0_pins: usart0-pins {
+ 		/* RXD, TXD */
+ 		pins = "GPIO_25", "GPIO_26";
+diff --git a/arch/arm/boot/dts/meson8.dtsi b/arch/arm/boot/dts/meson8.dtsi
+index 4f22ab451aae2..59932fbfd5d5f 100644
+--- a/arch/arm/boot/dts/meson8.dtsi
++++ b/arch/arm/boot/dts/meson8.dtsi
+@@ -769,13 +769,13 @@
+ 
+ &uart_B {
+ 	compatible = "amlogic,meson8-uart";
+-	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clocks = <&xtal>, <&clkc CLKID_UART1>, <&clkc CLKID_CLK81>;
+ 	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_C {
+ 	compatible = "amlogic,meson8-uart";
+-	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clocks = <&xtal>, <&clkc CLKID_UART2>, <&clkc CLKID_CLK81>;
+ 	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+diff --git a/arch/arm/boot/dts/meson8b.dtsi b/arch/arm/boot/dts/meson8b.dtsi
+index 5979209fe91ef..5198f5177c2c1 100644
+--- a/arch/arm/boot/dts/meson8b.dtsi
++++ b/arch/arm/boot/dts/meson8b.dtsi
+@@ -740,13 +740,13 @@
+ 
+ &uart_B {
+ 	compatible = "amlogic,meson8b-uart";
+-	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clocks = <&xtal>, <&clkc CLKID_UART1>, <&clkc CLKID_CLK81>;
+ 	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+ &uart_C {
+ 	compatible = "amlogic,meson8b-uart";
+-	clocks = <&xtal>, <&clkc CLKID_UART0>, <&clkc CLKID_CLK81>;
++	clocks = <&xtal>, <&clkc CLKID_UART2>, <&clkc CLKID_CLK81>;
+ 	clock-names = "xtal", "pclk", "baud";
+ };
+ 
+diff --git a/arch/arm/boot/dts/omap3-gta04a5one.dts b/arch/arm/boot/dts/omap3-gta04a5one.dts
+index 9db9fe67cd63b..95df45cc70c09 100644
+--- a/arch/arm/boot/dts/omap3-gta04a5one.dts
++++ b/arch/arm/boot/dts/omap3-gta04a5one.dts
+@@ -5,9 +5,11 @@
+ 
+ #include "omap3-gta04a5.dts"
+ 
+-&omap3_pmx_core {
++/ {
+ 	model = "Goldelico GTA04A5/Letux 2804 with OneNAND";
++};
+ 
++&omap3_pmx_core {
+ 	gpmc_pins: pinmux_gpmc_pins {
+ 		pinctrl-single,pins = <
+ 
+diff --git a/arch/arm/boot/dts/qcom-apq8060-dragonboard.dts b/arch/arm/boot/dts/qcom-apq8060-dragonboard.dts
+index 8e4b61e4d4b17..e8fe321f3d89b 100644
+--- a/arch/arm/boot/dts/qcom-apq8060-dragonboard.dts
++++ b/arch/arm/boot/dts/qcom-apq8060-dragonboard.dts
+@@ -451,7 +451,7 @@
+ 	 * PM8901 supplies "preliminary regulators" whatever
+ 	 * that means
+ 	 */
+-	pm8901-regulators {
++	regulators-0 {
+ 		vdd_l0-supply = <&pm8901_s4>;
+ 		vdd_l1-supply = <&vph>;
+ 		vdd_l2-supply = <&vph>;
+@@ -537,7 +537,7 @@
+ 
+ 	};
+ 
+-	pm8058-regulators {
++	regulators-1 {
+ 		vdd_l0_l1_lvs-supply = <&pm8058_s3>;
+ 		vdd_l2_l11_l12-supply = <&vph>;
+ 		vdd_l3_l4_l5-supply = <&vph>;
+diff --git a/arch/arm/boot/dts/qcom-apq8074-dragonboard.dts b/arch/arm/boot/dts/qcom-apq8074-dragonboard.dts
+index 1345df7cbd002..6b047c6793707 100644
+--- a/arch/arm/boot/dts/qcom-apq8074-dragonboard.dts
++++ b/arch/arm/boot/dts/qcom-apq8074-dragonboard.dts
+@@ -23,6 +23,10 @@
+ 	status = "okay";
+ };
+ 
++&blsp2_dma {
++	qcom,controlled-remotely;
++};
++
+ &blsp2_i2c5 {
+ 	status = "okay";
+ 	clock-frequency = <200000>;
+diff --git a/arch/arm/boot/dts/qcom-ipq4019-ap.dk04.1-c1.dts b/arch/arm/boot/dts/qcom-ipq4019-ap.dk04.1-c1.dts
+index 79b0c6318e527..0993f840d1fc7 100644
+--- a/arch/arm/boot/dts/qcom-ipq4019-ap.dk04.1-c1.dts
++++ b/arch/arm/boot/dts/qcom-ipq4019-ap.dk04.1-c1.dts
+@@ -11,9 +11,9 @@
+ 		dma-controller@7984000 {
+ 			status = "okay";
+ 		};
+-
+-		qpic-nand@79b0000 {
+-			status = "okay";
+-		};
+ 	};
+ };
++
++&nand {
++	status = "okay";
++};
+diff --git a/arch/arm/boot/dts/qcom-ipq4019-ap.dk04.1.dtsi b/arch/arm/boot/dts/qcom-ipq4019-ap.dk04.1.dtsi
+index a63b3778636d4..468ebc40d2ad3 100644
+--- a/arch/arm/boot/dts/qcom-ipq4019-ap.dk04.1.dtsi
++++ b/arch/arm/boot/dts/qcom-ipq4019-ap.dk04.1.dtsi
+@@ -102,10 +102,10 @@
+ 			status = "okay";
+ 			perst-gpios = <&tlmm 38 GPIO_ACTIVE_LOW>;
+ 		};
+-
+-		qpic-nand@79b0000 {
+-			pinctrl-0 = <&nand_pins>;
+-			pinctrl-names = "default";
+-		};
+ 	};
+ };
++
++&nand {
++	pinctrl-0 = <&nand_pins>;
++	pinctrl-names = "default";
++};
+diff --git a/arch/arm/boot/dts/qcom-ipq4019-ap.dk07.1.dtsi b/arch/arm/boot/dts/qcom-ipq4019-ap.dk07.1.dtsi
+index 0107f552f5204..7ef635997efa4 100644
+--- a/arch/arm/boot/dts/qcom-ipq4019-ap.dk07.1.dtsi
++++ b/arch/arm/boot/dts/qcom-ipq4019-ap.dk07.1.dtsi
+@@ -65,11 +65,11 @@
+ 		dma-controller@7984000 {
+ 			status = "okay";
+ 		};
+-
+-		qpic-nand@79b0000 {
+-			pinctrl-0 = <&nand_pins>;
+-			pinctrl-names = "default";
+-			status = "okay";
+-		};
+ 	};
+ };
++
++&nand {
++	pinctrl-0 = <&nand_pins>;
++	pinctrl-names = "default";
++	status = "okay";
++};
+diff --git a/arch/arm/boot/dts/qcom-msm8974.dtsi b/arch/arm/boot/dts/qcom-msm8974.dtsi
+index 7ed0d925a4e99..a22616491dc0e 100644
+--- a/arch/arm/boot/dts/qcom-msm8974.dtsi
++++ b/arch/arm/boot/dts/qcom-msm8974.dtsi
+@@ -301,7 +301,7 @@
+ 			qcom,ipc = <&apcs 8 0>;
+ 			qcom,smd-edge = <15>;
+ 
+-			rpm_requests: rpm_requests {
++			rpm_requests: rpm-requests {
+ 				compatible = "qcom,rpm-msm8974";
+ 				qcom,smd-channels = "rpm_requests";
+ 
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+index 4709677151aac..46b87a27d8b37 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
+@@ -137,10 +137,13 @@
+ 
+ 	sound {
+ 		compatible = "audio-graph-card";
+-		routing =
+-			"MIC_IN", "Capture",
+-			"Capture", "Mic Bias",
+-			"Playback", "HP_OUT";
++		widgets = "Headphone", "Headphone Jack",
++			  "Line", "Line In Jack",
++			  "Microphone", "Microphone Jack";
++		routing = "Headphone Jack", "HP_OUT",
++			  "LINE_IN", "Line In Jack",
++			  "MIC_IN", "Microphone Jack",
++			  "Microphone Jack", "Mic Bias";
+ 		dais = <&sai2a_port &sai2b_port>;
+ 		status = "okay";
+ 	};
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+index 50af4a27d6be4..7d5d6d4360385 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+@@ -87,7 +87,7 @@
+ 
+ 	sound {
+ 		compatible = "audio-graph-card";
+-		label = "STM32MP1-AV96-HDMI";
++		label = "STM32-AV96-HDMI";
+ 		dais = <&sai2a_port>;
+ 		status = "okay";
+ 	};
+@@ -321,6 +321,12 @@
+ 			};
+ 		};
+ 	};
++
++	dh_mac_eeprom: eeprom@53 {
++		compatible = "atmel,24c02";
++		reg = <0x53>;
++		pagesize = <16>;
++	};
+ };
+ 
+ &ltdc {
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-drc-compact.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-drc-compact.dtsi
+index c32c160f97f20..39af79dc654cc 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-drc-compact.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-drc-compact.dtsi
+@@ -192,6 +192,12 @@
+ 		reg = <0x50>;
+ 		pagesize = <16>;
+ 	};
++
++	dh_mac_eeprom: eeprom@53 {
++		compatible = "atmel,24c02";
++		reg = <0x53>;
++		pagesize = <16>;
++	};
+ };
+ 
+ &sdmmc1 {	/* MicroSD */
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
+index bb40fb46da81d..bba19f21e5277 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-som.dtsi
+@@ -213,12 +213,6 @@
+ 			status = "disabled";
+ 		};
+ 	};
+-
+-	eeprom@53 {
+-		compatible = "atmel,24c02";
+-		reg = <0x53>;
+-		pagesize = <16>;
+-	};
+ };
+ 
+ &ipcc {
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-testbench.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-testbench.dtsi
+index 5fdb74b652aca..faed31b6d84a1 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-testbench.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-testbench.dtsi
+@@ -90,6 +90,14 @@
+ 	};
+ };
+ 
++&i2c4 {
++	dh_mac_eeprom: eeprom@53 {
++		compatible = "atmel,24c02";
++		reg = <0x53>;
++		pagesize = <16>;
++	};
++};
++
+ &sdmmc1 {
+ 	pinctrl-names = "default", "opendrain", "sleep";
+ 	pinctrl-0 = <&sdmmc1_b4_pins_a &sdmmc1_dir_pins_b>;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
+index cefeeb00fc228..aa2e92f1e63d3 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
+@@ -435,7 +435,7 @@
+ 	i2s2_port: port {
+ 		i2s2_endpoint: endpoint {
+ 			remote-endpoint = <&sii9022_tx_endpoint>;
+-			format = "i2s";
++			dai-format = "i2s";
+ 			mclk-fs = <256>;
+ 		};
+ 	};
+diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
+index 505a306e0271a..aebe2c8f6a686 100644
+--- a/arch/arm/include/asm/assembler.h
++++ b/arch/arm/include/asm/assembler.h
+@@ -394,6 +394,23 @@ ALT_UP_B(.L0_\@)
+ #endif
+ 	.endm
+ 
++/*
++ * Raw SMP data memory barrier
++ */
++	.macro	__smp_dmb mode
++#if __LINUX_ARM_ARCH__ >= 7
++	.ifeqs "\mode","arm"
++	dmb	ish
++	.else
++	W(dmb)	ish
++	.endif
++#elif __LINUX_ARM_ARCH__ == 6
++	mcr	p15, 0, r0, c7, c10, 5	@ dmb
++#else
++	.error "Incompatible SMP platform"
++#endif
++	.endm
++
+ #if defined(CONFIG_CPU_V7M)
+ 	/*
+ 	 * setmode is used to assert to be in svc mode during boot. For v7-M
+diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
+index 6f5d627c44a3c..f46b3c570f92e 100644
+--- a/arch/arm/include/asm/sync_bitops.h
++++ b/arch/arm/include/asm/sync_bitops.h
+@@ -14,14 +14,35 @@
+  * ops which are SMP safe even on a UP kernel.
+  */
+ 
++/*
++ * Unordered
++ */
++
+ #define sync_set_bit(nr, p)		_set_bit(nr, p)
+ #define sync_clear_bit(nr, p)		_clear_bit(nr, p)
+ #define sync_change_bit(nr, p)		_change_bit(nr, p)
+-#define sync_test_and_set_bit(nr, p)	_test_and_set_bit(nr, p)
+-#define sync_test_and_clear_bit(nr, p)	_test_and_clear_bit(nr, p)
+-#define sync_test_and_change_bit(nr, p)	_test_and_change_bit(nr, p)
+ #define sync_test_bit(nr, addr)		test_bit(nr, addr)
+-#define arch_sync_cmpxchg		arch_cmpxchg
+ 
++/*
++ * Fully ordered
++ */
++
++int _sync_test_and_set_bit(int nr, volatile unsigned long * p);
++#define sync_test_and_set_bit(nr, p)	_sync_test_and_set_bit(nr, p)
++
++int _sync_test_and_clear_bit(int nr, volatile unsigned long * p);
++#define sync_test_and_clear_bit(nr, p)	_sync_test_and_clear_bit(nr, p)
++
++int _sync_test_and_change_bit(int nr, volatile unsigned long * p);
++#define sync_test_and_change_bit(nr, p)	_sync_test_and_change_bit(nr, p)
++
++#define arch_sync_cmpxchg(ptr, old, new)				\
++({									\
++	__typeof__(*(ptr)) __ret;					\
++	__smp_mb__before_atomic();					\
++	__ret = arch_cmpxchg_relaxed((ptr), (old), (new));		\
++	__smp_mb__after_atomic();					\
++	__ret;								\
++})
+ 
+ #endif
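
[The arch_sync_cmpxchg() shape above — barrier, relaxed exchange, barrier — has a rough C11 analogue; a userspace sketch, illustrative only and not the kernel macro:

#include <stdatomic.h>
#include <stdbool.h>

static bool sync_cas(atomic_int *ptr, int old, int new)
{
	bool ok;

	atomic_thread_fence(memory_order_seq_cst);	/* before_atomic */
	ok = atomic_compare_exchange_strong_explicit(ptr, &old, new,
						     memory_order_relaxed,
						     memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);	/* after_atomic */
	return ok;
}
]
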
+diff --git a/arch/arm/lib/bitops.h b/arch/arm/lib/bitops.h
+index 95bd359912889..f069d1b2318e6 100644
+--- a/arch/arm/lib/bitops.h
++++ b/arch/arm/lib/bitops.h
+@@ -28,7 +28,7 @@ UNWIND(	.fnend		)
+ ENDPROC(\name		)
+ 	.endm
+ 
+-	.macro	testop, name, instr, store
++	.macro	__testop, name, instr, store, barrier
+ ENTRY(	\name		)
+ UNWIND(	.fnstart	)
+ 	ands	ip, r1, #3
+@@ -38,7 +38,7 @@ UNWIND(	.fnstart	)
+ 	mov	r0, r0, lsr #5
+ 	add	r1, r1, r0, lsl #2	@ Get word offset
+ 	mov	r3, r2, lsl r3		@ create mask
+-	smp_dmb
++	\barrier
+ #if __LINUX_ARM_ARCH__ >= 7 && defined(CONFIG_SMP)
+ 	.arch_extension	mp
+ 	ALT_SMP(W(pldw)	[r1])
+@@ -50,13 +50,21 @@ UNWIND(	.fnstart	)
+ 	strex	ip, r2, [r1]
+ 	cmp	ip, #0
+ 	bne	1b
+-	smp_dmb
++	\barrier
+ 	cmp	r0, #0
+ 	movne	r0, #1
+ 2:	bx	lr
+ UNWIND(	.fnend		)
+ ENDPROC(\name		)
+ 	.endm
++
++	.macro	testop, name, instr, store
++	__testop \name, \instr, \store, smp_dmb
++	.endm
++
++	.macro	sync_testop, name, instr, store
++	__testop \name, \instr, \store, __smp_dmb
++	.endm
+ #else
+ 	.macro	bitop, name, instr
+ ENTRY(	\name		)
+diff --git a/arch/arm/lib/testchangebit.S b/arch/arm/lib/testchangebit.S
+index 4ebecc67e6e04..f13fe9bc2399a 100644
+--- a/arch/arm/lib/testchangebit.S
++++ b/arch/arm/lib/testchangebit.S
+@@ -10,3 +10,7 @@
+                 .text
+ 
+ testop	_test_and_change_bit, eor, str
++
++#if __LINUX_ARM_ARCH__ >= 6
++sync_testop	_sync_test_and_change_bit, eor, str
++#endif
+diff --git a/arch/arm/lib/testclearbit.S b/arch/arm/lib/testclearbit.S
+index 009afa0f5b4a7..4d2c5ca620ebf 100644
+--- a/arch/arm/lib/testclearbit.S
++++ b/arch/arm/lib/testclearbit.S
+@@ -10,3 +10,7 @@
+                 .text
+ 
+ testop	_test_and_clear_bit, bicne, strne
++
++#if __LINUX_ARM_ARCH__ >= 6
++sync_testop	_sync_test_and_clear_bit, bicne, strne
++#endif
+diff --git a/arch/arm/lib/testsetbit.S b/arch/arm/lib/testsetbit.S
+index f3192e55acc87..649dbab65d8d0 100644
+--- a/arch/arm/lib/testsetbit.S
++++ b/arch/arm/lib/testsetbit.S
+@@ -10,3 +10,7 @@
+                 .text
+ 
+ testop	_test_and_set_bit, orreq, streq
++
++#if __LINUX_ARM_ARCH__ >= 6
++sync_testop	_sync_test_and_set_bit, orreq, streq
++#endif
+diff --git a/arch/arm/mach-ep93xx/timer-ep93xx.c b/arch/arm/mach-ep93xx/timer-ep93xx.c
+index dd4b164d18317..a9efa7bc2fa12 100644
+--- a/arch/arm/mach-ep93xx/timer-ep93xx.c
++++ b/arch/arm/mach-ep93xx/timer-ep93xx.c
+@@ -9,6 +9,7 @@
+ #include <linux/io.h>
+ #include <asm/mach/time.h>
+ #include "soc.h"
++#include "platform.h"
+ 
+ /*************************************************************************
+  * Timer handling for EP93xx
+@@ -60,7 +61,7 @@ static u64 notrace ep93xx_read_sched_clock(void)
+ 	return ret;
+ }
+ 
+-u64 ep93xx_clocksource_read(struct clocksource *c)
++static u64 ep93xx_clocksource_read(struct clocksource *c)
+ {
+ 	u64 ret;
+ 
+diff --git a/arch/arm/mach-omap1/board-ams-delta.c b/arch/arm/mach-omap1/board-ams-delta.c
+index 9108c871d129a..ac47ab9fe0964 100644
+--- a/arch/arm/mach-omap1/board-ams-delta.c
++++ b/arch/arm/mach-omap1/board-ams-delta.c
+@@ -11,7 +11,6 @@
+ #include <linux/gpio/driver.h>
+ #include <linux/gpio/machine.h>
+ #include <linux/gpio/consumer.h>
+-#include <linux/gpio.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+ #include <linux/input.h>
+diff --git a/arch/arm/mach-omap1/board-nokia770.c b/arch/arm/mach-omap1/board-nokia770.c
+index a501a473ffd68..5ea27ca26abf2 100644
+--- a/arch/arm/mach-omap1/board-nokia770.c
++++ b/arch/arm/mach-omap1/board-nokia770.c
+@@ -6,17 +6,18 @@
+  */
+ #include <linux/clkdev.h>
+ #include <linux/irq.h>
+-#include <linux/gpio.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/gpio/machine.h>
++#include <linux/gpio/property.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+ #include <linux/mutex.h>
+ #include <linux/platform_device.h>
++#include <linux/property.h>
+ #include <linux/input.h>
+ #include <linux/omapfb.h>
+ 
+ #include <linux/spi/spi.h>
+-#include <linux/spi/ads7846.h>
+ #include <linux/workqueue.h>
+ #include <linux/delay.h>
+ 
+@@ -35,6 +36,25 @@
+ #include "clock.h"
+ #include "mmc.h"
+ 
++static const struct software_node nokia770_mpuio_gpiochip_node = {
++	.name = "mpuio",
++};
++
++static const struct software_node nokia770_gpiochip1_node = {
++	.name = "gpio-0-15",
++};
++
++static const struct software_node nokia770_gpiochip2_node = {
++	.name = "gpio-16-31",
++};
++
++static const struct software_node *nokia770_gpiochip_nodes[] = {
++	&nokia770_mpuio_gpiochip_node,
++	&nokia770_gpiochip1_node,
++	&nokia770_gpiochip2_node,
++	NULL
++};
++
+ #define ADS7846_PENDOWN_GPIO	15
+ 
+ static const unsigned int nokia770_keymap[] = {
+@@ -85,40 +105,47 @@ static struct platform_device *nokia770_devices[] __initdata = {
+ 	&nokia770_kp_device,
+ };
+ 
+-static void mipid_shutdown(struct mipid_platform_data *pdata)
+-{
+-	if (pdata->nreset_gpio != -1) {
+-		printk(KERN_INFO "shutdown LCD\n");
+-		gpio_set_value(pdata->nreset_gpio, 0);
+-		msleep(120);
+-	}
+-}
+-
+-static struct mipid_platform_data nokia770_mipid_platform_data = {
+-	.shutdown = mipid_shutdown,
+-};
++static struct mipid_platform_data nokia770_mipid_platform_data = { };
+ 
+ static const struct omap_lcd_config nokia770_lcd_config __initconst = {
+ 	.ctrl_name	= "hwa742",
+ };
+ 
++static const struct property_entry nokia770_mipid_props[] = {
++	PROPERTY_ENTRY_GPIO("reset-gpios", &nokia770_gpiochip1_node,
++			    13, GPIO_ACTIVE_LOW),
++	{ }
++};
++
++static const struct software_node nokia770_mipid_swnode = {
++	.name = "lcd_mipid",
++	.properties = nokia770_mipid_props,
++};
++
+ static void __init mipid_dev_init(void)
+ {
+-	nokia770_mipid_platform_data.nreset_gpio = 13;
+ 	nokia770_mipid_platform_data.data_lines = 16;
+ 
+ 	omapfb_set_lcd_config(&nokia770_lcd_config);
+ }
+ 
+-static struct ads7846_platform_data nokia770_ads7846_platform_data __initdata = {
+-	.x_max		= 0x0fff,
+-	.y_max		= 0x0fff,
+-	.x_plate_ohms	= 180,
+-	.pressure_max	= 255,
+-	.debounce_max	= 10,
+-	.debounce_tol	= 3,
+-	.debounce_rep	= 1,
+-	.gpio_pendown	= ADS7846_PENDOWN_GPIO,
++static const struct property_entry nokia770_ads7846_props[] = {
++	PROPERTY_ENTRY_STRING("compatible", "ti,ads7846"),
++	PROPERTY_ENTRY_U32("touchscreen-size-x", 4096),
++	PROPERTY_ENTRY_U32("touchscreen-size-y", 4096),
++	PROPERTY_ENTRY_U32("touchscreen-max-pressure", 256),
++	PROPERTY_ENTRY_U32("touchscreen-average-samples", 10),
++	PROPERTY_ENTRY_U16("ti,x-plate-ohms", 180),
++	PROPERTY_ENTRY_U16("ti,debounce-tol", 3),
++	PROPERTY_ENTRY_U16("ti,debounce-rep", 1),
++	PROPERTY_ENTRY_GPIO("pendown-gpios", &nokia770_gpiochip1_node,
++			    ADS7846_PENDOWN_GPIO, GPIO_ACTIVE_LOW),
++	{ }
++};
++
++static const struct software_node nokia770_ads7846_swnode = {
++	.name = "ads7846",
++	.properties = nokia770_ads7846_props,
+ };
+ 
+ static struct spi_board_info nokia770_spi_board_info[] __initdata = {
+@@ -128,13 +155,14 @@ static struct spi_board_info nokia770_spi_board_info[] __initdata = {
+ 		.chip_select    = 3,
+ 		.max_speed_hz   = 12000000,
+ 		.platform_data	= &nokia770_mipid_platform_data,
++		.swnode         = &nokia770_mipid_swnode,
+ 	},
+ 	[1] = {
+ 		.modalias       = "ads7846",
+ 		.bus_num        = 2,
+ 		.chip_select    = 0,
+ 		.max_speed_hz   = 2500000,
+-		.platform_data	= &nokia770_ads7846_platform_data,
++		.swnode         = &nokia770_ads7846_swnode,
+ 	},
+ };
+ 
+@@ -156,27 +184,23 @@ static struct omap_usb_config nokia770_usb_config __initdata = {
+ 
+ #if IS_ENABLED(CONFIG_MMC_OMAP)
+ 
+-#define NOKIA770_GPIO_MMC_POWER		41
+-#define NOKIA770_GPIO_MMC_SWITCH	23
+-
+-static int nokia770_mmc_set_power(struct device *dev, int slot, int power_on,
+-				int vdd)
+-{
+-	gpio_set_value(NOKIA770_GPIO_MMC_POWER, power_on);
+-	return 0;
+-}
+-
+-static int nokia770_mmc_get_cover_state(struct device *dev, int slot)
+-{
+-	return gpio_get_value(NOKIA770_GPIO_MMC_SWITCH);
+-}
++static struct gpiod_lookup_table nokia770_mmc_gpio_table = {
++	.dev_id = "mmci-omap.1",
++	.table = {
++		/* Slot index 0, VSD power, GPIO 41 */
++		GPIO_LOOKUP_IDX("gpio-32-47", 9,
++				"vsd", 0, GPIO_ACTIVE_HIGH),
++		/* Slot index 0, switch, GPIO 23 */
++		GPIO_LOOKUP_IDX("gpio-16-31", 7,
++				"cover", 0, GPIO_ACTIVE_HIGH),
++		{ }
++	},
++};
+ 
+ static struct omap_mmc_platform_data nokia770_mmc2_data = {
+ 	.nr_slots                       = 1,
+ 	.max_freq                       = 12000000,
+ 	.slots[0]       = {
+-		.set_power		= nokia770_mmc_set_power,
+-		.get_cover_state	= nokia770_mmc_get_cover_state,
+ 		.ocr_mask               = MMC_VDD_32_33|MMC_VDD_33_34,
+ 		.name                   = "mmcblk",
+ 	},
+@@ -186,20 +210,7 @@ static struct omap_mmc_platform_data *nokia770_mmc_data[OMAP16XX_NR_MMC];
+ 
+ static void __init nokia770_mmc_init(void)
+ {
+-	int ret;
+-
+-	ret = gpio_request(NOKIA770_GPIO_MMC_POWER, "MMC power");
+-	if (ret < 0)
+-		return;
+-	gpio_direction_output(NOKIA770_GPIO_MMC_POWER, 0);
+-
+-	ret = gpio_request(NOKIA770_GPIO_MMC_SWITCH, "MMC cover");
+-	if (ret < 0) {
+-		gpio_free(NOKIA770_GPIO_MMC_POWER);
+-		return;
+-	}
+-	gpio_direction_input(NOKIA770_GPIO_MMC_SWITCH);
+-
++	gpiod_add_lookup_table(&nokia770_mmc_gpio_table);
+ 	/* Only the second MMC controller is used */
+ 	nokia770_mmc_data[1] = &nokia770_mmc2_data;
+ 	omap1_init_mmc(nokia770_mmc_data, OMAP16XX_NR_MMC);
+@@ -212,14 +223,16 @@ static inline void nokia770_mmc_init(void)
+ #endif
+ 
+ #if IS_ENABLED(CONFIG_I2C_CBUS_GPIO)
+-static struct gpiod_lookup_table nokia770_cbus_gpio_table = {
+-	.dev_id = "i2c-cbus-gpio.2",
+-	.table = {
+-		GPIO_LOOKUP_IDX("mpuio", 9, NULL, 0, 0), /* clk */
+-		GPIO_LOOKUP_IDX("mpuio", 10, NULL, 1, 0), /* dat */
+-		GPIO_LOOKUP_IDX("mpuio", 11, NULL, 2, 0), /* sel */
+-		{ },
+-	},
++
++static const struct software_node_ref_args nokia770_cbus_gpio_refs[] = {
++	SOFTWARE_NODE_REFERENCE(&nokia770_mpuio_gpiochip_node, 9, 0),
++	SOFTWARE_NODE_REFERENCE(&nokia770_mpuio_gpiochip_node, 10, 0),
++	SOFTWARE_NODE_REFERENCE(&nokia770_mpuio_gpiochip_node, 11, 0),
++};
++
++static const struct property_entry nokia770_cbus_props[] = {
++	PROPERTY_ENTRY_REF_ARRAY("gpios", nokia770_cbus_gpio_refs),
++	{ }
+ };
+ 
+ static struct platform_device nokia770_cbus_device = {
+@@ -238,22 +251,29 @@ static struct i2c_board_info nokia770_i2c_board_info_2[] __initdata = {
+ 
+ static void __init nokia770_cbus_init(void)
+ {
+-	const int retu_irq_gpio = 62;
+-	const int tahvo_irq_gpio = 40;
+-
+-	if (gpio_request_one(retu_irq_gpio, GPIOF_IN, "Retu IRQ"))
+-		return;
+-	if (gpio_request_one(tahvo_irq_gpio, GPIOF_IN, "Tahvo IRQ")) {
+-		gpio_free(retu_irq_gpio);
+-		return;
++	struct gpio_desc *d;
++	int irq;
++
++	d = gpiod_get(NULL, "retu_irq", GPIOD_IN);
++	if (IS_ERR(d)) {
++		pr_err("Unable to get CBUS Retu IRQ GPIO descriptor\n");
++	} else {
++		irq = gpiod_to_irq(d);
++		irq_set_irq_type(irq, IRQ_TYPE_EDGE_RISING);
++		nokia770_i2c_board_info_2[0].irq = irq;
++	}
++	d = gpiod_get(NULL, "tahvo_irq", GPIOD_IN);
++	if (IS_ERR(d)) {
++		pr_err("Unable to get CBUS Tahvo IRQ GPIO descriptor\n");
++	} else {
++		irq = gpiod_to_irq(d);
++		irq_set_irq_type(irq, IRQ_TYPE_EDGE_RISING);
++		nokia770_i2c_board_info_2[1].irq = irq;
+ 	}
+-	irq_set_irq_type(gpio_to_irq(retu_irq_gpio), IRQ_TYPE_EDGE_RISING);
+-	irq_set_irq_type(gpio_to_irq(tahvo_irq_gpio), IRQ_TYPE_EDGE_RISING);
+-	nokia770_i2c_board_info_2[0].irq = gpio_to_irq(retu_irq_gpio);
+-	nokia770_i2c_board_info_2[1].irq = gpio_to_irq(tahvo_irq_gpio);
+ 	i2c_register_board_info(2, nokia770_i2c_board_info_2,
+ 				ARRAY_SIZE(nokia770_i2c_board_info_2));
+-	gpiod_add_lookup_table(&nokia770_cbus_gpio_table);
++	device_create_managed_software_node(&nokia770_cbus_device.dev,
++					    nokia770_cbus_props, NULL);
+ 	platform_device_register(&nokia770_cbus_device);
+ }
+ #else /* CONFIG_I2C_CBUS_GPIO */
+@@ -262,8 +282,33 @@ static void __init nokia770_cbus_init(void)
+ }
+ #endif /* CONFIG_I2C_CBUS_GPIO */
+ 
++static struct gpiod_lookup_table nokia770_irq_gpio_table = {
++	.dev_id = NULL,
++	.table = {
++		/* GPIO used by SPI device 1 */
++		GPIO_LOOKUP("gpio-0-15", 15, "ads7846_irq",
++			    GPIO_ACTIVE_HIGH),
++		/* GPIO used for retu IRQ */
++		GPIO_LOOKUP("gpio-48-63", 15, "retu_irq",
++			    GPIO_ACTIVE_HIGH),
++		/* GPIO used for tahvo IRQ */
++		GPIO_LOOKUP("gpio-32-47", 8, "tahvo_irq",
++			    GPIO_ACTIVE_HIGH),
++		/* GPIOs used by serial wakeup IRQs */
++		GPIO_LOOKUP_IDX("gpio-32-47", 5, "wakeup", 0,
++			    GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP_IDX("gpio-16-31", 2, "wakeup", 1,
++			    GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP_IDX("gpio-48-63", 1, "wakeup", 2,
++			    GPIO_ACTIVE_HIGH),
++		{ }
++	},
++};
++
+ static void __init omap_nokia770_init(void)
+ {
++	struct gpio_desc *d;
++
+ 	/* On Nokia 770, the SleepX signal is masked with an
+ 	 * MPUIO line by default.  It has to be unmasked for it
+ 	 * to become functional */
+@@ -273,8 +318,16 @@ static void __init omap_nokia770_init(void)
+ 	/* Unmask SleepX signal */
+ 	omap_writew((omap_readw(0xfffb5004) & ~2), 0xfffb5004);
+ 
++	software_node_register_node_group(nokia770_gpiochip_nodes);
+ 	platform_add_devices(nokia770_devices, ARRAY_SIZE(nokia770_devices));
+-	nokia770_spi_board_info[1].irq = gpio_to_irq(15);
++
++	gpiod_add_lookup_table(&nokia770_irq_gpio_table);
++	d = gpiod_get(NULL, "ads7846_irq", GPIOD_IN);
++	if (IS_ERR(d))
++		pr_err("Unable to get ADS7846 IRQ GPIO descriptor\n");
++	else
++		nokia770_spi_board_info[1].irq = gpiod_to_irq(d);
++
+ 	spi_register_board_info(nokia770_spi_board_info,
+ 				ARRAY_SIZE(nokia770_spi_board_info));
+ 	omap_serial_init();
+diff --git a/arch/arm/mach-omap1/board-osk.c b/arch/arm/mach-omap1/board-osk.c
+index df758c1f92373..463687b9ca52a 100644
+--- a/arch/arm/mach-omap1/board-osk.c
++++ b/arch/arm/mach-omap1/board-osk.c
+@@ -25,7 +25,8 @@
+  * with this program; if not, write  to the Free Software Foundation, Inc.,
+  * 675 Mass Ave, Cambridge, MA 02139, USA.
+  */
+-#include <linux/gpio.h>
++#include <linux/gpio/consumer.h>
++#include <linux/gpio/driver.h>
+ #include <linux/gpio/machine.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+@@ -64,13 +65,12 @@
+ /* TPS65010 has four GPIOs.  nPG and LED2 can be treated like GPIOs with
+  * alternate pin configurations for hardware-controlled blinking.
+  */
+-#define OSK_TPS_GPIO_BASE		(OMAP_MAX_GPIO_LINES + 16 /* MPUIO */)
+-#	define OSK_TPS_GPIO_USB_PWR_EN	(OSK_TPS_GPIO_BASE + 0)
+-#	define OSK_TPS_GPIO_LED_D3	(OSK_TPS_GPIO_BASE + 1)
+-#	define OSK_TPS_GPIO_LAN_RESET	(OSK_TPS_GPIO_BASE + 2)
+-#	define OSK_TPS_GPIO_DSP_PWR_EN	(OSK_TPS_GPIO_BASE + 3)
+-#	define OSK_TPS_GPIO_LED_D9	(OSK_TPS_GPIO_BASE + 4)
+-#	define OSK_TPS_GPIO_LED_D2	(OSK_TPS_GPIO_BASE + 5)
++#define OSK_TPS_GPIO_USB_PWR_EN	0
++#define OSK_TPS_GPIO_LED_D3	1
++#define OSK_TPS_GPIO_LAN_RESET	2
++#define OSK_TPS_GPIO_DSP_PWR_EN	3
++#define OSK_TPS_GPIO_LED_D9	4
++#define OSK_TPS_GPIO_LED_D2	5
+ 
+ static struct mtd_partition osk_partitions[] = {
+ 	/* bootloader (U-Boot, etc) in first sector */
+@@ -174,11 +174,20 @@ static const struct gpio_led tps_leds[] = {
+ 	/* NOTE:  D9 and D2 have hardware blink support.
+ 	 * Also, D9 requires non-battery power.
+ 	 */
+-	{ .gpio = OSK_TPS_GPIO_LED_D9, .name = "d9",
+-			.default_trigger = "disk-activity", },
+-	{ .gpio = OSK_TPS_GPIO_LED_D2, .name = "d2", },
+-	{ .gpio = OSK_TPS_GPIO_LED_D3, .name = "d3", .active_low = 1,
+-			.default_trigger = "heartbeat", },
++	{ .name = "d9", .default_trigger = "disk-activity", },
++	{ .name = "d2", },
++	{ .name = "d3", .default_trigger = "heartbeat", },
++};
++
++static struct gpiod_lookup_table tps_leds_gpio_table = {
++	.dev_id = "leds-gpio",
++	.table = {
++		/* Use local offsets on TPS65010 */
++		GPIO_LOOKUP_IDX("tps65010", OSK_TPS_GPIO_LED_D9, NULL, 0, GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP_IDX("tps65010", OSK_TPS_GPIO_LED_D2, NULL, 1, GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP_IDX("tps65010", OSK_TPS_GPIO_LED_D3, NULL, 2, GPIO_ACTIVE_LOW),
++		{ }
++	},
+ };
+ 
+ static struct gpio_led_platform_data tps_leds_data = {
+@@ -192,29 +201,34 @@ static struct platform_device osk5912_tps_leds = {
+ 	.dev.platform_data	= &tps_leds_data,
+ };
+ 
+-static int osk_tps_setup(struct i2c_client *client, void *context)
++/* The board just hold these GPIOs hogged from setup to teardown */
++static struct gpio_desc *eth_reset;
++static struct gpio_desc *vdd_dsp;
++
++static int osk_tps_setup(struct i2c_client *client, struct gpio_chip *gc)
+ {
++	struct gpio_desc *d;
+ 	if (!IS_BUILTIN(CONFIG_TPS65010))
+ 		return -ENOSYS;
+ 
+ 	/* Set GPIO 1 HIGH to disable VBUS power supply;
+ 	 * OHCI driver powers it up/down as needed.
+ 	 */
+-	gpio_request(OSK_TPS_GPIO_USB_PWR_EN, "n_vbus_en");
+-	gpio_direction_output(OSK_TPS_GPIO_USB_PWR_EN, 1);
++	d = gpiochip_request_own_desc(gc, OSK_TPS_GPIO_USB_PWR_EN, "n_vbus_en",
++				      GPIO_ACTIVE_HIGH, GPIOD_OUT_HIGH);
+ 	/* Free the GPIO again as the driver will request it */
+-	gpio_free(OSK_TPS_GPIO_USB_PWR_EN);
++	gpiochip_free_own_desc(d);
+ 
+ 	/* Set GPIO 2 high so LED D3 is off by default */
+ 	tps65010_set_gpio_out_value(GPIO2, HIGH);
+ 
+ 	/* Set GPIO 3 low to take ethernet out of reset */
+-	gpio_request(OSK_TPS_GPIO_LAN_RESET, "smc_reset");
+-	gpio_direction_output(OSK_TPS_GPIO_LAN_RESET, 0);
++	eth_reset = gpiochip_request_own_desc(gc, OSK_TPS_GPIO_LAN_RESET, "smc_reset",
++					      GPIO_ACTIVE_HIGH, GPIOD_OUT_LOW);
+ 
+ 	/* GPIO4 is VDD_DSP */
+-	gpio_request(OSK_TPS_GPIO_DSP_PWR_EN, "dsp_power");
+-	gpio_direction_output(OSK_TPS_GPIO_DSP_PWR_EN, 1);
++	vdd_dsp = gpiochip_request_own_desc(gc, OSK_TPS_GPIO_DSP_PWR_EN, "dsp_power",
++					    GPIO_ACTIVE_HIGH, GPIOD_OUT_HIGH);
+ 	/* REVISIT if DSP support isn't configured, power it off ... */
+ 
+ 	/* Let LED1 (D9) blink; leds-gpio may override it */
+@@ -232,15 +246,22 @@ static int osk_tps_setup(struct i2c_client *client, void *context)
+ 
+ 	/* register these three LEDs */
+ 	osk5912_tps_leds.dev.parent = &client->dev;
++	gpiod_add_lookup_table(&tps_leds_gpio_table);
+ 	platform_device_register(&osk5912_tps_leds);
+ 
+ 	return 0;
+ }
+ 
++static void osk_tps_teardown(struct i2c_client *client, struct gpio_chip *gc)
++{
++	gpiochip_free_own_desc(eth_reset);
++	gpiochip_free_own_desc(vdd_dsp);
++}
++
+ static struct tps65010_board tps_board = {
+-	.base		= OSK_TPS_GPIO_BASE,
+ 	.outmask	= 0x0f,
+ 	.setup		= osk_tps_setup,
++	.teardown	= osk_tps_teardown,
+ };
+ 
+ static struct i2c_board_info __initdata osk_i2c_board_info[] = {
+@@ -263,11 +284,6 @@ static void __init osk_init_smc91x(void)
+ {
+ 	u32 l;
+ 
+-	if ((gpio_request(0, "smc_irq")) < 0) {
+-		printk("Error requesting gpio 0 for smc91x irq\n");
+-		return;
+-	}
+-
+ 	/* Check EMIFS wait states to fix errors with SMC_GET_PKT_HDR */
+ 	l = omap_readl(EMIFS_CCS(1));
+ 	l |= 0x3;
+@@ -279,10 +295,6 @@ static void __init osk_init_cf(int seg)
+ 	struct resource *res = &osk5912_cf_resources[1];
+ 
+ 	omap_cfg_reg(M7_1610_GPIO62);
+-	if ((gpio_request(62, "cf_irq")) < 0) {
+-		printk("Error requesting gpio 62 for CF irq\n");
+-		return;
+-	}
+ 
+ 	switch (seg) {
+ 	/* NOTE: CS0 could be configured too ... */
+@@ -308,18 +320,17 @@ static void __init osk_init_cf(int seg)
+ 		seg, omap_readl(EMIFS_CCS(seg)), omap_readl(EMIFS_ACS(seg)));
+ 	omap_writel(0x0004a1b3, EMIFS_CCS(seg));	/* synch mode 4 etc */
+ 	omap_writel(0x00000000, EMIFS_ACS(seg));	/* OE hold/setup */
+-
+-	/* the CF I/O IRQ is really active-low */
+-	irq_set_irq_type(gpio_to_irq(62), IRQ_TYPE_EDGE_FALLING);
+ }
+ 
+ static struct gpiod_lookup_table osk_usb_gpio_table = {
+ 	.dev_id = "ohci",
+ 	.table = {
+ 		/* Power GPIO on the I2C-attached TPS65010 */
+-		GPIO_LOOKUP("tps65010", 0, "power", GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("tps65010", OSK_TPS_GPIO_USB_PWR_EN, "power",
++			    GPIO_ACTIVE_HIGH),
+ 		GPIO_LOOKUP(OMAP_GPIO_LABEL, 9, "overcurrent",
+ 			    GPIO_ACTIVE_HIGH),
++		{ }
+ 	},
+ };
+ 
+@@ -341,8 +352,32 @@ static struct omap_usb_config osk_usb_config __initdata = {
+ 
+ #define EMIFS_CS3_VAL	(0x88013141)
+ 
++static struct gpiod_lookup_table osk_irq_gpio_table = {
++	.dev_id = NULL,
++	.table = {
++		/* GPIO used for SMC91x IRQ */
++		GPIO_LOOKUP(OMAP_GPIO_LABEL, 0, "smc_irq",
++			    GPIO_ACTIVE_HIGH),
++		/* GPIO used for CF IRQ */
++		GPIO_LOOKUP("gpio-48-63", 14, "cf_irq",
++			    GPIO_ACTIVE_HIGH),
++		/* GPIO used by the TPS65010 chip */
++		GPIO_LOOKUP("mpuio", 1, "tps65010",
++			    GPIO_ACTIVE_HIGH),
++		/* GPIOs used for serial wakeup IRQs */
++		GPIO_LOOKUP_IDX("gpio-32-47", 5, "wakeup", 0,
++			    GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP_IDX("gpio-16-31", 2, "wakeup", 1,
++			    GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP_IDX("gpio-48-63", 1, "wakeup", 2,
++			    GPIO_ACTIVE_HIGH),
++		{ }
++	},
++};
++
+ static void __init osk_init(void)
+ {
++	struct gpio_desc *d;
+ 	u32 l;
+ 
+ 	osk_init_smc91x();
+@@ -359,10 +394,31 @@ static void __init osk_init(void)
+ 
+ 	osk_flash_resource.end = osk_flash_resource.start = omap_cs3_phys();
+ 	osk_flash_resource.end += SZ_32M - 1;
+-	osk5912_smc91x_resources[1].start = gpio_to_irq(0);
+-	osk5912_smc91x_resources[1].end = gpio_to_irq(0);
+-	osk5912_cf_resources[0].start = gpio_to_irq(62);
+-	osk5912_cf_resources[0].end = gpio_to_irq(62);
++
++	/*
++	 * Add the GPIOs to be used as IRQs and immediately look them up
++	 * to be passed as an IRQ resource. This is ugly but should work
++	 * until the day we convert to device tree.
++	 */
++	gpiod_add_lookup_table(&osk_irq_gpio_table);
++
++	d = gpiod_get(NULL, "smc_irq", GPIOD_IN);
++	if (IS_ERR(d)) {
++		pr_err("Unable to get SMC IRQ GPIO descriptor\n");
++	} else {
++		irq_set_irq_type(gpiod_to_irq(d), IRQ_TYPE_EDGE_RISING);
++		osk5912_smc91x_resources[1] = DEFINE_RES_IRQ(gpiod_to_irq(d));
++	}
++
++	d = gpiod_get(NULL, "cf_irq", GPIOD_IN);
++	if (IS_ERR(d)) {
++		pr_err("Unable to get CF IRQ GPIO descriptor\n");
++	} else {
++		/* the CF I/O IRQ is really active-low */
++		irq_set_irq_type(gpiod_to_irq(d), IRQ_TYPE_EDGE_FALLING);
++		osk5912_cf_resources[0] = DEFINE_RES_IRQ(gpiod_to_irq(d));
++	}
++
+ 	platform_add_devices(osk5912_devices, ARRAY_SIZE(osk5912_devices));
+ 
+ 	l = omap_readl(USB_TRANSCEIVER_CTRL);
+@@ -372,13 +428,15 @@ static void __init osk_init(void)
+ 	gpiod_add_lookup_table(&osk_usb_gpio_table);
+ 	omap1_usb_init(&osk_usb_config);
+ 
++	omap_serial_init();
++
+ 	/* irq for tps65010 chip */
+ 	/* bootloader effectively does:  omap_cfg_reg(U19_1610_MPUIO1); */
+-	if (gpio_request(OMAP_MPUIO(1), "tps65010") == 0)
+-		gpio_direction_input(OMAP_MPUIO(1));
+-
+-	omap_serial_init();
+-	osk_i2c_board_info[0].irq = gpio_to_irq(OMAP_MPUIO(1));
++	d = gpiod_get(NULL, "tps65010", GPIOD_IN);
++	if (IS_ERR(d))
++		pr_err("Unable to get TPS65010 IRQ GPIO descriptor\n");
++	else
++		osk_i2c_board_info[0].irq = gpiod_to_irq(d);
+ 	omap_register_i2c_bus(1, 400, osk_i2c_board_info,
+ 			      ARRAY_SIZE(osk_i2c_board_info));
+ }
+diff --git a/arch/arm/mach-omap1/board-palmte.c b/arch/arm/mach-omap1/board-palmte.c
+index f79c497f04d57..49b7757cb2fd3 100644
+--- a/arch/arm/mach-omap1/board-palmte.c
++++ b/arch/arm/mach-omap1/board-palmte.c
+@@ -13,7 +13,8 @@
+  *
+  * Copyright (c) 2006 Andrzej Zaborowski  <balrog@zabor.org>
+  */
+-#include <linux/gpio.h>
++#include <linux/gpio/machine.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+ #include <linux/input.h>
+@@ -187,23 +188,6 @@ static struct spi_board_info palmte_spi_info[] __initdata = {
+ 	},
+ };
+ 
+-static void __init palmte_misc_gpio_setup(void)
+-{
+-	/* Set TSC2102 PINTDAV pin as input (used by TSC2102 driver) */
+-	if (gpio_request(PALMTE_PINTDAV_GPIO, "TSC2102 PINTDAV") < 0) {
+-		printk(KERN_ERR "Could not reserve PINTDAV GPIO!\n");
+-		return;
+-	}
+-	gpio_direction_input(PALMTE_PINTDAV_GPIO);
+-
+-	/* Set USB-or-DC-IN pin as input (unused) */
+-	if (gpio_request(PALMTE_USB_OR_DC_GPIO, "USB/DC-IN") < 0) {
+-		printk(KERN_ERR "Could not reserve cable signal GPIO!\n");
+-		return;
+-	}
+-	gpio_direction_input(PALMTE_USB_OR_DC_GPIO);
+-}
+-
+ #if IS_ENABLED(CONFIG_MMC_OMAP)
+ 
+ static struct omap_mmc_platform_data _palmte_mmc_config = {
+@@ -231,8 +215,23 @@ static void palmte_mmc_init(void)
+ 
+ #endif /* CONFIG_MMC_OMAP */
+ 
++static struct gpiod_lookup_table palmte_irq_gpio_table = {
++	.dev_id = NULL,
++	.table = {
++		/* GPIO used for TSC2102 PINTDAV IRQ */
++		GPIO_LOOKUP("gpio-0-15", PALMTE_PINTDAV_GPIO, "tsc2102_irq",
++			    GPIO_ACTIVE_HIGH),
++		/* GPIO used for USB or DC input detection */
++		GPIO_LOOKUP("gpio-0-15", PALMTE_USB_OR_DC_GPIO, "usb_dc_irq",
++			    GPIO_ACTIVE_HIGH),
++		{ }
++	},
++};
++
+ static void __init omap_palmte_init(void)
+ {
++	struct gpio_desc *d;
++
+ 	/* mux pins for uarts */
+ 	omap_cfg_reg(UART1_TX);
+ 	omap_cfg_reg(UART1_RTS);
+@@ -243,9 +242,21 @@ static void __init omap_palmte_init(void)
+ 
+ 	platform_add_devices(palmte_devices, ARRAY_SIZE(palmte_devices));
+ 
+-	palmte_spi_info[0].irq = gpio_to_irq(PALMTE_PINTDAV_GPIO);
++	gpiod_add_lookup_table(&palmte_irq_gpio_table);
++	d = gpiod_get(NULL, "tsc2102_irq", GPIOD_IN);
++	if (IS_ERR(d))
++		pr_err("Unable to get TSC2102 IRQ GPIO descriptor\n");
++	else
++		palmte_spi_info[0].irq = gpiod_to_irq(d);
+ 	spi_register_board_info(palmte_spi_info, ARRAY_SIZE(palmte_spi_info));
+-	palmte_misc_gpio_setup();
++
++	/* We are getting this just to set it up as input */
++	d = gpiod_get(NULL, "usb_dc_irq", GPIOD_IN);
++	if (IS_ERR(d))
++		pr_err("Unable to get USB/DC IRQ GPIO descriptor\n");
++	else
++		gpiod_put(d);
++
+ 	omap_serial_init();
+ 	omap1_usb_init(&palmte_usb_config);
+ 	omap_register_i2c_bus(1, 100, NULL, 0);
+diff --git a/arch/arm/mach-omap1/board-sx1-mmc.c b/arch/arm/mach-omap1/board-sx1-mmc.c
+index f1c160924dfe4..f183a8448a7b0 100644
+--- a/arch/arm/mach-omap1/board-sx1-mmc.c
++++ b/arch/arm/mach-omap1/board-sx1-mmc.c
+@@ -9,7 +9,6 @@
+  * Copyright (C) 2007 Instituto Nokia de Tecnologia - INdT
+  */
+ 
+-#include <linux/gpio.h>
+ #include <linux/platform_device.h>
+ 
+ #include "hardware.h"
+diff --git a/arch/arm/mach-omap1/board-sx1.c b/arch/arm/mach-omap1/board-sx1.c
+index 0c0cdd5e77c79..a13c630be7b7f 100644
+--- a/arch/arm/mach-omap1/board-sx1.c
++++ b/arch/arm/mach-omap1/board-sx1.c
+@@ -11,7 +11,8 @@
+ * Maintainters : Vladimir Ananiev (aka Vovan888), Sergge
+ *		oslik.ru
+ */
+-#include <linux/gpio.h>
++#include <linux/gpio/machine.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+ #include <linux/input.h>
+@@ -304,8 +305,23 @@ static struct platform_device *sx1_devices[] __initdata = {
+ 
+ /*-----------------------------------------*/
+ 
++static struct gpiod_lookup_table sx1_gpio_table = {
++	.dev_id = NULL,
++	.table = {
++		GPIO_LOOKUP("gpio-0-15", 1, "irda_off",
++			    GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("gpio-0-15", 11, "switch",
++			    GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("gpio-0-15", 15, "usb_on",
++			    GPIO_ACTIVE_HIGH),
++		{ }
++	},
++};
++
+ static void __init omap_sx1_init(void)
+ {
++	struct gpio_desc *d;
++
+ 	/* mux pins for uarts */
+ 	omap_cfg_reg(UART1_TX);
+ 	omap_cfg_reg(UART1_RTS);
+@@ -320,15 +336,25 @@ static void __init omap_sx1_init(void)
+ 	omap_register_i2c_bus(1, 100, NULL, 0);
+ 	omap1_usb_init(&sx1_usb_config);
+ 	sx1_mmc_init();
++	gpiod_add_lookup_table(&sx1_gpio_table);
+ 
+ 	/* turn on USB power */
+ 	/* sx1_setusbpower(1); can't do it here because i2c is not ready */
+-	gpio_request(1, "A_IRDA_OFF");
+-	gpio_request(11, "A_SWITCH");
+-	gpio_request(15, "A_USB_ON");
+-	gpio_direction_output(1, 1);	/*A_IRDA_OFF = 1 */
+-	gpio_direction_output(11, 0);	/*A_SWITCH = 0 */
+-	gpio_direction_output(15, 0);	/*A_USB_ON = 0 */
++	d = gpiod_get(NULL, "irda_off", GPIOD_OUT_HIGH);
++	if (IS_ERR(d))
++		pr_err("Unable to get IRDA OFF GPIO descriptor\n");
++	else
++		gpiod_put(d);
++	d = gpiod_get(NULL, "switch", GPIOD_OUT_LOW);
++	if (IS_ERR(d))
++		pr_err("Unable to get SWITCH GPIO descriptor\n");
++	else
++		gpiod_put(d);
++	d = gpiod_get(NULL, "usb_on", GPIOD_OUT_LOW);
++	if (IS_ERR(d))
++		pr_err("Unable to get USB ON GPIO descriptor\n");
++	else
++		gpiod_put(d);
+ 
+ 	omapfb_set_lcd_config(&sx1_lcd_config);
+ }
+diff --git a/arch/arm/mach-omap1/devices.c b/arch/arm/mach-omap1/devices.c
+index 5304699c7a97e..8b2c5f911e973 100644
+--- a/arch/arm/mach-omap1/devices.c
++++ b/arch/arm/mach-omap1/devices.c
+@@ -6,7 +6,6 @@
+  */
+ 
+ #include <linux/dma-mapping.h>
+-#include <linux/gpio.h>
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+diff --git a/arch/arm/mach-omap1/gpio15xx.c b/arch/arm/mach-omap1/gpio15xx.c
+index 61fa26efd8653..6724af4925f24 100644
+--- a/arch/arm/mach-omap1/gpio15xx.c
++++ b/arch/arm/mach-omap1/gpio15xx.c
+@@ -8,7 +8,6 @@
+  *	Charulatha V <charu@ti.com>
+  */
+ 
+-#include <linux/gpio.h>
+ #include <linux/platform_data/gpio-omap.h>
+ #include <linux/soc/ti/omap1-soc.h>
+ #include <asm/irq.h>
+diff --git a/arch/arm/mach-omap1/gpio16xx.c b/arch/arm/mach-omap1/gpio16xx.c
+index cf052714b3f8a..55acec22fef4e 100644
+--- a/arch/arm/mach-omap1/gpio16xx.c
++++ b/arch/arm/mach-omap1/gpio16xx.c
+@@ -8,7 +8,6 @@
+  *	Charulatha V <charu@ti.com>
+  */
+ 
+-#include <linux/gpio.h>
+ #include <linux/platform_data/gpio-omap.h>
+ #include <linux/soc/ti/omap1-io.h>
+ 
+diff --git a/arch/arm/mach-omap1/irq.c b/arch/arm/mach-omap1/irq.c
+index bfc7ab010ae28..af06a8753fdc3 100644
+--- a/arch/arm/mach-omap1/irq.c
++++ b/arch/arm/mach-omap1/irq.c
+@@ -35,7 +35,6 @@
+  * with this program; if not, write  to the Free Software Foundation, Inc.,
+  * 675 Mass Ave, Cambridge, MA 02139, USA.
+  */
+-#include <linux/gpio.h>
+ #include <linux/init.h>
+ #include <linux/module.h>
+ #include <linux/sched.h>
+diff --git a/arch/arm/mach-omap1/serial.c b/arch/arm/mach-omap1/serial.c
+index c7f5906457748..3adceb97138fb 100644
+--- a/arch/arm/mach-omap1/serial.c
++++ b/arch/arm/mach-omap1/serial.c
+@@ -4,7 +4,8 @@
+  *
+  * OMAP1 serial support.
+  */
+-#include <linux/gpio.h>
++#include <linux/gpio/machine.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+@@ -196,39 +197,38 @@ void omap_serial_wake_trigger(int enable)
+ 	}
+ }
+ 
+-static void __init omap_serial_set_port_wakeup(int gpio_nr)
++static void __init omap_serial_set_port_wakeup(int idx)
+ {
++	struct gpio_desc *d;
+ 	int ret;
+ 
+-	ret = gpio_request(gpio_nr, "UART wake");
+-	if (ret < 0) {
+-		printk(KERN_ERR "Could not request UART wake GPIO: %i\n",
+-		       gpio_nr);
++	d = gpiod_get_index(NULL, "wakeup", idx, GPIOD_IN);
++	if (IS_ERR(d)) {
++		pr_err("Unable to get UART wakeup GPIO descriptor\n");
+ 		return;
+ 	}
+-	gpio_direction_input(gpio_nr);
+-	ret = request_irq(gpio_to_irq(gpio_nr), &omap_serial_wake_interrupt,
++	ret = request_irq(gpiod_to_irq(d), &omap_serial_wake_interrupt,
+ 			  IRQF_TRIGGER_RISING, "serial wakeup", NULL);
+ 	if (ret) {
+-		gpio_free(gpio_nr);
+-		printk(KERN_ERR "No interrupt for UART wake GPIO: %i\n",
+-		       gpio_nr);
++		gpiod_put(d);
++		pr_err("No interrupt for UART%d wake GPIO\n", idx + 1);
+ 		return;
+ 	}
+-	enable_irq_wake(gpio_to_irq(gpio_nr));
++	enable_irq_wake(gpiod_to_irq(d));
+ }
+ 
++
+ int __init omap_serial_wakeup_init(void)
+ {
+ 	if (!cpu_is_omap16xx())
+ 		return 0;
+ 
+ 	if (uart1_ck != NULL)
+-		omap_serial_set_port_wakeup(37);
++		omap_serial_set_port_wakeup(0);
+ 	if (uart2_ck != NULL)
+-		omap_serial_set_port_wakeup(18);
++		omap_serial_set_port_wakeup(1);
+ 	if (uart3_ck != NULL)
+-		omap_serial_set_port_wakeup(49);
++		omap_serial_set_port_wakeup(2);
+ 
+ 	return 0;
+ }
+diff --git a/arch/arm/mach-omap2/board-generic.c b/arch/arm/mach-omap2/board-generic.c
+index 1610c567a6a3a..10d2f078e4a8e 100644
+--- a/arch/arm/mach-omap2/board-generic.c
++++ b/arch/arm/mach-omap2/board-generic.c
+@@ -13,6 +13,7 @@
+ #include <linux/of_platform.h>
+ #include <linux/irqdomain.h>
+ #include <linux/clocksource.h>
++#include <linux/clockchips.h>
+ 
+ #include <asm/setup.h>
+ #include <asm/mach/arch.h>
+diff --git a/arch/arm/mach-omap2/board-n8x0.c b/arch/arm/mach-omap2/board-n8x0.c
+index 3353b0a923d96..564bf80a26212 100644
+--- a/arch/arm/mach-omap2/board-n8x0.c
++++ b/arch/arm/mach-omap2/board-n8x0.c
+@@ -10,7 +10,8 @@
+ 
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+-#include <linux/gpio.h>
++#include <linux/gpio/machine.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/init.h>
+ #include <linux/io.h>
+ #include <linux/irq.h>
+@@ -28,13 +29,12 @@
+ 
+ #include "common.h"
+ #include "mmc.h"
++#include "usb-tusb6010.h"
+ #include "soc.h"
+ #include "common-board-devices.h"
+ 
+ #define TUSB6010_ASYNC_CS	1
+ #define TUSB6010_SYNC_CS	4
+-#define TUSB6010_GPIO_INT	58
+-#define TUSB6010_GPIO_ENABLE	0
+ #define TUSB6010_DMACHAN	0x3f
+ 
+ #define NOKIA_N810_WIMAX	(1 << 2)
+@@ -61,37 +61,6 @@ static void board_check_revision(void)
+ }
+ 
+ #if IS_ENABLED(CONFIG_USB_MUSB_TUSB6010)
+-/*
+- * Enable or disable power to TUSB6010. When enabling, turn on 3.3 V and
+- * 1.5 V voltage regulators of PM companion chip. Companion chip will then
+- * provide then PGOOD signal to TUSB6010 which will release it from reset.
+- */
+-static int tusb_set_power(int state)
+-{
+-	int i, retval = 0;
+-
+-	if (state) {
+-		gpio_set_value(TUSB6010_GPIO_ENABLE, 1);
+-		msleep(1);
+-
+-		/* Wait until TUSB6010 pulls INT pin down */
+-		i = 100;
+-		while (i && gpio_get_value(TUSB6010_GPIO_INT)) {
+-			msleep(1);
+-			i--;
+-		}
+-
+-		if (!i) {
+-			printk(KERN_ERR "tusb: powerup failed\n");
+-			retval = -ENODEV;
+-		}
+-	} else {
+-		gpio_set_value(TUSB6010_GPIO_ENABLE, 0);
+-		msleep(10);
+-	}
+-
+-	return retval;
+-}
+ 
+ static struct musb_hdrc_config musb_config = {
+ 	.multipoint	= 1,
+@@ -102,39 +71,36 @@ static struct musb_hdrc_config musb_config = {
+ 
+ static struct musb_hdrc_platform_data tusb_data = {
+ 	.mode		= MUSB_OTG,
+-	.set_power	= tusb_set_power,
+ 	.min_power	= 25,	/* x2 = 50 mA drawn from VBUS as peripheral */
+ 	.power		= 100,	/* Max 100 mA VBUS for host mode */
+ 	.config		= &musb_config,
+ };
+ 
++static struct gpiod_lookup_table tusb_gpio_table = {
++	.dev_id = "musb-tusb",
++	.table = {
++		GPIO_LOOKUP("gpio-0-15", 0, "enable",
++			    GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("gpio-48-63", 10, "int",
++			    GPIO_ACTIVE_HIGH),
++		{ }
++	},
++};
++
+ static void __init n8x0_usb_init(void)
+ {
+ 	int ret = 0;
+-	static const char announce[] __initconst = KERN_INFO "TUSB 6010\n";
+-
+-	/* PM companion chip power control pin */
+-	ret = gpio_request_one(TUSB6010_GPIO_ENABLE, GPIOF_OUT_INIT_LOW,
+-			       "TUSB6010 enable");
+-	if (ret != 0) {
+-		printk(KERN_ERR "Could not get TUSB power GPIO%i\n",
+-		       TUSB6010_GPIO_ENABLE);
+-		return;
+-	}
+-	tusb_set_power(0);
+ 
++	gpiod_add_lookup_table(&tusb_gpio_table);
+ 	ret = tusb6010_setup_interface(&tusb_data, TUSB6010_REFCLK_19, 2,
+-					TUSB6010_ASYNC_CS, TUSB6010_SYNC_CS,
+-					TUSB6010_GPIO_INT, TUSB6010_DMACHAN);
++				       TUSB6010_ASYNC_CS, TUSB6010_SYNC_CS,
++				       TUSB6010_DMACHAN);
+ 	if (ret != 0)
+-		goto err;
++		return;
+ 
+-	printk(announce);
++	pr_info("TUSB 6010\n");
+ 
+ 	return;
+-
+-err:
+-	gpio_free(TUSB6010_GPIO_ENABLE);
+ }
+ #else
+ 
+@@ -170,22 +136,32 @@ static struct spi_board_info n800_spi_board_info[] __initdata = {
+  * GPIO23 and GPIO9		slot 2 EMMC on N810
+  *
+  */
+-#define N8X0_SLOT_SWITCH_GPIO	96
+-#define N810_EMMC_VSD_GPIO	23
+-#define N810_EMMC_VIO_GPIO	9
+-
+ static int slot1_cover_open;
+ static int slot2_cover_open;
+ static struct device *mmc_device;
+ 
+-static int n8x0_mmc_switch_slot(struct device *dev, int slot)
+-{
+-#ifdef CONFIG_MMC_DEBUG
+-	dev_dbg(dev, "Choose slot %d\n", slot + 1);
+-#endif
+-	gpio_set_value(N8X0_SLOT_SWITCH_GPIO, slot);
+-	return 0;
+-}
++static struct gpiod_lookup_table nokia8xx_mmc_gpio_table = {
++	.dev_id = "mmci-omap.0",
++	.table = {
++		/* Slot switch, GPIO 96 */
++		GPIO_LOOKUP("gpio-80-111", 16,
++			    "switch", GPIO_ACTIVE_HIGH),
++		{ }
++	},
++};
++
++static struct gpiod_lookup_table nokia810_mmc_gpio_table = {
++	.dev_id = "mmci-omap.0",
++	.table = {
++		/* Slot index 1, VSD power, GPIO 23 */
++		GPIO_LOOKUP_IDX("gpio-16-31", 7,
++				"vsd", 1, GPIO_ACTIVE_HIGH),
++		/* Slot index 1, VIO power, GPIO 9 */
++		GPIO_LOOKUP_IDX("gpio-0-15", 9,
++				"vsd", 1, GPIO_ACTIVE_HIGH),
++		{ }
++	},
++};
+ 
+ static int n8x0_mmc_set_power_menelaus(struct device *dev, int slot,
+ 					int power_on, int vdd)
+@@ -256,31 +232,13 @@ static int n8x0_mmc_set_power_menelaus(struct device *dev, int slot,
+ 	return 0;
+ }
+ 
+-static void n810_set_power_emmc(struct device *dev,
+-					 int power_on)
+-{
+-	dev_dbg(dev, "Set EMMC power %s\n", power_on ? "on" : "off");
+-
+-	if (power_on) {
+-		gpio_set_value(N810_EMMC_VSD_GPIO, 1);
+-		msleep(1);
+-		gpio_set_value(N810_EMMC_VIO_GPIO, 1);
+-		msleep(1);
+-	} else {
+-		gpio_set_value(N810_EMMC_VIO_GPIO, 0);
+-		msleep(50);
+-		gpio_set_value(N810_EMMC_VSD_GPIO, 0);
+-		msleep(50);
+-	}
+-}
+-
+ static int n8x0_mmc_set_power(struct device *dev, int slot, int power_on,
+ 			      int vdd)
+ {
+ 	if (board_is_n800() || slot == 0)
+ 		return n8x0_mmc_set_power_menelaus(dev, slot, power_on, vdd);
+ 
+-	n810_set_power_emmc(dev, power_on);
++	/* The n810 power will be handled by GPIO code in the driver */
+ 
+ 	return 0;
+ }
+@@ -418,13 +376,6 @@ static void n8x0_mmc_shutdown(struct device *dev)
+ static void n8x0_mmc_cleanup(struct device *dev)
+ {
+ 	menelaus_unregister_mmc_callback();
+-
+-	gpio_free(N8X0_SLOT_SWITCH_GPIO);
+-
+-	if (board_is_n810()) {
+-		gpio_free(N810_EMMC_VSD_GPIO);
+-		gpio_free(N810_EMMC_VIO_GPIO);
+-	}
+ }
+ 
+ /*
+@@ -433,7 +384,6 @@ static void n8x0_mmc_cleanup(struct device *dev)
+  */
+ static struct omap_mmc_platform_data mmc1_data = {
+ 	.nr_slots			= 0,
+-	.switch_slot			= n8x0_mmc_switch_slot,
+ 	.init				= n8x0_mmc_late_init,
+ 	.cleanup			= n8x0_mmc_cleanup,
+ 	.shutdown			= n8x0_mmc_shutdown,
+@@ -463,14 +413,9 @@ static struct omap_mmc_platform_data mmc1_data = {
+ 
+ static struct omap_mmc_platform_data *mmc_data[OMAP24XX_NR_MMC];
+ 
+-static struct gpio n810_emmc_gpios[] __initdata = {
+-	{ N810_EMMC_VSD_GPIO, GPIOF_OUT_INIT_LOW,  "MMC slot 2 Vddf" },
+-	{ N810_EMMC_VIO_GPIO, GPIOF_OUT_INIT_LOW,  "MMC slot 2 Vdd"  },
+-};
+-
+ static void __init n8x0_mmc_init(void)
+ {
+-	int err;
++	gpiod_add_lookup_table(&nokia8xx_mmc_gpio_table);
+ 
+ 	if (board_is_n810()) {
+ 		mmc1_data.slots[0].name = "external";
+@@ -483,20 +428,7 @@ static void __init n8x0_mmc_init(void)
+ 		 */
+ 		mmc1_data.slots[1].name = "internal";
+ 		mmc1_data.slots[1].ban_openended = 1;
+-	}
+-
+-	err = gpio_request_one(N8X0_SLOT_SWITCH_GPIO, GPIOF_OUT_INIT_LOW,
+-			       "MMC slot switch");
+-	if (err)
+-		return;
+-
+-	if (board_is_n810()) {
+-		err = gpio_request_array(n810_emmc_gpios,
+-					 ARRAY_SIZE(n810_emmc_gpios));
+-		if (err) {
+-			gpio_free(N8X0_SLOT_SWITCH_GPIO);
+-			return;
+-		}
++		gpiod_add_lookup_table(&nokia810_mmc_gpio_table);
+ 	}
+ 
+ 	mmc1_data.nr_slots = 2;
+diff --git a/arch/arm/mach-omap2/omap_device.c b/arch/arm/mach-omap2/omap_device.c
+index 4afa2f08e6681..fca7869c8075a 100644
+--- a/arch/arm/mach-omap2/omap_device.c
++++ b/arch/arm/mach-omap2/omap_device.c
+@@ -244,7 +244,6 @@ static int _omap_device_notifier_call(struct notifier_block *nb,
+ 	case BUS_NOTIFY_ADD_DEVICE:
+ 		if (pdev->dev.of_node)
+ 			omap_device_build_from_dt(pdev);
+-		omap_auxdata_legacy_init(dev);
+ 		fallthrough;
+ 	default:
+ 		od = to_omap_device(pdev);
+diff --git a/arch/arm/mach-omap2/pdata-quirks.c b/arch/arm/mach-omap2/pdata-quirks.c
+index 04208cc52784e..c1c0121f478d6 100644
+--- a/arch/arm/mach-omap2/pdata-quirks.c
++++ b/arch/arm/mach-omap2/pdata-quirks.c
+@@ -6,8 +6,8 @@
+  */
+ #include <linux/clk.h>
+ #include <linux/davinci_emac.h>
++#include <linux/gpio/machine.h>
+ #include <linux/gpio/consumer.h>
+-#include <linux/gpio.h>
+ #include <linux/init.h>
+ #include <linux/kernel.h>
+ #include <linux/of_platform.h>
+@@ -41,7 +41,6 @@ struct pdata_init {
+ };
+ 
+ static struct of_dev_auxdata omap_auxdata_lookup[];
+-static struct twl4030_gpio_platform_data twl_gpio_auxdata;
+ 
+ #ifdef CONFIG_MACH_NOKIA_N8X0
+ static void __init omap2420_n8x0_legacy_init(void)
+@@ -98,52 +97,43 @@ static struct iommu_platform_data omap3_iommu_isp_pdata = {
+ };
+ #endif
+ 
+-static int omap3_sbc_t3730_twl_callback(struct device *dev,
+-					   unsigned gpio,
+-					   unsigned ngpio)
++static void __init omap3_sbc_t3x_usb_hub_init(char *hub_name, int idx)
+ {
+-	int res;
++	struct gpio_desc *d;
+ 
+-	res = gpio_request_one(gpio + 2, GPIOF_OUT_INIT_HIGH,
+-			       "wlan pwr");
+-	if (res)
+-		return res;
+-
+-	gpiod_export(gpio_to_desc(gpio), 0);
+-
+-	return 0;
+-}
+-
+-static void __init omap3_sbc_t3x_usb_hub_init(int gpio, char *hub_name)
+-{
+-	int err = gpio_request_one(gpio, GPIOF_OUT_INIT_LOW, hub_name);
+-
+-	if (err) {
+-		pr_err("SBC-T3x: %s reset gpio request failed: %d\n",
+-			hub_name, err);
++	/* This asserts the RESET line (reverse polarity) */
++	d = gpiod_get_index(NULL, "reset", idx, GPIOD_OUT_HIGH);
++	if (IS_ERR(d)) {
++		pr_err("Unable to get T3x USB reset GPIO descriptor\n");
+ 		return;
+ 	}
+-
+-	gpiod_export(gpio_to_desc(gpio), 0);
+-
++	gpiod_set_consumer_name(d, hub_name);
++	gpiod_export(d, 0);
+ 	udelay(10);
+-	gpio_set_value(gpio, 1);
++	/* De-assert RESET */
++	gpiod_set_value(d, 0);
+ 	msleep(1);
+ }
+ 
+-static void __init omap3_sbc_t3730_twl_init(void)
+-{
+-	twl_gpio_auxdata.setup = omap3_sbc_t3730_twl_callback;
+-}
++static struct gpiod_lookup_table omap3_sbc_t3x_usb_gpio_table = {
++	.dev_id = NULL,
++	.table = {
++		GPIO_LOOKUP_IDX("gpio-160-175", 7, "reset", 0,
++				GPIO_ACTIVE_LOW),
++		{ }
++	},
++};
+ 
+ static void __init omap3_sbc_t3730_legacy_init(void)
+ {
+-	omap3_sbc_t3x_usb_hub_init(167, "sb-t35 usb hub");
++	gpiod_add_lookup_table(&omap3_sbc_t3x_usb_gpio_table);
++	omap3_sbc_t3x_usb_hub_init("sb-t35 usb hub", 0);
+ }
+ 
+ static void __init omap3_sbc_t3530_legacy_init(void)
+ {
+-	omap3_sbc_t3x_usb_hub_init(167, "sb-t35 usb hub");
++	gpiod_add_lookup_table(&omap3_sbc_t3x_usb_gpio_table);
++	omap3_sbc_t3x_usb_hub_init("sb-t35 usb hub", 0);
+ }
+ 
+ static void __init omap3_evm_legacy_init(void)
+@@ -187,31 +177,59 @@ static void __init am35xx_emac_reset(void)
+ 	omap_ctrl_readl(AM35XX_CONTROL_IP_SW_RESET); /* OCP barrier */
+ }
+ 
+-static struct gpio cm_t3517_wlan_gpios[] __initdata = {
+-	{ 56,	GPIOF_OUT_INIT_HIGH,	"wlan pwr" },
+-	{ 4,	GPIOF_OUT_INIT_HIGH,	"xcvr noe" },
++static struct gpiod_lookup_table cm_t3517_wlan_gpio_table = {
++	.dev_id = NULL,
++	.table = {
++		GPIO_LOOKUP("gpio-48-53", 8, "power",
++			    GPIO_ACTIVE_HIGH),
++		GPIO_LOOKUP("gpio-0-15", 4, "noe",
++			    GPIO_ACTIVE_HIGH),
++		{ }
++	},
+ };
+ 
+ static void __init omap3_sbc_t3517_wifi_init(void)
+ {
+-	int err = gpio_request_array(cm_t3517_wlan_gpios,
+-				ARRAY_SIZE(cm_t3517_wlan_gpios));
+-	if (err) {
+-		pr_err("SBC-T3517: wl12xx gpios request failed: %d\n", err);
+-		return;
+-	}
++	struct gpio_desc *d;
++
++	gpiod_add_lookup_table(&cm_t3517_wlan_gpio_table);
+ 
+-	gpiod_export(gpio_to_desc(cm_t3517_wlan_gpios[0].gpio), 0);
+-	gpiod_export(gpio_to_desc(cm_t3517_wlan_gpios[1].gpio), 0);
++	/* Power up the WLAN chip (power GPIO is active high) */
++	d = gpiod_get(NULL, "power", GPIOD_OUT_HIGH);
++	if (IS_ERR(d)) {
++		pr_err("Unable to get CM T3517 WLAN power GPIO descriptor\n");
++	} else {
++		gpiod_set_consumer_name(d, "wlan pwr");
++		gpiod_export(d, 0);
++	}
+ 
++	d = gpiod_get(NULL, "noe", GPIOD_OUT_HIGH);
++	if (IS_ERR(d)) {
++		pr_err("Unable to get CM T3517 WLAN XCVR NOE GPIO descriptor\n");
++	} else {
++		gpiod_set_consumer_name(d, "xcvr noe");
++		gpiod_export(d, 0);
++	}
+ 	msleep(100);
+-	gpio_set_value(cm_t3517_wlan_gpios[1].gpio, 0);
+-}
++	gpiod_set_value(d, 0);
++}
++
++static struct gpiod_lookup_table omap3_sbc_t3517_usb_gpio_table = {
++	.dev_id = NULL,
++	.table = {
++		GPIO_LOOKUP_IDX("gpio-144-159", 8, "reset", 0,
++				GPIO_ACTIVE_LOW),
++		GPIO_LOOKUP_IDX("gpio-96-111", 2, "reset", 1,
++				GPIO_ACTIVE_LOW),
++		{ }
++	},
++};
+ 
+ static void __init omap3_sbc_t3517_legacy_init(void)
+ {
+-	omap3_sbc_t3x_usb_hub_init(152, "cm-t3517 usb hub");
+-	omap3_sbc_t3x_usb_hub_init(98, "sb-t35 usb hub");
++	gpiod_add_lookup_table(&omap3_sbc_t3517_usb_gpio_table);
++	omap3_sbc_t3x_usb_hub_init("cm-t3517 usb hub", 0);
++	omap3_sbc_t3x_usb_hub_init("sb-t35 usb hub", 1);
+ 	am35xx_emac_reset();
+ 	hsmmc2_internal_input_clk();
+ 	omap3_sbc_t3517_wifi_init();
+@@ -393,21 +411,6 @@ static struct ti_prm_platform_data ti_prm_pdata = {
+ 	.clkdm_lookup = clkdm_lookup,
+ };
+ 
+-/*
+- * GPIOs for TWL are initialized by the I2C bus and need custom
+- * handing until DSS has device tree bindings.
+- */
+-void omap_auxdata_legacy_init(struct device *dev)
+-{
+-	if (dev->platform_data)
+-		return;
+-
+-	if (strcmp("twl4030-gpio", dev_name(dev)))
+-		return;
+-
+-	dev->platform_data = &twl_gpio_auxdata;
+-}
+-
+ #if defined(CONFIG_ARCH_OMAP3) && IS_ENABLED(CONFIG_SND_SOC_OMAP_MCBSP)
+ static struct omap_mcbsp_platform_data mcbsp_pdata;
+ static void __init omap3_mcbsp_init(void)
+@@ -427,9 +430,6 @@ static struct pdata_init auxdata_quirks[] __initdata = {
+ 	{ "nokia,n800", omap2420_n8x0_legacy_init, },
+ 	{ "nokia,n810", omap2420_n8x0_legacy_init, },
+ 	{ "nokia,n810-wimax", omap2420_n8x0_legacy_init, },
+-#endif
+-#ifdef CONFIG_ARCH_OMAP3
+-	{ "compulab,omap3-sbc-t3730", omap3_sbc_t3730_twl_init, },
+ #endif
+ 	{ /* sentinel */ },
+ };
+diff --git a/arch/arm/mach-omap2/usb-tusb6010.c b/arch/arm/mach-omap2/usb-tusb6010.c
+index 18fa52f828dc7..b46c254c2bc41 100644
+--- a/arch/arm/mach-omap2/usb-tusb6010.c
++++ b/arch/arm/mach-omap2/usb-tusb6010.c
+@@ -11,12 +11,12 @@
+ #include <linux/errno.h>
+ #include <linux/delay.h>
+ #include <linux/platform_device.h>
+-#include <linux/gpio.h>
+ #include <linux/export.h>
+ #include <linux/platform_data/usb-omap.h>
+ 
+ #include <linux/usb/musb.h>
+ 
++#include "usb-tusb6010.h"
+ #include "gpmc.h"
+ 
+ static u8		async_cs, sync_cs;
+@@ -132,10 +132,6 @@ static struct resource tusb_resources[] = {
+ 	{ /* Synchronous access */
+ 		.flags	= IORESOURCE_MEM,
+ 	},
+-	{ /* IRQ */
+-		.name	= "mc",
+-		.flags	= IORESOURCE_IRQ,
+-	},
+ };
+ 
+ static u64 tusb_dmamask = ~(u32)0;
+@@ -154,9 +150,9 @@ static struct platform_device tusb_device = {
+ 
+ /* this may be called only from board-*.c setup code */
+ int __init tusb6010_setup_interface(struct musb_hdrc_platform_data *data,
+-		unsigned ps_refclk, unsigned waitpin,
+-		unsigned async, unsigned sync,
+-		unsigned irq, unsigned dmachan)
++		unsigned int ps_refclk, unsigned int waitpin,
++		unsigned int async, unsigned int sync,
++		unsigned int dmachan)
+ {
+ 	int		status;
+ 	static char	error[] __initdata =
+@@ -192,14 +188,6 @@ int __init tusb6010_setup_interface(struct musb_hdrc_platform_data *data,
+ 	if (status < 0)
+ 		return status;
+ 
+-	/* IRQ */
+-	status = gpio_request_one(irq, GPIOF_IN, "TUSB6010 irq");
+-	if (status < 0) {
+-		printk(error, 3, status);
+-		return status;
+-	}
+-	tusb_resources[2].start = gpio_to_irq(irq);
+-
+ 	/* set up memory timings ... can speed them up later */
+ 	if (!ps_refclk) {
+ 		printk(error, 4, status);
+diff --git a/arch/arm/mach-omap2/usb-tusb6010.h b/arch/arm/mach-omap2/usb-tusb6010.h
+new file mode 100644
+index 0000000000000..d210ff6238c26
+--- /dev/null
++++ b/arch/arm/mach-omap2/usb-tusb6010.h
+@@ -0,0 +1,12 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++
++#ifndef __USB_TUSB6010_H
++#define __USB_TUSB6010_H
++
++extern int __init tusb6010_setup_interface(
++		struct musb_hdrc_platform_data *data,
++		unsigned int ps_refclk, unsigned int waitpin,
++		unsigned int async_cs, unsigned int sync_cs,
++		unsigned int dmachan);
++
++#endif /* __USB_TUSB6010_H */
+diff --git a/arch/arm/mach-orion5x/board-dt.c b/arch/arm/mach-orion5x/board-dt.c
+index e3736ffc83477..be47492c6640d 100644
+--- a/arch/arm/mach-orion5x/board-dt.c
++++ b/arch/arm/mach-orion5x/board-dt.c
+@@ -60,6 +60,9 @@ static void __init orion5x_dt_init(void)
+ 	if (of_machine_is_compatible("maxtor,shared-storage-2"))
+ 		mss2_init();
+ 
++	if (of_machine_is_compatible("lacie,d2-network"))
++		d2net_init();
++
+ 	of_platform_default_populate(NULL, orion5x_auxdata_lookup, NULL);
+ }
+ 
+diff --git a/arch/arm/mach-orion5x/common.h b/arch/arm/mach-orion5x/common.h
+index f2e0577bf50f4..8df70e23aa82a 100644
+--- a/arch/arm/mach-orion5x/common.h
++++ b/arch/arm/mach-orion5x/common.h
+@@ -73,6 +73,12 @@ extern void mss2_init(void);
+ static inline void mss2_init(void) {}
+ #endif
+ 
++#ifdef CONFIG_MACH_D2NET_DT
++void d2net_init(void);
++#else
++static inline void d2net_init(void) {}
++#endif
++
+ /*****************************************************************************
+  * Helpers to access Orion registers
+  ****************************************************************************/
+diff --git a/arch/arm/mach-pxa/spitz.c b/arch/arm/mach-pxa/spitz.c
+index 4325bdc2b9ff8..28e376e06fdc8 100644
+--- a/arch/arm/mach-pxa/spitz.c
++++ b/arch/arm/mach-pxa/spitz.c
+@@ -506,10 +506,18 @@ static struct ads7846_platform_data spitz_ads7846_info = {
+ 	.x_plate_ohms		= 419,
+ 	.y_plate_ohms		= 486,
+ 	.pressure_max		= 1024,
+-	.gpio_pendown		= SPITZ_GPIO_TP_INT,
+ 	.wait_for_sync		= spitz_ads7846_wait_for_hsync,
+ };
+ 
++static struct gpiod_lookup_table spitz_ads7846_gpio_table = {
++	.dev_id = "spi2.0",
++	.table = {
++		GPIO_LOOKUP("gpio-pxa", SPITZ_GPIO_TP_INT,
++			    "pendown", GPIO_ACTIVE_LOW),
++		{ }
++	},
++};
++
+ static void spitz_bl_kick_battery(void)
+ {
+ 	void (*kick_batt)(void);
+@@ -594,6 +602,7 @@ static void __init spitz_spi_init(void)
+ 	else
+ 		gpiod_add_lookup_table(&spitz_lcdcon_gpio_table);
+ 
++	gpiod_add_lookup_table(&spitz_ads7846_gpio_table);
+ 	gpiod_add_lookup_table(&spitz_spi_gpio_table);
+ 	pxa2xx_set_spi_info(2, &spitz_spi_info);
+ 	spi_register_board_info(ARRAY_AND_SIZE(spitz_spi_devices));
+diff --git a/arch/arm/probes/kprobes/checkers-common.c b/arch/arm/probes/kprobes/checkers-common.c
+index 4d720990cf2a3..eba7ac4725c02 100644
+--- a/arch/arm/probes/kprobes/checkers-common.c
++++ b/arch/arm/probes/kprobes/checkers-common.c
+@@ -40,7 +40,7 @@ enum probes_insn checker_stack_use_imm_0xx(probes_opcode_t insn,
+  * Different from other insn uses imm8, the real addressing offset of
+  * STRD in T32 encoding should be imm8 * 4. See ARMARM description.
+  */
+-enum probes_insn checker_stack_use_t32strd(probes_opcode_t insn,
++static enum probes_insn checker_stack_use_t32strd(probes_opcode_t insn,
+ 		struct arch_probes_insn *asi,
+ 		const struct decode_header *h)
+ {
+diff --git a/arch/arm/probes/kprobes/core.c b/arch/arm/probes/kprobes/core.c
+index 9090c3a74dcce..d8238da095df7 100644
+--- a/arch/arm/probes/kprobes/core.c
++++ b/arch/arm/probes/kprobes/core.c
+@@ -233,7 +233,7 @@ singlestep(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *kcb)
+  * kprobe, and that level is reserved for user kprobe handlers, so we can't
+  * risk encountering a new kprobe in an interrupt handler.
+  */
+-void __kprobes kprobe_handler(struct pt_regs *regs)
++static void __kprobes kprobe_handler(struct pt_regs *regs)
+ {
+ 	struct kprobe *p, *cur;
+ 	struct kprobe_ctlblk *kcb;
+diff --git a/arch/arm/probes/kprobes/opt-arm.c b/arch/arm/probes/kprobes/opt-arm.c
+index dbef34ed933f2..7f65048380ca5 100644
+--- a/arch/arm/probes/kprobes/opt-arm.c
++++ b/arch/arm/probes/kprobes/opt-arm.c
+@@ -145,8 +145,6 @@ __arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
+ 	}
+ }
+ 
+-extern void kprobe_handler(struct pt_regs *regs);
+-
+ static void
+ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
+ {
+diff --git a/arch/arm/probes/kprobes/test-core.c b/arch/arm/probes/kprobes/test-core.c
+index c562832b86272..171c7076b89f4 100644
+--- a/arch/arm/probes/kprobes/test-core.c
++++ b/arch/arm/probes/kprobes/test-core.c
+@@ -720,7 +720,7 @@ static const char coverage_register_lookup[16] = {
+ 	[REG_TYPE_NOSPPCX]	= COVERAGE_ANY_REG | COVERAGE_SP,
+ };
+ 
+-unsigned coverage_start_registers(const struct decode_header *h)
++static unsigned coverage_start_registers(const struct decode_header *h)
+ {
+ 	unsigned regs = 0;
+ 	int i;
+diff --git a/arch/arm/probes/kprobes/test-core.h b/arch/arm/probes/kprobes/test-core.h
+index 56ad3c0aaeeac..c7297037c1623 100644
+--- a/arch/arm/probes/kprobes/test-core.h
++++ b/arch/arm/probes/kprobes/test-core.h
+@@ -454,3 +454,7 @@ void kprobe_thumb32_test_cases(void);
+ #else
+ void kprobe_arm_test_cases(void);
+ #endif
++
++void __kprobes_test_case_start(void);
++void __kprobes_test_case_end_16(void);
++void __kprobes_test_case_end_32(void);
+diff --git a/arch/arm64/boot/dts/mediatek/mt7986a-bananapi-bpi-r3-nand.dtso b/arch/arm64/boot/dts/mediatek/mt7986a-bananapi-bpi-r3-nand.dtso
+index 15ee8c568f3c3..543c13385d6e3 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7986a-bananapi-bpi-r3-nand.dtso
++++ b/arch/arm64/boot/dts/mediatek/mt7986a-bananapi-bpi-r3-nand.dtso
+@@ -29,13 +29,13 @@
+ 
+ 					partition@0 {
+ 						label = "bl2";
+-						reg = <0x0 0x80000>;
++						reg = <0x0 0x100000>;
+ 						read-only;
+ 					};
+ 
+-					partition@80000 {
++					partition@100000 {
+ 						label = "reserved";
+-						reg = <0x80000 0x300000>;
++						reg = <0x100000 0x280000>;
+ 					};
+ 
+ 					partition@380000 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+index 63952c1251dfd..8892b2f64a0f0 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+@@ -292,6 +292,10 @@
+ 	};
+ };
+ 
++&gic {
++	mediatek,broken-save-restore-fw;
++};
++
+ &gpu {
+ 	mali-supply = <&mt6358_vgpu_reg>;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi b/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
+index 5a440504d4f9b..0e8b341170907 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
+@@ -275,6 +275,10 @@
+ 	remote-endpoint = <&anx7625_in>;
+ };
+ 
++&gic {
++	mediatek,broken-save-restore-fw;
++};
++
+ &gpu {
+ 	mali-supply = <&mt6315_7_vbuck1>;
+ 	status = "okay";
+diff --git a/arch/arm64/boot/dts/mediatek/mt8192.dtsi b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+index 5c30caf740265..75eeba539e6fe 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8192.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+@@ -70,7 +70,8 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <128>;
+ 			next-level-cache = <&l2_0>;
+-			capacity-dmips-mhz = <530>;
++			performance-domains = <&performance 0>;
++			capacity-dmips-mhz = <427>;
+ 		};
+ 
+ 		cpu1: cpu@100 {
+@@ -87,7 +88,8 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <128>;
+ 			next-level-cache = <&l2_0>;
+-			capacity-dmips-mhz = <530>;
++			performance-domains = <&performance 0>;
++			capacity-dmips-mhz = <427>;
+ 		};
+ 
+ 		cpu2: cpu@200 {
+@@ -104,7 +106,8 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <128>;
+ 			next-level-cache = <&l2_0>;
+-			capacity-dmips-mhz = <530>;
++			performance-domains = <&performance 0>;
++			capacity-dmips-mhz = <427>;
+ 		};
+ 
+ 		cpu3: cpu@300 {
+@@ -121,7 +124,8 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <128>;
+ 			next-level-cache = <&l2_0>;
+-			capacity-dmips-mhz = <530>;
++			performance-domains = <&performance 0>;
++			capacity-dmips-mhz = <427>;
+ 		};
+ 
+ 		cpu4: cpu@400 {
+@@ -138,6 +142,7 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <256>;
+ 			next-level-cache = <&l2_1>;
++			performance-domains = <&performance 1>;
+ 			capacity-dmips-mhz = <1024>;
+ 		};
+ 
+@@ -155,6 +160,7 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <256>;
+ 			next-level-cache = <&l2_1>;
++			performance-domains = <&performance 1>;
+ 			capacity-dmips-mhz = <1024>;
+ 		};
+ 
+@@ -172,6 +178,7 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <256>;
+ 			next-level-cache = <&l2_1>;
++			performance-domains = <&performance 1>;
+ 			capacity-dmips-mhz = <1024>;
+ 		};
+ 
+@@ -189,6 +196,7 @@
+ 			d-cache-line-size = <64>;
+ 			d-cache-sets = <256>;
+ 			next-level-cache = <&l2_1>;
++			performance-domains = <&performance 1>;
+ 			capacity-dmips-mhz = <1024>;
+ 		};
+ 
+@@ -403,6 +411,12 @@
+ 		compatible = "simple-bus";
+ 		ranges;
+ 
++		performance: performance-controller@11bc10 {
++			compatible = "mediatek,cpufreq-hw";
++			reg = <0 0x0011bc10 0 0x120>, <0 0x0011bd30 0 0x120>;
++			#performance-domain-cells = <1>;
++		};
++
+ 		gic: interrupt-controller@c000000 {
+ 			compatible = "arm,gic-v3";
+ 			#interrupt-cells = <4>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+index 8ac80a136c371..f2d0726546c77 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+@@ -255,6 +255,10 @@
+ 	};
+ };
+ 
++&gic {
++	mediatek,broken-save-restore-fw;
++};
++
+ &gpu {
+ 	status = "okay";
+ 	mali-supply = <&mt6315_7_vbuck1>;
+diff --git a/arch/arm64/boot/dts/microchip/sparx5.dtsi b/arch/arm64/boot/dts/microchip/sparx5.dtsi
+index 0367a00a269b3..5eae6e7fd248e 100644
+--- a/arch/arm64/boot/dts/microchip/sparx5.dtsi
++++ b/arch/arm64/boot/dts/microchip/sparx5.dtsi
+@@ -61,7 +61,7 @@
+ 		interrupt-affinity = <&cpu0>, <&cpu1>;
+ 	};
+ 
+-	psci {
++	psci: psci {
+ 		compatible = "arm,psci-0.2";
+ 		method = "smc";
+ 	};
+diff --git a/arch/arm64/boot/dts/microchip/sparx5_pcb_common.dtsi b/arch/arm64/boot/dts/microchip/sparx5_pcb_common.dtsi
+index 9d1a082de3e29..32bb76b3202a0 100644
+--- a/arch/arm64/boot/dts/microchip/sparx5_pcb_common.dtsi
++++ b/arch/arm64/boot/dts/microchip/sparx5_pcb_common.dtsi
+@@ -6,6 +6,18 @@
+ /dts-v1/;
+ #include "sparx5.dtsi"
+ 
++&psci {
++	status = "disabled";
++};
++
++&cpu0 {
++	enable-method = "spin-table";
++};
++
++&cpu1 {
++	enable-method = "spin-table";
++};
++
+ &uart0 {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/apq8016-sbc.dts b/arch/arm64/boot/dts/qcom/apq8016-sbc.dts
+index 59860a2223b83..3ec449f5cab78 100644
+--- a/arch/arm64/boot/dts/qcom/apq8016-sbc.dts
++++ b/arch/arm64/boot/dts/qcom/apq8016-sbc.dts
+@@ -447,21 +447,21 @@
+ 	vdd_l7-supply = <&pm8916_s4>;
+ 
+ 	s3 {
+-		regulator-min-microvolt = <375000>;
+-		regulator-max-microvolt = <1562000>;
++		regulator-min-microvolt = <1250000>;
++		regulator-max-microvolt = <1350000>;
+ 	};
+ 
+ 	s4 {
+-		regulator-min-microvolt = <1800000>;
+-		regulator-max-microvolt = <1800000>;
++		regulator-min-microvolt = <1850000>;
++		regulator-max-microvolt = <2150000>;
+ 
+ 		regulator-always-on;
+ 		regulator-boot-on;
+ 	};
+ 
+ 	l1 {
+-		regulator-min-microvolt = <375000>;
+-		regulator-max-microvolt = <1525000>;
++		regulator-min-microvolt = <1225000>;
++		regulator-max-microvolt = <1225000>;
+ 	};
+ 
+ 	l2 {
+@@ -470,13 +470,13 @@
+ 	};
+ 
+ 	l4 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <2050000>;
++		regulator-max-microvolt = <2050000>;
+ 	};
+ 
+ 	l5 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <1800000>;
+ 	};
+ 
+ 	l6 {
+@@ -485,60 +485,68 @@
+ 	};
+ 
+ 	l7 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <1800000>;
+ 	};
+ 
+ 	l8 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <2900000>;
++		regulator-max-microvolt = <2900000>;
+ 	};
+ 
+ 	l9 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
+ 	};
+ 
+ 	l10 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <2800000>;
++		regulator-max-microvolt = <2800000>;
+ 	};
+ 
+ 	l11 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <2950000>;
++		regulator-max-microvolt = <2950000>;
+ 		regulator-allow-set-load;
+ 		regulator-system-load = <200000>;
+ 	};
+ 
+ 	l12 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <2950000>;
+ 	};
+ 
+ 	l13 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <3075000>;
++		regulator-max-microvolt = <3075000>;
+ 	};
+ 
+ 	l14 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <3300000>;
+ 	};
+ 
+-	/**
+-	 * 1.8v required on LS expansion
+-	 * for mezzanine boards
++	/*
++	 * The 96Boards specification expects a 1.8V power rail on the low-speed
++	 * expansion connector that is able to provide at least 0.18W / 100 mA.
++	 * L15/L16 are connected in parallel to provide 55 mA each. A minimum load
++	 * must be specified to ensure the regulators are not put in LPM where they
++	 * would only provide 5 mA.
+ 	 */
+ 	l15 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <1800000>;
++		regulator-system-load = <50000>;
++		regulator-allow-set-load;
+ 		regulator-always-on;
+ 	};
+ 
+ 	l16 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <1800000>;
++		regulator-system-load = <50000>;
++		regulator-allow-set-load;
++		regulator-always-on;
+ 	};
+ 
+ 	l17 {
+@@ -547,8 +555,8 @@
+ 	};
+ 
+ 	l18 {
+-		regulator-min-microvolt = <1750000>;
+-		regulator-max-microvolt = <3337000>;
++		regulator-min-microvolt = <2700000>;
++		regulator-max-microvolt = <2700000>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts b/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts
+index 71e0a500599c8..ed2e2f6c6775a 100644
+--- a/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts
++++ b/arch/arm64/boot/dts/qcom/apq8096-ifc6640.dts
+@@ -26,7 +26,7 @@
+ 
+ 	v1p05: v1p05-regulator {
+ 		compatible = "regulator-fixed";
+-		reglator-name = "v1p05";
++		regulator-name = "v1p05";
+ 		regulator-always-on;
+ 		regulator-boot-on;
+ 
+@@ -38,7 +38,7 @@
+ 
+ 	v12_poe: v12-poe-regulator {
+ 		compatible = "regulator-fixed";
+-		reglator-name = "v12_poe";
++		regulator-name = "v12_poe";
+ 		regulator-always-on;
+ 		regulator-boot-on;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/ipq6018.dtsi b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+index f531797f26195..c58eeb4376abe 100644
+--- a/arch/arm64/boot/dts/qcom/ipq6018.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq6018.dtsi
+@@ -302,7 +302,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		prng: qrng@e1000 {
++		prng: qrng@e3000 {
+ 			compatible = "qcom,prng-ee";
+ 			reg = <0x0 0x000e3000 0x0 0x1000>;
+ 			clocks = <&gcc GCC_PRNG_AHB_CLK>;
+diff --git a/arch/arm64/boot/dts/qcom/ipq9574.dtsi b/arch/arm64/boot/dts/qcom/ipq9574.dtsi
+index 0ed19fbf7d87d..6e3a88ee06152 100644
+--- a/arch/arm64/boot/dts/qcom/ipq9574.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq9574.dtsi
+@@ -173,14 +173,14 @@
+ 		intc: interrupt-controller@b000000 {
+ 			compatible = "qcom,msm-qgic2";
+ 			reg = <0x0b000000 0x1000>,  /* GICD */
+-			      <0x0b002000 0x1000>,  /* GICC */
++			      <0x0b002000 0x2000>,  /* GICC */
+ 			      <0x0b001000 0x1000>,  /* GICH */
+-			      <0x0b004000 0x1000>;  /* GICV */
++			      <0x0b004000 0x2000>;  /* GICV */
+ 			#address-cells = <1>;
+ 			#size-cells = <1>;
+ 			interrupt-controller;
+ 			#interrupt-cells = <3>;
+-			interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_PPI 9 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>;
+ 			ranges = <0 0x0b00c000 0x3000>;
+ 
+ 			v2m0: v2m@0 {
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index 834e0b66b7f2e..bf88c10ff55b0 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -1162,7 +1162,7 @@
+ 			};
+ 		};
+ 
+-		camss: camss@1b00000 {
++		camss: camss@1b0ac00 {
+ 			compatible = "qcom,msm8916-camss";
+ 			reg = <0x01b0ac00 0x200>,
+ 				<0x01b00030 0x4>,
+@@ -1554,7 +1554,7 @@
+ 			#sound-dai-cells = <1>;
+ 		};
+ 
+-		sdhc_1: mmc@7824000 {
++		sdhc_1: mmc@7824900 {
+ 			compatible = "qcom,msm8916-sdhci", "qcom,sdhci-msm-v4";
+ 			reg = <0x07824900 0x11c>, <0x07824000 0x800>;
+ 			reg-names = "hc", "core";
+@@ -1572,7 +1572,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		sdhc_2: mmc@7864000 {
++		sdhc_2: mmc@7864900 {
+ 			compatible = "qcom,msm8916-sdhci", "qcom,sdhci-msm-v4";
+ 			reg = <0x07864900 0x11c>, <0x07864000 0x800>;
+ 			reg-names = "hc", "core";
+@@ -1871,7 +1871,7 @@
+ 			};
+ 		};
+ 
+-		wcnss: remoteproc@a21b000 {
++		wcnss: remoteproc@a204000 {
+ 			compatible = "qcom,pronto-v2-pil", "qcom,pronto";
+ 			reg = <0x0a204000 0x2000>, <0x0a202000 0x1000>, <0x0a21b000 0x3000>;
+ 			reg-names = "ccu", "dxe", "pmu";
+diff --git a/arch/arm64/boot/dts/qcom/msm8953.dtsi b/arch/arm64/boot/dts/qcom/msm8953.dtsi
+index d44cfa0471e9a..d1d6f80bb2e6b 100644
+--- a/arch/arm64/boot/dts/qcom/msm8953.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8953.dtsi
+@@ -1002,7 +1002,7 @@
+ 			};
+ 		};
+ 
+-		apps_iommu: iommu@1e00000 {
++		apps_iommu: iommu@1e20000 {
+ 			compatible = "qcom,msm8953-iommu", "qcom,msm-iommu-v1";
+ 			ranges  = <0 0x01e20000 0x20000>;
+ 
+@@ -1425,7 +1425,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		wcnss: remoteproc@a21b000 {
++		wcnss: remoteproc@a204000 {
+ 			compatible = "qcom,pronto-v3-pil", "qcom,pronto";
+ 			reg = <0x0a204000 0x2000>, <0x0a202000 0x1000>, <0x0a21b000 0x3000>;
+ 			reg-names = "ccu", "dxe", "pmu";
+diff --git a/arch/arm64/boot/dts/qcom/msm8976.dtsi b/arch/arm64/boot/dts/qcom/msm8976.dtsi
+index f47fb8ea71e20..753b9a2105edd 100644
+--- a/arch/arm64/boot/dts/qcom/msm8976.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8976.dtsi
+@@ -822,7 +822,7 @@
+ 			#interrupt-cells = <4>;
+ 		};
+ 
+-		sdhc_1: mmc@7824000 {
++		sdhc_1: mmc@7824900 {
+ 			compatible = "qcom,msm8976-sdhci", "qcom,sdhci-msm-v4";
+ 			reg = <0x07824900 0x500>, <0x07824000 0x800>;
+ 			reg-names = "hc", "core";
+@@ -838,7 +838,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		sdhc_2: mmc@7864000 {
++		sdhc_2: mmc@7864900 {
+ 			compatible = "qcom,msm8976-sdhci", "qcom,sdhci-msm-v4";
+ 			reg = <0x07864900 0x11c>, <0x07864000 0x800>;
+ 			reg-names = "hc", "core";
+@@ -957,7 +957,7 @@
+ 			#reset-cells = <1>;
+ 		};
+ 
+-		sdhc_3: mmc@7a24000 {
++		sdhc_3: mmc@7a24900 {
+ 			compatible = "qcom,msm8976-sdhci", "qcom,sdhci-msm-v4";
+ 			reg = <0x07a24900 0x11c>, <0x07a24000 0x800>;
+ 			reg-names = "hc", "core";
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index bdc3f2ba1755e..c5cf01c7f72e1 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -747,7 +747,7 @@
+ 			reg = <0xfc4ab000 0x4>;
+ 		};
+ 
+-		spmi_bus: spmi@fc4c0000 {
++		spmi_bus: spmi@fc4cf000 {
+ 			compatible = "qcom,spmi-pmic-arb";
+ 			reg = <0xfc4cf000 0x1000>,
+ 			      <0xfc4cb000 0x1000>,
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 30257c07e1279..25fe2b8552fc7 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -2069,7 +2069,7 @@
+ 			};
+ 		};
+ 
+-		camss: camss@a00000 {
++		camss: camss@a34000 {
+ 			compatible = "qcom,msm8996-camss";
+ 			reg = <0x00a34000 0x1000>,
+ 			      <0x00a00030 0x4>,
+diff --git a/arch/arm64/boot/dts/qcom/pm7250b.dtsi b/arch/arm64/boot/dts/qcom/pm7250b.dtsi
+index d709d955a2f5a..daa6f1d30efa0 100644
+--- a/arch/arm64/boot/dts/qcom/pm7250b.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm7250b.dtsi
+@@ -3,6 +3,7 @@
+  * Copyright (C) 2022 Luca Weiss <luca.weiss@fairphone.com>
+  */
+ 
++#include <dt-bindings/iio/qcom,spmi-vadc.h>
+ #include <dt-bindings/interrupt-controller/irq.h>
+ #include <dt-bindings/spmi/spmi.h>
+ 
+diff --git a/arch/arm64/boot/dts/qcom/pm8998.dtsi b/arch/arm64/boot/dts/qcom/pm8998.dtsi
+index 340033ac31860..695d79116cde2 100644
+--- a/arch/arm64/boot/dts/qcom/pm8998.dtsi
++++ b/arch/arm64/boot/dts/qcom/pm8998.dtsi
+@@ -55,7 +55,7 @@
+ 
+ 			pm8998_resin: resin {
+ 				compatible = "qcom,pm8941-resin";
+-				interrupts = <GIC_SPI 0x8 1 IRQ_TYPE_EDGE_BOTH>;
++				interrupts = <0x0 0x8 1 IRQ_TYPE_EDGE_BOTH>;
+ 				debounce = <15625>;
+ 				bias-pull-up;
+ 				status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/qdu1000.dtsi b/arch/arm64/boot/dts/qcom/qdu1000.dtsi
+index fb553f0bb17aa..6a6830777d8a8 100644
+--- a/arch/arm64/boot/dts/qcom/qdu1000.dtsi
++++ b/arch/arm64/boot/dts/qcom/qdu1000.dtsi
+@@ -1252,6 +1252,7 @@
+ 			qcom,tcs-config = <ACTIVE_TCS    2>, <SLEEP_TCS     3>,
+ 					  <WAKE_TCS      3>, <CONTROL_TCS   0>;
+ 			label = "apps_rsc";
++			power-domains = <&CLUSTER_PD>;
+ 
+ 			apps_bcm_voter: bcm-voter {
+ 				compatible = "qcom,bcm-voter";
+diff --git a/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts b/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
+index dc80f0bca7676..5554b3b9aaf32 100644
+--- a/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
++++ b/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
+@@ -199,7 +199,8 @@
+ };
+ 
+ &sdhc_2 {
+-	cd-gpios = <&tlmm 88 GPIO_ACTIVE_HIGH>; /* card detect gpio */
++	cd-gpios = <&tlmm 88 GPIO_ACTIVE_LOW>; /* card detect gpio */
++
+ 	vmmc-supply = <&vreg_l22a_2p96>;
+ 	vqmmc-supply = <&vreg_l5a_2p96>;
+ 	no-sdio;
+diff --git a/arch/arm64/boot/dts/qcom/sdm630.dtsi b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+index eaead2f7beb4e..ab04903fa3ff3 100644
+--- a/arch/arm64/boot/dts/qcom/sdm630.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm630.dtsi
+@@ -1894,7 +1894,7 @@
+ 			};
+ 		};
+ 
+-		camss: camss@ca00000 {
++		camss: camss@ca00020 {
+ 			compatible = "qcom,sdm660-camss";
+ 			reg = <0x0ca00020 0x10>,
+ 			      <0x0ca30000 0x100>,
+diff --git a/arch/arm64/boot/dts/qcom/sdm670.dtsi b/arch/arm64/boot/dts/qcom/sdm670.dtsi
+index b61e13db89bd5..a1c207c0266da 100644
+--- a/arch/arm64/boot/dts/qcom/sdm670.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm670.dtsi
+@@ -1282,6 +1282,7 @@
+ 					  <SLEEP_TCS   3>,
+ 					  <WAKE_TCS    3>,
+ 					  <CONTROL_TCS 1>;
++			power-domains = <&CLUSTER_PD>;
+ 
+ 			apps_bcm_voter: bcm-voter {
+ 				compatible = "qcom,bcm-voter";
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts
+index 8ae0ffccaab22..576f0421824f4 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-polaris.dts
+@@ -483,6 +483,7 @@
+ 		};
+ 
+ 		rmi4-f12@12 {
++			reg = <0x12>;
+ 			syna,rezero-wait-ms = <0xc8>;
+ 			syna,clip-x-high = <0x438>;
+ 			syna,clip-y-high = <0x870>;
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index cdeb05e95674e..1bfb938e284fb 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -4238,7 +4238,7 @@
+ 			#reset-cells = <1>;
+ 		};
+ 
+-		camss: camss@a00000 {
++		camss: camss@acb3000 {
+ 			compatible = "qcom,sdm845-camss";
+ 
+ 			reg = <0 0x0acb3000 0 0x1000>,
+@@ -5137,6 +5137,7 @@
+ 					  <SLEEP_TCS   3>,
+ 					  <WAKE_TCS    3>,
+ 					  <CONTROL_TCS 1>;
++			power-domains = <&CLUSTER_PD>;
+ 
+ 			apps_bcm_voter: bcm-voter {
+ 				compatible = "qcom,bcm-voter";
+diff --git a/arch/arm64/boot/dts/qcom/sm6115.dtsi b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+index 43f31c1b9d5a7..ea71249bbdf3f 100644
+--- a/arch/arm64/boot/dts/qcom/sm6115.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+@@ -700,7 +700,7 @@
+ 			#interrupt-cells = <4>;
+ 		};
+ 
+-		tsens0: thermal-sensor@4410000 {
++		tsens0: thermal-sensor@4411000 {
+ 			compatible = "qcom,sm6115-tsens", "qcom,tsens-v2";
+ 			reg = <0x0 0x04411000 0x0 0x1ff>, /* TM */
+ 			      <0x0 0x04410000 0x0 0x8>; /* SROT */
+diff --git a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi
+index 2f22d348d45d7..dcabb714f0f35 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi
+@@ -26,9 +26,10 @@
+ 		framebuffer: framebuffer@9c000000 {
+ 			compatible = "simple-framebuffer";
+ 			reg = <0 0x9c000000 0 0x2300000>;
+-			width = <1644>;
+-			height = <3840>;
+-			stride = <(1644 * 4)>;
++			/* pdx203 BL initializes in 2.5k mode, not 4k */
++			width = <1096>;
++			height = <2560>;
++			stride = <(1096 * 4)>;
+ 			format = "a8r8g8b8";
+ 			/*
+ 			 * That's a lot of clocks, but it's necessary due
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 3efdc03ed0f11..425af2c38a37f 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -907,7 +907,7 @@
+ 			};
+ 		};
+ 
+-		gpi_dma0: dma-controller@900000 {
++		gpi_dma0: dma-controller@9800000 {
+ 			compatible = "qcom,sm8350-gpi-dma", "qcom,sm6350-gpi-dma";
+ 			reg = <0 0x09800000 0 0x60000>;
+ 			interrupts = <GIC_SPI 244 IRQ_TYPE_LEVEL_HIGH>,
+@@ -1638,7 +1638,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		pcie1_phy: phy@1c0f000 {
++		pcie1_phy: phy@1c0e000 {
+ 			compatible = "qcom,sm8350-qmp-gen3x2-pcie-phy";
+ 			reg = <0 0x01c0e000 0 0x2000>;
+ 			clocks = <&gcc GCC_PCIE_1_AUX_CLK>,
+@@ -2140,7 +2140,7 @@
+ 			resets = <&gcc GCC_QUSB2PHY_SEC_BCR>;
+ 		};
+ 
+-		usb_1_qmpphy: phy@88e9000 {
++		usb_1_qmpphy: phy@88e8000 {
+ 			compatible = "qcom,sm8350-qmp-usb3-dp-phy";
+ 			reg = <0 0x088e8000 0 0x3000>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+index 558cbc4307080..d2b404736a8e4 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+@@ -1858,7 +1858,7 @@
+ 				 <&apps_smmu 0x481 0x0>;
+ 		};
+ 
+-		crypto: crypto@1de0000 {
++		crypto: crypto@1dfa000 {
+ 			compatible = "qcom,sm8550-qce", "qcom,sm8150-qce", "qcom,qce";
+ 			reg = <0x0 0x01dfa000 0x0 0x6000>;
+ 			dmas = <&cryptobam 4>, <&cryptobam 5>;
+@@ -2769,6 +2769,10 @@
+ 
+ 			resets = <&gcc GCC_USB30_PRIM_BCR>;
+ 
++			interconnects = <&aggre1_noc MASTER_USB3_0 0 &mc_virt SLAVE_EBI1 0>,
++					<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_USB3_0 0>;
++			interconnect-names = "usb-ddr", "apps-usb";
++
+ 			status = "disabled";
+ 
+ 			usb_1_dwc3: usb@a600000 {
+@@ -2883,7 +2887,7 @@
+ 			#interrupt-cells = <4>;
+ 		};
+ 
+-		tlmm: pinctrl@f000000 {
++		tlmm: pinctrl@f100000 {
+ 			compatible = "qcom,sm8550-tlmm";
+ 			reg = <0 0x0f100000 0 0x300000>;
+ 			interrupts = <GIC_SPI 208 IRQ_TYPE_LEVEL_HIGH>;
+@@ -3597,6 +3601,7 @@
+ 			qcom,drv-id = <2>;
+ 			qcom,tcs-config = <ACTIVE_TCS    3>, <SLEEP_TCS     2>,
+ 					  <WAKE_TCS      2>, <CONTROL_TCS   0>;
++			power-domains = <&CLUSTER_PD>;
+ 
+ 			apps_bcm_voter: bcm-voter {
+ 				compatible = "qcom,bcm-voter";
+diff --git a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
+index efc80960380f4..c78b7a5c2e2aa 100644
+--- a/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
++++ b/arch/arm64/boot/dts/renesas/ulcb-kf.dtsi
+@@ -367,7 +367,7 @@
+ 	};
+ 
+ 	scif1_pins: scif1 {
+-		groups = "scif1_data_b", "scif1_ctrl";
++		groups = "scif1_data_b";
+ 		function = "scif1";
+ 	};
+ 
+@@ -397,7 +397,6 @@
+ &scif1 {
+ 	pinctrl-0 = <&scif1_pins>;
+ 	pinctrl-names = "default";
+-	uart-has-rtscts;
+ 
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-anbernic-rgxx3.dtsi b/arch/arm64/boot/dts/rockchip/rk3566-anbernic-rgxx3.dtsi
+index 8fadd8afb1906..ad43fa199ca55 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-anbernic-rgxx3.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3566-anbernic-rgxx3.dtsi
+@@ -716,7 +716,7 @@
+ 	status = "okay";
+ 
+ 	bluetooth {
+-		compatible = "realtek,rtl8821cs-bt", "realtek,rtl8822cs-bt";
++		compatible = "realtek,rtl8821cs-bt", "realtek,rtl8723bs-bt";
+ 		device-wake-gpios = <&gpio4 4 GPIO_ACTIVE_HIGH>;
+ 		enable-gpios = <&gpio4 3 GPIO_ACTIVE_HIGH>;
+ 		host-wake-gpios = <&gpio4 5 GPIO_ACTIVE_HIGH>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-rock-5b.dts b/arch/arm64/boot/dts/rockchip/rk3588-rock-5b.dts
+index 3e4aee8f70c1b..30cdd366813fb 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-rock-5b.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588-rock-5b.dts
+@@ -133,6 +133,8 @@
+ 		reg = <0x11>;
+ 		clocks = <&cru I2S0_8CH_MCLKOUT>;
+ 		clock-names = "mclk";
++		assigned-clocks = <&cru I2S0_8CH_MCLKOUT>;
++		assigned-clock-rates = <12288000>;
+ 		#sound-dai-cells = <0>;
+ 
+ 		port {
+diff --git a/arch/arm64/boot/dts/ti/k3-am69-sk.dts b/arch/arm64/boot/dts/ti/k3-am69-sk.dts
+index bc49ba534790e..f364b7803115d 100644
+--- a/arch/arm64/boot/dts/ti/k3-am69-sk.dts
++++ b/arch/arm64/boot/dts/ti/k3-am69-sk.dts
+@@ -23,7 +23,7 @@
+ 	aliases {
+ 		serial2 = &main_uart8;
+ 		mmc1 = &main_sdhci1;
+-		i2c0 = &main_i2c0;
++		i2c3 = &main_i2c0;
+ 	};
+ 
+ 	memory@80000000 {
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+index 0d39d6b8cc0ca..63633e4f6c59f 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+@@ -83,25 +83,25 @@
+ &wkup_pmx2 {
+ 	mcu_cpsw_pins_default: mcu-cpsw-pins-default {
+ 		pinctrl-single,pins = <
+-			J721E_WKUP_IOPAD(0x0068, PIN_OUTPUT, 0) /* MCU_RGMII1_TX_CTL */
+-			J721E_WKUP_IOPAD(0x006c, PIN_INPUT, 0) /* MCU_RGMII1_RX_CTL */
+-			J721E_WKUP_IOPAD(0x0070, PIN_OUTPUT, 0) /* MCU_RGMII1_TD3 */
+-			J721E_WKUP_IOPAD(0x0074, PIN_OUTPUT, 0) /* MCU_RGMII1_TD2 */
+-			J721E_WKUP_IOPAD(0x0078, PIN_OUTPUT, 0) /* MCU_RGMII1_TD1 */
+-			J721E_WKUP_IOPAD(0x007c, PIN_OUTPUT, 0) /* MCU_RGMII1_TD0 */
+-			J721E_WKUP_IOPAD(0x0088, PIN_INPUT, 0) /* MCU_RGMII1_RD3 */
+-			J721E_WKUP_IOPAD(0x008c, PIN_INPUT, 0) /* MCU_RGMII1_RD2 */
+-			J721E_WKUP_IOPAD(0x0090, PIN_INPUT, 0) /* MCU_RGMII1_RD1 */
+-			J721E_WKUP_IOPAD(0x0094, PIN_INPUT, 0) /* MCU_RGMII1_RD0 */
+-			J721E_WKUP_IOPAD(0x0080, PIN_OUTPUT, 0) /* MCU_RGMII1_TXC */
+-			J721E_WKUP_IOPAD(0x0084, PIN_INPUT, 0) /* MCU_RGMII1_RXC */
++			J721E_WKUP_IOPAD(0x0000, PIN_OUTPUT, 0) /* MCU_RGMII1_TX_CTL */
++			J721E_WKUP_IOPAD(0x0004, PIN_INPUT, 0) /* MCU_RGMII1_RX_CTL */
++			J721E_WKUP_IOPAD(0x0008, PIN_OUTPUT, 0) /* MCU_RGMII1_TD3 */
++			J721E_WKUP_IOPAD(0x000c, PIN_OUTPUT, 0) /* MCU_RGMII1_TD2 */
++			J721E_WKUP_IOPAD(0x0010, PIN_OUTPUT, 0) /* MCU_RGMII1_TD1 */
++			J721E_WKUP_IOPAD(0x0014, PIN_OUTPUT, 0) /* MCU_RGMII1_TD0 */
++			J721E_WKUP_IOPAD(0x0020, PIN_INPUT, 0) /* MCU_RGMII1_RD3 */
++			J721E_WKUP_IOPAD(0x0024, PIN_INPUT, 0) /* MCU_RGMII1_RD2 */
++			J721E_WKUP_IOPAD(0x0028, PIN_INPUT, 0) /* MCU_RGMII1_RD1 */
++			J721E_WKUP_IOPAD(0x002c, PIN_INPUT, 0) /* MCU_RGMII1_RD0 */
++			J721E_WKUP_IOPAD(0x0018, PIN_OUTPUT, 0) /* MCU_RGMII1_TXC */
++			J721E_WKUP_IOPAD(0x001c, PIN_INPUT, 0) /* MCU_RGMII1_RXC */
+ 		>;
+ 	};
+ 
+ 	mcu_mdio_pins_default: mcu-mdio1-pins-default {
+ 		pinctrl-single,pins = <
+-			J721E_WKUP_IOPAD(0x009c, PIN_OUTPUT, 0) /* (L1) MCU_MDIO0_MDC */
+-			J721E_WKUP_IOPAD(0x0098, PIN_INPUT, 0) /* (L4) MCU_MDIO0_MDIO */
++			J721E_WKUP_IOPAD(0x0034, PIN_OUTPUT, 0) /* (L1) MCU_MDIO0_MDC */
++			J721E_WKUP_IOPAD(0x0030, PIN_INPUT, 0) /* (L4) MCU_MDIO0_MDIO */
+ 		>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts b/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts
+index 37c24b077b6aa..8a62ac263b89a 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts
++++ b/arch/arm64/boot/dts/ti/k3-j721e-beagleboneai64.dts
+@@ -936,6 +936,7 @@
+ };
+ 
+ &mailbox0_cluster0 {
++	status = "okay";
+ 	interrupts = <436>;
+ 
+ 	mbox_mcu_r5fss0_core0: mbox-mcu-r5fss0-core0 {
+@@ -950,6 +951,7 @@
+ };
+ 
+ &mailbox0_cluster1 {
++	status = "okay";
+ 	interrupts = <432>;
+ 
+ 	mbox_main_r5fss0_core0: mbox-main-r5fss0-core0 {
+@@ -964,6 +966,7 @@
+ };
+ 
+ &mailbox0_cluster2 {
++	status = "okay";
+ 	interrupts = <428>;
+ 
+ 	mbox_main_r5fss1_core0: mbox-main-r5fss1-core0 {
+@@ -978,6 +981,7 @@
+ };
+ 
+ &mailbox0_cluster3 {
++	status = "okay";
+ 	interrupts = <424>;
+ 
+ 	mbox_c66_0: mbox-c66-0 {
+@@ -992,6 +996,7 @@
+ };
+ 
+ &mailbox0_cluster4 {
++	status = "okay";
+ 	interrupts = <420>;
+ 
+ 	mbox_c71_0: mbox-c71-0 {
+diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts b/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts
+index f33815953e779..34e9bc89ac663 100644
+--- a/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts
++++ b/arch/arm64/boot/dts/ti/k3-j784s4-evm.dts
+@@ -23,7 +23,7 @@
+ 		serial2 = &main_uart8;
+ 		mmc0 = &main_sdhci0;
+ 		mmc1 = &main_sdhci1;
+-		i2c0 = &main_i2c0;
++		i2c3 = &main_i2c0;
+ 	};
+ 
+ 	memory@80000000 {
+@@ -141,28 +141,28 @@
+ 	};
+ };
+ 
+-&wkup_pmx0 {
++&wkup_pmx2 {
+ 	mcu_cpsw_pins_default: mcu-cpsw-pins-default {
+ 		pinctrl-single,pins = <
+-			J784S4_WKUP_IOPAD(0x094, PIN_INPUT, 0) /* (A35) MCU_RGMII1_RD0 */
+-			J784S4_WKUP_IOPAD(0x090, PIN_INPUT, 0) /* (B36) MCU_RGMII1_RD1 */
+-			J784S4_WKUP_IOPAD(0x08c, PIN_INPUT, 0) /* (C36) MCU_RGMII1_RD2 */
+-			J784S4_WKUP_IOPAD(0x088, PIN_INPUT, 0) /* (D36) MCU_RGMII1_RD3 */
+-			J784S4_WKUP_IOPAD(0x084, PIN_INPUT, 0) /* (B37) MCU_RGMII1_RXC */
+-			J784S4_WKUP_IOPAD(0x06c, PIN_INPUT, 0) /* (C37) MCU_RGMII1_RX_CTL */
+-			J784S4_WKUP_IOPAD(0x07c, PIN_OUTPUT, 0) /* (D37) MCU_RGMII1_TD0 */
+-			J784S4_WKUP_IOPAD(0x078, PIN_OUTPUT, 0) /* (D38) MCU_RGMII1_TD1 */
+-			J784S4_WKUP_IOPAD(0x074, PIN_OUTPUT, 0) /* (E37) MCU_RGMII1_TD2 */
+-			J784S4_WKUP_IOPAD(0x070, PIN_OUTPUT, 0) /* (E38) MCU_RGMII1_TD3 */
+-			J784S4_WKUP_IOPAD(0x080, PIN_OUTPUT, 0) /* (E36) MCU_RGMII1_TXC */
+-			J784S4_WKUP_IOPAD(0x068, PIN_OUTPUT, 0) /* (C38) MCU_RGMII1_TX_CTL */
++			J784S4_WKUP_IOPAD(0x02c, PIN_INPUT, 0) /* (A35) MCU_RGMII1_RD0 */
++			J784S4_WKUP_IOPAD(0x028, PIN_INPUT, 0) /* (B36) MCU_RGMII1_RD1 */
++			J784S4_WKUP_IOPAD(0x024, PIN_INPUT, 0) /* (C36) MCU_RGMII1_RD2 */
++			J784S4_WKUP_IOPAD(0x020, PIN_INPUT, 0) /* (D36) MCU_RGMII1_RD3 */
++			J784S4_WKUP_IOPAD(0x01c, PIN_INPUT, 0) /* (B37) MCU_RGMII1_RXC */
++			J784S4_WKUP_IOPAD(0x004, PIN_INPUT, 0) /* (C37) MCU_RGMII1_RX_CTL */
++			J784S4_WKUP_IOPAD(0x014, PIN_OUTPUT, 0) /* (D37) MCU_RGMII1_TD0 */
++			J784S4_WKUP_IOPAD(0x010, PIN_OUTPUT, 0) /* (D38) MCU_RGMII1_TD1 */
++			J784S4_WKUP_IOPAD(0x00c, PIN_OUTPUT, 0) /* (E37) MCU_RGMII1_TD2 */
++			J784S4_WKUP_IOPAD(0x008, PIN_OUTPUT, 0) /* (E38) MCU_RGMII1_TD3 */
++			J784S4_WKUP_IOPAD(0x018, PIN_OUTPUT, 0) /* (E36) MCU_RGMII1_TXC */
++			J784S4_WKUP_IOPAD(0x000, PIN_OUTPUT, 0) /* (C38) MCU_RGMII1_TX_CTL */
+ 		>;
+ 	};
+ 
+ 	mcu_mdio_pins_default: mcu-mdio-pins-default {
+ 		pinctrl-single,pins = <
+-			J784S4_WKUP_IOPAD(0x09c, PIN_OUTPUT, 0) /* (A36) MCU_MDIO0_MDC */
+-			J784S4_WKUP_IOPAD(0x098, PIN_INPUT, 0) /* (B35) MCU_MDIO0_MDIO */
++			J784S4_WKUP_IOPAD(0x034, PIN_OUTPUT, 0) /* (A36) MCU_MDIO0_MDC */
++			J784S4_WKUP_IOPAD(0x030, PIN_INPUT, 0) /* (B35) MCU_MDIO0_MDIO */
+ 		>;
+ 	};
+ };
+diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi
+index f04fcb614cbe4..ed2b40369c59a 100644
+--- a/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j784s4-mcu-wakeup.dtsi
+@@ -50,7 +50,34 @@
+ 	wkup_pmx0: pinctrl@4301c000 {
+ 		compatible = "pinctrl-single";
+ 		/* Proxy 0 addressing */
+-		reg = <0x00 0x4301c000 0x00 0x178>;
++		reg = <0x00 0x4301c000 0x00 0x034>;
++		#pinctrl-cells = <1>;
++		pinctrl-single,register-width = <32>;
++		pinctrl-single,function-mask = <0xffffffff>;
++	};
++
++	wkup_pmx1: pinctrl@4301c038 {
++		compatible = "pinctrl-single";
++		/* Proxy 0 addressing */
++		reg = <0x00 0x4301c038 0x00 0x02c>;
++		#pinctrl-cells = <1>;
++		pinctrl-single,register-width = <32>;
++		pinctrl-single,function-mask = <0xffffffff>;
++	};
++
++	wkup_pmx2: pinctrl@4301c068 {
++		compatible = "pinctrl-single";
++		/* Proxy 0 addressing */
++		reg = <0x00 0x4301c068 0x00 0x120>;
++		#pinctrl-cells = <1>;
++		pinctrl-single,register-width = <32>;
++		pinctrl-single,function-mask = <0xffffffff>;
++	};
++
++	wkup_pmx3: pinctrl@4301c190 {
++		compatible = "pinctrl-single";
++		/* Proxy 0 addressing */
++		reg = <0x00 0x4301c190 0x00 0x004>;
+ 		#pinctrl-cells = <1>;
+ 		pinctrl-single,register-width = <32>;
+ 		pinctrl-single,function-mask = <0xffffffff>;
+diff --git a/arch/arm64/include/asm/fpsimdmacros.h b/arch/arm64/include/asm/fpsimdmacros.h
+index cd03819a3b686..cdf6a35e39944 100644
+--- a/arch/arm64/include/asm/fpsimdmacros.h
++++ b/arch/arm64/include/asm/fpsimdmacros.h
+@@ -316,12 +316,12 @@
+  _for n, 0, 15,	_sve_str_p	\n, \nxbase, \n - 16
+ 		cbz		\save_ffr, 921f
+ 		_sve_rdffr	0
+-		_sve_str_p	0, \nxbase
+-		_sve_ldr_p	0, \nxbase, -16
+ 		b		922f
+ 921:
+-		str		xzr, [x\nxbase]		// Zero out FFR
++		_sve_pfalse	0			// Zero out FFR
+ 922:
++		_sve_str_p	0, \nxbase
++		_sve_ldr_p	0, \nxbase, -16
+ 		mrs		x\nxtmp, fpsr
+ 		str		w\nxtmp, [\xpfpsr]
+ 		mrs		x\nxtmp, fpcr
+diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
+index 2cfc810d0a5b1..10b407672c427 100644
+--- a/arch/arm64/kernel/signal.c
++++ b/arch/arm64/kernel/signal.c
+@@ -398,7 +398,7 @@ static int restore_tpidr2_context(struct user_ctxs *user)
+ 
+ 	__get_user_error(tpidr2_el0, &user->tpidr2->tpidr2, err);
+ 	if (!err)
+-		current->thread.tpidr2_el0 = tpidr2_el0;
++		write_sysreg_s(tpidr2_el0, SYS_TPIDR2_EL0);
+ 
+ 	return err;
+ }
+diff --git a/arch/loongarch/Makefile b/arch/loongarch/Makefile
+index a27e264bdaa5a..63a637fdf6c28 100644
+--- a/arch/loongarch/Makefile
++++ b/arch/loongarch/Makefile
+@@ -107,7 +107,7 @@ KBUILD_CFLAGS += -isystem $(shell $(CC) -print-file-name=include)
+ KBUILD_LDFLAGS	+= -m $(ld-emul)
+ 
+ ifdef CONFIG_LOONGARCH
+-CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \
++CHECKFLAGS += $(shell $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \
+ 	grep -E -vw '__GNUC_(MINOR_|PATCHLEVEL_)?_' | \
+ 	sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/" -e 's/\$$/&&/g')
+ endif
+diff --git a/arch/mips/Makefile b/arch/mips/Makefile
+index a7a4ee66a9d37..ef7b05ae92ceb 100644
+--- a/arch/mips/Makefile
++++ b/arch/mips/Makefile
+@@ -346,7 +346,7 @@ KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
+ KBUILD_LDFLAGS		+= -m $(ld-emul)
+ 
+ ifdef CONFIG_MIPS
+-CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \
++CHECKFLAGS += $(shell $(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \
+ 	grep -E -vw '__GNUC_(MINOR_|PATCHLEVEL_)?_' | \
+ 	sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/" -e 's/\$$/&&/g')
+ endif
+diff --git a/arch/mips/alchemy/devboards/db1000.c b/arch/mips/alchemy/devboards/db1000.c
+index 2c52ee27b4f25..79d66faa84828 100644
+--- a/arch/mips/alchemy/devboards/db1000.c
++++ b/arch/mips/alchemy/devboards/db1000.c
+@@ -381,13 +381,21 @@ static struct platform_device db1100_mmc1_dev = {
+ static struct ads7846_platform_data db1100_touch_pd = {
+ 	.model		= 7846,
+ 	.vref_mv	= 3300,
+-	.gpio_pendown	= 21,
+ };
+ 
+ static struct spi_gpio_platform_data db1100_spictl_pd = {
+ 	.num_chipselect = 1,
+ };
+ 
++static struct gpiod_lookup_table db1100_touch_gpio_table = {
++	.dev_id = "spi0.0",
++	.table = {
++		GPIO_LOOKUP("alchemy-gpio2", 21,
++			    "pendown", GPIO_ACTIVE_LOW),
++		{ }
++	},
++};
++
+ static struct spi_board_info db1100_spi_info[] __initdata = {
+ 	[0] = {
+ 		.modalias	 = "ads7846",
+@@ -474,6 +482,7 @@ int __init db1000_dev_setup(void)
+ 		pfc |= (1 << 0);	/* SSI0 pins as GPIOs */
+ 		alchemy_wrsys(pfc, AU1000_SYS_PINFUNC);
+ 
++		gpiod_add_lookup_table(&db1100_touch_gpio_table);
+ 		spi_register_board_info(db1100_spi_info,
+ 					ARRAY_SIZE(db1100_spi_info));
+ 
+diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
+index 6aaf8dc60610d..2a54fadbeaf51 100644
+--- a/arch/powerpc/Kconfig.debug
++++ b/arch/powerpc/Kconfig.debug
+@@ -240,7 +240,7 @@ config PPC_EARLY_DEBUG_40x
+ 
+ config PPC_EARLY_DEBUG_CPM
+ 	bool "Early serial debugging for Freescale CPM-based serial ports"
+-	depends on SERIAL_CPM
++	depends on SERIAL_CPM=y
+ 	help
+ 	  Select this to enable early debugging for Freescale chips
+ 	  using a CPM-based serial port.  This assumes that the bootwrapper
+diff --git a/arch/powerpc/boot/dts/turris1x.dts b/arch/powerpc/boot/dts/turris1x.dts
+index 6612160c19d59..dff1ea074d9d9 100644
+--- a/arch/powerpc/boot/dts/turris1x.dts
++++ b/arch/powerpc/boot/dts/turris1x.dts
+@@ -476,12 +476,12 @@
+ 		 * channel 1 (but only USB 2.0 subset) to USB 2.0 pins on mPCIe
+ 		 * slot 1 (CN5), channels 2 and 3 to connector P600.
+ 		 *
+-		 * P2020 PCIe Root Port uses 1MB of PCIe MEM and xHCI controller
++		 * P2020 PCIe Root Port does not use PCIe MEM and the xHCI controller
+ 		 * uses 64kB + 8kB of PCIe MEM. No PCIe IO is used or required.
+-		 * So allocate 2MB of PCIe MEM for this PCIe bus.
++		 * So allocate 128kB of PCIe MEM for this PCIe bus.
+ 		 */
+ 		reg = <0 0xffe08000 0 0x1000>;
+-		ranges = <0x02000000 0x0 0xc0000000 0 0xc0000000 0x0 0x00200000>, /* MEM */
++		ranges = <0x02000000 0x0 0xc0000000 0 0xc0000000 0x0 0x00020000>, /* MEM */
+ 			 <0x01000000 0x0 0x00000000 0 0xffc20000 0x0 0x00010000>; /* IO */
+ 
+ 		pcie@0 {
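
The new window size can be checked by hand: the xHCI controller needs
64 KiB + 8 KiB = 72 KiB of MEM space, which does not fit a 64 KiB window,
so the next power-of-two window is allocated: 128 KiB = 0x20000 bytes,
exactly the size cell in the new MEM entry above.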
+diff --git a/arch/powerpc/include/asm/nmi.h b/arch/powerpc/include/asm/nmi.h
+index c3c7adef74de0..43bfd4de868f8 100644
+--- a/arch/powerpc/include/asm/nmi.h
++++ b/arch/powerpc/include/asm/nmi.h
+@@ -5,10 +5,10 @@
+ #ifdef CONFIG_PPC_WATCHDOG
+ extern void arch_touch_nmi_watchdog(void);
+ long soft_nmi_interrupt(struct pt_regs *regs);
+-void watchdog_nmi_set_timeout_pct(u64 pct);
++void watchdog_hardlockup_set_timeout_pct(u64 pct);
+ #else
+ static inline void arch_touch_nmi_watchdog(void) {}
+-static inline void watchdog_nmi_set_timeout_pct(u64 pct) {}
++static inline void watchdog_hardlockup_set_timeout_pct(u64 pct) {}
+ #endif
+ 
+ #ifdef CONFIG_NMI_IPI
+diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
+index e34c72285b4e9..f3fc5fe919d96 100644
+--- a/arch/powerpc/kernel/interrupt.c
++++ b/arch/powerpc/kernel/interrupt.c
+@@ -368,7 +368,6 @@ void preempt_schedule_irq(void);
+ 
+ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs)
+ {
+-	unsigned long flags;
+ 	unsigned long ret = 0;
+ 	unsigned long kuap;
+ 	bool stack_store = read_thread_flags() & _TIF_EMULATE_STACK_STORE;
+@@ -392,7 +391,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs)
+ 
+ 	kuap = kuap_get_and_assert_locked();
+ 
+-	local_irq_save(flags);
++	local_irq_disable();
+ 
+ 	if (!arch_irq_disabled_regs(regs)) {
+ 		/* Returning to a kernel context with local irqs enabled. */
+diff --git a/arch/powerpc/kernel/ppc_save_regs.S b/arch/powerpc/kernel/ppc_save_regs.S
+index 49813f9824681..a9b9c32d0c1ff 100644
+--- a/arch/powerpc/kernel/ppc_save_regs.S
++++ b/arch/powerpc/kernel/ppc_save_regs.S
+@@ -31,10 +31,10 @@ _GLOBAL(ppc_save_regs)
+ 	lbz	r0,PACAIRQSOFTMASK(r13)
+ 	PPC_STL	r0,SOFTE(r3)
+ #endif
+-	/* go up one stack frame for SP */
+-	PPC_LL	r4,0(r1)
+-	PPC_STL	r4,GPR1(r3)
++	/* store current SP */
++	PPC_STL	r1,GPR1(r3)
+ 	/* get caller's LR */
++	PPC_LL	r4,0(r1)
+ 	PPC_LL	r0,LRSAVE(r4)
+ 	PPC_STL	r0,_LINK(r3)
+ 	mflr	r0
+diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
+index c114c7f25645c..7a718ed32b277 100644
+--- a/arch/powerpc/kernel/signal_32.c
++++ b/arch/powerpc/kernel/signal_32.c
+@@ -264,8 +264,9 @@ static void prepare_save_user_regs(int ctx_has_vsx_region)
+ #endif
+ }
+ 
+-static int __unsafe_save_user_regs(struct pt_regs *regs, struct mcontext __user *frame,
+-				   struct mcontext __user *tm_frame, int ctx_has_vsx_region)
++static __always_inline int
++__unsafe_save_user_regs(struct pt_regs *regs, struct mcontext __user *frame,
++			struct mcontext __user *tm_frame, int ctx_has_vsx_region)
+ {
+ 	unsigned long msr = regs->msr;
+ 
+@@ -364,8 +365,9 @@ static void prepare_save_tm_user_regs(void)
+ 		current->thread.ckvrsave = mfspr(SPRN_VRSAVE);
+ }
+ 
+-static int save_tm_user_regs_unsafe(struct pt_regs *regs, struct mcontext __user *frame,
+-				    struct mcontext __user *tm_frame, unsigned long msr)
++static __always_inline int
++save_tm_user_regs_unsafe(struct pt_regs *regs, struct mcontext __user *frame,
++			 struct mcontext __user *tm_frame, unsigned long msr)
+ {
+ 	/* Save both sets of general registers */
+ 	unsafe_save_general_regs(&current->thread.ckpt_regs, frame, failed);
+@@ -444,8 +446,9 @@ failed:
+ #else
+ static void prepare_save_tm_user_regs(void) { }
+ 
+-static int save_tm_user_regs_unsafe(struct pt_regs *regs, struct mcontext __user *frame,
+-				    struct mcontext __user *tm_frame, unsigned long msr)
++static __always_inline int
++save_tm_user_regs_unsafe(struct pt_regs *regs, struct mcontext __user *frame,
++			 struct mcontext __user *tm_frame, unsigned long msr)
+ {
+ 	return 0;
+ }
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 265801a3e94cf..6903a72222732 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -1605,6 +1605,7 @@ static void add_cpu_to_masks(int cpu)
+ }
+ 
+ /* Activate a secondary processor. */
++__no_stack_protector
+ void start_secondary(void *unused)
+ {
+ 	unsigned int cpu = raw_smp_processor_id();
+diff --git a/arch/powerpc/kernel/vdso/Makefile b/arch/powerpc/kernel/vdso/Makefile
+index 4c3f34485f08f..23d3caf27d6d4 100644
+--- a/arch/powerpc/kernel/vdso/Makefile
++++ b/arch/powerpc/kernel/vdso/Makefile
+@@ -54,7 +54,7 @@ KASAN_SANITIZE := n
+ KCSAN_SANITIZE := n
+ 
+ ccflags-y := -fno-common -fno-builtin
+-ldflags-y := -Wl,--hash-style=both -nostdlib -shared -z noexecstack
++ldflags-y := -Wl,--hash-style=both -nostdlib -shared -z noexecstack $(CLANG_FLAGS)
+ ldflags-$(CONFIG_LD_IS_LLD) += $(call cc-option,--ld-path=$(LD),-fuse-ld=lld)
+ # Filter flags that clang will warn are unused for linking
+ ldflags-y += $(filter-out $(CC_AUTO_VAR_INIT_ZERO_ENABLER) $(CC_FLAGS_FTRACE) -Wa$(comma)%, $(KBUILD_CFLAGS))
+diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
+index dbcc4a793f0b9..edb2dd1f53ebc 100644
+--- a/arch/powerpc/kernel/watchdog.c
++++ b/arch/powerpc/kernel/watchdog.c
+@@ -438,7 +438,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
+ {
+ 	int cpu = smp_processor_id();
+ 
+-	if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
++	if (!(watchdog_enabled & WATCHDOG_HARDLOCKUP_ENABLED))
+ 		return HRTIMER_NORESTART;
+ 
+ 	if (!cpumask_test_cpu(cpu, &watchdog_cpumask))
+@@ -479,7 +479,7 @@ static void start_watchdog(void *arg)
+ 		return;
+ 	}
+ 
+-	if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
++	if (!(watchdog_enabled & WATCHDOG_HARDLOCKUP_ENABLED))
+ 		return;
+ 
+ 	if (!cpumask_test_cpu(cpu, &watchdog_cpumask))
+@@ -546,7 +546,7 @@ static void watchdog_calc_timeouts(void)
+ 	wd_timer_period_ms = watchdog_thresh * 1000 * 2 / 5;
+ }
+ 
+-void watchdog_nmi_stop(void)
++void watchdog_hardlockup_stop(void)
+ {
+ 	int cpu;
+ 
+@@ -554,7 +554,7 @@ void watchdog_nmi_stop(void)
+ 		stop_watchdog_on_cpu(cpu);
+ }
+ 
+-void watchdog_nmi_start(void)
++void watchdog_hardlockup_start(void)
+ {
+ 	int cpu;
+ 
+@@ -566,7 +566,7 @@ void watchdog_nmi_start(void)
+ /*
+  * Invoked from core watchdog init.
+  */
+-int __init watchdog_nmi_probe(void)
++int __init watchdog_hardlockup_probe(void)
+ {
+ 	int err;
+ 
+@@ -582,7 +582,7 @@ int __init watchdog_nmi_probe(void)
+ }
+ 
+ #ifdef CONFIG_PPC_PSERIES
+-void watchdog_nmi_set_timeout_pct(u64 pct)
++void watchdog_hardlockup_set_timeout_pct(u64 pct)
+ {
+ 	pr_info("Set the NMI watchdog timeout factor to %llu%%\n", pct);
+ 	WRITE_ONCE(wd_timeout_pct, pct);
+diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
+index 2297aa764ecdb..e8db8c8efe359 100644
+--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
++++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
+@@ -745,9 +745,9 @@ static void free_pud_table(pud_t *pud_start, p4d_t *p4d)
+ }
+ 
+ static void remove_pte_table(pte_t *pte_start, unsigned long addr,
+-			     unsigned long end)
++			     unsigned long end, bool direct)
+ {
+-	unsigned long next;
++	unsigned long next, pages = 0;
+ 	pte_t *pte;
+ 
+ 	pte = pte_start + pte_index(addr);
+@@ -769,13 +769,16 @@ static void remove_pte_table(pte_t *pte_start, unsigned long addr,
+ 		}
+ 
+ 		pte_clear(&init_mm, addr, pte);
++		pages++;
+ 	}
++	if (direct)
++		update_page_count(mmu_virtual_psize, -pages);
+ }
+ 
+ static void __meminit remove_pmd_table(pmd_t *pmd_start, unsigned long addr,
+-			     unsigned long end)
++				       unsigned long end, bool direct)
+ {
+-	unsigned long next;
++	unsigned long next, pages = 0;
+ 	pte_t *pte_base;
+ 	pmd_t *pmd;
+ 
+@@ -793,19 +796,22 @@ static void __meminit remove_pmd_table(pmd_t *pmd_start, unsigned long addr,
+ 				continue;
+ 			}
+ 			pte_clear(&init_mm, addr, (pte_t *)pmd);
++			pages++;
+ 			continue;
+ 		}
+ 
+ 		pte_base = (pte_t *)pmd_page_vaddr(*pmd);
+-		remove_pte_table(pte_base, addr, next);
++		remove_pte_table(pte_base, addr, next, direct);
+ 		free_pte_table(pte_base, pmd);
+ 	}
++	if (direct)
++		update_page_count(MMU_PAGE_2M, -pages);
+ }
+ 
+ static void __meminit remove_pud_table(pud_t *pud_start, unsigned long addr,
+-			     unsigned long end)
++				       unsigned long end, bool direct)
+ {
+-	unsigned long next;
++	unsigned long next, pages = 0;
+ 	pmd_t *pmd_base;
+ 	pud_t *pud;
+ 
+@@ -823,16 +829,20 @@ static void __meminit remove_pud_table(pud_t *pud_start, unsigned long addr,
+ 				continue;
+ 			}
+ 			pte_clear(&init_mm, addr, (pte_t *)pud);
++			pages++;
+ 			continue;
+ 		}
+ 
+ 		pmd_base = pud_pgtable(*pud);
+-		remove_pmd_table(pmd_base, addr, next);
++		remove_pmd_table(pmd_base, addr, next, direct);
+ 		free_pmd_table(pmd_base, pud);
+ 	}
++	if (direct)
++		update_page_count(MMU_PAGE_1G, -pages);
+ }
+ 
+-static void __meminit remove_pagetable(unsigned long start, unsigned long end)
++static void __meminit remove_pagetable(unsigned long start, unsigned long end,
++				       bool direct)
+ {
+ 	unsigned long addr, next;
+ 	pud_t *pud_base;
+@@ -861,7 +871,7 @@ static void __meminit remove_pagetable(unsigned long start, unsigned long end)
+ 		}
+ 
+ 		pud_base = p4d_pgtable(*p4d);
+-		remove_pud_table(pud_base, addr, next);
++		remove_pud_table(pud_base, addr, next, direct);
+ 		free_pud_table(pud_base, p4d);
+ 	}
+ 
+@@ -884,7 +894,7 @@ int __meminit radix__create_section_mapping(unsigned long start,
+ 
+ int __meminit radix__remove_section_mapping(unsigned long start, unsigned long end)
+ {
+-	remove_pagetable(start, end);
++	remove_pagetable(start, end, true);
+ 	return 0;
+ }
+ #endif /* CONFIG_MEMORY_HOTPLUG */
+@@ -920,7 +930,7 @@ int __meminit radix__vmemmap_create_mapping(unsigned long start,
+ #ifdef CONFIG_MEMORY_HOTPLUG
+ void __meminit radix__vmemmap_remove_mapping(unsigned long start, unsigned long page_size)
+ {
+-	remove_pagetable(start, start + page_size);
++	remove_pagetable(start, start + page_size, false);
+ }
+ #endif
+ #endif
+diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
+index 05b0d584e50b8..fe1b83020e0df 100644
+--- a/arch/powerpc/mm/init_64.c
++++ b/arch/powerpc/mm/init_64.c
+@@ -189,7 +189,7 @@ static bool altmap_cross_boundary(struct vmem_altmap *altmap, unsigned long star
+ 	unsigned long nr_pfn = page_size / sizeof(struct page);
+ 	unsigned long start_pfn = page_to_pfn((struct page *)start);
+ 
+-	if ((start_pfn + nr_pfn) > altmap->end_pfn)
++	if ((start_pfn + nr_pfn - 1) > altmap->end_pfn)
+ 		return true;
+ 
+ 	if (start_pfn < altmap->base_pfn)
+diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c
+index 7195133b26bb9..59882da3e7425 100644
+--- a/arch/powerpc/platforms/powernv/pci-sriov.c
++++ b/arch/powerpc/platforms/powernv/pci-sriov.c
+@@ -594,12 +594,12 @@ static void pnv_pci_sriov_disable(struct pci_dev *pdev)
+ 	struct pnv_iov_data   *iov;
+ 
+ 	iov = pnv_iov_get(pdev);
+-	num_vfs = iov->num_vfs;
+-	base_pe = iov->vf_pe_arr[0].pe_number;
+-
+ 	if (WARN_ON(!iov))
+ 		return;
+ 
++	num_vfs = iov->num_vfs;
++	base_pe = iov->vf_pe_arr[0].pe_number;
++
+ 	/* Release VF PEs */
+ 	pnv_ioda_release_vf_PE(pdev);
+ 
+diff --git a/arch/powerpc/platforms/powernv/vas-window.c b/arch/powerpc/platforms/powernv/vas-window.c
+index 0072682531d80..b664838008c12 100644
+--- a/arch/powerpc/platforms/powernv/vas-window.c
++++ b/arch/powerpc/platforms/powernv/vas-window.c
+@@ -1310,8 +1310,8 @@ int vas_win_close(struct vas_window *vwin)
+ 	/* if send window, drop reference to matching receive window */
+ 	if (window->tx_win) {
+ 		if (window->user_win) {
+-			put_vas_user_win_ref(&vwin->task_ref);
+ 			mm_context_remove_vas_window(vwin->task_ref.mm);
++			put_vas_user_win_ref(&vwin->task_ref);
+ 		}
+ 		put_rx_win(window->rxwin);
+ 	}
+diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
+index 6f30113b5468e..cd632ba9ebfff 100644
+--- a/arch/powerpc/platforms/pseries/mobility.c
++++ b/arch/powerpc/platforms/pseries/mobility.c
+@@ -750,7 +750,7 @@ static int pseries_migrate_partition(u64 handle)
+ 		goto out;
+ 
+ 	if (factor)
+-		watchdog_nmi_set_timeout_pct(factor);
++		watchdog_hardlockup_set_timeout_pct(factor);
+ 
+ 	ret = pseries_suspend(handle);
+ 	if (ret == 0) {
+@@ -766,7 +766,7 @@ static int pseries_migrate_partition(u64 handle)
+ 		pseries_cancel_migration(handle, ret);
+ 
+ 	if (factor)
+-		watchdog_nmi_set_timeout_pct(0);
++		watchdog_hardlockup_set_timeout_pct(0);
+ 
+ out:
+ 	vas_migration_handler(VAS_RESUME);
+diff --git a/arch/powerpc/platforms/pseries/vas.c b/arch/powerpc/platforms/pseries/vas.c
+index 513180467562b..9a44a98ba3420 100644
+--- a/arch/powerpc/platforms/pseries/vas.c
++++ b/arch/powerpc/platforms/pseries/vas.c
+@@ -507,8 +507,8 @@ static int vas_deallocate_window(struct vas_window *vwin)
+ 	vascaps[win->win_type].nr_open_windows--;
+ 	mutex_unlock(&vas_pseries_mutex);
+ 
+-	put_vas_user_win_ref(&vwin->task_ref);
+ 	mm_context_remove_vas_window(vwin->task_ref.mm);
++	put_vas_user_win_ref(&vwin->task_ref);
+ 
+ 	kfree(win);
+ 	return 0;
+diff --git a/arch/riscv/kernel/hibernate-asm.S b/arch/riscv/kernel/hibernate-asm.S
+index effaf5ca5da0e..f3e62e766cb29 100644
+--- a/arch/riscv/kernel/hibernate-asm.S
++++ b/arch/riscv/kernel/hibernate-asm.S
+@@ -28,7 +28,6 @@ ENTRY(__hibernate_cpu_resume)
+ 
+ 	REG_L	a0, hibernate_cpu_context
+ 
+-	suspend_restore_csrs
+ 	suspend_restore_regs
+ 
+ 	/* Return zero value. */
+diff --git a/arch/riscv/kernel/hibernate.c b/arch/riscv/kernel/hibernate.c
+index 264b2dcdd67e3..671b686c01587 100644
+--- a/arch/riscv/kernel/hibernate.c
++++ b/arch/riscv/kernel/hibernate.c
+@@ -80,7 +80,6 @@ int pfn_is_nosave(unsigned long pfn)
+ 
+ void notrace save_processor_state(void)
+ {
+-	WARN_ON(num_online_cpus() != 1);
+ }
+ 
+ void notrace restore_processor_state(void)
+diff --git a/arch/riscv/kernel/probes/uprobes.c b/arch/riscv/kernel/probes/uprobes.c
+index c976a21cd4bd5..194f166b2cc40 100644
+--- a/arch/riscv/kernel/probes/uprobes.c
++++ b/arch/riscv/kernel/probes/uprobes.c
+@@ -67,6 +67,7 @@ int arch_uprobe_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+ 	struct uprobe_task *utask = current->utask;
+ 
+ 	WARN_ON_ONCE(current->thread.bad_cause != UPROBE_TRAP_NR);
++	current->thread.bad_cause = utask->autask.saved_cause;
+ 
+ 	instruction_pointer_set(regs, utask->vaddr + auprobe->insn_size);
+ 
+@@ -102,6 +103,7 @@ void arch_uprobe_abort_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+ {
+ 	struct uprobe_task *utask = current->utask;
+ 
++	current->thread.bad_cause = utask->autask.saved_cause;
+ 	/*
+	 * Task has received a fatal signal, so reset back to the probed
+ 	 * address.
+diff --git a/arch/riscv/kernel/smpboot.c b/arch/riscv/kernel/smpboot.c
+index 445a4efee267d..6765f1ce79625 100644
+--- a/arch/riscv/kernel/smpboot.c
++++ b/arch/riscv/kernel/smpboot.c
+@@ -161,10 +161,11 @@ asmlinkage __visible void smp_callin(void)
+ 	mmgrab(mm);
+ 	current->active_mm = mm;
+ 
+-	riscv_ipi_enable();
+-
+ 	store_cpu_topology(curr_cpuid);
+ 	notify_cpu_starting(curr_cpuid);
++
++	riscv_ipi_enable();
++
+ 	numa_add_cpu(curr_cpuid);
+ 	set_cpu_online(curr_cpuid, 1);
+ 	probe_vendor_features(curr_cpuid);
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 4fa420faa7808..1306149aad57a 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -267,7 +267,6 @@ static void __init setup_bootmem(void)
+ 	dma_contiguous_reserve(dma32_phys_limit);
+ 	if (IS_ENABLED(CONFIG_64BIT))
+ 		hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
+-	memblock_allow_resize();
+ }
+ 
+ #ifdef CONFIG_MMU
+@@ -1370,6 +1369,9 @@ void __init paging_init(void)
+ {
+ 	setup_bootmem();
+ 	setup_vm_final();
++
++	/* memblock resizing depends on the linear mapping being ready */
++	memblock_allow_resize();
+ }
+ 
+ void __init misc_mem_init(void)
+diff --git a/arch/s390/kvm/diag.c b/arch/s390/kvm/diag.c
+index 807fa9da1e721..3c65b8258ae67 100644
+--- a/arch/s390/kvm/diag.c
++++ b/arch/s390/kvm/diag.c
+@@ -166,6 +166,7 @@ static int diag9c_forwarding_overrun(void)
+ static int __diag_time_slice_end_directed(struct kvm_vcpu *vcpu)
+ {
+ 	struct kvm_vcpu *tcpu;
++	int tcpu_cpu;
+ 	int tid;
+ 
+ 	tid = vcpu->run->s.regs.gprs[(vcpu->arch.sie_block->ipa & 0xf0) >> 4];
+@@ -181,14 +182,15 @@ static int __diag_time_slice_end_directed(struct kvm_vcpu *vcpu)
+ 		goto no_yield;
+ 
+ 	/* target guest VCPU already running */
+-	if (READ_ONCE(tcpu->cpu) >= 0) {
++	tcpu_cpu = READ_ONCE(tcpu->cpu);
++	if (tcpu_cpu >= 0) {
+ 		if (!diag9c_forwarding_hz || diag9c_forwarding_overrun())
+ 			goto no_yield;
+ 
+ 		/* target host CPU already running */
+-		if (!vcpu_is_preempted(tcpu->cpu))
++		if (!vcpu_is_preempted(tcpu_cpu))
+ 			goto no_yield;
+-		smp_yield_cpu(tcpu->cpu);
++		smp_yield_cpu(tcpu_cpu);
+ 		VCPU_EVENT(vcpu, 5,
+ 			   "diag time slice end directed to %d: yield forwarded",
+ 			   tid);
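
The hunk above is a time-of-check/time-of-use fix: tcpu->cpu can change
between the ">= 0" test and the later uses, so it is loaded once into a
local and only the local is used afterwards. A minimal userspace sketch
of the same read-once pattern (names are illustrative, not kernel code):

    #include <stdatomic.h>
    #include <stdio.h>

    /* 'where' may be updated concurrently by another thread. */
    static _Atomic int where = 3;

    static int yield_to_target(void)
    {
            int cpu = atomic_load(&where);  /* read exactly once */

            if (cpu < 0)
                    return -1;              /* target not running */

            /* Every later use sees the value that passed the check. */
            return cpu;
    }

    int main(void)
    {
            printf("%d\n", yield_to_target());
            return 0;
    }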
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 17b81659cdb20..6700196964648 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -2156,6 +2156,10 @@ static unsigned long kvm_s390_next_dirty_cmma(struct kvm_memslots *slots,
+ 		ms = container_of(mnode, struct kvm_memory_slot, gfn_node[slots->node_idx]);
+ 		ofs = 0;
+ 	}
++
++	if (cur_gfn < ms->base_gfn)
++		ofs = 0;
++
+ 	ofs = find_next_bit(kvm_second_dirty_bitmap(ms), ms->npages, ofs);
+ 	while (ofs >= ms->npages && (mnode = rb_next(mnode))) {
+ 		ms = container_of(mnode, struct kvm_memory_slot, gfn_node[slots->node_idx]);
+diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
+index 8d6b765abf29b..0333ee482eb89 100644
+--- a/arch/s390/kvm/vsie.c
++++ b/arch/s390/kvm/vsie.c
+@@ -177,7 +177,8 @@ static int setup_apcb00(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
+ 			    sizeof(struct kvm_s390_apcb0)))
+ 		return -EFAULT;
+ 
+-	bitmap_and(apcb_s, apcb_s, apcb_h, sizeof(struct kvm_s390_apcb0));
++	bitmap_and(apcb_s, apcb_s, apcb_h,
++		   BITS_PER_BYTE * sizeof(struct kvm_s390_apcb0));
+ 
+ 	return 0;
+ }
+@@ -203,7 +204,8 @@ static int setup_apcb11(struct kvm_vcpu *vcpu, unsigned long *apcb_s,
+ 			    sizeof(struct kvm_s390_apcb1)))
+ 		return -EFAULT;
+ 
+-	bitmap_and(apcb_s, apcb_s, apcb_h, sizeof(struct kvm_s390_apcb1));
++	bitmap_and(apcb_s, apcb_s, apcb_h,
++		   BITS_PER_BYTE * sizeof(struct kvm_s390_apcb1));
+ 
+ 	return 0;
+ }
+diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
+index 5b22c6e24528a..b9dcb4ae6c59a 100644
+--- a/arch/s390/mm/vmem.c
++++ b/arch/s390/mm/vmem.c
+@@ -667,7 +667,15 @@ static void __init memblock_region_swap(void *a, void *b, int size)
+ 
+ #ifdef CONFIG_KASAN
+ #define __sha(x)	((unsigned long)kasan_mem_to_shadow((void *)x))
++
++static inline int set_memory_kasan(unsigned long start, unsigned long end)
++{
++	start = PAGE_ALIGN_DOWN(__sha(start));
++	end = PAGE_ALIGN(__sha(end));
++	return set_memory_rwnx(start, (end - start) >> PAGE_SHIFT);
++}
+ #endif
++
+ /*
+  * map whole physical memory to virtual memory (identity mapping)
+  * we reserve enough space in the vmalloc area for vmemmap to hotplug
+@@ -737,10 +745,8 @@ void __init vmem_map_init(void)
+ 	}
+ 
+ #ifdef CONFIG_KASAN
+-	for_each_mem_range(i, &base, &end) {
+-		set_memory_rwnx(__sha(base),
+-				(__sha(end) - __sha(base)) >> PAGE_SHIFT);
+-	}
++	for_each_mem_range(i, &base, &end)
++		set_memory_kasan(base, end);
+ #endif
+ 	set_memory_rox((unsigned long)_stext,
+ 		       (unsigned long)(_etext - _stext) >> PAGE_SHIFT);
+diff --git a/arch/sh/boards/mach-dreamcast/irq.c b/arch/sh/boards/mach-dreamcast/irq.c
+index cc06e4cdb4cdf..0eec82fb85e7c 100644
+--- a/arch/sh/boards/mach-dreamcast/irq.c
++++ b/arch/sh/boards/mach-dreamcast/irq.c
+@@ -108,13 +108,13 @@ int systemasic_irq_demux(int irq)
+ 	__u32 j, bit;
+ 
+ 	switch (irq) {
+-	case 13:
++	case 13 + 16:
+ 		level = 0;
+ 		break;
+-	case 11:
++	case 11 + 16:
+ 		level = 1;
+ 		break;
+-	case  9:
++	case 9 + 16:
+ 		level = 2;
+ 		break;
+ 	default:
+diff --git a/arch/sh/boards/mach-highlander/setup.c b/arch/sh/boards/mach-highlander/setup.c
+index 533393d779c2b..01565660a6695 100644
+--- a/arch/sh/boards/mach-highlander/setup.c
++++ b/arch/sh/boards/mach-highlander/setup.c
+@@ -389,10 +389,10 @@ static unsigned char irl2irq[HL_NR_IRL];
+ 
+ static int highlander_irq_demux(int irq)
+ {
+-	if (irq >= HL_NR_IRL || irq < 0 || !irl2irq[irq])
++	if (irq >= HL_NR_IRL + 16 || irq < 16 || !irl2irq[irq - 16])
+ 		return irq;
+ 
+-	return irl2irq[irq];
++	return irl2irq[irq - 16];
+ }
+ 
+ static void __init highlander_init_irq(void)
+diff --git a/arch/sh/boards/mach-r2d/irq.c b/arch/sh/boards/mach-r2d/irq.c
+index e34f81e9ae813..d0a54a9adbce2 100644
+--- a/arch/sh/boards/mach-r2d/irq.c
++++ b/arch/sh/boards/mach-r2d/irq.c
+@@ -117,10 +117,10 @@ static unsigned char irl2irq[R2D_NR_IRL];
+ 
+ int rts7751r2d_irq_demux(int irq)
+ {
+-	if (irq >= R2D_NR_IRL || irq < 0 || !irl2irq[irq])
++	if (irq >= R2D_NR_IRL + 16 || irq < 16 || !irl2irq[irq - 16])
+ 		return irq;
+ 
+-	return irl2irq[irq];
++	return irl2irq[irq - 16];
+ }
+ 
+ /*
+diff --git a/arch/sh/cchips/Kconfig b/arch/sh/cchips/Kconfig
+index efde2edb56278..9659a0bc58dec 100644
+--- a/arch/sh/cchips/Kconfig
++++ b/arch/sh/cchips/Kconfig
+@@ -29,9 +29,9 @@ endchoice
+ config HD64461_IRQ
+ 	int "HD64461 IRQ"
+ 	depends on HD64461
+-	default "36"
++	default "52"
+ 	help
+-	  The default setting of the HD64461 IRQ is 36.
++	  The default setting of the HD64461 IRQ is 52.
+ 
+ 	  Do not change this unless you know what you are doing.
+ 
+diff --git a/arch/sh/drivers/dma/dma-sh.c b/arch/sh/drivers/dma/dma-sh.c
+index 96c626c2cd0a4..306fba1564e5e 100644
+--- a/arch/sh/drivers/dma/dma-sh.c
++++ b/arch/sh/drivers/dma/dma-sh.c
+@@ -18,6 +18,18 @@
+ #include <cpu/dma-register.h>
+ #include <cpu/dma.h>
+ 
++/*
++ * Some of the SoCs feature two DMAC modules. In such a case, the channels are
++ * distributed equally among them.
++ */
++#ifdef	SH_DMAC_BASE1
++#define	SH_DMAC_NR_MD_CH	(CONFIG_NR_ONCHIP_DMA_CHANNELS / 2)
++#else
++#define	SH_DMAC_NR_MD_CH	CONFIG_NR_ONCHIP_DMA_CHANNELS
++#endif
++
++#define	SH_DMAC_CH_SZ		0x10
++
+ /*
+  * Define the default configuration for dual address memory-memory transfer.
+  * The 0x400 value represents auto-request, external->external.
+@@ -29,7 +41,7 @@ static unsigned long dma_find_base(unsigned int chan)
+ 	unsigned long base = SH_DMAC_BASE0;
+ 
+ #ifdef SH_DMAC_BASE1
+-	if (chan >= 6)
++	if (chan >= SH_DMAC_NR_MD_CH)
+ 		base = SH_DMAC_BASE1;
+ #endif
+ 
+@@ -40,13 +52,13 @@ static unsigned long dma_base_addr(unsigned int chan)
+ {
+ 	unsigned long base = dma_find_base(chan);
+ 
+-	/* Normalize offset calculation */
+-	if (chan >= 9)
+-		chan -= 6;
+-	if (chan >= 4)
+-		base += 0x10;
++	chan = (chan % SH_DMAC_NR_MD_CH) * SH_DMAC_CH_SZ;
++
++	/* DMAOR is placed inside the channel register space. Step over it. */
++	if (chan >= DMAOR)
++		base += SH_DMAC_CH_SZ;
+ 
+-	return base + (chan * 0x10);
++	return base + chan;
+ }
+ 
+ #ifdef CONFIG_SH_DMA_IRQ_MULTI
+@@ -250,12 +262,11 @@ static int sh_dmac_get_dma_residue(struct dma_channel *chan)
+ #define NR_DMAOR	1
+ #endif
+ 
+-/*
+- * DMAOR bases are broken out amongst channel groups. DMAOR0 manages
+- * channels 0 - 5, DMAOR1 6 - 11 (optional).
+- */
+-#define dmaor_read_reg(n)		__raw_readw(dma_find_base((n)*6))
+-#define dmaor_write_reg(n, data)	__raw_writew(data, dma_find_base(n)*6)
++#define dmaor_read_reg(n)		__raw_readw(dma_find_base((n) * \
++						    SH_DMAC_NR_MD_CH) + DMAOR)
++#define dmaor_write_reg(n, data)	__raw_writew(data, \
++						     dma_find_base((n) * \
++						     SH_DMAC_NR_MD_CH) + DMAOR)
+ 
+ static inline int dmaor_reset(int no)
+ {
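
The comment block above fully determines the register math, which can be
modeled outside the kernel. A rough standalone model (all constants,
including the DMAOR offset, are illustrative rather than the real SoC
layout):

    #include <stdio.h>

    #define NR_CH     12              /* total channels (illustrative) */
    #define NR_MD_CH  (NR_CH / 2)     /* channels per DMAC module */
    #define CH_SZ     0x10            /* register stride per channel */
    #define DMAOR     0x40            /* control reg inside the bank */
    #define BASE0     0x1000UL
    #define BASE1     0x2000UL

    static unsigned long base_addr(unsigned int chan)
    {
            unsigned long base = (chan >= NR_MD_CH) ? BASE1 : BASE0;
            unsigned int off = (chan % NR_MD_CH) * CH_SZ;

            /* DMAOR sits inside the channel space: step over it. */
            if (off >= DMAOR)
                    off += CH_SZ;
            return base + off;
    }

    int main(void)
    {
            for (unsigned int ch = 0; ch < NR_CH; ch++)
                    printf("ch%-2u -> %#lx\n", ch, base_addr(ch));
            return 0;
    }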
+diff --git a/arch/sh/include/asm/hd64461.h b/arch/sh/include/asm/hd64461.h
+index afb24cb034b11..d2c485fa333b5 100644
+--- a/arch/sh/include/asm/hd64461.h
++++ b/arch/sh/include/asm/hd64461.h
+@@ -229,7 +229,7 @@
+ #define	HD64461_NIMR		HD64461_IO_OFFSET(0x5002)
+ 
+ #define	HD64461_IRQBASE		OFFCHIP_IRQ_BASE
+-#define	OFFCHIP_IRQ_BASE	64
++#define	OFFCHIP_IRQ_BASE	(64 + 16)
+ #define	HD64461_IRQ_NUM		16
+ 
+ #define	HD64461_IRQ_UART	(HD64461_IRQBASE+5)
+diff --git a/arch/sh/include/mach-common/mach/highlander.h b/arch/sh/include/mach-common/mach/highlander.h
+index fb44c299d0337..b12c795584225 100644
+--- a/arch/sh/include/mach-common/mach/highlander.h
++++ b/arch/sh/include/mach-common/mach/highlander.h
+@@ -176,7 +176,7 @@
+ #define IVDR_CK_ON	4		/* iVDR Clock ON */
+ #endif
+ 
+-#define HL_FPGA_IRQ_BASE	200
++#define HL_FPGA_IRQ_BASE	(200 + 16)
+ #define HL_NR_IRL		15
+ 
+ #define IRQ_AX88796		(HL_FPGA_IRQ_BASE + 0)
+diff --git a/arch/sh/include/mach-common/mach/r2d.h b/arch/sh/include/mach-common/mach/r2d.h
+index 0d7e483c7d3f5..69bc1907c5637 100644
+--- a/arch/sh/include/mach-common/mach/r2d.h
++++ b/arch/sh/include/mach-common/mach/r2d.h
+@@ -47,7 +47,7 @@
+ 
+ #define IRLCNTR1	(PA_BCR + 0)	/* Interrupt Control Register1 */
+ 
+-#define R2D_FPGA_IRQ_BASE	100
++#define R2D_FPGA_IRQ_BASE	(100 + 16)
+ 
+ #define IRQ_VOYAGER		(R2D_FPGA_IRQ_BASE + 0)
+ #define IRQ_EXT			(R2D_FPGA_IRQ_BASE + 1)
+diff --git a/arch/sh/include/mach-dreamcast/mach/sysasic.h b/arch/sh/include/mach-dreamcast/mach/sysasic.h
+index ed69ce7f20301..3b27be9a527ea 100644
+--- a/arch/sh/include/mach-dreamcast/mach/sysasic.h
++++ b/arch/sh/include/mach-dreamcast/mach/sysasic.h
+@@ -22,7 +22,7 @@
+    takes.
+ */
+ 
+-#define HW_EVENT_IRQ_BASE  48
++#define HW_EVENT_IRQ_BASE  (48 + 16)
+ 
+ /* IRQ 13 */
+ #define HW_EVENT_VSYNC     (HW_EVENT_IRQ_BASE +  5) /* VSync */
+diff --git a/arch/sh/include/mach-se/mach/se7724.h b/arch/sh/include/mach-se/mach/se7724.h
+index 1fe28820dfa95..ea6c46633b337 100644
+--- a/arch/sh/include/mach-se/mach/se7724.h
++++ b/arch/sh/include/mach-se/mach/se7724.h
+@@ -37,7 +37,7 @@
+ #define IRQ2_IRQ        evt2irq(0x640)
+ 
+ /* Bits in IRQ012 registers */
+-#define SE7724_FPGA_IRQ_BASE	220
++#define SE7724_FPGA_IRQ_BASE	(220 + 16)
+ 
+ /* IRQ0 */
+ #define IRQ0_BASE	SE7724_FPGA_IRQ_BASE
+diff --git a/arch/sh/kernel/cpu/sh2/probe.c b/arch/sh/kernel/cpu/sh2/probe.c
+index d342ea08843f6..70a07f4f2142f 100644
+--- a/arch/sh/kernel/cpu/sh2/probe.c
++++ b/arch/sh/kernel/cpu/sh2/probe.c
+@@ -21,7 +21,7 @@ static int __init scan_cache(unsigned long node, const char *uname,
+ 	if (!of_flat_dt_is_compatible(node, "jcore,cache"))
+ 		return 0;
+ 
+-	j2_ccr_base = (u32 __iomem *)of_flat_dt_translate_address(node);
++	j2_ccr_base = ioremap(of_flat_dt_translate_address(node), 4);
+ 
+ 	return 1;
+ }
+diff --git a/arch/sh/kernel/cpu/sh3/entry.S b/arch/sh/kernel/cpu/sh3/entry.S
+index e48b3dd996f58..b1f5b3c58a018 100644
+--- a/arch/sh/kernel/cpu/sh3/entry.S
++++ b/arch/sh/kernel/cpu/sh3/entry.S
+@@ -470,9 +470,9 @@ ENTRY(handle_interrupt)
+ 	mov	r4, r0		! save vector->jmp table offset for later
+ 
+ 	shlr2	r4		! vector to IRQ# conversion
+-	add	#-0x10, r4
+ 
+-	cmp/pz	r4		! is it a valid IRQ?
++	mov	#0x10, r5
++	cmp/hs	r5, r4		! is it a valid IRQ?
+ 	bt	10f
+ 
+ 	/*
+diff --git a/arch/sparc/kernel/nmi.c b/arch/sparc/kernel/nmi.c
+index 060fff95a305c..9d9e29b75c43a 100644
+--- a/arch/sparc/kernel/nmi.c
++++ b/arch/sparc/kernel/nmi.c
+@@ -282,11 +282,11 @@ __setup("nmi_watchdog=", setup_nmi_watchdog);
+  * sparc specific NMI watchdog enable function.
+  * Enables watchdog if it is not enabled already.
+  */
+-int watchdog_nmi_enable(unsigned int cpu)
++void watchdog_hardlockup_enable(unsigned int cpu)
+ {
+ 	if (atomic_read(&nmi_active) == -1) {
+ 		pr_warn("NMI watchdog cannot be enabled or disabled\n");
+-		return -1;
++		return;
+ 	}
+ 
+ 	/*
+@@ -295,17 +295,15 @@ int watchdog_nmi_enable(unsigned int cpu)
+ 	 * process first.
+ 	 */
+ 	if (!nmi_init_done)
+-		return 0;
++		return;
+ 
+ 	smp_call_function_single(cpu, start_nmi_watchdog, NULL, 1);
+-
+-	return 0;
+ }
+ /*
+  * sparc specific NMI watchdog disable function.
+  * Disables watchdog if it is not disabled already.
+  */
+-void watchdog_nmi_disable(unsigned int cpu)
++void watchdog_hardlockup_disable(unsigned int cpu)
+ {
+ 	if (atomic_read(&nmi_active) == -1)
+ 		pr_warn_once("NMI watchdog cannot be enabled or disabled\n");
+diff --git a/arch/um/Makefile b/arch/um/Makefile
+index 8186d4761bda6..da4d5256af2f0 100644
+--- a/arch/um/Makefile
++++ b/arch/um/Makefile
+@@ -149,7 +149,7 @@ export CFLAGS_vmlinux := $(LINK-y) $(LINK_WRAPS) $(LD_FLAGS_CMDLINE) $(CC_FLAGS_
+ # When cleaning we don't include .config, so we don't include
+ # TT or skas makefiles and don't clean skas_ptregs.h.
+ CLEAN_FILES += linux x.i gmon.out
+-MRPROPER_FILES += arch/$(SUBARCH)/include/generated
++MRPROPER_FILES += $(HOST_DIR)/include/generated
+ 
+ archclean:
+ 	@find . \( -name '*.bb' -o -name '*.bbg' -o -name '*.da' \
+diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
+index e146b599260f8..64f1343df062f 100644
+--- a/arch/x86/coco/tdx/tdx.c
++++ b/arch/x86/coco/tdx/tdx.c
+@@ -840,6 +840,30 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
+ 	return true;
+ }
+ 
++static bool tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
++					  bool enc)
++{
++	/*
++	 * Only handle shared->private conversion here.
++	 * See the comment in tdx_early_init().
++	 */
++	if (enc)
++		return tdx_enc_status_changed(vaddr, numpages, enc);
++	return true;
++}
++
++static bool tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
++					 bool enc)
++{
++	/*
++	 * Only handle private->shared conversion here.
++	 * See the comment in tdx_early_init().
++	 */
++	if (!enc)
++		return tdx_enc_status_changed(vaddr, numpages, enc);
++	return true;
++}
++
+ void __init tdx_early_init(void)
+ {
+ 	u64 cc_mask;
+@@ -867,9 +891,30 @@ void __init tdx_early_init(void)
+ 	 */
+ 	physical_mask &= cc_mask - 1;
+ 
+-	x86_platform.guest.enc_cache_flush_required = tdx_cache_flush_required;
+-	x86_platform.guest.enc_tlb_flush_required   = tdx_tlb_flush_required;
+-	x86_platform.guest.enc_status_change_finish = tdx_enc_status_changed;
++	/*
++	 * The kernel mapping should match the TDX metadata for the page.
++	 * load_unaligned_zeropad() can touch memory *adjacent* to that which is
++	 * owned by the caller and can catch even _momentary_ mismatches.  Bad
++	 * things happen on mismatch:
++	 *
++	 *   - Private mapping => Shared Page  == Guest shutdown
++	 *   - Shared mapping  => Private Page == Recoverable #VE
++	 *
++	 * guest.enc_status_change_prepare() converts the page from
++	 * shared=>private before the mapping becomes private.
++	 *
++	 * guest.enc_status_change_finish() converts the page from
++	 * private=>shared after the mapping becomes shared.
++	 *
++	 * In both cases there is a temporary shared mapping to a private page,
++	 * which can result in a #VE.  But, there is never a private mapping to
++	 * a shared page.
++	 */
++	x86_platform.guest.enc_status_change_prepare = tdx_enc_status_change_prepare;
++	x86_platform.guest.enc_status_change_finish  = tdx_enc_status_change_finish;
++
++	x86_platform.guest.enc_cache_flush_required  = tdx_cache_flush_required;
++	x86_platform.guest.enc_tlb_flush_required    = tdx_tlb_flush_required;
+ 
+ 	pr_info("Guest detected\n");
+ }
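
The prepare/finish split encodes an ordering invariant: a page's TDX
state may be "more private" than its kernel mapping, never the other way
around. A self-contained sketch of how a generic conversion path would
order the two hooks (all functions here are stand-ins, not the actual
set_memory_*() code):

    #include <stdbool.h>

    static bool prepare(unsigned long va, int n, bool enc) { (void)va; (void)n; (void)enc; return true; }
    static bool finish(unsigned long va, int n, bool enc)  { (void)va; (void)n; (void)enc; return true; }
    static void remap(unsigned long va, int n, bool enc)   { (void)va; (void)n; (void)enc; }

    static int convert(unsigned long va, int n, bool to_private)
    {
            if (!prepare(va, n, to_private))  /* shared->private done here */
                    return -1;
            remap(va, n, to_private);         /* kernel mapping flips here */
            if (!finish(va, n, to_private))   /* private->shared done here */
                    return -1;
            return 0;
    }

    int main(void) { return convert(0x1000, 1, true); }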
+diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
+index bccea57dee81e..abadd5f234254 100644
+--- a/arch/x86/events/amd/core.c
++++ b/arch/x86/events/amd/core.c
+@@ -374,7 +374,7 @@ static int amd_pmu_hw_config(struct perf_event *event)
+ 
+ 	/* pass precise event sampling to ibs: */
+ 	if (event->attr.precise_ip && get_ibs_caps())
+-		return -ENOENT;
++		return forward_event_to_ibs(event);
+ 
+ 	if (has_branch_stack(event) && !x86_pmu.lbr_nr)
+ 		return -EOPNOTSUPP;
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index 64582954b5f67..3710148021916 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -190,7 +190,7 @@ static struct perf_ibs *get_ibs_pmu(int type)
+ }
+ 
+ /*
+- * Use IBS for precise event sampling:
++ * core pmu config -> IBS config
+  *
+  *  perf record -a -e cpu-cycles:p ...    # use ibs op counting cycle count
+  *  perf record -a -e r076:p ...          # same as -e cpu-cycles:p
+@@ -199,25 +199,9 @@ static struct perf_ibs *get_ibs_pmu(int type)
+  * IbsOpCntCtl (bit 19) of IBS Execution Control Register (IbsOpCtl,
+  * MSRC001_1033) is used to select either cycle or micro-ops counting
+  * mode.
+- *
+- * The rip of IBS samples has skid 0. Thus, IBS supports precise
+- * levels 1 and 2 and the PERF_EFLAGS_EXACT is set. In rare cases the
+- * rip is invalid when IBS was not able to record the rip correctly.
+- * We clear PERF_EFLAGS_EXACT and take the rip from pt_regs then.
+- *
+  */
+-static int perf_ibs_precise_event(struct perf_event *event, u64 *config)
++static int core_pmu_ibs_config(struct perf_event *event, u64 *config)
+ {
+-	switch (event->attr.precise_ip) {
+-	case 0:
+-		return -ENOENT;
+-	case 1:
+-	case 2:
+-		break;
+-	default:
+-		return -EOPNOTSUPP;
+-	}
+-
+ 	switch (event->attr.type) {
+ 	case PERF_TYPE_HARDWARE:
+ 		switch (event->attr.config) {
+@@ -243,22 +227,37 @@ static int perf_ibs_precise_event(struct perf_event *event, u64 *config)
+ 	return -EOPNOTSUPP;
+ }
+ 
++/*
++ * The rip of IBS samples has skid 0. Thus, IBS supports precise
++ * levels 1 and 2 and the PERF_EFLAGS_EXACT is set. In rare cases the
++ * rip is invalid when IBS was not able to record the rip correctly.
++ * We clear PERF_EFLAGS_EXACT and take the rip from pt_regs then.
++ */
++int forward_event_to_ibs(struct perf_event *event)
++{
++	u64 config = 0;
++
++	if (!event->attr.precise_ip || event->attr.precise_ip > 2)
++		return -EOPNOTSUPP;
++
++	if (!core_pmu_ibs_config(event, &config)) {
++		event->attr.type = perf_ibs_op.pmu.type;
++		event->attr.config = config;
++	}
++	return -ENOENT;
++}
++
+ static int perf_ibs_init(struct perf_event *event)
+ {
+ 	struct hw_perf_event *hwc = &event->hw;
+ 	struct perf_ibs *perf_ibs;
+ 	u64 max_cnt, config;
+-	int ret;
+ 
+ 	perf_ibs = get_ibs_pmu(event->attr.type);
+-	if (perf_ibs) {
+-		config = event->attr.config;
+-	} else {
+-		perf_ibs = &perf_ibs_op;
+-		ret = perf_ibs_precise_event(event, &config);
+-		if (ret)
+-			return ret;
+-	}
++	if (!perf_ibs)
++		return -ENOENT;
++
++	config = event->attr.config;
+ 
+ 	if (event->pmu != &perf_ibs->pmu)
+ 		return -ENOENT;
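
The net effect is that the core PMU no longer rejects precise events: it
rewrites the event to target the IBS op PMU and still returns -ENOENT,
which the generic perf layer interprets as "try another PMU". A toy model
of that redispatch contract (types and values are illustrative):

    #include <stdio.h>

    enum pmu { PMU_CORE, PMU_IBS_OP };

    struct event_attr {
            enum pmu type;
            unsigned long long config;
            int precise_ip;
    };

    static int core_hw_config(struct event_attr *attr)
    {
            if (!attr->precise_ip)
                    return 0;        /* core PMU keeps the event */
            if (attr->precise_ip > 2)
                    return -95;      /* -EOPNOTSUPP */
            attr->type = PMU_IBS_OP; /* redirect to the IBS PMU */
            attr->config = 0;        /* translated config (made up) */
            return -2;               /* -ENOENT: "not mine, retry" */
    }

    int main(void)
    {
            struct event_attr a = { PMU_CORE, 0x76, 1 };
            printf("ret=%d type=%d\n", core_hw_config(&a), a.type);
            return 0;
    }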
+diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
+index cc92388b7a999..6f7c1b5606ad4 100644
+--- a/arch/x86/hyperv/ivm.c
++++ b/arch/x86/hyperv/ivm.c
+@@ -17,6 +17,7 @@
+ #include <asm/mem_encrypt.h>
+ #include <asm/mshyperv.h>
+ #include <asm/hypervisor.h>
++#include <asm/mtrr.h>
+ 
+ #ifdef CONFIG_AMD_MEM_ENCRYPT
+ 
+@@ -372,6 +373,9 @@ void __init hv_vtom_init(void)
+ 	x86_platform.guest.enc_cache_flush_required = hv_vtom_cache_flush_required;
+ 	x86_platform.guest.enc_tlb_flush_required = hv_vtom_tlb_flush_required;
+ 	x86_platform.guest.enc_status_change_finish = hv_vtom_set_host_visibility;
++
++	/* Set WB as the default cache mode. */
++	mtrr_overwrite_state(NULL, 0, MTRR_TYPE_WRBACK);
+ }
+ 
+ #endif /* CONFIG_AMD_MEM_ENCRYPT */
+diff --git a/arch/x86/include/asm/mtrr.h b/arch/x86/include/asm/mtrr.h
+index f0eeaf6e5f5f7..1bae790a553a5 100644
+--- a/arch/x86/include/asm/mtrr.h
++++ b/arch/x86/include/asm/mtrr.h
+@@ -23,14 +23,43 @@
+ #ifndef _ASM_X86_MTRR_H
+ #define _ASM_X86_MTRR_H
+ 
++#include <linux/bits.h>
+ #include <uapi/asm/mtrr.h>
+ 
++/* Defines for hardware MTRR registers. */
++#define MTRR_CAP_VCNT		GENMASK(7, 0)
++#define MTRR_CAP_FIX		BIT_MASK(8)
++#define MTRR_CAP_WC		BIT_MASK(10)
++
++#define MTRR_DEF_TYPE_TYPE	GENMASK(7, 0)
++#define MTRR_DEF_TYPE_FE	BIT_MASK(10)
++#define MTRR_DEF_TYPE_E		BIT_MASK(11)
++
++#define MTRR_DEF_TYPE_ENABLE	(MTRR_DEF_TYPE_FE | MTRR_DEF_TYPE_E)
++#define MTRR_DEF_TYPE_DISABLE	~(MTRR_DEF_TYPE_TYPE | MTRR_DEF_TYPE_ENABLE)
++
++#define MTRR_PHYSBASE_TYPE	GENMASK(7, 0)
++#define MTRR_PHYSBASE_RSVD	GENMASK(11, 8)
++
++#define MTRR_PHYSMASK_RSVD	GENMASK(10, 0)
++#define MTRR_PHYSMASK_V		BIT_MASK(11)
++
++struct mtrr_state_type {
++	struct mtrr_var_range var_ranges[MTRR_MAX_VAR_RANGES];
++	mtrr_type fixed_ranges[MTRR_NUM_FIXED_RANGES];
++	unsigned char enabled;
++	bool have_fixed;
++	mtrr_type def_type;
++};
++
+ /*
+  * The following functions are for use by other drivers that cannot use
+  * arch_phys_wc_add and arch_phys_wc_del.
+  */
+ # ifdef CONFIG_MTRR
+ void mtrr_bp_init(void);
++void mtrr_overwrite_state(struct mtrr_var_range *var, unsigned int num_var,
++			  mtrr_type def_type);
+ extern u8 mtrr_type_lookup(u64 addr, u64 end, u8 *uniform);
+ extern void mtrr_save_fixed_ranges(void *);
+ extern void mtrr_save_state(void);
+@@ -48,6 +77,12 @@ void mtrr_disable(void);
+ void mtrr_enable(void);
+ void mtrr_generic_set_state(void);
+ #  else
++static inline void mtrr_overwrite_state(struct mtrr_var_range *var,
++					unsigned int num_var,
++					mtrr_type def_type)
++{
++}
++
+ static inline u8 mtrr_type_lookup(u64 addr, u64 end, u8 *uniform)
+ {
+ 	/*
+@@ -121,7 +156,8 @@ struct mtrr_gentry32 {
+ #endif /* CONFIG_COMPAT */
+ 
+ /* Bit fields for enabled in struct mtrr_state_type */
+-#define MTRR_STATE_MTRR_FIXED_ENABLED	0x01
+-#define MTRR_STATE_MTRR_ENABLED		0x02
++#define MTRR_STATE_SHIFT		10
++#define MTRR_STATE_MTRR_FIXED_ENABLED	(MTRR_DEF_TYPE_FE >> MTRR_STATE_SHIFT)
++#define MTRR_STATE_MTRR_ENABLED		(MTRR_DEF_TYPE_E >> MTRR_STATE_SHIFT)
+ 
+ #endif /* _ASM_X86_MTRR_H */
+diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
+index abf09882f58b6..f1a46500a2753 100644
+--- a/arch/x86/include/asm/perf_event.h
++++ b/arch/x86/include/asm/perf_event.h
+@@ -478,8 +478,10 @@ struct pebs_xmm {
+ 
+ #ifdef CONFIG_X86_LOCAL_APIC
+ extern u32 get_ibs_caps(void);
++extern int forward_event_to_ibs(struct perf_event *event);
+ #else
+ static inline u32 get_ibs_caps(void) { return 0; }
++static inline int forward_event_to_ibs(struct perf_event *event) { return -ENOENT; }
+ #endif
+ 
+ #ifdef CONFIG_PERF_EVENTS
+diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
+index 7929327abe009..a629b1b9f65a6 100644
+--- a/arch/x86/include/asm/pgtable_64.h
++++ b/arch/x86/include/asm/pgtable_64.h
+@@ -237,8 +237,8 @@ static inline void native_pgd_clear(pgd_t *pgd)
+ 
+ #define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val((pte)) })
+ #define __pmd_to_swp_entry(pmd)		((swp_entry_t) { pmd_val((pmd)) })
+-#define __swp_entry_to_pte(x)		((pte_t) { .pte = (x).val })
+-#define __swp_entry_to_pmd(x)		((pmd_t) { .pmd = (x).val })
++#define __swp_entry_to_pte(x)		(__pte((x).val))
++#define __swp_entry_to_pmd(x)		(__pmd((x).val))
+ 
+ extern void cleanup_highmap(void);
+ 
+diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
+index 13dc2a9d23c1e..7ca5c9ec8b52e 100644
+--- a/arch/x86/include/asm/sev.h
++++ b/arch/x86/include/asm/sev.h
+@@ -192,12 +192,12 @@ struct snp_guest_request_ioctl;
+ 
+ void setup_ghcb(void);
+ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
+-					 unsigned int npages);
++					 unsigned long npages);
+ void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
+-					unsigned int npages);
++					unsigned long npages);
+ void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op);
+-void snp_set_memory_shared(unsigned long vaddr, unsigned int npages);
+-void snp_set_memory_private(unsigned long vaddr, unsigned int npages);
++void snp_set_memory_shared(unsigned long vaddr, unsigned long npages);
++void snp_set_memory_private(unsigned long vaddr, unsigned long npages);
+ void snp_set_wakeup_secondary_cpu(void);
+ bool snp_init(struct boot_params *bp);
+ void __init __noreturn snp_abort(void);
+@@ -212,12 +212,12 @@ static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
+ static inline int rmpadjust(unsigned long vaddr, bool rmp_psize, unsigned long attrs) { return 0; }
+ static inline void setup_ghcb(void) { }
+ static inline void __init
+-early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr, unsigned int npages) { }
++early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr, unsigned long npages) { }
+ static inline void __init
+-early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned int npages) { }
++early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned long npages) { }
+ static inline void __init snp_prep_memory(unsigned long paddr, unsigned int sz, enum psc_op op) { }
+-static inline void snp_set_memory_shared(unsigned long vaddr, unsigned int npages) { }
+-static inline void snp_set_memory_private(unsigned long vaddr, unsigned int npages) { }
++static inline void snp_set_memory_shared(unsigned long vaddr, unsigned long npages) { }
++static inline void snp_set_memory_private(unsigned long vaddr, unsigned long npages) { }
+ static inline void snp_set_wakeup_secondary_cpu(void) { }
+ static inline bool snp_init(struct boot_params *bp) { return false; }
+ static inline void snp_abort(void) { }
+diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
+index 88085f369ff6f..1ca9701917c55 100644
+--- a/arch/x86/include/asm/x86_init.h
++++ b/arch/x86/include/asm/x86_init.h
+@@ -150,7 +150,7 @@ struct x86_init_acpi {
+  * @enc_cache_flush_required	Returns true if a cache flush is needed before changing page encryption status
+  */
+ struct x86_guest {
+-	void (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);
++	bool (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);
+ 	bool (*enc_status_change_finish)(unsigned long vaddr, int npages, bool enc);
+ 	bool (*enc_tlb_flush_required)(bool enc);
+ 	bool (*enc_cache_flush_required)(void);
+diff --git a/arch/x86/include/uapi/asm/mtrr.h b/arch/x86/include/uapi/asm/mtrr.h
+index 376563f2bac1f..ab194c8316259 100644
+--- a/arch/x86/include/uapi/asm/mtrr.h
++++ b/arch/x86/include/uapi/asm/mtrr.h
+@@ -81,14 +81,6 @@ typedef __u8 mtrr_type;
+ #define MTRR_NUM_FIXED_RANGES 88
+ #define MTRR_MAX_VAR_RANGES 256
+ 
+-struct mtrr_state_type {
+-	struct mtrr_var_range var_ranges[MTRR_MAX_VAR_RANGES];
+-	mtrr_type fixed_ranges[MTRR_NUM_FIXED_RANGES];
+-	unsigned char enabled;
+-	unsigned char have_fixed;
+-	mtrr_type def_type;
+-};
+-
+ #define MTRRphysBase_MSR(reg) (0x200 + 2 * (reg))
+ #define MTRRphysMask_MSR(reg) (0x200 + 2 * (reg) + 1)
+ 
+diff --git a/arch/x86/kernel/cpu/mtrr/cleanup.c b/arch/x86/kernel/cpu/mtrr/cleanup.c
+index b5f43049fa5f7..ca2d567e729e2 100644
+--- a/arch/x86/kernel/cpu/mtrr/cleanup.c
++++ b/arch/x86/kernel/cpu/mtrr/cleanup.c
+@@ -173,7 +173,7 @@ early_param("mtrr_cleanup_debug", mtrr_cleanup_debug_setup);
+ 
+ static void __init
+ set_var_mtrr(unsigned int reg, unsigned long basek, unsigned long sizek,
+-	     unsigned char type, unsigned int address_bits)
++	     unsigned char type)
+ {
+ 	u32 base_lo, base_hi, mask_lo, mask_hi;
+ 	u64 base, mask;
+@@ -183,7 +183,7 @@ set_var_mtrr(unsigned int reg, unsigned long basek, unsigned long sizek,
+ 		return;
+ 	}
+ 
+-	mask = (1ULL << address_bits) - 1;
++	mask = (1ULL << boot_cpu_data.x86_phys_bits) - 1;
+ 	mask &= ~((((u64)sizek) << 10) - 1);
+ 
+ 	base = ((u64)basek) << 10;
+@@ -209,7 +209,7 @@ save_var_mtrr(unsigned int reg, unsigned long basek, unsigned long sizek,
+ 	range_state[reg].type = type;
+ }
+ 
+-static void __init set_var_mtrr_all(unsigned int address_bits)
++static void __init set_var_mtrr_all(void)
+ {
+ 	unsigned long basek, sizek;
+ 	unsigned char type;
+@@ -220,7 +220,7 @@ static void __init set_var_mtrr_all(unsigned int address_bits)
+ 		sizek = range_state[reg].size_pfn << (PAGE_SHIFT - 10);
+ 		type = range_state[reg].type;
+ 
+-		set_var_mtrr(reg, basek, sizek, type, address_bits);
++		set_var_mtrr(reg, basek, sizek, type);
+ 	}
+ }
+ 
+@@ -680,7 +680,7 @@ static int __init mtrr_search_optimal_index(void)
+ 	return index_good;
+ }
+ 
+-int __init mtrr_cleanup(unsigned address_bits)
++int __init mtrr_cleanup(void)
+ {
+ 	unsigned long x_remove_base, x_remove_size;
+ 	unsigned long base, size, def, dummy;
+@@ -742,7 +742,7 @@ int __init mtrr_cleanup(unsigned address_bits)
+ 		mtrr_print_out_one_result(i);
+ 
+ 		if (!result[i].bad) {
+-			set_var_mtrr_all(address_bits);
++			set_var_mtrr_all();
+ 			pr_debug("New variable MTRRs\n");
+ 			print_out_mtrr_range_state();
+ 			return 1;
+@@ -786,7 +786,7 @@ int __init mtrr_cleanup(unsigned address_bits)
+ 		gran_size = result[i].gran_sizek;
+ 		gran_size <<= 10;
+ 		x86_setup_var_mtrrs(range, nr_range, chunk_size, gran_size);
+-		set_var_mtrr_all(address_bits);
++		set_var_mtrr_all();
+ 		pr_debug("New variable MTRRs\n");
+ 		print_out_mtrr_range_state();
+ 		return 1;
+@@ -802,7 +802,7 @@ int __init mtrr_cleanup(unsigned address_bits)
+ 	return 0;
+ }
+ #else
+-int __init mtrr_cleanup(unsigned address_bits)
++int __init mtrr_cleanup(void)
+ {
+ 	return 0;
+ }
+@@ -890,7 +890,7 @@ int __init mtrr_trim_uncached_memory(unsigned long end_pfn)
+ 		return 0;
+ 
+ 	rdmsr(MSR_MTRRdefType, def, dummy);
+-	def &= 0xff;
++	def &= MTRR_DEF_TYPE_TYPE;
+ 	if (def != MTRR_TYPE_UNCACHABLE)
+ 		return 0;
+ 
+diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
+index ee09d359e08f0..e81d832475a1f 100644
+--- a/arch/x86/kernel/cpu/mtrr/generic.c
++++ b/arch/x86/kernel/cpu/mtrr/generic.c
+@@ -8,10 +8,12 @@
+ #include <linux/init.h>
+ #include <linux/io.h>
+ #include <linux/mm.h>
+-
++#include <linux/cc_platform.h>
+ #include <asm/processor-flags.h>
+ #include <asm/cacheinfo.h>
+ #include <asm/cpufeature.h>
++#include <asm/hypervisor.h>
++#include <asm/mshyperv.h>
+ #include <asm/tlbflush.h>
+ #include <asm/mtrr.h>
+ #include <asm/msr.h>
+@@ -38,6 +40,9 @@ u64 mtrr_tom2;
+ struct mtrr_state_type mtrr_state;
+ EXPORT_SYMBOL_GPL(mtrr_state);
+ 
++/* Reserved bits in the high portion of the MTRRphysBaseN MSR. */
++u32 phys_hi_rsvd;
++
+ /*
+  * BIOS is expected to clear MtrrFixDramModEn bit, see for example
+  * "BIOS and Kernel Developer's Guide for the AMD Athlon 64 and AMD
+@@ -69,10 +74,9 @@ static u64 get_mtrr_size(u64 mask)
+ {
+ 	u64 size;
+ 
+-	mask >>= PAGE_SHIFT;
+-	mask |= size_or_mask;
++	mask |= (u64)phys_hi_rsvd << 32;
+ 	size = -mask;
+-	size <<= PAGE_SHIFT;
++
+ 	return size;
+ }
+ 
+@@ -171,7 +175,7 @@ static u8 mtrr_type_lookup_variable(u64 start, u64 end, u64 *partial_end,
+ 	for (i = 0; i < num_var_ranges; ++i) {
+ 		unsigned short start_state, end_state, inclusive;
+ 
+-		if (!(mtrr_state.var_ranges[i].mask_lo & (1 << 11)))
++		if (!(mtrr_state.var_ranges[i].mask_lo & MTRR_PHYSMASK_V))
+ 			continue;
+ 
+ 		base = (((u64)mtrr_state.var_ranges[i].base_hi) << 32) +
+@@ -223,7 +227,7 @@ static u8 mtrr_type_lookup_variable(u64 start, u64 end, u64 *partial_end,
+ 		if ((start & mask) != (base & mask))
+ 			continue;
+ 
+-		curr_match = mtrr_state.var_ranges[i].base_lo & 0xff;
++		curr_match = mtrr_state.var_ranges[i].base_lo & MTRR_PHYSBASE_TYPE;
+ 		if (prev_match == MTRR_TYPE_INVALID) {
+ 			prev_match = curr_match;
+ 			continue;
+@@ -240,6 +244,62 @@ static u8 mtrr_type_lookup_variable(u64 start, u64 end, u64 *partial_end,
+ 	return mtrr_state.def_type;
+ }
+ 
++/**
++ * mtrr_overwrite_state - set static MTRR state
++ *
++ * Used to set MTRR state via different means (e.g. with data obtained from
++ * a hypervisor).
++ * It is allowed only for special cases when running virtualized and must be called
++ * from the x86_init.hyper.init_platform() hook.  It can be called only once.
++ * The MTRR state can't be changed afterwards.  To ensure that, X86_FEATURE_MTRR
++ * is cleared.
++ */
++void mtrr_overwrite_state(struct mtrr_var_range *var, unsigned int num_var,
++			  mtrr_type def_type)
++{
++	unsigned int i;
++
++	/* Only allowed to be called once before mtrr_bp_init(). */
++	if (WARN_ON_ONCE(mtrr_state_set))
++		return;
++
++	/* Only allowed when running virtualized. */
++	if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR))
++		return;
++
++	/*
++	 * Only allowed for special virtualization cases:
++	 * - when running as Hyper-V, SEV-SNP guest using vTOM
++	 * - when running as Xen PV guest
++	 * - when running as SEV-SNP or TDX guest to avoid unnecessary
++	 *   VMM communication/Virtualization exceptions (#VC, #VE)
++	 */
++	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP) &&
++	    !hv_is_isolation_supported() &&
++	    !cpu_feature_enabled(X86_FEATURE_XENPV) &&
++	    !cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
++		return;
++
++	/* Disable MTRR in order to disable MTRR modifications. */
++	setup_clear_cpu_cap(X86_FEATURE_MTRR);
++
++	if (var) {
++		if (num_var > MTRR_MAX_VAR_RANGES) {
++			pr_warn("Trying to overwrite MTRR state with %u variable entries\n",
++				num_var);
++			num_var = MTRR_MAX_VAR_RANGES;
++		}
++		for (i = 0; i < num_var; i++)
++			mtrr_state.var_ranges[i] = var[i];
++		num_var_ranges = num_var;
++	}
++
++	mtrr_state.def_type = def_type;
++	mtrr_state.enabled |= MTRR_STATE_MTRR_ENABLED;
++
++	mtrr_state_set = 1;
++}
++
+ /**
+  * mtrr_type_lookup - look up memory type in MTRR
+  *
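
The Hyper-V vTOM hunk earlier in this patch is the simplest caller:
mtrr_overwrite_state(NULL, 0, MTRR_TYPE_WRBACK) installs write-back as
the default type with no variable ranges. A hypothetical hook that also
hands over one variable range might look roughly like this (the range
values are made up for illustration):

    /* Hypothetical init_platform() hook; not part of this patch. */
    static void __init example_init_platform(void)
    {
            struct mtrr_var_range var = {
                    .base_lo = 0x00000000 | MTRR_TYPE_UNCACHABLE,
                    .base_hi = 0,
                    .mask_lo = 0xfff00000 | MTRR_PHYSMASK_V, /* ~1 MiB */
                    .mask_hi = 0,
            };

            mtrr_overwrite_state(&var, 1, MTRR_TYPE_WRBACK);
    }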
+@@ -422,10 +482,10 @@ static void __init print_mtrr_state(void)
+ 	}
+ 	pr_debug("MTRR variable ranges %sabled:\n",
+ 		 mtrr_state.enabled & MTRR_STATE_MTRR_ENABLED ? "en" : "dis");
+-	high_width = (__ffs64(size_or_mask) - (32 - PAGE_SHIFT) + 3) / 4;
++	high_width = (boot_cpu_data.x86_phys_bits - (32 - PAGE_SHIFT) + 3) / 4;
+ 
+ 	for (i = 0; i < num_var_ranges; ++i) {
+-		if (mtrr_state.var_ranges[i].mask_lo & (1 << 11))
++		if (mtrr_state.var_ranges[i].mask_lo & MTRR_PHYSMASK_V)
+ 			pr_debug("  %u base %0*X%05X000 mask %0*X%05X000 %s\n",
+ 				 i,
+ 				 high_width,
+@@ -434,7 +494,8 @@ static void __init print_mtrr_state(void)
+ 				 high_width,
+ 				 mtrr_state.var_ranges[i].mask_hi,
+ 				 mtrr_state.var_ranges[i].mask_lo >> 12,
+-				 mtrr_attrib_to_str(mtrr_state.var_ranges[i].base_lo & 0xff));
++				 mtrr_attrib_to_str(mtrr_state.var_ranges[i].base_lo &
++						    MTRR_PHYSBASE_TYPE));
+ 		else
+ 			pr_debug("  %u disabled\n", i);
+ 	}
+@@ -452,7 +513,7 @@ bool __init get_mtrr_state(void)
+ 	vrs = mtrr_state.var_ranges;
+ 
+ 	rdmsr(MSR_MTRRcap, lo, dummy);
+-	mtrr_state.have_fixed = (lo >> 8) & 1;
++	mtrr_state.have_fixed = lo & MTRR_CAP_FIX;
+ 
+ 	for (i = 0; i < num_var_ranges; i++)
+ 		get_mtrr_var_range(i, &vrs[i]);
+@@ -460,8 +521,8 @@ bool __init get_mtrr_state(void)
+ 		get_fixed_ranges(mtrr_state.fixed_ranges);
+ 
+ 	rdmsr(MSR_MTRRdefType, lo, dummy);
+-	mtrr_state.def_type = (lo & 0xff);
+-	mtrr_state.enabled = (lo & 0xc00) >> 10;
++	mtrr_state.def_type = lo & MTRR_DEF_TYPE_TYPE;
++	mtrr_state.enabled = (lo & MTRR_DEF_TYPE_ENABLE) >> MTRR_STATE_SHIFT;
+ 
+ 	if (amd_special_default_mtrr()) {
+ 		unsigned low, high;
+@@ -574,7 +635,7 @@ static void generic_get_mtrr(unsigned int reg, unsigned long *base,
+ 
+ 	rdmsr(MTRRphysMask_MSR(reg), mask_lo, mask_hi);
+ 
+-	if ((mask_lo & 0x800) == 0) {
++	if (!(mask_lo & MTRR_PHYSMASK_V)) {
+ 		/*  Invalid (i.e. free) range */
+ 		*base = 0;
+ 		*size = 0;
+@@ -585,8 +646,8 @@ static void generic_get_mtrr(unsigned int reg, unsigned long *base,
+ 	rdmsr(MTRRphysBase_MSR(reg), base_lo, base_hi);
+ 
+ 	/* Work out the shifted address mask: */
+-	tmp = (u64)mask_hi << (32 - PAGE_SHIFT) | mask_lo >> PAGE_SHIFT;
+-	mask = size_or_mask | tmp;
++	tmp = (u64)mask_hi << 32 | (mask_lo & PAGE_MASK);
++	mask = (u64)phys_hi_rsvd << 32 | tmp;
+ 
+ 	/* Expand tmp with high bits to all 1s: */
+ 	hi = fls64(tmp);
+@@ -604,9 +665,9 @@ static void generic_get_mtrr(unsigned int reg, unsigned long *base,
+ 	 * This works correctly if size is a power of two, i.e. a
+ 	 * contiguous range:
+ 	 */
+-	*size = -mask;
++	*size = -mask >> PAGE_SHIFT;
+ 	*base = (u64)base_hi << (32 - PAGE_SHIFT) | base_lo >> PAGE_SHIFT;
+-	*type = base_lo & 0xff;
++	*type = base_lo & MTRR_PHYSBASE_TYPE;
+ 
+ out_put_cpu:
+ 	put_cpu();
+@@ -644,9 +705,8 @@ static bool set_mtrr_var_ranges(unsigned int index, struct mtrr_var_range *vr)
+ 	bool changed = false;
+ 
+ 	rdmsr(MTRRphysBase_MSR(index), lo, hi);
+-	if ((vr->base_lo & 0xfffff0ffUL) != (lo & 0xfffff0ffUL)
+-	    || (vr->base_hi & (size_and_mask >> (32 - PAGE_SHIFT))) !=
+-		(hi & (size_and_mask >> (32 - PAGE_SHIFT)))) {
++	if ((vr->base_lo & ~MTRR_PHYSBASE_RSVD) != (lo & ~MTRR_PHYSBASE_RSVD)
++	    || (vr->base_hi & ~phys_hi_rsvd) != (hi & ~phys_hi_rsvd)) {
+ 
+ 		mtrr_wrmsr(MTRRphysBase_MSR(index), vr->base_lo, vr->base_hi);
+ 		changed = true;
+@@ -654,9 +714,8 @@ static bool set_mtrr_var_ranges(unsigned int index, struct mtrr_var_range *vr)
+ 
+ 	rdmsr(MTRRphysMask_MSR(index), lo, hi);
+ 
+-	if ((vr->mask_lo & 0xfffff800UL) != (lo & 0xfffff800UL)
+-	    || (vr->mask_hi & (size_and_mask >> (32 - PAGE_SHIFT))) !=
+-		(hi & (size_and_mask >> (32 - PAGE_SHIFT)))) {
++	if ((vr->mask_lo & ~MTRR_PHYSMASK_RSVD) != (lo & ~MTRR_PHYSMASK_RSVD)
++	    || (vr->mask_hi & ~phys_hi_rsvd) != (hi & ~phys_hi_rsvd)) {
+ 		mtrr_wrmsr(MTRRphysMask_MSR(index), vr->mask_lo, vr->mask_hi);
+ 		changed = true;
+ 	}
+@@ -691,11 +750,12 @@ static unsigned long set_mtrr_state(void)
+ 	 * Set_mtrr_restore restores the old value of MTRRdefType,
+ 	 * so to set it we fiddle with the saved value:
+ 	 */
+-	if ((deftype_lo & 0xff) != mtrr_state.def_type
+-	    || ((deftype_lo & 0xc00) >> 10) != mtrr_state.enabled) {
++	if ((deftype_lo & MTRR_DEF_TYPE_TYPE) != mtrr_state.def_type ||
++	    ((deftype_lo & MTRR_DEF_TYPE_ENABLE) >> MTRR_STATE_SHIFT) != mtrr_state.enabled) {
+ 
+-		deftype_lo = (deftype_lo & ~0xcff) | mtrr_state.def_type |
+-			     (mtrr_state.enabled << 10);
++		deftype_lo = (deftype_lo & MTRR_DEF_TYPE_DISABLE) |
++			     mtrr_state.def_type |
++			     (mtrr_state.enabled << MTRR_STATE_SHIFT);
+ 		change_mask |= MTRR_CHANGE_MASK_DEFTYPE;
+ 	}
+ 
+@@ -708,7 +768,7 @@ void mtrr_disable(void)
+ 	rdmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
+ 
+ 	/* Disable MTRRs, and set the default type to uncached */
+-	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo & ~0xcff, deftype_hi);
++	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo & MTRR_DEF_TYPE_DISABLE, deftype_hi);
+ }
+ 
+ void mtrr_enable(void)
+@@ -762,9 +822,9 @@ static void generic_set_mtrr(unsigned int reg, unsigned long base,
+ 		memset(vr, 0, sizeof(struct mtrr_var_range));
+ 	} else {
+ 		vr->base_lo = base << PAGE_SHIFT | type;
+-		vr->base_hi = (base & size_and_mask) >> (32 - PAGE_SHIFT);
+-		vr->mask_lo = -size << PAGE_SHIFT | 0x800;
+-		vr->mask_hi = (-size & size_and_mask) >> (32 - PAGE_SHIFT);
++		vr->base_hi = (base >> (32 - PAGE_SHIFT)) & ~phys_hi_rsvd;
++		vr->mask_lo = -size << PAGE_SHIFT | MTRR_PHYSMASK_V;
++		vr->mask_hi = (-size >> (32 - PAGE_SHIFT)) & ~phys_hi_rsvd;
+ 
+ 		mtrr_wrmsr(MTRRphysBase_MSR(reg), vr->base_lo, vr->base_hi);
+ 		mtrr_wrmsr(MTRRphysMask_MSR(reg), vr->mask_lo, vr->mask_hi);
+@@ -817,7 +877,7 @@ static int generic_have_wrcomb(void)
+ {
+ 	unsigned long config, dummy;
+ 	rdmsr(MSR_MTRRcap, config, dummy);
+-	return config & (1 << 10);
++	return config & MTRR_CAP_WC;
+ }
+ 
+ int positive_have_wrcomb(void)
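
The generic.c hunks above replace the old size_or_mask/size_and_mask arithmetic with a byte-granular mask built from phys_hi_rsvd, shifting down to pages only at the very end. As a rough standalone sketch of that decode (a hosted C program, not the kernel code itself; the 36-bit address width, PAGE_SHIFT of 12 and the sample PhysMask register halves are all assumptions for illustration):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_MASK (~((1ULL << PAGE_SHIFT) - 1))

int main(void)
{
	unsigned int phys_bits = 36;                 /* assumed CPU width */
	uint32_t phys_hi_rsvd = ~0u << (phys_bits - 32);

	uint32_t mask_hi = 0xf;                      /* mask bits 35:32 */
	uint32_t mask_lo = 0xc0000800;               /* valid bit (11) set */

	/* Keep the mask in bytes, padding reserved high bits with 1s. */
	uint64_t tmp  = (uint64_t)mask_hi << 32 | (mask_lo & PAGE_MASK);
	uint64_t mask = (uint64_t)phys_hi_rsvd << 32 | tmp;

	/* -mask is the covered size in bytes; shift converts to pages. */
	uint64_t size_pages = (0 - mask) >> PAGE_SHIFT;
	printf("region size: %llu pages (%llu MiB)\n",
	       (unsigned long long)size_pages,
	       (unsigned long long)(size_pages >> (20 - PAGE_SHIFT)));
	return 0;
}

Compiled and run, this reports 262144 pages (1024 MiB): -mask >> PAGE_SHIFT recovers a 1 GiB range from a mask that covers everything above the 1 GiB boundary.
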
+diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.c b/arch/x86/kernel/cpu/mtrr/mtrr.c
+index 783f3210d5827..be35a0b09604d 100644
+--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
++++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
+@@ -67,8 +67,6 @@ static bool mtrr_enabled(void)
+ unsigned int mtrr_usage_table[MTRR_MAX_VAR_RANGES];
+ static DEFINE_MUTEX(mtrr_mutex);
+ 
+-u64 size_or_mask, size_and_mask;
+-
+ const struct mtrr_ops *mtrr_if;
+ 
+ /*  Returns non-zero if we have the write-combining memory type  */
+@@ -117,7 +115,7 @@ static void __init set_num_var_ranges(bool use_generic)
+ 	else if (is_cpu(CYRIX) || is_cpu(CENTAUR))
+ 		config = 8;
+ 
+-	num_var_ranges = config & 0xff;
++	num_var_ranges = config & MTRR_CAP_VCNT;
+ }
+ 
+ static void __init init_table(void)
+@@ -619,77 +617,46 @@ static struct syscore_ops mtrr_syscore_ops = {
+ 
+ int __initdata changed_by_mtrr_cleanup;
+ 
+-#define SIZE_OR_MASK_BITS(n)  (~((1ULL << ((n) - PAGE_SHIFT)) - 1))
+ /**
+- * mtrr_bp_init - initialize mtrrs on the boot CPU
++ * mtrr_bp_init - initialize MTRRs on the boot CPU
+  *
+  * This needs to be called early; before any of the other CPUs are
+  * initialized (i.e. before smp_init()).
+- *
+  */
+ void __init mtrr_bp_init(void)
+ {
++	bool generic_mtrrs = cpu_feature_enabled(X86_FEATURE_MTRR);
+ 	const char *why = "(not available)";
+-	u32 phys_addr;
+ 
+-	phys_addr = 32;
+-
+-	if (boot_cpu_has(X86_FEATURE_MTRR)) {
+-		mtrr_if = &generic_mtrr_ops;
+-		size_or_mask = SIZE_OR_MASK_BITS(36);
+-		size_and_mask = 0x00f00000;
+-		phys_addr = 36;
++	phys_hi_rsvd = GENMASK(31, boot_cpu_data.x86_phys_bits - 32);

+ 
++	if (!generic_mtrrs && mtrr_state.enabled) {
+ 		/*
+-		 * This is an AMD specific MSR, but we assume(hope?) that
+-		 * Intel will implement it too when they extend the address
+-		 * bus of the Xeon.
++		 * Software overwrite of MTRR state, only for the generic case.
++		 * Note that X86_FEATURE_MTRR has been reset in this case.
+ 		 */
+-		if (cpuid_eax(0x80000000) >= 0x80000008) {
+-			phys_addr = cpuid_eax(0x80000008) & 0xff;
+-			/* CPUID workaround for Intel 0F33/0F34 CPU */
+-			if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
+-			    boot_cpu_data.x86 == 0xF &&
+-			    boot_cpu_data.x86_model == 0x3 &&
+-			    (boot_cpu_data.x86_stepping == 0x3 ||
+-			     boot_cpu_data.x86_stepping == 0x4))
+-				phys_addr = 36;
+-
+-			size_or_mask = SIZE_OR_MASK_BITS(phys_addr);
+-			size_and_mask = ~size_or_mask & 0xfffff00000ULL;
+-		} else if (boot_cpu_data.x86_vendor == X86_VENDOR_CENTAUR &&
+-			   boot_cpu_data.x86 == 6) {
+-			/*
+-			 * VIA C* family have Intel style MTRRs,
+-			 * but don't support PAE
+-			 */
+-			size_or_mask = SIZE_OR_MASK_BITS(32);
+-			size_and_mask = 0;
+-			phys_addr = 32;
+-		}
++		init_table();
++		pr_info("MTRRs set to read-only\n");
++
++		return;
++	}
++
++	if (generic_mtrrs) {
++		mtrr_if = &generic_mtrr_ops;
+ 	} else {
+ 		switch (boot_cpu_data.x86_vendor) {
+ 		case X86_VENDOR_AMD:
+-			if (cpu_feature_enabled(X86_FEATURE_K6_MTRR)) {
+-				/* Pre-Athlon (K6) AMD CPU MTRRs */
++			/* Pre-Athlon (K6) AMD CPU MTRRs */
++			if (cpu_feature_enabled(X86_FEATURE_K6_MTRR))
+ 				mtrr_if = &amd_mtrr_ops;
+-				size_or_mask = SIZE_OR_MASK_BITS(32);
+-				size_and_mask = 0;
+-			}
+ 			break;
+ 		case X86_VENDOR_CENTAUR:
+-			if (cpu_feature_enabled(X86_FEATURE_CENTAUR_MCR)) {
++			if (cpu_feature_enabled(X86_FEATURE_CENTAUR_MCR))
+ 				mtrr_if = &centaur_mtrr_ops;
+-				size_or_mask = SIZE_OR_MASK_BITS(32);
+-				size_and_mask = 0;
+-			}
+ 			break;
+ 		case X86_VENDOR_CYRIX:
+-			if (cpu_feature_enabled(X86_FEATURE_CYRIX_ARR)) {
++			if (cpu_feature_enabled(X86_FEATURE_CYRIX_ARR))
+ 				mtrr_if = &cyrix_mtrr_ops;
+-				size_or_mask = SIZE_OR_MASK_BITS(32);
+-				size_and_mask = 0;
+-			}
+ 			break;
+ 		default:
+ 			break;
+@@ -703,7 +670,7 @@ void __init mtrr_bp_init(void)
+ 			/* BIOS may override */
+ 			if (get_mtrr_state()) {
+ 				memory_caching_control |= CACHE_MTRR;
+-				changed_by_mtrr_cleanup = mtrr_cleanup(phys_addr);
++				changed_by_mtrr_cleanup = mtrr_cleanup();
+ 			} else {
+ 				mtrr_if = NULL;
+ 				why = "by BIOS";
+diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.h b/arch/x86/kernel/cpu/mtrr/mtrr.h
+index 02eb5871492d0..59e8fb26bf9dd 100644
+--- a/arch/x86/kernel/cpu/mtrr/mtrr.h
++++ b/arch/x86/kernel/cpu/mtrr/mtrr.h
+@@ -51,7 +51,6 @@ void fill_mtrr_var_range(unsigned int index,
+ 		u32 base_lo, u32 base_hi, u32 mask_lo, u32 mask_hi);
+ bool get_mtrr_state(void);
+ 
+-extern u64 size_or_mask, size_and_mask;
+ extern const struct mtrr_ops *mtrr_if;
+ 
+ #define is_cpu(vnd)	(mtrr_if && mtrr_if->vendor == X86_VENDOR_##vnd)
+@@ -59,6 +58,7 @@ extern const struct mtrr_ops *mtrr_if;
+ extern unsigned int num_var_ranges;
+ extern u64 mtrr_tom2;
+ extern struct mtrr_state_type mtrr_state;
++extern u32 phys_hi_rsvd;
+ 
+ void mtrr_state_warn(void);
+ const char *mtrr_attrib_to_str(int x);
+@@ -70,4 +70,4 @@ extern const struct mtrr_ops cyrix_mtrr_ops;
+ extern const struct mtrr_ops centaur_mtrr_ops;
+ 
+ extern int changed_by_mtrr_cleanup;
+-extern int mtrr_cleanup(unsigned address_bits);
++extern int mtrr_cleanup(void);
+diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+index 6ad33f355861f..61cdd9b1bb6d8 100644
+--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
++++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+@@ -726,11 +726,15 @@ unlock:
+ static void show_rdt_tasks(struct rdtgroup *r, struct seq_file *s)
+ {
+ 	struct task_struct *p, *t;
++	pid_t pid;
+ 
+ 	rcu_read_lock();
+ 	for_each_process_thread(p, t) {
+-		if (is_closid_match(t, r) || is_rmid_match(t, r))
+-			seq_printf(s, "%d\n", t->pid);
++		if (is_closid_match(t, r) || is_rmid_match(t, r)) {
++			pid = task_pid_vnr(t);
++			if (pid)
++				seq_printf(s, "%d\n", pid);
++		}
+ 	}
+ 	rcu_read_unlock();
+ }
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 16babff771bdf..0cccfeb67c3ad 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -1037,6 +1037,8 @@ void __init setup_arch(char **cmdline_p)
+ 	/*
+ 	 * VMware detection requires dmi to be available, so this
+ 	 * needs to be done after dmi_setup(), for the boot CPU.
++	 * For some guest types (Xen PV, SEV-SNP, TDX) it is required to be
++	 * called before cache_bp_init() for setting up MTRR state.
+ 	 */
+ 	init_hypervisor_platform();
+ 
+diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
+index b031244d6d2df..108bbae59c35a 100644
+--- a/arch/x86/kernel/sev.c
++++ b/arch/x86/kernel/sev.c
+@@ -645,7 +645,7 @@ static u64 __init get_jump_table_addr(void)
+ 	return ret;
+ }
+ 
+-static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool validate)
++static void pvalidate_pages(unsigned long vaddr, unsigned long npages, bool validate)
+ {
+ 	unsigned long vaddr_end;
+ 	int rc;
+@@ -662,7 +662,7 @@ static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool valid
+ 	}
+ }
+ 
+-static void __init early_set_pages_state(unsigned long paddr, unsigned int npages, enum psc_op op)
++static void __init early_set_pages_state(unsigned long paddr, unsigned long npages, enum psc_op op)
+ {
+ 	unsigned long paddr_end;
+ 	u64 val;
+@@ -701,7 +701,7 @@ e_term:
+ }
+ 
+ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
+-					 unsigned int npages)
++					 unsigned long npages)
+ {
+ 	/*
+ 	 * This can be invoked in early boot while running identity mapped, so
+@@ -723,7 +723,7 @@ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
+ }
+ 
+ void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
+-					unsigned int npages)
++					unsigned long npages)
+ {
+ 	/*
+ 	 * This can be invoked in early boot while running identity mapped, so
+@@ -879,7 +879,7 @@ static void __set_pages_state(struct snp_psc_desc *data, unsigned long vaddr,
+ 		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+ }
+ 
+-static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
++static void set_pages_state(unsigned long vaddr, unsigned long npages, int op)
+ {
+ 	unsigned long vaddr_end, next_vaddr;
+ 	struct snp_psc_desc *desc;
+@@ -904,7 +904,7 @@ static void set_pages_state(unsigned long vaddr, unsigned int npages, int op)
+ 	kfree(desc);
+ }
+ 
+-void snp_set_memory_shared(unsigned long vaddr, unsigned int npages)
++void snp_set_memory_shared(unsigned long vaddr, unsigned long npages)
+ {
+ 	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
+ 		return;
+@@ -914,7 +914,7 @@ void snp_set_memory_shared(unsigned long vaddr, unsigned int npages)
+ 	set_pages_state(vaddr, npages, SNP_PAGE_STATE_SHARED);
+ }
+ 
+-void snp_set_memory_private(unsigned long vaddr, unsigned int npages)
++void snp_set_memory_private(unsigned long vaddr, unsigned long npages)
+ {
+ 	if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
+ 		return;
+diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
+index d82f4fa2f1bfe..f230d4d7d8eb4 100644
+--- a/arch/x86/kernel/x86_init.c
++++ b/arch/x86/kernel/x86_init.c
+@@ -130,7 +130,7 @@ struct x86_cpuinit_ops x86_cpuinit = {
+ 
+ static void default_nmi_init(void) { };
+ 
+-static void enc_status_change_prepare_noop(unsigned long vaddr, int npages, bool enc) { }
++static bool enc_status_change_prepare_noop(unsigned long vaddr, int npages, bool enc) { return true; }
+ static bool enc_status_change_finish_noop(unsigned long vaddr, int npages, bool enc) { return false; }
+ static bool enc_tlb_flush_required_noop(bool enc) { return false; }
+ static bool enc_cache_flush_required_noop(void) { return false; }
+diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
+index e0b51c09109f6..4f95c449a406e 100644
+--- a/arch/x86/mm/mem_encrypt_amd.c
++++ b/arch/x86/mm/mem_encrypt_amd.c
+@@ -319,7 +319,7 @@ static void enc_dec_hypercall(unsigned long vaddr, int npages, bool enc)
+ #endif
+ }
+ 
+-static void amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc)
++static bool amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc)
+ {
+ 	/*
+ 	 * To maintain the security guarantees of SEV-SNP guests, make sure
+@@ -327,6 +327,8 @@ static void amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool
+ 	 */
+ 	if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP) && !enc)
+ 		snp_set_memory_shared(vaddr, npages);
++
++	return true;
+ }
+ 
+ /* Return true unconditionally: return value doesn't matter for the SEV side */
+diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
+index 7159cf7876130..b8f48ebe753c7 100644
+--- a/arch/x86/mm/pat/set_memory.c
++++ b/arch/x86/mm/pat/set_memory.c
+@@ -2151,7 +2151,8 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
+ 		cpa_flush(&cpa, x86_platform.guest.enc_cache_flush_required());
+ 
+ 	/* Notify hypervisor that we are about to set/clr encryption attribute. */
+-	x86_platform.guest.enc_status_change_prepare(addr, numpages, enc);
++	if (!x86_platform.guest.enc_status_change_prepare(addr, numpages, enc))
++		return -EIO;
+ 
+ 	ret = __change_page_attr_set_clr(&cpa, 1);
+ 
+diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
+index 232acf418cfbe..77f7ac3668cb4 100644
+--- a/arch/x86/platform/efi/efi_64.c
++++ b/arch/x86/platform/efi/efi_64.c
+@@ -853,9 +853,9 @@ efi_set_virtual_address_map(unsigned long memory_map_size,
+ 
+ 	/* Disable interrupts around EFI calls: */
+ 	local_irq_save(flags);
+-	status = efi_call(efi.runtime->set_virtual_address_map,
+-			  memory_map_size, descriptor_size,
+-			  descriptor_version, virtual_map);
++	status = arch_efi_call_virt(efi.runtime, set_virtual_address_map,
++				    memory_map_size, descriptor_size,
++				    descriptor_version, virtual_map);
+ 	local_irq_restore(flags);
+ 
+ 	efi_fpu_end();
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index 093b78c8bbec0..8732b85d56505 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -68,6 +68,7 @@
+ #include <asm/reboot.h>
+ #include <asm/hypervisor.h>
+ #include <asm/mach_traps.h>
++#include <asm/mtrr.h>
+ #include <asm/mwait.h>
+ #include <asm/pci_x86.h>
+ #include <asm/cpu.h>
+@@ -119,6 +120,54 @@ static int __init parse_xen_msr_safe(char *str)
+ }
+ early_param("xen_msr_safe", parse_xen_msr_safe);
+ 
++/* Get MTRR settings from Xen and put them into mtrr_state. */
++static void __init xen_set_mtrr_data(void)
++{
++#ifdef CONFIG_MTRR
++	struct xen_platform_op op = {
++		.cmd = XENPF_read_memtype,
++		.interface_version = XENPF_INTERFACE_VERSION,
++	};
++	unsigned int reg;
++	unsigned long mask;
++	uint32_t eax, width;
++	static struct mtrr_var_range var[MTRR_MAX_VAR_RANGES] __initdata;
++
++	/* Get physical address width (only 64-bit cpus supported). */
++	width = 36;
++	eax = cpuid_eax(0x80000000);
++	if ((eax >> 16) == 0x8000 && eax >= 0x80000008) {
++		eax = cpuid_eax(0x80000008);
++		width = eax & 0xff;
++	}
++
++	for (reg = 0; reg < MTRR_MAX_VAR_RANGES; reg++) {
++		op.u.read_memtype.reg = reg;
++		if (HYPERVISOR_platform_op(&op))
++			break;
++
++		/*
++		 * Only called in dom0, which has all RAM PFNs mapped at
++		 * RAM MFNs, and all PCI space etc. is identity mapped.
++		 * This means we can treat MFN == PFN regarding MTRR settings.
++		 */
++		var[reg].base_lo = op.u.read_memtype.type;
++		var[reg].base_lo |= op.u.read_memtype.mfn << PAGE_SHIFT;
++		var[reg].base_hi = op.u.read_memtype.mfn >> (32 - PAGE_SHIFT);
++		mask = ~((op.u.read_memtype.nr_mfns << PAGE_SHIFT) - 1);
++		mask &= (1UL << width) - 1;
++		if (mask)
++			mask |= MTRR_PHYSMASK_V;
++		var[reg].mask_lo = mask;
++		var[reg].mask_hi = mask >> 32;
++	}
++
++	/* Only overwrite MTRR state if any MTRR could be read from Xen. */
++	if (reg)
++		mtrr_overwrite_state(var, reg, MTRR_TYPE_UNCACHABLE);
++#endif
++}
++
+ static void __init xen_pv_init_platform(void)
+ {
+ 	/* PV guests can't operate virtio devices without grants. */
+@@ -135,6 +184,9 @@ static void __init xen_pv_init_platform(void)
+ 
+ 	/* pvclock is in shared info area */
+ 	xen_init_time_ops();
++
++	if (xen_initial_domain())
++		xen_set_mtrr_data();
+ }
+ 
+ static void __init xen_pv_guest_late_init(void)
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index dce1548a7a0c3..fc49be622e05b 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -624,8 +624,13 @@ static int blkcg_reset_stats(struct cgroup_subsys_state *css,
+ 			struct blkg_iostat_set *bis =
+ 				per_cpu_ptr(blkg->iostat_cpu, cpu);
+ 			memset(bis, 0, sizeof(*bis));
++
++			/* Re-initialize the cleared blkg_iostat_set */
++			u64_stats_init(&bis->sync);
++			bis->blkg = blkg;
+ 		}
+ 		memset(&blkg->iostat, 0, sizeof(blkg->iostat));
++		u64_stats_init(&blkg->iostat.sync);
+ 
+ 		for (i = 0; i < BLKCG_MAX_POLS; i++) {
+ 			struct blkcg_policy *pol = blkcg_policy[i];
+@@ -762,6 +767,13 @@ int blkg_conf_open_bdev(struct blkg_conf_ctx *ctx)
+ 		return -ENODEV;
+ 	}
+ 
++	mutex_lock(&bdev->bd_queue->rq_qos_mutex);
++	if (!disk_live(bdev->bd_disk)) {
++		blkdev_put_no_open(bdev);
++		mutex_unlock(&bdev->bd_queue->rq_qos_mutex);
++		return -ENODEV;
++	}
++
+ 	ctx->body = input;
+ 	ctx->bdev = bdev;
+ 	return 0;
+@@ -906,6 +918,7 @@ EXPORT_SYMBOL_GPL(blkg_conf_prep);
+  */
+ void blkg_conf_exit(struct blkg_conf_ctx *ctx)
+ 	__releases(&ctx->bdev->bd_queue->queue_lock)
++	__releases(&ctx->bdev->bd_queue->rq_qos_mutex)
+ {
+ 	if (ctx->blkg) {
+ 		spin_unlock_irq(&bdev_get_queue(ctx->bdev)->queue_lock);
+@@ -913,6 +926,7 @@ void blkg_conf_exit(struct blkg_conf_ctx *ctx)
+ 	}
+ 
+ 	if (ctx->bdev) {
++		mutex_unlock(&ctx->bdev->bd_queue->rq_qos_mutex);
+ 		blkdev_put_no_open(ctx->bdev);
+ 		ctx->body = NULL;
+ 		ctx->bdev = NULL;
+@@ -2072,6 +2086,9 @@ void blk_cgroup_bio_start(struct bio *bio)
+ 	struct blkg_iostat_set *bis;
+ 	unsigned long flags;
+ 
++	if (!cgroup_subsys_on_dfl(io_cgrp_subsys))
++		return;
++
+ 	/* Root-level stats are sourced from system-wide IO stats */
+ 	if (!cgroup_parent(blkcg->css.cgroup))
+ 		return;
+@@ -2102,8 +2119,7 @@ void blk_cgroup_bio_start(struct bio *bio)
+ 	}
+ 
+ 	u64_stats_update_end_irqrestore(&bis->sync, flags);
+-	if (cgroup_subsys_on_dfl(io_cgrp_subsys))
+-		cgroup_rstat_updated(blkcg->css.cgroup, cpu);
++	cgroup_rstat_updated(blkcg->css.cgroup, cpu);
+ 	put_cpu();
+ }
+ 
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 1da77e7d62894..3fc68b9444791 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -420,6 +420,7 @@ struct request_queue *blk_alloc_queue(int node_id)
+ 	mutex_init(&q->debugfs_mutex);
+ 	mutex_init(&q->sysfs_lock);
+ 	mutex_init(&q->sysfs_dir_lock);
++	mutex_init(&q->rq_qos_mutex);
+ 	spin_lock_init(&q->queue_lock);
+ 
+ 	init_waitqueue_head(&q->mq_freeze_wq);
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 285ced3467abb..6084a9519883e 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -2455,6 +2455,7 @@ static u64 adjust_inuse_and_calc_cost(struct ioc_gq *iocg, u64 vtime,
+ 	u32 hwi, adj_step;
+ 	s64 margin;
+ 	u64 cost, new_inuse;
++	unsigned long flags;
+ 
+ 	current_hweight(iocg, NULL, &hwi);
+ 	old_hwi = hwi;
+@@ -2473,11 +2474,11 @@ static u64 adjust_inuse_and_calc_cost(struct ioc_gq *iocg, u64 vtime,
+ 	    iocg->inuse == iocg->active)
+ 		return cost;
+ 
+-	spin_lock_irq(&ioc->lock);
++	spin_lock_irqsave(&ioc->lock, flags);
+ 
+ 	/* we own inuse only when @iocg is in the normal active state */
+ 	if (iocg->abs_vdebt || list_empty(&iocg->active_list)) {
+-		spin_unlock_irq(&ioc->lock);
++		spin_unlock_irqrestore(&ioc->lock, flags);
+ 		return cost;
+ 	}
+ 
+@@ -2498,7 +2499,7 @@ static u64 adjust_inuse_and_calc_cost(struct ioc_gq *iocg, u64 vtime,
+ 	} while (time_after64(vtime + cost, now->vnow) &&
+ 		 iocg->inuse != iocg->active);
+ 
+-	spin_unlock_irq(&ioc->lock);
++	spin_unlock_irqrestore(&ioc->lock, flags);
+ 
+ 	TRACE_IOCG_PATH(inuse_adjust, iocg, now,
+ 			old_inuse, iocg->inuse, old_hwi, hwi);
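
The blk-iocost change swaps spin_lock_irq() for spin_lock_irqsave() because adjust_inuse_and_calc_cost() can now be reached with interrupts already disabled; a plain _irq unlock would turn them back on behind the caller's back. A toy model of the difference (ordinary C, no real locking or interrupts; every name here is illustrative only):

#include <stdio.h>
#include <stdbool.h>

/* Toy model: the "IRQ state" is one flag; there is no real lock. */
static bool irqs_on = true;

static bool lock_irqsave(void) { bool f = irqs_on; irqs_on = false; return f; }
static void unlock_irqrestore(bool f) { if (f) irqs_on = true; }
static void unlock_irq(void) { irqs_on = true; /* unconditional */ }

int main(void)
{
	irqs_on = false;	/* caller already disabled interrupts */

	bool flags = lock_irqsave();
	unlock_irqrestore(flags);
	printf("irqsave/irqrestore leaves irqs %s (correct)\n",
	       irqs_on ? "on" : "off");

	lock_irqsave();		/* same situation... */
	unlock_irq();		/* ...but a plain _irq unlock */
	printf("lock_irq/unlock_irq leaves irqs %s (bug)\n",
	       irqs_on ? "on" : "off");
	return 0;
}

The first pair leaves interrupts off, as the caller expects; the second silently re-enables them.
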
+diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
+index d23a8554ec4ae..7851e149d365f 100644
+--- a/block/blk-mq-debugfs.c
++++ b/block/blk-mq-debugfs.c
+@@ -399,7 +399,7 @@ static void blk_mq_debugfs_tags_show(struct seq_file *m,
+ 	seq_printf(m, "nr_tags=%u\n", tags->nr_tags);
+ 	seq_printf(m, "nr_reserved_tags=%u\n", tags->nr_reserved_tags);
+ 	seq_printf(m, "active_queues=%d\n",
+-		   atomic_read(&tags->active_queues));
++		   READ_ONCE(tags->active_queues));
+ 
+ 	seq_puts(m, "\nbitmap_tags:\n");
+ 	sbitmap_queue_show(&tags->bitmap_tags, m);
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index dfd81cab57888..cc57e2dd9a0bb 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -38,6 +38,7 @@ static void blk_mq_update_wake_batch(struct blk_mq_tags *tags,
+ void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
+ {
+ 	unsigned int users;
++	struct blk_mq_tags *tags = hctx->tags;
+ 
+ 	/*
+ 	 * calling test_bit() prior to test_and_set_bit() is intentional,
+@@ -55,9 +56,11 @@ void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
+ 			return;
+ 	}
+ 
+-	users = atomic_inc_return(&hctx->tags->active_queues);
+-
+-	blk_mq_update_wake_batch(hctx->tags, users);
++	spin_lock_irq(&tags->lock);
++	users = tags->active_queues + 1;
++	WRITE_ONCE(tags->active_queues, users);
++	blk_mq_update_wake_batch(tags, users);
++	spin_unlock_irq(&tags->lock);
+ }
+ 
+ /*
+@@ -90,9 +93,11 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
+ 			return;
+ 	}
+ 
+-	users = atomic_dec_return(&tags->active_queues);
+-
++	spin_lock_irq(&tags->lock);
++	users = tags->active_queues - 1;
++	WRITE_ONCE(tags->active_queues, users);
+ 	blk_mq_update_wake_batch(tags, users);
++	spin_unlock_irq(&tags->lock);
+ 
+ 	blk_mq_tag_wakeup_all(tags, false);
+ }
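
The blk-mq-tag.c hunks turn active_queues from an atomic_t into a plain integer updated under tags->lock, with WRITE_ONCE()/READ_ONCE() annotating the lockless readers, so the counter and the wake batch derived from it change as one unit. A compact userspace sketch of the pattern (pthreads stand in for the spinlock; the batch formula is a made-up stand-in, not the kernel's):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int active_queues;	/* written only under lock */
static unsigned int wake_batch;		/* derived from active_queues */

static void tag_busy(void)
{
	pthread_mutex_lock(&lock);
	unsigned int users = active_queues + 1;
	active_queues = users;			/* WRITE_ONCE() upstream */
	wake_batch = users ? 64 / users : 64;	/* stand-in batch math */
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	tag_busy();
	tag_busy();
	/* lockless readers would use READ_ONCE(active_queues) */
	printf("users=%u wake_batch=%u\n", active_queues, wake_batch);
	return 0;
}

A bare atomic_inc_return() could not keep the counter and the batch recalculation atomic with respect to each other, which is the race the lock closes.
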
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 850bfb844ed2f..b9f4546139894 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2711,6 +2711,7 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
+ 	struct request *requeue_list = NULL;
+ 	struct request **requeue_lastp = &requeue_list;
+ 	unsigned int depth = 0;
++	bool is_passthrough = false;
+ 	LIST_HEAD(list);
+ 
+ 	do {
+@@ -2719,7 +2720,9 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
+ 		if (!this_hctx) {
+ 			this_hctx = rq->mq_hctx;
+ 			this_ctx = rq->mq_ctx;
+-		} else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx) {
++			is_passthrough = blk_rq_is_passthrough(rq);
++		} else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx ||
++			   is_passthrough != blk_rq_is_passthrough(rq)) {
+ 			rq_list_add_tail(&requeue_lastp, rq);
+ 			continue;
+ 		}
+@@ -2731,7 +2734,13 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
+ 	trace_block_unplug(this_hctx->queue, depth, !from_sched);
+ 
+ 	percpu_ref_get(&this_hctx->queue->q_usage_counter);
+-	if (this_hctx->queue->elevator) {
++	/* passthrough requests should never be issued to the I/O scheduler */
++	if (is_passthrough) {
++		spin_lock(&this_hctx->lock);
++		list_splice_tail_init(&list, &this_hctx->dispatch);
++		spin_unlock(&this_hctx->lock);
++		blk_mq_run_hw_queue(this_hctx, from_sched);
++	} else if (this_hctx->queue->elevator) {
+ 		this_hctx->queue->elevator->type->ops.insert_requests(this_hctx,
+ 				&list, 0);
+ 		blk_mq_run_hw_queue(this_hctx, from_sched);
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index e876584d35163..890fef9796bf9 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -417,8 +417,7 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
+ 			return true;
+ 	}
+ 
+-	users = atomic_read(&hctx->tags->active_queues);
+-
++	users = READ_ONCE(hctx->tags->active_queues);
+ 	if (!users)
+ 		return true;
+ 
+diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
+index d8cc820a365e3..167be74df4eec 100644
+--- a/block/blk-rq-qos.c
++++ b/block/blk-rq-qos.c
+@@ -288,11 +288,13 @@ void rq_qos_wait(struct rq_wait *rqw, void *private_data,
+ 
+ void rq_qos_exit(struct request_queue *q)
+ {
++	mutex_lock(&q->rq_qos_mutex);
+ 	while (q->rq_qos) {
+ 		struct rq_qos *rqos = q->rq_qos;
+ 		q->rq_qos = rqos->next;
+ 		rqos->ops->exit(rqos);
+ 	}
++	mutex_unlock(&q->rq_qos_mutex);
+ }
+ 
+ int rq_qos_add(struct rq_qos *rqos, struct gendisk *disk, enum rq_qos_id id,
+@@ -300,6 +302,8 @@ int rq_qos_add(struct rq_qos *rqos, struct gendisk *disk, enum rq_qos_id id,
+ {
+ 	struct request_queue *q = disk->queue;
+ 
++	lockdep_assert_held(&q->rq_qos_mutex);
++
+ 	rqos->disk = disk;
+ 	rqos->id = id;
+ 	rqos->ops = ops;
+@@ -307,18 +311,13 @@ int rq_qos_add(struct rq_qos *rqos, struct gendisk *disk, enum rq_qos_id id,
+ 	/*
+ 	 * No IO can be in-flight when adding rqos, so freeze queue, which
+ 	 * is fine since we only support rq_qos for blk-mq queue.
+-	 *
+-	 * Reuse ->queue_lock for protecting against other concurrent
+-	 * rq_qos adding/deleting
+ 	 */
+ 	blk_mq_freeze_queue(q);
+ 
+-	spin_lock_irq(&q->queue_lock);
+ 	if (rq_qos_id(q, rqos->id))
+ 		goto ebusy;
+ 	rqos->next = q->rq_qos;
+ 	q->rq_qos = rqos;
+-	spin_unlock_irq(&q->queue_lock);
+ 
+ 	blk_mq_unfreeze_queue(q);
+ 
+@@ -330,7 +329,6 @@ int rq_qos_add(struct rq_qos *rqos, struct gendisk *disk, enum rq_qos_id id,
+ 
+ 	return 0;
+ ebusy:
+-	spin_unlock_irq(&q->queue_lock);
+ 	blk_mq_unfreeze_queue(q);
+ 	return -EBUSY;
+ }
+@@ -340,21 +338,15 @@ void rq_qos_del(struct rq_qos *rqos)
+ 	struct request_queue *q = rqos->disk->queue;
+ 	struct rq_qos **cur;
+ 
+-	/*
+-	 * See comment in rq_qos_add() about freezing queue & using
+-	 * ->queue_lock.
+-	 */
+-	blk_mq_freeze_queue(q);
++	lockdep_assert_held(&q->rq_qos_mutex);
+ 
+-	spin_lock_irq(&q->queue_lock);
++	blk_mq_freeze_queue(q);
+ 	for (cur = &q->rq_qos; *cur; cur = &(*cur)->next) {
+ 		if (*cur == rqos) {
+ 			*cur = rqos->next;
+ 			break;
+ 		}
+ 	}
+-	spin_unlock_irq(&q->queue_lock);
+-
+ 	blk_mq_unfreeze_queue(q);
+ 
+ 	mutex_lock(&q->debugfs_mutex);
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index 9d010d867fbf4..7397ff199d669 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -2178,12 +2178,6 @@ bool __blk_throtl_bio(struct bio *bio)
+ 
+ 	rcu_read_lock();
+ 
+-	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
+-		blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
+-				bio->bi_iter.bi_size);
+-		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
+-	}
+-
+ 	spin_lock_irq(&q->queue_lock);
+ 
+ 	throtl_update_latency_buckets(td);
+diff --git a/block/blk-throttle.h b/block/blk-throttle.h
+index ef4b7a4de987d..d1ccbfe9f7978 100644
+--- a/block/blk-throttle.h
++++ b/block/blk-throttle.h
+@@ -185,6 +185,15 @@ static inline bool blk_should_throtl(struct bio *bio)
+ 	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
+ 	int rw = bio_data_dir(bio);
+ 
++	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
++		if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
++			bio_set_flag(bio, BIO_CGROUP_ACCT);
++			blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
++					bio->bi_iter.bi_size);
++		}
++		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
++	}
++
+ 	/* iops limit is always counted */
+ 	if (tg->has_rules_iops[rw])
+ 		return true;
+diff --git a/block/blk-wbt.c b/block/blk-wbt.c
+index 9ec2a2f1eda38..7a87506ff8e1c 100644
+--- a/block/blk-wbt.c
++++ b/block/blk-wbt.c
+@@ -944,7 +944,9 @@ int wbt_init(struct gendisk *disk)
+ 	/*
+ 	 * Assign rwb and add the stats callback.
+ 	 */
++	mutex_lock(&q->rq_qos_mutex);
+ 	ret = rq_qos_add(&rwb->rqos, disk, RQ_QOS_WBT, &wbt_rqos_ops);
++	mutex_unlock(&q->rq_qos_mutex);
+ 	if (ret)
+ 		goto err_free;
+ 
+diff --git a/block/disk-events.c b/block/disk-events.c
+index aee25a7e1ab7d..450c2cbe23d56 100644
+--- a/block/disk-events.c
++++ b/block/disk-events.c
+@@ -307,6 +307,7 @@ bool disk_force_media_change(struct gendisk *disk, unsigned int events)
+ 	if (!(events & DISK_EVENT_MEDIA_CHANGE))
+ 		return false;
+ 
++	inc_diskseq(disk);
+ 	if (__invalidate_device(disk->part0, true))
+ 		pr_warn("VFS: busy inodes on changed media %s\n",
+ 			disk->disk_name);
+diff --git a/block/genhd.c b/block/genhd.c
+index 1cb489b927d50..bb895397e9385 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -25,8 +25,9 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/badblocks.h>
+ #include <linux/part_stat.h>
+-#include "blk-throttle.h"
++#include <linux/blktrace_api.h>
+ 
++#include "blk-throttle.h"
+ #include "blk.h"
+ #include "blk-mq-sched.h"
+ #include "blk-rq-qos.h"
+@@ -1171,6 +1172,8 @@ static void disk_release(struct device *dev)
+ 	might_sleep();
+ 	WARN_ON_ONCE(disk_live(disk));
+ 
++	blk_trace_remove(disk->queue);
++
+ 	/*
+ 	 * To undo the all initialization from blk_mq_init_allocated_queue in
+ 	 * case of a probe failure where add_disk is never called we have to
+diff --git a/block/partitions/amiga.c b/block/partitions/amiga.c
+index 5c8624e26a54c..5069210954129 100644
+--- a/block/partitions/amiga.c
++++ b/block/partitions/amiga.c
+@@ -11,10 +11,18 @@
+ #define pr_fmt(fmt) fmt
+ 
+ #include <linux/types.h>
++#include <linux/mm_types.h>
++#include <linux/overflow.h>
+ #include <linux/affs_hardblocks.h>
+ 
+ #include "check.h"
+ 
++/* magic offsets in partition DosEnvVec */
++#define NR_HD	3
++#define NR_SECT	5
++#define LO_CYL	9
++#define HI_CYL	10
++
+ static __inline__ u32
+ checksum_block(__be32 *m, int size)
+ {
+@@ -31,8 +39,12 @@ int amiga_partition(struct parsed_partitions *state)
+ 	unsigned char *data;
+ 	struct RigidDiskBlock *rdb;
+ 	struct PartitionBlock *pb;
+-	int start_sect, nr_sects, blk, part, res = 0;
+-	int blksize = 1;	/* Multiplier for disk block size */
++	u64 start_sect, nr_sects;
++	sector_t blk, end_sect;
++	u32 cylblk;		/* rdb_CylBlocks = nr_heads*sect_per_track */
++	u32 nr_hd, nr_sect, lo_cyl, hi_cyl;
++	int part, res = 0;
++	unsigned int blksize = 1;	/* Multiplier for disk block size */
+ 	int slot = 1;
+ 
+ 	for (blk = 0; ; blk++, put_dev_sector(sect)) {
+@@ -40,7 +52,7 @@ int amiga_partition(struct parsed_partitions *state)
+ 			goto rdb_done;
+ 		data = read_part_sector(state, blk, &sect);
+ 		if (!data) {
+-			pr_err("Dev %s: unable to read RDB block %d\n",
++			pr_err("Dev %s: unable to read RDB block %llu\n",
+ 			       state->disk->disk_name, blk);
+ 			res = -1;
+ 			goto rdb_done;
+@@ -57,12 +69,12 @@ int amiga_partition(struct parsed_partitions *state)
+ 		*(__be32 *)(data+0xdc) = 0;
+ 		if (checksum_block((__be32 *)data,
+ 				be32_to_cpu(rdb->rdb_SummedLongs) & 0x7F)==0) {
+-			pr_err("Trashed word at 0xd0 in block %d ignored in checksum calculation\n",
++			pr_err("Trashed word at 0xd0 in block %llu ignored in checksum calculation\n",
+ 			       blk);
+ 			break;
+ 		}
+ 
+-		pr_err("Dev %s: RDB in block %d has bad checksum\n",
++		pr_err("Dev %s: RDB in block %llu has bad checksum\n",
+ 		       state->disk->disk_name, blk);
+ 	}
+ 
+@@ -78,11 +90,16 @@ int amiga_partition(struct parsed_partitions *state)
+ 	}
+ 	blk = be32_to_cpu(rdb->rdb_PartitionList);
+ 	put_dev_sector(sect);
+-	for (part = 1; blk>0 && part<=16; part++, put_dev_sector(sect)) {
+-		blk *= blksize;	/* Read in terms partition table understands */
++	for (part = 1; (s32) blk>0 && part<=16; part++, put_dev_sector(sect)) {
++		/* Read in terms the partition table understands */
++		if (check_mul_overflow(blk, (sector_t) blksize, &blk)) {
++			pr_err("Dev %s: overflow calculating partition block %llu! Skipping partitions %u and beyond\n",
++				state->disk->disk_name, blk, part);
++			break;
++		}
+ 		data = read_part_sector(state, blk, &sect);
+ 		if (!data) {
+-			pr_err("Dev %s: unable to read partition block %d\n",
++			pr_err("Dev %s: unable to read partition block %llu\n",
+ 			       state->disk->disk_name, blk);
+ 			res = -1;
+ 			goto rdb_done;
+@@ -94,19 +111,70 @@ int amiga_partition(struct parsed_partitions *state)
+ 		if (checksum_block((__be32 *)pb, be32_to_cpu(pb->pb_SummedLongs) & 0x7F) != 0 )
+ 			continue;
+ 
+-		/* Tell Kernel about it */
++		/* RDB gives us more than enough rope to hang ourselves with,
++		 * many times over (2^128 bytes if all fields max out).
++		 * Some careful checks are in order, so check for potential
++		 * overflows.
++		 * We are multiplying four 32-bit numbers into one sector_t!
++		 */
++
++		nr_hd   = be32_to_cpu(pb->pb_Environment[NR_HD]);
++		nr_sect = be32_to_cpu(pb->pb_Environment[NR_SECT]);
++
++		/* CylBlocks is total number of blocks per cylinder */
++		if (check_mul_overflow(nr_hd, nr_sect, &cylblk)) {
++			pr_err("Dev %s: heads*sects %u overflows u32, skipping partition!\n",
++				state->disk->disk_name, cylblk);
++			continue;
++		}
++
++		/* check for consistency with RDB defined CylBlocks */
++		if (cylblk > be32_to_cpu(rdb->rdb_CylBlocks)) {
++			pr_warn("Dev %s: cylblk %u > rdb_CylBlocks %u!\n",
++				state->disk->disk_name, cylblk,
++				be32_to_cpu(rdb->rdb_CylBlocks));
++		}
++
++		/* RDB allows for variable logical block size -
++		 * normalize to 512-byte blocks and check the result.
++		 */
++
++		if (check_mul_overflow(cylblk, blksize, &cylblk)) {
++			pr_err("Dev %s: partition %u bytes per cyl. overflows u32, skipping partition!\n",
++				state->disk->disk_name, part);
++			continue;
++		}
++
++		/* Calculate partition start and end. The 32-bit limit on cylblk
++		 * guarantees no overflow occurs if LBD support is enabled.
++		 */
++
++		lo_cyl = be32_to_cpu(pb->pb_Environment[LO_CYL]);
++		start_sect = ((u64) lo_cyl * cylblk);
++
++		hi_cyl = be32_to_cpu(pb->pb_Environment[HI_CYL]);
++		nr_sects = (((u64) hi_cyl - lo_cyl + 1) * cylblk);
+ 
+-		nr_sects = (be32_to_cpu(pb->pb_Environment[10]) + 1 -
+-			    be32_to_cpu(pb->pb_Environment[9])) *
+-			   be32_to_cpu(pb->pb_Environment[3]) *
+-			   be32_to_cpu(pb->pb_Environment[5]) *
+-			   blksize;
+ 		if (!nr_sects)
+ 			continue;
+-		start_sect = be32_to_cpu(pb->pb_Environment[9]) *
+-			     be32_to_cpu(pb->pb_Environment[3]) *
+-			     be32_to_cpu(pb->pb_Environment[5]) *
+-			     blksize;
++
++		/* Warn user if partition end overflows u32 (AmigaDOS limit) */
++
++		if ((start_sect + nr_sects) > UINT_MAX) {
++			pr_warn("Dev %s: partition %u (%llu-%llu) needs 64 bit device support!\n",
++				state->disk->disk_name, part,
++				start_sect, start_sect + nr_sects);
++		}
++
++		if (check_add_overflow(start_sect, nr_sects, &end_sect)) {
++			pr_err("Dev %s: partition %u (%llu-%llu) needs LBD device support, skipping partition!\n",
++				state->disk->disk_name, part,
++				start_sect, end_sect);
++			continue;
++		}
++
++		/* Tell Kernel about it */
++
+ 		put_partition(state,slot++,start_sect,nr_sects);
+ 		{
+ 			/* Be even more informative to aid mounting */
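
The rewritten amiga.c parser refuses to trust on-disk multiplications: every widening step goes through check_mul_overflow()/check_add_overflow() before it can feed a sector_t. A small standalone analogue using the compiler builtin those kernel helpers wrap (the hostile geometry values are invented):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t nr_hd = 0x10000, nr_sect = 0x10000; /* hostile RDB values */
	uint32_t cylblk;

	/* check_mul_overflow() in the kernel; same semantics here. */
	if (__builtin_mul_overflow(nr_hd, nr_sect, &cylblk)) {
		printf("heads*sects overflows u32, skipping partition\n");
		return 0;
	}
	printf("cylblk=%u\n", cylblk);
	return 0;
}

With these values the product is exactly 2^32, the builtin reports overflow, and the bogus partition is skipped instead of being registered with a wrapped geometry.
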
+diff --git a/crypto/jitterentropy.c b/crypto/jitterentropy.c
+index 22f48bf4c6f57..227cedfa4f0ae 100644
+--- a/crypto/jitterentropy.c
++++ b/crypto/jitterentropy.c
+@@ -117,7 +117,6 @@ struct rand_data {
+ 				   * zero). */
+ #define JENT_ESTUCK		8 /* Too many stuck results during init. */
+ #define JENT_EHEALTH		9 /* Health test failed during initialization */
+-#define JENT_ERCT		10 /* RCT failed during initialization */
+ 
+ /*
+  * The output n bits can receive more than n bits of min entropy, of course,
+@@ -762,14 +761,12 @@ int jent_entropy_init(void)
+ 			if ((nonstuck % JENT_APT_WINDOW_SIZE) == 0) {
+ 				jent_apt_reset(&ec,
+ 					       delta & JENT_APT_WORD_MASK);
+-				if (jent_health_failure(&ec))
+-					return JENT_EHEALTH;
+ 			}
+ 		}
+ 
+-		/* Validate RCT */
+-		if (jent_rct_failure(&ec))
+-			return JENT_ERCT;
++		/* Validate health test result */
++		if (jent_health_failure(&ec))
++			return JENT_EHEALTH;
+ 
+ 		/* test whether we have an increasing timer */
+ 		if (!(time2 > time))
+diff --git a/drivers/accel/habanalabs/gaudi2/gaudi2.c b/drivers/accel/habanalabs/gaudi2/gaudi2.c
+index b778cf764a68a..5539c84ee7171 100644
+--- a/drivers/accel/habanalabs/gaudi2/gaudi2.c
++++ b/drivers/accel/habanalabs/gaudi2/gaudi2.c
+@@ -7231,7 +7231,7 @@ static bool gaudi2_get_tpc_idle_status(struct hl_device *hdev, u64 *mask_arr, u8
+ 
+ 	gaudi2_iterate_tpcs(hdev, &tpc_iter);
+ 
+-	return tpc_idle_data.is_idle;
++	return *tpc_idle_data.is_idle;
+ }
+ 
+ static bool gaudi2_get_decoder_idle_status(struct hl_device *hdev, u64 *mask_arr, u8 mask_len,
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index 34ad071a64e96..4382fe13ee3e4 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -1544,6 +1544,8 @@ struct list_head *ghes_get_devices(void)
+ 
+ 			pr_warn_once("Force-loading ghes_edac on an unsupported platform. You're on your own!\n");
+ 		}
++	} else if (list_empty(&ghes_devs)) {
++		return NULL;
+ 	}
+ 
+ 	return &ghes_devs;
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index 32084e38b73d0..5cb2023581d4d 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -1632,9 +1632,6 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
+ 
+ 	dev_dbg(dev, "%s()\n", __func__);
+ 
+-	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(dev))
+-		return -EINVAL;
+-
+ 	gpd_data = genpd_alloc_dev_data(dev, gd);
+ 	if (IS_ERR(gpd_data))
+ 		return PTR_ERR(gpd_data);
+@@ -1676,6 +1673,9 @@ int pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev)
+ {
+ 	int ret;
+ 
++	if (!genpd || !dev)
++		return -EINVAL;
++
+ 	mutex_lock(&gpd_list_lock);
+ 	ret = genpd_add_device(genpd, dev, dev);
+ 	mutex_unlock(&gpd_list_lock);
+@@ -2523,6 +2523,9 @@ int of_genpd_add_device(struct of_phandle_args *genpdspec, struct device *dev)
+ 	struct generic_pm_domain *genpd;
+ 	int ret;
+ 
++	if (!dev)
++		return -EINVAL;
++
+ 	mutex_lock(&gpd_list_lock);
+ 
+ 	genpd = genpd_get_from_provider(genpdspec);
+@@ -2939,10 +2942,10 @@ static int genpd_parse_state(struct genpd_power_state *genpd_state,
+ 
+ 	err = of_property_read_u32(state_node, "min-residency-us", &residency);
+ 	if (!err)
+-		genpd_state->residency_ns = 1000 * residency;
++		genpd_state->residency_ns = 1000LL * residency;
+ 
+-	genpd_state->power_on_latency_ns = 1000 * exit_latency;
+-	genpd_state->power_off_latency_ns = 1000 * entry_latency;
++	genpd_state->power_on_latency_ns = 1000LL * exit_latency;
++	genpd_state->power_off_latency_ns = 1000LL * entry_latency;
+ 	genpd_state->fwnode = &state_node->fwnode;
+ 
+ 	return 0;
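
In genpd_parse_state() the multiplier becomes 1000LL so the latency conversion happens in 64-bit arithmetic; with a plain int 1000, any u32 latency above about 4.29 million microseconds wraps before the widening assignment to the s64 field. A minimal demonstration (the 5-second latency is an arbitrary example):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t exit_latency = 5000000;	/* 5 s, in microseconds */
	int64_t wrong = 1000 * exit_latency;	/* 32-bit multiply, wraps */
	int64_t right = 1000LL * exit_latency;	/* 64-bit multiply */

	printf("1000   * latency = %lld ns (wrapped)\n", (long long)wrong);
	printf("1000LL * latency = %lld ns\n", (long long)right);
	return 0;
}

The wrapped product (705032704 instead of 5000000000) would make the governor treat a slow domain as fast.
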
+diff --git a/drivers/base/property.c b/drivers/base/property.c
+index f6117ec9805c4..8c40abed78524 100644
+--- a/drivers/base/property.c
++++ b/drivers/base/property.c
+@@ -987,12 +987,18 @@ EXPORT_SYMBOL(fwnode_iomap);
+  * @fwnode:	Pointer to the firmware node
+  * @index:	Zero-based index of the IRQ
+  *
+- * Return: Linux IRQ number on success. Other values are determined
+- * according to acpi_irq_get() or of_irq_get() operation.
++ * Return: Linux IRQ number on success. Negative errno on failure.
+  */
+ int fwnode_irq_get(const struct fwnode_handle *fwnode, unsigned int index)
+ {
+-	return fwnode_call_int_op(fwnode, irq_get, index);
++	int ret;
++
++	ret = fwnode_call_int_op(fwnode, irq_get, index);
++	/* We treat mapping errors as invalid case */
++	if (ret == 0)
++		return -EINVAL;
++
++	return ret;
+ }
+ EXPORT_SYMBOL(fwnode_irq_get);
+ 
+diff --git a/drivers/bus/fsl-mc/dprc-driver.c b/drivers/bus/fsl-mc/dprc-driver.c
+index 4c84be378bf27..ec5f26a45641b 100644
+--- a/drivers/bus/fsl-mc/dprc-driver.c
++++ b/drivers/bus/fsl-mc/dprc-driver.c
+@@ -45,6 +45,9 @@ static int __fsl_mc_device_remove_if_not_in_mc(struct device *dev, void *data)
+ 	struct fsl_mc_child_objs *objs;
+ 	struct fsl_mc_device *mc_dev;
+ 
++	if (!dev_is_fsl_mc(dev))
++		return 0;
++
+ 	mc_dev = to_fsl_mc_device(dev);
+ 	objs = data;
+ 
+@@ -64,6 +67,9 @@ static int __fsl_mc_device_remove_if_not_in_mc(struct device *dev, void *data)
+ 
+ static int __fsl_mc_device_remove(struct device *dev, void *data)
+ {
++	if (!dev_is_fsl_mc(dev))
++		return 0;
++
+ 	fsl_mc_device_remove(to_fsl_mc_device(dev));
+ 	return 0;
+ }
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 6c49de37d5e90..21fe9854703f9 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -1791,7 +1791,7 @@ static u32 sysc_quirk_dispc(struct sysc *ddata, int dispc_offset,
+ 	if (!ddata->module_va)
+ 		return -EIO;
+ 
+-	/* DISP_CONTROL */
++	/* DISP_CONTROL, shut down lcd and digit on disable if enabled */
+ 	val = sysc_read(ddata, dispc_offset + 0x40);
+ 	lcd_en = val & lcd_en_mask;
+ 	digit_en = val & digit_en_mask;
+@@ -1803,7 +1803,7 @@ static u32 sysc_quirk_dispc(struct sysc *ddata, int dispc_offset,
+ 		else
+ 			irq_mask |= BIT(2) | BIT(3);	/* EVSYNC bits */
+ 	}
+-	if (disable & (lcd_en | digit_en))
++	if (disable && (lcd_en || digit_en))
+ 		sysc_write(ddata, dispc_offset + 0x40,
+ 			   val & ~(lcd_en_mask | digit_en_mask));
+ 
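
The ti-sysc fix replaces the bitwise test disable & (lcd_en | digit_en) with the logical disable && (lcd_en || digit_en): the bitwise form only fires when the boolean disable flag happens to share a set bit with one of the hardware enable bits. A short illustration (the bit positions are made up):

#include <stdio.h>

int main(void)
{
	unsigned int disable = 1;		   /* boolean intent */
	unsigned int lcd_en = 0, digit_en = 0x100; /* enable bit 8 set */

	printf("bitwise: %d\n", (disable & (lcd_en | digit_en)) != 0); /* 0 */
	printf("logical: %d\n", disable && (lcd_en || digit_en));      /* 1 */
	return 0;
}

With the bitwise form, 1 & 0x100 is zero, so an enabled DIGIT output would never have been shut down on disable.
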
+diff --git a/drivers/cdx/cdx.c b/drivers/cdx/cdx.c
+index 38511fd363257..d2cad4c670a07 100644
+--- a/drivers/cdx/cdx.c
++++ b/drivers/cdx/cdx.c
+@@ -62,6 +62,8 @@
+ #include <linux/mm.h>
+ #include <linux/xarray.h>
+ #include <linux/cdx/cdx_bus.h>
++#include <linux/iommu.h>
++#include <linux/dma-map-ops.h>
+ #include "cdx.h"
+ 
+ /* Default DMA mask for devices on a CDX bus */
+@@ -257,6 +259,7 @@ static void cdx_shutdown(struct device *dev)
+ 
+ static int cdx_dma_configure(struct device *dev)
+ {
++	struct cdx_driver *cdx_drv = to_cdx_driver(dev->driver);
+ 	struct cdx_device *cdx_dev = to_cdx_device(dev);
+ 	u32 input_id = cdx_dev->req_id;
+ 	int ret;
+@@ -267,9 +270,23 @@ static int cdx_dma_configure(struct device *dev)
+ 		return ret;
+ 	}
+ 
++	if (!ret && !cdx_drv->driver_managed_dma) {
++		ret = iommu_device_use_default_domain(dev);
++		if (ret)
++			arch_teardown_dma_ops(dev);
++	}
++
+ 	return 0;
+ }
+ 
++static void cdx_dma_cleanup(struct device *dev)
++{
++	struct cdx_driver *cdx_drv = to_cdx_driver(dev->driver);
++
++	if (!cdx_drv->driver_managed_dma)
++		iommu_device_unuse_default_domain(dev);
++}
++
+ /* show configuration fields */
+ #define cdx_config_attr(field, format_string)	\
+ static ssize_t	\
+@@ -405,6 +422,7 @@ struct bus_type cdx_bus_type = {
+ 	.remove		= cdx_remove,
+ 	.shutdown	= cdx_shutdown,
+ 	.dma_configure	= cdx_dma_configure,
++	.dma_cleanup	= cdx_dma_cleanup,
+ 	.bus_groups	= cdx_bus_groups,
+ 	.dev_groups	= cdx_dev_groups,
+ };
+diff --git a/drivers/char/hw_random/st-rng.c b/drivers/char/hw_random/st-rng.c
+index 15ba1e6fae4d2..6e9dfac9fc9f4 100644
+--- a/drivers/char/hw_random/st-rng.c
++++ b/drivers/char/hw_random/st-rng.c
+@@ -42,7 +42,6 @@
+ 
+ struct st_rng_data {
+ 	void __iomem	*base;
+-	struct clk	*clk;
+ 	struct hwrng	ops;
+ };
+ 
+@@ -85,26 +84,18 @@ static int st_rng_probe(struct platform_device *pdev)
+ 	if (IS_ERR(base))
+ 		return PTR_ERR(base);
+ 
+-	clk = devm_clk_get(&pdev->dev, NULL);
++	clk = devm_clk_get_enabled(&pdev->dev, NULL);
+ 	if (IS_ERR(clk))
+ 		return PTR_ERR(clk);
+ 
+-	ret = clk_prepare_enable(clk);
+-	if (ret)
+-		return ret;
+-
+ 	ddata->ops.priv	= (unsigned long)ddata;
+ 	ddata->ops.read	= st_rng_read;
+ 	ddata->ops.name	= pdev->name;
+ 	ddata->base	= base;
+-	ddata->clk	= clk;
+-
+-	dev_set_drvdata(&pdev->dev, ddata);
+ 
+ 	ret = devm_hwrng_register(&pdev->dev, &ddata->ops);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Failed to register HW RNG\n");
+-		clk_disable_unprepare(clk);
+ 		return ret;
+ 	}
+ 
+@@ -113,15 +104,6 @@ static int st_rng_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static int st_rng_remove(struct platform_device *pdev)
+-{
+-	struct st_rng_data *ddata = dev_get_drvdata(&pdev->dev);
+-
+-	clk_disable_unprepare(ddata->clk);
+-
+-	return 0;
+-}
+-
+ static const struct of_device_id st_rng_match[] __maybe_unused = {
+ 	{ .compatible = "st,rng" },
+ 	{},
+@@ -134,7 +116,6 @@ static struct platform_driver st_rng_driver = {
+ 		.of_match_table = of_match_ptr(st_rng_match),
+ 	},
+ 	.probe = st_rng_probe,
+-	.remove = st_rng_remove
+ };
+ 
+ module_platform_driver(st_rng_driver);
+diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
+index f7690e0f92ede..e41a84e6b4b56 100644
+--- a/drivers/char/hw_random/virtio-rng.c
++++ b/drivers/char/hw_random/virtio-rng.c
+@@ -4,6 +4,7 @@
+  *  Copyright (C) 2007, 2008 Rusty Russell IBM Corporation
+  */
+ 
++#include <asm/barrier.h>
+ #include <linux/err.h>
+ #include <linux/hw_random.h>
+ #include <linux/scatterlist.h>
+@@ -37,13 +38,13 @@ struct virtrng_info {
+ static void random_recv_done(struct virtqueue *vq)
+ {
+ 	struct virtrng_info *vi = vq->vdev->priv;
++	unsigned int len;
+ 
+ 	/* We can get spurious callbacks, e.g. shared IRQs + virtio_pci. */
+-	if (!virtqueue_get_buf(vi->vq, &vi->data_avail))
++	if (!virtqueue_get_buf(vi->vq, &len))
+ 		return;
+ 
+-	vi->data_idx = 0;
+-
++	smp_store_release(&vi->data_avail, len);
+ 	complete(&vi->have_data);
+ }
+ 
+@@ -52,7 +53,6 @@ static void request_entropy(struct virtrng_info *vi)
+ 	struct scatterlist sg;
+ 
+ 	reinit_completion(&vi->have_data);
+-	vi->data_avail = 0;
+ 	vi->data_idx = 0;
+ 
+ 	sg_init_one(&sg, vi->data, sizeof(vi->data));
+@@ -88,7 +88,7 @@ static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
+ 	read = 0;
+ 
+ 	/* copy available data */
+-	if (vi->data_avail) {
++	if (smp_load_acquire(&vi->data_avail)) {
+ 		chunk = copy_data(vi, buf, size);
+ 		size -= chunk;
+ 		read += chunk;
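
The virtio-rng hunk pairs smp_store_release() on data_avail in the completion callback with smp_load_acquire() in the reader, guaranteeing that the buffer bytes written before the release are visible to whoever observes the updated length. A hedged C11 sketch of the same publish/consume pairing (stdatomic stands in for the kernel barrier macros, and producer/consumer run sequentially here just to keep the sketch runnable):

#include <stdatomic.h>
#include <stdio.h>

static unsigned char data[16];	/* payload, written with plain stores */
static atomic_uint data_avail;	/* publication flag / length */

static void producer(void)
{
	data[0] = 0xab;		/* fill the buffer first... */
	/* ...then publish; earlier stores cannot pass this release. */
	atomic_store_explicit(&data_avail, 1, memory_order_release);
}

static void consumer(void)
{
	/* A nonzero acquire-load makes the buffer writes visible. */
	if (atomic_load_explicit(&data_avail, memory_order_acquire))
		printf("got byte 0x%02x\n", data[0]);
}

int main(void)
{
	producer();	/* in the driver these run in different contexts */
	consumer();
	return 0;
}

Without the pairing, a reader on another CPU could see the new length but stale buffer contents, which is exactly the window the patch closes.
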
+diff --git a/drivers/clk/bcm/clk-raspberrypi.c b/drivers/clk/bcm/clk-raspberrypi.c
+index eb399a4d141ba..829406dc44a20 100644
+--- a/drivers/clk/bcm/clk-raspberrypi.c
++++ b/drivers/clk/bcm/clk-raspberrypi.c
+@@ -356,9 +356,9 @@ static int raspberrypi_discover_clocks(struct raspberrypi_clk *rpi,
+ 	while (clks->id) {
+ 		struct raspberrypi_clk_variant *variant;
+ 
+-		if (clks->id > RPI_FIRMWARE_NUM_CLK_ID) {
++		if (clks->id >= RPI_FIRMWARE_NUM_CLK_ID) {
+ 			dev_err(rpi->dev, "Unknown clock id: %u (max: %u)\n",
+-					   clks->id, RPI_FIRMWARE_NUM_CLK_ID);
++					   clks->id, RPI_FIRMWARE_NUM_CLK_ID - 1);
+ 			return -EINVAL;
+ 		}
+ 
+diff --git a/drivers/clk/clk-cdce925.c b/drivers/clk/clk-cdce925.c
+index 6350682f7e6d2..87890669297d8 100644
+--- a/drivers/clk/clk-cdce925.c
++++ b/drivers/clk/clk-cdce925.c
+@@ -701,6 +701,10 @@ static int cdce925_probe(struct i2c_client *client)
+ 	for (i = 0; i < data->chip_info->num_plls; ++i) {
+ 		pll_clk_name[i] = kasprintf(GFP_KERNEL, "%pOFn.pll%d",
+ 			client->dev.of_node, i);
++		if (!pll_clk_name[i]) {
++			err = -ENOMEM;
++			goto error;
++		}
+ 		init.name = pll_clk_name[i];
+ 		data->pll[i].chip = data;
+ 		data->pll[i].hw.init = &init;
+@@ -742,6 +746,10 @@ static int cdce925_probe(struct i2c_client *client)
+ 	init.num_parents = 1;
+ 	init.parent_names = &parent_name; /* Mux Y1 to input */
+ 	init.name = kasprintf(GFP_KERNEL, "%pOFn.Y1", client->dev.of_node);
++	if (!init.name) {
++		err = -ENOMEM;
++		goto error;
++	}
+ 	data->clk[0].chip = data;
+ 	data->clk[0].hw.init = &init;
+ 	data->clk[0].index = 0;
+@@ -760,6 +768,10 @@ static int cdce925_probe(struct i2c_client *client)
+ 	for (i = 1; i < data->chip_info->num_outputs; ++i) {
+ 		init.name = kasprintf(GFP_KERNEL, "%pOFn.Y%d",
+ 			client->dev.of_node, i+1);
++		if (!init.name) {
++			err = -ENOMEM;
++			goto error;
++		}
+ 		data->clk[i].chip = data;
+ 		data->clk[i].hw.init = &init;
+ 		data->clk[i].index = i;
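
Several clock-driver fixes in this patch (cdce925 above; si5341 and versaclock5/7 further down) add the same missing check: kasprintf() returns NULL on allocation failure, and the result was previously used as a clock name unchecked. The shape of the fix, sketched with userspace asprintf() and invented names:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

static int register_outputs(int num_outputs)
{
	char *name;
	int i;

	for (i = 0; i < num_outputs; i++) {
		/* asprintf() plays the role of kasprintf() here */
		if (asprintf(&name, "chip.Y%d", i + 1) < 0)
			return -1;	/* -ENOMEM upstream; caller unwinds */
		printf("registering clock %s\n", name);
		free(name);
	}
	return 0;
}

int main(void)
{
	return register_outputs(3) ? 1 : 0;
}

The point is simply that each formatted-name allocation gets its own failure path into the driver's existing error unwinding instead of passing NULL into the clk framework.
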
+diff --git a/drivers/clk/clk-renesas-pcie.c b/drivers/clk/clk-renesas-pcie.c
+index 10d31c222a1cb..6060cafe1aa22 100644
+--- a/drivers/clk/clk-renesas-pcie.c
++++ b/drivers/clk/clk-renesas-pcie.c
+@@ -392,8 +392,8 @@ static const struct rs9_chip_info renesas_9fgv0441_info = {
+ };
+ 
+ static const struct i2c_device_id rs9_id[] = {
+-	{ "9fgv0241", .driver_data = RENESAS_9FGV0241 },
+-	{ "9fgv0441", .driver_data = RENESAS_9FGV0441 },
++	{ "9fgv0241", .driver_data = (kernel_ulong_t)&renesas_9fgv0241_info },
++	{ "9fgv0441", .driver_data = (kernel_ulong_t)&renesas_9fgv0441_info },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(i2c, rs9_id);
+diff --git a/drivers/clk/clk-si5341.c b/drivers/clk/clk-si5341.c
+index 0e528d7ba656e..c7d8cbd22bacc 100644
+--- a/drivers/clk/clk-si5341.c
++++ b/drivers/clk/clk-si5341.c
+@@ -1553,7 +1553,7 @@ static int si5341_probe(struct i2c_client *client)
+ 	struct clk_init_data init;
+ 	struct clk *input;
+ 	const char *root_clock_name;
+-	const char *synth_clock_names[SI5341_NUM_SYNTH];
++	const char *synth_clock_names[SI5341_NUM_SYNTH] = { NULL };
+ 	int err;
+ 	unsigned int i;
+ 	struct clk_si5341_output_config config[SI5341_MAX_NUM_OUTPUTS];
+@@ -1697,6 +1697,10 @@ static int si5341_probe(struct i2c_client *client)
+ 	for (i = 0; i < data->num_synth; ++i) {
+ 		synth_clock_names[i] = devm_kasprintf(&client->dev, GFP_KERNEL,
+ 				"%s.N%u", client->dev.of_node->name, i);
++		if (!synth_clock_names[i]) {
++			err = -ENOMEM;
++			goto free_clk_names;
++		}
+ 		init.name = synth_clock_names[i];
+ 		data->synth[i].index = i;
+ 		data->synth[i].data = data;
+@@ -1705,6 +1709,7 @@ static int si5341_probe(struct i2c_client *client)
+ 		if (err) {
+ 			dev_err(&client->dev,
+ 				"synth N%u registration failed\n", i);
++			goto free_clk_names;
+ 		}
+ 	}
+ 
+@@ -1714,6 +1719,10 @@ static int si5341_probe(struct i2c_client *client)
+ 	for (i = 0; i < data->num_outputs; ++i) {
+ 		init.name = kasprintf(GFP_KERNEL, "%s.%d",
+ 			client->dev.of_node->name, i);
++		if (!init.name) {
++			err = -ENOMEM;
++			goto free_clk_names;
++		}
+ 		init.flags = config[i].synth_master ? CLK_SET_RATE_PARENT : 0;
+ 		data->clk[i].index = i;
+ 		data->clk[i].data = data;
+@@ -1735,7 +1744,7 @@ static int si5341_probe(struct i2c_client *client)
+ 		if (err) {
+ 			dev_err(&client->dev,
+ 				"output %u registration failed\n", i);
+-			goto cleanup;
++			goto free_clk_names;
+ 		}
+ 		if (config[i].always_on)
+ 			clk_prepare(data->clk[i].hw.clk);
+@@ -1745,7 +1754,7 @@ static int si5341_probe(struct i2c_client *client)
+ 			data);
+ 	if (err) {
+ 		dev_err(&client->dev, "unable to add clk provider\n");
+-		goto cleanup;
++		goto free_clk_names;
+ 	}
+ 
+ 	if (initialization_required) {
+@@ -1753,11 +1762,11 @@ static int si5341_probe(struct i2c_client *client)
+ 		regcache_cache_only(data->regmap, false);
+ 		err = regcache_sync(data->regmap);
+ 		if (err < 0)
+-			goto cleanup;
++			goto free_clk_names;
+ 
+ 		err = si5341_finalize_defaults(data);
+ 		if (err < 0)
+-			goto cleanup;
++			goto free_clk_names;
+ 	}
+ 
+ 	/* wait for device to report input clock present and PLL lock */
+@@ -1766,32 +1775,31 @@ static int si5341_probe(struct i2c_client *client)
+ 	       10000, 250000);
+ 	if (err) {
+ 		dev_err(&client->dev, "Error waiting for input clock or PLL lock\n");
+-		goto cleanup;
++		goto free_clk_names;
+ 	}
+ 
+ 	/* clear sticky alarm bits from initialization */
+ 	err = regmap_write(data->regmap, SI5341_STATUS_STICKY, 0);
+ 	if (err) {
+ 		dev_err(&client->dev, "unable to clear sticky status\n");
+-		goto cleanup;
++		goto free_clk_names;
+ 	}
+ 
+ 	err = sysfs_create_files(&client->dev.kobj, si5341_attributes);
+-	if (err) {
++	if (err)
+ 		dev_err(&client->dev, "unable to create sysfs files\n");
+-		goto cleanup;
+-	}
+ 
++free_clk_names:
+ 	/* Free the names, clk framework makes copies */
+ 	for (i = 0; i < data->num_synth; ++i)
+ 		 devm_kfree(&client->dev, (void *)synth_clock_names[i]);
+ 
+-	return 0;
+-
+ cleanup:
+-	for (i = 0; i < SI5341_MAX_NUM_OUTPUTS; ++i) {
+-		if (data->clk[i].vddo_reg)
+-			regulator_disable(data->clk[i].vddo_reg);
++	if (err) {
++		for (i = 0; i < SI5341_MAX_NUM_OUTPUTS; ++i) {
++			if (data->clk[i].vddo_reg)
++				regulator_disable(data->clk[i].vddo_reg);
++		}
+ 	}
+ 	return err;
+ }
+diff --git a/drivers/clk/clk-versaclock5.c b/drivers/clk/clk-versaclock5.c
+index fa71a57875ce8..e9a7f3c91ae0e 100644
+--- a/drivers/clk/clk-versaclock5.c
++++ b/drivers/clk/clk-versaclock5.c
+@@ -1028,6 +1028,11 @@ static int vc5_probe(struct i2c_client *client)
+ 	}
+ 
+ 	init.name = kasprintf(GFP_KERNEL, "%pOFn.mux", client->dev.of_node);
++	if (!init.name) {
++		ret = -ENOMEM;
++		goto err_clk;
++	}
++
+ 	init.ops = &vc5_mux_ops;
+ 	init.flags = 0;
+ 	init.parent_names = parent_names;
+@@ -1042,6 +1047,10 @@ static int vc5_probe(struct i2c_client *client)
+ 		memset(&init, 0, sizeof(init));
+ 		init.name = kasprintf(GFP_KERNEL, "%pOFn.dbl",
+ 				      client->dev.of_node);
++		if (!init.name) {
++			ret = -ENOMEM;
++			goto err_clk;
++		}
+ 		init.ops = &vc5_dbl_ops;
+ 		init.flags = CLK_SET_RATE_PARENT;
+ 		init.parent_names = parent_names;
+@@ -1057,6 +1066,10 @@ static int vc5_probe(struct i2c_client *client)
+ 	/* Register PFD */
+ 	memset(&init, 0, sizeof(init));
+ 	init.name = kasprintf(GFP_KERNEL, "%pOFn.pfd", client->dev.of_node);
++	if (!init.name) {
++		ret = -ENOMEM;
++		goto err_clk;
++	}
+ 	init.ops = &vc5_pfd_ops;
+ 	init.flags = CLK_SET_RATE_PARENT;
+ 	init.parent_names = parent_names;
+@@ -1074,6 +1087,10 @@ static int vc5_probe(struct i2c_client *client)
+ 	/* Register PLL */
+ 	memset(&init, 0, sizeof(init));
+ 	init.name = kasprintf(GFP_KERNEL, "%pOFn.pll", client->dev.of_node);
++	if (!init.name) {
++		ret = -ENOMEM;
++		goto err_clk;
++	}
+ 	init.ops = &vc5_pll_ops;
+ 	init.flags = CLK_SET_RATE_PARENT;
+ 	init.parent_names = parent_names;
+@@ -1093,6 +1110,10 @@ static int vc5_probe(struct i2c_client *client)
+ 		memset(&init, 0, sizeof(init));
+ 		init.name = kasprintf(GFP_KERNEL, "%pOFn.fod%d",
+ 				      client->dev.of_node, idx);
++		if (!init.name) {
++			ret = -ENOMEM;
++			goto err_clk;
++		}
+ 		init.ops = &vc5_fod_ops;
+ 		init.flags = CLK_SET_RATE_PARENT;
+ 		init.parent_names = parent_names;
+@@ -1111,6 +1132,10 @@ static int vc5_probe(struct i2c_client *client)
+ 	memset(&init, 0, sizeof(init));
+ 	init.name = kasprintf(GFP_KERNEL, "%pOFn.out0_sel_i2cb",
+ 			      client->dev.of_node);
++	if (!init.name) {
++		ret = -ENOMEM;
++		goto err_clk;
++	}
+ 	init.ops = &vc5_clk_out_ops;
+ 	init.flags = CLK_SET_RATE_PARENT;
+ 	init.parent_names = parent_names;
+@@ -1137,6 +1162,10 @@ static int vc5_probe(struct i2c_client *client)
+ 		memset(&init, 0, sizeof(init));
+ 		init.name = kasprintf(GFP_KERNEL, "%pOFn.out%d",
+ 				      client->dev.of_node, idx + 1);
++		if (!init.name) {
++			ret = -ENOMEM;
++			goto err_clk;
++		}
+ 		init.ops = &vc5_clk_out_ops;
+ 		init.flags = CLK_SET_RATE_PARENT;
+ 		init.parent_names = parent_names;
+@@ -1271,14 +1300,14 @@ static const struct vc5_chip_info idt_5p49v6975_info = {
+ };
+ 
+ static const struct i2c_device_id vc5_id[] = {
+-	{ "5p49v5923", .driver_data = IDT_VC5_5P49V5923 },
+-	{ "5p49v5925", .driver_data = IDT_VC5_5P49V5925 },
+-	{ "5p49v5933", .driver_data = IDT_VC5_5P49V5933 },
+-	{ "5p49v5935", .driver_data = IDT_VC5_5P49V5935 },
+-	{ "5p49v60", .driver_data = IDT_VC6_5P49V60 },
+-	{ "5p49v6901", .driver_data = IDT_VC6_5P49V6901 },
+-	{ "5p49v6965", .driver_data = IDT_VC6_5P49V6965 },
+-	{ "5p49v6975", .driver_data = IDT_VC6_5P49V6975 },
++	{ "5p49v5923", .driver_data = (kernel_ulong_t)&idt_5p49v5923_info },
++	{ "5p49v5925", .driver_data = (kernel_ulong_t)&idt_5p49v5925_info },
++	{ "5p49v5933", .driver_data = (kernel_ulong_t)&idt_5p49v5933_info },
++	{ "5p49v5935", .driver_data = (kernel_ulong_t)&idt_5p49v5935_info },
++	{ "5p49v60", .driver_data = (kernel_ulong_t)&idt_5p49v60_info },
++	{ "5p49v6901", .driver_data = (kernel_ulong_t)&idt_5p49v6901_info },
++	{ "5p49v6965", .driver_data = (kernel_ulong_t)&idt_5p49v6965_info },
++	{ "5p49v6975", .driver_data = (kernel_ulong_t)&idt_5p49v6975_info },
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(i2c, vc5_id);
+diff --git a/drivers/clk/clk-versaclock7.c b/drivers/clk/clk-versaclock7.c
+index 8e4f86e852aa0..0ae191f50b4b2 100644
+--- a/drivers/clk/clk-versaclock7.c
++++ b/drivers/clk/clk-versaclock7.c
+@@ -1282,7 +1282,7 @@ static const struct regmap_config vc7_regmap_config = {
+ };
+ 
+ static const struct i2c_device_id vc7_i2c_id[] = {
+-	{ "rc21008a", VC7_RC21008A },
++	{ "rc21008a", .driver_data = (kernel_ulong_t)&vc7_rc21008a_info },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(i2c, vc7_i2c_id);
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 27c30a533759a..8c13bcf57f1ae 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -1549,6 +1549,7 @@ void clk_hw_forward_rate_request(const struct clk_hw *hw,
+ 				  parent->core, req,
+ 				  parent_rate);
+ }
++EXPORT_SYMBOL_GPL(clk_hw_forward_rate_request);
+ 
+ static bool clk_core_can_round(struct clk_core * const core)
+ {
+@@ -4695,6 +4696,7 @@ int devm_clk_notifier_register(struct device *dev, struct clk *clk,
+ 	if (!ret) {
+ 		devres->clk = clk;
+ 		devres->nb = nb;
++		devres_add(dev, devres);
+ 	} else {
+ 		devres_free(devres);
+ 	}
+diff --git a/drivers/clk/imx/clk-composite-8m.c b/drivers/clk/imx/clk-composite-8m.c
+index cbf0d7955a00a..7a6e3ce97133b 100644
+--- a/drivers/clk/imx/clk-composite-8m.c
++++ b/drivers/clk/imx/clk-composite-8m.c
+@@ -119,10 +119,41 @@ static int imx8m_clk_composite_divider_set_rate(struct clk_hw *hw,
+ 	return ret;
+ }
+ 
++static int imx8m_divider_determine_rate(struct clk_hw *hw,
++				      struct clk_rate_request *req)
++{
++	struct clk_divider *divider = to_clk_divider(hw);
++	int prediv_value;
++	int div_value;
++
++	/* if read only, just return current value */
++	if (divider->flags & CLK_DIVIDER_READ_ONLY) {
++		u32 val;
++
++		val = readl(divider->reg);
++		prediv_value = val >> divider->shift;
++		prediv_value &= clk_div_mask(divider->width);
++		prediv_value++;
++
++		div_value = val >> PCG_DIV_SHIFT;
++		div_value &= clk_div_mask(PCG_DIV_WIDTH);
++		div_value++;
++
++		return divider_ro_determine_rate(hw, req, divider->table,
++						 PCG_PREDIV_WIDTH + PCG_DIV_WIDTH,
++						 divider->flags, prediv_value * div_value);
++	}
++
++	return divider_determine_rate(hw, req, divider->table,
++				      PCG_PREDIV_WIDTH + PCG_DIV_WIDTH,
++				      divider->flags);
++}
++
+ static const struct clk_ops imx8m_clk_composite_divider_ops = {
+ 	.recalc_rate = imx8m_clk_composite_divider_recalc_rate,
+ 	.round_rate = imx8m_clk_composite_divider_round_rate,
+ 	.set_rate = imx8m_clk_composite_divider_set_rate,
++	.determine_rate = imx8m_divider_determine_rate,
+ };
+ 
+ static u8 imx8m_clk_composite_mux_get_parent(struct clk_hw *hw)
+diff --git a/drivers/clk/imx/clk-imx8mn.c b/drivers/clk/imx/clk-imx8mn.c
+index 4b23a46486004..4bd1ed11353b3 100644
+--- a/drivers/clk/imx/clk-imx8mn.c
++++ b/drivers/clk/imx/clk-imx8mn.c
+@@ -323,7 +323,7 @@ static int imx8mn_clocks_probe(struct platform_device *pdev)
+ 	void __iomem *base;
+ 	int ret;
+ 
+-	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
++	clk_hw_data = devm_kzalloc(dev, struct_size(clk_hw_data, hws,
+ 					  IMX8MN_CLK_END), GFP_KERNEL);
+ 	if (WARN_ON(!clk_hw_data))
+ 		return -ENOMEM;
+@@ -340,10 +340,10 @@ static int imx8mn_clocks_probe(struct platform_device *pdev)
+ 	hws[IMX8MN_CLK_EXT4] = imx_get_clk_hw_by_name(np, "clk_ext4");
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx8mn-anatop");
+-	base = of_iomap(np, 0);
++	base = devm_of_iomap(dev, np, 0, NULL);
+ 	of_node_put(np);
+-	if (WARN_ON(!base)) {
+-		ret = -ENOMEM;
++	if (WARN_ON(IS_ERR(base))) {
++		ret = PTR_ERR(base);
+ 		goto unregister_hws;
+ 	}
+ 
+diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
+index f26ae8de4cc6f..1469249386dd8 100644
+--- a/drivers/clk/imx/clk-imx8mp.c
++++ b/drivers/clk/imx/clk-imx8mp.c
+@@ -414,25 +414,22 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 	struct device *dev = &pdev->dev;
+ 	struct device_node *np;
+ 	void __iomem *anatop_base, *ccm_base;
++	int err;
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx8mp-anatop");
+-	anatop_base = of_iomap(np, 0);
++	anatop_base = devm_of_iomap(dev, np, 0, NULL);
+ 	of_node_put(np);
+-	if (WARN_ON(!anatop_base))
+-		return -ENOMEM;
++	if (WARN_ON(IS_ERR(anatop_base)))
++		return PTR_ERR(anatop_base);
+ 
+ 	np = dev->of_node;
+ 	ccm_base = devm_platform_ioremap_resource(pdev, 0);
+-	if (WARN_ON(IS_ERR(ccm_base))) {
+-		iounmap(anatop_base);
++	if (WARN_ON(IS_ERR(ccm_base)))
+ 		return PTR_ERR(ccm_base);
+-	}
+ 
+-	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws, IMX8MP_CLK_END), GFP_KERNEL);
+-	if (WARN_ON(!clk_hw_data)) {
+-		iounmap(anatop_base);
++	clk_hw_data = devm_kzalloc(dev, struct_size(clk_hw_data, hws, IMX8MP_CLK_END), GFP_KERNEL);
++	if (WARN_ON(!clk_hw_data))
+ 		return -ENOMEM;
+-	}
+ 
+ 	clk_hw_data->num = IMX8MP_CLK_END;
+ 	hws = clk_hw_data->hws;
+@@ -722,7 +719,12 @@ static int imx8mp_clocks_probe(struct platform_device *pdev)
+ 
+ 	imx_check_clk_hws(hws, IMX8MP_CLK_END);
+ 
+-	of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data);
++	err = of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data);
++	if (err < 0) {
++		dev_err(dev, "failed to register hws for i.MX8MP\n");
++		imx_unregister_hw_clocks(hws, IMX8MP_CLK_END);
++		return err;
++	}
+ 
+ 	imx_register_uart_clocks();
+ 
+diff --git a/drivers/clk/imx/clk-imx93.c b/drivers/clk/imx/clk-imx93.c
+index 07b4a043e4495..b6c7c2725906c 100644
+--- a/drivers/clk/imx/clk-imx93.c
++++ b/drivers/clk/imx/clk-imx93.c
+@@ -264,7 +264,7 @@ static int imx93_clocks_probe(struct platform_device *pdev)
+ 	void __iomem *base, *anatop_base;
+ 	int i, ret;
+ 
+-	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
++	clk_hw_data = devm_kzalloc(dev, struct_size(clk_hw_data, hws,
+ 					  IMX93_CLK_END), GFP_KERNEL);
+ 	if (!clk_hw_data)
+ 		return -ENOMEM;
+@@ -288,10 +288,12 @@ static int imx93_clocks_probe(struct platform_device *pdev)
+ 								    "sys_pll_pfd2", 1, 2);
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "fsl,imx93-anatop");
+-	anatop_base = of_iomap(np, 0);
++	anatop_base = devm_of_iomap(dev, np, 0, NULL);
+ 	of_node_put(np);
+-	if (WARN_ON(!anatop_base))
+-		return -ENOMEM;
++	if (WARN_ON(IS_ERR(anatop_base))) {
++		ret = PTR_ERR(base);
++		goto unregister_hws;
++	}
+ 
+ 	clks[IMX93_CLK_ARM_PLL] = imx_clk_fracn_gppll_integer("arm_pll", "osc_24m",
+ 							      anatop_base + 0x1000,
+@@ -304,8 +306,8 @@ static int imx93_clocks_probe(struct platform_device *pdev)
+ 	np = dev->of_node;
+ 	base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (WARN_ON(IS_ERR(base))) {
+-		iounmap(anatop_base);
+-		return PTR_ERR(base);
++		ret = PTR_ERR(base);
++		goto unregister_hws;
+ 	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(root_array); i++) {
+@@ -345,7 +347,6 @@ static int imx93_clocks_probe(struct platform_device *pdev)
+ 
+ unregister_hws:
+ 	imx_unregister_hw_clocks(clks, IMX93_CLK_END);
+-	iounmap(anatop_base);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/clk/imx/clk-imxrt1050.c b/drivers/clk/imx/clk-imxrt1050.c
+index fd5c51fc92c0e..08d155feb035a 100644
+--- a/drivers/clk/imx/clk-imxrt1050.c
++++ b/drivers/clk/imx/clk-imxrt1050.c
+@@ -42,7 +42,7 @@ static int imxrt1050_clocks_probe(struct platform_device *pdev)
+ 	struct device_node *anp;
+ 	int ret;
+ 
+-	clk_hw_data = kzalloc(struct_size(clk_hw_data, hws,
++	clk_hw_data = devm_kzalloc(dev, struct_size(clk_hw_data, hws,
+ 					  IMXRT1050_CLK_END), GFP_KERNEL);
+ 	if (WARN_ON(!clk_hw_data))
+ 		return -ENOMEM;
+@@ -53,10 +53,12 @@ static int imxrt1050_clocks_probe(struct platform_device *pdev)
+ 	hws[IMXRT1050_CLK_OSC] = imx_get_clk_hw_by_name(np, "osc");
+ 
+ 	anp = of_find_compatible_node(NULL, NULL, "fsl,imxrt-anatop");
+-	pll_base = of_iomap(anp, 0);
++	pll_base = devm_of_iomap(dev, anp, 0, NULL);
+ 	of_node_put(anp);
+-	if (WARN_ON(!pll_base))
+-		return -ENOMEM;
++	if (WARN_ON(IS_ERR(pll_base))) {
++		ret = PTR_ERR(pll_base);
++		goto unregister_hws;
++	}
+ 
+ 	/* Anatop clocks */
+ 	hws[IMXRT1050_CLK_DUMMY] = imx_clk_hw_fixed("dummy", 0UL);
+@@ -104,8 +106,10 @@ static int imxrt1050_clocks_probe(struct platform_device *pdev)
+ 
+ 	/* CCM clocks */
+ 	ccm_base = devm_platform_ioremap_resource(pdev, 0);
+-	if (WARN_ON(IS_ERR(ccm_base)))
+-		return PTR_ERR(ccm_base);
++	if (WARN_ON(IS_ERR(ccm_base))) {
++		ret = PTR_ERR(ccm_base);
++		goto unregister_hws;
++	}
+ 
+ 	hws[IMXRT1050_CLK_ARM_PODF] = imx_clk_hw_divider("arm_podf", "pll1_arm", ccm_base + 0x10, 0, 3);
+ 	hws[IMXRT1050_CLK_PRE_PERIPH_SEL] = imx_clk_hw_mux("pre_periph_sel", ccm_base + 0x18, 18, 2,
+@@ -149,8 +153,12 @@ static int imxrt1050_clocks_probe(struct platform_device *pdev)
+ 	ret = of_clk_add_hw_provider(np, of_clk_hw_onecell_get, clk_hw_data);
+ 	if (ret < 0) {
+ 		dev_err(dev, "Failed to register clks for i.MXRT1050.\n");
+-		imx_unregister_hw_clocks(hws, IMXRT1050_CLK_END);
++		goto unregister_hws;
+ 	}
++	return 0;
++
++unregister_hws:
++	imx_unregister_hw_clocks(hws, IMXRT1050_CLK_END);
+ 	return ret;
+ }
+ static const struct of_device_id imxrt1050_clk_of_match[] = {
+diff --git a/drivers/clk/imx/clk-scu.c b/drivers/clk/imx/clk-scu.c
+index 1e6870f3671f6..db307890e4c16 100644
+--- a/drivers/clk/imx/clk-scu.c
++++ b/drivers/clk/imx/clk-scu.c
+@@ -707,11 +707,11 @@ struct clk_hw *imx_clk_scu_alloc_dev(const char *name,
+ 
+ void imx_clk_scu_unregister(void)
+ {
+-	struct imx_scu_clk_node *clk;
++	struct imx_scu_clk_node *clk, *n;
+ 	int i;
+ 
+ 	for (i = 0; i < IMX_SC_R_LAST; i++) {
+-		list_for_each_entry(clk, &imx_scu_clks[i], node) {
++		list_for_each_entry_safe(clk, n, &imx_scu_clks[i], node) {
+ 			clk_hw_unregister(clk->hw);
+ 			kfree(clk);
+ 		}
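
The clk-scu fix is the textbook list-teardown use-after-free: plain list_for_each_entry() loads pos->node.next after the loop body has run, so freeing pos inside the body walks freed memory on the next step. list_for_each_entry_safe() caches the successor in a second cursor first:

	#include <linux/list.h>
	#include <linux/slab.h>

	struct item {
		struct list_head node;
	};

	static void free_all(struct list_head *head)
	{
		struct item *pos, *n;

		/* 'n' already points at the successor when 'pos' is freed */
		list_for_each_entry_safe(pos, n, head, node) {
			list_del(&pos->node);
			kfree(pos);
		}
	}
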
+diff --git a/drivers/clk/keystone/sci-clk.c b/drivers/clk/keystone/sci-clk.c
+index 910ecd58c4ca2..6c1df4f11536d 100644
+--- a/drivers/clk/keystone/sci-clk.c
++++ b/drivers/clk/keystone/sci-clk.c
+@@ -294,6 +294,8 @@ static int _sci_clk_build(struct sci_clk_provider *provider,
+ 
+ 	name = kasprintf(GFP_KERNEL, "clk:%d:%d", sci_clk->dev_id,
+ 			 sci_clk->clk_id);
++	if (!name)
++		return -ENOMEM;
+ 
+ 	init.name = name;
+ 
+diff --git a/drivers/clk/mediatek/clk-mt8173-apmixedsys.c b/drivers/clk/mediatek/clk-mt8173-apmixedsys.c
+index 8c2aa8b0f39ea..307c24aa1fb41 100644
+--- a/drivers/clk/mediatek/clk-mt8173-apmixedsys.c
++++ b/drivers/clk/mediatek/clk-mt8173-apmixedsys.c
+@@ -148,11 +148,13 @@ static int clk_mt8173_apmixed_probe(struct platform_device *pdev)
+ 
+ 	base = of_iomap(node, 0);
+ 	if (!base)
+-		return PTR_ERR(base);
++		return -ENOMEM;
+ 
+ 	clk_data = mtk_alloc_clk_data(CLK_APMIXED_NR_CLK);
+-	if (IS_ERR_OR_NULL(clk_data))
++	if (IS_ERR_OR_NULL(clk_data)) {
++		iounmap(base);
+ 		return -ENOMEM;
++	}
+ 
+ 	fhctl_parse_dt(fhctl_node, pllfhs, ARRAY_SIZE(pllfhs));
+ 	r = mtk_clk_register_pllfhs(node, plls, ARRAY_SIZE(plls),
+@@ -186,6 +188,7 @@ unregister_plls:
+ 				  ARRAY_SIZE(pllfhs), clk_data);
+ free_clk_data:
+ 	mtk_free_clk_data(clk_data);
++	iounmap(base);
+ 	return r;
+ }
+ 
+diff --git a/drivers/clk/mediatek/clk-mtk.c b/drivers/clk/mediatek/clk-mtk.c
+index fd2214c3242f2..affaf52c82bd4 100644
+--- a/drivers/clk/mediatek/clk-mtk.c
++++ b/drivers/clk/mediatek/clk-mtk.c
+@@ -469,7 +469,7 @@ static int __mtk_clk_simple_probe(struct platform_device *pdev,
+ 	const struct platform_device_id *id;
+ 	const struct mtk_clk_desc *mcd;
+ 	struct clk_hw_onecell_data *clk_data;
+-	void __iomem *base;
++	void __iomem *base = NULL;
+ 	int num_clks, r;
+ 
+ 	mcd = device_get_match_data(&pdev->dev);
+@@ -483,8 +483,8 @@ static int __mtk_clk_simple_probe(struct platform_device *pdev,
+ 			return -EINVAL;
+ 	}
+ 
+-	/* Composite clocks needs us to pass iomem pointer */
+-	if (mcd->composite_clks) {
++	/* Composite and divider clocks needs us to pass iomem pointer */
++	if (mcd->composite_clks || mcd->divider_clks) {
+ 		if (!mcd->shared_io)
+ 			base = devm_platform_ioremap_resource(pdev, 0);
+ 		else
+@@ -500,8 +500,10 @@ static int __mtk_clk_simple_probe(struct platform_device *pdev,
+ 	num_clks += mcd->num_mux_clks + mcd->num_divider_clks;
+ 
+ 	clk_data = mtk_alloc_clk_data(num_clks);
+-	if (!clk_data)
+-		return -ENOMEM;
++	if (!clk_data) {
++		r = -ENOMEM;
++		goto free_base;
++	}
+ 
+ 	if (mcd->fixed_clks) {
+ 		r = mtk_clk_register_fixed_clks(mcd->fixed_clks,
+@@ -599,6 +601,7 @@ unregister_fixed_clks:
+ 					      mcd->num_fixed_clks, clk_data);
+ free_data:
+ 	mtk_free_clk_data(clk_data);
++free_base:
+ 	if (mcd->shared_io && base)
+ 		iounmap(base);
+ 	return r;
+diff --git a/drivers/clk/qcom/camcc-sc7180.c b/drivers/clk/qcom/camcc-sc7180.c
+index e2b4804695f37..8a4ba7a19ed12 100644
+--- a/drivers/clk/qcom/camcc-sc7180.c
++++ b/drivers/clk/qcom/camcc-sc7180.c
+@@ -1480,12 +1480,21 @@ static struct clk_branch cam_cc_sys_tmr_clk = {
+ 	},
+ };
+ 
++static struct gdsc titan_top_gdsc = {
++	.gdscr = 0xb134,
++	.pd = {
++		.name = "titan_top_gdsc",
++	},
++	.pwrsts = PWRSTS_OFF_ON,
++};
++
+ static struct gdsc bps_gdsc = {
+ 	.gdscr = 0x6004,
+ 	.pd = {
+ 		.name = "bps_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.parent = &titan_top_gdsc.pd,
+ 	.flags = HW_CTRL,
+ };
+ 
+@@ -1495,6 +1504,7 @@ static struct gdsc ife_0_gdsc = {
+ 		.name = "ife_0_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.parent = &titan_top_gdsc.pd,
+ };
+ 
+ static struct gdsc ife_1_gdsc = {
+@@ -1503,6 +1513,7 @@ static struct gdsc ife_1_gdsc = {
+ 		.name = "ife_1_gdsc",
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
++	.parent = &titan_top_gdsc.pd,
+ };
+ 
+ static struct gdsc ipe_0_gdsc = {
+@@ -1512,15 +1523,9 @@ static struct gdsc ipe_0_gdsc = {
+ 	},
+ 	.pwrsts = PWRSTS_OFF_ON,
+ 	.flags = HW_CTRL,
++	.parent = &titan_top_gdsc.pd,
+ };
+ 
+-static struct gdsc titan_top_gdsc = {
+-	.gdscr = 0xb134,
+-	.pd = {
+-		.name = "titan_top_gdsc",
+-	},
+-	.pwrsts = PWRSTS_OFF_ON,
+-};
+ 
+ static struct clk_hw *cam_cc_sc7180_hws[] = {
+ 	[CAM_CC_PLL2_OUT_EARLY] = &cam_cc_pll2_out_early.hw,
+diff --git a/drivers/clk/qcom/dispcc-qcm2290.c b/drivers/clk/qcom/dispcc-qcm2290.c
+index e9cfe41c04426..44dd5cfcc1504 100644
+--- a/drivers/clk/qcom/dispcc-qcm2290.c
++++ b/drivers/clk/qcom/dispcc-qcm2290.c
+@@ -24,9 +24,11 @@
+ 
+ enum {
+ 	P_BI_TCXO,
++	P_BI_TCXO_AO,
+ 	P_DISP_CC_PLL0_OUT_MAIN,
+ 	P_DSI0_PHY_PLL_OUT_BYTECLK,
+ 	P_DSI0_PHY_PLL_OUT_DSICLK,
++	P_GPLL0_OUT_DIV,
+ 	P_GPLL0_OUT_MAIN,
+ 	P_SLEEP_CLK,
+ };
+@@ -82,8 +84,8 @@ static const struct clk_parent_data disp_cc_parent_data_1[] = {
+ };
+ 
+ static const struct parent_map disp_cc_parent_map_2[] = {
+-	{ P_BI_TCXO, 0 },
+-	{ P_GPLL0_OUT_MAIN, 4 },
++	{ P_BI_TCXO_AO, 0 },
++	{ P_GPLL0_OUT_DIV, 4 },
+ };
+ 
+ static const struct clk_parent_data disp_cc_parent_data_2[] = {
+@@ -151,9 +153,9 @@ static struct clk_regmap_div disp_cc_mdss_byte0_div_clk_src = {
+ };
+ 
+ static const struct freq_tbl ftbl_disp_cc_mdss_ahb_clk_src[] = {
+-	F(19200000, P_BI_TCXO, 1, 0, 0),
+-	F(37500000, P_GPLL0_OUT_MAIN, 8, 0, 0),
+-	F(75000000, P_GPLL0_OUT_MAIN, 4, 0, 0),
++	F(19200000, P_BI_TCXO_AO, 1, 0, 0),
++	F(37500000, P_GPLL0_OUT_DIV, 8, 0, 0),
++	F(75000000, P_GPLL0_OUT_DIV, 4, 0, 0),
+ 	{ }
+ };
+ 
+diff --git a/drivers/clk/qcom/gcc-ipq5332.c b/drivers/clk/qcom/gcc-ipq5332.c
+index bdb4a0a11d07b..a75ab88ed14c6 100644
+--- a/drivers/clk/qcom/gcc-ipq5332.c
++++ b/drivers/clk/qcom/gcc-ipq5332.c
+@@ -20,8 +20,8 @@
+ #include "reset.h"
+ 
+ enum {
+-	DT_SLEEP_CLK,
+ 	DT_XO,
++	DT_SLEEP_CLK,
+ 	DT_PCIE_2LANE_PHY_PIPE_CLK,
+ 	DT_PCIE_2LANE_PHY_PIPE_CLK_X1,
+ 	DT_USB_PCIE_WRAPPER_PIPE_CLK,
+@@ -366,7 +366,7 @@ static struct clk_rcg2 gcc_adss_pwm_clk_src = {
+ };
+ 
+ static const struct freq_tbl ftbl_gcc_apss_axi_clk_src[] = {
+-	F(480000000, P_GPLL4_OUT_MAIN, 2.5, 0, 0),
++	F(480000000, P_GPLL4_OUT_AUX, 2.5, 0, 0),
+ 	F(533333333, P_GPLL0_OUT_MAIN, 1.5, 0, 0),
+ 	{ }
+ };
+@@ -963,7 +963,7 @@ static struct clk_rcg2 gcc_sdcc1_apps_clk_src = {
+ 		.name = "gcc_sdcc1_apps_clk_src",
+ 		.parent_data = gcc_parent_data_9,
+ 		.num_parents = ARRAY_SIZE(gcc_parent_data_9),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+ 
+diff --git a/drivers/clk/qcom/gcc-ipq6018.c b/drivers/clk/qcom/gcc-ipq6018.c
+index 3f9c2f61a5d93..cde62a11f5736 100644
+--- a/drivers/clk/qcom/gcc-ipq6018.c
++++ b/drivers/clk/qcom/gcc-ipq6018.c
+@@ -1654,7 +1654,7 @@ static struct clk_rcg2 sdcc1_apps_clk_src = {
+ 		.name = "sdcc1_apps_clk_src",
+ 		.parent_data = gcc_xo_gpll0_gpll2_gpll0_out_main_div2,
+ 		.num_parents = 4,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_floor_ops,
+ 	},
+ };
+ 
+@@ -4517,24 +4517,24 @@ static const struct qcom_reset_map gcc_ipq6018_resets[] = {
+ 	[GCC_PCIE0_AHB_ARES] = { 0x75040, 5 },
+ 	[GCC_PCIE0_AXI_MASTER_STICKY_ARES] = { 0x75040, 6 },
+ 	[GCC_PCIE0_AXI_SLAVE_STICKY_ARES] = { 0x75040, 7 },
+-	[GCC_PPE_FULL_RESET] = { 0x68014, 0 },
+-	[GCC_UNIPHY0_SOFT_RESET] = { 0x56004, 0 },
++	[GCC_PPE_FULL_RESET] = { .reg = 0x68014, .bitmask = 0xf0000 },
++	[GCC_UNIPHY0_SOFT_RESET] = { .reg = 0x56004, .bitmask = 0x3ff2 },
+ 	[GCC_UNIPHY0_XPCS_RESET] = { 0x56004, 2 },
+-	[GCC_UNIPHY1_SOFT_RESET] = { 0x56104, 0 },
++	[GCC_UNIPHY1_SOFT_RESET] = { .reg = 0x56104, .bitmask = 0x32 },
+ 	[GCC_UNIPHY1_XPCS_RESET] = { 0x56104, 2 },
+-	[GCC_EDMA_HW_RESET] = { 0x68014, 0 },
+-	[GCC_NSSPORT1_RESET] = { 0x68014, 0 },
+-	[GCC_NSSPORT2_RESET] = { 0x68014, 0 },
+-	[GCC_NSSPORT3_RESET] = { 0x68014, 0 },
+-	[GCC_NSSPORT4_RESET] = { 0x68014, 0 },
+-	[GCC_NSSPORT5_RESET] = { 0x68014, 0 },
+-	[GCC_UNIPHY0_PORT1_ARES] = { 0x56004, 0 },
+-	[GCC_UNIPHY0_PORT2_ARES] = { 0x56004, 0 },
+-	[GCC_UNIPHY0_PORT3_ARES] = { 0x56004, 0 },
+-	[GCC_UNIPHY0_PORT4_ARES] = { 0x56004, 0 },
+-	[GCC_UNIPHY0_PORT5_ARES] = { 0x56004, 0 },
+-	[GCC_UNIPHY0_PORT_4_5_RESET] = { 0x56004, 0 },
+-	[GCC_UNIPHY0_PORT_4_RESET] = { 0x56004, 0 },
++	[GCC_EDMA_HW_RESET] = { .reg = 0x68014, .bitmask = 0x300000 },
++	[GCC_NSSPORT1_RESET] = { .reg = 0x68014, .bitmask = 0x1000003 },
++	[GCC_NSSPORT2_RESET] = { .reg = 0x68014, .bitmask = 0x200000c },
++	[GCC_NSSPORT3_RESET] = { .reg = 0x68014, .bitmask = 0x4000030 },
++	[GCC_NSSPORT4_RESET] = { .reg = 0x68014, .bitmask = 0x8000300 },
++	[GCC_NSSPORT5_RESET] = { .reg = 0x68014, .bitmask = 0x10000c00 },
++	[GCC_UNIPHY0_PORT1_ARES] = { .reg = 0x56004, .bitmask = 0x30 },
++	[GCC_UNIPHY0_PORT2_ARES] = { .reg = 0x56004, .bitmask = 0xc0 },
++	[GCC_UNIPHY0_PORT3_ARES] = { .reg = 0x56004, .bitmask = 0x300 },
++	[GCC_UNIPHY0_PORT4_ARES] = { .reg = 0x56004, .bitmask = 0xc00 },
++	[GCC_UNIPHY0_PORT5_ARES] = { .reg = 0x56004, .bitmask = 0x3000 },
++	[GCC_UNIPHY0_PORT_4_5_RESET] = { .reg = 0x56004, .bitmask = 0x3c02 },
++	[GCC_UNIPHY0_PORT_4_RESET] = { .reg = 0x56004, .bitmask = 0xc02 },
+ 	[GCC_LPASS_BCR] = {0x1F000, 0},
+ 	[GCC_UBI32_TBU_BCR] = {0x65000, 0},
+ 	[GCC_LPASS_TBU_BCR] = {0x6C000, 0},
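
Several of the IPQ6018 resets fan out across multiple bits of the same register, which the positional { reg, bit } form of qcom_reset_map cannot express; the .bitmask form asserts all listed bits in one operation. A sketch of how the reset controller presumably consumes the map (assumption: a non-zero bitmask overrides the single-bit form, as in drivers/clk/qcom/reset.c):

	#include <linux/bits.h>
	#include <linux/regmap.h>

	static int qcom_reset_update(struct regmap *regmap, unsigned int reg,
				     u8 bit, u32 bitmask, bool assert)
	{
		u32 mask = bitmask ?: BIT(bit);

		return regmap_update_bits(regmap, reg, mask, assert ? mask : 0);
	}
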
+diff --git a/drivers/clk/qcom/gcc-qcm2290.c b/drivers/clk/qcom/gcc-qcm2290.c
+index 096deff2ba257..48995e50c6bd7 100644
+--- a/drivers/clk/qcom/gcc-qcm2290.c
++++ b/drivers/clk/qcom/gcc-qcm2290.c
+@@ -650,7 +650,7 @@ static struct clk_rcg2 gcc_usb30_prim_mock_utmi_clk_src = {
+ 		.name = "gcc_usb30_prim_mock_utmi_clk_src",
+ 		.parent_data = gcc_parents_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_0),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -686,7 +686,7 @@ static struct clk_rcg2 gcc_camss_axi_clk_src = {
+ 		.name = "gcc_camss_axi_clk_src",
+ 		.parent_data = gcc_parents_4,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_4),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -706,7 +706,7 @@ static struct clk_rcg2 gcc_camss_cci_clk_src = {
+ 		.name = "gcc_camss_cci_clk_src",
+ 		.parent_data = gcc_parents_9,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_9),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -728,7 +728,7 @@ static struct clk_rcg2 gcc_camss_csi0phytimer_clk_src = {
+ 		.name = "gcc_camss_csi0phytimer_clk_src",
+ 		.parent_data = gcc_parents_5,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_5),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -742,7 +742,7 @@ static struct clk_rcg2 gcc_camss_csi1phytimer_clk_src = {
+ 		.name = "gcc_camss_csi1phytimer_clk_src",
+ 		.parent_data = gcc_parents_5,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_5),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -764,7 +764,7 @@ static struct clk_rcg2 gcc_camss_mclk0_clk_src = {
+ 		.parent_data = gcc_parents_3,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_3),
+ 		.flags = CLK_OPS_PARENT_ENABLE,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -779,7 +779,7 @@ static struct clk_rcg2 gcc_camss_mclk1_clk_src = {
+ 		.parent_data = gcc_parents_3,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_3),
+ 		.flags = CLK_OPS_PARENT_ENABLE,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -794,7 +794,7 @@ static struct clk_rcg2 gcc_camss_mclk2_clk_src = {
+ 		.parent_data = gcc_parents_3,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_3),
+ 		.flags = CLK_OPS_PARENT_ENABLE,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -809,7 +809,7 @@ static struct clk_rcg2 gcc_camss_mclk3_clk_src = {
+ 		.parent_data = gcc_parents_3,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_3),
+ 		.flags = CLK_OPS_PARENT_ENABLE,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -830,7 +830,7 @@ static struct clk_rcg2 gcc_camss_ope_ahb_clk_src = {
+ 		.name = "gcc_camss_ope_ahb_clk_src",
+ 		.parent_data = gcc_parents_6,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_6),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -854,7 +854,7 @@ static struct clk_rcg2 gcc_camss_ope_clk_src = {
+ 		.parent_data = gcc_parents_6,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_6),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -888,7 +888,7 @@ static struct clk_rcg2 gcc_camss_tfe_0_clk_src = {
+ 		.name = "gcc_camss_tfe_0_clk_src",
+ 		.parent_data = gcc_parents_7,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_7),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -912,7 +912,7 @@ static struct clk_rcg2 gcc_camss_tfe_0_csid_clk_src = {
+ 		.name = "gcc_camss_tfe_0_csid_clk_src",
+ 		.parent_data = gcc_parents_8,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_8),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -926,7 +926,7 @@ static struct clk_rcg2 gcc_camss_tfe_1_clk_src = {
+ 		.name = "gcc_camss_tfe_1_clk_src",
+ 		.parent_data = gcc_parents_7,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_7),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -940,7 +940,7 @@ static struct clk_rcg2 gcc_camss_tfe_1_csid_clk_src = {
+ 		.name = "gcc_camss_tfe_1_csid_clk_src",
+ 		.parent_data = gcc_parents_8,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_8),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -963,7 +963,7 @@ static struct clk_rcg2 gcc_camss_tfe_cphy_rx_clk_src = {
+ 		.parent_data = gcc_parents_10,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_10),
+ 		.flags = CLK_OPS_PARENT_ENABLE,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -984,7 +984,7 @@ static struct clk_rcg2 gcc_camss_top_ahb_clk_src = {
+ 		.name = "gcc_camss_top_ahb_clk_src",
+ 		.parent_data = gcc_parents_4,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_4),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1006,7 +1006,7 @@ static struct clk_rcg2 gcc_gp1_clk_src = {
+ 		.name = "gcc_gp1_clk_src",
+ 		.parent_data = gcc_parents_2,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_2),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1020,7 +1020,7 @@ static struct clk_rcg2 gcc_gp2_clk_src = {
+ 		.name = "gcc_gp2_clk_src",
+ 		.parent_data = gcc_parents_2,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_2),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1034,7 +1034,7 @@ static struct clk_rcg2 gcc_gp3_clk_src = {
+ 		.name = "gcc_gp3_clk_src",
+ 		.parent_data = gcc_parents_2,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_2),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1054,7 +1054,7 @@ static struct clk_rcg2 gcc_pdm2_clk_src = {
+ 		.name = "gcc_pdm2_clk_src",
+ 		.parent_data = gcc_parents_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_0),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1082,7 +1082,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s0_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap0_s0_clk_src",
+ 	.parent_data = gcc_parents_1,
+ 	.num_parents = ARRAY_SIZE(gcc_parents_1),
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap0_s0_clk_src = {
+@@ -1098,7 +1098,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s1_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap0_s1_clk_src",
+ 	.parent_data = gcc_parents_1,
+ 	.num_parents = ARRAY_SIZE(gcc_parents_1),
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap0_s1_clk_src = {
+@@ -1114,7 +1114,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s2_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap0_s2_clk_src",
+ 	.parent_data = gcc_parents_1,
+ 	.num_parents = ARRAY_SIZE(gcc_parents_1),
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap0_s2_clk_src = {
+@@ -1130,7 +1130,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s3_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap0_s3_clk_src",
+ 	.parent_data = gcc_parents_1,
+ 	.num_parents = ARRAY_SIZE(gcc_parents_1),
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap0_s3_clk_src = {
+@@ -1146,7 +1146,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s4_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap0_s4_clk_src",
+ 	.parent_data = gcc_parents_1,
+ 	.num_parents = ARRAY_SIZE(gcc_parents_1),
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap0_s4_clk_src = {
+@@ -1162,7 +1162,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s5_clk_src_init = {
+ 	.name = "gcc_qupv3_wrap0_s5_clk_src",
+ 	.parent_data = gcc_parents_1,
+ 	.num_parents = ARRAY_SIZE(gcc_parents_1),
+-	.ops = &clk_rcg2_ops,
++	.ops = &clk_rcg2_shared_ops,
+ };
+ 
+ static struct clk_rcg2 gcc_qupv3_wrap0_s5_clk_src = {
+@@ -1219,7 +1219,7 @@ static struct clk_rcg2 gcc_sdcc1_ice_core_clk_src = {
+ 		.name = "gcc_sdcc1_ice_core_clk_src",
+ 		.parent_data = gcc_parents_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_0),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1266,7 +1266,7 @@ static struct clk_rcg2 gcc_usb30_prim_master_clk_src = {
+ 		.name = "gcc_usb30_prim_master_clk_src",
+ 		.parent_data = gcc_parents_0,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_0),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1280,7 +1280,7 @@ static struct clk_rcg2 gcc_usb3_prim_phy_aux_clk_src = {
+ 		.name = "gcc_usb3_prim_phy_aux_clk_src",
+ 		.parent_data = gcc_parents_13,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_13),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -1303,7 +1303,7 @@ static struct clk_rcg2 gcc_video_venus_clk_src = {
+ 		.parent_data = gcc_parents_14,
+ 		.num_parents = ARRAY_SIZE(gcc_parents_14),
+ 		.flags = CLK_SET_RATE_PARENT,
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+diff --git a/drivers/clk/qcom/mmcc-msm8974.c b/drivers/clk/qcom/mmcc-msm8974.c
+index 4273fce9a4a4c..82f6bad144a9a 100644
+--- a/drivers/clk/qcom/mmcc-msm8974.c
++++ b/drivers/clk/qcom/mmcc-msm8974.c
+@@ -485,7 +485,7 @@ static struct clk_rcg2 mdp_clk_src = {
+ 		.name = "mdp_clk_src",
+ 		.parent_data = mmcc_xo_mmpll0_dsi_hdmi_gpll0,
+ 		.num_parents = ARRAY_SIZE(mmcc_xo_mmpll0_dsi_hdmi_gpll0),
+-		.ops = &clk_rcg2_ops,
++		.ops = &clk_rcg2_shared_ops,
+ 	},
+ };
+ 
+@@ -2204,23 +2204,6 @@ static struct clk_branch ocmemcx_ocmemnoc_clk = {
+ 	},
+ };
+ 
+-static struct clk_branch oxili_ocmemgx_clk = {
+-	.halt_reg = 0x402c,
+-	.clkr = {
+-		.enable_reg = 0x402c,
+-		.enable_mask = BIT(0),
+-		.hw.init = &(struct clk_init_data){
+-			.name = "oxili_ocmemgx_clk",
+-			.parent_data = (const struct clk_parent_data[]){
+-				{ .fw_name = "gfx3d_clk_src", .name = "gfx3d_clk_src" },
+-			},
+-			.num_parents = 1,
+-			.flags = CLK_SET_RATE_PARENT,
+-			.ops = &clk_branch2_ops,
+-		},
+-	},
+-};
+-
+ static struct clk_branch ocmemnoc_clk = {
+ 	.halt_reg = 0x50b4,
+ 	.clkr = {
+@@ -2401,7 +2384,7 @@ static struct gdsc mdss_gdsc = {
+ 	.pd = {
+ 		.name = "mdss",
+ 	},
+-	.pwrsts = PWRSTS_RET_ON,
++	.pwrsts = PWRSTS_OFF_ON,
+ };
+ 
+ static struct gdsc camss_jpeg_gdsc = {
+@@ -2512,7 +2495,6 @@ static struct clk_regmap *mmcc_msm8226_clocks[] = {
+ 	[MMSS_MMSSNOC_AXI_CLK] = &mmss_mmssnoc_axi_clk.clkr,
+ 	[MMSS_S0_AXI_CLK] = &mmss_s0_axi_clk.clkr,
+ 	[OCMEMCX_AHB_CLK] = &ocmemcx_ahb_clk.clkr,
+-	[OXILI_OCMEMGX_CLK] = &oxili_ocmemgx_clk.clkr,
+ 	[OXILI_GFX3D_CLK] = &oxili_gfx3d_clk.clkr,
+ 	[OXILICX_AHB_CLK] = &oxilicx_ahb_clk.clkr,
+ 	[OXILICX_AXI_CLK] = &oxilicx_axi_clk.clkr,
+@@ -2670,7 +2652,6 @@ static struct clk_regmap *mmcc_msm8974_clocks[] = {
+ 	[MMSS_S0_AXI_CLK] = &mmss_s0_axi_clk.clkr,
+ 	[OCMEMCX_AHB_CLK] = &ocmemcx_ahb_clk.clkr,
+ 	[OCMEMCX_OCMEMNOC_CLK] = &ocmemcx_ocmemnoc_clk.clkr,
+-	[OXILI_OCMEMGX_CLK] = &oxili_ocmemgx_clk.clkr,
+ 	[OCMEMNOC_CLK] = &ocmemnoc_clk.clkr,
+ 	[OXILI_GFX3D_CLK] = &oxili_gfx3d_clk.clkr,
+ 	[OXILICX_AHB_CLK] = &oxilicx_ahb_clk.clkr,
+diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c
+index 93b02cdc98c25..ca8b921c77625 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.c
++++ b/drivers/clk/renesas/rzg2l-cpg.c
+@@ -603,10 +603,8 @@ static int rzg2l_cpg_sipll5_set_rate(struct clk_hw *hw,
+ 	}
+ 
+ 	/* Output clock setting 1 */
+-	writel(CPG_SIPLL5_CLK1_POSTDIV1_WEN | CPG_SIPLL5_CLK1_POSTDIV2_WEN |
+-	       CPG_SIPLL5_CLK1_REFDIV_WEN  | (params.pl5_postdiv1 << 0) |
+-	       (params.pl5_postdiv2 << 4) | (params.pl5_refdiv << 8),
+-	       priv->base + CPG_SIPLL5_CLK1);
++	writel((params.pl5_postdiv1 << 0) | (params.pl5_postdiv2 << 4) |
++	       (params.pl5_refdiv << 8), priv->base + CPG_SIPLL5_CLK1);
+ 
+ 	/* Output clock setting, SSCG modulation value setting 3 */
+ 	writel((params.pl5_fracin << 8), priv->base + CPG_SIPLL5_CLK3);
+diff --git a/drivers/clk/renesas/rzg2l-cpg.h b/drivers/clk/renesas/rzg2l-cpg.h
+index eee780276a9e2..6cee9e56acc72 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.h
++++ b/drivers/clk/renesas/rzg2l-cpg.h
+@@ -32,9 +32,6 @@
+ #define CPG_SIPLL5_STBY_RESETB_WEN	BIT(16)
+ #define CPG_SIPLL5_STBY_SSCG_EN_WEN	BIT(18)
+ #define CPG_SIPLL5_STBY_DOWNSPREAD_WEN	BIT(20)
+-#define CPG_SIPLL5_CLK1_POSTDIV1_WEN	BIT(16)
+-#define CPG_SIPLL5_CLK1_POSTDIV2_WEN	BIT(20)
+-#define CPG_SIPLL5_CLK1_REFDIV_WEN	BIT(24)
+ #define CPG_SIPLL5_CLK4_RESV_LSB	(0xFF)
+ #define CPG_SIPLL5_MON_PLL5_LOCK	BIT(4)
+ 
+diff --git a/drivers/clk/tegra/clk-tegra124-emc.c b/drivers/clk/tegra/clk-tegra124-emc.c
+index 219c80653dbdb..2a6db04342815 100644
+--- a/drivers/clk/tegra/clk-tegra124-emc.c
++++ b/drivers/clk/tegra/clk-tegra124-emc.c
+@@ -464,6 +464,7 @@ static int load_timings_from_dt(struct tegra_clk_emc *tegra,
+ 		err = load_one_timing_from_dt(tegra, timing, child);
+ 		if (err) {
+ 			of_node_put(child);
++			kfree(tegra->timings);
+ 			return err;
+ 		}
+ 
+@@ -515,6 +516,7 @@ struct clk *tegra124_clk_register_emc(void __iomem *base, struct device_node *np
+ 		err = load_timings_from_dt(tegra, node, node_ram_code);
+ 		if (err) {
+ 			of_node_put(node);
++			kfree(tegra);
+ 			return ERR_PTR(err);
+ 		}
+ 	}
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index b6fce916967ce..8c40f10280b74 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -258,6 +258,9 @@ static const char * __init clkctrl_get_clock_name(struct device_node *np,
+ 	if (clkctrl_name && !legacy_naming) {
+ 		clock_name = kasprintf(GFP_KERNEL, "%s-clkctrl:%04x:%d",
+ 				       clkctrl_name, offset, index);
++		if (!clock_name)
++			return NULL;
++
+ 		strreplace(clock_name, '_', '-');
+ 
+ 		return clock_name;
+@@ -586,6 +589,10 @@ static void __init _ti_omap4_clkctrl_setup(struct device_node *node)
+ 	if (clkctrl_name) {
+ 		provider->clkdm_name = kasprintf(GFP_KERNEL,
+ 						 "%s_clkdm", clkctrl_name);
++		if (!provider->clkdm_name) {
++			kfree(provider);
++			return;
++		}
+ 		goto clkdm_found;
+ 	}
+ 
+diff --git a/drivers/clk/xilinx/clk-xlnx-clock-wizard.c b/drivers/clk/xilinx/clk-xlnx-clock-wizard.c
+index e83f104fad029..d56822ce6126c 100644
+--- a/drivers/clk/xilinx/clk-xlnx-clock-wizard.c
++++ b/drivers/clk/xilinx/clk-xlnx-clock-wizard.c
+@@ -525,7 +525,7 @@ static struct clk *clk_wzrd_register_divider(struct device *dev,
+ 	hw = &div->hw;
+ 	ret = devm_clk_hw_register(dev, hw);
+ 	if (ret)
+-		hw = ERR_PTR(ret);
++		return ERR_PTR(ret);
+ 
+ 	return hw->clk;
+ }
+@@ -648,6 +648,11 @@ static int clk_wzrd_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	clkout_name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s_out0", dev_name(&pdev->dev));
++	if (!clkout_name) {
++		ret = -ENOMEM;
++		goto err_disable_clk;
++	}
++
+ 	if (nr_outputs == 1) {
+ 		clk_wzrd->clkout[0] = clk_wzrd_register_divider
+ 				(&pdev->dev, clkout_name,
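
The clk-wizard change deserves a second look because the original bug is subtle: on failure the old code stored ERR_PTR(ret) into hw and then fell through to return hw->clk, dereferencing the encoded error value. In miniature:

	#include <linux/clk-provider.h>
	#include <linux/err.h>

	static struct clk *register_div(struct device *dev, struct clk_hw *hw)
	{
		int ret = devm_clk_hw_register(dev, hw);

		if (ret)
			return ERR_PTR(ret);	/* not: hw = ERR_PTR(ret); */

		return hw->clk;			/* only valid on success */
	}
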
+diff --git a/drivers/clocksource/timer-cadence-ttc.c b/drivers/clocksource/timer-cadence-ttc.c
+index 4efd0cf3b602d..0d52e28fea4de 100644
+--- a/drivers/clocksource/timer-cadence-ttc.c
++++ b/drivers/clocksource/timer-cadence-ttc.c
+@@ -486,10 +486,10 @@ static int __init ttc_timer_probe(struct platform_device *pdev)
+ 	 * and use it. Note that the event timer uses the interrupt and it's the
+ 	 * 2nd TTC hence the irq_of_parse_and_map(,1)
+ 	 */
+-	timer_baseaddr = of_iomap(timer, 0);
+-	if (!timer_baseaddr) {
++	timer_baseaddr = devm_of_iomap(&pdev->dev, timer, 0, NULL);
++	if (IS_ERR(timer_baseaddr)) {
+ 		pr_err("ERROR: invalid timer base address\n");
+-		return -ENXIO;
++		return PTR_ERR(timer_baseaddr);
+ 	}
+ 
+ 	irq = irq_of_parse_and_map(timer, 1);
+@@ -513,20 +513,27 @@ static int __init ttc_timer_probe(struct platform_device *pdev)
+ 	clk_ce = of_clk_get(timer, clksel);
+ 	if (IS_ERR(clk_ce)) {
+ 		pr_err("ERROR: timer input clock not found\n");
+-		return PTR_ERR(clk_ce);
++		ret = PTR_ERR(clk_ce);
++		goto put_clk_cs;
+ 	}
+ 
+ 	ret = ttc_setup_clocksource(clk_cs, timer_baseaddr, timer_width);
+ 	if (ret)
+-		return ret;
++		goto put_clk_ce;
+ 
+ 	ret = ttc_setup_clockevent(clk_ce, timer_baseaddr + 4, irq);
+ 	if (ret)
+-		return ret;
++		goto put_clk_ce;
+ 
+ 	pr_info("%pOFn #0 at %p, irq=%d\n", timer, timer_baseaddr, irq);
+ 
+ 	return 0;
++
++put_clk_ce:
++	clk_put(clk_ce);
++put_clk_cs:
++	clk_put(clk_cs);
++	return ret;
+ }
+ 
+ static const struct of_device_id ttc_timer_of_match[] = {
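
The TTC probe rework is the standard acquire-in-order, release-in-reverse goto ladder: each of_clk_get() takes a reference that every later failure must drop. The skeleton, with an invented setup helper:

	#include <linux/clk.h>
	#include <linux/err.h>
	#include <linux/of.h>

	static int probe_two_clocks(struct device_node *np)
	{
		struct clk *cs, *ce;
		int ret;

		cs = of_clk_get(np, 0);
		if (IS_ERR(cs))
			return PTR_ERR(cs);

		ce = of_clk_get(np, 1);
		if (IS_ERR(ce)) {
			ret = PTR_ERR(ce);
			goto put_cs;
		}

		ret = use_clocks(cs, ce);	/* illustrative */
		if (ret)
			goto put_ce;
		return 0;

	put_ce:
		clk_put(ce);
	put_cs:
		clk_put(cs);
		return ret;
	}
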
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 2548ec92faa28..f29182512b982 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -824,6 +824,8 @@ static ssize_t store_energy_performance_preference(
+ 			err = cpufreq_start_governor(policy);
+ 			if (!ret)
+ 				ret = err;
++		} else {
++			ret = 0;
+ 		}
+ 	}
+ 
+diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c
+index 9a39a7ccfae96..fef68cb2b38f7 100644
+--- a/drivers/cpufreq/mediatek-cpufreq.c
++++ b/drivers/cpufreq/mediatek-cpufreq.c
+@@ -696,9 +696,16 @@ static const struct mtk_cpufreq_platform_data mt2701_platform_data = {
+ static const struct mtk_cpufreq_platform_data mt7622_platform_data = {
+ 	.min_volt_shift = 100000,
+ 	.max_volt_shift = 200000,
+-	.proc_max_volt = 1360000,
++	.proc_max_volt = 1350000,
+ 	.sram_min_volt = 0,
+-	.sram_max_volt = 1360000,
++	.sram_max_volt = 1350000,
++	.ccifreq_supported = false,
++};
++
++static const struct mtk_cpufreq_platform_data mt7623_platform_data = {
++	.min_volt_shift = 100000,
++	.max_volt_shift = 200000,
++	.proc_max_volt = 1300000,
+ 	.ccifreq_supported = false,
+ };
+ 
+@@ -734,7 +741,7 @@ static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
+ 	{ .compatible = "mediatek,mt2701", .data = &mt2701_platform_data },
+ 	{ .compatible = "mediatek,mt2712", .data = &mt2701_platform_data },
+ 	{ .compatible = "mediatek,mt7622", .data = &mt7622_platform_data },
+-	{ .compatible = "mediatek,mt7623", .data = &mt7622_platform_data },
++	{ .compatible = "mediatek,mt7623", .data = &mt7623_platform_data },
+ 	{ .compatible = "mediatek,mt8167", .data = &mt8516_platform_data },
+ 	{ .compatible = "mediatek,mt817x", .data = &mt2701_platform_data },
+ 	{ .compatible = "mediatek,mt8173", .data = &mt2701_platform_data },
+diff --git a/drivers/cpufreq/tegra194-cpufreq.c b/drivers/cpufreq/tegra194-cpufreq.c
+index c8d03346068ab..36dad5ea59475 100644
+--- a/drivers/cpufreq/tegra194-cpufreq.c
++++ b/drivers/cpufreq/tegra194-cpufreq.c
+@@ -686,8 +686,10 @@ static int tegra194_cpufreq_probe(struct platform_device *pdev)
+ 
+ 	/* Check for optional OPPv2 and interconnect paths on CPU0 to enable ICC scaling */
+ 	cpu_dev = get_cpu_device(0);
+-	if (!cpu_dev)
+-		return -EPROBE_DEFER;
++	if (!cpu_dev) {
++		err = -EPROBE_DEFER;
++		goto err_free_res;
++	}
+ 
+ 	if (dev_pm_opp_of_get_opp_desc_node(cpu_dev)) {
+ 		err = dev_pm_opp_of_find_icc_paths(cpu_dev, NULL);
+diff --git a/drivers/crypto/intel/qat/qat_common/qat_asym_algs.c b/drivers/crypto/intel/qat/qat_common/qat_asym_algs.c
+index 935a7e012946e..4128200a90329 100644
+--- a/drivers/crypto/intel/qat/qat_common/qat_asym_algs.c
++++ b/drivers/crypto/intel/qat/qat_common/qat_asym_algs.c
+@@ -170,15 +170,14 @@ static void qat_dh_cb(struct icp_qat_fw_pke_resp *resp)
+ 	}
+ 
+ 	areq->dst_len = req->ctx.dh->p_size;
++	dma_unmap_single(dev, req->out.dh.r, req->ctx.dh->p_size,
++			 DMA_FROM_DEVICE);
+ 	if (req->dst_align) {
+ 		scatterwalk_map_and_copy(req->dst_align, areq->dst, 0,
+ 					 areq->dst_len, 1);
+ 		kfree_sensitive(req->dst_align);
+ 	}
+ 
+-	dma_unmap_single(dev, req->out.dh.r, req->ctx.dh->p_size,
+-			 DMA_FROM_DEVICE);
+-
+ 	dma_unmap_single(dev, req->phy_in, sizeof(struct qat_dh_input_params),
+ 			 DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, req->phy_out,
+@@ -521,12 +520,14 @@ static void qat_rsa_cb(struct icp_qat_fw_pke_resp *resp)
+ 
+ 	err = (err == ICP_QAT_FW_COMN_STATUS_FLAG_OK) ? 0 : -EINVAL;
+ 
+-	kfree_sensitive(req->src_align);
+-
+ 	dma_unmap_single(dev, req->in.rsa.enc.m, req->ctx.rsa->key_sz,
+ 			 DMA_TO_DEVICE);
+ 
++	kfree_sensitive(req->src_align);
++
+ 	areq->dst_len = req->ctx.rsa->key_sz;
++	dma_unmap_single(dev, req->out.rsa.enc.c, req->ctx.rsa->key_sz,
++			 DMA_FROM_DEVICE);
+ 	if (req->dst_align) {
+ 		scatterwalk_map_and_copy(req->dst_align, areq->dst, 0,
+ 					 areq->dst_len, 1);
+@@ -534,9 +535,6 @@ static void qat_rsa_cb(struct icp_qat_fw_pke_resp *resp)
+ 		kfree_sensitive(req->dst_align);
+ 	}
+ 
+-	dma_unmap_single(dev, req->out.rsa.enc.c, req->ctx.rsa->key_sz,
+-			 DMA_FROM_DEVICE);
+-
+ 	dma_unmap_single(dev, req->phy_in, sizeof(struct qat_rsa_input_params),
+ 			 DMA_TO_DEVICE);
+ 	dma_unmap_single(dev, req->phy_out,
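
Both QAT hunks reorder the same two steps: the CPU copy out of a DMA_FROM_DEVICE buffer now happens only after dma_unmap_single(), since unmapping is what hands buffer ownership back to the CPU and, on non-coherent systems, invalidates stale cache lines. Schematically:

	#include <linux/dma-mapping.h>
	#include <linux/string.h>

	static void copy_out(struct device *dev, dma_addr_t handle,
			     const void *cpu_buf, void *dst, size_t len)
	{
		/* ownership returns to the CPU here ... */
		dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
		/* ... so this read is guaranteed to see the device's writes */
		memcpy(dst, cpu_buf, len);
	}
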
+diff --git a/drivers/crypto/marvell/cesa/cipher.c b/drivers/crypto/marvell/cesa/cipher.c
+index c6f2fa753b7c0..0f37dfd42d850 100644
+--- a/drivers/crypto/marvell/cesa/cipher.c
++++ b/drivers/crypto/marvell/cesa/cipher.c
+@@ -297,7 +297,7 @@ static int mv_cesa_des_setkey(struct crypto_skcipher *cipher, const u8 *key,
+ static int mv_cesa_des3_ede_setkey(struct crypto_skcipher *cipher,
+ 				   const u8 *key, unsigned int len)
+ {
+-	struct mv_cesa_des_ctx *ctx = crypto_skcipher_ctx(cipher);
++	struct mv_cesa_des3_ctx *ctx = crypto_skcipher_ctx(cipher);
+ 	int err;
+ 
+ 	err = verify_skcipher_des3_key(cipher, key);
+diff --git a/drivers/crypto/nx/Makefile b/drivers/crypto/nx/Makefile
+index d00181a26dd65..483cef62acee8 100644
+--- a/drivers/crypto/nx/Makefile
++++ b/drivers/crypto/nx/Makefile
+@@ -1,7 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ obj-$(CONFIG_CRYPTO_DEV_NX_ENCRYPT) += nx-crypto.o
+ nx-crypto-objs := nx.o \
+-		  nx_debugfs.o \
+ 		  nx-aes-cbc.o \
+ 		  nx-aes-ecb.o \
+ 		  nx-aes-gcm.o \
+@@ -11,6 +10,7 @@ nx-crypto-objs := nx.o \
+ 		  nx-sha256.o \
+ 		  nx-sha512.o
+ 
++nx-crypto-$(CONFIG_DEBUG_FS) += nx_debugfs.o
+ obj-$(CONFIG_CRYPTO_DEV_NX_COMPRESS_PSERIES) += nx-compress-pseries.o nx-compress.o
+ obj-$(CONFIG_CRYPTO_DEV_NX_COMPRESS_POWERNV) += nx-compress-powernv.o nx-compress.o
+ nx-compress-objs := nx-842.o
+diff --git a/drivers/crypto/nx/nx.h b/drivers/crypto/nx/nx.h
+index c6233173c612e..2697baebb6a35 100644
+--- a/drivers/crypto/nx/nx.h
++++ b/drivers/crypto/nx/nx.h
+@@ -170,8 +170,8 @@ struct nx_sg *nx_walk_and_build(struct nx_sg *, unsigned int,
+ void nx_debugfs_init(struct nx_crypto_driver *);
+ void nx_debugfs_fini(struct nx_crypto_driver *);
+ #else
+-#define NX_DEBUGFS_INIT(drv)	(0)
+-#define NX_DEBUGFS_FINI(drv)	(0)
++#define NX_DEBUGFS_INIT(drv)	do {} while (0)
++#define NX_DEBUGFS_FINI(drv)	do {} while (0)
+ #endif
+ 
+ #define NX_PAGE_NUM(x)		((u64)(x) & 0xfffffffffffff000ULL)
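
The nx.h stubs move from (0) to the kernel's canonical do { } while (0), which expands to a true statement rather than a value-bearing expression, silencing statement-with-no-effect warnings while staying safe in any statement position; the Makefile half of the change is the matching kbuild idiom, appending nx_debugfs.o to the composite object only when CONFIG_DEBUG_FS=y. The macro shape:

	/* expands to exactly one statement */
	#define NOOP()	do { } while (0)

	void demo(int cond)
	{
		if (cond)
			NOOP();		/* fine without braces */
		else
			NOOP();
	}
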
+diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
+index f822de44bee0a..bfdd424d68970 100644
+--- a/drivers/cxl/core/region.c
++++ b/drivers/cxl/core/region.c
+@@ -125,10 +125,38 @@ static struct cxl_region_ref *cxl_rr_load(struct cxl_port *port,
+ 	return xa_load(&port->regions, (unsigned long)cxlr);
+ }
+ 
++static int cxl_region_invalidate_memregion(struct cxl_region *cxlr)
++{
++	if (!cpu_cache_has_invalidate_memregion()) {
++		if (IS_ENABLED(CONFIG_CXL_REGION_INVALIDATION_TEST)) {
++			dev_warn_once(
++				&cxlr->dev,
++				"Bypassing cpu_cache_invalidate_memregion() for testing!\n");
++			return 0;
++		} else {
++			dev_err(&cxlr->dev,
++				"Failed to synchronize CPU cache state\n");
++			return -ENXIO;
++		}
++	}
++
++	cpu_cache_invalidate_memregion(IORES_DESC_CXL);
++	return 0;
++}
++
+ static int cxl_region_decode_reset(struct cxl_region *cxlr, int count)
+ {
+ 	struct cxl_region_params *p = &cxlr->params;
+-	int i;
++	int i, rc = 0;
++
++	/*
++	 * Before region teardown attempt to flush, and if the flush
++	 * fails cancel the region teardown for data consistency
++	 * concerns
++	 */
++	rc = cxl_region_invalidate_memregion(cxlr);
++	if (rc)
++		return rc;
+ 
+ 	for (i = count - 1; i >= 0; i--) {
+ 		struct cxl_endpoint_decoder *cxled = p->targets[i];
+@@ -136,7 +164,6 @@ static int cxl_region_decode_reset(struct cxl_region *cxlr, int count)
+ 		struct cxl_port *iter = cxled_to_port(cxled);
+ 		struct cxl_dev_state *cxlds = cxlmd->cxlds;
+ 		struct cxl_ep *ep;
+-		int rc = 0;
+ 
+ 		if (cxlds->rcd)
+ 			goto endpoint_reset;
+@@ -155,14 +182,19 @@ static int cxl_region_decode_reset(struct cxl_region *cxlr, int count)
+ 				rc = cxld->reset(cxld);
+ 			if (rc)
+ 				return rc;
++			set_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags);
+ 		}
+ 
+ endpoint_reset:
+ 		rc = cxled->cxld.reset(&cxled->cxld);
+ 		if (rc)
+ 			return rc;
++		set_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags);
+ 	}
+ 
++	/* all decoders associated with this region have been torn down */
++	clear_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags);
++
+ 	return 0;
+ }
+ 
+@@ -256,9 +288,19 @@ static ssize_t commit_store(struct device *dev, struct device_attribute *attr,
+ 		goto out;
+ 	}
+ 
+-	if (commit)
++	/*
++	 * Invalidate caches before region setup to drop any speculative
++	 * consumption of this address space
++	 */
++	rc = cxl_region_invalidate_memregion(cxlr);
++	if (rc)
++		return rc;
++
++	if (commit) {
+ 		rc = cxl_region_decode_commit(cxlr);
+-	else {
++		if (rc == 0)
++			p->state = CXL_CONFIG_COMMIT;
++	} else {
+ 		p->state = CXL_CONFIG_RESET_PENDING;
+ 		up_write(&cxl_region_rwsem);
+ 		device_release_driver(&cxlr->dev);
+@@ -268,18 +310,20 @@ static ssize_t commit_store(struct device *dev, struct device_attribute *attr,
+ 		 * The lock was dropped, so need to revalidate that the reset is
+ 		 * still pending.
+ 		 */
+-		if (p->state == CXL_CONFIG_RESET_PENDING)
++		if (p->state == CXL_CONFIG_RESET_PENDING) {
+ 			rc = cxl_region_decode_reset(cxlr, p->interleave_ways);
++			/*
++			 * Revert to committed since there may still be active
++			 * decoders associated with this region, or move forward
++			 * to active to mark the reset successful
++			 */
++			if (rc)
++				p->state = CXL_CONFIG_COMMIT;
++			else
++				p->state = CXL_CONFIG_ACTIVE;
++		}
+ 	}
+ 
+-	if (rc)
+-		goto out;
+-
+-	if (commit)
+-		p->state = CXL_CONFIG_COMMIT;
+-	else if (p->state == CXL_CONFIG_RESET_PENDING)
+-		p->state = CXL_CONFIG_ACTIVE;
+-
+ out:
+ 	up_write(&cxl_region_rwsem);
+ 
+@@ -1674,7 +1718,6 @@ static int cxl_region_attach(struct cxl_region *cxlr,
+ 		if (rc)
+ 			goto err_decrement;
+ 		p->state = CXL_CONFIG_ACTIVE;
+-		set_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags);
+ 	}
+ 
+ 	cxled->cxld.interleave_ways = p->interleave_ways;
+@@ -2803,30 +2846,6 @@ out:
+ }
+ EXPORT_SYMBOL_NS_GPL(cxl_add_to_region, CXL);
+ 
+-static int cxl_region_invalidate_memregion(struct cxl_region *cxlr)
+-{
+-	if (!test_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags))
+-		return 0;
+-
+-	if (!cpu_cache_has_invalidate_memregion()) {
+-		if (IS_ENABLED(CONFIG_CXL_REGION_INVALIDATION_TEST)) {
+-			dev_warn_once(
+-				&cxlr->dev,
+-				"Bypassing cpu_cache_invalidate_memregion() for testing!\n");
+-			clear_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags);
+-			return 0;
+-		} else {
+-			dev_err(&cxlr->dev,
+-				"Failed to synchronize CPU cache state\n");
+-			return -ENXIO;
+-		}
+-	}
+-
+-	cpu_cache_invalidate_memregion(IORES_DESC_CXL);
+-	clear_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags);
+-	return 0;
+-}
+-
+ static int is_system_ram(struct resource *res, void *arg)
+ {
+ 	struct cxl_region *cxlr = arg;
+@@ -2854,7 +2873,12 @@ static int cxl_region_probe(struct device *dev)
+ 		goto out;
+ 	}
+ 
+-	rc = cxl_region_invalidate_memregion(cxlr);
++	if (test_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags)) {
++		dev_err(&cxlr->dev,
++			"failed to activate, re-commit region and retry\n");
++		rc = -ENXIO;
++		goto out;
++	}
+ 
+ 	/*
+ 	 * From this point on any path that changes the region's state away from
+diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
+index 044a92d9813e2..dcebe48bb5bb5 100644
+--- a/drivers/cxl/cxl.h
++++ b/drivers/cxl/cxl.h
+@@ -462,18 +462,20 @@ struct cxl_region_params {
+ 	int nr_targets;
+ };
+ 
+-/*
+- * Flag whether this region needs to have its HPA span synchronized with
+- * CPU cache state at region activation time.
+- */
+-#define CXL_REGION_F_INCOHERENT 0
+-
+ /*
+  * Indicate whether this region has been assembled by autodetection or
+  * userspace assembly. Prevent endpoint decoders outside of automatic
+  * detection from being added to the region.
+  */
+-#define CXL_REGION_F_AUTO 1
++#define CXL_REGION_F_AUTO 0
++
++/*
++ * Require that a committed region successfully complete a teardown once
++ * any of its associated decoders have been torn down. This maintains
++ * the commit state for the region since there are committed decoders,
++ * but blocks cxl_region_probe().
++ */
++#define CXL_REGION_F_NEEDS_RESET 1
+ 
+ /**
+  * struct cxl_region - CXL region
+diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c
+index 227800053309f..e7c61358564e1 100644
+--- a/drivers/dax/bus.c
++++ b/drivers/dax/bus.c
+@@ -446,18 +446,34 @@ static void unregister_dev_dax(void *dev)
+ 	put_device(dev);
+ }
+ 
++static void dax_region_free(struct kref *kref)
++{
++	struct dax_region *dax_region;
++
++	dax_region = container_of(kref, struct dax_region, kref);
++	kfree(dax_region);
++}
++
++void dax_region_put(struct dax_region *dax_region)
++{
++	kref_put(&dax_region->kref, dax_region_free);
++}
++EXPORT_SYMBOL_GPL(dax_region_put);
++
+ /* a return value >= 0 indicates this invocation invalidated the id */
+ static int __free_dev_dax_id(struct dev_dax *dev_dax)
+ {
+-	struct dax_region *dax_region = dev_dax->region;
+ 	struct device *dev = &dev_dax->dev;
++	struct dax_region *dax_region;
+ 	int rc = dev_dax->id;
+ 
+ 	device_lock_assert(dev);
+ 
+-	if (is_static(dax_region) || dev_dax->id < 0)
++	if (!dev_dax->dyn_id || dev_dax->id < 0)
+ 		return -1;
++	dax_region = dev_dax->region;
+ 	ida_free(&dax_region->ida, dev_dax->id);
++	dax_region_put(dax_region);
+ 	dev_dax->id = -1;
+ 	return rc;
+ }
+@@ -473,6 +489,20 @@ static int free_dev_dax_id(struct dev_dax *dev_dax)
+ 	return rc;
+ }
+ 
++static int alloc_dev_dax_id(struct dev_dax *dev_dax)
++{
++	struct dax_region *dax_region = dev_dax->region;
++	int id;
++
++	id = ida_alloc(&dax_region->ida, GFP_KERNEL);
++	if (id < 0)
++		return id;
++	kref_get(&dax_region->kref);
++	dev_dax->dyn_id = true;
++	dev_dax->id = id;
++	return id;
++}
++
+ static ssize_t delete_store(struct device *dev, struct device_attribute *attr,
+ 		const char *buf, size_t len)
+ {
+@@ -560,20 +590,6 @@ static const struct attribute_group *dax_region_attribute_groups[] = {
+ 	NULL,
+ };
+ 
+-static void dax_region_free(struct kref *kref)
+-{
+-	struct dax_region *dax_region;
+-
+-	dax_region = container_of(kref, struct dax_region, kref);
+-	kfree(dax_region);
+-}
+-
+-void dax_region_put(struct dax_region *dax_region)
+-{
+-	kref_put(&dax_region->kref, dax_region_free);
+-}
+-EXPORT_SYMBOL_GPL(dax_region_put);
+-
+ static void dax_region_unregister(void *region)
+ {
+ 	struct dax_region *dax_region = region;
+@@ -635,10 +651,12 @@ EXPORT_SYMBOL_GPL(alloc_dax_region);
+ static void dax_mapping_release(struct device *dev)
+ {
+ 	struct dax_mapping *mapping = to_dax_mapping(dev);
+-	struct dev_dax *dev_dax = to_dev_dax(dev->parent);
++	struct device *parent = dev->parent;
++	struct dev_dax *dev_dax = to_dev_dax(parent);
+ 
+ 	ida_free(&dev_dax->ida, mapping->id);
+ 	kfree(mapping);
++	put_device(parent);
+ }
+ 
+ static void unregister_dax_mapping(void *data)
+@@ -778,6 +796,7 @@ static int devm_register_dax_mapping(struct dev_dax *dev_dax, int range_id)
+ 	dev = &mapping->dev;
+ 	device_initialize(dev);
+ 	dev->parent = &dev_dax->dev;
++	get_device(dev->parent);
+ 	dev->type = &dax_mapping_type;
+ 	dev_set_name(dev, "mapping%d", mapping->id);
+ 	rc = device_add(dev);
+@@ -1295,12 +1314,10 @@ static const struct attribute_group *dax_attribute_groups[] = {
+ static void dev_dax_release(struct device *dev)
+ {
+ 	struct dev_dax *dev_dax = to_dev_dax(dev);
+-	struct dax_region *dax_region = dev_dax->region;
+ 	struct dax_device *dax_dev = dev_dax->dax_dev;
+ 
+ 	put_dax(dax_dev);
+ 	free_dev_dax_id(dev_dax);
+-	dax_region_put(dax_region);
+ 	kfree(dev_dax->pgmap);
+ 	kfree(dev_dax);
+ }
+@@ -1324,6 +1341,7 @@ struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data)
+ 	if (!dev_dax)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	dev_dax->region = dax_region;
+ 	if (is_static(dax_region)) {
+ 		if (dev_WARN_ONCE(parent, data->id < 0,
+ 				"dynamic id specified to static region\n")) {
+@@ -1339,13 +1357,11 @@ struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data)
+ 			goto err_id;
+ 		}
+ 
+-		rc = ida_alloc(&dax_region->ida, GFP_KERNEL);
++		rc = alloc_dev_dax_id(dev_dax);
+ 		if (rc < 0)
+ 			goto err_id;
+-		dev_dax->id = rc;
+ 	}
+ 
+-	dev_dax->region = dax_region;
+ 	dev = &dev_dax->dev;
+ 	device_initialize(dev);
+ 	dev_set_name(dev, "dax%d.%d", dax_region->id, dev_dax->id);
+@@ -1386,7 +1402,6 @@ struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data)
+ 	dev_dax->target_node = dax_region->target_node;
+ 	dev_dax->align = dax_region->align;
+ 	ida_init(&dev_dax->ida);
+-	kref_get(&dax_region->kref);
+ 
+ 	inode = dax_inode(dax_dev);
+ 	dev->devt = inode->i_rdev;
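
Two lifetime fixes are folded into the dax/bus.c diff: dynamic ids now take a kref on the dax_region until __free_dev_dax_id() returns the id (with the new dyn_id flag recording which instances own one), and each dax_mapping pins its parent with get_device() so the parent is still valid when dax_mapping_release() runs. The parent-pinning half, reduced to its essentials with invented names:

	#include <linux/device.h>

	static void child_release(struct device *dev)
	{
		/* parent-owned state is still reachable here ... */
		put_device(dev->parent);	/* ... and may now go away */
	}

	static int add_child(struct device *parent, struct device *child)
	{
		device_initialize(child);
		child->parent = parent;
		get_device(parent);	/* parent must outlive child_release() */
		child->release = child_release;
		return device_add(child);
	}
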
+diff --git a/drivers/dax/dax-private.h b/drivers/dax/dax-private.h
+index 1c974b7caae6e..afcada6fd2eda 100644
+--- a/drivers/dax/dax-private.h
++++ b/drivers/dax/dax-private.h
+@@ -52,7 +52,8 @@ struct dax_mapping {
+  * @region - parent region
+  * @dax_dev - core dax functionality
+  * @target_node: effective numa node if dev_dax memory range is onlined
+- * @id: ida allocated id
++ * @dyn_id: is this a dynamic or statically created instance
++ * @id: ida allocated id when the dax_region is not static
+  * @ida: mapping id allocator
+  * @dev - device core
+  * @pgmap - pgmap for memmap setup / lifetime (driver owned)
+@@ -64,6 +65,7 @@ struct dev_dax {
+ 	struct dax_device *dax_dev;
+ 	unsigned int align;
+ 	int target_node;
++	bool dyn_id;
+ 	int id;
+ 	struct ida ida;
+ 	struct device dev;
+diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
+index 7b36db6f1cbdc..898ca95057547 100644
+--- a/drivers/dax/kmem.c
++++ b/drivers/dax/kmem.c
+@@ -99,7 +99,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
+ 	if (!data->res_name)
+ 		goto err_res_name;
+ 
+-	rc = memory_group_register_static(numa_node, total_len);
++	rc = memory_group_register_static(numa_node, PFN_UP(total_len));
+ 	if (rc < 0)
+ 		goto err_reg_mgid;
+ 	data->mgid = rc;
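
The kmem one-liner is a units bug: memory_group_register_static() takes a page count, and the driver was handing it a byte length. PFN_UP() is the rounding-up byte-to-page conversion:

	#include <linux/pfn.h>
	#include <linux/types.h>

	/* PFN_UP(x) == (x + PAGE_SIZE - 1) >> PAGE_SHIFT */
	static unsigned long bytes_to_pages(u64 bytes)
	{
		return PFN_UP(bytes);	/* 1 byte -> 1 page, etc. */
	}
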
+diff --git a/drivers/extcon/extcon-usbc-tusb320.c b/drivers/extcon/extcon-usbc-tusb320.c
+index b408ce989c223..10dff1c512c41 100644
+--- a/drivers/extcon/extcon-usbc-tusb320.c
++++ b/drivers/extcon/extcon-usbc-tusb320.c
+@@ -78,6 +78,7 @@ struct tusb320_priv {
+ 	struct typec_capability	cap;
+ 	enum typec_port_type port_type;
+ 	enum typec_pwr_opmode pwr_opmode;
++	struct fwnode_handle *connector_fwnode;
+ };
+ 
+ static const char * const tusb_attached_states[] = {
+@@ -391,27 +392,25 @@ static int tusb320_typec_probe(struct i2c_client *client,
+ 	/* Type-C connector found. */
+ 	ret = typec_get_fw_cap(&priv->cap, connector);
+ 	if (ret)
+-		return ret;
++		goto err_put;
+ 
+ 	priv->port_type = priv->cap.type;
+ 
+ 	/* This goes into register 0x8 field CURRENT_MODE_ADVERTISE */
+ 	ret = fwnode_property_read_string(connector, "typec-power-opmode", &cap_str);
+ 	if (ret)
+-		return ret;
++		goto err_put;
+ 
+ 	ret = typec_find_pwr_opmode(cap_str);
+ 	if (ret < 0)
+-		return ret;
+-	if (ret == TYPEC_PWR_MODE_PD)
+-		return -EINVAL;
++		goto err_put;
+ 
+ 	priv->pwr_opmode = ret;
+ 
+ 	/* Initialize the hardware with the devicetree settings. */
+ 	ret = tusb320_set_adv_pwr_mode(priv);
+ 	if (ret)
+-		return ret;
++		goto err_put;
+ 
+ 	priv->cap.revision		= USB_TYPEC_REV_1_1;
+ 	priv->cap.accessory[0]		= TYPEC_ACCESSORY_AUDIO;
+@@ -422,10 +421,25 @@ static int tusb320_typec_probe(struct i2c_client *client,
+ 	priv->cap.fwnode		= connector;
+ 
+ 	priv->port = typec_register_port(&client->dev, &priv->cap);
+-	if (IS_ERR(priv->port))
+-		return PTR_ERR(priv->port);
++	if (IS_ERR(priv->port)) {
++		ret = PTR_ERR(priv->port);
++		goto err_put;
++	}
++
++	priv->connector_fwnode = connector;
+ 
+ 	return 0;
++
++err_put:
++	fwnode_handle_put(connector);
++
++	return ret;
++}
++
++static void tusb320_typec_remove(struct tusb320_priv *priv)
++{
++	typec_unregister_port(priv->port);
++	fwnode_handle_put(priv->connector_fwnode);
+ }
+ 
+ static int tusb320_probe(struct i2c_client *client)
+@@ -438,7 +452,9 @@ static int tusb320_probe(struct i2c_client *client)
+ 	priv = devm_kzalloc(&client->dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+ 		return -ENOMEM;
++
+ 	priv->dev = &client->dev;
++	i2c_set_clientdata(client, priv);
+ 
+ 	priv->regmap = devm_regmap_init_i2c(client, &tusb320_regmap_config);
+ 	if (IS_ERR(priv->regmap))
+@@ -489,10 +505,19 @@ static int tusb320_probe(struct i2c_client *client)
+ 					tusb320_irq_handler,
+ 					IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ 					client->name, priv);
++	if (ret)
++		tusb320_typec_remove(priv);
+ 
+ 	return ret;
+ }
+ 
++static void tusb320_remove(struct i2c_client *client)
++{
++	struct tusb320_priv *priv = i2c_get_clientdata(client);
++
++	tusb320_typec_remove(priv);
++}
++
+ static const struct of_device_id tusb320_extcon_dt_match[] = {
+ 	{ .compatible = "ti,tusb320", .data = &tusb320_ops, },
+ 	{ .compatible = "ti,tusb320l", .data = &tusb320l_ops, },
+@@ -502,6 +527,7 @@ MODULE_DEVICE_TABLE(of, tusb320_extcon_dt_match);
+ 
+ static struct i2c_driver tusb320_extcon_driver = {
+ 	.probe_new	= tusb320_probe,
++	.remove		= tusb320_remove,
+ 	.driver		= {
+ 		.name	= "extcon-tusb320",
+ 		.of_match_table = tusb320_extcon_dt_match,
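
The tusb320 rework is all about reference balance: the connector fwnode lookup hands back a counted reference, so the typec setup's early returns now funnel through err_put, the handle is stashed in the new connector_fwnode field, and the added remove() unwinds probe in reverse with typec_unregister_port() plus fwnode_handle_put(). The general shape, with an invented configure step:

	#include <linux/property.h>

	static int setup_connector(struct device *dev,
				   struct fwnode_handle **out)
	{
		struct fwnode_handle *conn;
		int ret;

		conn = device_get_named_child_node(dev, "connector");
		if (!conn)
			return -ENODEV;

		ret = configure(conn);	/* illustrative */
		if (ret) {
			fwnode_handle_put(conn);	/* drop the lookup ref */
			return ret;
		}

		*out = conn;	/* remove() must fwnode_handle_put() this */
		return 0;
	}
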
+diff --git a/drivers/extcon/extcon.c b/drivers/extcon/extcon.c
+index d43ba8e7260dd..370b5b26d10b7 100644
+--- a/drivers/extcon/extcon.c
++++ b/drivers/extcon/extcon.c
+@@ -206,6 +206,14 @@ static const struct __extcon_info {
+  * @attr_name:		"name" sysfs entry
+  * @attr_state:		"state" sysfs entry
+  * @attrs:		the array pointing to attr_name and attr_state for attr_g
++ * @usb_propval:	the array of USB connector properties
++ * @chg_propval:	the array of charger connector properties
++ * @jack_propval:	the array of jack connector properties
++ * @disp_propval:	the array of display connector properties
++ * @usb_bits:		the bit array of the USB connector property capabilities
++ * @chg_bits:		the bit array of the charger connector property capabilities
++ * @jack_bits:		the bit array of the jack connector property capabilities
++ * @disp_bits:		the bit array of the display connector property capabilities
+  */
+ struct extcon_cable {
+ 	struct extcon_dev *edev;
+diff --git a/drivers/firmware/efi/libstub/efi-stub-helper.c b/drivers/firmware/efi/libstub/efi-stub-helper.c
+index 1e0203d74691f..732984295295f 100644
+--- a/drivers/firmware/efi/libstub/efi-stub-helper.c
++++ b/drivers/firmware/efi/libstub/efi-stub-helper.c
+@@ -378,6 +378,9 @@ efi_status_t efi_exit_boot_services(void *handle, void *priv,
+ 	struct efi_boot_memmap *map;
+ 	efi_status_t status;
+ 
++	if (efi_disable_pci_dma)
++		efi_pci_disable_bridge_busmaster();
++
+ 	status = efi_get_memory_map(&map, true);
+ 	if (status != EFI_SUCCESS)
+ 		return status;
+@@ -388,9 +391,6 @@ efi_status_t efi_exit_boot_services(void *handle, void *priv,
+ 		return status;
+ 	}
+ 
+-	if (efi_disable_pci_dma)
+-		efi_pci_disable_bridge_busmaster();
+-
+ 	status = efi_bs_call(exit_boot_services, handle, map->map_key);
+ 
+ 	if (status == EFI_INVALID_PARAMETER) {
+diff --git a/drivers/gpio/gpio-twl4030.c b/drivers/gpio/gpio-twl4030.c
+index c1bb2c3ca6f29..446599ac234a9 100644
+--- a/drivers/gpio/gpio-twl4030.c
++++ b/drivers/gpio/gpio-twl4030.c
+@@ -17,7 +17,9 @@
+ #include <linux/interrupt.h>
+ #include <linux/kthread.h>
+ #include <linux/irq.h>
++#include <linux/gpio/machine.h>
+ #include <linux/gpio/driver.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/platform_device.h>
+ #include <linux/of.h>
+ #include <linux/irqdomain.h>
+@@ -465,8 +467,7 @@ static int gpio_twl4030_debounce(u32 debounce, u8 mmc_cd)
+ 				REG_GPIO_DEBEN1, 3);
+ }
+ 
+-static struct twl4030_gpio_platform_data *of_gpio_twl4030(struct device *dev,
+-				struct twl4030_gpio_platform_data *pdata)
++static struct twl4030_gpio_platform_data *of_gpio_twl4030(struct device *dev)
+ {
+ 	struct twl4030_gpio_platform_data *omap_twl_info;
+ 
+@@ -474,9 +475,6 @@ static struct twl4030_gpio_platform_data *of_gpio_twl4030(struct device *dev,
+ 	if (!omap_twl_info)
+ 		return NULL;
+ 
+-	if (pdata)
+-		*omap_twl_info = *pdata;
+-
+ 	omap_twl_info->use_leds = of_property_read_bool(dev->of_node,
+ 			"ti,use-leds");
+ 
+@@ -504,9 +502,18 @@ static int gpio_twl4030_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
++/* Called from the registered devm action */
++static void gpio_twl4030_power_off_action(void *data)
++{
++	struct gpio_desc *d = data;
++
++	gpiod_unexport(d);
++	gpiochip_free_own_desc(d);
++}
++
+ static int gpio_twl4030_probe(struct platform_device *pdev)
+ {
+-	struct twl4030_gpio_platform_data *pdata = dev_get_platdata(&pdev->dev);
++	struct twl4030_gpio_platform_data *pdata;
+ 	struct device_node *node = pdev->dev.of_node;
+ 	struct gpio_twl4030_priv *priv;
+ 	int ret, irq_base;
+@@ -546,9 +553,7 @@ no_irqs:
+ 
+ 	mutex_init(&priv->mutex);
+ 
+-	if (node)
+-		pdata = of_gpio_twl4030(&pdev->dev, pdata);
+-
++	pdata = of_gpio_twl4030(&pdev->dev);
+ 	if (pdata == NULL) {
+ 		dev_err(&pdev->dev, "Platform data is missing\n");
+ 		return -ENXIO;
+@@ -585,17 +590,32 @@ no_irqs:
+ 		goto out;
+ 	}
+ 
+-	platform_set_drvdata(pdev, priv);
++	/*
++	 * Special quirk for the OMAP3 to hog and export a WLAN power
++	 * GPIO.
++	 */
++	if (IS_ENABLED(CONFIG_ARCH_OMAP3) &&
++	    of_machine_is_compatible("compulab,omap3-sbc-t3730")) {
++		struct gpio_desc *d;
+ 
+-	if (pdata->setup) {
+-		int status;
++		d = gpiochip_request_own_desc(&priv->gpio_chip,
++						 2, "wlan pwr",
++						 GPIO_ACTIVE_HIGH,
++						 GPIOD_OUT_HIGH);
++		if (IS_ERR(d))
++			return dev_err_probe(&pdev->dev, PTR_ERR(d),
++					     "unable to hog wlan pwr GPIO\n");
++
++		gpiod_export(d, 0);
++
++		ret = devm_add_action_or_reset(&pdev->dev, gpio_twl4030_power_off_action, d);
++		if (ret)
++			return dev_err_probe(&pdev->dev, ret,
++					     "failed to install power off handler\n");
+ 
+-		status = pdata->setup(&pdev->dev, priv->gpio_chip.base,
+-				      TWL4030_GPIO_MAX);
+-		if (status)
+-			dev_dbg(&pdev->dev, "setup --> %d\n", status);
+ 	}
+ 
++	platform_set_drvdata(pdev, priv);
+ out:
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 2eb2c66843a88..5612caf77dd65 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -133,9 +133,6 @@ static int amdgpu_cs_p1_user_fence(struct amdgpu_cs_parser *p,
+ 	bo = amdgpu_bo_ref(gem_to_amdgpu_bo(gobj));
+ 	p->uf_entry.priority = 0;
+ 	p->uf_entry.tv.bo = &bo->tbo;
+-	/* One for TTM and two for the CS job */
+-	p->uf_entry.tv.num_shared = 3;
+-
+ 	drm_gem_object_put(gobj);
+ 
+ 	size = amdgpu_bo_size(bo);
+@@ -882,15 +879,19 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
+ 
+ 	mutex_lock(&p->bo_list->bo_list_mutex);
+ 
+-	/* One for TTM and one for the CS job */
++	/* One for TTM and one for each CS job */
+ 	amdgpu_bo_list_for_each_entry(e, p->bo_list)
+-		e->tv.num_shared = 2;
++		e->tv.num_shared = 1 + p->gang_size;
++	p->uf_entry.tv.num_shared = 1 + p->gang_size;
+ 
+ 	amdgpu_bo_list_get_list(p->bo_list, &p->validated);
+ 
+ 	INIT_LIST_HEAD(&duplicates);
+ 	amdgpu_vm_get_pd_bo(&fpriv->vm, &p->validated, &p->vm_pd);
+ 
++	/* Two for VM updates, one for TTM and one for each CS job */
++	p->vm_pd.tv.num_shared = 3 + p->gang_size;
++
+ 	if (p->uf_entry.tv.bo && !ttm_to_amdgpu_bo(p->uf_entry.tv.bo)->parent)
+ 		list_add(&p->uf_entry.tv.head, &p->validated);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
+index 4fa019c8aefc4..fb9251d9c899e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
+@@ -251,7 +251,8 @@ int amdgpu_jpeg_ras_late_init(struct amdgpu_device *adev, struct ras_common_if *
+ 
+ 	if (amdgpu_ras_is_supported(adev, ras_block->block)) {
+ 		for (i = 0; i < adev->jpeg.num_jpeg_inst; ++i) {
+-			if (adev->jpeg.harvest_config & (1 << i))
++			if (adev->jpeg.harvest_config & (1 << i) ||
++			    !adev->jpeg.inst[i].ras_poison_irq.funcs)
+ 				continue;
+ 
+ 			r = amdgpu_irq_get(adev, &adev->jpeg.inst[i].ras_poison_irq, 0);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index a70103ac0026a..46557bbbc18a2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -1266,8 +1266,12 @@ void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
+ void amdgpu_bo_get_memory(struct amdgpu_bo *bo,
+ 			  struct amdgpu_mem_stats *stats)
+ {
+-	unsigned int domain;
+ 	uint64_t size = amdgpu_bo_size(bo);
++	unsigned int domain;
++
++	/* Abort if the BO doesn't currently have a backing store */
++	if (!bo->tbo.resource)
++		return;
+ 
+ 	domain = amdgpu_mem_type_to_domain(bo->tbo.resource->mem_type);
+ 	switch (domain) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index a150b7a4b4aae..e4757a2807d9a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -1947,6 +1947,8 @@ static int psp_securedisplay_initialize(struct psp_context *psp)
+ 		psp_securedisplay_parse_resp_status(psp, securedisplay_cmd->status);
+ 		dev_err(psp->adev->dev, "SECUREDISPLAY: query securedisplay TA failed. ret 0x%x\n",
+ 			securedisplay_cmd->securedisplay_out_message.query_ta.query_cmd_ret);
++		/* don't try again */
++		psp->securedisplay_context.context.bin_desc.size_bytes = 0;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index 3ab8a88789c8f..dcca63019ea76 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -171,8 +171,7 @@ static int amdgpu_reserve_page_direct(struct amdgpu_device *adev, uint64_t addre
+ 
+ 	memset(&err_rec, 0x0, sizeof(struct eeprom_table_record));
+ 	err_data.err_addr = &err_rec;
+-	amdgpu_umc_fill_error_record(&err_data, address,
+-			(address >> AMDGPU_GPU_PAGE_SHIFT), 0, 0);
++	amdgpu_umc_fill_error_record(&err_data, address, address, 0, 0);
+ 
+ 	if (amdgpu_bad_page_threshold != 0) {
+ 		amdgpu_ras_add_bad_pages(adev, err_data.err_addr,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.c
+index 73516abef662f..b779ee4bbaa7b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.c
+@@ -423,6 +423,9 @@ void amdgpu_sw_ring_ib_mark_offset(struct amdgpu_ring *ring, enum amdgpu_ring_mu
+ 	struct amdgpu_ring_mux *mux = &adev->gfx.muxer;
+ 	unsigned offset;
+ 
++	if (ring->hw_prio > AMDGPU_RING_PRIO_DEFAULT)
++		return;
++
+ 	offset = ring->wptr & ring->buf_mask;
+ 
+ 	amdgpu_ring_mux_ib_mark_offset(mux, ring, offset, type);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+index 2d94f1b63bd6c..b46a5771c3ec1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+@@ -1191,7 +1191,8 @@ int amdgpu_vcn_ras_late_init(struct amdgpu_device *adev, struct ras_common_if *r
+ 
+ 	if (amdgpu_ras_is_supported(adev, ras_block->block)) {
+ 		for (i = 0; i < adev->vcn.num_vcn_inst; i++) {
+-			if (adev->vcn.harvest_config & (1 << i))
++			if (adev->vcn.harvest_config & (1 << i) ||
++			    !adev->vcn.inst[i].ras_poison_irq.funcs)
+ 				continue;
+ 
+ 			r = amdgpu_irq_get(adev, &adev->vcn.inst[i].ras_poison_irq, 0);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 5b3a70becbdf4..ac44b6774352b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -920,42 +920,51 @@ error_unlock:
+ 	return r;
+ }
+ 
++static void amdgpu_vm_bo_get_memory(struct amdgpu_bo_va *bo_va,
++				    struct amdgpu_mem_stats *stats)
++{
++	struct amdgpu_vm *vm = bo_va->base.vm;
++	struct amdgpu_bo *bo = bo_va->base.bo;
++
++	if (!bo)
++		return;
++
++	/*
++	 * For now ignore BOs which are currently locked and potentially
++	 * changing their location.
++	 */
++	if (bo->tbo.base.resv != vm->root.bo->tbo.base.resv &&
++	    !dma_resv_trylock(bo->tbo.base.resv))
++		return;
++
++	amdgpu_bo_get_memory(bo, stats);
++	if (bo->tbo.base.resv != vm->root.bo->tbo.base.resv)
++	    dma_resv_unlock(bo->tbo.base.resv);
++}
++
+ void amdgpu_vm_get_memory(struct amdgpu_vm *vm,
+ 			  struct amdgpu_mem_stats *stats)
+ {
+ 	struct amdgpu_bo_va *bo_va, *tmp;
+ 
+ 	spin_lock(&vm->status_lock);
+-	list_for_each_entry_safe(bo_va, tmp, &vm->idle, base.vm_status) {
+-		if (!bo_va->base.bo)
+-			continue;
+-		amdgpu_bo_get_memory(bo_va->base.bo, stats);
+-	}
+-	list_for_each_entry_safe(bo_va, tmp, &vm->evicted, base.vm_status) {
+-		if (!bo_va->base.bo)
+-			continue;
+-		amdgpu_bo_get_memory(bo_va->base.bo, stats);
+-	}
+-	list_for_each_entry_safe(bo_va, tmp, &vm->relocated, base.vm_status) {
+-		if (!bo_va->base.bo)
+-			continue;
+-		amdgpu_bo_get_memory(bo_va->base.bo, stats);
+-	}
+-	list_for_each_entry_safe(bo_va, tmp, &vm->moved, base.vm_status) {
+-		if (!bo_va->base.bo)
+-			continue;
+-		amdgpu_bo_get_memory(bo_va->base.bo, stats);
+-	}
+-	list_for_each_entry_safe(bo_va, tmp, &vm->invalidated, base.vm_status) {
+-		if (!bo_va->base.bo)
+-			continue;
+-		amdgpu_bo_get_memory(bo_va->base.bo, stats);
+-	}
+-	list_for_each_entry_safe(bo_va, tmp, &vm->done, base.vm_status) {
+-		if (!bo_va->base.bo)
+-			continue;
+-		amdgpu_bo_get_memory(bo_va->base.bo, stats);
+-	}
++	list_for_each_entry_safe(bo_va, tmp, &vm->idle, base.vm_status)
++		amdgpu_vm_bo_get_memory(bo_va, stats);
++
++	list_for_each_entry_safe(bo_va, tmp, &vm->evicted, base.vm_status)
++		amdgpu_vm_bo_get_memory(bo_va, stats);
++
++	list_for_each_entry_safe(bo_va, tmp, &vm->relocated, base.vm_status)
++		amdgpu_vm_bo_get_memory(bo_va, stats);
++
++	list_for_each_entry_safe(bo_va, tmp, &vm->moved, base.vm_status)
++		amdgpu_vm_bo_get_memory(bo_va, stats);
++
++	list_for_each_entry_safe(bo_va, tmp, &vm->invalidated, base.vm_status)
++		amdgpu_vm_bo_get_memory(bo_va, stats);
++
++	list_for_each_entry_safe(bo_va, tmp, &vm->done, base.vm_status)
++		amdgpu_vm_bo_get_memory(bo_va, stats);
+ 	spin_unlock(&vm->status_lock);
+ }
+ 
+@@ -1433,14 +1442,14 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
+ 	uint64_t eaddr;
+ 
+ 	/* validate the parameters */
+-	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK ||
+-	    size == 0 || size & ~PAGE_MASK)
++	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK || size & ~PAGE_MASK)
++		return -EINVAL;
++	if (saddr + size <= saddr || offset + size <= offset)
+ 		return -EINVAL;
+ 
+ 	/* make sure object fit at this offset */
+ 	eaddr = saddr + size - 1;
+-	if (saddr >= eaddr ||
+-	    (bo && offset + size > amdgpu_bo_size(bo)) ||
++	if ((bo && offset + size > amdgpu_bo_size(bo)) ||
+ 	    (eaddr >= adev->vm_manager.max_pfn << AMDGPU_GPU_PAGE_SHIFT))
+ 		return -EINVAL;
+ 
+@@ -1499,14 +1508,14 @@ int amdgpu_vm_bo_replace_map(struct amdgpu_device *adev,
+ 	int r;
+ 
+ 	/* validate the parameters */
+-	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK ||
+-	    size == 0 || size & ~PAGE_MASK)
++	if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK || size & ~PAGE_MASK)
++		return -EINVAL;
++	if (saddr + size <= saddr || offset + size <= offset)
+ 		return -EINVAL;
+ 
+ 	/* make sure object fit at this offset */
+ 	eaddr = saddr + size - 1;
+-	if (saddr >= eaddr ||
+-	    (bo && offset + size > amdgpu_bo_size(bo)) ||
++	if ((bo && offset + size > amdgpu_bo_size(bo)) ||
+ 	    (eaddr >= adev->vm_manager.max_pfn << AMDGPU_GPU_PAGE_SHIFT))
+ 		return -EINVAL;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c b/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c
+index aa761ff3a5fae..7ba47fc1917b2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v2_3.c
+@@ -346,7 +346,7 @@ static void nbio_v2_3_init_registers(struct amdgpu_device *adev)
+ 
+ #define NAVI10_PCIE__LC_L0S_INACTIVITY_DEFAULT		0x00000000 // off by default, no gains over L1
+ #define NAVI10_PCIE__LC_L1_INACTIVITY_DEFAULT		0x00000009 // 1=1us, 9=1ms
+-#define NAVI10_PCIE__LC_L1_INACTIVITY_TBT_DEFAULT	0x0000000E // 4ms
++#define NAVI10_PCIE__LC_L1_INACTIVITY_TBT_DEFAULT	0x0000000E // 400ms
+ 
+ static void nbio_v2_3_enable_aspm(struct amdgpu_device *adev,
+ 				  bool enable)
+@@ -479,9 +479,12 @@ static void nbio_v2_3_program_aspm(struct amdgpu_device *adev)
+ 		WREG32_SOC15(NBIO, 0, mmRCC_BIF_STRAP5, data);
+ 
+ 	def = data = RREG32_PCIE(smnPCIE_LC_CNTL);
+-	data &= ~PCIE_LC_CNTL__LC_L0S_INACTIVITY_MASK;
+-	data |= 0x9 << PCIE_LC_CNTL__LC_L1_INACTIVITY__SHIFT;
+-	data |= 0x1 << PCIE_LC_CNTL__LC_PMI_TO_L1_DIS__SHIFT;
++	data |= NAVI10_PCIE__LC_L0S_INACTIVITY_DEFAULT << PCIE_LC_CNTL__LC_L0S_INACTIVITY__SHIFT;
++	if (pci_is_thunderbolt_attached(adev->pdev))
++		data |= NAVI10_PCIE__LC_L1_INACTIVITY_TBT_DEFAULT  << PCIE_LC_CNTL__LC_L1_INACTIVITY__SHIFT;
++	else
++		data |= NAVI10_PCIE__LC_L1_INACTIVITY_DEFAULT << PCIE_LC_CNTL__LC_L1_INACTIVITY__SHIFT;
++	data &= ~PCIE_LC_CNTL__LC_PMI_TO_L1_DIS_MASK;
+ 	if (def != data)
+ 		WREG32_PCIE(smnPCIE_LC_CNTL, data);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index 9295ac7edd565..d35c8a33d06d3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -2306,7 +2306,7 @@ const struct amd_ip_funcs sdma_v4_0_ip_funcs = {
+ 
+ static const struct amdgpu_ring_funcs sdma_v4_0_ring_funcs = {
+ 	.type = AMDGPU_RING_TYPE_SDMA,
+-	.align_mask = 0xf,
++	.align_mask = 0xff,
+ 	.nop = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP),
+ 	.support_64bit_ptrs = true,
+ 	.secure_submission_supported = true,
+@@ -2338,7 +2338,7 @@ static const struct amdgpu_ring_funcs sdma_v4_0_ring_funcs = {
+ 
+ static const struct amdgpu_ring_funcs sdma_v4_0_page_ring_funcs = {
+ 	.type = AMDGPU_RING_TYPE_SDMA,
+-	.align_mask = 0xf,
++	.align_mask = 0xff,
+ 	.nop = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP),
+ 	.support_64bit_ptrs = true,
+ 	.secure_submission_supported = true,
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+index 64dcaa2670dd1..ac7aa8631f6a7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+@@ -1740,7 +1740,7 @@ const struct amd_ip_funcs sdma_v4_4_2_ip_funcs = {
+ 
+ static const struct amdgpu_ring_funcs sdma_v4_4_2_ring_funcs = {
+ 	.type = AMDGPU_RING_TYPE_SDMA,
+-	.align_mask = 0xf,
++	.align_mask = 0xff,
+ 	.nop = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP),
+ 	.support_64bit_ptrs = true,
+ 	.get_rptr = sdma_v4_4_2_ring_get_rptr,
+@@ -1771,7 +1771,7 @@ static const struct amdgpu_ring_funcs sdma_v4_4_2_ring_funcs = {
+ 
+ static const struct amdgpu_ring_funcs sdma_v4_4_2_page_ring_funcs = {
+ 	.type = AMDGPU_RING_TYPE_SDMA,
+-	.align_mask = 0xf,
++	.align_mask = 0xff,
+ 	.nop = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP),
+ 	.support_64bit_ptrs = true,
+ 	.get_rptr = sdma_v4_4_2_ring_get_rptr,
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+index fdbfd725841ff..51b53110341bb 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+@@ -115,18 +115,19 @@ static struct kfd_mem_obj *allocate_mqd(struct kfd_dev *kfd,
+ 			&(mqd_mem_obj->gtt_mem),
+ 			&(mqd_mem_obj->gpu_addr),
+ 			(void *)&(mqd_mem_obj->cpu_ptr), true);
++
++		if (retval) {
++			kfree(mqd_mem_obj);
++			return NULL;
++		}
+ 	} else {
+ 		retval = kfd_gtt_sa_allocate(kfd, sizeof(struct v9_mqd),
+ 				&mqd_mem_obj);
+-	}
+-
+-	if (retval) {
+-		kfree(mqd_mem_obj);
+-		return NULL;
++		if (retval)
++			return NULL;
+ 	}
+ 
+ 	return mqd_mem_obj;
+-
+ }
+ 
+ static void init_mqd(struct mqd_manager *mm, void **mqd,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 7acd73e5004fb..51269b0ab9b58 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -7196,13 +7196,7 @@ static int amdgpu_dm_connector_get_modes(struct drm_connector *connector)
+ 				drm_add_modes_noedid(connector, 1920, 1080);
+ 	} else {
+ 		amdgpu_dm_connector_ddc_get_modes(connector, edid);
+-		/* most eDP supports only timings from its edid,
+-		 * usually only detailed timings are available
+-		 * from eDP edid. timings which are not from edid
+-		 * may damage eDP
+-		 */
+-		if (connector->connector_type != DRM_MODE_CONNECTOR_eDP)
+-			amdgpu_dm_connector_add_common_modes(encoder, connector);
++		amdgpu_dm_connector_add_common_modes(encoder, connector);
+ 		amdgpu_dm_connector_add_freesync_modes(connector, edid);
+ 	}
+ 	amdgpu_dm_fbc_init(connector);
+@@ -9265,6 +9259,8 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
+ 
+ 		/* Now check if we should set freesync video mode */
+ 		if (amdgpu_freesync_vid_mode && dm_new_crtc_state->stream &&
++		    dc_is_stream_unchanged(new_stream, dm_old_crtc_state->stream) &&
++		    dc_is_stream_scaling_unchanged(new_stream, dm_old_crtc_state->stream) &&
+ 		    is_timing_unchanged_for_freesync(new_crtc_state,
+ 						     old_crtc_state)) {
+ 			new_crtc_state->mode_changed = false;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 810ab682f424f..46d0a8f57e552 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -45,8 +45,7 @@
+ #endif
+ 
+ #include "dc/dcn20/dcn20_resource.h"
+-bool is_timing_changed(struct dc_stream_state *cur_stream,
+-		       struct dc_stream_state *new_stream);
++
+ #define PEAK_FACTOR_X1000 1006
+ 
+ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
+@@ -1422,7 +1421,7 @@ int pre_validate_dsc(struct drm_atomic_state *state,
+ 		struct dc_stream_state *stream = dm_state->context->streams[i];
+ 
+ 		if (local_dc_state->streams[i] &&
+-		    is_timing_changed(stream, local_dc_state->streams[i])) {
++		    dc_is_timing_changed(stream, local_dc_state->streams[i])) {
+ 			DRM_INFO_ONCE("crtc[%d] needs mode_changed\n", i);
+ 		} else {
+ 			int ind = find_crtc_index_in_state_by_stream(state, stream);
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr_smu_msg.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr_smu_msg.c
+index 1fbf1c105dc12..bdbf183066981 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr_smu_msg.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr_smu_msg.c
+@@ -312,6 +312,9 @@ void dcn30_smu_set_display_refresh_from_mall(struct clk_mgr_internal *clk_mgr, b
+ 	/* bits 8:7 for cache timer scale, bits 6:1 for cache timer delay, bit 0 = 1 for enable, = 0 for disable */
+ 	uint32_t param = (cache_timer_scale << 7) | (cache_timer_delay << 1) | (enable ? 1 : 0);
+ 
++	smu_print("SMU Set display refresh from mall: enable = %d, cache_timer_delay = %d, cache_timer_scale = %d\n",
++		enable, cache_timer_delay, cache_timer_scale);
++
+ 	dcn30_smu_send_msg_with_param(clk_mgr,
+ 			DALSMC_MSG_SetDisplayRefreshFromMall, param, NULL);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 7cde67b7f0c33..dcf8631181690 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2504,9 +2504,6 @@ static enum surface_update_type det_surface_update(const struct dc *dc,
+ 	enum surface_update_type overall_type = UPDATE_TYPE_FAST;
+ 	union surface_update_flags *update_flags = &u->surface->update_flags;
+ 
+-	if (u->flip_addr)
+-		update_flags->bits.addr_update = 1;
+-
+ 	if (!is_surface_in_context(context, u->surface) || u->surface->force_full_update) {
+ 		update_flags->raw = 0xFFFFFFFF;
+ 		return UPDATE_TYPE_FULL;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index fe1551393b264..ba3eb36e75bc3 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1878,7 +1878,7 @@ bool dc_add_all_planes_for_stream(
+ 	return add_all_planes_for_stream(dc, stream, &set, 1, context);
+ }
+ 
+-bool is_timing_changed(struct dc_stream_state *cur_stream,
++bool dc_is_timing_changed(struct dc_stream_state *cur_stream,
+ 		       struct dc_stream_state *new_stream)
+ {
+ 	if (cur_stream == NULL)
+@@ -1903,7 +1903,7 @@ static bool are_stream_backends_same(
+ 	if (stream_a == NULL || stream_b == NULL)
+ 		return false;
+ 
+-	if (is_timing_changed(stream_a, stream_b))
++	if (dc_is_timing_changed(stream_a, stream_b))
+ 		return false;
+ 
+ 	if (stream_a->signal != stream_b->signal)
+@@ -3528,7 +3528,7 @@ bool pipe_need_reprogram(
+ 	if (pipe_ctx_old->stream_res.stream_enc != pipe_ctx->stream_res.stream_enc)
+ 		return true;
+ 
+-	if (is_timing_changed(pipe_ctx_old->stream, pipe_ctx->stream))
++	if (dc_is_timing_changed(pipe_ctx_old->stream, pipe_ctx->stream))
+ 		return true;
+ 
+ 	if (pipe_ctx_old->stream->dpms_off != pipe_ctx->stream->dpms_off)
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 30f0ba05a6e6c..4d93ca9c627b0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -2226,4 +2226,7 @@ void dc_process_dmub_dpia_hpd_int_enable(const struct dc *dc,
+ /* Disable acc mode Interfaces */
+ void dc_disable_accelerated_mode(struct dc *dc);
+ 
++bool dc_is_timing_changed(struct dc_stream_state *cur_stream,
++		       struct dc_stream_state *new_stream);
++
+ #endif /* DC_INTERFACE_H_ */
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c
+index cc3fe9cac5b53..c309933112e5e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c
+@@ -400,29 +400,6 @@ void dcn314_dpp_root_clock_control(struct dce_hwseq *hws, unsigned int dpp_inst,
+ 			hws->ctx->dc->res_pool->dccg, dpp_inst, clock_on);
+ }
+ 
+-void dcn314_hubp_pg_control(struct dce_hwseq *hws, unsigned int hubp_inst, bool power_on)
+-{
+-	struct dc_context *ctx = hws->ctx;
+-	union dmub_rb_cmd cmd;
+-
+-	if (hws->ctx->dc->debug.disable_hubp_power_gate)
+-		return;
+-
+-	PERF_TRACE();
+-
+-	memset(&cmd, 0, sizeof(cmd));
+-	cmd.domain_control.header.type = DMUB_CMD__VBIOS;
+-	cmd.domain_control.header.sub_type = DMUB_CMD__VBIOS_DOMAIN_CONTROL;
+-	cmd.domain_control.header.payload_bytes = sizeof(cmd.domain_control.data);
+-	cmd.domain_control.data.inst = hubp_inst;
+-	cmd.domain_control.data.power_gate = !power_on;
+-
+-	dc_dmub_srv_cmd_queue(ctx->dmub_srv, &cmd);
+-	dc_dmub_srv_cmd_execute(ctx->dmub_srv);
+-	dc_dmub_srv_wait_idle(ctx->dmub_srv);
+-
+-	PERF_TRACE();
+-}
+ static void apply_symclk_on_tx_off_wa(struct dc_link *link)
+ {
+ 	/* There are use cases where SYMCLK is referenced by OTG. For instance
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h
+index 6d0b62503caa6..54b1379914ce5 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h
+@@ -41,8 +41,6 @@ unsigned int dcn314_calculate_dccg_k1_k2_values(struct pipe_ctx *pipe_ctx, unsig
+ 
+ void dcn314_set_pixels_per_cycle(struct pipe_ctx *pipe_ctx);
+ 
+-void dcn314_hubp_pg_control(struct dce_hwseq *hws, unsigned int hubp_inst, bool power_on);
+-
+ void dcn314_dpp_root_clock_control(struct dce_hwseq *hws, unsigned int dpp_inst, bool clock_on);
+ 
+ void dcn314_disable_link_output(struct dc_link *link, const struct link_resource *link_res, enum signal_type signal);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_init.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_init.c
+index a588f46b166f4..d9d2576f3e842 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_init.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_init.c
+@@ -138,7 +138,7 @@ static const struct hwseq_private_funcs dcn314_private_funcs = {
+ 	.plane_atomic_power_down = dcn10_plane_atomic_power_down,
+ 	.enable_power_gating_plane = dcn314_enable_power_gating_plane,
+ 	.dpp_root_clock_control = dcn314_dpp_root_clock_control,
+-	.hubp_pg_control = dcn314_hubp_pg_control,
++	.hubp_pg_control = dcn31_hubp_pg_control,
+ 	.program_all_writeback_pipes_in_tree = dcn30_program_all_writeback_pipes_in_tree,
+ 	.update_odm = dcn314_update_odm,
+ 	.dsc_pg_control = dcn314_dsc_pg_control,
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
+index b7c2844d0cbee..f294f2f8c75bc 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
+@@ -810,7 +810,7 @@ static bool CalculatePrefetchSchedule(
+ 			*swath_width_chroma_ub = dml_ceil(SwathWidthY / 2 - 1, myPipe->BlockWidth256BytesC) + myPipe->BlockWidth256BytesC;
+ 	} else {
+ 		*swath_width_luma_ub = dml_ceil(SwathWidthY - 1, myPipe->BlockHeight256BytesY) + myPipe->BlockHeight256BytesY;
+-		if (myPipe->BlockWidth256BytesC > 0)
++		if (myPipe->BlockHeight256BytesC > 0)
+ 			*swath_width_chroma_ub = dml_ceil(SwathWidthY / 2 - 1, myPipe->BlockHeight256BytesC) + myPipe->BlockHeight256BytesC;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.c
+index 395ae8761980f..9ba6cb67655f4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/display_rq_dlg_calc_32.c
+@@ -116,7 +116,7 @@ void dml32_rq_dlg_get_rq_reg(display_rq_regs_st *rq_regs,
+ 	else
+ 		rq_regs->rq_regs_l.min_meta_chunk_size = dml_log2(min_meta_chunk_bytes) - 6 + 1;
+ 
+-	if (min_meta_chunk_bytes == 0)
++	if (p1_min_meta_chunk_bytes == 0)
+ 		rq_regs->rq_regs_c.min_meta_chunk_size = 0;
+ 	else
+ 		rq_regs->rq_regs_c.min_meta_chunk_size = dml_log2(p1_min_meta_chunk_bytes) - 6 + 1;
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+index ba98013fecd00..6d2d10da2b77c 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+@@ -1043,9 +1043,7 @@ static enum dc_status wake_up_aux_channel(struct dc_link *link)
+ 				DP_SET_POWER,
+ 				&dpcd_power_state,
+ 				sizeof(dpcd_power_state));
+-		if (status < 0)
+-			DC_LOG_DC("%s: Failed to power up sink: %s\n", __func__,
+-				  dpcd_power_state == DP_SET_POWER_D0 ? "D0" : "D3");
++		DC_LOG_DC("%s: Failed to power up sink\n", __func__);
+ 		return DC_ERROR_UNEXPECTED;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 85d53597eb07a..f7ed3e655e397 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -431,7 +431,13 @@ static int sienna_cichlid_append_powerplay_table(struct smu_context *smu)
+ {
+ 	struct atom_smc_dpm_info_v4_9 *smc_dpm_table;
+ 	int index, ret;
+-	I2cControllerConfig_t *table_member;
++	PPTable_beige_goby_t *ppt_beige_goby;
++	PPTable_t *ppt;
++
++	if (smu->adev->ip_versions[MP1_HWIP][0] == IP_VERSION(11, 0, 13))
++		ppt_beige_goby = smu->smu_table.driver_pptable;
++	else
++		ppt = smu->smu_table.driver_pptable;
+ 
+ 	index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
+ 					    smc_dpm_info);
+@@ -440,9 +446,13 @@ static int sienna_cichlid_append_powerplay_table(struct smu_context *smu)
+ 				      (uint8_t **)&smc_dpm_table);
+ 	if (ret)
+ 		return ret;
+-	GET_PPTABLE_MEMBER(I2cControllers, &table_member);
+-	memcpy(table_member, smc_dpm_table->I2cControllers,
+-			sizeof(*smc_dpm_table) - sizeof(smc_dpm_table->table_header));
++
++	if (smu->adev->ip_versions[MP1_HWIP][0] == IP_VERSION(11, 0, 13))
++		smu_memcpy_trailing(ppt_beige_goby, I2cControllers, BoardReserved,
++				    smc_dpm_table, I2cControllers);
++	else
++		smu_memcpy_trailing(ppt, I2cControllers, BoardReserved,
++				    smc_dpm_table, I2cControllers);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index 08577d1b84eca..c42c0c1446f4f 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -1300,6 +1300,7 @@ static int smu_v13_0_0_get_thermal_temperature_range(struct smu_context *smu,
+ 	range->mem_emergency_max = (pptable->SkuTable.TemperatureLimit[TEMP_MEM] + CTF_OFFSET_MEM)*
+ 		SMU_TEMPERATURE_UNITS_PER_CENTIGRADES;
+ 	range->software_shutdown_temp = powerplay_table->software_shutdown_temp;
++	range->software_shutdown_temp_offset = pptable->SkuTable.FanAbnormalTempLimitOffset;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index 6846199a2ee14..9e387c3e9b696 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -1687,6 +1687,14 @@ static int anx7625_parse_dt(struct device *dev,
+ 	if (of_property_read_bool(np, "analogix,audio-enable"))
+ 		pdata->audio_en = 1;
+ 
++	return 0;
++}
++
++static int anx7625_parse_dt_panel(struct device *dev,
++				  struct anx7625_platform_data *pdata)
++{
++	struct device_node *np = dev->of_node;
++
+ 	pdata->panel_bridge = devm_drm_of_get_bridge(dev, np, 1, 0);
+ 	if (IS_ERR(pdata->panel_bridge)) {
+ 		if (PTR_ERR(pdata->panel_bridge) == -ENODEV) {
+@@ -2032,7 +2040,7 @@ static int anx7625_register_audio(struct device *dev, struct anx7625_data *ctx)
+ 	return 0;
+ }
+ 
+-static int anx7625_attach_dsi(struct anx7625_data *ctx)
++static int anx7625_setup_dsi_device(struct anx7625_data *ctx)
+ {
+ 	struct mipi_dsi_device *dsi;
+ 	struct device *dev = &ctx->client->dev;
+@@ -2042,9 +2050,6 @@ static int anx7625_attach_dsi(struct anx7625_data *ctx)
+ 		.channel = 0,
+ 		.node = NULL,
+ 	};
+-	int ret;
+-
+-	DRM_DEV_DEBUG_DRIVER(dev, "attach dsi\n");
+ 
+ 	host = of_find_mipi_dsi_host_by_node(ctx->pdata.mipi_host_node);
+ 	if (!host) {
+@@ -2065,14 +2070,24 @@ static int anx7625_attach_dsi(struct anx7625_data *ctx)
+ 		MIPI_DSI_MODE_VIDEO_HSE	|
+ 		MIPI_DSI_HS_PKT_END_ALIGNED;
+ 
+-	ret = devm_mipi_dsi_attach(dev, dsi);
++	ctx->dsi = dsi;
++
++	return 0;
++}
++
++static int anx7625_attach_dsi(struct anx7625_data *ctx)
++{
++	struct device *dev = &ctx->client->dev;
++	int ret;
++
++	DRM_DEV_DEBUG_DRIVER(dev, "attach dsi\n");
++
++	ret = devm_mipi_dsi_attach(dev, ctx->dsi);
+ 	if (ret) {
+ 		DRM_DEV_ERROR(dev, "fail to attach dsi to host.\n");
+ 		return ret;
+ 	}
+ 
+-	ctx->dsi = dsi;
+-
+ 	DRM_DEV_DEBUG_DRIVER(dev, "attach dsi succeeded.\n");
+ 
+ 	return 0;
+@@ -2560,6 +2575,40 @@ static void anx7625_runtime_disable(void *data)
+ 	pm_runtime_disable(data);
+ }
+ 
++static int anx7625_link_bridge(struct drm_dp_aux *aux)
++{
++	struct anx7625_data *platform = container_of(aux, struct anx7625_data, aux);
++	struct device *dev = aux->dev;
++	int ret;
++
++	ret = anx7625_parse_dt_panel(dev, &platform->pdata);
++	if (ret) {
++		DRM_DEV_ERROR(dev, "fail to parse DT for panel : %d\n", ret);
++		return ret;
++	}
++
++	platform->bridge.funcs = &anx7625_bridge_funcs;
++	platform->bridge.of_node = dev->of_node;
++	if (!anx7625_of_panel_on_aux_bus(dev))
++		platform->bridge.ops |= DRM_BRIDGE_OP_EDID;
++	if (!platform->pdata.panel_bridge)
++		platform->bridge.ops |= DRM_BRIDGE_OP_HPD |
++					DRM_BRIDGE_OP_DETECT;
++	platform->bridge.type = platform->pdata.panel_bridge ?
++				    DRM_MODE_CONNECTOR_eDP :
++				    DRM_MODE_CONNECTOR_DisplayPort;
++
++	drm_bridge_add(&platform->bridge);
++
++	if (!platform->pdata.is_dpi) {
++		ret = anx7625_attach_dsi(platform);
++		if (ret)
++			drm_bridge_remove(&platform->bridge);
++	}
++
++	return ret;
++}
++
+ static int anx7625_i2c_probe(struct i2c_client *client)
+ {
+ 	struct anx7625_data *platform;
+@@ -2634,6 +2683,24 @@ static int anx7625_i2c_probe(struct i2c_client *client)
+ 	platform->aux.wait_hpd_asserted = anx7625_wait_hpd_asserted;
+ 	drm_dp_aux_init(&platform->aux);
+ 
++	ret = anx7625_parse_dt(dev, pdata);
++	if (ret) {
++		if (ret != -EPROBE_DEFER)
++			DRM_DEV_ERROR(dev, "fail to parse DT : %d\n", ret);
++		goto free_wq;
++	}
++
++	if (!platform->pdata.is_dpi) {
++		ret = anx7625_setup_dsi_device(platform);
++		if (ret < 0)
++			goto free_wq;
++	}
++
++	/*
++	 * Registering the i2c devices will retrigger deferred probe, so it
++	 * needs to be done after calls that might return EPROBE_DEFER,
++	 * otherwise we can get an infinite loop.
++	 */
+ 	if (anx7625_register_i2c_dummy_clients(platform, client) != 0) {
+ 		ret = -ENOMEM;
+ 		DRM_DEV_ERROR(dev, "fail to reserve I2C bus.\n");
+@@ -2648,13 +2715,21 @@ static int anx7625_i2c_probe(struct i2c_client *client)
+ 	if (ret)
+ 		goto free_wq;
+ 
+-	devm_of_dp_aux_populate_ep_devices(&platform->aux);
+-
+-	ret = anx7625_parse_dt(dev, pdata);
++	/*
++	 * Populating the aux bus will retrigger deferred probe, so it needs to
++	 * be done after calls that might return EPROBE_DEFER, otherwise we can
++	 * get an infinite loop.
++	 */
++	ret = devm_of_dp_aux_populate_bus(&platform->aux, anx7625_link_bridge);
+ 	if (ret) {
+-		if (ret != -EPROBE_DEFER)
+-			DRM_DEV_ERROR(dev, "fail to parse DT : %d\n", ret);
+-		goto free_wq;
++		if (ret != -ENODEV) {
++			DRM_DEV_ERROR(dev, "failed to populate aux bus : %d\n", ret);
++			goto free_wq;
++		}
++
++		ret = anx7625_link_bridge(&platform->aux);
++		if (ret)
++			goto free_wq;
+ 	}
+ 
+ 	if (!platform->pdata.low_power_mode) {
+@@ -2667,27 +2742,6 @@ static int anx7625_i2c_probe(struct i2c_client *client)
+ 	if (platform->pdata.intp_irq)
+ 		queue_work(platform->workqueue, &platform->work);
+ 
+-	platform->bridge.funcs = &anx7625_bridge_funcs;
+-	platform->bridge.of_node = client->dev.of_node;
+-	if (!anx7625_of_panel_on_aux_bus(&client->dev))
+-		platform->bridge.ops |= DRM_BRIDGE_OP_EDID;
+-	if (!platform->pdata.panel_bridge)
+-		platform->bridge.ops |= DRM_BRIDGE_OP_HPD |
+-					DRM_BRIDGE_OP_DETECT;
+-	platform->bridge.type = platform->pdata.panel_bridge ?
+-				    DRM_MODE_CONNECTOR_eDP :
+-				    DRM_MODE_CONNECTOR_DisplayPort;
+-
+-	drm_bridge_add(&platform->bridge);
+-
+-	if (!platform->pdata.is_dpi) {
+-		ret = anx7625_attach_dsi(platform);
+-		if (ret) {
+-			DRM_DEV_ERROR(dev, "Fail to attach to dsi : %d\n", ret);
+-			goto unregister_bridge;
+-		}
+-	}
+-
+ 	if (platform->pdata.audio_en)
+ 		anx7625_register_audio(dev, platform);
+ 
+@@ -2695,12 +2749,6 @@ static int anx7625_i2c_probe(struct i2c_client *client)
+ 
+ 	return 0;
+ 
+-unregister_bridge:
+-	drm_bridge_remove(&platform->bridge);
+-
+-	if (!platform->pdata.low_power_mode)
+-		pm_runtime_put_sync_suspend(&client->dev);
+-
+ free_wq:
+ 	if (platform->workqueue)
+ 		destroy_workqueue(platform->workqueue);
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index abaf6e23775eb..45f579c365e7f 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -3207,7 +3207,7 @@ static ssize_t receive_timing_debugfs_show(struct file *file, char __user *buf,
+ 					   size_t len, loff_t *ppos)
+ {
+ 	struct it6505 *it6505 = file->private_data;
+-	struct drm_display_mode *vid = &it6505->video_info;
++	struct drm_display_mode *vid;
+ 	u8 read_buf[READ_BUFFER_SIZE];
+ 	u8 *str = read_buf, *end = read_buf + READ_BUFFER_SIZE;
+ 	ssize_t ret, count;
+@@ -3216,6 +3216,7 @@ static ssize_t receive_timing_debugfs_show(struct file *file, char __user *buf,
+ 		return -ENODEV;
+ 
+ 	it6505_calc_video_info(it6505);
++	vid = &it6505->video_info;
+ 	str += scnprintf(str, end - str, "---video timing---\n");
+ 	str += scnprintf(str, end - str, "PCLK:%d.%03dMHz\n",
+ 			 vid->clock / 1000, vid->clock % 1000);
+diff --git a/drivers/gpu/drm/bridge/samsung-dsim.c b/drivers/gpu/drm/bridge/samsung-dsim.c
+index e0a402a85787c..3194cabb26b32 100644
+--- a/drivers/gpu/drm/bridge/samsung-dsim.c
++++ b/drivers/gpu/drm/bridge/samsung-dsim.c
+@@ -405,6 +405,9 @@ static const struct samsung_dsim_driver_data exynos3_dsi_driver_data = {
+ 	.num_bits_resol = 11,
+ 	.pll_p_offset = 13,
+ 	.reg_values = reg_values,
++	.m_min = 41,
++	.m_max = 125,
++	.min_freq = 500,
+ };
+ 
+ static const struct samsung_dsim_driver_data exynos4_dsi_driver_data = {
+@@ -418,6 +421,9 @@ static const struct samsung_dsim_driver_data exynos4_dsi_driver_data = {
+ 	.num_bits_resol = 11,
+ 	.pll_p_offset = 13,
+ 	.reg_values = reg_values,
++	.m_min = 41,
++	.m_max = 125,
++	.min_freq = 500,
+ };
+ 
+ static const struct samsung_dsim_driver_data exynos5_dsi_driver_data = {
+@@ -429,6 +435,9 @@ static const struct samsung_dsim_driver_data exynos5_dsi_driver_data = {
+ 	.num_bits_resol = 11,
+ 	.pll_p_offset = 13,
+ 	.reg_values = reg_values,
++	.m_min = 41,
++	.m_max = 125,
++	.min_freq = 500,
+ };
+ 
+ static const struct samsung_dsim_driver_data exynos5433_dsi_driver_data = {
+@@ -441,6 +450,9 @@ static const struct samsung_dsim_driver_data exynos5433_dsi_driver_data = {
+ 	.num_bits_resol = 12,
+ 	.pll_p_offset = 13,
+ 	.reg_values = exynos5433_reg_values,
++	.m_min = 41,
++	.m_max = 125,
++	.min_freq = 500,
+ };
+ 
+ static const struct samsung_dsim_driver_data exynos5422_dsi_driver_data = {
+@@ -453,6 +465,9 @@ static const struct samsung_dsim_driver_data exynos5422_dsi_driver_data = {
+ 	.num_bits_resol = 12,
+ 	.pll_p_offset = 13,
+ 	.reg_values = exynos5422_reg_values,
++	.m_min = 41,
++	.m_max = 125,
++	.min_freq = 500,
+ };
+ 
+ static const struct samsung_dsim_driver_data imx8mm_dsi_driver_data = {
+@@ -469,6 +484,9 @@ static const struct samsung_dsim_driver_data imx8mm_dsi_driver_data = {
+ 	 */
+ 	.pll_p_offset = 14,
+ 	.reg_values = imx8mm_dsim_reg_values,
++	.m_min = 64,
++	.m_max = 1023,
++	.min_freq = 1050,
+ };
+ 
+ static const struct samsung_dsim_driver_data *
+@@ -547,12 +565,12 @@ static unsigned long samsung_dsim_pll_find_pms(struct samsung_dsim *dsi,
+ 			tmp = (u64)fout * (_p << _s);
+ 			do_div(tmp, fin);
+ 			_m = tmp;
+-			if (_m < 41 || _m > 125)
++			if (_m < driver_data->m_min || _m > driver_data->m_max)
+ 				continue;
+ 
+ 			tmp = (u64)_m * fin;
+ 			do_div(tmp, _p);
+-			if (tmp < 500 * MHZ ||
++			if (tmp < driver_data->min_freq  * MHZ ||
+ 			    tmp > driver_data->max_freq * MHZ)
+ 				continue;
+ 
+diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
+index 91f7cb56a654d..d6349af4f1b62 100644
+--- a/drivers/gpu/drm/bridge/tc358767.c
++++ b/drivers/gpu/drm/bridge/tc358767.c
+@@ -1890,7 +1890,7 @@ static int tc_mipi_dsi_host_attach(struct tc_data *tc)
+ 	if (dsi_lanes < 0)
+ 		return dsi_lanes;
+ 
+-	dsi = mipi_dsi_device_register_full(host, &info);
++	dsi = devm_mipi_dsi_device_register_full(dev, host, &info);
+ 	if (IS_ERR(dsi))
+ 		return dev_err_probe(dev, PTR_ERR(dsi),
+ 				     "failed to create dsi device\n");
+@@ -1901,7 +1901,7 @@ static int tc_mipi_dsi_host_attach(struct tc_data *tc)
+ 	dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
+ 			  MIPI_DSI_MODE_LPM | MIPI_DSI_CLOCK_NON_CONTINUOUS;
+ 
+-	ret = mipi_dsi_attach(dsi);
++	ret = devm_mipi_dsi_attach(dev, dsi);
+ 	if (ret < 0) {
+ 		dev_err(dev, "failed to attach dsi to host: %d\n", ret);
+ 		return ret;
+diff --git a/drivers/gpu/drm/bridge/tc358768.c b/drivers/gpu/drm/bridge/tc358768.c
+index 7c0cbe84611b9..966a25cb0b108 100644
+--- a/drivers/gpu/drm/bridge/tc358768.c
++++ b/drivers/gpu/drm/bridge/tc358768.c
+@@ -9,6 +9,8 @@
+ #include <linux/gpio/consumer.h>
+ #include <linux/i2c.h>
+ #include <linux/kernel.h>
++#include <linux/media-bus-format.h>
++#include <linux/minmax.h>
+ #include <linux/module.h>
+ #include <linux/regmap.h>
+ #include <linux/regulator/consumer.h>
+@@ -146,6 +148,7 @@ struct tc358768_priv {
+ 
+ 	u32 pd_lines; /* number of Parallel Port Input Data Lines */
+ 	u32 dsi_lanes; /* number of DSI Lanes */
++	u32 dsi_bpp; /* number of Bits Per Pixel over DSI */
+ 
+ 	/* Parameters for PLL programming */
+ 	u32 fbd;	/* PLL feedback divider */
+@@ -284,12 +287,12 @@ static void tc358768_hw_disable(struct tc358768_priv *priv)
+ 
+ static u32 tc358768_pll_to_pclk(struct tc358768_priv *priv, u32 pll_clk)
+ {
+-	return (u32)div_u64((u64)pll_clk * priv->dsi_lanes, priv->pd_lines);
++	return (u32)div_u64((u64)pll_clk * priv->dsi_lanes, priv->dsi_bpp);
+ }
+ 
+ static u32 tc358768_pclk_to_pll(struct tc358768_priv *priv, u32 pclk)
+ {
+-	return (u32)div_u64((u64)pclk * priv->pd_lines, priv->dsi_lanes);
++	return (u32)div_u64((u64)pclk * priv->dsi_bpp, priv->dsi_lanes);
+ }
+ 
+ static int tc358768_calc_pll(struct tc358768_priv *priv,
+@@ -334,13 +337,17 @@ static int tc358768_calc_pll(struct tc358768_priv *priv,
+ 		u32 fbd;
+ 
+ 		for (fbd = 0; fbd < 512; ++fbd) {
+-			u32 pll, diff;
++			u32 pll, diff, pll_in;
+ 
+ 			pll = (u32)div_u64((u64)refclk * (fbd + 1), divisor);
+ 
+ 			if (pll >= max_pll || pll < min_pll)
+ 				continue;
+ 
++			pll_in = (u32)div_u64((u64)refclk, prd + 1);
++			if (pll_in < 4000000)
++				continue;
++
+ 			diff = max(pll, target_pll) - min(pll, target_pll);
+ 
+ 			if (diff < best_diff) {
+@@ -422,6 +429,7 @@ static int tc358768_dsi_host_attach(struct mipi_dsi_host *host,
+ 	priv->output.panel = panel;
+ 
+ 	priv->dsi_lanes = dev->lanes;
++	priv->dsi_bpp = mipi_dsi_pixel_format_to_bpp(dev->format);
+ 
+ 	/* get input ep (port0/endpoint0) */
+ 	ret = -EINVAL;
+@@ -433,7 +441,7 @@ static int tc358768_dsi_host_attach(struct mipi_dsi_host *host,
+ 	}
+ 
+ 	if (ret)
+-		priv->pd_lines = mipi_dsi_pixel_format_to_bpp(dev->format);
++		priv->pd_lines = priv->dsi_bpp;
+ 
+ 	drm_bridge_add(&priv->bridge);
+ 
+@@ -632,6 +640,7 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 	struct mipi_dsi_device *dsi_dev = priv->output.dev;
+ 	unsigned long mode_flags = dsi_dev->mode_flags;
+ 	u32 val, val2, lptxcnt, hact, data_type;
++	s32 raw_val;
+ 	const struct drm_display_mode *mode;
+ 	u32 dsibclk_nsk, dsiclk_nsk, ui_nsk, phy_delay_nsk;
+ 	u32 dsiclk, dsibclk, video_start;
+@@ -736,25 +745,26 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 
+ 	/* 38ns < TCLK_PREPARE < 95ns */
+ 	val = tc358768_ns_to_cnt(65, dsibclk_nsk) - 1;
+-	/* TCLK_PREPARE > 300ns */
+-	val2 = tc358768_ns_to_cnt(300 + tc358768_to_ns(3 * ui_nsk),
+-				  dsibclk_nsk);
+-	val |= (val2 - tc358768_to_ns(phy_delay_nsk - dsibclk_nsk)) << 8;
++	/* TCLK_PREPARE + TCLK_ZERO > 300ns */
++	val2 = tc358768_ns_to_cnt(300 - tc358768_to_ns(2 * ui_nsk),
++				  dsibclk_nsk) - 2;
++	val |= val2 << 8;
+ 	dev_dbg(priv->dev, "TCLK_HEADERCNT: 0x%x\n", val);
+ 	tc358768_write(priv, TC358768_TCLK_HEADERCNT, val);
+ 
+-	/* TCLK_TRAIL > 60ns + 3*UI */
+-	val = 60 + tc358768_to_ns(3 * ui_nsk);
+-	val = tc358768_ns_to_cnt(val, dsibclk_nsk) - 5;
++	/* TCLK_TRAIL > 60ns AND TEOT <= 105 ns + 12*UI */
++	raw_val = tc358768_ns_to_cnt(60 + tc358768_to_ns(2 * ui_nsk), dsibclk_nsk) - 5;
++	val = clamp(raw_val, 0, 127);
+ 	dev_dbg(priv->dev, "TCLK_TRAILCNT: 0x%x\n", val);
+ 	tc358768_write(priv, TC358768_TCLK_TRAILCNT, val);
+ 
+ 	/* 40ns + 4*UI < THS_PREPARE < 85ns + 6*UI */
+ 	val = 50 + tc358768_to_ns(4 * ui_nsk);
+ 	val = tc358768_ns_to_cnt(val, dsibclk_nsk) - 1;
+-	/* THS_ZERO > 145ns + 10*UI */
+-	val2 = tc358768_ns_to_cnt(145 - tc358768_to_ns(ui_nsk), dsibclk_nsk);
+-	val |= (val2 - tc358768_to_ns(phy_delay_nsk)) << 8;
++	/* THS_PREPARE + THS_ZERO > 145ns + 10*UI */
++	raw_val = tc358768_ns_to_cnt(145 - tc358768_to_ns(3 * ui_nsk), dsibclk_nsk) - 10;
++	val2 = clamp(raw_val, 0, 127);
++	val |= val2 << 8;
+ 	dev_dbg(priv->dev, "THS_HEADERCNT: 0x%x\n", val);
+ 	tc358768_write(priv, TC358768_THS_HEADERCNT, val);
+ 
+@@ -770,9 +780,10 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 	dev_dbg(priv->dev, "TCLK_POSTCNT: 0x%x\n", val);
+ 	tc358768_write(priv, TC358768_TCLK_POSTCNT, val);
+ 
+-	/* 60ns + 4*UI < THS_PREPARE < 105ns + 12*UI */
+-	val = tc358768_ns_to_cnt(60 + tc358768_to_ns(15 * ui_nsk),
+-				 dsibclk_nsk) - 5;
++	/* max(60ns + 4*UI, 8*UI) < THS_TRAILCNT < 105ns + 12*UI */
++	raw_val = tc358768_ns_to_cnt(60 + tc358768_to_ns(18 * ui_nsk),
++				     dsibclk_nsk) - 4;
++	val = clamp(raw_val, 0, 15);
+ 	dev_dbg(priv->dev, "THS_TRAILCNT: 0x%x\n", val);
+ 	tc358768_write(priv, TC358768_THS_TRAILCNT, val);
+ 
+@@ -786,7 +797,7 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 
+ 	/* TXTAGOCNT[26:16] RXTASURECNT[10:0] */
+ 	val = tc358768_to_ns((lptxcnt + 1) * dsibclk_nsk * 4);
+-	val = tc358768_ns_to_cnt(val, dsibclk_nsk) - 1;
++	val = tc358768_ns_to_cnt(val, dsibclk_nsk) / 4 - 1;
+ 	val2 = tc358768_ns_to_cnt(tc358768_to_ns((lptxcnt + 1) * dsibclk_nsk),
+ 				  dsibclk_nsk) - 2;
+ 	val = val << 16 | val2;
+@@ -866,8 +877,7 @@ static void tc358768_bridge_pre_enable(struct drm_bridge *bridge)
+ 	val = TC358768_DSI_CONFW_MODE_SET | TC358768_DSI_CONFW_ADDR_DSI_CONTROL;
+ 	val |= (dsi_dev->lanes - 1) << 1;
+ 
+-	if (!(dsi_dev->mode_flags & MIPI_DSI_MODE_LPM))
+-		val |= TC358768_DSI_CONTROL_TXMD;
++	val |= TC358768_DSI_CONTROL_TXMD;
+ 
+ 	if (!(mode_flags & MIPI_DSI_CLOCK_NON_CONTINUOUS))
+ 		val |= TC358768_DSI_CONTROL_HSCKMD;
+@@ -913,6 +923,44 @@ static void tc358768_bridge_enable(struct drm_bridge *bridge)
+ 	}
+ }
+ 
++#define MAX_INPUT_SEL_FORMATS	1
++
++static u32 *
++tc358768_atomic_get_input_bus_fmts(struct drm_bridge *bridge,
++				   struct drm_bridge_state *bridge_state,
++				   struct drm_crtc_state *crtc_state,
++				   struct drm_connector_state *conn_state,
++				   u32 output_fmt,
++				   unsigned int *num_input_fmts)
++{
++	struct tc358768_priv *priv = bridge_to_tc358768(bridge);
++	u32 *input_fmts;
++
++	*num_input_fmts = 0;
++
++	input_fmts = kcalloc(MAX_INPUT_SEL_FORMATS, sizeof(*input_fmts),
++			     GFP_KERNEL);
++	if (!input_fmts)
++		return NULL;
++
++	switch (priv->pd_lines) {
++	case 16:
++		input_fmts[0] = MEDIA_BUS_FMT_RGB565_1X16;
++		break;
++	case 18:
++		input_fmts[0] = MEDIA_BUS_FMT_RGB666_1X18;
++		break;
++	default:
++	case 24:
++		input_fmts[0] = MEDIA_BUS_FMT_RGB888_1X24;
++		break;
++	};
++
++	*num_input_fmts = MAX_INPUT_SEL_FORMATS;
++
++	return input_fmts;
++}
++
+ static const struct drm_bridge_funcs tc358768_bridge_funcs = {
+ 	.attach = tc358768_bridge_attach,
+ 	.mode_valid = tc358768_bridge_mode_valid,
+@@ -920,6 +968,11 @@ static const struct drm_bridge_funcs tc358768_bridge_funcs = {
+ 	.enable = tc358768_bridge_enable,
+ 	.disable = tc358768_bridge_disable,
+ 	.post_disable = tc358768_bridge_post_disable,
++
++	.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
++	.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
++	.atomic_reset = drm_atomic_helper_bridge_reset,
++	.atomic_get_input_bus_fmts = tc358768_atomic_get_input_bus_fmts,
+ };
+ 
+ static const struct drm_bridge_timings default_tc358768_timings = {
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi83.c b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+index 75286c9afbb96..6e125ba4f0d75 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi83.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+@@ -321,8 +321,8 @@ static u8 sn65dsi83_get_dsi_div(struct sn65dsi83 *ctx)
+ 	return dsi_div - 1;
+ }
+ 
+-static void sn65dsi83_atomic_enable(struct drm_bridge *bridge,
+-				    struct drm_bridge_state *old_bridge_state)
++static void sn65dsi83_atomic_pre_enable(struct drm_bridge *bridge,
++					struct drm_bridge_state *old_bridge_state)
+ {
+ 	struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge);
+ 	struct drm_atomic_state *state = old_bridge_state->base.state;
+@@ -478,17 +478,29 @@ static void sn65dsi83_atomic_enable(struct drm_bridge *bridge,
+ 		dev_err(ctx->dev, "failed to lock PLL, ret=%i\n", ret);
+ 		/* On failure, disable PLL again and exit. */
+ 		regmap_write(ctx->regmap, REG_RC_PLL_EN, 0x00);
++		regulator_disable(ctx->vcc);
+ 		return;
+ 	}
+ 
+ 	/* Trigger reset after CSR register update. */
+ 	regmap_write(ctx->regmap, REG_RC_RESET, REG_RC_RESET_SOFT_RESET);
+ 
++	/* Wait for 10ms after soft reset as specified in datasheet */
++	usleep_range(10000, 12000);
++}
++
++static void sn65dsi83_atomic_enable(struct drm_bridge *bridge,
++				    struct drm_bridge_state *old_bridge_state)
++{
++	struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge);
++	unsigned int pval;
++
+ 	/* Clear all errors that got asserted during initialization. */
+ 	regmap_read(ctx->regmap, REG_IRQ_STAT, &pval);
+ 	regmap_write(ctx->regmap, REG_IRQ_STAT, pval);
+ 
+-	usleep_range(10000, 12000);
++	/* Wait for 1ms and check for errors in status register */
++	usleep_range(1000, 1100);
+ 	regmap_read(ctx->regmap, REG_IRQ_STAT, &pval);
+ 	if (pval)
+ 		dev_err(ctx->dev, "Unexpected link status 0x%02x\n", pval);
+@@ -555,6 +567,7 @@ static const struct drm_bridge_funcs sn65dsi83_funcs = {
+ 	.attach			= sn65dsi83_attach,
+ 	.detach			= sn65dsi83_detach,
+ 	.atomic_enable		= sn65dsi83_atomic_enable,
++	.atomic_pre_enable	= sn65dsi83_atomic_pre_enable,
+ 	.atomic_disable		= sn65dsi83_atomic_disable,
+ 	.mode_valid		= sn65dsi83_mode_valid,
+ 
+@@ -697,6 +710,7 @@ static int sn65dsi83_probe(struct i2c_client *client)
+ 
+ 	ctx->bridge.funcs = &sn65dsi83_funcs;
+ 	ctx->bridge.of_node = dev->of_node;
++	ctx->bridge.pre_enable_prev_first = true;
+ 	drm_bridge_add(&ctx->bridge);
+ 
+ 	ret = sn65dsi83_host_attach(ctx);
+diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
+index 0bea3df2a16dc..b67eafa557159 100644
+--- a/drivers/gpu/drm/drm_gem_vram_helper.c
++++ b/drivers/gpu/drm/drm_gem_vram_helper.c
+@@ -45,7 +45,7 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
+  * the frame's scanout buffer or the cursor image. If there's no more space
+  * left in VRAM, inactive GEM objects can be moved to system memory.
+  *
+- * To initialize the VRAM helper library call drmm_vram_helper_alloc_mm().
++ * To initialize the VRAM helper library call drmm_vram_helper_init().
+  * The function allocates and initializes an instance of &struct drm_vram_mm
+  * in &struct drm_device.vram_mm . Use &DRM_GEM_VRAM_DRIVER to initialize
+  * &struct drm_driver and  &DRM_VRAM_MM_FILE_OPERATIONS to initialize
+@@ -73,7 +73,7 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
+  *		// setup device, vram base and size
+  *		// ...
+  *
+- *		ret = drmm_vram_helper_alloc_mm(dev, vram_base, vram_size);
++ *		ret = drmm_vram_helper_init(dev, vram_base, vram_size);
+  *		if (ret)
+  *			return ret;
+  *		return 0;
+@@ -86,7 +86,7 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
+  * to userspace.
+  *
+  * You don't have to clean up the instance of VRAM MM.
+- * drmm_vram_helper_alloc_mm() is a managed interface that installs a
++ * drmm_vram_helper_init() is a managed interface that installs a
+  * clean-up handler to run during the DRM device's release.
+  *
+  * For drawing or scanout operations, rsp. buffer objects have to be pinned
+diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
+index 97b0d4ae221ac..b158f10b4269a 100644
+--- a/drivers/gpu/drm/i915/Makefile
++++ b/drivers/gpu/drm/i915/Makefile
+@@ -25,6 +25,7 @@ subdir-ccflags-$(CONFIG_DRM_I915_WERROR) += -Werror
+ 
+ # Fine grained warnings disable
+ CFLAGS_i915_pci.o = $(call cc-disable-warning, override-init)
++CFLAGS_display/intel_display_device.o = $(call cc-disable-warning, override-init)
+ CFLAGS_display/intel_fbdev.o = $(call cc-disable-warning, override-init)
+ 
+ subdir-ccflags-y += -I$(srctree)/$(src)
+@@ -300,6 +301,7 @@ i915-y += \
+ 	display/intel_crt.o \
+ 	display/intel_ddi.o \
+ 	display/intel_ddi_buf_trans.o \
++	display/intel_display_device.o \
+ 	display/intel_display_trace.o \
+ 	display/intel_dkl_phy.o \
+ 	display/intel_dp.o \
+diff --git a/drivers/gpu/drm/i915/display/intel_color.c b/drivers/gpu/drm/i915/display/intel_color.c
+index 36aac88143ac1..a5fb08f6cf136 100644
+--- a/drivers/gpu/drm/i915/display/intel_color.c
++++ b/drivers/gpu/drm/i915/display/intel_color.c
+@@ -116,10 +116,9 @@ struct intel_color_funcs {
+ #define ILK_CSC_COEFF_FP(coeff, fbits)	\
+ 	(clamp_val(((coeff) >> (32 - (fbits) - 3)) + 4, 0, 0xfff) & 0xff8)
+ 
+-#define ILK_CSC_COEFF_LIMITED_RANGE 0x0dc0
+ #define ILK_CSC_COEFF_1_0 0x7800
+-
+-#define ILK_CSC_POSTOFF_LIMITED_RANGE (16 * (1 << 12) / 255)
++#define ILK_CSC_COEFF_LIMITED_RANGE ((235 - 16) << (12 - 8)) /* exponent 0 */
++#define ILK_CSC_POSTOFF_LIMITED_RANGE (16 << (12 - 8))
+ 
+ /* Nop pre/post offsets */
+ static const u16 ilk_csc_off_zero[3] = {};
+@@ -1606,14 +1605,14 @@ static u32 intel_gamma_lut_tests(const struct intel_crtc_state *crtc_state)
+ 	if (lut_is_legacy(gamma_lut))
+ 		return 0;
+ 
+-	return INTEL_INFO(i915)->display.color.gamma_lut_tests;
++	return DISPLAY_INFO(i915)->color.gamma_lut_tests;
+ }
+ 
+ static u32 intel_degamma_lut_tests(const struct intel_crtc_state *crtc_state)
+ {
+ 	struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
+ 
+-	return INTEL_INFO(i915)->display.color.degamma_lut_tests;
++	return DISPLAY_INFO(i915)->color.degamma_lut_tests;
+ }
+ 
+ static int intel_gamma_lut_size(const struct intel_crtc_state *crtc_state)
+@@ -1624,14 +1623,14 @@ static int intel_gamma_lut_size(const struct intel_crtc_state *crtc_state)
+ 	if (lut_is_legacy(gamma_lut))
+ 		return LEGACY_LUT_LENGTH;
+ 
+-	return INTEL_INFO(i915)->display.color.gamma_lut_size;
++	return DISPLAY_INFO(i915)->color.gamma_lut_size;
+ }
+ 
+ static u32 intel_degamma_lut_size(const struct intel_crtc_state *crtc_state)
+ {
+ 	struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
+ 
+-	return INTEL_INFO(i915)->display.color.degamma_lut_size;
++	return DISPLAY_INFO(i915)->color.degamma_lut_size;
+ }
+ 
+ static int check_lut_size(const struct drm_property_blob *lut, int expected)
+@@ -2097,7 +2096,7 @@ static int glk_assign_luts(struct intel_crtc_state *crtc_state)
+ 		struct drm_property_blob *gamma_lut;
+ 
+ 		gamma_lut = create_resized_lut(i915, crtc_state->hw.gamma_lut,
+-					       INTEL_INFO(i915)->display.color.degamma_lut_size,
++					       DISPLAY_INFO(i915)->color.degamma_lut_size,
+ 					       false);
+ 		if (IS_ERR(gamma_lut))
+ 			return PTR_ERR(gamma_lut);
+@@ -2627,7 +2626,7 @@ static struct drm_property_blob *i9xx_read_lut_8(struct intel_crtc *crtc)
+ static struct drm_property_blob *i9xx_read_lut_10(struct intel_crtc *crtc)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+-	u32 lut_size = INTEL_INFO(dev_priv)->display.color.gamma_lut_size;
++	u32 lut_size = DISPLAY_INFO(dev_priv)->color.gamma_lut_size;
+ 	enum pipe pipe = crtc->pipe;
+ 	struct drm_property_blob *blob;
+ 	struct drm_color_lut *lut;
+@@ -2676,7 +2675,7 @@ static void i9xx_read_luts(struct intel_crtc_state *crtc_state)
+ static struct drm_property_blob *i965_read_lut_10p6(struct intel_crtc *crtc)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+-	int i, lut_size = INTEL_INFO(dev_priv)->display.color.gamma_lut_size;
++	int i, lut_size = DISPLAY_INFO(dev_priv)->color.gamma_lut_size;
+ 	enum pipe pipe = crtc->pipe;
+ 	struct drm_property_blob *blob;
+ 	struct drm_color_lut *lut;
+@@ -2726,7 +2725,7 @@ static void i965_read_luts(struct intel_crtc_state *crtc_state)
+ static struct drm_property_blob *chv_read_cgm_degamma(struct intel_crtc *crtc)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+-	int i, lut_size = INTEL_INFO(dev_priv)->display.color.degamma_lut_size;
++	int i, lut_size = DISPLAY_INFO(dev_priv)->color.degamma_lut_size;
+ 	enum pipe pipe = crtc->pipe;
+ 	struct drm_property_blob *blob;
+ 	struct drm_color_lut *lut;
+@@ -2752,7 +2751,7 @@ static struct drm_property_blob *chv_read_cgm_degamma(struct intel_crtc *crtc)
+ static struct drm_property_blob *chv_read_cgm_gamma(struct intel_crtc *crtc)
+ {
+ 	struct drm_i915_private *i915 = to_i915(crtc->base.dev);
+-	int i, lut_size = INTEL_INFO(i915)->display.color.gamma_lut_size;
++	int i, lut_size = DISPLAY_INFO(i915)->color.gamma_lut_size;
+ 	enum pipe pipe = crtc->pipe;
+ 	struct drm_property_blob *blob;
+ 	struct drm_color_lut *lut;
+@@ -2816,7 +2815,7 @@ static struct drm_property_blob *ilk_read_lut_8(struct intel_crtc *crtc)
+ static struct drm_property_blob *ilk_read_lut_10(struct intel_crtc *crtc)
+ {
+ 	struct drm_i915_private *i915 = to_i915(crtc->base.dev);
+-	int i, lut_size = INTEL_INFO(i915)->display.color.gamma_lut_size;
++	int i, lut_size = DISPLAY_INFO(i915)->color.gamma_lut_size;
+ 	enum pipe pipe = crtc->pipe;
+ 	struct drm_property_blob *blob;
+ 	struct drm_color_lut *lut;
+@@ -3000,7 +2999,7 @@ static void bdw_read_luts(struct intel_crtc_state *crtc_state)
+ static struct drm_property_blob *glk_read_degamma_lut(struct intel_crtc *crtc)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+-	int i, lut_size = INTEL_INFO(dev_priv)->display.color.degamma_lut_size;
++	int i, lut_size = DISPLAY_INFO(dev_priv)->color.degamma_lut_size;
+ 	enum pipe pipe = crtc->pipe;
+ 	struct drm_property_blob *blob;
+ 	struct drm_color_lut *lut;
+@@ -3065,7 +3064,7 @@ static struct drm_property_blob *
+ icl_read_lut_multi_segment(struct intel_crtc *crtc)
+ {
+ 	struct drm_i915_private *i915 = to_i915(crtc->base.dev);
+-	int i, lut_size = INTEL_INFO(i915)->display.color.gamma_lut_size;
++	int i, lut_size = DISPLAY_INFO(i915)->color.gamma_lut_size;
+ 	enum pipe pipe = crtc->pipe;
+ 	struct drm_property_blob *blob;
+ 	struct drm_color_lut *lut;
+@@ -3234,8 +3233,8 @@ void intel_color_crtc_init(struct intel_crtc *crtc)
+ 
+ 	drm_mode_crtc_set_gamma_size(&crtc->base, 256);
+ 
+-	gamma_lut_size = INTEL_INFO(i915)->display.color.gamma_lut_size;
+-	degamma_lut_size = INTEL_INFO(i915)->display.color.degamma_lut_size;
++	gamma_lut_size = DISPLAY_INFO(i915)->color.gamma_lut_size;
++	degamma_lut_size = DISPLAY_INFO(i915)->color.degamma_lut_size;
+ 	has_ctm = degamma_lut_size != 0;
+ 
+ 	/*
+@@ -3260,7 +3259,8 @@ int intel_color_init(struct drm_i915_private *i915)
+ 	if (DISPLAY_VER(i915) != 10)
+ 		return 0;
+ 
+-	blob = create_linear_lut(i915, INTEL_INFO(i915)->display.color.degamma_lut_size);
++	blob = create_linear_lut(i915,
++				 DISPLAY_INFO(i915)->color.degamma_lut_size);
+ 	if (IS_ERR(blob))
+ 		return PTR_ERR(blob);
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_crtc.c b/drivers/gpu/drm/i915/display/intel_crtc.c
+index ed45a69348548..349bc7f5f9a0d 100644
+--- a/drivers/gpu/drm/i915/display/intel_crtc.c
++++ b/drivers/gpu/drm/i915/display/intel_crtc.c
+@@ -302,7 +302,7 @@ int intel_crtc_init(struct drm_i915_private *dev_priv, enum pipe pipe)
+ 		return PTR_ERR(crtc);
+ 
+ 	crtc->pipe = pipe;
+-	crtc->num_scalers = RUNTIME_INFO(dev_priv)->num_scalers[pipe];
++	crtc->num_scalers = DISPLAY_RUNTIME_INFO(dev_priv)->num_scalers[pipe];
+ 
+ 	if (DISPLAY_VER(dev_priv) >= 9)
+ 		primary = skl_universal_plane_create(dev_priv, pipe,
+diff --git a/drivers/gpu/drm/i915/display/intel_cursor.c b/drivers/gpu/drm/i915/display/intel_cursor.c
+index 31bef04273779..b342fad180ca5 100644
+--- a/drivers/gpu/drm/i915/display/intel_cursor.c
++++ b/drivers/gpu/drm/i915/display/intel_cursor.c
+@@ -36,7 +36,7 @@ static u32 intel_cursor_base(const struct intel_plane_state *plane_state)
+ 	const struct drm_i915_gem_object *obj = intel_fb_obj(fb);
+ 	u32 base;
+ 
+-	if (INTEL_INFO(dev_priv)->display.cursor_needs_physical)
++	if (DISPLAY_INFO(dev_priv)->cursor_needs_physical)
+ 		base = sg_dma_address(obj->mm.pages->sgl);
+ 	else
+ 		base = intel_plane_ggtt_offset(plane_state);
+@@ -814,7 +814,7 @@ intel_cursor_plane_create(struct drm_i915_private *dev_priv,
+ 						   DRM_MODE_ROTATE_0 |
+ 						   DRM_MODE_ROTATE_180);
+ 
+-	zpos = RUNTIME_INFO(dev_priv)->num_sprites[pipe] + 1;
++	zpos = DISPLAY_RUNTIME_INFO(dev_priv)->num_sprites[pipe] + 1;
+ 	drm_plane_create_zpos_immutable_property(&cursor->base, zpos);
+ 
+ 	if (DISPLAY_VER(dev_priv) >= 12)
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index 0aae9a1eb3d58..7749f95d5d02a 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -3568,7 +3568,7 @@ static u8 bigjoiner_pipes(struct drm_i915_private *i915)
+ 	else
+ 		pipes = 0;
+ 
+-	return pipes & RUNTIME_INFO(i915)->pipe_mask;
++	return pipes & DISPLAY_RUNTIME_INFO(i915)->pipe_mask;
+ }
+ 
+ static bool transcoder_ddi_func_is_enabled(struct drm_i915_private *dev_priv,
+diff --git a/drivers/gpu/drm/i915/display/intel_display.h b/drivers/gpu/drm/i915/display/intel_display.h
+index 287159bdeb0d1..4065b6598cf31 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.h
++++ b/drivers/gpu/drm/i915/display/intel_display.h
+@@ -105,7 +105,7 @@ enum i9xx_plane_id {
+ };
+ 
+ #define plane_name(p) ((p) + 'A')
+-#define sprite_name(p, s) ((p) * RUNTIME_INFO(dev_priv)->num_sprites[(p)] + (s) + 'A')
++#define sprite_name(p, s) ((p) * DISPLAY_RUNTIME_INFO(dev_priv)->num_sprites[(p)] + (s) + 'A')
+ 
+ #define for_each_plane_id_on_crtc(__crtc, __p) \
+ 	for ((__p) = PLANE_PRIMARY; (__p) < I915_MAX_PLANES; (__p)++) \
+@@ -113,7 +113,7 @@ enum i9xx_plane_id {
+ 
+ #define for_each_dbuf_slice(__dev_priv, __slice) \
+ 	for ((__slice) = DBUF_S1; (__slice) < I915_MAX_DBUF_SLICES; (__slice)++) \
+-		for_each_if(INTEL_INFO(__dev_priv)->display.dbuf.slice_mask & BIT(__slice))
++		for_each_if(INTEL_INFO(__dev_priv)->display->dbuf.slice_mask & BIT(__slice))
+ 
+ #define for_each_dbuf_slice_in_mask(__dev_priv, __slice, __mask) \
+ 	for_each_dbuf_slice((__dev_priv), (__slice)) \
+@@ -221,7 +221,7 @@ enum phy_fia {
+ 
+ #define for_each_pipe(__dev_priv, __p) \
+ 	for ((__p) = 0; (__p) < I915_MAX_PIPES; (__p)++) \
+-		for_each_if(RUNTIME_INFO(__dev_priv)->pipe_mask & BIT(__p))
++		for_each_if(DISPLAY_RUNTIME_INFO(__dev_priv)->pipe_mask & BIT(__p))
+ 
+ #define for_each_pipe_masked(__dev_priv, __p, __mask) \
+ 	for_each_pipe(__dev_priv, __p) \
+@@ -229,7 +229,7 @@ enum phy_fia {
+ 
+ #define for_each_cpu_transcoder(__dev_priv, __t) \
+ 	for ((__t) = 0; (__t) < I915_MAX_TRANSCODERS; (__t)++)	\
+-		for_each_if (RUNTIME_INFO(__dev_priv)->cpu_transcoder_mask & BIT(__t))
++		for_each_if (DISPLAY_RUNTIME_INFO(__dev_priv)->cpu_transcoder_mask & BIT(__t))
+ 
+ #define for_each_cpu_transcoder_masked(__dev_priv, __t, __mask) \
+ 	for_each_cpu_transcoder(__dev_priv, __t) \
+@@ -237,7 +237,7 @@ enum phy_fia {
+ 
+ #define for_each_sprite(__dev_priv, __p, __s)				\
+ 	for ((__s) = 0;							\
+-	     (__s) < RUNTIME_INFO(__dev_priv)->num_sprites[(__p)];	\
++	     (__s) < DISPLAY_RUNTIME_INFO(__dev_priv)->num_sprites[(__p)];	\
+ 	     (__s)++)
+ 
+ #define for_each_port(__port) \
+diff --git a/drivers/gpu/drm/i915/display/intel_display_device.c b/drivers/gpu/drm/i915/display/intel_display_device.c
+new file mode 100644
+index 0000000000000..8c57d48e8270f
+--- /dev/null
++++ b/drivers/gpu/drm/i915/display/intel_display_device.c
+@@ -0,0 +1,728 @@
++// SPDX-License-Identifier: MIT
++/*
++ * Copyright © 2023 Intel Corporation
++ */
++
++#include <drm/i915_pciids.h>
++#include <drm/drm_color_mgmt.h>
++
++#include "intel_display_device.h"
++#include "intel_display_power.h"
++#include "intel_display_reg_defs.h"
++#include "intel_fbc.h"
++
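++/* Fallback for devices with no (known) display: all-zero info. */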
++static const struct intel_display_device_info no_display = {};
++
++#define PIPE_A_OFFSET		0x70000
++#define PIPE_B_OFFSET		0x71000
++#define PIPE_C_OFFSET		0x72000
++#define PIPE_D_OFFSET		0x73000
++#define CHV_PIPE_C_OFFSET	0x74000
++/*
++ * There's actually no pipe EDP. Some pipe registers have
++ * simply shifted from the pipe to the transcoder, while
++ * keeping their original offset. Thus we need PIPE_EDP_OFFSET
++ * to access such registers in transcoder EDP.
++ */
++#define PIPE_EDP_OFFSET	0x7f000
++
++/* ICL DSI 0 and 1 */
++#define PIPE_DSI0_OFFSET	0x7b000
++#define PIPE_DSI1_OFFSET	0x7b800
++
++#define TRANSCODER_A_OFFSET 0x60000
++#define TRANSCODER_B_OFFSET 0x61000
++#define TRANSCODER_C_OFFSET 0x62000
++#define CHV_TRANSCODER_C_OFFSET 0x63000
++#define TRANSCODER_D_OFFSET 0x63000
++#define TRANSCODER_EDP_OFFSET 0x6f000
++#define TRANSCODER_DSI0_OFFSET	0x6b000
++#define TRANSCODER_DSI1_OFFSET	0x6b800
++
++#define CURSOR_A_OFFSET 0x70080
++#define CURSOR_B_OFFSET 0x700c0
++#define CHV_CURSOR_C_OFFSET 0x700e0
++#define IVB_CURSOR_B_OFFSET 0x71080
++#define IVB_CURSOR_C_OFFSET 0x72080
++#define TGL_CURSOR_D_OFFSET 0x73080
++
++#define I845_PIPE_OFFSETS \
++	.pipe_offsets = { \
++		[TRANSCODER_A] = PIPE_A_OFFSET,	\
++	}, \
++	.trans_offsets = { \
++		[TRANSCODER_A] = TRANSCODER_A_OFFSET, \
++	}
++
++#define I9XX_PIPE_OFFSETS \
++	.pipe_offsets = { \
++		[TRANSCODER_A] = PIPE_A_OFFSET,	\
++		[TRANSCODER_B] = PIPE_B_OFFSET, \
++	}, \
++	.trans_offsets = { \
++		[TRANSCODER_A] = TRANSCODER_A_OFFSET, \
++		[TRANSCODER_B] = TRANSCODER_B_OFFSET, \
++	}
++
++#define IVB_PIPE_OFFSETS \
++	.pipe_offsets = { \
++		[TRANSCODER_A] = PIPE_A_OFFSET,	\
++		[TRANSCODER_B] = PIPE_B_OFFSET, \
++		[TRANSCODER_C] = PIPE_C_OFFSET, \
++	}, \
++	.trans_offsets = { \
++		[TRANSCODER_A] = TRANSCODER_A_OFFSET, \
++		[TRANSCODER_B] = TRANSCODER_B_OFFSET, \
++		[TRANSCODER_C] = TRANSCODER_C_OFFSET, \
++	}
++
++#define HSW_PIPE_OFFSETS \
++	.pipe_offsets = { \
++		[TRANSCODER_A] = PIPE_A_OFFSET,	\
++		[TRANSCODER_B] = PIPE_B_OFFSET, \
++		[TRANSCODER_C] = PIPE_C_OFFSET, \
++		[TRANSCODER_EDP] = PIPE_EDP_OFFSET, \
++	}, \
++	.trans_offsets = { \
++		[TRANSCODER_A] = TRANSCODER_A_OFFSET, \
++		[TRANSCODER_B] = TRANSCODER_B_OFFSET, \
++		[TRANSCODER_C] = TRANSCODER_C_OFFSET, \
++		[TRANSCODER_EDP] = TRANSCODER_EDP_OFFSET, \
++	}
++
++#define CHV_PIPE_OFFSETS \
++	.pipe_offsets = { \
++		[TRANSCODER_A] = PIPE_A_OFFSET, \
++		[TRANSCODER_B] = PIPE_B_OFFSET, \
++		[TRANSCODER_C] = CHV_PIPE_C_OFFSET, \
++	}, \
++	.trans_offsets = { \
++		[TRANSCODER_A] = TRANSCODER_A_OFFSET, \
++		[TRANSCODER_B] = TRANSCODER_B_OFFSET, \
++		[TRANSCODER_C] = CHV_TRANSCODER_C_OFFSET, \
++	}
++
++#define I845_CURSOR_OFFSETS \
++	.cursor_offsets = { \
++		[PIPE_A] = CURSOR_A_OFFSET, \
++	}
++
++#define I9XX_CURSOR_OFFSETS \
++	.cursor_offsets = { \
++		[PIPE_A] = CURSOR_A_OFFSET, \
++		[PIPE_B] = CURSOR_B_OFFSET, \
++	}
++
++#define CHV_CURSOR_OFFSETS \
++	.cursor_offsets = { \
++		[PIPE_A] = CURSOR_A_OFFSET, \
++		[PIPE_B] = CURSOR_B_OFFSET, \
++		[PIPE_C] = CHV_CURSOR_C_OFFSET, \
++	}
++
++#define IVB_CURSOR_OFFSETS \
++	.cursor_offsets = { \
++		[PIPE_A] = CURSOR_A_OFFSET, \
++		[PIPE_B] = IVB_CURSOR_B_OFFSET, \
++		[PIPE_C] = IVB_CURSOR_C_OFFSET, \
++	}
++
++#define TGL_CURSOR_OFFSETS \
++	.cursor_offsets = { \
++		[PIPE_A] = CURSOR_A_OFFSET, \
++		[PIPE_B] = IVB_CURSOR_B_OFFSET, \
++		[PIPE_C] = IVB_CURSOR_C_OFFSET, \
++		[PIPE_D] = TGL_CURSOR_D_OFFSET, \
++	}
++
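++/*
++ * Per-platform gamma/degamma LUT geometry plus the DRM LUT validation
++ * flags (for drm_color_lut_check()) each platform requires.
++ */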
++#define I845_COLORS \
++	.color = { .gamma_lut_size = 256 }
++#define I9XX_COLORS \
++	.color = { .gamma_lut_size = 129, \
++		   .gamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING, \
++	}
++#define ILK_COLORS \
++	.color = { .gamma_lut_size = 1024 }
++#define IVB_COLORS \
++	.color = { .degamma_lut_size = 1024, .gamma_lut_size = 1024 }
++#define CHV_COLORS \
++	.color = { \
++		.degamma_lut_size = 65, .gamma_lut_size = 257, \
++		.degamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING, \
++		.gamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING, \
++	}
++#define GLK_COLORS \
++	.color = { \
++		.degamma_lut_size = 33, .gamma_lut_size = 1024, \
++		.degamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING | \
++				     DRM_COLOR_LUT_EQUAL_CHANNELS, \
++	}
++#define ICL_COLORS \
++	.color = { \
++		.degamma_lut_size = 33, .gamma_lut_size = 262145, \
++		.degamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING | \
++				     DRM_COLOR_LUT_EQUAL_CHANNELS, \
++		.gamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING, \
++	}
++
++#define I830_DISPLAY \
++	.has_overlay = 1, \
++	.cursor_needs_physical = 1, \
++	.overlay_needs_physical = 1, \
++	.has_gmch = 1, \
++	I9XX_PIPE_OFFSETS, \
++	I9XX_CURSOR_OFFSETS, \
++	I9XX_COLORS, \
++	\
++	.__runtime_defaults.ip.ver = 2, \
++	.__runtime_defaults.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B), \
++	.__runtime_defaults.cpu_transcoder_mask = \
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B)
++
++static const struct intel_display_device_info i830_display = {
++	I830_DISPLAY,
++};
++
++#define I845_DISPLAY \
++	.has_overlay = 1, \
++	.overlay_needs_physical = 1, \
++	.has_gmch = 1, \
++	I845_PIPE_OFFSETS, \
++	I845_CURSOR_OFFSETS, \
++	I845_COLORS, \
++	\
++	.__runtime_defaults.ip.ver = 2, \
++	.__runtime_defaults.pipe_mask = BIT(PIPE_A), \
++	.__runtime_defaults.cpu_transcoder_mask = BIT(TRANSCODER_A)
++
++static const struct intel_display_device_info i845_display = {
++	I845_DISPLAY,
++};
++
++static const struct intel_display_device_info i85x_display = {
++	I830_DISPLAY,
++
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A),
++};
++
++static const struct intel_display_device_info i865g_display = {
++	I845_DISPLAY,
++
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A),
++};
++
++#define GEN3_DISPLAY \
++	.has_gmch = 1, \
++	.has_overlay = 1, \
++	I9XX_PIPE_OFFSETS, \
++	I9XX_CURSOR_OFFSETS, \
++	\
++	.__runtime_defaults.ip.ver = 3, \
++	.__runtime_defaults.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B), \
++	.__runtime_defaults.cpu_transcoder_mask = \
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B)
++
++static const struct intel_display_device_info i915g_display = {
++	GEN3_DISPLAY,
++	I845_COLORS,
++	.cursor_needs_physical = 1,
++	.overlay_needs_physical = 1,
++};
++
++static const struct intel_display_device_info i915gm_display = {
++	GEN3_DISPLAY,
++	I9XX_COLORS,
++	.cursor_needs_physical = 1,
++	.overlay_needs_physical = 1,
++	.supports_tv = 1,
++
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A),
++};
++
++static const struct intel_display_device_info i945g_display = {
++	GEN3_DISPLAY,
++	I845_COLORS,
++	.has_hotplug = 1,
++	.cursor_needs_physical = 1,
++	.overlay_needs_physical = 1,
++};
++
++static const struct intel_display_device_info i945gm_display = {
++	GEN3_DISPLAY,
++	I9XX_COLORS,
++	.has_hotplug = 1,
++	.cursor_needs_physical = 1,
++	.overlay_needs_physical = 1,
++	.supports_tv = 1,
++
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A),
++};
++
++static const struct intel_display_device_info g33_display = {
++	GEN3_DISPLAY,
++	I845_COLORS,
++	.has_hotplug = 1,
++};
++
++static const struct intel_display_device_info pnv_display = {
++	GEN3_DISPLAY,
++	I9XX_COLORS,
++	.has_hotplug = 1,
++};
++
++#define GEN4_DISPLAY \
++	.has_hotplug = 1, \
++	.has_gmch = 1, \
++	I9XX_PIPE_OFFSETS, \
++	I9XX_CURSOR_OFFSETS, \
++	I9XX_COLORS, \
++	\
++	.__runtime_defaults.ip.ver = 4, \
++	.__runtime_defaults.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B), \
++	.__runtime_defaults.cpu_transcoder_mask = \
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B)
++
++static const struct intel_display_device_info i965g_display = {
++	GEN4_DISPLAY,
++	.has_overlay = 1,
++};
++
++static const struct intel_display_device_info i965gm_display = {
++	GEN4_DISPLAY,
++	.has_overlay = 1,
++	.supports_tv = 1,
++
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A),
++};
++
++static const struct intel_display_device_info g45_display = {
++	GEN4_DISPLAY,
++};
++
++static const struct intel_display_device_info gm45_display = {
++	GEN4_DISPLAY,
++	.supports_tv = 1,
++
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A),
++};
++
++#define ILK_DISPLAY \
++	.has_hotplug = 1, \
++	I9XX_PIPE_OFFSETS, \
++	I9XX_CURSOR_OFFSETS, \
++	ILK_COLORS, \
++	\
++	.__runtime_defaults.ip.ver = 5, \
++	.__runtime_defaults.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B), \
++	.__runtime_defaults.cpu_transcoder_mask = \
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B)
++
++static const struct intel_display_device_info ilk_d_display = {
++	ILK_DISPLAY,
++};
++
++static const struct intel_display_device_info ilk_m_display = {
++	ILK_DISPLAY,
++
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A),
++};
++
++static const struct intel_display_device_info snb_display = {
++	.has_hotplug = 1,
++	I9XX_PIPE_OFFSETS,
++	I9XX_CURSOR_OFFSETS,
++	ILK_COLORS,
++
++	.__runtime_defaults.ip.ver = 6,
++	.__runtime_defaults.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B),
++	.__runtime_defaults.cpu_transcoder_mask =
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B),
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A),
++};
++
++static const struct intel_display_device_info ivb_display = {
++	.has_hotplug = 1,
++	IVB_PIPE_OFFSETS,
++	IVB_CURSOR_OFFSETS,
++	IVB_COLORS,
++
++	.__runtime_defaults.ip.ver = 7,
++	.__runtime_defaults.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
++	.__runtime_defaults.cpu_transcoder_mask =
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | BIT(TRANSCODER_C),
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A),
++};
++
++static const struct intel_display_device_info vlv_display = {
++	.has_gmch = 1,
++	.has_hotplug = 1,
++	.mmio_offset = VLV_DISPLAY_BASE,
++	I9XX_PIPE_OFFSETS,
++	I9XX_CURSOR_OFFSETS,
++	I9XX_COLORS,
++
++	.__runtime_defaults.ip.ver = 7,
++	.__runtime_defaults.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B),
++	.__runtime_defaults.cpu_transcoder_mask =
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B),
++};
++
++static const struct intel_display_device_info hsw_display = {
++	.has_ddi = 1,
++	.has_dp_mst = 1,
++	.has_fpga_dbg = 1,
++	.has_hotplug = 1,
++	HSW_PIPE_OFFSETS,
++	IVB_CURSOR_OFFSETS,
++	IVB_COLORS,
++
++	.__runtime_defaults.ip.ver = 7,
++	.__runtime_defaults.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
++	.__runtime_defaults.cpu_transcoder_mask =
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B) |
++		BIT(TRANSCODER_C) | BIT(TRANSCODER_EDP),
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A),
++};
++
++static const struct intel_display_device_info bdw_display = {
++	.has_ddi = 1,
++	.has_dp_mst = 1,
++	.has_fpga_dbg = 1,
++	.has_hotplug = 1,
++	HSW_PIPE_OFFSETS,
++	IVB_CURSOR_OFFSETS,
++	IVB_COLORS,
++
++	.__runtime_defaults.ip.ver = 8,
++	.__runtime_defaults.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
++	.__runtime_defaults.cpu_transcoder_mask =
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B) |
++		BIT(TRANSCODER_C) | BIT(TRANSCODER_EDP),
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A),
++};
++
++static const struct intel_display_device_info chv_display = {
++	.has_hotplug = 1,
++	.has_gmch = 1,
++	.mmio_offset = VLV_DISPLAY_BASE,
++	CHV_PIPE_OFFSETS,
++	CHV_CURSOR_OFFSETS,
++	CHV_COLORS,
++
++	.__runtime_defaults.ip.ver = 8,
++	.__runtime_defaults.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
++	.__runtime_defaults.cpu_transcoder_mask =
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | BIT(TRANSCODER_C),
++};
++
++static const struct intel_display_device_info skl_display = {
++	.dbuf.size = 896 - 4, /* 4 blocks for bypass path allocation */
++	.dbuf.slice_mask = BIT(DBUF_S1),
++	.has_ddi = 1,
++	.has_dp_mst = 1,
++	.has_fpga_dbg = 1,
++	.has_hotplug = 1,
++	.has_ipc = 1,
++	.has_psr = 1,
++	.has_psr_hw_tracking = 1,
++	HSW_PIPE_OFFSETS,
++	IVB_CURSOR_OFFSETS,
++	IVB_COLORS,
++
++	.__runtime_defaults.ip.ver = 9,
++	.__runtime_defaults.has_dmc = 1,
++	.__runtime_defaults.has_hdcp = 1,
++	.__runtime_defaults.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
++	.__runtime_defaults.cpu_transcoder_mask =
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B) |
++		BIT(TRANSCODER_C) | BIT(TRANSCODER_EDP),
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A),
++};
++
++#define GEN9_LP_DISPLAY \
++	.dbuf.slice_mask = BIT(DBUF_S1), \
++	.has_dp_mst = 1, \
++	.has_ddi = 1, \
++	.has_fpga_dbg = 1, \
++	.has_hotplug = 1, \
++	.has_ipc = 1, \
++	.has_psr = 1, \
++	.has_psr_hw_tracking = 1, \
++	HSW_PIPE_OFFSETS, \
++	IVB_CURSOR_OFFSETS, \
++	IVB_COLORS, \
++	\
++	.__runtime_defaults.has_dmc = 1, \
++	.__runtime_defaults.has_hdcp = 1, \
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A), \
++	.__runtime_defaults.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C), \
++	.__runtime_defaults.cpu_transcoder_mask = \
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | \
++		BIT(TRANSCODER_C) | BIT(TRANSCODER_EDP) | \
++		BIT(TRANSCODER_DSI_A) | BIT(TRANSCODER_DSI_C)
++
++static const struct intel_display_device_info bxt_display = {
++	GEN9_LP_DISPLAY,
++	.dbuf.size = 512 - 4, /* 4 blocks for bypass path allocation */
++
++	.__runtime_defaults.ip.ver = 9,
++};
++
++static const struct intel_display_device_info glk_display = {
++	GEN9_LP_DISPLAY,
++	.dbuf.size = 1024 - 4, /* 4 blocks for bypass path allocation */
++	GLK_COLORS,
++
++	.__runtime_defaults.ip.ver = 10,
++};
++
++static const struct intel_display_device_info gen11_display = {
++	.abox_mask = BIT(0),
++	.dbuf.size = 2048,
++	.dbuf.slice_mask = BIT(DBUF_S1) | BIT(DBUF_S2),
++	.has_ddi = 1,
++	.has_dp_mst = 1,
++	.has_fpga_dbg = 1,
++	.has_hotplug = 1,
++	.has_ipc = 1,
++	.has_psr = 1,
++	.has_psr_hw_tracking = 1,
++	.pipe_offsets = {
++		[TRANSCODER_A] = PIPE_A_OFFSET,
++		[TRANSCODER_B] = PIPE_B_OFFSET,
++		[TRANSCODER_C] = PIPE_C_OFFSET,
++		[TRANSCODER_EDP] = PIPE_EDP_OFFSET,
++		[TRANSCODER_DSI_0] = PIPE_DSI0_OFFSET,
++		[TRANSCODER_DSI_1] = PIPE_DSI1_OFFSET,
++	},
++	.trans_offsets = {
++		[TRANSCODER_A] = TRANSCODER_A_OFFSET,
++		[TRANSCODER_B] = TRANSCODER_B_OFFSET,
++		[TRANSCODER_C] = TRANSCODER_C_OFFSET,
++		[TRANSCODER_EDP] = TRANSCODER_EDP_OFFSET,
++		[TRANSCODER_DSI_0] = TRANSCODER_DSI0_OFFSET,
++		[TRANSCODER_DSI_1] = TRANSCODER_DSI1_OFFSET,
++	},
++	IVB_CURSOR_OFFSETS,
++	ICL_COLORS,
++
++	.__runtime_defaults.ip.ver = 11,
++	.__runtime_defaults.has_dmc = 1,
++	.__runtime_defaults.has_dsc = 1,
++	.__runtime_defaults.has_hdcp = 1,
++	.__runtime_defaults.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
++	.__runtime_defaults.cpu_transcoder_mask =
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B) |
++		BIT(TRANSCODER_C) | BIT(TRANSCODER_EDP) |
++		BIT(TRANSCODER_DSI_0) | BIT(TRANSCODER_DSI_1),
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A),
++};
++
++#define XE_D_DISPLAY \
++	.abox_mask = GENMASK(2, 1), \
++	.dbuf.size = 2048, \
++	.dbuf.slice_mask = BIT(DBUF_S1) | BIT(DBUF_S2), \
++	.has_ddi = 1, \
++	.has_dp_mst = 1, \
++	.has_dsb = 1, \
++	.has_fpga_dbg = 1, \
++	.has_hotplug = 1, \
++	.has_ipc = 1, \
++	.has_psr = 1, \
++	.has_psr_hw_tracking = 1, \
++	.pipe_offsets = { \
++		[TRANSCODER_A] = PIPE_A_OFFSET, \
++		[TRANSCODER_B] = PIPE_B_OFFSET, \
++		[TRANSCODER_C] = PIPE_C_OFFSET, \
++		[TRANSCODER_D] = PIPE_D_OFFSET, \
++		[TRANSCODER_DSI_0] = PIPE_DSI0_OFFSET, \
++		[TRANSCODER_DSI_1] = PIPE_DSI1_OFFSET, \
++	}, \
++	.trans_offsets = { \
++		[TRANSCODER_A] = TRANSCODER_A_OFFSET, \
++		[TRANSCODER_B] = TRANSCODER_B_OFFSET, \
++		[TRANSCODER_C] = TRANSCODER_C_OFFSET, \
++		[TRANSCODER_D] = TRANSCODER_D_OFFSET, \
++		[TRANSCODER_DSI_0] = TRANSCODER_DSI0_OFFSET, \
++		[TRANSCODER_DSI_1] = TRANSCODER_DSI1_OFFSET, \
++	}, \
++	TGL_CURSOR_OFFSETS, \
++	ICL_COLORS, \
++	\
++	.__runtime_defaults.ip.ver = 12, \
++	.__runtime_defaults.has_dmc = 1, \
++	.__runtime_defaults.has_dsc = 1, \
++	.__runtime_defaults.has_hdcp = 1, \
++	.__runtime_defaults.pipe_mask = \
++		BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D), \
++	.__runtime_defaults.cpu_transcoder_mask = \
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | \
++		BIT(TRANSCODER_C) | BIT(TRANSCODER_D) | \
++		BIT(TRANSCODER_DSI_0) | BIT(TRANSCODER_DSI_1), \
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A)
++
++static const struct intel_display_device_info tgl_display = {
++	XE_D_DISPLAY,
++};
++
++static const struct intel_display_device_info rkl_display = {
++	XE_D_DISPLAY,
++	.abox_mask = BIT(0),
++	.has_hti = 1,
++	.has_psr_hw_tracking = 0,
++
++	.__runtime_defaults.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
++	.__runtime_defaults.cpu_transcoder_mask =
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | BIT(TRANSCODER_C),
++};
++
++static const struct intel_display_device_info adl_s_display = {
++	XE_D_DISPLAY,
++	.has_hti = 1,
++	.has_psr_hw_tracking = 0,
++};
++
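++/* Baseline feature set shared by Xe LPD (display ver 13) and later IP. */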
++#define XE_LPD_FEATURES \
++	.abox_mask = GENMASK(1, 0),						\
++	.color = {								\
++		.degamma_lut_size = 129, .gamma_lut_size = 1024,		\
++		.degamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING |		\
++		DRM_COLOR_LUT_EQUAL_CHANNELS,					\
++	},									\
++	.dbuf.size = 4096,							\
++	.dbuf.slice_mask = BIT(DBUF_S1) | BIT(DBUF_S2) | BIT(DBUF_S3) |		\
++		BIT(DBUF_S4),							\
++	.has_ddi = 1,								\
++	.has_dp_mst = 1,							\
++	.has_dsb = 1,								\
++	.has_fpga_dbg = 1,							\
++	.has_hotplug = 1,							\
++	.has_ipc = 1,								\
++	.has_psr = 1,								\
++	.pipe_offsets = {							\
++		[TRANSCODER_A] = PIPE_A_OFFSET,					\
++		[TRANSCODER_B] = PIPE_B_OFFSET,					\
++		[TRANSCODER_C] = PIPE_C_OFFSET,					\
++		[TRANSCODER_D] = PIPE_D_OFFSET,					\
++		[TRANSCODER_DSI_0] = PIPE_DSI0_OFFSET,				\
++		[TRANSCODER_DSI_1] = PIPE_DSI1_OFFSET,				\
++	},									\
++	.trans_offsets = {							\
++		[TRANSCODER_A] = TRANSCODER_A_OFFSET,				\
++		[TRANSCODER_B] = TRANSCODER_B_OFFSET,				\
++		[TRANSCODER_C] = TRANSCODER_C_OFFSET,				\
++		[TRANSCODER_D] = TRANSCODER_D_OFFSET,				\
++		[TRANSCODER_DSI_0] = TRANSCODER_DSI0_OFFSET,			\
++		[TRANSCODER_DSI_1] = TRANSCODER_DSI1_OFFSET,			\
++	},									\
++	TGL_CURSOR_OFFSETS,							\
++										\
++	.__runtime_defaults.ip.ver = 13,					\
++	.__runtime_defaults.has_dmc = 1,					\
++	.__runtime_defaults.has_dsc = 1,					\
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A),			\
++	.__runtime_defaults.has_hdcp = 1,					\
++	.__runtime_defaults.pipe_mask =						\
++		BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D)
++
++static const struct intel_display_device_info xe_lpd_display = {
++	XE_LPD_FEATURES,
++	.has_cdclk_crawl = 1,
++	.has_psr_hw_tracking = 0,
++
++	.__runtime_defaults.cpu_transcoder_mask =
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B) |
++		BIT(TRANSCODER_C) | BIT(TRANSCODER_D) |
++		BIT(TRANSCODER_DSI_0) | BIT(TRANSCODER_DSI_1),
++};
++
++static const struct intel_display_device_info xe_hpd_display = {
++	XE_LPD_FEATURES,
++	.has_cdclk_squash = 1,
++
++	.__runtime_defaults.cpu_transcoder_mask =
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B) |
++		BIT(TRANSCODER_C) | BIT(TRANSCODER_D),
++};
++
++static const struct intel_display_device_info xe_lpdp_display = {
++	XE_LPD_FEATURES,
++	.has_cdclk_crawl = 1,
++	.has_cdclk_squash = 1,
++
++	.__runtime_defaults.ip.ver = 14,
++	.__runtime_defaults.fbc_mask = BIT(INTEL_FBC_A) | BIT(INTEL_FBC_B),
++	.__runtime_defaults.cpu_transcoder_mask =
++		BIT(TRANSCODER_A) | BIT(TRANSCODER_B) |
++		BIT(TRANSCODER_C) | BIT(TRANSCODER_D),
++};
++
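++/*
++ * Re-point the i915_pciids.h helpers so the INTEL_*_IDS() tables below
++ * expand to { devid, display info } pairs instead of pci_device_id
++ * entries.
++ */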
++#undef INTEL_VGA_DEVICE
++#undef INTEL_QUANTA_VGA_DEVICE
++#define INTEL_VGA_DEVICE(id, info) { id, info }
++#define INTEL_QUANTA_VGA_DEVICE(info) { 0x16a, info }
++
++static const struct {
++	u32 devid;
++	const struct intel_display_device_info *info;
++} intel_display_ids[] = {
++	INTEL_I830_IDS(&i830_display),
++	INTEL_I845G_IDS(&i845_display),
++	INTEL_I85X_IDS(&i85x_display),
++	INTEL_I865G_IDS(&i865g_display),
++	INTEL_I915G_IDS(&i915g_display),
++	INTEL_I915GM_IDS(&i915gm_display),
++	INTEL_I945G_IDS(&i945g_display),
++	INTEL_I945GM_IDS(&i945gm_display),
++	INTEL_I965G_IDS(&i965g_display),
++	INTEL_G33_IDS(&g33_display),
++	INTEL_I965GM_IDS(&i965gm_display),
++	INTEL_GM45_IDS(&gm45_display),
++	INTEL_G45_IDS(&g45_display),
++	INTEL_PINEVIEW_G_IDS(&pnv_display),
++	INTEL_PINEVIEW_M_IDS(&pnv_display),
++	INTEL_IRONLAKE_D_IDS(&ilk_d_display),
++	INTEL_IRONLAKE_M_IDS(&ilk_m_display),
++	INTEL_SNB_D_IDS(&snb_display),
++	INTEL_SNB_M_IDS(&snb_display),
++	INTEL_IVB_Q_IDS(NULL),		/* must be first IVB in list */
++	INTEL_IVB_M_IDS(&ivb_display),
++	INTEL_IVB_D_IDS(&ivb_display),
++	INTEL_HSW_IDS(&hsw_display),
++	INTEL_VLV_IDS(&vlv_display),
++	INTEL_BDW_IDS(&bdw_display),
++	INTEL_CHV_IDS(&chv_display),
++	INTEL_SKL_IDS(&skl_display),
++	INTEL_BXT_IDS(&bxt_display),
++	INTEL_GLK_IDS(&glk_display),
++	INTEL_KBL_IDS(&skl_display),
++	INTEL_CFL_IDS(&skl_display),
++	INTEL_ICL_11_IDS(&gen11_display),
++	INTEL_EHL_IDS(&gen11_display),
++	INTEL_JSL_IDS(&gen11_display),
++	INTEL_TGL_12_IDS(&tgl_display),
++	INTEL_DG1_IDS(&tgl_display),
++	INTEL_RKL_IDS(&rkl_display),
++	INTEL_ADLS_IDS(&adl_s_display),
++	INTEL_RPLS_IDS(&adl_s_display),
++	INTEL_ADLP_IDS(&xe_lpd_display),
++	INTEL_ADLN_IDS(&xe_lpd_display),
++	INTEL_RPLP_IDS(&xe_lpd_display),
++	INTEL_DG2_IDS(&xe_hpd_display),
++
++	/* FIXME: Replace this with a GMD_ID lookup */
++	INTEL_MTL_IDS(&xe_lpdp_display),
++};
++
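++/* Look up display info by PCI device ID; unknown IDs map to no_display. */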
++const struct intel_display_device_info *
++intel_display_device_probe(u16 pci_devid)
++{
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(intel_display_ids); i++) {
++		if (intel_display_ids[i].devid == pci_devid)
++			return intel_display_ids[i].info;
++	}
++
++	return &no_display;
++}
+diff --git a/drivers/gpu/drm/i915/display/intel_display_device.h b/drivers/gpu/drm/i915/display/intel_display_device.h
+new file mode 100644
+index 0000000000000..1f7d08b3ad6b1
+--- /dev/null
++++ b/drivers/gpu/drm/i915/display/intel_display_device.h
+@@ -0,0 +1,86 @@
++/* SPDX-License-Identifier: MIT */
++/*
++ * Copyright © 2023 Intel Corporation
++ */
++
++#ifndef __INTEL_DISPLAY_DEVICE_H__
++#define __INTEL_DISPLAY_DEVICE_H__
++
++#include <linux/types.h>
++
++#include "display/intel_display_limits.h"
++
++#define DEV_INFO_DISPLAY_FOR_EACH_FLAG(func) \
++	/* Keep in alphabetical order */ \
++	func(cursor_needs_physical); \
++	func(has_cdclk_crawl); \
++	func(has_cdclk_squash); \
++	func(has_ddi); \
++	func(has_dp_mst); \
++	func(has_dsb); \
++	func(has_fpga_dbg); \
++	func(has_gmch); \
++	func(has_hotplug); \
++	func(has_hti); \
++	func(has_ipc); \
++	func(has_overlay); \
++	func(has_psr); \
++	func(has_psr_hw_tracking); \
++	func(overlay_needs_physical); \
++	func(supports_tv);
++
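++/*
++ * Display info that may be refined at probe/runtime; seeded from the
++ * matched device's __runtime_defaults.
++ */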
++struct intel_display_runtime_info {
++	struct {
++		u16 ver;
++		u16 rel;
++		u16 step;
++	} ip;
++
++	u8 pipe_mask;
++	u8 cpu_transcoder_mask;
++
++	u8 num_sprites[I915_MAX_PIPES];
++	u8 num_scalers[I915_MAX_PIPES];
++
++	u8 fbc_mask;
++
++	bool has_hdcp;
++	bool has_dmc;
++	bool has_dsc;
++};
++
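++/* Immutable display hardware description, selected by PCI device ID. */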
++struct intel_display_device_info {
++	/* Initial runtime info. */
++	const struct intel_display_runtime_info __runtime_defaults;
++
++	u8 abox_mask;
++
++	struct {
++		u16 size; /* in blocks */
++		u8 slice_mask;
++	} dbuf;
++
++#define DEFINE_FLAG(name) u8 name:1
++	DEV_INFO_DISPLAY_FOR_EACH_FLAG(DEFINE_FLAG);
++#undef DEFINE_FLAG
++
++	/* Global register offset for the display engine */
++	u32 mmio_offset;
++
++	/* Register offsets for the various display pipes and transcoders */
++	u32 pipe_offsets[I915_MAX_TRANSCODERS];
++	u32 trans_offsets[I915_MAX_TRANSCODERS];
++	u32 cursor_offsets[I915_MAX_PIPES];
++
++	struct {
++		u32 degamma_lut_size;
++		u32 gamma_lut_size;
++		u32 degamma_lut_tests;
++		u32 gamma_lut_tests;
++	} color;
++};
++
++const struct intel_display_device_info *
++intel_display_device_probe(u16 pci_devid);
++
++#endif
+diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
+index 7c9f4288329ed..5f7deaa23b218 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_power.c
++++ b/drivers/gpu/drm/i915/display/intel_display_power.c
+@@ -1052,7 +1052,7 @@ void gen9_dbuf_slices_update(struct drm_i915_private *dev_priv,
+ 			     u8 req_slices)
+ {
+ 	struct i915_power_domains *power_domains = &dev_priv->display.power.domains;
+-	u8 slice_mask = INTEL_INFO(dev_priv)->display.dbuf.slice_mask;
++	u8 slice_mask = DISPLAY_INFO(dev_priv)->dbuf.slice_mask;
+ 	enum dbuf_slice slice;
+ 
+ 	drm_WARN(&dev_priv->drm, req_slices & ~slice_mask,
+@@ -1112,7 +1112,7 @@ static void gen12_dbuf_slices_config(struct drm_i915_private *dev_priv)
+ 
+ static void icl_mbus_init(struct drm_i915_private *dev_priv)
+ {
+-	unsigned long abox_regs = INTEL_INFO(dev_priv)->display.abox_mask;
++	unsigned long abox_regs = DISPLAY_INFO(dev_priv)->abox_mask;
+ 	u32 mask, val, i;
+ 
+ 	if (IS_ALDERLAKE_P(dev_priv) || DISPLAY_VER(dev_priv) >= 14)
+@@ -1558,7 +1558,7 @@ static void tgl_bw_buddy_init(struct drm_i915_private *dev_priv)
+ 	enum intel_dram_type type = dev_priv->dram_info.type;
+ 	u8 num_channels = dev_priv->dram_info.num_channels;
+ 	const struct buddy_page_mask *table;
+-	unsigned long abox_mask = INTEL_INFO(dev_priv)->display.abox_mask;
++	unsigned long abox_mask = DISPLAY_INFO(dev_priv)->abox_mask;
+ 	int config, i;
+ 
+ 	/* BW_BUDDY registers are not used on dgpu's beyond DG1 */
+diff --git a/drivers/gpu/drm/i915/display/intel_display_reg_defs.h b/drivers/gpu/drm/i915/display/intel_display_reg_defs.h
+index 755c1ea8225c5..2f07b7afa3bfe 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_reg_defs.h
++++ b/drivers/gpu/drm/i915/display/intel_display_reg_defs.h
+@@ -8,7 +8,7 @@
+ 
+ #include "i915_reg_defs.h"
+ 
+-#define DISPLAY_MMIO_BASE(dev_priv)	(INTEL_INFO(dev_priv)->display.mmio_offset)
++#define DISPLAY_MMIO_BASE(dev_priv)	(DISPLAY_INFO(dev_priv)->mmio_offset)
+ 
+ #define VLV_DISPLAY_BASE		0x180000
+ 
+@@ -36,14 +36,14 @@
+  * Device info offset array based helpers for groups of registers with unevenly
+  * spaced base offsets.
+  */
+-#define _MMIO_PIPE2(pipe, reg)		_MMIO(INTEL_INFO(dev_priv)->display.pipe_offsets[(pipe)] - \
+-					      INTEL_INFO(dev_priv)->display.pipe_offsets[PIPE_A] + \
++#define _MMIO_PIPE2(pipe, reg)		_MMIO(DISPLAY_INFO(dev_priv)->pipe_offsets[(pipe)] - \
++					      DISPLAY_INFO(dev_priv)->pipe_offsets[PIPE_A] + \
+ 					      DISPLAY_MMIO_BASE(dev_priv) + (reg))
+-#define _MMIO_TRANS2(tran, reg)		_MMIO(INTEL_INFO(dev_priv)->display.trans_offsets[(tran)] - \
+-					      INTEL_INFO(dev_priv)->display.trans_offsets[TRANSCODER_A] + \
++#define _MMIO_TRANS2(tran, reg)		_MMIO(DISPLAY_INFO(dev_priv)->trans_offsets[(tran)] - \
++					      DISPLAY_INFO(dev_priv)->trans_offsets[TRANSCODER_A] + \
+ 					      DISPLAY_MMIO_BASE(dev_priv) + (reg))
+-#define _MMIO_CURSOR2(pipe, reg)	_MMIO(INTEL_INFO(dev_priv)->display.cursor_offsets[(pipe)] - \
+-					      INTEL_INFO(dev_priv)->display.cursor_offsets[PIPE_A] + \
++#define _MMIO_CURSOR2(pipe, reg)	_MMIO(DISPLAY_INFO(dev_priv)->cursor_offsets[(pipe)] - \
++					      DISPLAY_INFO(dev_priv)->cursor_offsets[PIPE_A] + \
+ 					      DISPLAY_MMIO_BASE(dev_priv) + (reg))
+ 
+ #endif /* __INTEL_DISPLAY_REG_DEFS_H__ */
+diff --git a/drivers/gpu/drm/i915/display/intel_fb_pin.c b/drivers/gpu/drm/i915/display/intel_fb_pin.c
+index 1aca7552a85d0..fffd568070d41 100644
+--- a/drivers/gpu/drm/i915/display/intel_fb_pin.c
++++ b/drivers/gpu/drm/i915/display/intel_fb_pin.c
+@@ -243,7 +243,7 @@ int intel_plane_pin_fb(struct intel_plane_state *plane_state)
+ 	struct i915_vma *vma;
+ 	bool phys_cursor =
+ 		plane->id == PLANE_CURSOR &&
+-		INTEL_INFO(dev_priv)->display.cursor_needs_physical;
++		DISPLAY_INFO(dev_priv)->cursor_needs_physical;
+ 
+ 	if (!intel_fb_uses_dpt(fb)) {
+ 		vma = intel_pin_and_fence_fb_obj(fb, phys_cursor,
+diff --git a/drivers/gpu/drm/i915/display/intel_fbc.c b/drivers/gpu/drm/i915/display/intel_fbc.c
+index b507ff944864e..61914a1fe58df 100644
+--- a/drivers/gpu/drm/i915/display/intel_fbc.c
++++ b/drivers/gpu/drm/i915/display/intel_fbc.c
+@@ -55,7 +55,7 @@
+ 
+ #define for_each_fbc_id(__dev_priv, __fbc_id) \
+ 	for ((__fbc_id) = INTEL_FBC_A; (__fbc_id) < I915_MAX_FBCS; (__fbc_id)++) \
+-		for_each_if(RUNTIME_INFO(__dev_priv)->fbc_mask & BIT(__fbc_id))
++		for_each_if(DISPLAY_RUNTIME_INFO(__dev_priv)->fbc_mask & BIT(__fbc_id))
+ 
+ #define for_each_intel_fbc(__dev_priv, __fbc, __fbc_id) \
+ 	for_each_fbc_id((__dev_priv), (__fbc_id)) \
+@@ -1707,10 +1707,10 @@ void intel_fbc_init(struct drm_i915_private *i915)
+ 	enum intel_fbc_id fbc_id;
+ 
+ 	if (!drm_mm_initialized(&i915->mm.stolen))
+-		RUNTIME_INFO(i915)->fbc_mask = 0;
++		DISPLAY_RUNTIME_INFO(i915)->fbc_mask = 0;
+ 
+ 	if (need_fbc_vtd_wa(i915))
+-		RUNTIME_INFO(i915)->fbc_mask = 0;
++		DISPLAY_RUNTIME_INFO(i915)->fbc_mask = 0;
+ 
+ 	i915->params.enable_fbc = intel_sanitize_fbc_option(i915);
+ 	drm_dbg_kms(&i915->drm, "Sanitized enable_fbc value: %d\n",
+diff --git a/drivers/gpu/drm/i915/display/intel_hdcp.c b/drivers/gpu/drm/i915/display/intel_hdcp.c
+index b183efab04a1d..ac46350074df2 100644
+--- a/drivers/gpu/drm/i915/display/intel_hdcp.c
++++ b/drivers/gpu/drm/i915/display/intel_hdcp.c
+@@ -1141,7 +1141,7 @@ static void intel_hdcp_prop_work(struct work_struct *work)
+ 
+ bool is_hdcp_supported(struct drm_i915_private *dev_priv, enum port port)
+ {
+-	return RUNTIME_INFO(dev_priv)->has_hdcp &&
++	return DISPLAY_RUNTIME_INFO(dev_priv)->has_hdcp &&
+ 		(DISPLAY_VER(dev_priv) >= 12 || port < PORT_E);
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_hti.c b/drivers/gpu/drm/i915/display/intel_hti.c
+index c518efebdf779..a92d008d4e6e5 100644
+--- a/drivers/gpu/drm/i915/display/intel_hti.c
++++ b/drivers/gpu/drm/i915/display/intel_hti.c
+@@ -15,7 +15,7 @@ void intel_hti_init(struct drm_i915_private *i915)
+ 	 * If the platform has HTI, we need to find out whether it has reserved
+ 	 * any display resources before we create our display outputs.
+ 	 */
+-	if (INTEL_INFO(i915)->display.has_hti)
++	if (DISPLAY_INFO(i915)->has_hti)
+ 		i915->display.hti.state = intel_de_read(i915, HDPORT_STATE);
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
+index 6badfff2b4a28..b7cbc780e672f 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr.c
++++ b/drivers/gpu/drm/i915/display/intel_psr.c
+@@ -851,9 +851,9 @@ static bool _compute_psr2_wake_times(struct intel_dp *intel_dp,
+ 	}
+ 
+ 	io_wake_lines = intel_usecs_to_scanlines(
+-		&crtc_state->uapi.adjusted_mode, io_wake_time);
++		&crtc_state->hw.adjusted_mode, io_wake_time);
+ 	fast_wake_lines = intel_usecs_to_scanlines(
+-		&crtc_state->uapi.adjusted_mode, fast_wake_time);
++		&crtc_state->hw.adjusted_mode, fast_wake_time);
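++	/* Scanline conversion uses the hw adjusted mode rather than uapi. */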
+ 
+ 	if (io_wake_lines > max_wake_lines ||
+ 	    fast_wake_lines > max_wake_lines)
+diff --git a/drivers/gpu/drm/i915/display/intel_psr_regs.h b/drivers/gpu/drm/i915/display/intel_psr_regs.h
+index 958d8cabc44b5..5e3fe23ef8eb2 100644
+--- a/drivers/gpu/drm/i915/display/intel_psr_regs.h
++++ b/drivers/gpu/drm/i915/display/intel_psr_regs.h
+@@ -75,7 +75,7 @@
+ 
+ #define _SRD_AUX_DATA_A				0x60814
+ #define _SRD_AUX_DATA_EDP			0x6f814
+-#define EDP_PSR_AUX_DATA(tran, i)		_MMIO_TRANS2(tran, _SRD_AUX_DATA_A + (i) + 4) /* 5 registers */
++#define EDP_PSR_AUX_DATA(tran, i)		_MMIO_TRANS2(tran, _SRD_AUX_DATA_A + (i) * 4) /* 5 registers */
+ 
+ #define _SRD_STATUS_A				0x60840
+ #define _SRD_STATUS_EDP				0x6f840
+diff --git a/drivers/gpu/drm/i915/display/skl_universal_plane.c b/drivers/gpu/drm/i915/display/skl_universal_plane.c
+index 8ea0598a5a07e..5df7b02483629 100644
+--- a/drivers/gpu/drm/i915/display/skl_universal_plane.c
++++ b/drivers/gpu/drm/i915/display/skl_universal_plane.c
+@@ -1936,7 +1936,7 @@ static enum intel_fbc_id skl_fbc_id_for_pipe(enum pipe pipe)
+ static bool skl_plane_has_fbc(struct drm_i915_private *dev_priv,
+ 			      enum intel_fbc_id fbc_id, enum plane_id plane_id)
+ {
+-	if ((RUNTIME_INFO(dev_priv)->fbc_mask & BIT(fbc_id)) == 0)
++	if ((DISPLAY_RUNTIME_INFO(dev_priv)->fbc_mask & BIT(fbc_id)) == 0)
+ 		return false;
+ 
+ 	return plane_id == PLANE_PRIMARY;
+diff --git a/drivers/gpu/drm/i915/display/skl_watermark.c b/drivers/gpu/drm/i915/display/skl_watermark.c
+index 1c7e6468f3e34..d1245c847f1cb 100644
+--- a/drivers/gpu/drm/i915/display/skl_watermark.c
++++ b/drivers/gpu/drm/i915/display/skl_watermark.c
+@@ -507,8 +507,8 @@ static u16 skl_ddb_entry_init(struct skl_ddb_entry *entry,
+ 
+ static int intel_dbuf_slice_size(struct drm_i915_private *i915)
+ {
+-	return INTEL_INFO(i915)->display.dbuf.size /
+-		hweight8(INTEL_INFO(i915)->display.dbuf.slice_mask);
++	return DISPLAY_INFO(i915)->dbuf.size /
++		hweight8(DISPLAY_INFO(i915)->dbuf.slice_mask);
+ }
+ 
+ static void
+@@ -527,7 +527,7 @@ skl_ddb_entry_for_slices(struct drm_i915_private *i915, u8 slice_mask,
+ 	ddb->end = fls(slice_mask) * slice_size;
+ 
+ 	WARN_ON(ddb->start >= ddb->end);
+-	WARN_ON(ddb->end > INTEL_INFO(i915)->display.dbuf.size);
++	WARN_ON(ddb->end > DISPLAY_INFO(i915)->dbuf.size);
+ }
+ 
+ static unsigned int mbus_ddb_offset(struct drm_i915_private *i915, u8 slice_mask)
+@@ -2625,7 +2625,7 @@ skl_compute_ddb(struct intel_atomic_state *state)
+ 			    "Enabled dbuf slices 0x%x -> 0x%x (total dbuf slices 0x%x), mbus joined? %s->%s\n",
+ 			    old_dbuf_state->enabled_slices,
+ 			    new_dbuf_state->enabled_slices,
+-			    INTEL_INFO(i915)->display.dbuf.slice_mask,
++			    DISPLAY_INFO(i915)->dbuf.slice_mask,
+ 			    str_yes_no(old_dbuf_state->joined_mbus),
+ 			    str_yes_no(new_dbuf_state->joined_mbus));
+ 	}
+diff --git a/drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.c
+index 28f27091cd3b7..ee2b44f896a27 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.c
++++ b/drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.c
+@@ -451,6 +451,33 @@ static ssize_t punit_req_freq_mhz_show(struct kobject *kobj,
+ 	return sysfs_emit(buff, "%u\n", preq);
+ }
+ 
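++/*
++ * sysfs show/store for the SLPC "ignore efficient frequency" knob:
++ * show reports the cached value, store forwards the parsed value to
++ * GuC via intel_guc_slpc_set_ignore_eff_freq().
++ */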
++static ssize_t slpc_ignore_eff_freq_show(struct kobject *kobj,
++					 struct kobj_attribute *attr,
++					 char *buff)
++{
++	struct intel_gt *gt = intel_gt_sysfs_get_drvdata(kobj, attr->attr.name);
++	struct intel_guc_slpc *slpc = &gt->uc.guc.slpc;
++
++	return sysfs_emit(buff, "%u\n", slpc->ignore_eff_freq);
++}
++
++static ssize_t slpc_ignore_eff_freq_store(struct kobject *kobj,
++					  struct kobj_attribute *attr,
++					  const char *buff, size_t count)
++{
++	struct intel_gt *gt = intel_gt_sysfs_get_drvdata(kobj, attr->attr.name);
++	struct intel_guc_slpc *slpc = &gt->uc.guc.slpc;
++	int err;
++	u32 val;
++
++	err = kstrtou32(buff, 0, &val);
++	if (err)
++		return err;
++
++	err = intel_guc_slpc_set_ignore_eff_freq(slpc, val);
++	return err ?: count;
++}
++
+ struct intel_gt_bool_throttle_attr {
+ 	struct attribute attr;
+ 	ssize_t (*show)(struct kobject *kobj, struct kobj_attribute *attr,
+@@ -663,6 +690,8 @@ static struct kobj_attribute attr_media_freq_factor_scale =
+ INTEL_GT_ATTR_RO(media_RP0_freq_mhz);
+ INTEL_GT_ATTR_RO(media_RPn_freq_mhz);
+ 
++INTEL_GT_ATTR_RW(slpc_ignore_eff_freq);
++
+ static const struct attribute *media_perf_power_attrs[] = {
+ 	&attr_media_freq_factor.attr,
+ 	&attr_media_freq_factor_scale.attr,
+@@ -744,6 +773,12 @@ void intel_gt_sysfs_pm_init(struct intel_gt *gt, struct kobject *kobj)
+ 	if (ret)
+ 		gt_warn(gt, "failed to create punit_req_freq_mhz sysfs (%pe)", ERR_PTR(ret));
+ 
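++	/* The ignore_eff_freq knob is only meaningful with GuC SLPC. */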
++	if (intel_uc_uses_guc_slpc(&gt->uc)) {
++		ret = sysfs_create_file(kobj, &attr_slpc_ignore_eff_freq.attr);
++		if (ret)
++			gt_warn(gt, "failed to create ignore_eff_freq sysfs (%pe)", ERR_PTR(ret));
++	}
++
+ 	if (i915_mmio_reg_valid(intel_gt_perf_limit_reasons_reg(gt))) {
+ 		ret = sysfs_create_files(kobj, throttle_reason_attrs);
+ 		if (ret)
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
+index 026d73855f36c..cc18e8f664864 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
+@@ -277,6 +277,7 @@ int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
+ 
+ 	slpc->max_freq_softlimit = 0;
+ 	slpc->min_freq_softlimit = 0;
++	slpc->ignore_eff_freq = false;
+ 	slpc->min_is_rpmax = false;
+ 
+ 	slpc->boost_freq = 0;
+@@ -457,6 +458,29 @@ int intel_guc_slpc_get_max_freq(struct intel_guc_slpc *slpc, u32 *val)
+ 	return ret;
+ }
+ 
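++/*
++ * Program SLPC_PARAM_IGNORE_EFFICIENT_FREQUENCY under the slpc lock
++ * with a runtime PM wakeref held; cache the value so it can be
++ * reapplied on the next SLPC enable.
++ */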
++int intel_guc_slpc_set_ignore_eff_freq(struct intel_guc_slpc *slpc, bool val)
++{
++	struct drm_i915_private *i915 = slpc_to_i915(slpc);
++	intel_wakeref_t wakeref;
++	int ret;
++
++	mutex_lock(&slpc->lock);
++	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
++
++	ret = slpc_set_param(slpc,
++			     SLPC_PARAM_IGNORE_EFFICIENT_FREQUENCY,
++			     val);
++	if (ret)
++		guc_probe_error(slpc_to_guc(slpc), "Failed to set efficient freq(%d): %pe\n",
++				val, ERR_PTR(ret));
++	else
++		slpc->ignore_eff_freq = val;
++
++	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
++	mutex_unlock(&slpc->lock);
++	return ret;
++}
++
+ /**
+  * intel_guc_slpc_set_min_freq() - Set min frequency limit for SLPC.
+  * @slpc: pointer to intel_guc_slpc.
+@@ -482,16 +506,6 @@ int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val)
+ 	mutex_lock(&slpc->lock);
+ 	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
+ 
+-	/* Ignore efficient freq if lower min freq is requested */
+-	ret = slpc_set_param(slpc,
+-			     SLPC_PARAM_IGNORE_EFFICIENT_FREQUENCY,
+-			     val < slpc->rp1_freq);
+-	if (ret) {
+-		guc_probe_error(slpc_to_guc(slpc), "Failed to toggle efficient freq: %pe\n",
+-				ERR_PTR(ret));
+-		goto out;
+-	}
+-
+ 	ret = slpc_set_param(slpc,
+ 			     SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
+ 			     val);
+@@ -499,7 +513,6 @@ int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val)
+ 	if (!ret)
+ 		slpc->min_freq_softlimit = val;
+ 
+-out:
+ 	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
+ 	mutex_unlock(&slpc->lock);
+ 
+@@ -593,7 +606,7 @@ static int slpc_set_softlimits(struct intel_guc_slpc *slpc)
+ 		if (unlikely(ret))
+ 			return ret;
+ 		slpc_to_gt(slpc)->defaults.min_freq = slpc->min_freq_softlimit;
+-	} else if (slpc->min_freq_softlimit != slpc->min_freq) {
++	} else {
+ 		return intel_guc_slpc_set_min_freq(slpc,
+ 						   slpc->min_freq_softlimit);
+ 	}
+@@ -752,6 +765,9 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
+ 	/* Set cached media freq ratio mode */
+ 	intel_guc_slpc_set_media_ratio_mode(slpc, slpc->media_ratio_mode);
+ 
++	/* Set cached value of ignore efficient freq */
++	intel_guc_slpc_set_ignore_eff_freq(slpc, slpc->ignore_eff_freq);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
+index 17ed515f6a852..597eb5413ddf2 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.h
+@@ -46,5 +46,6 @@ void intel_guc_slpc_boost(struct intel_guc_slpc *slpc);
+ void intel_guc_slpc_dec_waiters(struct intel_guc_slpc *slpc);
+ int intel_guc_slpc_unset_gucrc_mode(struct intel_guc_slpc *slpc);
+ int intel_guc_slpc_override_gucrc_mode(struct intel_guc_slpc *slpc, u32 mode);
++int intel_guc_slpc_set_ignore_eff_freq(struct intel_guc_slpc *slpc, bool val);
+ 
+ #endif
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_types.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_types.h
+index a6ef53b04e047..a886513314977 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_types.h
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc_types.h
+@@ -31,6 +31,7 @@ struct intel_guc_slpc {
+ 	/* frequency softlimits */
+ 	u32 min_freq_softlimit;
+ 	u32 max_freq_softlimit;
++	bool ignore_eff_freq;
+ 
+ 	/* cached media ratio mode */
+ 	u32 media_ratio_mode;
+diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c
+index 93fdc40d724fa..2980ccdef6cd6 100644
+--- a/drivers/gpu/drm/i915/i915_driver.c
++++ b/drivers/gpu/drm/i915/i915_driver.c
+@@ -720,8 +720,6 @@ i915_driver_create(struct pci_dev *pdev, const struct pci_device_id *ent)
+ {
+ 	const struct intel_device_info *match_info =
+ 		(struct intel_device_info *)ent->driver_data;
+-	struct intel_device_info *device_info;
+-	struct intel_runtime_info *runtime;
+ 	struct drm_i915_private *i915;
+ 
+ 	i915 = devm_drm_dev_alloc(&pdev->dev, &i915_drm_driver,
+@@ -734,14 +732,8 @@ i915_driver_create(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	/* Device parameters start as a copy of module parameters. */
+ 	i915_params_copy(&i915->params, &i915_modparams);
+ 
+-	/* Setup the write-once "constant" device info */
+-	device_info = mkwrite_device_info(i915);
+-	memcpy(device_info, match_info, sizeof(*device_info));
+-
+-	/* Initialize initial runtime info from static const data and pdev. */
+-	runtime = RUNTIME_INFO(i915);
+-	memcpy(runtime, &INTEL_INFO(i915)->__runtime, sizeof(*runtime));
+-	runtime->device_id = pdev->device;
++	/* Set up device info and initial runtime info. */
++	intel_device_info_driver_create(i915, pdev->device, match_info);
+ 
+ 	return i915;
+ }
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index e771fdc3099c2..dbdecf1ee24fe 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -205,6 +205,7 @@ struct drm_i915_private {
+ 
+ 	const struct intel_device_info __info; /* Use INTEL_INFO() to access. */
+ 	struct intel_runtime_info __runtime; /* Use RUNTIME_INFO() to access. */
++	struct intel_display_runtime_info __display_runtime; /* Access with DISPLAY_RUNTIME_INFO() */
+ 	struct intel_driver_caps caps;
+ 
+ 	struct i915_dsm dsm;
+@@ -408,7 +409,9 @@ static inline struct intel_gt *to_gt(struct drm_i915_private *i915)
+ 	     (engine__) = rb_to_uabi_engine(rb_next(&(engine__)->uabi_node)))
+ 
+ #define INTEL_INFO(dev_priv)	(&(dev_priv)->__info)
++#define DISPLAY_INFO(i915)	(INTEL_INFO(i915)->display)
+ #define RUNTIME_INFO(dev_priv)	(&(dev_priv)->__runtime)
++#define DISPLAY_RUNTIME_INFO(i915)	(&(i915)->__display_runtime)
+ #define DRIVER_CAPS(dev_priv)	(&(dev_priv)->caps)
+ 
+ #define INTEL_DEVID(dev_priv)	(RUNTIME_INFO(dev_priv)->device_id)
+@@ -427,7 +430,7 @@ static inline struct intel_gt *to_gt(struct drm_i915_private *i915)
+ #define IS_MEDIA_VER(i915, from, until) \
+ 	(MEDIA_VER(i915) >= (from) && MEDIA_VER(i915) <= (until))
+ 
+-#define DISPLAY_VER(i915)	(RUNTIME_INFO(i915)->display.ip.ver)
++#define DISPLAY_VER(i915)	(DISPLAY_RUNTIME_INFO(i915)->ip.ver)
+ #define IS_DISPLAY_VER(i915, from, until) \
+ 	(DISPLAY_VER(i915) >= (from) && DISPLAY_VER(i915) <= (until))
+ 
+@@ -782,9 +785,9 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
+ 	((sizes) & ~RUNTIME_INFO(dev_priv)->page_sizes) == 0; \
+ })
+ 
+-#define HAS_OVERLAY(dev_priv)		 (INTEL_INFO(dev_priv)->display.has_overlay)
++#define HAS_OVERLAY(dev_priv)		 (DISPLAY_INFO(dev_priv)->has_overlay)
+ #define OVERLAY_NEEDS_PHYSICAL(dev_priv) \
+-		(INTEL_INFO(dev_priv)->display.overlay_needs_physical)
++		(DISPLAY_INFO(dev_priv)->overlay_needs_physical)
+ 
+ /* Early gen2 have a totally busted CS tlb and require pinned batches. */
+ #define HAS_BROKEN_CS_TLB(dev_priv)	(IS_I830(dev_priv) || IS_I845G(dev_priv))
+@@ -806,31 +809,31 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
+  */
+ #define HAS_128_BYTE_Y_TILING(dev_priv) (GRAPHICS_VER(dev_priv) != 2 && \
+ 					 !(IS_I915G(dev_priv) || IS_I915GM(dev_priv)))
+-#define SUPPORTS_TV(dev_priv)		(INTEL_INFO(dev_priv)->display.supports_tv)
+-#define I915_HAS_HOTPLUG(dev_priv)	(INTEL_INFO(dev_priv)->display.has_hotplug)
++#define SUPPORTS_TV(dev_priv)		(DISPLAY_INFO(dev_priv)->supports_tv)
++#define I915_HAS_HOTPLUG(dev_priv)	(DISPLAY_INFO(dev_priv)->has_hotplug)
+ 
+ #define HAS_FW_BLC(dev_priv)	(DISPLAY_VER(dev_priv) > 2)
+-#define HAS_FBC(dev_priv)	(RUNTIME_INFO(dev_priv)->fbc_mask != 0)
++#define HAS_FBC(dev_priv)	(DISPLAY_RUNTIME_INFO(dev_priv)->fbc_mask != 0)
+ #define HAS_CUR_FBC(dev_priv)	(!HAS_GMCH(dev_priv) && DISPLAY_VER(dev_priv) >= 7)
+ 
+ #define HAS_DPT(dev_priv)	(DISPLAY_VER(dev_priv) >= 13)
+ 
+ #define HAS_IPS(dev_priv)	(IS_HSW_ULT(dev_priv) || IS_BROADWELL(dev_priv))
+ 
+-#define HAS_DP_MST(dev_priv)	(INTEL_INFO(dev_priv)->display.has_dp_mst)
++#define HAS_DP_MST(dev_priv)	(DISPLAY_INFO(dev_priv)->has_dp_mst)
+ #define HAS_DP20(dev_priv)	(IS_DG2(dev_priv) || DISPLAY_VER(dev_priv) >= 14)
+ 
+ #define HAS_DOUBLE_BUFFERED_M_N(dev_priv)	(DISPLAY_VER(dev_priv) >= 9 || IS_BROADWELL(dev_priv))
+ 
+-#define HAS_CDCLK_CRAWL(dev_priv)	 (INTEL_INFO(dev_priv)->display.has_cdclk_crawl)
+-#define HAS_CDCLK_SQUASH(dev_priv)	 (INTEL_INFO(dev_priv)->display.has_cdclk_squash)
+-#define HAS_DDI(dev_priv)		 (INTEL_INFO(dev_priv)->display.has_ddi)
+-#define HAS_FPGA_DBG_UNCLAIMED(dev_priv) (INTEL_INFO(dev_priv)->display.has_fpga_dbg)
+-#define HAS_PSR(dev_priv)		 (INTEL_INFO(dev_priv)->display.has_psr)
++#define HAS_CDCLK_CRAWL(dev_priv)	 (DISPLAY_INFO(dev_priv)->has_cdclk_crawl)
++#define HAS_CDCLK_SQUASH(dev_priv)	 (DISPLAY_INFO(dev_priv)->has_cdclk_squash)
++#define HAS_DDI(dev_priv)		 (DISPLAY_INFO(dev_priv)->has_ddi)
++#define HAS_FPGA_DBG_UNCLAIMED(dev_priv) (DISPLAY_INFO(dev_priv)->has_fpga_dbg)
++#define HAS_PSR(dev_priv)		 (DISPLAY_INFO(dev_priv)->has_psr)
+ #define HAS_PSR_HW_TRACKING(dev_priv) \
+-	(INTEL_INFO(dev_priv)->display.has_psr_hw_tracking)
++	(DISPLAY_INFO(dev_priv)->has_psr_hw_tracking)
+ #define HAS_PSR2_SEL_FETCH(dev_priv)	 (DISPLAY_VER(dev_priv) >= 12)
+-#define HAS_TRANSCODER(dev_priv, trans)	 ((RUNTIME_INFO(dev_priv)->cpu_transcoder_mask & BIT(trans)) != 0)
++#define HAS_TRANSCODER(dev_priv, trans)	 ((DISPLAY_RUNTIME_INFO(dev_priv)->cpu_transcoder_mask & BIT(trans)) != 0)
+ 
+ #define HAS_RC6(dev_priv)		 (INTEL_INFO(dev_priv)->has_rc6)
+ #define HAS_RC6p(dev_priv)		 (INTEL_INFO(dev_priv)->has_rc6p)
+@@ -838,9 +841,9 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
+ 
+ #define HAS_RPS(dev_priv)	(INTEL_INFO(dev_priv)->has_rps)
+ 
+-#define HAS_DMC(dev_priv)	(RUNTIME_INFO(dev_priv)->has_dmc)
+-#define HAS_DSB(dev_priv)	(INTEL_INFO(dev_priv)->display.has_dsb)
+-#define HAS_DSC(__i915)		(RUNTIME_INFO(__i915)->has_dsc)
++#define HAS_DMC(dev_priv)	(DISPLAY_RUNTIME_INFO(dev_priv)->has_dmc)
++#define HAS_DSB(dev_priv)	(DISPLAY_INFO(dev_priv)->has_dsb)
++#define HAS_DSC(__i915)		(DISPLAY_RUNTIME_INFO(__i915)->has_dsc)
+ #define HAS_HW_SAGV_WM(i915) (DISPLAY_VER(i915) >= 13 && !IS_DGFX(i915))
+ 
+ #define HAS_HECI_PXP(dev_priv) \
+@@ -869,7 +872,7 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
+  */
+ #define HAS_64K_PAGES(dev_priv) (INTEL_INFO(dev_priv)->has_64k_pages)
+ 
+-#define HAS_IPC(dev_priv)		(INTEL_INFO(dev_priv)->display.has_ipc)
++#define HAS_IPC(dev_priv)		(DISPLAY_INFO(dev_priv)->has_ipc)
+ #define HAS_SAGV(dev_priv)		(DISPLAY_VER(dev_priv) >= 9 && !IS_LP(dev_priv))
+ 
+ #define HAS_REGION(i915, i) (RUNTIME_INFO(i915)->memory_regions & (i))
+@@ -889,7 +892,7 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
+ 
+ #define HAS_GLOBAL_MOCS_REGISTERS(dev_priv)	(INTEL_INFO(dev_priv)->has_global_mocs)
+ 
+-#define HAS_GMCH(dev_priv) (INTEL_INFO(dev_priv)->display.has_gmch)
++#define HAS_GMCH(dev_priv) (DISPLAY_INFO(dev_priv)->has_gmch)
+ 
+ #define HAS_GMD_ID(i915)	(INTEL_INFO(i915)->has_gmd_id)
+ 
+@@ -902,9 +905,9 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
+ #define NUM_L3_SLICES(dev_priv) (IS_HSW_GT3(dev_priv) ? \
+ 				 2 : HAS_L3_DPF(dev_priv))
+ 
+-#define INTEL_NUM_PIPES(dev_priv) (hweight8(RUNTIME_INFO(dev_priv)->pipe_mask))
++#define INTEL_NUM_PIPES(dev_priv) (hweight8(DISPLAY_RUNTIME_INFO(dev_priv)->pipe_mask))
+ 
+-#define HAS_DISPLAY(dev_priv) (RUNTIME_INFO(dev_priv)->pipe_mask != 0)
++#define HAS_DISPLAY(dev_priv) (DISPLAY_RUNTIME_INFO(dev_priv)->pipe_mask != 0)
+ 
+ #define HAS_VRR(i915)	(DISPLAY_VER(i915) >= 11)
+ 
+@@ -931,11 +934,4 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
+ #define HAS_LMEMBAR_SMEM_STOLEN(i915) (!HAS_LMEM(i915) && \
+ 				       GRAPHICS_VER_FULL(i915) >= IP_VER(12, 70))
+ 
+-/* intel_device_info.c */
+-static inline struct intel_device_info *
+-mkwrite_device_info(struct drm_i915_private *dev_priv)
+-{
+-	return (struct intel_device_info *)INTEL_INFO(dev_priv);
+-}
+-
+ #endif
+diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
+index edcfb5fe20b24..6b69d4c7bdb79 100644
+--- a/drivers/gpu/drm/i915/i915_pci.c
++++ b/drivers/gpu/drm/i915/i915_pci.c
+@@ -39,129 +39,7 @@
+ #define PLATFORM(x) .platform = (x)
+ #define GEN(x) \
+ 	.__runtime.graphics.ip.ver = (x), \
+-	.__runtime.media.ip.ver = (x), \
+-	.__runtime.display.ip.ver = (x)
+-
+-#define NO_DISPLAY .__runtime.pipe_mask = 0
+-
+-#define I845_PIPE_OFFSETS \
+-	.display.pipe_offsets = { \
+-		[TRANSCODER_A] = PIPE_A_OFFSET,	\
+-	}, \
+-	.display.trans_offsets = { \
+-		[TRANSCODER_A] = TRANSCODER_A_OFFSET, \
+-	}
+-
+-#define I9XX_PIPE_OFFSETS \
+-	.display.pipe_offsets = { \
+-		[TRANSCODER_A] = PIPE_A_OFFSET,	\
+-		[TRANSCODER_B] = PIPE_B_OFFSET, \
+-	}, \
+-	.display.trans_offsets = { \
+-		[TRANSCODER_A] = TRANSCODER_A_OFFSET, \
+-		[TRANSCODER_B] = TRANSCODER_B_OFFSET, \
+-	}
+-
+-#define IVB_PIPE_OFFSETS \
+-	.display.pipe_offsets = { \
+-		[TRANSCODER_A] = PIPE_A_OFFSET,	\
+-		[TRANSCODER_B] = PIPE_B_OFFSET, \
+-		[TRANSCODER_C] = PIPE_C_OFFSET, \
+-	}, \
+-	.display.trans_offsets = { \
+-		[TRANSCODER_A] = TRANSCODER_A_OFFSET, \
+-		[TRANSCODER_B] = TRANSCODER_B_OFFSET, \
+-		[TRANSCODER_C] = TRANSCODER_C_OFFSET, \
+-	}
+-
+-#define HSW_PIPE_OFFSETS \
+-	.display.pipe_offsets = { \
+-		[TRANSCODER_A] = PIPE_A_OFFSET,	\
+-		[TRANSCODER_B] = PIPE_B_OFFSET, \
+-		[TRANSCODER_C] = PIPE_C_OFFSET, \
+-		[TRANSCODER_EDP] = PIPE_EDP_OFFSET, \
+-	}, \
+-	.display.trans_offsets = { \
+-		[TRANSCODER_A] = TRANSCODER_A_OFFSET, \
+-		[TRANSCODER_B] = TRANSCODER_B_OFFSET, \
+-		[TRANSCODER_C] = TRANSCODER_C_OFFSET, \
+-		[TRANSCODER_EDP] = TRANSCODER_EDP_OFFSET, \
+-	}
+-
+-#define CHV_PIPE_OFFSETS \
+-	.display.pipe_offsets = { \
+-		[TRANSCODER_A] = PIPE_A_OFFSET, \
+-		[TRANSCODER_B] = PIPE_B_OFFSET, \
+-		[TRANSCODER_C] = CHV_PIPE_C_OFFSET, \
+-	}, \
+-	.display.trans_offsets = { \
+-		[TRANSCODER_A] = TRANSCODER_A_OFFSET, \
+-		[TRANSCODER_B] = TRANSCODER_B_OFFSET, \
+-		[TRANSCODER_C] = CHV_TRANSCODER_C_OFFSET, \
+-	}
+-
+-#define I845_CURSOR_OFFSETS \
+-	.display.cursor_offsets = { \
+-		[PIPE_A] = CURSOR_A_OFFSET, \
+-	}
+-
+-#define I9XX_CURSOR_OFFSETS \
+-	.display.cursor_offsets = { \
+-		[PIPE_A] = CURSOR_A_OFFSET, \
+-		[PIPE_B] = CURSOR_B_OFFSET, \
+-	}
+-
+-#define CHV_CURSOR_OFFSETS \
+-	.display.cursor_offsets = { \
+-		[PIPE_A] = CURSOR_A_OFFSET, \
+-		[PIPE_B] = CURSOR_B_OFFSET, \
+-		[PIPE_C] = CHV_CURSOR_C_OFFSET, \
+-	}
+-
+-#define IVB_CURSOR_OFFSETS \
+-	.display.cursor_offsets = { \
+-		[PIPE_A] = CURSOR_A_OFFSET, \
+-		[PIPE_B] = IVB_CURSOR_B_OFFSET, \
+-		[PIPE_C] = IVB_CURSOR_C_OFFSET, \
+-	}
+-
+-#define TGL_CURSOR_OFFSETS \
+-	.display.cursor_offsets = { \
+-		[PIPE_A] = CURSOR_A_OFFSET, \
+-		[PIPE_B] = IVB_CURSOR_B_OFFSET, \
+-		[PIPE_C] = IVB_CURSOR_C_OFFSET, \
+-		[PIPE_D] = TGL_CURSOR_D_OFFSET, \
+-	}
+-
+-#define I845_COLORS \
+-	.display.color = { .gamma_lut_size = 256 }
+-#define I9XX_COLORS \
+-	.display.color = { .gamma_lut_size = 129, \
+-		   .gamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING, \
+-	}
+-#define ILK_COLORS \
+-	.display.color = { .gamma_lut_size = 1024 }
+-#define IVB_COLORS \
+-	.display.color = { .degamma_lut_size = 1024, .gamma_lut_size = 1024 }
+-#define CHV_COLORS \
+-	.display.color = { \
+-		.degamma_lut_size = 65, .gamma_lut_size = 257, \
+-		.degamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING, \
+-		.gamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING, \
+-	}
+-#define GLK_COLORS \
+-	.display.color = { \
+-		.degamma_lut_size = 33, .gamma_lut_size = 1024, \
+-		.degamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING | \
+-				     DRM_COLOR_LUT_EQUAL_CHANNELS, \
+-	}
+-#define ICL_COLORS \
+-	.display.color = { \
+-		.degamma_lut_size = 33, .gamma_lut_size = 262145, \
+-		.degamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING | \
+-				     DRM_COLOR_LUT_EQUAL_CHANNELS, \
+-		.gamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING, \
+-	}
++	.__runtime.media.ip.ver = (x)
+ 
+ /* Keep in gen based order, and chronological order within a gen */
+ 
+@@ -174,12 +52,6 @@
+ #define I830_FEATURES \
+ 	GEN(2), \
+ 	.is_mobile = 1, \
+-	.__runtime.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B), \
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B), \
+-	.display.has_overlay = 1, \
+-	.display.cursor_needs_physical = 1, \
+-	.display.overlay_needs_physical = 1, \
+-	.display.has_gmch = 1, \
+ 	.gpu_reset_clobbers_display = true, \
+ 	.has_3d_pipeline = 1, \
+ 	.hws_needs_physical = 1, \
+@@ -188,19 +60,11 @@
+ 	.has_snoop = true, \
+ 	.has_coherent_ggtt = false, \
+ 	.dma_mask_size = 32, \
+-	I9XX_PIPE_OFFSETS, \
+-	I9XX_CURSOR_OFFSETS, \
+-	I9XX_COLORS, \
+ 	GEN_DEFAULT_PAGE_SIZES, \
+ 	GEN_DEFAULT_REGIONS
+ 
+ #define I845_FEATURES \
+ 	GEN(2), \
+-	.__runtime.pipe_mask = BIT(PIPE_A), \
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A), \
+-	.display.has_overlay = 1, \
+-	.display.overlay_needs_physical = 1, \
+-	.display.has_gmch = 1, \
+ 	.has_3d_pipeline = 1, \
+ 	.gpu_reset_clobbers_display = true, \
+ 	.hws_needs_physical = 1, \
+@@ -209,9 +73,6 @@
+ 	.has_snoop = true, \
+ 	.has_coherent_ggtt = false, \
+ 	.dma_mask_size = 32, \
+-	I845_PIPE_OFFSETS, \
+-	I845_CURSOR_OFFSETS, \
+-	I845_COLORS, \
+ 	GEN_DEFAULT_PAGE_SIZES, \
+ 	GEN_DEFAULT_REGIONS
+ 
+@@ -228,29 +89,21 @@ static const struct intel_device_info i845g_info = {
+ static const struct intel_device_info i85x_info = {
+ 	I830_FEATURES,
+ 	PLATFORM(INTEL_I85X),
+-	.__runtime.fbc_mask = BIT(INTEL_FBC_A),
+ };
+ 
+ static const struct intel_device_info i865g_info = {
+ 	I845_FEATURES,
+ 	PLATFORM(INTEL_I865G),
+-	.__runtime.fbc_mask = BIT(INTEL_FBC_A),
+ };
+ 
+ #define GEN3_FEATURES \
+ 	GEN(3), \
+-	.__runtime.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B), \
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B), \
+-	.display.has_gmch = 1, \
+ 	.gpu_reset_clobbers_display = true, \
+ 	.__runtime.platform_engine_mask = BIT(RCS0), \
+ 	.has_3d_pipeline = 1, \
+ 	.has_snoop = true, \
+ 	.has_coherent_ggtt = true, \
+ 	.dma_mask_size = 32, \
+-	I9XX_PIPE_OFFSETS, \
+-	I9XX_CURSOR_OFFSETS, \
+-	I9XX_COLORS, \
+ 	GEN_DEFAULT_PAGE_SIZES, \
+ 	GEN_DEFAULT_REGIONS
+ 
+@@ -258,9 +111,6 @@ static const struct intel_device_info i915g_info = {
+ 	GEN3_FEATURES,
+ 	PLATFORM(INTEL_I915G),
+ 	.has_coherent_ggtt = false,
+-	.display.cursor_needs_physical = 1,
+-	.display.has_overlay = 1,
+-	.display.overlay_needs_physical = 1,
+ 	.hws_needs_physical = 1,
+ 	.unfenced_needs_alignment = 1,
+ };
+@@ -269,11 +119,6 @@ static const struct intel_device_info i915gm_info = {
+ 	GEN3_FEATURES,
+ 	PLATFORM(INTEL_I915GM),
+ 	.is_mobile = 1,
+-	.display.cursor_needs_physical = 1,
+-	.display.has_overlay = 1,
+-	.display.overlay_needs_physical = 1,
+-	.display.supports_tv = 1,
+-	.__runtime.fbc_mask = BIT(INTEL_FBC_A),
+ 	.hws_needs_physical = 1,
+ 	.unfenced_needs_alignment = 1,
+ };
+@@ -281,10 +126,6 @@ static const struct intel_device_info i915gm_info = {
+ static const struct intel_device_info i945g_info = {
+ 	GEN3_FEATURES,
+ 	PLATFORM(INTEL_I945G),
+-	.display.has_hotplug = 1,
+-	.display.cursor_needs_physical = 1,
+-	.display.has_overlay = 1,
+-	.display.overlay_needs_physical = 1,
+ 	.hws_needs_physical = 1,
+ 	.unfenced_needs_alignment = 1,
+ };
+@@ -293,12 +134,6 @@ static const struct intel_device_info i945gm_info = {
+ 	GEN3_FEATURES,
+ 	PLATFORM(INTEL_I945GM),
+ 	.is_mobile = 1,
+-	.display.has_hotplug = 1,
+-	.display.cursor_needs_physical = 1,
+-	.display.has_overlay = 1,
+-	.display.overlay_needs_physical = 1,
+-	.display.supports_tv = 1,
+-	.__runtime.fbc_mask = BIT(INTEL_FBC_A),
+ 	.hws_needs_physical = 1,
+ 	.unfenced_needs_alignment = 1,
+ };
+@@ -306,16 +141,12 @@ static const struct intel_device_info i945gm_info = {
+ static const struct intel_device_info g33_info = {
+ 	GEN3_FEATURES,
+ 	PLATFORM(INTEL_G33),
+-	.display.has_hotplug = 1,
+-	.display.has_overlay = 1,
+ 	.dma_mask_size = 36,
+ };
+ 
+ static const struct intel_device_info pnv_g_info = {
+ 	GEN3_FEATURES,
+ 	PLATFORM(INTEL_PINEVIEW),
+-	.display.has_hotplug = 1,
+-	.display.has_overlay = 1,
+ 	.dma_mask_size = 36,
+ };
+ 
+@@ -323,33 +154,23 @@ static const struct intel_device_info pnv_m_info = {
+ 	GEN3_FEATURES,
+ 	PLATFORM(INTEL_PINEVIEW),
+ 	.is_mobile = 1,
+-	.display.has_hotplug = 1,
+-	.display.has_overlay = 1,
+ 	.dma_mask_size = 36,
+ };
+ 
+ #define GEN4_FEATURES \
+ 	GEN(4), \
+-	.__runtime.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B), \
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B), \
+-	.display.has_hotplug = 1, \
+-	.display.has_gmch = 1, \
+ 	.gpu_reset_clobbers_display = true, \
+ 	.__runtime.platform_engine_mask = BIT(RCS0), \
+ 	.has_3d_pipeline = 1, \
+ 	.has_snoop = true, \
+ 	.has_coherent_ggtt = true, \
+ 	.dma_mask_size = 36, \
+-	I9XX_PIPE_OFFSETS, \
+-	I9XX_CURSOR_OFFSETS, \
+-	I9XX_COLORS, \
+ 	GEN_DEFAULT_PAGE_SIZES, \
+ 	GEN_DEFAULT_REGIONS
+ 
+ static const struct intel_device_info i965g_info = {
+ 	GEN4_FEATURES,
+ 	PLATFORM(INTEL_I965G),
+-	.display.has_overlay = 1,
+ 	.hws_needs_physical = 1,
+ 	.has_snoop = false,
+ };
+@@ -358,9 +179,6 @@ static const struct intel_device_info i965gm_info = {
+ 	GEN4_FEATURES,
+ 	PLATFORM(INTEL_I965GM),
+ 	.is_mobile = 1,
+-	.__runtime.fbc_mask = BIT(INTEL_FBC_A),
+-	.display.has_overlay = 1,
+-	.display.supports_tv = 1,
+ 	.hws_needs_physical = 1,
+ 	.has_snoop = false,
+ };
+@@ -376,17 +194,12 @@ static const struct intel_device_info gm45_info = {
+ 	GEN4_FEATURES,
+ 	PLATFORM(INTEL_GM45),
+ 	.is_mobile = 1,
+-	.__runtime.fbc_mask = BIT(INTEL_FBC_A),
+-	.display.supports_tv = 1,
+ 	.__runtime.platform_engine_mask = BIT(RCS0) | BIT(VCS0),
+ 	.gpu_reset_clobbers_display = false,
+ };
+ 
+ #define GEN5_FEATURES \
+ 	GEN(5), \
+-	.__runtime.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B), \
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B), \
+-	.display.has_hotplug = 1, \
+ 	.__runtime.platform_engine_mask = BIT(RCS0) | BIT(VCS0), \
+ 	.has_3d_pipeline = 1, \
+ 	.has_snoop = true, \
+@@ -394,9 +207,6 @@ static const struct intel_device_info gm45_info = {
+ 	/* ilk does support rc6, but we do not implement [power] contexts */ \
+ 	.has_rc6 = 0, \
+ 	.dma_mask_size = 36, \
+-	I9XX_PIPE_OFFSETS, \
+-	I9XX_CURSOR_OFFSETS, \
+-	ILK_COLORS, \
+ 	GEN_DEFAULT_PAGE_SIZES, \
+ 	GEN_DEFAULT_REGIONS
+ 
+@@ -410,15 +220,10 @@ static const struct intel_device_info ilk_m_info = {
+ 	PLATFORM(INTEL_IRONLAKE),
+ 	.is_mobile = 1,
+ 	.has_rps = true,
+-	.__runtime.fbc_mask = BIT(INTEL_FBC_A),
+ };
+ 
+ #define GEN6_FEATURES \
+ 	GEN(6), \
+-	.__runtime.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B), \
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B), \
+-	.display.has_hotplug = 1, \
+-	.__runtime.fbc_mask = BIT(INTEL_FBC_A), \
+ 	.__runtime.platform_engine_mask = BIT(RCS0) | BIT(VCS0) | BIT(BCS0), \
+ 	.has_3d_pipeline = 1, \
+ 	.has_coherent_ggtt = true, \
+@@ -430,9 +235,6 @@ static const struct intel_device_info ilk_m_info = {
+ 	.dma_mask_size = 40, \
+ 	.__runtime.ppgtt_type = INTEL_PPGTT_ALIASING, \
+ 	.__runtime.ppgtt_size = 31, \
+-	I9XX_PIPE_OFFSETS, \
+-	I9XX_CURSOR_OFFSETS, \
+-	ILK_COLORS, \
+ 	GEN_DEFAULT_PAGE_SIZES, \
+ 	GEN_DEFAULT_REGIONS
+ 
+@@ -468,10 +270,6 @@ static const struct intel_device_info snb_m_gt2_info = {
+ 
+ #define GEN7_FEATURES  \
+ 	GEN(7), \
+-	.__runtime.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C), \
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | BIT(TRANSCODER_C), \
+-	.display.has_hotplug = 1, \
+-	.__runtime.fbc_mask = BIT(INTEL_FBC_A), \
+ 	.__runtime.platform_engine_mask = BIT(RCS0) | BIT(VCS0) | BIT(BCS0), \
+ 	.has_3d_pipeline = 1, \
+ 	.has_coherent_ggtt = true, \
+@@ -483,9 +281,6 @@ static const struct intel_device_info snb_m_gt2_info = {
+ 	.dma_mask_size = 40, \
+ 	.__runtime.ppgtt_type = INTEL_PPGTT_ALIASING, \
+ 	.__runtime.ppgtt_size = 31, \
+-	IVB_PIPE_OFFSETS, \
+-	IVB_CURSOR_OFFSETS, \
+-	IVB_COLORS, \
+ 	GEN_DEFAULT_PAGE_SIZES, \
+ 	GEN_DEFAULT_REGIONS
+ 
+@@ -523,7 +318,6 @@ static const struct intel_device_info ivb_m_gt2_info = {
+ static const struct intel_device_info ivb_q_info = {
+ 	GEN7_FEATURES,
+ 	PLATFORM(INTEL_IVYBRIDGE),
+-	NO_DISPLAY,
+ 	.gt = 2,
+ 	.has_l3_dpf = 1,
+ };
+@@ -532,24 +326,16 @@ static const struct intel_device_info vlv_info = {
+ 	PLATFORM(INTEL_VALLEYVIEW),
+ 	GEN(7),
+ 	.is_lp = 1,
+-	.__runtime.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B),
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B),
+ 	.has_runtime_pm = 1,
+ 	.has_rc6 = 1,
+ 	.has_reset_engine = true,
+ 	.has_rps = true,
+-	.display.has_gmch = 1,
+-	.display.has_hotplug = 1,
+ 	.dma_mask_size = 40,
+ 	.__runtime.ppgtt_type = INTEL_PPGTT_ALIASING,
+ 	.__runtime.ppgtt_size = 31,
+ 	.has_snoop = true,
+ 	.has_coherent_ggtt = false,
+ 	.__runtime.platform_engine_mask = BIT(RCS0) | BIT(VCS0) | BIT(BCS0),
+-	.display.mmio_offset = VLV_DISPLAY_BASE,
+-	I9XX_PIPE_OFFSETS,
+-	I9XX_CURSOR_OFFSETS,
+-	I9XX_COLORS,
+ 	GEN_DEFAULT_PAGE_SIZES,
+ 	GEN_DEFAULT_REGIONS,
+ };
+@@ -557,13 +343,7 @@ static const struct intel_device_info vlv_info = {
+ #define G75_FEATURES  \
+ 	GEN7_FEATURES, \
+ 	.__runtime.platform_engine_mask = BIT(RCS0) | BIT(VCS0) | BIT(BCS0) | BIT(VECS0), \
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | \
+-		BIT(TRANSCODER_C) | BIT(TRANSCODER_EDP), \
+-	.display.has_ddi = 1, \
+-	.display.has_fpga_dbg = 1, \
+-	.display.has_dp_mst = 1, \
+ 	.has_rc6p = 0 /* RC6p removed-by HSW */, \
+-	HSW_PIPE_OFFSETS, \
+ 	.has_runtime_pm = 1
+ 
+ #define HSW_PLATFORM \
+@@ -627,9 +407,6 @@ static const struct intel_device_info bdw_gt3_info = {
+ static const struct intel_device_info chv_info = {
+ 	PLATFORM(INTEL_CHERRYVIEW),
+ 	GEN(8),
+-	.__runtime.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | BIT(TRANSCODER_C),
+-	.display.has_hotplug = 1,
+ 	.is_lp = 1,
+ 	.__runtime.platform_engine_mask = BIT(RCS0) | BIT(VCS0) | BIT(BCS0) | BIT(VECS0),
+ 	.has_64bit_reloc = 1,
+@@ -637,17 +414,12 @@ static const struct intel_device_info chv_info = {
+ 	.has_rc6 = 1,
+ 	.has_rps = true,
+ 	.has_logical_ring_contexts = 1,
+-	.display.has_gmch = 1,
+ 	.dma_mask_size = 39,
+ 	.__runtime.ppgtt_type = INTEL_PPGTT_FULL,
+ 	.__runtime.ppgtt_size = 32,
+ 	.has_reset_engine = 1,
+ 	.has_snoop = true,
+ 	.has_coherent_ggtt = false,
+-	.display.mmio_offset = VLV_DISPLAY_BASE,
+-	CHV_PIPE_OFFSETS,
+-	CHV_CURSOR_OFFSETS,
+-	CHV_COLORS,
+ 	GEN_DEFAULT_PAGE_SIZES,
+ 	GEN_DEFAULT_REGIONS,
+ };
+@@ -660,14 +432,7 @@ static const struct intel_device_info chv_info = {
+ 	GEN8_FEATURES, \
+ 	GEN(9), \
+ 	GEN9_DEFAULT_PAGE_SIZES, \
+-	.__runtime.has_dmc = 1, \
+-	.has_gt_uc = 1, \
+-	.__runtime.has_hdcp = 1, \
+-	.display.has_ipc = 1, \
+-	.display.has_psr = 1, \
+-	.display.has_psr_hw_tracking = 1, \
+-	.display.dbuf.size = 896 - 4, /* 4 blocks for bypass path allocation */ \
+-	.display.dbuf.slice_mask = BIT(DBUF_S1)
++	.has_gt_uc = 1
+ 
+ #define SKL_PLATFORM \
+ 	GEN9_FEATURES, \
+@@ -702,26 +467,12 @@ static const struct intel_device_info skl_gt4_info = {
+ #define GEN9_LP_FEATURES \
+ 	GEN(9), \
+ 	.is_lp = 1, \
+-	.display.dbuf.slice_mask = BIT(DBUF_S1), \
+-	.display.has_hotplug = 1, \
+ 	.__runtime.platform_engine_mask = BIT(RCS0) | BIT(VCS0) | BIT(BCS0) | BIT(VECS0), \
+-	.__runtime.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C), \
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | \
+-		BIT(TRANSCODER_C) | BIT(TRANSCODER_EDP) | \
+-		BIT(TRANSCODER_DSI_A) | BIT(TRANSCODER_DSI_C), \
+ 	.has_3d_pipeline = 1, \
+ 	.has_64bit_reloc = 1, \
+-	.display.has_ddi = 1, \
+-	.display.has_fpga_dbg = 1, \
+-	.__runtime.fbc_mask = BIT(INTEL_FBC_A), \
+-	.__runtime.has_hdcp = 1, \
+-	.display.has_psr = 1, \
+-	.display.has_psr_hw_tracking = 1, \
+ 	.has_runtime_pm = 1, \
+-	.__runtime.has_dmc = 1, \
+ 	.has_rc6 = 1, \
+ 	.has_rps = true, \
+-	.display.has_dp_mst = 1, \
+ 	.has_logical_ring_contexts = 1, \
+ 	.has_gt_uc = 1, \
+ 	.dma_mask_size = 39, \
+@@ -730,25 +481,17 @@ static const struct intel_device_info skl_gt4_info = {
+ 	.has_reset_engine = 1, \
+ 	.has_snoop = true, \
+ 	.has_coherent_ggtt = false, \
+-	.display.has_ipc = 1, \
+-	HSW_PIPE_OFFSETS, \
+-	IVB_CURSOR_OFFSETS, \
+-	IVB_COLORS, \
+ 	GEN9_DEFAULT_PAGE_SIZES, \
+ 	GEN_DEFAULT_REGIONS
+ 
+ static const struct intel_device_info bxt_info = {
+ 	GEN9_LP_FEATURES,
+ 	PLATFORM(INTEL_BROXTON),
+-	.display.dbuf.size = 512 - 4, /* 4 blocks for bypass path allocation */
+ };
+ 
+ static const struct intel_device_info glk_info = {
+ 	GEN9_LP_FEATURES,
+ 	PLATFORM(INTEL_GEMINILAKE),
+-	.__runtime.display.ip.ver = 10,
+-	.display.dbuf.size = 1024 - 4, /* 4 blocks for bypass path allocation */
+-	GLK_COLORS,
+ };
+ 
+ #define KBL_PLATFORM \
+@@ -815,31 +558,7 @@ static const struct intel_device_info cml_gt2_info = {
+ #define GEN11_FEATURES \
+ 	GEN9_FEATURES, \
+ 	GEN11_DEFAULT_PAGE_SIZES, \
+-	.display.abox_mask = BIT(0), \
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | \
+-		BIT(TRANSCODER_C) | BIT(TRANSCODER_EDP) | \
+-		BIT(TRANSCODER_DSI_0) | BIT(TRANSCODER_DSI_1), \
+-	.display.pipe_offsets = { \
+-		[TRANSCODER_A] = PIPE_A_OFFSET, \
+-		[TRANSCODER_B] = PIPE_B_OFFSET, \
+-		[TRANSCODER_C] = PIPE_C_OFFSET, \
+-		[TRANSCODER_EDP] = PIPE_EDP_OFFSET, \
+-		[TRANSCODER_DSI_0] = PIPE_DSI0_OFFSET, \
+-		[TRANSCODER_DSI_1] = PIPE_DSI1_OFFSET, \
+-	}, \
+-	.display.trans_offsets = { \
+-		[TRANSCODER_A] = TRANSCODER_A_OFFSET, \
+-		[TRANSCODER_B] = TRANSCODER_B_OFFSET, \
+-		[TRANSCODER_C] = TRANSCODER_C_OFFSET, \
+-		[TRANSCODER_EDP] = TRANSCODER_EDP_OFFSET, \
+-		[TRANSCODER_DSI_0] = TRANSCODER_DSI0_OFFSET, \
+-		[TRANSCODER_DSI_1] = TRANSCODER_DSI1_OFFSET, \
+-	}, \
+ 	GEN(11), \
+-	ICL_COLORS, \
+-	.display.dbuf.size = 2048, \
+-	.display.dbuf.slice_mask = BIT(DBUF_S1) | BIT(DBUF_S2), \
+-	.__runtime.has_dsc = 1, \
+ 	.has_coherent_ggtt = false, \
+ 	.has_logical_ring_elsq = 1
+ 
+@@ -867,31 +586,8 @@ static const struct intel_device_info jsl_info = {
+ #define GEN12_FEATURES \
+ 	GEN11_FEATURES, \
+ 	GEN(12), \
+-	.display.abox_mask = GENMASK(2, 1), \
+-	.__runtime.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D), \
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) | \
+-		BIT(TRANSCODER_C) | BIT(TRANSCODER_D) | \
+-		BIT(TRANSCODER_DSI_0) | BIT(TRANSCODER_DSI_1), \
+-	.display.pipe_offsets = { \
+-		[TRANSCODER_A] = PIPE_A_OFFSET, \
+-		[TRANSCODER_B] = PIPE_B_OFFSET, \
+-		[TRANSCODER_C] = PIPE_C_OFFSET, \
+-		[TRANSCODER_D] = PIPE_D_OFFSET, \
+-		[TRANSCODER_DSI_0] = PIPE_DSI0_OFFSET, \
+-		[TRANSCODER_DSI_1] = PIPE_DSI1_OFFSET, \
+-	}, \
+-	.display.trans_offsets = { \
+-		[TRANSCODER_A] = TRANSCODER_A_OFFSET, \
+-		[TRANSCODER_B] = TRANSCODER_B_OFFSET, \
+-		[TRANSCODER_C] = TRANSCODER_C_OFFSET, \
+-		[TRANSCODER_D] = TRANSCODER_D_OFFSET, \
+-		[TRANSCODER_DSI_0] = TRANSCODER_DSI0_OFFSET, \
+-		[TRANSCODER_DSI_1] = TRANSCODER_DSI1_OFFSET, \
+-	}, \
+-	TGL_CURSOR_OFFSETS, \
+ 	.has_global_mocs = 1, \
+-	.has_pxp = 1, \
+-	.display.has_dsb = 1
++	.has_pxp = 1
+ 
+ static const struct intel_device_info tgl_info = {
+ 	GEN12_FEATURES,
+@@ -903,12 +599,6 @@ static const struct intel_device_info tgl_info = {
+ static const struct intel_device_info rkl_info = {
+ 	GEN12_FEATURES,
+ 	PLATFORM(INTEL_ROCKETLAKE),
+-	.display.abox_mask = BIT(0),
+-	.__runtime.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C),
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) |
+-		BIT(TRANSCODER_C),
+-	.display.has_hti = 1,
+-	.display.has_psr_hw_tracking = 0,
+ 	.__runtime.platform_engine_mask =
+ 		BIT(RCS0) | BIT(BCS0) | BIT(VECS0) | BIT(VCS0),
+ };
+@@ -926,7 +616,6 @@ static const struct intel_device_info dg1_info = {
+ 	DGFX_FEATURES,
+ 	.__runtime.graphics.ip.rel = 10,
+ 	PLATFORM(INTEL_DG1),
+-	.__runtime.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D),
+ 	.require_force_probe = 1,
+ 	.__runtime.platform_engine_mask =
+ 		BIT(RCS0) | BIT(BCS0) | BIT(VECS0) |
+@@ -938,64 +627,14 @@ static const struct intel_device_info dg1_info = {
+ static const struct intel_device_info adl_s_info = {
+ 	GEN12_FEATURES,
+ 	PLATFORM(INTEL_ALDERLAKE_S),
+-	.__runtime.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D),
+-	.display.has_hti = 1,
+-	.display.has_psr_hw_tracking = 0,
+ 	.__runtime.platform_engine_mask =
+ 		BIT(RCS0) | BIT(BCS0) | BIT(VECS0) | BIT(VCS0) | BIT(VCS2),
+ 	.dma_mask_size = 39,
+ };
+ 
+-#define XE_LPD_FEATURES \
+-	.display.abox_mask = GENMASK(1, 0),					\
+-	.display.color = {							\
+-		.degamma_lut_size = 129, .gamma_lut_size = 1024,		\
+-		.degamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING |		\
+-				     DRM_COLOR_LUT_EQUAL_CHANNELS,		\
+-	},									\
+-	.display.dbuf.size = 4096,						\
+-	.display.dbuf.slice_mask = BIT(DBUF_S1) | BIT(DBUF_S2) | BIT(DBUF_S3) |	\
+-		BIT(DBUF_S4),							\
+-	.display.has_ddi = 1,							\
+-	.__runtime.has_dmc = 1,							\
+-	.display.has_dp_mst = 1,						\
+-	.display.has_dsb = 1,							\
+-	.__runtime.has_dsc = 1,							\
+-	.__runtime.fbc_mask = BIT(INTEL_FBC_A),					\
+-	.display.has_fpga_dbg = 1,						\
+-	.__runtime.has_hdcp = 1,						\
+-	.display.has_hotplug = 1,						\
+-	.display.has_ipc = 1,							\
+-	.display.has_psr = 1,							\
+-	.__runtime.display.ip.ver = 13,							\
+-	.__runtime.pipe_mask = BIT(PIPE_A) | BIT(PIPE_B) | BIT(PIPE_C) | BIT(PIPE_D),	\
+-	.display.pipe_offsets = {						\
+-		[TRANSCODER_A] = PIPE_A_OFFSET,					\
+-		[TRANSCODER_B] = PIPE_B_OFFSET,					\
+-		[TRANSCODER_C] = PIPE_C_OFFSET,					\
+-		[TRANSCODER_D] = PIPE_D_OFFSET,					\
+-		[TRANSCODER_DSI_0] = PIPE_DSI0_OFFSET,				\
+-		[TRANSCODER_DSI_1] = PIPE_DSI1_OFFSET,				\
+-	},									\
+-	.display.trans_offsets = {						\
+-		[TRANSCODER_A] = TRANSCODER_A_OFFSET,				\
+-		[TRANSCODER_B] = TRANSCODER_B_OFFSET,				\
+-		[TRANSCODER_C] = TRANSCODER_C_OFFSET,				\
+-		[TRANSCODER_D] = TRANSCODER_D_OFFSET,				\
+-		[TRANSCODER_DSI_0] = TRANSCODER_DSI0_OFFSET,			\
+-		[TRANSCODER_DSI_1] = TRANSCODER_DSI1_OFFSET,			\
+-	},									\
+-	TGL_CURSOR_OFFSETS
+-
+ static const struct intel_device_info adl_p_info = {
+ 	GEN12_FEATURES,
+-	XE_LPD_FEATURES,
+ 	PLATFORM(INTEL_ALDERLAKE_P),
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) |
+-			       BIT(TRANSCODER_C) | BIT(TRANSCODER_D) |
+-			       BIT(TRANSCODER_DSI_0) | BIT(TRANSCODER_DSI_1),
+-	.display.has_cdclk_crawl = 1,
+-	.display.has_psr_hw_tracking = 0,
+ 	.__runtime.platform_engine_mask =
+ 		BIT(RCS0) | BIT(BCS0) | BIT(VECS0) | BIT(VCS0) | BIT(VCS2),
+ 	.__runtime.ppgtt_size = 48,
+@@ -1044,7 +683,6 @@ static const struct intel_device_info xehpsdv_info = {
+ 	XE_HPM_FEATURES,
+ 	DGFX_FEATURES,
+ 	PLATFORM(INTEL_XEHPSDV),
+-	NO_DISPLAY,
+ 	.has_64k_pages = 1,
+ 	.has_media_ratio_mode = 1,
+ 	.__runtime.platform_engine_mask =
+@@ -1067,7 +705,6 @@ static const struct intel_device_info xehpsdv_info = {
+ 	.has_guc_deprivilege = 1, \
+ 	.has_heci_pxp = 1, \
+ 	.has_media_ratio_mode = 1, \
+-	.display.has_cdclk_squash = 1, \
+ 	.__runtime.platform_engine_mask = \
+ 		BIT(RCS0) | BIT(BCS0) | \
+ 		BIT(VECS0) | BIT(VECS1) | \
+@@ -1076,14 +713,10 @@ static const struct intel_device_info xehpsdv_info = {
+ 
+ static const struct intel_device_info dg2_info = {
+ 	DG2_FEATURES,
+-	XE_LPD_FEATURES,
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) |
+-			       BIT(TRANSCODER_C) | BIT(TRANSCODER_D),
+ };
+ 
+ static const struct intel_device_info ats_m_info = {
+ 	DG2_FEATURES,
+-	NO_DISPLAY,
+ 	.require_force_probe = 1,
+ 	.tuning_thread_rr_after_dep = 1,
+ };
+@@ -1105,7 +738,6 @@ static const struct intel_device_info pvc_info = {
+ 	.__runtime.graphics.ip.rel = 60,
+ 	.__runtime.media.ip.rel = 60,
+ 	PLATFORM(INTEL_PONTEVECCHIO),
+-	NO_DISPLAY,
+ 	.has_flat_ccs = 0,
+ 	.__runtime.platform_engine_mask =
+ 		BIT(BCS0) |
+@@ -1114,13 +746,6 @@ static const struct intel_device_info pvc_info = {
+ 	.require_force_probe = 1,
+ };
+ 
+-#define XE_LPDP_FEATURES	\
+-	XE_LPD_FEATURES,	\
+-	.__runtime.display.ip.ver = 14,	\
+-	.display.has_cdclk_crawl = 1, \
+-	.display.has_cdclk_squash = 1, \
+-	.__runtime.fbc_mask = BIT(INTEL_FBC_A) | BIT(INTEL_FBC_B)
+-
+ static const struct intel_gt_definition xelpmp_extra_gt[] = {
+ 	{
+ 		.type = GT_MEDIA,
+@@ -1133,9 +758,6 @@ static const struct intel_gt_definition xelpmp_extra_gt[] = {
+ 
+ static const struct intel_device_info mtl_info = {
+ 	XE_HP_FEATURES,
+-	XE_LPDP_FEATURES,
+-	.__runtime.cpu_transcoder_mask = BIT(TRANSCODER_A) | BIT(TRANSCODER_B) |
+-			       BIT(TRANSCODER_C) | BIT(TRANSCODER_D),
+ 	/*
+ 	 * Real graphics IP version will be obtained from hardware GMD_ID
+ 	 * register.  Value provided here is just for sanity checking.
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index c4197e31962e1..d35c89f9da778 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -1961,15 +1961,6 @@
+ #define _TRANS_VSYNC_DSI1	0x6b814
+ #define _TRANS_VSYNCSHIFT_DSI1	0x6b828
+ 
+-#define TRANSCODER_A_OFFSET 0x60000
+-#define TRANSCODER_B_OFFSET 0x61000
+-#define TRANSCODER_C_OFFSET 0x62000
+-#define CHV_TRANSCODER_C_OFFSET 0x63000
+-#define TRANSCODER_D_OFFSET 0x63000
+-#define TRANSCODER_EDP_OFFSET 0x6f000
+-#define TRANSCODER_DSI0_OFFSET	0x6b000
+-#define TRANSCODER_DSI1_OFFSET	0x6b800
+-
+ #define TRANS_HTOTAL(trans)	_MMIO_TRANS2((trans), _TRANS_HTOTAL_A)
+ #define TRANS_HBLANK(trans)	_MMIO_TRANS2((trans), _TRANS_HBLANK_A)
+ #define TRANS_HSYNC(trans)	_MMIO_TRANS2((trans), _TRANS_HSYNC_A)
+@@ -2619,23 +2610,6 @@
+ #define PIPESTAT_INT_ENABLE_MASK		0x7fff0000
+ #define PIPESTAT_INT_STATUS_MASK		0x0000ffff
+ 
+-#define PIPE_A_OFFSET		0x70000
+-#define PIPE_B_OFFSET		0x71000
+-#define PIPE_C_OFFSET		0x72000
+-#define PIPE_D_OFFSET		0x73000
+-#define CHV_PIPE_C_OFFSET	0x74000
+-/*
+- * There's actually no pipe EDP. Some pipe registers have
+- * simply shifted from the pipe to the transcoder, while
+- * keeping their original offset. Thus we need PIPE_EDP_OFFSET
+- * to access such registers in transcoder EDP.
+- */
+-#define PIPE_EDP_OFFSET	0x7f000
+-
+-/* ICL DSI 0 and 1 */
+-#define PIPE_DSI0_OFFSET	0x7b000
+-#define PIPE_DSI1_OFFSET	0x7b800
+-
+ #define TRANSCONF(trans)	_MMIO_PIPE2((trans), _TRANSACONF)
+ #define PIPEDSL(pipe)		_MMIO_PIPE2(pipe, _PIPEADSL)
+ #define PIPEFRAME(pipe)		_MMIO_PIPE2(pipe, _PIPEAFRAMEHIGH)
+@@ -3091,13 +3065,6 @@
+ #define CUR_CHICKEN(pipe) _MMIO_CURSOR2(pipe, _CUR_CHICKEN_A)
+ #define CURSURFLIVE(pipe) _MMIO_CURSOR2(pipe, _CURASURFLIVE)
+ 
+-#define CURSOR_A_OFFSET 0x70080
+-#define CURSOR_B_OFFSET 0x700c0
+-#define CHV_CURSOR_C_OFFSET 0x700e0
+-#define IVB_CURSOR_B_OFFSET 0x71080
+-#define IVB_CURSOR_C_OFFSET 0x72080
+-#define TGL_CURSOR_D_OFFSET 0x73080
+-
+ /* Display A control */
+ #define _DSPAADDR_VLV				0x7017C /* vlv/chv */
+ #define _DSPACNTR				0x70180
+diff --git a/drivers/gpu/drm/i915/intel_device_info.c b/drivers/gpu/drm/i915/intel_device_info.c
+index fc5cd14adfccb..79523e55ca9c4 100644
+--- a/drivers/gpu/drm/i915/intel_device_info.c
++++ b/drivers/gpu/drm/i915/intel_device_info.c
+@@ -95,6 +95,9 @@ void intel_device_info_print(const struct intel_device_info *info,
+ 			     const struct intel_runtime_info *runtime,
+ 			     struct drm_printer *p)
+ {
++	const struct intel_display_runtime_info *display_runtime =
++		&info->display->__runtime_defaults;
++
+ 	if (runtime->graphics.ip.rel)
+ 		drm_printf(p, "graphics version: %u.%02u\n",
+ 			   runtime->graphics.ip.ver,
+@@ -111,13 +114,13 @@ void intel_device_info_print(const struct intel_device_info *info,
+ 		drm_printf(p, "media version: %u\n",
+ 			   runtime->media.ip.ver);
+ 
+-	if (runtime->display.ip.rel)
++	if (display_runtime->ip.rel)
+ 		drm_printf(p, "display version: %u.%02u\n",
+-			   runtime->display.ip.ver,
+-			   runtime->display.ip.rel);
++			   display_runtime->ip.ver,
++			   display_runtime->ip.rel);
+ 	else
+ 		drm_printf(p, "display version: %u\n",
+-			   runtime->display.ip.ver);
++			   display_runtime->ip.ver);
+ 
+ 	drm_printf(p, "graphics stepping: %s\n", intel_step_name(runtime->step.graphics_step));
+ 	drm_printf(p, "media stepping: %s\n", intel_step_name(runtime->step.media_step));
+@@ -138,13 +141,13 @@ void intel_device_info_print(const struct intel_device_info *info,
+ 
+ 	drm_printf(p, "has_pooled_eu: %s\n", str_yes_no(runtime->has_pooled_eu));
+ 
+-#define PRINT_FLAG(name) drm_printf(p, "%s: %s\n", #name, str_yes_no(info->display.name))
++#define PRINT_FLAG(name) drm_printf(p, "%s: %s\n", #name, str_yes_no(info->display->name))
+ 	DEV_INFO_DISPLAY_FOR_EACH_FLAG(PRINT_FLAG);
+ #undef PRINT_FLAG
+ 
+-	drm_printf(p, "has_hdcp: %s\n", str_yes_no(runtime->has_hdcp));
+-	drm_printf(p, "has_dmc: %s\n", str_yes_no(runtime->has_dmc));
+-	drm_printf(p, "has_dsc: %s\n", str_yes_no(runtime->has_dsc));
++	drm_printf(p, "has_hdcp: %s\n", str_yes_no(display_runtime->has_hdcp));
++	drm_printf(p, "has_dmc: %s\n", str_yes_no(display_runtime->has_dmc));
++	drm_printf(p, "has_dsc: %s\n", str_yes_no(display_runtime->has_dsc));
+ 
+ 	drm_printf(p, "rawclk rate: %u kHz\n", runtime->rawclk_freq);
+ }
+@@ -342,6 +345,7 @@ static void ip_ver_read(struct drm_i915_private *i915, u32 offset, struct intel_
+ static void intel_ipver_early_init(struct drm_i915_private *i915)
+ {
+ 	struct intel_runtime_info *runtime = RUNTIME_INFO(i915);
++	struct intel_display_runtime_info *display_runtime = DISPLAY_RUNTIME_INFO(i915);
+ 
+ 	if (!HAS_GMD_ID(i915)) {
+ 		drm_WARN_ON(&i915->drm, RUNTIME_INFO(i915)->graphics.ip.ver > 12);
+@@ -363,7 +367,7 @@ static void intel_ipver_early_init(struct drm_i915_private *i915)
+ 		RUNTIME_INFO(i915)->graphics.ip.rel = 70;
+ 	}
+ 	ip_ver_read(i915, i915_mmio_reg_offset(GMD_ID_DISPLAY),
+-		    &runtime->display.ip);
++		    (struct intel_ip_version *)&display_runtime->ip);
+ 	ip_ver_read(i915, i915_mmio_reg_offset(GMD_ID_MEDIA),
+ 		    &runtime->media.ip);
+ }
+@@ -381,6 +385,15 @@ void intel_device_info_runtime_init_early(struct drm_i915_private *i915)
+ 	intel_device_info_subplatform_init(i915);
+ }
+ 
++/* FIXME: Remove this, and make device info a const pointer to rodata. */
++static struct intel_device_info *
++mkwrite_device_info(struct drm_i915_private *i915)
++{
++	return (struct intel_device_info *)INTEL_INFO(i915);
++}
++
++static const struct intel_display_device_info no_display = {};
++
+ /**
+  * intel_device_info_runtime_init - initialize runtime info
+  * @dev_priv: the i915 device
+@@ -401,32 +414,34 @@ void intel_device_info_runtime_init(struct drm_i915_private *dev_priv)
+ {
+ 	struct intel_device_info *info = mkwrite_device_info(dev_priv);
+ 	struct intel_runtime_info *runtime = RUNTIME_INFO(dev_priv);
++	struct intel_display_runtime_info *display_runtime =
++		DISPLAY_RUNTIME_INFO(dev_priv);
+ 	enum pipe pipe;
+ 
+ 	/* Wa_14011765242: adl-s A0,A1 */
+ 	if (IS_ADLS_DISPLAY_STEP(dev_priv, STEP_A0, STEP_A2))
+ 		for_each_pipe(dev_priv, pipe)
+-			runtime->num_scalers[pipe] = 0;
++			display_runtime->num_scalers[pipe] = 0;
+ 	else if (DISPLAY_VER(dev_priv) >= 11) {
+ 		for_each_pipe(dev_priv, pipe)
+-			runtime->num_scalers[pipe] = 2;
++			display_runtime->num_scalers[pipe] = 2;
+ 	} else if (DISPLAY_VER(dev_priv) >= 9) {
+-		runtime->num_scalers[PIPE_A] = 2;
+-		runtime->num_scalers[PIPE_B] = 2;
+-		runtime->num_scalers[PIPE_C] = 1;
++		display_runtime->num_scalers[PIPE_A] = 2;
++		display_runtime->num_scalers[PIPE_B] = 2;
++		display_runtime->num_scalers[PIPE_C] = 1;
+ 	}
+ 
+ 	BUILD_BUG_ON(BITS_PER_TYPE(intel_engine_mask_t) < I915_NUM_ENGINES);
+ 
+ 	if (DISPLAY_VER(dev_priv) >= 13 || HAS_D12_PLANE_MINIMIZATION(dev_priv))
+ 		for_each_pipe(dev_priv, pipe)
+-			runtime->num_sprites[pipe] = 4;
++			display_runtime->num_sprites[pipe] = 4;
+ 	else if (DISPLAY_VER(dev_priv) >= 11)
+ 		for_each_pipe(dev_priv, pipe)
+-			runtime->num_sprites[pipe] = 6;
++			display_runtime->num_sprites[pipe] = 6;
+ 	else if (DISPLAY_VER(dev_priv) == 10)
+ 		for_each_pipe(dev_priv, pipe)
+-			runtime->num_sprites[pipe] = 3;
++			display_runtime->num_sprites[pipe] = 3;
+ 	else if (IS_BROXTON(dev_priv)) {
+ 		/*
+ 		 * Skylake and Broxton currently don't expose the topmost plane as its
+@@ -437,15 +452,15 @@ void intel_device_info_runtime_init(struct drm_i915_private *dev_priv)
+ 		 * down the line.
+ 		 */
+ 
+-		runtime->num_sprites[PIPE_A] = 2;
+-		runtime->num_sprites[PIPE_B] = 2;
+-		runtime->num_sprites[PIPE_C] = 1;
++		display_runtime->num_sprites[PIPE_A] = 2;
++		display_runtime->num_sprites[PIPE_B] = 2;
++		display_runtime->num_sprites[PIPE_C] = 1;
+ 	} else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) {
+ 		for_each_pipe(dev_priv, pipe)
+-			runtime->num_sprites[pipe] = 2;
++			display_runtime->num_sprites[pipe] = 2;
+ 	} else if (DISPLAY_VER(dev_priv) >= 5 || IS_G4X(dev_priv)) {
+ 		for_each_pipe(dev_priv, pipe)
+-			runtime->num_sprites[pipe] = 1;
++			display_runtime->num_sprites[pipe] = 1;
+ 	}
+ 
+ 	if (HAS_DISPLAY(dev_priv) &&
+@@ -453,7 +468,7 @@ void intel_device_info_runtime_init(struct drm_i915_private *dev_priv)
+ 	    !(intel_de_read(dev_priv, GU_CNTL_PROTECTED) & DEPRESENT)) {
+ 		drm_info(&dev_priv->drm, "Display not present, disabling\n");
+ 
+-		runtime->pipe_mask = 0;
++		display_runtime->pipe_mask = 0;
+ 	}
+ 
+ 	if (HAS_DISPLAY(dev_priv) && IS_GRAPHICS_VER(dev_priv, 7, 8) &&
+@@ -476,47 +491,47 @@ void intel_device_info_runtime_init(struct drm_i915_private *dev_priv)
+ 		     !(sfuse_strap & SFUSE_STRAP_FUSE_LOCK))) {
+ 			drm_info(&dev_priv->drm,
+ 				 "Display fused off, disabling\n");
+-			runtime->pipe_mask = 0;
++			display_runtime->pipe_mask = 0;
+ 		} else if (fuse_strap & IVB_PIPE_C_DISABLE) {
+ 			drm_info(&dev_priv->drm, "PipeC fused off\n");
+-			runtime->pipe_mask &= ~BIT(PIPE_C);
+-			runtime->cpu_transcoder_mask &= ~BIT(TRANSCODER_C);
++			display_runtime->pipe_mask &= ~BIT(PIPE_C);
++			display_runtime->cpu_transcoder_mask &= ~BIT(TRANSCODER_C);
+ 		}
+ 	} else if (HAS_DISPLAY(dev_priv) && DISPLAY_VER(dev_priv) >= 9) {
+ 		u32 dfsm = intel_de_read(dev_priv, SKL_DFSM);
+ 
+ 		if (dfsm & SKL_DFSM_PIPE_A_DISABLE) {
+-			runtime->pipe_mask &= ~BIT(PIPE_A);
+-			runtime->cpu_transcoder_mask &= ~BIT(TRANSCODER_A);
+-			runtime->fbc_mask &= ~BIT(INTEL_FBC_A);
++			display_runtime->pipe_mask &= ~BIT(PIPE_A);
++			display_runtime->cpu_transcoder_mask &= ~BIT(TRANSCODER_A);
++			display_runtime->fbc_mask &= ~BIT(INTEL_FBC_A);
+ 		}
+ 		if (dfsm & SKL_DFSM_PIPE_B_DISABLE) {
+-			runtime->pipe_mask &= ~BIT(PIPE_B);
+-			runtime->cpu_transcoder_mask &= ~BIT(TRANSCODER_B);
++			display_runtime->pipe_mask &= ~BIT(PIPE_B);
++			display_runtime->cpu_transcoder_mask &= ~BIT(TRANSCODER_B);
+ 		}
+ 		if (dfsm & SKL_DFSM_PIPE_C_DISABLE) {
+-			runtime->pipe_mask &= ~BIT(PIPE_C);
+-			runtime->cpu_transcoder_mask &= ~BIT(TRANSCODER_C);
++			display_runtime->pipe_mask &= ~BIT(PIPE_C);
++			display_runtime->cpu_transcoder_mask &= ~BIT(TRANSCODER_C);
+ 		}
+ 
+ 		if (DISPLAY_VER(dev_priv) >= 12 &&
+ 		    (dfsm & TGL_DFSM_PIPE_D_DISABLE)) {
+-			runtime->pipe_mask &= ~BIT(PIPE_D);
+-			runtime->cpu_transcoder_mask &= ~BIT(TRANSCODER_D);
++			display_runtime->pipe_mask &= ~BIT(PIPE_D);
++			display_runtime->cpu_transcoder_mask &= ~BIT(TRANSCODER_D);
+ 		}
+ 
+ 		if (dfsm & SKL_DFSM_DISPLAY_HDCP_DISABLE)
+-			runtime->has_hdcp = 0;
++			display_runtime->has_hdcp = 0;
+ 
+ 		if (dfsm & SKL_DFSM_DISPLAY_PM_DISABLE)
+-			runtime->fbc_mask = 0;
++			display_runtime->fbc_mask = 0;
+ 
+ 		if (DISPLAY_VER(dev_priv) >= 11 && (dfsm & ICL_DFSM_DMC_DISABLE))
+-			runtime->has_dmc = 0;
++			display_runtime->has_dmc = 0;
+ 
+ 		if (IS_DISPLAY_VER(dev_priv, 10, 12) &&
+ 		    (dfsm & GLK_DFSM_DISPLAY_DSC_DISABLE))
+-			runtime->has_dsc = 0;
++			display_runtime->has_dsc = 0;
+ 	}
+ 
+ 	if (GRAPHICS_VER(dev_priv) == 6 && i915_vtd_active(dev_priv)) {
+@@ -531,15 +546,15 @@ void intel_device_info_runtime_init(struct drm_i915_private *dev_priv)
+ 	if (!HAS_DISPLAY(dev_priv)) {
+ 		dev_priv->drm.driver_features &= ~(DRIVER_MODESET |
+ 						   DRIVER_ATOMIC);
+-		memset(&info->display, 0, sizeof(info->display));
+-
+-		runtime->cpu_transcoder_mask = 0;
+-		memset(runtime->num_sprites, 0, sizeof(runtime->num_sprites));
+-		memset(runtime->num_scalers, 0, sizeof(runtime->num_scalers));
+-		runtime->fbc_mask = 0;
+-		runtime->has_hdcp = false;
+-		runtime->has_dmc = false;
+-		runtime->has_dsc = false;
++		info->display = &no_display;
++
++		display_runtime->cpu_transcoder_mask = 0;
++		memset(display_runtime->num_sprites, 0, sizeof(display_runtime->num_sprites));
++		memset(display_runtime->num_scalers, 0, sizeof(display_runtime->num_scalers));
++		display_runtime->fbc_mask = 0;
++		display_runtime->has_hdcp = false;
++		display_runtime->has_dmc = false;
++		display_runtime->has_dsc = false;
+ 	}
+ 
+ 	/* Disable nuclear pageflip by default on pre-g4x */
+@@ -548,6 +563,35 @@ void intel_device_info_runtime_init(struct drm_i915_private *dev_priv)
+ 		dev_priv->drm.driver_features &= ~DRIVER_ATOMIC;
+ }
+ 
++/*
++ * Set up device info and initial runtime info at driver create.
++ *
++ * Note: i915 is only an allocated blob of memory at this point.
++ */
++void intel_device_info_driver_create(struct drm_i915_private *i915,
++				     u16 device_id,
++				     const struct intel_device_info *match_info)
++{
++	struct intel_device_info *info;
++	struct intel_runtime_info *runtime;
++
++	/* Setup the write-once "constant" device info */
++	info = mkwrite_device_info(i915);
++	memcpy(info, match_info, sizeof(*info));
++
++	/* Initialize initial runtime info from static const data and pdev. */
++	runtime = RUNTIME_INFO(i915);
++	memcpy(runtime, &INTEL_INFO(i915)->__runtime, sizeof(*runtime));
++
++	/* Probe display support */
++	info->display = intel_display_device_probe(device_id);
++	memcpy(DISPLAY_RUNTIME_INFO(i915),
++	       &DISPLAY_INFO(i915)->__runtime_defaults,
++	       sizeof(*DISPLAY_RUNTIME_INFO(i915)));
++
++	runtime->device_id = device_id;
++}
++
+ void intel_driver_caps_print(const struct intel_driver_caps *caps,
+ 			     struct drm_printer *p)
+ {
+diff --git a/drivers/gpu/drm/i915/intel_device_info.h b/drivers/gpu/drm/i915/intel_device_info.h
+index 080a4557899b6..faf6cccdb343d 100644
+--- a/drivers/gpu/drm/i915/intel_device_info.h
++++ b/drivers/gpu/drm/i915/intel_device_info.h
+@@ -29,7 +29,7 @@
+ 
+ #include "intel_step.h"
+ 
+-#include "display/intel_display_limits.h"
++#include "display/intel_display_device.h"
+ 
+ #include "gt/intel_engine_types.h"
+ #include "gt/intel_context_types.h"
+@@ -180,25 +180,6 @@ enum intel_ppgtt_type {
+ 	func(unfenced_needs_alignment); \
+ 	func(hws_needs_physical);
+ 
+-#define DEV_INFO_DISPLAY_FOR_EACH_FLAG(func) \
+-	/* Keep in alphabetical order */ \
+-	func(cursor_needs_physical); \
+-	func(has_cdclk_crawl); \
+-	func(has_cdclk_squash); \
+-	func(has_ddi); \
+-	func(has_dp_mst); \
+-	func(has_dsb); \
+-	func(has_fpga_dbg); \
+-	func(has_gmch); \
+-	func(has_hotplug); \
+-	func(has_hti); \
+-	func(has_ipc); \
+-	func(has_overlay); \
+-	func(has_psr); \
+-	func(has_psr_hw_tracking); \
+-	func(overlay_needs_physical); \
+-	func(supports_tv);
+-
+ struct intel_ip_version {
+ 	u8 ver;
+ 	u8 rel;
+@@ -216,9 +197,6 @@ struct intel_runtime_info {
+ 	struct {
+ 		struct intel_ip_version ip;
+ 	} media;
+-	struct {
+-		struct intel_ip_version ip;
+-	} display;
+ 
+ 	/*
+ 	 * Platform mask is used for optimizing or-ed IS_PLATFORM calls into
+@@ -246,21 +224,6 @@ struct intel_runtime_info {
+ 	u32 memory_regions; /* regions supported by the HW */
+ 
+ 	bool has_pooled_eu;
+-
+-	/* display */
+-	struct {
+-		u8 pipe_mask;
+-		u8 cpu_transcoder_mask;
+-
+-		u8 num_sprites[I915_MAX_PIPES];
+-		u8 num_scalers[I915_MAX_PIPES];
+-
+-		u8 fbc_mask;
+-
+-		bool has_hdcp;
+-		bool has_dmc;
+-		bool has_dsc;
+-	};
+ };
+ 
+ struct intel_device_info {
+@@ -276,33 +239,7 @@ struct intel_device_info {
+ 	DEV_INFO_FOR_EACH_FLAG(DEFINE_FLAG);
+ #undef DEFINE_FLAG
+ 
+-	struct {
+-		u8 abox_mask;
+-
+-		struct {
+-			u16 size; /* in blocks */
+-			u8 slice_mask;
+-		} dbuf;
+-
+-#define DEFINE_FLAG(name) u8 name:1
+-		DEV_INFO_DISPLAY_FOR_EACH_FLAG(DEFINE_FLAG);
+-#undef DEFINE_FLAG
+-
+-		/* Global register offset for the display engine */
+-		u32 mmio_offset;
+-
+-		/* Register offsets for the various display pipes and transcoders */
+-		u32 pipe_offsets[I915_MAX_TRANSCODERS];
+-		u32 trans_offsets[I915_MAX_TRANSCODERS];
+-		u32 cursor_offsets[I915_MAX_PIPES];
+-
+-		struct {
+-			u32 degamma_lut_size;
+-			u32 gamma_lut_size;
+-			u32 degamma_lut_tests;
+-			u32 gamma_lut_tests;
+-		} color;
+-	} display;
++	const struct intel_display_device_info *display;
+ 
+ 	/*
+ 	 * Initial runtime info. Do not access outside of i915_driver_create().
+@@ -317,6 +254,8 @@ struct intel_driver_caps {
+ 
+ const char *intel_platform_name(enum intel_platform platform);
+ 
++void intel_device_info_driver_create(struct drm_i915_private *i915, u16 device_id,
++				     const struct intel_device_info *match_info);
+ void intel_device_info_runtime_init_early(struct drm_i915_private *dev_priv);
+ void intel_device_info_runtime_init(struct drm_i915_private *dev_priv);
+ 
+diff --git a/drivers/gpu/drm/i915/intel_step.c b/drivers/gpu/drm/i915/intel_step.c
+index 84a6fe736a3b5..8a9ff6227e536 100644
+--- a/drivers/gpu/drm/i915/intel_step.c
++++ b/drivers/gpu/drm/i915/intel_step.c
+@@ -166,8 +166,12 @@ void intel_step_init(struct drm_i915_private *i915)
+ 						       &RUNTIME_INFO(i915)->graphics.ip);
+ 		step.media_step = gmd_to_intel_step(i915,
+ 						    &RUNTIME_INFO(i915)->media.ip);
+-		step.display_step = gmd_to_intel_step(i915,
+-						      &RUNTIME_INFO(i915)->display.ip);
++		step.display_step = STEP_A0 + DISPLAY_RUNTIME_INFO(i915)->ip.step;
++		if (step.display_step >= STEP_FUTURE) {
++			drm_dbg(&i915->drm, "Using future display steppings\n");
++			step.display_step = STEP_FUTURE;
++		}
++
+ 		RUNTIME_INFO(i915)->step = step;
+ 
+ 		return;
+diff --git a/drivers/gpu/drm/imx/lcdc/imx-lcdc.c b/drivers/gpu/drm/imx/lcdc/imx-lcdc.c
+index 8e6d457917daf..277ead6a459a4 100644
+--- a/drivers/gpu/drm/imx/lcdc/imx-lcdc.c
++++ b/drivers/gpu/drm/imx/lcdc/imx-lcdc.c
+@@ -400,8 +400,8 @@ static int imx_lcdc_probe(struct platform_device *pdev)
+ 
+ 	lcdc = devm_drm_dev_alloc(dev, &imx_lcdc_drm_driver,
+ 				  struct imx_lcdc, drm);
+-	if (!lcdc)
+-		return -ENOMEM;
++	if (IS_ERR(lcdc))
++		return PTR_ERR(lcdc);
+ 
+ 	drm = &lcdc->drm;
+ 
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index 1e8d2982d603c..a99310b687932 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -1743,6 +1743,7 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
+ {
+ 	struct msm_drm_private *priv = dev->dev_private;
+ 	struct platform_device *pdev = priv->gpu_pdev;
++	struct adreno_platform_config *config = pdev->dev.platform_data;
+ 	struct a5xx_gpu *a5xx_gpu = NULL;
+ 	struct adreno_gpu *adreno_gpu;
+ 	struct msm_gpu *gpu;
+@@ -1769,7 +1770,7 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
+ 
+ 	nr_rings = 4;
+ 
+-	if (adreno_is_a510(adreno_gpu))
++	if (adreno_cmp_rev(ADRENO_REV(5, 1, 0, ANY_ID), config->rev))
+ 		nr_rings = 1;
+ 
+ 	ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs, nr_rings);
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 52da3795b175d..411b7a5fa2f32 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1744,7 +1744,8 @@ a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
+ 	 * This allows GPU to set the bus attributes required to use system
+ 	 * cache on behalf of the iommu page table walker.
+ 	 */
+-	if (!IS_ERR_OR_NULL(a6xx_gpu->htw_llc_slice))
++	if (!IS_ERR_OR_NULL(a6xx_gpu->htw_llc_slice) &&
++	    !device_iommu_capable(&pdev->dev, IOMMU_CAP_CACHE_COHERENCY))
+ 		quirks |= IO_PGTABLE_QUIRK_ARM_OUTER_WBWA;
+ 
+ 	return adreno_iommu_create_address_space(gpu, pdev, quirks);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+index bdcd554fc8a80..ff9ccf72a4bf9 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+@@ -39,8 +39,8 @@ static const struct dpu_mdp_cfg msm8998_mdp[] = {
+ 	.clk_ctrls[DPU_CLK_CTRL_DMA1] = { .reg_off = 0x2b4, .bit_off = 8 },
+ 	.clk_ctrls[DPU_CLK_CTRL_DMA2] = { .reg_off = 0x2c4, .bit_off = 8 },
+ 	.clk_ctrls[DPU_CLK_CTRL_DMA3] = { .reg_off = 0x2c4, .bit_off = 12 },
+-	.clk_ctrls[DPU_CLK_CTRL_CURSOR0] = { .reg_off = 0x3a8, .bit_off = 15 },
+-	.clk_ctrls[DPU_CLK_CTRL_CURSOR1] = { .reg_off = 0x3b0, .bit_off = 15 },
++	.clk_ctrls[DPU_CLK_CTRL_CURSOR0] = { .reg_off = 0x3a8, .bit_off = 16 },
++	.clk_ctrls[DPU_CLK_CTRL_CURSOR1] = { .reg_off = 0x3b0, .bit_off = 16 },
+ 	},
+ };
+ 
+@@ -112,16 +112,16 @@ static const struct dpu_lm_cfg msm8998_lm[] = {
+ };
+ 
+ static const struct dpu_pingpong_cfg msm8998_pp[] = {
+-	PP_BLK_TE("pingpong_0", PINGPONG_0, 0x70000, 0, sdm845_pp_sblk_te,
++	PP_BLK("pingpong_0", PINGPONG_0, 0x70000, PINGPONG_SDM845_TE2_MASK, 0, sdm845_pp_sblk_te,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
+-	PP_BLK_TE("pingpong_1", PINGPONG_1, 0x70800, 0, sdm845_pp_sblk_te,
++	PP_BLK("pingpong_1", PINGPONG_1, 0x70800, PINGPONG_SDM845_TE2_MASK, 0, sdm845_pp_sblk_te,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13)),
+-	PP_BLK("pingpong_2", PINGPONG_2, 0x71000, 0, sdm845_pp_sblk,
++	PP_BLK("pingpong_2", PINGPONG_2, 0x71000, PINGPONG_SDM845_MASK, 0, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 14)),
+-	PP_BLK("pingpong_3", PINGPONG_3, 0x71800, 0, sdm845_pp_sblk,
++	PP_BLK("pingpong_3", PINGPONG_3, 0x71800, PINGPONG_SDM845_MASK, 0, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 15)),
+ };
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+index ceca741e93c9b..5b9b3b99f1b5f 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+@@ -110,16 +110,16 @@ static const struct dpu_lm_cfg sdm845_lm[] = {
+ };
+ 
+ static const struct dpu_pingpong_cfg sdm845_pp[] = {
+-	PP_BLK_TE("pingpong_0", PINGPONG_0, 0x70000, 0, sdm845_pp_sblk_te,
++	PP_BLK("pingpong_0", PINGPONG_0, 0x70000, PINGPONG_SDM845_TE2_MASK, 0, sdm845_pp_sblk_te,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
+-	PP_BLK_TE("pingpong_1", PINGPONG_1, 0x70800, 0, sdm845_pp_sblk_te,
++	PP_BLK("pingpong_1", PINGPONG_1, 0x70800, PINGPONG_SDM845_TE2_MASK, 0, sdm845_pp_sblk_te,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13)),
+-	PP_BLK("pingpong_2", PINGPONG_2, 0x71000, 0, sdm845_pp_sblk,
++	PP_BLK("pingpong_2", PINGPONG_2, 0x71000, PINGPONG_SDM845_MASK, 0, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 14)),
+-	PP_BLK("pingpong_3", PINGPONG_3, 0x71800, 0, sdm845_pp_sblk,
++	PP_BLK("pingpong_3", PINGPONG_3, 0x71800, PINGPONG_SDM845_MASK, 0, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 15)),
+ };
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+index 42b0e58624d00..074ba54d420f4 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+@@ -128,22 +128,22 @@ static const struct dpu_dspp_cfg sm8150_dspp[] = {
+ };
+ 
+ static const struct dpu_pingpong_cfg sm8150_pp[] = {
+-	PP_BLK("pingpong_0", PINGPONG_0, 0x70000, MERGE_3D_0, sdm845_pp_sblk,
++	PP_BLK("pingpong_0", PINGPONG_0, 0x70000, PINGPONG_SM8150_MASK, MERGE_3D_0, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
+-	PP_BLK("pingpong_1", PINGPONG_1, 0x70800, MERGE_3D_0, sdm845_pp_sblk,
++	PP_BLK("pingpong_1", PINGPONG_1, 0x70800, PINGPONG_SM8150_MASK, MERGE_3D_0, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13)),
+-	PP_BLK("pingpong_2", PINGPONG_2, 0x71000, MERGE_3D_1, sdm845_pp_sblk,
++	PP_BLK("pingpong_2", PINGPONG_2, 0x71000, PINGPONG_SM8150_MASK, MERGE_3D_1, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 14)),
+-	PP_BLK("pingpong_3", PINGPONG_3, 0x71800, MERGE_3D_1, sdm845_pp_sblk,
++	PP_BLK("pingpong_3", PINGPONG_3, 0x71800, PINGPONG_SM8150_MASK, MERGE_3D_1, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 15)),
+-	PP_BLK("pingpong_4", PINGPONG_4, 0x72000, MERGE_3D_2, sdm845_pp_sblk,
++	PP_BLK("pingpong_4", PINGPONG_4, 0x72000, PINGPONG_SM8150_MASK, MERGE_3D_2, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30),
+ 			-1),
+-	PP_BLK("pingpong_5", PINGPONG_5, 0x72800, MERGE_3D_2, sdm845_pp_sblk,
++	PP_BLK("pingpong_5", PINGPONG_5, 0x72800, PINGPONG_SM8150_MASK, MERGE_3D_2, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31),
+ 			-1),
+ };
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+index e3bdfe7b30f1f..0540d21810857 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+@@ -116,22 +116,22 @@ static const struct dpu_lm_cfg sc8180x_lm[] = {
+ };
+ 
+ static const struct dpu_pingpong_cfg sc8180x_pp[] = {
+-	PP_BLK("pingpong_0", PINGPONG_0, 0x70000, MERGE_3D_0, sdm845_pp_sblk,
++	PP_BLK("pingpong_0", PINGPONG_0, 0x70000, PINGPONG_SM8150_MASK, MERGE_3D_0, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
+-	PP_BLK("pingpong_1", PINGPONG_1, 0x70800, MERGE_3D_0, sdm845_pp_sblk,
++	PP_BLK("pingpong_1", PINGPONG_1, 0x70800, PINGPONG_SM8150_MASK, MERGE_3D_0, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13)),
+-	PP_BLK("pingpong_2", PINGPONG_2, 0x71000, MERGE_3D_1, sdm845_pp_sblk,
++	PP_BLK("pingpong_2", PINGPONG_2, 0x71000, PINGPONG_SM8150_MASK, MERGE_3D_1, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 14)),
+-	PP_BLK("pingpong_3", PINGPONG_3, 0x71800, MERGE_3D_1, sdm845_pp_sblk,
++	PP_BLK("pingpong_3", PINGPONG_3, 0x71800, PINGPONG_SM8150_MASK, MERGE_3D_1, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 15)),
+-	PP_BLK("pingpong_4", PINGPONG_4, 0x72000, MERGE_3D_2, sdm845_pp_sblk,
++	PP_BLK("pingpong_4", PINGPONG_4, 0x72000, PINGPONG_SM8150_MASK, MERGE_3D_2, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30),
+ 			-1),
+-	PP_BLK("pingpong_5", PINGPONG_5, 0x72800, MERGE_3D_2, sdm845_pp_sblk,
++	PP_BLK("pingpong_5", PINGPONG_5, 0x72800, PINGPONG_SM8150_MASK, MERGE_3D_2, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31),
+ 			-1),
+ };
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
+index ed130582873c7..b3284de35b8fa 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
+@@ -129,22 +129,22 @@ static const struct dpu_dspp_cfg sm8250_dspp[] = {
+ };
+ 
+ static const struct dpu_pingpong_cfg sm8250_pp[] = {
+-	PP_BLK("pingpong_0", PINGPONG_0, 0x70000, MERGE_3D_0, sdm845_pp_sblk,
++	PP_BLK("pingpong_0", PINGPONG_0, 0x70000, PINGPONG_SM8150_MASK, MERGE_3D_0, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
+-	PP_BLK("pingpong_1", PINGPONG_1, 0x70800, MERGE_3D_0, sdm845_pp_sblk,
++	PP_BLK("pingpong_1", PINGPONG_1, 0x70800, PINGPONG_SM8150_MASK, MERGE_3D_0, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13)),
+-	PP_BLK("pingpong_2", PINGPONG_2, 0x71000, MERGE_3D_1, sdm845_pp_sblk,
++	PP_BLK("pingpong_2", PINGPONG_2, 0x71000, PINGPONG_SM8150_MASK, MERGE_3D_1, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 14)),
+-	PP_BLK("pingpong_3", PINGPONG_3, 0x71800, MERGE_3D_1, sdm845_pp_sblk,
++	PP_BLK("pingpong_3", PINGPONG_3, 0x71800, PINGPONG_SM8150_MASK, MERGE_3D_1, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11),
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 15)),
+-	PP_BLK("pingpong_4", PINGPONG_4, 0x72000, MERGE_3D_2, sdm845_pp_sblk,
++	PP_BLK("pingpong_4", PINGPONG_4, 0x72000, PINGPONG_SM8150_MASK, MERGE_3D_2, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30),
+ 			-1),
+-	PP_BLK("pingpong_5", PINGPONG_5, 0x72800, MERGE_3D_2, sdm845_pp_sblk,
++	PP_BLK("pingpong_5", PINGPONG_5, 0x72800, PINGPONG_SM8150_MASK, MERGE_3D_2, sdm845_pp_sblk,
+ 			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31),
+ 			-1),
+ };
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
+index a46b11730a4d4..88c211876516a 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
+@@ -76,12 +76,16 @@ static const struct dpu_lm_cfg sc7180_lm[] = {
+ 
+ static const struct dpu_dspp_cfg sc7180_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		 &sc7180_dspp_sblk),
++		 &sm8150_dspp_sblk),
+ };
+ 
+ static const struct dpu_pingpong_cfg sc7180_pp[] = {
+-	PP_BLK("pingpong_0", PINGPONG_0, 0x70000, 0, sdm845_pp_sblk, -1, -1),
+-	PP_BLK("pingpong_1", PINGPONG_1, 0x70800, 0, sdm845_pp_sblk, -1, -1),
++	PP_BLK("pingpong_0", PINGPONG_0, 0x70000, PINGPONG_SM8150_MASK, 0, sdm845_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
++			-1),
++	PP_BLK("pingpong_1", PINGPONG_1, 0x70800, PINGPONG_SM8150_MASK, 0, sdm845_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
++			-1),
+ };
+ 
+ static const struct dpu_intf_cfg sc7180_intf[] = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h
+index 988d820f7ef2e..e15dc96f1286a 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h
+@@ -60,7 +60,7 @@ static const struct dpu_dspp_cfg sm6115_dspp[] = {
+ };
+ 
+ static const struct dpu_pingpong_cfg sm6115_pp[] = {
+-	PP_BLK("pingpong_0", PINGPONG_0, 0x70000, 0, sdm845_pp_sblk,
++	PP_BLK("pingpong_0", PINGPONG_0, 0x70000, PINGPONG_SM8150_MASK, 0, sdm845_pp_sblk,
+ 		DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+ 		DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
+ };
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h
+index c9003dcc1a59b..2ff98ef6999fe 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h
+@@ -57,7 +57,7 @@ static const struct dpu_dspp_cfg qcm2290_dspp[] = {
+ };
+ 
+ static const struct dpu_pingpong_cfg qcm2290_pp[] = {
+-	PP_BLK("pingpong_0", PINGPONG_0, 0x70000, 0, sdm845_pp_sblk,
++	PP_BLK("pingpong_0", PINGPONG_0, 0x70000, PINGPONG_SM8150_MASK, 0, sdm845_pp_sblk,
+ 		DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
+ 		DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12)),
+ };
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
+index 6b2c7eae71d99..7de87185d5c0c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
+@@ -83,14 +83,22 @@ static const struct dpu_lm_cfg sc7280_lm[] = {
+ 
+ static const struct dpu_dspp_cfg sc7280_dspp[] = {
+ 	DSPP_BLK("dspp_0", DSPP_0, 0x54000, DSPP_SC7180_MASK,
+-		 &sc7180_dspp_sblk),
++		 &sm8150_dspp_sblk),
+ };
+ 
+ static const struct dpu_pingpong_cfg sc7280_pp[] = {
+-	PP_BLK_DITHER("pingpong_0", PINGPONG_0, 0x69000, 0, sc7280_pp_sblk, -1, -1),
+-	PP_BLK_DITHER("pingpong_1", PINGPONG_1, 0x6a000, 0, sc7280_pp_sblk, -1, -1),
+-	PP_BLK_DITHER("pingpong_2", PINGPONG_2, 0x6b000, 0, sc7280_pp_sblk, -1, -1),
+-	PP_BLK_DITHER("pingpong_3", PINGPONG_3, 0x6c000, 0, sc7280_pp_sblk, -1, -1),
++	PP_BLK_DITHER("pingpong_0", PINGPONG_0, 0x69000, 0, sc7280_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8),
++			-1),
++	PP_BLK_DITHER("pingpong_1", PINGPONG_1, 0x6a000, 0, sc7280_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9),
++			-1),
++	PP_BLK_DITHER("pingpong_2", PINGPONG_2, 0x6b000, 0, sc7280_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10),
++			-1),
++	PP_BLK_DITHER("pingpong_3", PINGPONG_3, 0x6c000, 0, sc7280_pp_sblk,
++			DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11),
++			-1),
+ };
+ 
+ static const struct dpu_intf_cfg sc7280_intf[] = {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
+index 4ecb3df5cbc02..8bd4bb97e639c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
+@@ -107,9 +107,9 @@ static const struct dpu_lm_cfg sm8450_lm[] = {
+ 	LM_BLK("lm_1", LM_1, 0x45000, MIXER_SDM845_MASK,
+ 		&sdm845_lm_sblk, PINGPONG_1, LM_0, DSPP_1),
+ 	LM_BLK("lm_2", LM_2, 0x46000, MIXER_SDM845_MASK,
+-		&sdm845_lm_sblk, PINGPONG_2, LM_3, 0),
++		&sdm845_lm_sblk, PINGPONG_2, LM_3, DSPP_2),
+ 	LM_BLK("lm_3", LM_3, 0x47000, MIXER_SDM845_MASK,
+-		&sdm845_lm_sblk, PINGPONG_3, LM_2, 0),
++		&sdm845_lm_sblk, PINGPONG_3, LM_2, DSPP_3),
+ 	LM_BLK("lm_4", LM_4, 0x48000, MIXER_SDM845_MASK,
+ 		&sdm845_lm_sblk, PINGPONG_4, LM_5, 0),
+ 	LM_BLK("lm_5", LM_5, 0x49000, MIXER_SDM845_MASK,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index cc66ddffe6723..eee48371126d8 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -1463,6 +1463,8 @@ static const struct drm_crtc_helper_funcs dpu_crtc_helper_funcs = {
+ struct drm_crtc *dpu_crtc_init(struct drm_device *dev, struct drm_plane *plane,
+ 				struct drm_plane *cursor)
+ {
++	struct msm_drm_private *priv = dev->dev_private;
++	struct dpu_kms *dpu_kms = to_dpu_kms(priv->kms);
+ 	struct drm_crtc *crtc = NULL;
+ 	struct dpu_crtc *dpu_crtc = NULL;
+ 	int i, ret;
+@@ -1494,7 +1496,8 @@ struct drm_crtc *dpu_crtc_init(struct drm_device *dev, struct drm_plane *plane,
+ 
+ 	drm_crtc_helper_add(crtc, &dpu_crtc_helper_funcs);
+ 
+-	drm_crtc_enable_color_mgmt(crtc, 0, true, 0);
++	if (dpu_kms->catalog->dspp_count)
++		drm_crtc_enable_color_mgmt(crtc, 0, true, 0);
+ 
+ 	/* save user friendly CRTC name for later */
+ 	snprintf(dpu_crtc->name, DPU_CRTC_NAME_SIZE, "crtc%u", crtc->base.id);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
+index 74470d068622e..a60fb8d3736b5 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
+@@ -36,10 +36,6 @@
+ #define DEFAULT_TEARCHECK_SYNC_THRESH_START	4
+ #define DEFAULT_TEARCHECK_SYNC_THRESH_CONTINUE	4
+ 
+-#define DPU_ENC_WR_PTR_START_TIMEOUT_US 20000
+-
+-#define DPU_ENC_MAX_POLL_TIMEOUT_US	2000
+-
+ static void dpu_encoder_phys_cmd_enable_te(struct dpu_encoder_phys *phys_enc);
+ 
+ static bool dpu_encoder_phys_cmd_is_master(struct dpu_encoder_phys *phys_enc)
+@@ -574,28 +570,8 @@ static void dpu_encoder_phys_cmd_prepare_for_kickoff(
+ 			atomic_read(&phys_enc->pending_kickoff_cnt));
+ }
+ 
+-static bool dpu_encoder_phys_cmd_is_ongoing_pptx(
+-		struct dpu_encoder_phys *phys_enc)
+-{
+-	struct dpu_hw_pp_vsync_info info;
+-
+-	if (!phys_enc)
+-		return false;
+-
+-	phys_enc->hw_pp->ops.get_vsync_info(phys_enc->hw_pp, &info);
+-	if (info.wr_ptr_line_count > 0 &&
+-	    info.wr_ptr_line_count < phys_enc->cached_mode.vdisplay)
+-		return true;
+-
+-	return false;
+-}
+-
+ static void dpu_encoder_phys_cmd_enable_te(struct dpu_encoder_phys *phys_enc)
+ {
+-	struct dpu_encoder_phys_cmd *cmd_enc =
+-		to_dpu_encoder_phys_cmd(phys_enc);
+-	int trial = 0;
+-
+ 	if (!phys_enc)
+ 		return;
+ 	if (!phys_enc->hw_pp)
+@@ -603,37 +579,11 @@ static void dpu_encoder_phys_cmd_enable_te(struct dpu_encoder_phys *phys_enc)
+ 	if (!dpu_encoder_phys_cmd_is_master(phys_enc))
+ 		return;
+ 
+-	/* If autorefresh is already disabled, we have nothing to do */
+-	if (!phys_enc->hw_pp->ops.get_autorefresh(phys_enc->hw_pp, NULL))
+-		return;
+-
+-	/*
+-	 * If autorefresh is enabled, disable it and make sure it is safe to
+-	 * proceed with current frame commit/push. Sequence fallowed is,
+-	 * 1. Disable TE
+-	 * 2. Disable autorefresh config
+-	 * 4. Poll for frame transfer ongoing to be false
+-	 * 5. Enable TE back
+-	 */
+-	_dpu_encoder_phys_cmd_connect_te(phys_enc, false);
+-	phys_enc->hw_pp->ops.setup_autorefresh(phys_enc->hw_pp, 0, false);
+-
+-	do {
+-		udelay(DPU_ENC_MAX_POLL_TIMEOUT_US);
+-		if ((trial * DPU_ENC_MAX_POLL_TIMEOUT_US)
+-				> (KICKOFF_TIMEOUT_MS * USEC_PER_MSEC)) {
+-			DPU_ERROR_CMDENC(cmd_enc,
+-					"disable autorefresh failed\n");
+-			break;
+-		}
+-
+-		trial++;
+-	} while (dpu_encoder_phys_cmd_is_ongoing_pptx(phys_enc));
+-
+-	_dpu_encoder_phys_cmd_connect_te(phys_enc, true);
+-
+-	DPU_DEBUG_CMDENC(to_dpu_encoder_phys_cmd(phys_enc),
+-			 "disabled autorefresh\n");
++	if (phys_enc->hw_pp->ops.disable_autorefresh) {
++		phys_enc->hw_pp->ops.disable_autorefresh(phys_enc->hw_pp,
++							 DRMID(phys_enc->parent),
++							 phys_enc->cached_mode.vdisplay);
++	}
+ }
+ 
+ static int _dpu_encoder_phys_cmd_wait_for_ctl_start(
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+index 5d994bce696f9..0b604f31197bb 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+@@ -75,11 +75,15 @@
+ #define MIXER_QCM2290_MASK \
+ 	(BIT(DPU_DIM_LAYER) | BIT(DPU_MIXER_COMBINED_ALPHA))
+ 
+-#define PINGPONG_SDM845_MASK BIT(DPU_PINGPONG_DITHER)
++#define PINGPONG_SDM845_MASK \
++	(BIT(DPU_PINGPONG_DITHER) | BIT(DPU_PINGPONG_TE))
+ 
+-#define PINGPONG_SDM845_SPLIT_MASK \
++#define PINGPONG_SDM845_TE2_MASK \
+ 	(PINGPONG_SDM845_MASK | BIT(DPU_PINGPONG_TE2))
+ 
++#define PINGPONG_SM8150_MASK \
++	(BIT(DPU_PINGPONG_DITHER))
++
+ #define CTL_SC7280_MASK \
+ 	(BIT(DPU_CTL_ACTIVE_CFG) | \
+ 	 BIT(DPU_CTL_FETCH_ACTIVE) | \
+@@ -98,9 +102,12 @@
+ #define INTF_SDM845_MASK (0)
+ 
+ #define INTF_SC7180_MASK \
+-	(BIT(DPU_INTF_INPUT_CTRL) | BIT(DPU_INTF_TE) | BIT(DPU_INTF_STATUS_SUPPORTED))
++	(BIT(DPU_INTF_INPUT_CTRL) | \
++	 BIT(DPU_INTF_TE) | \
++	 BIT(DPU_INTF_STATUS_SUPPORTED) | \
++	 BIT(DPU_DATA_HCTL_EN))
+ 
+-#define INTF_SC7280_MASK INTF_SC7180_MASK | BIT(DPU_DATA_HCTL_EN)
++#define INTF_SC7280_MASK (INTF_SC7180_MASK)
+ 
+ #define WB_SM8250_MASK (BIT(DPU_WB_LINE_MODE) | \
+ 			 BIT(DPU_WB_UBWC) | \
+@@ -453,11 +460,6 @@ static const struct dpu_dspp_sub_blks msm8998_dspp_sblk = {
+ 		.len = 0x90, .version = 0x10007},
+ };
+ 
+-static const struct dpu_dspp_sub_blks sc7180_dspp_sblk = {
+-	.pcc = {.id = DPU_DSPP_PCC, .base = 0x1700,
+-		.len = 0x90, .version = 0x10000},
+-};
+-
+ static const struct dpu_dspp_sub_blks sm8150_dspp_sblk = {
+ 	.pcc = {.id = DPU_DSPP_PCC, .base = 0x1700,
+ 		.len = 0x90, .version = 0x40000},
+@@ -501,21 +503,11 @@ static const struct dpu_pingpong_sub_blks sc7280_pp_sblk = {
+ 	.intr_done = _done, \
+ 	.intr_rdptr = _rdptr, \
+ 	}
+-#define PP_BLK_TE(_name, _id, _base, _merge_3d, _sblk, _done, _rdptr) \
++#define PP_BLK(_name, _id, _base, _features, _merge_3d, _sblk, _done, _rdptr) \
+ 	{\
+ 	.name = _name, .id = _id, \
+ 	.base = _base, .len = 0xd4, \
+-	.features = PINGPONG_SDM845_SPLIT_MASK, \
+-	.merge_3d = _merge_3d, \
+-	.sblk = &_sblk, \
+-	.intr_done = _done, \
+-	.intr_rdptr = _rdptr, \
+-	}
+-#define PP_BLK(_name, _id, _base, _merge_3d, _sblk, _done, _rdptr) \
+-	{\
+-	.name = _name, .id = _id, \
+-	.base = _base, .len = 0xd4, \
+-	.features = PINGPONG_SDM845_MASK, \
++	.features = _features, \
+ 	.merge_3d = _merge_3d, \
+ 	.sblk = &_sblk, \
+ 	.intr_done = _done, \
+@@ -528,7 +520,7 @@ static const struct dpu_pingpong_sub_blks sc7280_pp_sblk = {
+ #define MERGE_3D_BLK(_name, _id, _base) \
+ 	{\
+ 	.name = _name, .id = _id, \
+-	.base = _base, .len = 0x100, \
++	.base = _base, .len = 0x8, \
+ 	.features = MERGE_3D_SM8150_MASK, \
+ 	.sblk = NULL \
+ 	}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
+index bbdc95ce374a7..f6270b7a0b140 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
+@@ -117,6 +117,9 @@ static inline void dpu_hw_ctl_clear_pending_flush(struct dpu_hw_ctl *ctx)
+ 	trace_dpu_hw_ctl_clear_pending_flush(ctx->pending_flush_mask,
+ 				     dpu_hw_ctl_get_flush_register(ctx));
+ 	ctx->pending_flush_mask = 0x0;
++	ctx->pending_intf_flush_mask = 0;
++	ctx->pending_wb_flush_mask = 0;
++	ctx->pending_merge_3d_flush_mask = 0;
+ 
+ 	memset(ctx->pending_dspp_flush_mask, 0,
+ 		sizeof(ctx->pending_dspp_flush_mask));
+@@ -542,7 +545,7 @@ static void dpu_hw_ctl_intf_cfg_v1(struct dpu_hw_ctl *ctx,
+ 		DPU_REG_WRITE(c, CTL_MERGE_3D_ACTIVE,
+ 			      BIT(cfg->merge_3d - MERGE_3D_0));
+ 	if (cfg->dsc) {
+-		DPU_REG_WRITE(&ctx->hw, CTL_FLUSH, DSC_IDX);
++		DPU_REG_WRITE(&ctx->hw, CTL_FLUSH, BIT(DSC_IDX));
+ 		DPU_REG_WRITE(c, CTL_DSC_ACTIVE, cfg->dsc);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
+index 4e1396575e6aa..c3c70ba61c1c4 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
+@@ -54,9 +54,10 @@ static void dpu_hw_dsc_config(struct dpu_hw_dsc *hw_dsc,
+ 	if (is_cmd_mode)
+ 		initial_lines += 1;
+ 
+-	slice_last_group_size = 3 - (dsc->slice_width % 3);
++	slice_last_group_size = (dsc->slice_width + 2) % 3;
++
+ 	data = (initial_lines << 20);
+-	data |= ((slice_last_group_size - 1) << 18);
++	data |= (slice_last_group_size << 18);
+ 	/* bpp is 6.4 format, 4 LSBs bits are for fractional part */
+ 	data |= (dsc->bits_per_pixel << 8);
+ 	data |= (dsc->block_pred_enable << 7);
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.c
+index 0fcad9760b6fc..4a20a5841f223 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.c
+@@ -144,23 +144,6 @@ static bool dpu_hw_pp_get_autorefresh_config(struct dpu_hw_pingpong *pp,
+ 	return !!((val & BIT(31)) >> 31);
+ }
+ 
+-static int dpu_hw_pp_poll_timeout_wr_ptr(struct dpu_hw_pingpong *pp,
+-		u32 timeout_us)
+-{
+-	struct dpu_hw_blk_reg_map *c;
+-	u32 val;
+-	int rc;
+-
+-	if (!pp)
+-		return -EINVAL;
+-
+-	c = &pp->hw;
+-	rc = readl_poll_timeout(c->blk_addr + PP_LINE_COUNT,
+-			val, (val & 0xffff) >= 1, 10, timeout_us);
+-
+-	return rc;
+-}
+-
+ static int dpu_hw_pp_enable_te(struct dpu_hw_pingpong *pp, bool enable)
+ {
+ 	struct dpu_hw_blk_reg_map *c;
+@@ -245,6 +228,49 @@ static u32 dpu_hw_pp_get_line_count(struct dpu_hw_pingpong *pp)
+ 	return line;
+ }
+ 
++static void dpu_hw_pp_disable_autorefresh(struct dpu_hw_pingpong *pp,
++					  uint32_t encoder_id, u16 vdisplay)
++{
++	struct dpu_hw_pp_vsync_info info;
++	int trial = 0;
++
++	/* If autorefresh is already disabled, we have nothing to do */
++	if (!dpu_hw_pp_get_autorefresh_config(pp, NULL))
++		return;
++
++	/*
++	 * If autorefresh is enabled, disable it and make sure it is safe to
++	 * proceed with current frame commit/push. Sequence followed is,
++	 * 1. Disable TE
++	 * 2. Disable autorefresh config
++	 * 3. Poll for frame transfer ongoing to be false
++	 * 4. Enable TE back
++	 */
++
++	dpu_hw_pp_connect_external_te(pp, false);
++	dpu_hw_pp_setup_autorefresh_config(pp, 0, false);
++
++	do {
++		udelay(DPU_ENC_MAX_POLL_TIMEOUT_US);
++		if ((trial * DPU_ENC_MAX_POLL_TIMEOUT_US)
++				> (KICKOFF_TIMEOUT_MS * USEC_PER_MSEC)) {
++			DPU_ERROR("enc%d pp%d disable autorefresh failed\n",
++				  encoder_id, pp->idx - PINGPONG_0);
++			break;
++		}
++
++		trial++;
++
++		dpu_hw_pp_get_vsync_info(pp, &info);
++	} while (info.wr_ptr_line_count > 0 &&
++		 info.wr_ptr_line_count < vdisplay);
++
++	dpu_hw_pp_connect_external_te(pp, true);
++
++	DPU_DEBUG("enc%d pp%d disabled autorefresh\n",
++		  encoder_id, pp->idx - PINGPONG_0);
++}
++
+ static int dpu_hw_pp_dsc_enable(struct dpu_hw_pingpong *pp)
+ {
+ 	struct dpu_hw_blk_reg_map *c = &pp->hw;
+@@ -274,14 +300,13 @@ static int dpu_hw_pp_setup_dsc(struct dpu_hw_pingpong *pp)
+ static void _setup_pingpong_ops(struct dpu_hw_pingpong *c,
+ 				unsigned long features)
+ {
+-	c->ops.setup_tearcheck = dpu_hw_pp_setup_te_config;
+-	c->ops.enable_tearcheck = dpu_hw_pp_enable_te;
+-	c->ops.connect_external_te = dpu_hw_pp_connect_external_te;
+-	c->ops.get_vsync_info = dpu_hw_pp_get_vsync_info;
+-	c->ops.setup_autorefresh = dpu_hw_pp_setup_autorefresh_config;
+-	c->ops.get_autorefresh = dpu_hw_pp_get_autorefresh_config;
+-	c->ops.poll_timeout_wr_ptr = dpu_hw_pp_poll_timeout_wr_ptr;
+-	c->ops.get_line_count = dpu_hw_pp_get_line_count;
++	if (test_bit(DPU_PINGPONG_TE, &features)) {
++		c->ops.setup_tearcheck = dpu_hw_pp_setup_te_config;
++		c->ops.enable_tearcheck = dpu_hw_pp_enable_te;
++		c->ops.connect_external_te = dpu_hw_pp_connect_external_te;
++		c->ops.get_line_count = dpu_hw_pp_get_line_count;
++		c->ops.disable_autorefresh = dpu_hw_pp_disable_autorefresh;
++	}
+ 	c->ops.setup_dsc = dpu_hw_pp_setup_dsc;
+ 	c->ops.enable_dsc = dpu_hw_pp_dsc_enable;
+ 	c->ops.disable_dsc = dpu_hw_pp_dsc_disable;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.h
+index c00223441d990..851b013c4c4b6 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_pingpong.h
+@@ -61,9 +61,6 @@ struct dpu_hw_dither_cfg {
+  *  Assumption is these functions will be called after clocks are enabled
+  *  @setup_tearcheck : program tear check values
+  *  @enable_tearcheck : enables tear check
+- *  @get_vsync_info : retries timing info of the panel
+- *  @setup_autorefresh : configure and enable the autorefresh config
+- *  @get_autorefresh : retrieve autorefresh config from hardware
+  *  @setup_dither : function to program the dither hw block
+  *  @get_line_count: obtain current vertical line counter
+  */
+@@ -89,34 +86,14 @@ struct dpu_hw_pingpong_ops {
+ 			bool enable_external_te);
+ 
+ 	/**
+-	 * provides the programmed and current
+-	 * line_count
+-	 */
+-	int (*get_vsync_info)(struct dpu_hw_pingpong *pp,
+-			struct dpu_hw_pp_vsync_info  *info);
+-
+-	/**
+-	 * configure and enable the autorefresh config
+-	 */
+-	void (*setup_autorefresh)(struct dpu_hw_pingpong *pp,
+-				  u32 frame_count, bool enable);
+-
+-	/**
+-	 * retrieve autorefresh config from hardware
+-	 */
+-	bool (*get_autorefresh)(struct dpu_hw_pingpong *pp,
+-				u32 *frame_count);
+-
+-	/**
+-	 * poll until write pointer transmission starts
+-	 * @Return: 0 on success, -ETIMEDOUT on timeout
++	 * Obtain current vertical line counter
+ 	 */
+-	int (*poll_timeout_wr_ptr)(struct dpu_hw_pingpong *pp, u32 timeout_us);
++	u32 (*get_line_count)(struct dpu_hw_pingpong *pp);
+ 
+ 	/**
+-	 * Obtain current vertical line counter
++	 * Disable autorefresh if enabled
+ 	 */
+-	u32 (*get_line_count)(struct dpu_hw_pingpong *pp);
++	void (*disable_autorefresh)(struct dpu_hw_pingpong *pp, uint32_t encoder_id, u16 vdisplay);
+ 
+ 	/**
+ 	 * Setup dither matrix for pingpong block
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+index aca39a4689f48..e7fc67381c2bd 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
+@@ -118,6 +118,10 @@ struct vsync_info {
+ 	u32 line_count;
+ };
+ 
++#define DPU_ENC_WR_PTR_START_TIMEOUT_US 20000
++
++#define DPU_ENC_MAX_POLL_TIMEOUT_US	2000
++
+ #define to_dpu_kms(x) container_of(x, struct dpu_kms, base)
+ 
+ #define to_dpu_global_state(x) container_of(x, struct dpu_global_state, base)
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 03b0eda6df54a..cffb3f41f6023 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -329,6 +329,8 @@ static void dp_display_unbind(struct device *dev, struct device *master,
+ 
+ 	kthread_stop(dp->ev_tsk);
+ 
++	of_dp_aux_depopulate_bus(dp->aux);
++
+ 	dp_power_client_deinit(dp->power);
+ 	dp_unregister_audio_driver(dev, dp->audio);
+ 	dp_aux_unregister(dp->aux);
+@@ -1328,9 +1330,9 @@ static int dp_display_remove(struct platform_device *pdev)
+ {
+ 	struct dp_display_private *dp = dev_get_dp_display_private(&pdev->dev);
+ 
++	component_del(&pdev->dev, &dp_display_comp_ops);
+ 	dp_display_deinit_sub_modules(dp);
+ 
+-	component_del(&pdev->dev, &dp_display_comp_ops);
+ 	platform_set_drvdata(pdev, NULL);
+ 
+ 	return 0;
+@@ -1509,11 +1511,6 @@ void msm_dp_debugfs_init(struct msm_dp *dp_display, struct drm_minor *minor)
+ 	}
+ }
+ 
+-static void of_dp_aux_depopulate_bus_void(void *data)
+-{
+-	of_dp_aux_depopulate_bus(data);
+-}
+-
+ static int dp_display_get_next_bridge(struct msm_dp *dp)
+ {
+ 	int rc;
+@@ -1541,12 +1538,6 @@ static int dp_display_get_next_bridge(struct msm_dp *dp)
+ 		of_node_put(aux_bus);
+ 		if (rc)
+ 			goto error;
+-
+-		rc = devm_add_action_or_reset(dp->drm_dev->dev,
+-						of_dp_aux_depopulate_bus_void,
+-						dp_priv->aux);
+-		if (rc)
+-			goto error;
+ 	} else if (dp->is_edp) {
+ 		DRM_ERROR("eDP aux_bus not found\n");
+ 		return -ENODEV;
+@@ -1570,6 +1561,7 @@ static int dp_display_get_next_bridge(struct msm_dp *dp)
+ 
+ error:
+ 	if (dp->is_edp) {
++		of_dp_aux_depopulate_bus(dp_priv->aux);
+ 		dp_display_host_phy_exit(dp_priv);
+ 		dp_display_host_deinit(dp_priv);
+ 	}
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index 961689a255c47..735a7f6386df8 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -850,18 +850,17 @@ static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mod
+ 	 */
+ 	slice_per_intf = DIV_ROUND_UP(hdisplay, dsc->slice_width);
+ 
+-	/*
+-	 * If slice_count is greater than slice_per_intf
+-	 * then default to 1. This can happen during partial
+-	 * update.
+-	 */
+-	if (dsc->slice_count > slice_per_intf)
+-		dsc->slice_count = 1;
+-
+ 	total_bytes_per_intf = dsc->slice_chunk_size * slice_per_intf;
+ 
+ 	eol_byte_num = total_bytes_per_intf % 3;
+-	pkt_per_line = slice_per_intf / dsc->slice_count;
++
++	/*
++	 * Typically, pkt_per_line = slice_per_intf / slice_per_pkt.
++	 *
++	 * Since the current driver only supports slice_per_pkt = 1,
++	 * pkt_per_line will be equal to slice_per_intf for now.
++	 */
++	pkt_per_line = slice_per_intf;
+ 
+ 	if (is_cmd_mode) /* packet data type */
+ 		reg = DSI_COMMAND_COMPRESSION_MODE_CTRL_STREAM0_DATATYPE(MIPI_DSI_DCS_LONG_WRITE);
+@@ -985,7 +984,14 @@ static void dsi_timing_setup(struct msm_dsi_host *msm_host, bool is_bonded_dsi)
+ 		if (!msm_host->dsc)
+ 			wc = hdisplay * dsi_get_bpp(msm_host->format) / 8 + 1;
+ 		else
+-			wc = msm_host->dsc->slice_chunk_size * msm_host->dsc->slice_count + 1;
++			/*
++			 * When DSC is enabled, WC = slice_chunk_size * slice_per_pkt + 1.
++			 * Currently, the driver only supports the default value of slice_per_pkt = 1.
++			 *
++			 * TODO: Expand mipi_dsi_device struct to hold slice_per_pkt info
++			 *       and adjust DSC math to account for slice_per_pkt.
++			 */
++			wc = msm_host->dsc->slice_chunk_size + 1;
+ 
+ 		dsi_write(msm_host, REG_DSI_CMD_MDP_STREAM0_CTRL,
+ 			DSI_CMD_MDP_STREAM0_CTRL_WORD_COUNT(wc) |
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+index 9f488adea7f54..3ce45b023e637 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+@@ -539,6 +539,9 @@ static int dsi_pll_14nm_vco_prepare(struct clk_hw *hw)
+ 	if (unlikely(pll_14nm->phy->pll_on))
+ 		return 0;
+ 
++	if (dsi_pll_14nm_vco_recalc_rate(hw, VCO_REF_CLK_RATE) == 0)
++		dsi_pll_14nm_vco_set_rate(hw, pll_14nm->phy->cfg->min_pll_rate, VCO_REF_CLK_RATE);
++
+ 	dsi_phy_write(base + REG_DSI_14nm_PHY_PLL_VREF_CFG1, 0x10);
+ 	dsi_phy_write(cmn_base + REG_DSI_14nm_PHY_CMN_PLL_CNTRL, 1);
+ 
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+index 5bb777ff13130..9b6824f6b9e4b 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -64,6 +64,7 @@
+ #include "nouveau_connector.h"
+ #include "nouveau_encoder.h"
+ #include "nouveau_fence.h"
++#include "nv50_display.h"
+ 
+ #include <subdev/bios/dp.h>
+ 
+diff --git a/drivers/gpu/drm/nouveau/nv50_display.h b/drivers/gpu/drm/nouveau/nv50_display.h
+index fbd3b15583bc8..60f77766766e9 100644
+--- a/drivers/gpu/drm/nouveau/nv50_display.h
++++ b/drivers/gpu/drm/nouveau/nv50_display.h
+@@ -31,7 +31,5 @@
+ #include "nouveau_reg.h"
+ 
+ int  nv50_display_create(struct drm_device *);
+-void nv50_display_destroy(struct drm_device *);
+-int  nv50_display_init(struct drm_device *);
+-void nv50_display_fini(struct drm_device *);
++
+ #endif /* __NV50_DISPLAY_H__ */
+diff --git a/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c b/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
+index d1ec80a3e3c72..ef148504cf24a 100644
+--- a/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
++++ b/drivers/gpu/drm/panel/panel-sharp-ls043t1le01.c
+@@ -192,15 +192,15 @@ static int sharp_nt_panel_enable(struct drm_panel *panel)
+ }
+ 
+ static const struct drm_display_mode default_mode = {
+-	.clock = 41118,
++	.clock = (540 + 48 + 32 + 80) * (960 + 3 + 10 + 15) * 60 / 1000,
+ 	.hdisplay = 540,
+ 	.hsync_start = 540 + 48,
+-	.hsync_end = 540 + 48 + 80,
+-	.htotal = 540 + 48 + 80 + 32,
++	.hsync_end = 540 + 48 + 32,
++	.htotal = 540 + 48 + 32 + 80,
+ 	.vdisplay = 960,
+ 	.vsync_start = 960 + 3,
+-	.vsync_end = 960 + 3 + 15,
+-	.vtotal = 960 + 3 + 15 + 1,
++	.vsync_end = 960 + 3 + 10,
++	.vtotal = 960 + 3 + 10 + 15,
+ };
+ 
+ static int sharp_nt_panel_get_modes(struct drm_panel *panel,
+@@ -280,6 +280,7 @@ static int sharp_nt_panel_probe(struct mipi_dsi_device *dsi)
+ 	dsi->lanes = 2;
+ 	dsi->format = MIPI_DSI_FMT_RGB888;
+ 	dsi->mode_flags = MIPI_DSI_MODE_VIDEO |
++			MIPI_DSI_MODE_VIDEO_SYNC_PULSE |
+ 			MIPI_DSI_MODE_VIDEO_HSE |
+ 			MIPI_DSI_CLOCK_NON_CONTINUOUS |
+ 			MIPI_DSI_MODE_NO_EOT_PACKET;
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 065f378bba9d2..d8efbcee9bc12 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -759,8 +759,8 @@ static const struct panel_desc ampire_am_480272h3tmqw_t01h = {
+ 	.num_modes = 1,
+ 	.bpc = 8,
+ 	.size = {
+-		.width = 105,
+-		.height = 67,
++		.width = 99,
++		.height = 58,
+ 	},
+ 	.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
+ };
+diff --git a/drivers/gpu/drm/radeon/ci_dpm.c b/drivers/gpu/drm/radeon/ci_dpm.c
+index 8ef25ab305ae7..b8f4dac68d850 100644
+--- a/drivers/gpu/drm/radeon/ci_dpm.c
++++ b/drivers/gpu/drm/radeon/ci_dpm.c
+@@ -5517,6 +5517,7 @@ static int ci_parse_power_table(struct radeon_device *rdev)
+ 	u8 frev, crev;
+ 	u8 *power_state_offset;
+ 	struct ci_ps *ps;
++	int ret;
+ 
+ 	if (!atom_parse_data_header(mode_info->atom_context, index, NULL,
+ 				   &frev, &crev, &data_offset))
+@@ -5546,11 +5547,15 @@ static int ci_parse_power_table(struct radeon_device *rdev)
+ 		non_clock_array_index = power_state->v2.nonClockInfoIndex;
+ 		non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *)
+ 			&non_clock_info_array->nonClockInfo[non_clock_array_index];
+-		if (!rdev->pm.power_state[i].clock_info)
+-			return -EINVAL;
++		if (!rdev->pm.power_state[i].clock_info) {
++			ret = -EINVAL;
++			goto err_free_ps;
++		}
+ 		ps = kzalloc(sizeof(struct ci_ps), GFP_KERNEL);
+-		if (ps == NULL)
+-			return -ENOMEM;
++		if (ps == NULL) {
++			ret = -ENOMEM;
++			goto err_free_ps;
++		}
+ 		rdev->pm.dpm.ps[i].ps_priv = ps;
+ 		ci_parse_pplib_non_clock_info(rdev, &rdev->pm.dpm.ps[i],
+ 					      non_clock_info,
+@@ -5590,6 +5595,12 @@ static int ci_parse_power_table(struct radeon_device *rdev)
+ 	}
+ 
+ 	return 0;
++
++err_free_ps:
++	for (i = 0; i < rdev->pm.dpm.num_ps; i++)
++		kfree(rdev->pm.dpm.ps[i].ps_priv);
++	kfree(rdev->pm.dpm.ps);
++	return ret;
+ }
+ 
+ static int ci_get_vbios_boot_values(struct radeon_device *rdev,
+@@ -5678,25 +5689,26 @@ int ci_dpm_init(struct radeon_device *rdev)
+ 
+ 	ret = ci_get_vbios_boot_values(rdev, &pi->vbios_boot_state);
+ 	if (ret) {
+-		ci_dpm_fini(rdev);
++		kfree(rdev->pm.dpm.priv);
+ 		return ret;
+ 	}
+ 
+ 	ret = r600_get_platform_caps(rdev);
+ 	if (ret) {
+-		ci_dpm_fini(rdev);
++		kfree(rdev->pm.dpm.priv);
+ 		return ret;
+ 	}
+ 
+ 	ret = r600_parse_extended_power_table(rdev);
+ 	if (ret) {
+-		ci_dpm_fini(rdev);
++		kfree(rdev->pm.dpm.priv);
+ 		return ret;
+ 	}
+ 
+ 	ret = ci_parse_power_table(rdev);
+ 	if (ret) {
+-		ci_dpm_fini(rdev);
++		kfree(rdev->pm.dpm.priv);
++		r600_free_extended_power_table(rdev);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/radeon/cypress_dpm.c b/drivers/gpu/drm/radeon/cypress_dpm.c
+index fdddbbaecbb74..72a0768df00f7 100644
+--- a/drivers/gpu/drm/radeon/cypress_dpm.c
++++ b/drivers/gpu/drm/radeon/cypress_dpm.c
+@@ -557,8 +557,12 @@ static int cypress_populate_mclk_value(struct radeon_device *rdev,
+ 						     ASIC_INTERNAL_MEMORY_SS, vco_freq)) {
+ 			u32 reference_clock = rdev->clock.mpll.reference_freq;
+ 			u32 decoded_ref = rv740_get_decoded_reference_divider(dividers.ref_div);
+-			u32 clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
+-			u32 clk_v = ss.percentage *
++			u32 clk_s, clk_v;
++
++			if (!decoded_ref)
++				return -EINVAL;
++			clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
++			clk_v = ss.percentage *
+ 				(0x4000 * dividers.whole_fb_div + 0x800 * dividers.frac_fb_div) / (clk_s * 625);
+ 
+ 			mpll_ss1 &= ~CLKV_MASK;
+diff --git a/drivers/gpu/drm/radeon/ni_dpm.c b/drivers/gpu/drm/radeon/ni_dpm.c
+index 672d2239293e0..3e1c1a392fb7b 100644
+--- a/drivers/gpu/drm/radeon/ni_dpm.c
++++ b/drivers/gpu/drm/radeon/ni_dpm.c
+@@ -2241,8 +2241,12 @@ static int ni_populate_mclk_value(struct radeon_device *rdev,
+ 						     ASIC_INTERNAL_MEMORY_SS, vco_freq)) {
+ 			u32 reference_clock = rdev->clock.mpll.reference_freq;
+ 			u32 decoded_ref = rv740_get_decoded_reference_divider(dividers.ref_div);
+-			u32 clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
+-			u32 clk_v = ss.percentage *
++			u32 clk_s, clk_v;
++
++			if (!decoded_ref)
++				return -EINVAL;
++			clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
++			clk_v = ss.percentage *
+ 				(0x4000 * dividers.whole_fb_div + 0x800 * dividers.frac_fb_div) / (clk_s * 625);
+ 
+ 			mpll_ss1 &= ~CLKV_MASK;
+diff --git a/drivers/gpu/drm/radeon/rv740_dpm.c b/drivers/gpu/drm/radeon/rv740_dpm.c
+index d57a3e1df8d63..4464fd21a3029 100644
+--- a/drivers/gpu/drm/radeon/rv740_dpm.c
++++ b/drivers/gpu/drm/radeon/rv740_dpm.c
+@@ -249,8 +249,12 @@ int rv740_populate_mclk_value(struct radeon_device *rdev,
+ 						     ASIC_INTERNAL_MEMORY_SS, vco_freq)) {
+ 			u32 reference_clock = rdev->clock.mpll.reference_freq;
+ 			u32 decoded_ref = rv740_get_decoded_reference_divider(dividers.ref_div);
+-			u32 clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
+-			u32 clk_v = 0x40000 * ss.percentage *
++			u32 clk_s, clk_v;
++
++			if (!decoded_ref)
++				return -EINVAL;
++			clk_s = reference_clock * 5 / (decoded_ref * ss.rate);
++			clk_v = 0x40000 * ss.percentage *
+ 				(dividers.whole_fb_div + (dividers.frac_fb_div / 8)) / (clk_s * 10000);
+ 
+ 			mpll_ss1 &= ~CLKV_MASK;
+diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+index 523a6d7879210..936796851ffd3 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c
++++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c
+@@ -778,21 +778,19 @@ static irqreturn_t sun4i_tcon_handler(int irq, void *private)
+ static int sun4i_tcon_init_clocks(struct device *dev,
+ 				  struct sun4i_tcon *tcon)
+ {
+-	tcon->clk = devm_clk_get(dev, "ahb");
++	tcon->clk = devm_clk_get_enabled(dev, "ahb");
+ 	if (IS_ERR(tcon->clk)) {
+ 		dev_err(dev, "Couldn't get the TCON bus clock\n");
+ 		return PTR_ERR(tcon->clk);
+ 	}
+-	clk_prepare_enable(tcon->clk);
+ 
+ 	if (tcon->quirks->has_channel_0) {
+-		tcon->sclk0 = devm_clk_get(dev, "tcon-ch0");
++		tcon->sclk0 = devm_clk_get_enabled(dev, "tcon-ch0");
+ 		if (IS_ERR(tcon->sclk0)) {
+ 			dev_err(dev, "Couldn't get the TCON channel 0 clock\n");
+ 			return PTR_ERR(tcon->sclk0);
+ 		}
+ 	}
+-	clk_prepare_enable(tcon->sclk0);
+ 
+ 	if (tcon->quirks->has_channel_1) {
+ 		tcon->sclk1 = devm_clk_get(dev, "tcon-ch1");
+@@ -805,12 +803,6 @@ static int sun4i_tcon_init_clocks(struct device *dev,
+ 	return 0;
+ }
+ 
+-static void sun4i_tcon_free_clocks(struct sun4i_tcon *tcon)
+-{
+-	clk_disable_unprepare(tcon->sclk0);
+-	clk_disable_unprepare(tcon->clk);
+-}
+-
+ static int sun4i_tcon_init_irq(struct device *dev,
+ 			       struct sun4i_tcon *tcon)
+ {
+@@ -1223,14 +1215,14 @@ static int sun4i_tcon_bind(struct device *dev, struct device *master,
+ 	ret = sun4i_tcon_init_regmap(dev, tcon);
+ 	if (ret) {
+ 		dev_err(dev, "Couldn't init our TCON regmap\n");
+-		goto err_free_clocks;
++		goto err_assert_reset;
+ 	}
+ 
+ 	if (tcon->quirks->has_channel_0) {
+ 		ret = sun4i_dclk_create(dev, tcon);
+ 		if (ret) {
+ 			dev_err(dev, "Couldn't create our TCON dot clock\n");
+-			goto err_free_clocks;
++			goto err_assert_reset;
+ 		}
+ 	}
+ 
+@@ -1293,8 +1285,6 @@ static int sun4i_tcon_bind(struct device *dev, struct device *master,
+ err_free_dotclock:
+ 	if (tcon->quirks->has_channel_0)
+ 		sun4i_dclk_free(tcon);
+-err_free_clocks:
+-	sun4i_tcon_free_clocks(tcon);
+ err_assert_reset:
+ 	reset_control_assert(tcon->lcd_rst);
+ 	return ret;
+@@ -1308,7 +1298,6 @@ static void sun4i_tcon_unbind(struct device *dev, struct device *master,
+ 	list_del(&tcon->list);
+ 	if (tcon->quirks->has_channel_0)
+ 		sun4i_dclk_free(tcon);
+-	sun4i_tcon_free_clocks(tcon);
+ }
+ 
+ static const struct component_ops sun4i_tcon_ops = {
+diff --git a/drivers/gpu/drm/vkms/vkms_composer.c b/drivers/gpu/drm/vkms/vkms_composer.c
+index 8e53fa80742b2..80164e79af006 100644
+--- a/drivers/gpu/drm/vkms/vkms_composer.c
++++ b/drivers/gpu/drm/vkms/vkms_composer.c
+@@ -99,7 +99,7 @@ static void blend(struct vkms_writeback_job *wb,
+ 			if (!check_y_limit(plane[i]->frame_info, y))
+ 				continue;
+ 
+-			plane[i]->plane_read(stage_buffer, plane[i]->frame_info, y);
++			vkms_compose_row(stage_buffer, plane[i], y);
+ 			pre_mul_alpha_blend(plane[i]->frame_info, stage_buffer,
+ 					    output_buffer);
+ 		}
+@@ -118,7 +118,7 @@ static int check_format_funcs(struct vkms_crtc_state *crtc_state,
+ 	u32 n_active_planes = crtc_state->num_active_planes;
+ 
+ 	for (size_t i = 0; i < n_active_planes; i++)
+-		if (!planes[i]->plane_read)
++		if (!planes[i]->pixel_read)
+ 			return -1;
+ 
+ 	if (active_wb && !active_wb->wb_write)
+diff --git a/drivers/gpu/drm/vkms/vkms_drv.h b/drivers/gpu/drm/vkms/vkms_drv.h
+index 4a248567efb26..f152d54baf769 100644
+--- a/drivers/gpu/drm/vkms/vkms_drv.h
++++ b/drivers/gpu/drm/vkms/vkms_drv.h
+@@ -56,8 +56,7 @@ struct vkms_writeback_job {
+ struct vkms_plane_state {
+ 	struct drm_shadow_plane_state base;
+ 	struct vkms_frame_info *frame_info;
+-	void (*plane_read)(struct line_buffer *buffer,
+-			   const struct vkms_frame_info *frame_info, int y);
++	void (*pixel_read)(u8 *src_buffer, struct pixel_argb_u16 *out_pixel);
+ };
+ 
+ struct vkms_plane {
+@@ -155,6 +154,7 @@ int vkms_verify_crc_source(struct drm_crtc *crtc, const char *source_name,
+ /* Composer Support */
+ void vkms_composer_worker(struct work_struct *work);
+ void vkms_set_composer(struct vkms_output *out, bool enabled);
++void vkms_compose_row(struct line_buffer *stage_buffer, struct vkms_plane_state *plane, int y);
+ 
+ /* Writeback */
+ int vkms_enable_writeback_connector(struct vkms_device *vkmsdev);
+diff --git a/drivers/gpu/drm/vkms/vkms_formats.c b/drivers/gpu/drm/vkms/vkms_formats.c
+index d4950688b3f17..b11342026485f 100644
+--- a/drivers/gpu/drm/vkms/vkms_formats.c
++++ b/drivers/gpu/drm/vkms/vkms_formats.c
+@@ -42,100 +42,75 @@ static void *get_packed_src_addr(const struct vkms_frame_info *frame_info, int y
+ 	return packed_pixels_addr(frame_info, x_src, y_src);
+ }
+ 
+-static void ARGB8888_to_argb_u16(struct line_buffer *stage_buffer,
+-				 const struct vkms_frame_info *frame_info, int y)
++static void ARGB8888_to_argb_u16(u8 *src_pixels, struct pixel_argb_u16 *out_pixel)
+ {
+-	struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
+-	u8 *src_pixels = get_packed_src_addr(frame_info, y);
+-	int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
+-			    stage_buffer->n_pixels);
+-
+-	for (size_t x = 0; x < x_limit; x++, src_pixels += 4) {
+-		/*
+-		 * The 257 is the "conversion ratio". This number is obtained by the
+-		 * (2^16 - 1) / (2^8 - 1) division. Which, in this case, tries to get
+-		 * the best color value in a pixel format with more possibilities.
+-		 * A similar idea applies to others RGB color conversions.
+-		 */
+-		out_pixels[x].a = (u16)src_pixels[3] * 257;
+-		out_pixels[x].r = (u16)src_pixels[2] * 257;
+-		out_pixels[x].g = (u16)src_pixels[1] * 257;
+-		out_pixels[x].b = (u16)src_pixels[0] * 257;
+-	}
++	/*
++	 * The 257 is the "conversion ratio". This number is obtained by the
++	 * (2^16 - 1) / (2^8 - 1) division. Which, in this case, tries to get
++	 * the best color value in a pixel format with more possibilities.
++	 * A similar idea applies to others RGB color conversions.
++	 */
++	out_pixel->a = (u16)src_pixels[3] * 257;
++	out_pixel->r = (u16)src_pixels[2] * 257;
++	out_pixel->g = (u16)src_pixels[1] * 257;
++	out_pixel->b = (u16)src_pixels[0] * 257;
+ }
+ 
+-static void XRGB8888_to_argb_u16(struct line_buffer *stage_buffer,
+-				 const struct vkms_frame_info *frame_info, int y)
++static void XRGB8888_to_argb_u16(u8 *src_pixels, struct pixel_argb_u16 *out_pixel)
+ {
+-	struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
+-	u8 *src_pixels = get_packed_src_addr(frame_info, y);
+-	int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
+-			    stage_buffer->n_pixels);
+-
+-	for (size_t x = 0; x < x_limit; x++, src_pixels += 4) {
+-		out_pixels[x].a = (u16)0xffff;
+-		out_pixels[x].r = (u16)src_pixels[2] * 257;
+-		out_pixels[x].g = (u16)src_pixels[1] * 257;
+-		out_pixels[x].b = (u16)src_pixels[0] * 257;
+-	}
++	out_pixel->a = (u16)0xffff;
++	out_pixel->r = (u16)src_pixels[2] * 257;
++	out_pixel->g = (u16)src_pixels[1] * 257;
++	out_pixel->b = (u16)src_pixels[0] * 257;
+ }
+ 
+-static void ARGB16161616_to_argb_u16(struct line_buffer *stage_buffer,
+-				     const struct vkms_frame_info *frame_info,
+-				     int y)
++static void ARGB16161616_to_argb_u16(u8 *src_pixels, struct pixel_argb_u16 *out_pixel)
+ {
+-	struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
+-	u16 *src_pixels = get_packed_src_addr(frame_info, y);
+-	int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
+-			    stage_buffer->n_pixels);
++	u16 *pixels = (u16 *)src_pixels;
+ 
+-	for (size_t x = 0; x < x_limit; x++, src_pixels += 4) {
+-		out_pixels[x].a = le16_to_cpu(src_pixels[3]);
+-		out_pixels[x].r = le16_to_cpu(src_pixels[2]);
+-		out_pixels[x].g = le16_to_cpu(src_pixels[1]);
+-		out_pixels[x].b = le16_to_cpu(src_pixels[0]);
+-	}
++	out_pixel->a = le16_to_cpu(pixels[3]);
++	out_pixel->r = le16_to_cpu(pixels[2]);
++	out_pixel->g = le16_to_cpu(pixels[1]);
++	out_pixel->b = le16_to_cpu(pixels[0]);
+ }
+ 
+-static void XRGB16161616_to_argb_u16(struct line_buffer *stage_buffer,
+-				     const struct vkms_frame_info *frame_info,
+-				     int y)
++static void XRGB16161616_to_argb_u16(u8 *src_pixels, struct pixel_argb_u16 *out_pixel)
+ {
+-	struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
+-	u16 *src_pixels = get_packed_src_addr(frame_info, y);
+-	int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
+-			    stage_buffer->n_pixels);
++	u16 *pixels = (u16 *)src_pixels;
+ 
+-	for (size_t x = 0; x < x_limit; x++, src_pixels += 4) {
+-		out_pixels[x].a = (u16)0xffff;
+-		out_pixels[x].r = le16_to_cpu(src_pixels[2]);
+-		out_pixels[x].g = le16_to_cpu(src_pixels[1]);
+-		out_pixels[x].b = le16_to_cpu(src_pixels[0]);
+-	}
++	out_pixel->a = (u16)0xffff;
++	out_pixel->r = le16_to_cpu(pixels[2]);
++	out_pixel->g = le16_to_cpu(pixels[1]);
++	out_pixel->b = le16_to_cpu(pixels[0]);
+ }
+ 
+-static void RGB565_to_argb_u16(struct line_buffer *stage_buffer,
+-			       const struct vkms_frame_info *frame_info, int y)
++static void RGB565_to_argb_u16(u8 *src_pixels, struct pixel_argb_u16 *out_pixel)
+ {
+-	struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
+-	u16 *src_pixels = get_packed_src_addr(frame_info, y);
+-	int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
+-			       stage_buffer->n_pixels);
++	u16 *pixels = (u16 *)src_pixels;
+ 
+ 	s64 fp_rb_ratio = drm_fixp_div(drm_int2fixp(65535), drm_int2fixp(31));
+ 	s64 fp_g_ratio = drm_fixp_div(drm_int2fixp(65535), drm_int2fixp(63));
+ 
+-	for (size_t x = 0; x < x_limit; x++, src_pixels++) {
+-		u16 rgb_565 = le16_to_cpu(*src_pixels);
+-		s64 fp_r = drm_int2fixp((rgb_565 >> 11) & 0x1f);
+-		s64 fp_g = drm_int2fixp((rgb_565 >> 5) & 0x3f);
+-		s64 fp_b = drm_int2fixp(rgb_565 & 0x1f);
++	u16 rgb_565 = le16_to_cpu(*pixels);
++	s64 fp_r = drm_int2fixp((rgb_565 >> 11) & 0x1f);
++	s64 fp_g = drm_int2fixp((rgb_565 >> 5) & 0x3f);
++	s64 fp_b = drm_int2fixp(rgb_565 & 0x1f);
+ 
+-		out_pixels[x].a = (u16)0xffff;
+-		out_pixels[x].r = drm_fixp2int(drm_fixp_mul(fp_r, fp_rb_ratio));
+-		out_pixels[x].g = drm_fixp2int(drm_fixp_mul(fp_g, fp_g_ratio));
+-		out_pixels[x].b = drm_fixp2int(drm_fixp_mul(fp_b, fp_rb_ratio));
+-	}
++	out_pixel->a = (u16)0xffff;
++	out_pixel->r = drm_fixp2int_round(drm_fixp_mul(fp_r, fp_rb_ratio));
++	out_pixel->g = drm_fixp2int_round(drm_fixp_mul(fp_g, fp_g_ratio));
++	out_pixel->b = drm_fixp2int_round(drm_fixp_mul(fp_b, fp_rb_ratio));
++}
++
++void vkms_compose_row(struct line_buffer *stage_buffer, struct vkms_plane_state *plane, int y)
++{
++	struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
++	struct vkms_frame_info *frame_info = plane->frame_info;
++	u8 *src_pixels = get_packed_src_addr(frame_info, y);
++	int limit = min_t(size_t, drm_rect_width(&frame_info->dst), stage_buffer->n_pixels);
++
++	for (size_t x = 0; x < limit; x++, src_pixels += frame_info->cpp)
++		plane->pixel_read(src_pixels, &out_pixels[x]);
+ }
+ 
+ /*
+@@ -241,15 +216,15 @@ static void argb_u16_to_RGB565(struct vkms_frame_info *frame_info,
+ 		s64 fp_g = drm_int2fixp(in_pixels[x].g);
+ 		s64 fp_b = drm_int2fixp(in_pixels[x].b);
+ 
+-		u16 r = drm_fixp2int(drm_fixp_div(fp_r, fp_rb_ratio));
+-		u16 g = drm_fixp2int(drm_fixp_div(fp_g, fp_g_ratio));
+-		u16 b = drm_fixp2int(drm_fixp_div(fp_b, fp_rb_ratio));
++		u16 r = drm_fixp2int_round(drm_fixp_div(fp_r, fp_rb_ratio));
++		u16 g = drm_fixp2int_round(drm_fixp_div(fp_g, fp_g_ratio));
++		u16 b = drm_fixp2int_round(drm_fixp_div(fp_b, fp_rb_ratio));
+ 
+ 		*dst_pixels = cpu_to_le16(r << 11 | g << 5 | b);
+ 	}
+ }
+ 
+-void *get_frame_to_line_function(u32 format)
++void *get_pixel_conversion_function(u32 format)
+ {
+ 	switch (format) {
+ 	case DRM_FORMAT_ARGB8888:
+diff --git a/drivers/gpu/drm/vkms/vkms_formats.h b/drivers/gpu/drm/vkms/vkms_formats.h
+index 43b7c19790181..c5b113495d0c0 100644
+--- a/drivers/gpu/drm/vkms/vkms_formats.h
++++ b/drivers/gpu/drm/vkms/vkms_formats.h
+@@ -5,7 +5,7 @@
+ 
+ #include "vkms_drv.h"
+ 
+-void *get_frame_to_line_function(u32 format);
++void *get_pixel_conversion_function(u32 format);
+ 
+ void *get_line_to_frame_function(u32 format);
+ 
+diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c
+index c41cec7dcb703..0a23875900ec5 100644
+--- a/drivers/gpu/drm/vkms/vkms_plane.c
++++ b/drivers/gpu/drm/vkms/vkms_plane.c
+@@ -123,7 +123,7 @@ static void vkms_plane_atomic_update(struct drm_plane *plane,
+ 	frame_info->offset = fb->offsets[0];
+ 	frame_info->pitch = fb->pitches[0];
+ 	frame_info->cpp = fb->format->cpp[0];
+-	vkms_plane_state->plane_read = get_frame_to_line_function(fmt);
++	vkms_plane_state->pixel_read = get_pixel_conversion_function(fmt);
+ }
+ 
+ static int vkms_plane_atomic_check(struct drm_plane *plane,
+diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig
+index 4ce012f83253e..b977450cac752 100644
+--- a/drivers/hid/Kconfig
++++ b/drivers/hid/Kconfig
+@@ -1285,7 +1285,7 @@ config HID_MCP2221
+ 
+ config HID_KUNIT_TEST
+ 	tristate "KUnit tests for HID" if !KUNIT_ALL_TESTS
+-	depends on KUNIT=y
++	depends on KUNIT
+ 	depends on HID_BATTERY_STRENGTH
+ 	depends on HID_UCLOGIC
+ 	default KUNIT_ALL_TESTS
+diff --git a/drivers/hwmon/f71882fg.c b/drivers/hwmon/f71882fg.c
+index 70121482a6173..27207ec6f7feb 100644
+--- a/drivers/hwmon/f71882fg.c
++++ b/drivers/hwmon/f71882fg.c
+@@ -1096,8 +1096,11 @@ static ssize_t show_pwm(struct device *dev,
+ 		val = data->pwm[nr];
+ 	else {
+ 		/* RPM mode */
+-		val = 255 * fan_from_reg(data->fan_target[nr])
+-			/ fan_from_reg(data->fan_full_speed[nr]);
++		if (fan_from_reg(data->fan_full_speed[nr]))
++			val = 255 * fan_from_reg(data->fan_target[nr])
++				/ fan_from_reg(data->fan_full_speed[nr]);
++		else
++			val = 0;
+ 	}
+ 	mutex_unlock(&data->update_lock);
+ 	return sprintf(buf, "%d\n", val);
+diff --git a/drivers/hwmon/gsc-hwmon.c b/drivers/hwmon/gsc-hwmon.c
+index 73e5d92b200b0..1501ceb551e79 100644
+--- a/drivers/hwmon/gsc-hwmon.c
++++ b/drivers/hwmon/gsc-hwmon.c
+@@ -82,8 +82,8 @@ static ssize_t pwm_auto_point_temp_store(struct device *dev,
+ 	if (kstrtol(buf, 10, &temp))
+ 		return -EINVAL;
+ 
+-	temp = clamp_val(temp, 0, 10000);
+-	temp = DIV_ROUND_CLOSEST(temp, 10);
++	temp = clamp_val(temp, 0, 100000);
++	temp = DIV_ROUND_CLOSEST(temp, 100);
+ 
+ 	regs[0] = temp & 0xff;
+ 	regs[1] = (temp >> 8) & 0xff;
+@@ -100,7 +100,7 @@ static ssize_t pwm_auto_point_pwm_show(struct device *dev,
+ {
+ 	struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
+ 
+-	return sprintf(buf, "%d\n", 255 * (50 + (attr->index * 10)) / 100);
++	return sprintf(buf, "%d\n", 255 * (50 + (attr->index * 10)));
+ }
+ 
+ static SENSOR_DEVICE_ATTR_RO(pwm1_auto_point1_pwm, pwm_auto_point_pwm, 0);
+diff --git a/drivers/hwmon/pmbus/adm1275.c b/drivers/hwmon/pmbus/adm1275.c
+index 3b07bfb43e937..b8543c06d022a 100644
+--- a/drivers/hwmon/pmbus/adm1275.c
++++ b/drivers/hwmon/pmbus/adm1275.c
+@@ -37,10 +37,13 @@ enum chips { adm1075, adm1272, adm1275, adm1276, adm1278, adm1293, adm1294 };
+ 
+ #define ADM1272_IRANGE			BIT(0)
+ 
++#define ADM1278_TSFILT			BIT(15)
+ #define ADM1278_TEMP1_EN		BIT(3)
+ #define ADM1278_VIN_EN			BIT(2)
+ #define ADM1278_VOUT_EN			BIT(1)
+ 
++#define ADM1278_PMON_DEFCONFIG		(ADM1278_VOUT_EN | ADM1278_TEMP1_EN | ADM1278_TSFILT)
++
+ #define ADM1293_IRANGE_25		0
+ #define ADM1293_IRANGE_50		BIT(6)
+ #define ADM1293_IRANGE_100		BIT(7)
+@@ -462,6 +465,22 @@ static const struct i2c_device_id adm1275_id[] = {
+ };
+ MODULE_DEVICE_TABLE(i2c, adm1275_id);
+ 
++/* Enable VOUT & TEMP1 if not enabled (disabled by default) */
++static int adm1275_enable_vout_temp(struct i2c_client *client, int config)
++{
++	int ret;
++
++	if ((config & ADM1278_PMON_DEFCONFIG) != ADM1278_PMON_DEFCONFIG) {
++		config |= ADM1278_PMON_DEFCONFIG;
++		ret = i2c_smbus_write_word_data(client, ADM1275_PMON_CONFIG, config);
++		if (ret < 0) {
++			dev_err(&client->dev, "Failed to enable VOUT/TEMP1 monitoring\n");
++			return ret;
++		}
++	}
++	return 0;
++}
++
+ static int adm1275_probe(struct i2c_client *client)
+ {
+ 	s32 (*config_read_fn)(const struct i2c_client *client, u8 reg);
+@@ -615,19 +634,10 @@ static int adm1275_probe(struct i2c_client *client)
+ 			PMBUS_HAVE_VOUT | PMBUS_HAVE_STATUS_VOUT |
+ 			PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP;
+ 
+-		/* Enable VOUT & TEMP1 if not enabled (disabled by default) */
+-		if ((config & (ADM1278_VOUT_EN | ADM1278_TEMP1_EN)) !=
+-		    (ADM1278_VOUT_EN | ADM1278_TEMP1_EN)) {
+-			config |= ADM1278_VOUT_EN | ADM1278_TEMP1_EN;
+-			ret = i2c_smbus_write_byte_data(client,
+-							ADM1275_PMON_CONFIG,
+-							config);
+-			if (ret < 0) {
+-				dev_err(&client->dev,
+-					"Failed to enable VOUT monitoring\n");
+-				return -ENODEV;
+-			}
+-		}
++		ret = adm1275_enable_vout_temp(client, config);
++		if (ret)
++			return ret;
++
+ 		if (config & ADM1278_VIN_EN)
+ 			info->func[0] |= PMBUS_HAVE_VIN;
+ 		break;
+@@ -684,19 +694,9 @@ static int adm1275_probe(struct i2c_client *client)
+ 			PMBUS_HAVE_VOUT | PMBUS_HAVE_STATUS_VOUT |
+ 			PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP;
+ 
+-		/* Enable VOUT & TEMP1 if not enabled (disabled by default) */
+-		if ((config & (ADM1278_VOUT_EN | ADM1278_TEMP1_EN)) !=
+-		    (ADM1278_VOUT_EN | ADM1278_TEMP1_EN)) {
+-			config |= ADM1278_VOUT_EN | ADM1278_TEMP1_EN;
+-			ret = i2c_smbus_write_word_data(client,
+-							ADM1275_PMON_CONFIG,
+-							config);
+-			if (ret < 0) {
+-				dev_err(&client->dev,
+-					"Failed to enable VOUT monitoring\n");
+-				return -ENODEV;
+-			}
+-		}
++		ret = adm1275_enable_vout_temp(client, config);
++		if (ret)
++			return ret;
+ 
+ 		if (config & ADM1278_VIN_EN)
+ 			info->func[0] |= PMBUS_HAVE_VIN;
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index d3bf82c0de1d8..5733294ce5cd2 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -1419,13 +1419,8 @@ static int coresight_remove_match(struct device *dev, void *data)
+ 		if (csdev->dev.fwnode == conn->child_fwnode) {
+ 			iterator->orphan = true;
+ 			coresight_remove_links(iterator, conn);
+-			/*
+-			 * Drop the reference to the handle for the remote
+-			 * device acquired in parsing the connections from
+-			 * platform data.
+-			 */
+-			fwnode_handle_put(conn->child_fwnode);
+-			conn->child_fwnode = NULL;
++
++			conn->child_dev = NULL;
+ 			/* No need to continue */
+ 			break;
+ 		}
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+index 5e62aa40ecd0f..a9f19629f3f84 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
+@@ -2411,7 +2411,6 @@ static ssize_t trctraceid_show(struct device *dev,
+ 
+ 	return sysfs_emit(buf, "0x%x\n", trace_id);
+ }
+-static DEVICE_ATTR_RO(trctraceid);
+ 
+ struct etmv4_reg {
+ 	struct coresight_device *csdev;
+@@ -2528,13 +2527,23 @@ coresight_etm4x_attr_reg_implemented(struct kobject *kobj,
+ 	return 0;
+ }
+ 
+-#define coresight_etm4x_reg(name, offset)				\
+-	&((struct dev_ext_attribute[]) {				\
+-	   {								\
+-		__ATTR(name, 0444, coresight_etm4x_reg_show, NULL),	\
+-		(void *)(unsigned long)offset				\
+-	   }								\
+-	})[0].attr.attr
++/*
++ * Macro to set an RO ext attribute with offset and show function.
++ * Offset is used in mgmt group to ensure only correct registers for
++ * the ETM / ETE variant are visible.
++ */
++#define coresight_etm4x_reg_showfn(name, offset, showfn) (	\
++	&((struct dev_ext_attribute[]) {			\
++	   {							\
++		__ATTR(name, 0444, showfn, NULL),		\
++		(void *)(unsigned long)offset			\
++	   }							\
++	})[0].attr.attr						\
++	)
++
++/* macro using the default coresight_etm4x_reg_show function */
++#define coresight_etm4x_reg(name, offset)	\
++	coresight_etm4x_reg_showfn(name, offset, coresight_etm4x_reg_show)
+ 
+ static struct attribute *coresight_etmv4_mgmt_attrs[] = {
+ 	coresight_etm4x_reg(trcpdcr, TRCPDCR),
+@@ -2549,7 +2558,7 @@ static struct attribute *coresight_etmv4_mgmt_attrs[] = {
+ 	coresight_etm4x_reg(trcpidr3, TRCPIDR3),
+ 	coresight_etm4x_reg(trcoslsr, TRCOSLSR),
+ 	coresight_etm4x_reg(trcconfig, TRCCONFIGR),
+-	&dev_attr_trctraceid.attr,
++	coresight_etm4x_reg_showfn(trctraceid, TRCTRACEIDR, trctraceid_show),
+ 	coresight_etm4x_reg(trcdevarch, TRCDEVARCH),
+ 	NULL,
+ };
+diff --git a/drivers/hwtracing/ptt/hisi_ptt.c b/drivers/hwtracing/ptt/hisi_ptt.c
+index 30f1525639b57..4140efd664097 100644
+--- a/drivers/hwtracing/ptt/hisi_ptt.c
++++ b/drivers/hwtracing/ptt/hisi_ptt.c
+@@ -341,13 +341,13 @@ static int hisi_ptt_register_irq(struct hisi_ptt *hisi_ptt)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = devm_request_threaded_irq(&pdev->dev,
+-					pci_irq_vector(pdev, HISI_PTT_TRACE_DMA_IRQ),
++	hisi_ptt->trace_irq = pci_irq_vector(pdev, HISI_PTT_TRACE_DMA_IRQ);
++	ret = devm_request_threaded_irq(&pdev->dev, hisi_ptt->trace_irq,
+ 					NULL, hisi_ptt_isr, 0,
+ 					DRV_NAME, hisi_ptt);
+ 	if (ret) {
+ 		pci_err(pdev, "failed to request irq %d, ret = %d\n",
+-			pci_irq_vector(pdev, HISI_PTT_TRACE_DMA_IRQ), ret);
++			hisi_ptt->trace_irq, ret);
+ 		return ret;
+ 	}
+ 
+@@ -757,8 +757,7 @@ static void hisi_ptt_pmu_start(struct perf_event *event, int flags)
+ 	 * core in event_function_local(). If CPU passed is offline we'll fail
+ 	 * here, just log it since we can do nothing here.
+ 	 */
+-	ret = irq_set_affinity(pci_irq_vector(hisi_ptt->pdev, HISI_PTT_TRACE_DMA_IRQ),
+-					      cpumask_of(cpu));
++	ret = irq_set_affinity(hisi_ptt->trace_irq, cpumask_of(cpu));
+ 	if (ret)
+ 		dev_warn(dev, "failed to set the affinity of trace interrupt\n");
+ 
+@@ -1018,8 +1017,7 @@ static int hisi_ptt_cpu_teardown(unsigned int cpu, struct hlist_node *node)
+ 	 * Also make sure the interrupt bind to the migrated CPU as well. Warn
+ 	 * the user on failure here.
+ 	 */
+-	if (irq_set_affinity(pci_irq_vector(hisi_ptt->pdev, HISI_PTT_TRACE_DMA_IRQ),
+-					    cpumask_of(target)))
++	if (irq_set_affinity(hisi_ptt->trace_irq, cpumask_of(target)))
+ 		dev_warn(dev, "failed to set the affinity of trace interrupt\n");
+ 
+ 	hisi_ptt->trace_ctrl.on_cpu = target;
+diff --git a/drivers/hwtracing/ptt/hisi_ptt.h b/drivers/hwtracing/ptt/hisi_ptt.h
+index 5beb1648c93ab..948a4c4231527 100644
+--- a/drivers/hwtracing/ptt/hisi_ptt.h
++++ b/drivers/hwtracing/ptt/hisi_ptt.h
+@@ -166,6 +166,7 @@ struct hisi_ptt_pmu_buf {
+  * @pdev:         pci_dev of this PTT device
+  * @tune_lock:    lock to serialize the tune process
+  * @pmu_lock:     lock to serialize the perf process
++ * @trace_irq:    interrupt number used by trace
+  * @upper_bdf:    the upper BDF range of the PCI devices managed by this PTT device
+  * @lower_bdf:    the lower BDF range of the PCI devices managed by this PTT device
+  * @port_filters: the filter list of root ports
+@@ -180,6 +181,7 @@ struct hisi_ptt {
+ 	struct pci_dev *pdev;
+ 	struct mutex tune_lock;
+ 	spinlock_t pmu_lock;
++	int trace_irq;
+ 	u32 upper_bdf;
+ 	u32 lower_bdf;
+ 
+diff --git a/drivers/i2c/busses/i2c-designware-pcidrv.c b/drivers/i2c/busses/i2c-designware-pcidrv.c
+index 782fe1ef3ca10..61d7a27aa0701 100644
+--- a/drivers/i2c/busses/i2c-designware-pcidrv.c
++++ b/drivers/i2c/busses/i2c-designware-pcidrv.c
+@@ -20,6 +20,7 @@
+ #include <linux/module.h>
+ #include <linux/pci.h>
+ #include <linux/pm_runtime.h>
++#include <linux/power_supply.h>
+ #include <linux/sched.h>
+ #include <linux/slab.h>
+ 
+@@ -234,6 +235,16 @@ static const struct dev_pm_ops i2c_dw_pm_ops = {
+ 	SET_RUNTIME_PM_OPS(i2c_dw_pci_runtime_suspend, i2c_dw_pci_runtime_resume, NULL)
+ };
+ 
++static const struct property_entry dgpu_properties[] = {
++	/* USB-C doesn't power the system */
++	PROPERTY_ENTRY_U8("scope", POWER_SUPPLY_SCOPE_DEVICE),
++	{}
++};
++
++static const struct software_node dgpu_node = {
++	.properties = dgpu_properties,
++};
++
+ static int i2c_dw_pci_probe(struct pci_dev *pdev,
+ 			    const struct pci_device_id *id)
+ {
+@@ -325,7 +336,7 @@ static int i2c_dw_pci_probe(struct pci_dev *pdev,
+ 	}
+ 
+ 	if ((dev->flags & MODEL_MASK) == MODEL_AMD_NAVI_GPU) {
+-		dev->slave = i2c_new_ccgx_ucsi(&dev->adapter, dev->irq, NULL);
++		dev->slave = i2c_new_ccgx_ucsi(&dev->adapter, dev->irq, &dgpu_node);
+ 		if (IS_ERR(dev->slave))
+ 			return dev_err_probe(dev->dev, PTR_ERR(dev->slave),
+ 					     "register UCSI failed\n");
+diff --git a/drivers/i2c/busses/i2c-nvidia-gpu.c b/drivers/i2c/busses/i2c-nvidia-gpu.c
+index a8b99e7f6262a..26622d24bb1b2 100644
+--- a/drivers/i2c/busses/i2c-nvidia-gpu.c
++++ b/drivers/i2c/busses/i2c-nvidia-gpu.c
+@@ -14,6 +14,7 @@
+ #include <linux/platform_device.h>
+ #include <linux/pm.h>
+ #include <linux/pm_runtime.h>
++#include <linux/power_supply.h>
+ 
+ #include <asm/unaligned.h>
+ 
+@@ -261,6 +262,8 @@ MODULE_DEVICE_TABLE(pci, gpu_i2c_ids);
+ static const struct property_entry ccgx_props[] = {
+ 	/* Use FW built for NVIDIA GPU only */
+ 	PROPERTY_ENTRY_STRING("firmware-name", "nvidia,gpu"),
++	/* USB-C doesn't power the system */
++	PROPERTY_ENTRY_U8("scope", POWER_SUPPLY_SCOPE_DEVICE),
+ 	{ }
+ };
+ 
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index 8a3d9817cb41c..ee6edc963deac 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -721,6 +721,8 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
+ 			wakeup_req = 1;
+ 			wakeup_code = STATE_ERROR;
+ 		}
++		/* don't try to handle other events */
++		goto out;
+ 	}
+ 	if (pend & XIIC_INTR_RX_FULL_MASK) {
+ 		/* Receive register/FIFO is full */
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index e3f454123805e..79b08942a925d 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -1090,12 +1090,6 @@ static void svc_i3c_master_start_xfer_locked(struct svc_i3c_master *master)
+ 	if (!xfer)
+ 		return;
+ 
+-	ret = pm_runtime_resume_and_get(master->dev);
+-	if (ret < 0) {
+-		dev_err(master->dev, "<%s> Cannot get runtime PM.\n", __func__);
+-		return;
+-	}
+-
+ 	svc_i3c_master_clear_merrwarn(master);
+ 	svc_i3c_master_flush_fifo(master);
+ 
+@@ -1110,9 +1104,6 @@ static void svc_i3c_master_start_xfer_locked(struct svc_i3c_master *master)
+ 			break;
+ 	}
+ 
+-	pm_runtime_mark_last_busy(master->dev);
+-	pm_runtime_put_autosuspend(master->dev);
+-
+ 	xfer->ret = ret;
+ 	complete(&xfer->comp);
+ 
+@@ -1133,6 +1124,13 @@ static void svc_i3c_master_enqueue_xfer(struct svc_i3c_master *master,
+ 					struct svc_i3c_xfer *xfer)
+ {
+ 	unsigned long flags;
++	int ret;
++
++	ret = pm_runtime_resume_and_get(master->dev);
++	if (ret < 0) {
++		dev_err(master->dev, "<%s> Cannot get runtime PM.\n", __func__);
++		return;
++	}
+ 
+ 	init_completion(&xfer->comp);
+ 	spin_lock_irqsave(&master->xferqueue.lock, flags);
+@@ -1143,6 +1141,9 @@ static void svc_i3c_master_enqueue_xfer(struct svc_i3c_master *master,
+ 		svc_i3c_master_start_xfer_locked(master);
+ 	}
+ 	spin_unlock_irqrestore(&master->xferqueue.lock, flags);
++
++	pm_runtime_mark_last_busy(master->dev);
++	pm_runtime_put_autosuspend(master->dev);
+ }
+ 
+ static bool
+diff --git a/drivers/iio/accel/fxls8962af-core.c b/drivers/iio/accel/fxls8962af-core.c
+index 0d672b1469e8d..be8a15cb945fd 100644
+--- a/drivers/iio/accel/fxls8962af-core.c
++++ b/drivers/iio/accel/fxls8962af-core.c
+@@ -724,8 +724,7 @@ static const struct iio_event_spec fxls8962af_event[] = {
+ 		.sign = 's', \
+ 		.realbits = 12, \
+ 		.storagebits = 16, \
+-		.shift = 4, \
+-		.endianness = IIO_BE, \
++		.endianness = IIO_LE, \
+ 	}, \
+ 	.event_spec = fxls8962af_event, \
+ 	.num_event_specs = ARRAY_SIZE(fxls8962af_event), \
+@@ -904,9 +903,10 @@ static int fxls8962af_fifo_transfer(struct fxls8962af_data *data,
+ 	int total_length = samples * sample_length;
+ 	int ret;
+ 
+-	if (i2c_verify_client(dev))
++	if (i2c_verify_client(dev) &&
++	    data->chip_info->chip_id == FXLS8962AF_DEVICE_ID)
+ 		/*
+-		 * Due to errata bug:
++		 * Due to errata bug (only applicable to fxls8962af):
+ 		 * E3: FIFO burst read operation error using I2C interface
+ 		 * We have to avoid burst reads on I2C..
+ 		 */
+diff --git a/drivers/iio/adc/ad7192.c b/drivers/iio/adc/ad7192.c
+index 99bb604b78c8c..8685e0b58a838 100644
+--- a/drivers/iio/adc/ad7192.c
++++ b/drivers/iio/adc/ad7192.c
+@@ -367,7 +367,7 @@ static int ad7192_of_clock_select(struct ad7192_state *st)
+ 	clock_sel = AD7192_CLK_INT;
+ 
+ 	/* use internal clock */
+-	if (st->mclk) {
++	if (!st->mclk) {
+ 		if (of_property_read_bool(np, "adi,int-clock-output-enable"))
+ 			clock_sel = AD7192_CLK_INT_CO;
+ 	} else {
+@@ -380,9 +380,9 @@ static int ad7192_of_clock_select(struct ad7192_state *st)
+ 	return clock_sel;
+ }
+ 
+-static int ad7192_setup(struct ad7192_state *st, struct device_node *np)
++static int ad7192_setup(struct iio_dev *indio_dev, struct device_node *np)
+ {
+-	struct iio_dev *indio_dev = spi_get_drvdata(st->sd.spi);
++	struct ad7192_state *st = iio_priv(indio_dev);
+ 	bool rej60_en, refin2_en;
+ 	bool buf_en, bipolar, burnout_curr_en;
+ 	unsigned long long scale_uv;
+@@ -1069,7 +1069,7 @@ static int ad7192_probe(struct spi_device *spi)
+ 		}
+ 	}
+ 
+-	ret = ad7192_setup(st, spi->dev.of_node);
++	ret = ad7192_setup(indio_dev, spi->dev.of_node);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/iio/addac/ad74413r.c b/drivers/iio/addac/ad74413r.c
+index e3366cf5eb319..6b0e8218f1507 100644
+--- a/drivers/iio/addac/ad74413r.c
++++ b/drivers/iio/addac/ad74413r.c
+@@ -1317,13 +1317,14 @@ static int ad74413r_setup_gpios(struct ad74413r_state *st)
+ 		}
+ 
+ 		if (config->func == CH_FUNC_DIGITAL_INPUT_LOGIC ||
+-		    config->func == CH_FUNC_DIGITAL_INPUT_LOOP_POWER)
++		    config->func == CH_FUNC_DIGITAL_INPUT_LOOP_POWER) {
+ 			st->comp_gpio_offsets[comp_gpio_i++] = i;
+ 
+-		strength = config->drive_strength;
+-		ret = ad74413r_set_comp_drive_strength(st, i, strength);
+-		if (ret)
+-			return ret;
++			strength = config->drive_strength;
++			ret = ad74413r_set_comp_drive_strength(st, i, strength);
++			if (ret)
++				return ret;
++		}
+ 
+ 		ret = ad74413r_set_gpo_config(st, i, gpo_config);
+ 		if (ret)
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 3073398a21834..1936f4b4002a7 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -283,15 +283,21 @@ static void bnxt_re_start_irq(void *handle, struct bnxt_msix_entry *ent)
+ 	for (indx = 0; indx < rdev->num_msix; indx++)
+ 		rdev->en_dev->msix_entries[indx].vector = ent[indx].vector;
+ 
+-	bnxt_qplib_rcfw_start_irq(rcfw, msix_ent[BNXT_RE_AEQ_IDX].vector,
+-				  false);
++	rc = bnxt_qplib_rcfw_start_irq(rcfw, msix_ent[BNXT_RE_AEQ_IDX].vector,
++				       false);
++	if (rc) {
++		ibdev_warn(&rdev->ibdev, "Failed to reinit CREQ\n");
++		return;
++	}
+ 	for (indx = BNXT_RE_NQ_IDX ; indx < rdev->num_msix; indx++) {
+ 		nq = &rdev->nq[indx - 1];
+ 		rc = bnxt_qplib_nq_start_irq(nq, indx - 1,
+ 					     msix_ent[indx].vector, false);
+-		if (rc)
++		if (rc) {
+ 			ibdev_warn(&rdev->ibdev, "Failed to reinit NQ index %d\n",
+ 				   indx - 1);
++			return;
++		}
+ 	}
+ }
+ 
+@@ -963,12 +969,6 @@ static int bnxt_re_update_gid(struct bnxt_re_dev *rdev)
+ 	if (!ib_device_try_get(&rdev->ibdev))
+ 		return 0;
+ 
+-	if (!sgid_tbl) {
+-		ibdev_err(&rdev->ibdev, "QPLIB: SGID table not allocated");
+-		rc = -EINVAL;
+-		goto out;
+-	}
+-
+ 	for (index = 0; index < sgid_tbl->active; index++) {
+ 		gid_idx = sgid_tbl->hw_id[index];
+ 
+@@ -986,7 +986,7 @@ static int bnxt_re_update_gid(struct bnxt_re_dev *rdev)
+ 		rc = bnxt_qplib_update_sgid(sgid_tbl, &gid, gid_idx,
+ 					    rdev->qplib_res.netdev->dev_addr);
+ 	}
+-out:
++
+ 	ib_device_put(&rdev->ibdev);
+ 	return rc;
+ }
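[The bnxt_re hunks above stop ignoring failures while re-requesting interrupts after a reset: the first failed request now warns and aborts instead of leaving a partially wired queue set. The shape of the fix, as a standalone stand-in for the real bnxt_qplib_*_start_irq() calls:

#include <stdio.h>

static int start_irq(int idx)
{
	return idx == 2 ? -16 : 0;	/* pretend NQ 2 fails with -EBUSY */
}

static void restart_all_irqs(int count)
{
	for (int i = 0; i < count; i++) {
		int rc = start_irq(i);

		if (rc) {
			fprintf(stderr, "failed to reinit NQ %d: %d\n", i, rc);
			return;		/* do not bring up further queues */
		}
	}
}

int main(void)
{
	restart_all_irqs(4);
	return 0;
}
]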
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 8974f6235cfaa..55f092c2c8a88 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -399,6 +399,9 @@ static irqreturn_t bnxt_qplib_nq_irq(int irq, void *dev_instance)
+ 
+ void bnxt_qplib_nq_stop_irq(struct bnxt_qplib_nq *nq, bool kill)
+ {
++	if (!nq->requested)
++		return;
++
+ 	tasklet_disable(&nq->nq_tasklet);
+ 	/* Mask h/w interrupt */
+ 	bnxt_qplib_ring_nq_db(&nq->nq_db.dbinfo, nq->res->cctx, false);
+@@ -406,11 +409,12 @@ void bnxt_qplib_nq_stop_irq(struct bnxt_qplib_nq *nq, bool kill)
+ 	synchronize_irq(nq->msix_vec);
+ 	if (kill)
+ 		tasklet_kill(&nq->nq_tasklet);
+-	if (nq->requested) {
+-		irq_set_affinity_hint(nq->msix_vec, NULL);
+-		free_irq(nq->msix_vec, nq);
+-		nq->requested = false;
+-	}
++
++	irq_set_affinity_hint(nq->msix_vec, NULL);
++	free_irq(nq->msix_vec, nq);
++	kfree(nq->name);
++	nq->name = NULL;
++	nq->requested = false;
+ }
+ 
+ void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
+@@ -436,6 +440,7 @@ void bnxt_qplib_disable_nq(struct bnxt_qplib_nq *nq)
+ int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq, int nq_indx,
+ 			    int msix_vector, bool need_init)
+ {
++	struct bnxt_qplib_res *res = nq->res;
+ 	int rc;
+ 
+ 	if (nq->requested)
+@@ -447,10 +452,17 @@ int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq, int nq_indx,
+ 	else
+ 		tasklet_enable(&nq->nq_tasklet);
+ 
+-	snprintf(nq->name, sizeof(nq->name), "bnxt_qplib_nq-%d", nq_indx);
++	nq->name = kasprintf(GFP_KERNEL, "bnxt_re-nq-%d@pci:%s",
++			     nq_indx, pci_name(res->pdev));
++	if (!nq->name)
++		return -ENOMEM;
+ 	rc = request_irq(nq->msix_vec, bnxt_qplib_nq_irq, 0, nq->name, nq);
+-	if (rc)
++	if (rc) {
++		kfree(nq->name);
++		nq->name = NULL;
++		tasklet_disable(&nq->nq_tasklet);
+ 		return rc;
++	}
+ 
+ 	cpumask_clear(&nq->mask);
+ 	cpumask_set_cpu(nq_indx, &nq->mask);
+@@ -461,7 +473,7 @@ int bnxt_qplib_nq_start_irq(struct bnxt_qplib_nq *nq, int nq_indx,
+ 			 nq->msix_vec, nq_indx);
+ 	}
+ 	nq->requested = true;
+-	bnxt_qplib_ring_nq_db(&nq->nq_db.dbinfo, nq->res->cctx, true);
++	bnxt_qplib_ring_nq_db(&nq->nq_db.dbinfo, res->cctx, true);
+ 
+ 	return rc;
+ }
+@@ -1614,7 +1626,7 @@ static int bnxt_qplib_put_inline(struct bnxt_qplib_qp *qp,
+ 		il_src = (void *)wqe->sg_list[indx].addr;
+ 		t_len += len;
+ 		if (t_len > qp->max_inline_data)
+-			goto bad;
++			return -ENOMEM;
+ 		while (len) {
+ 			if (pull_dst) {
+ 				pull_dst = false;
+@@ -1638,8 +1650,6 @@ static int bnxt_qplib_put_inline(struct bnxt_qplib_qp *qp,
+ 	}
+ 
+ 	return t_len;
+-bad:
+-	return -ENOMEM;
+ }
+ 
+ static u32 bnxt_qplib_put_sges(struct bnxt_qplib_hwq *hwq,
+@@ -2069,7 +2079,7 @@ int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq)
+ 	hwq_attr.sginfo = &cq->sg_info;
+ 	rc = bnxt_qplib_alloc_init_hwq(&cq->hwq, &hwq_attr);
+ 	if (rc)
+-		goto exit;
++		return rc;
+ 
+ 	bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
+ 				 CMDQ_BASE_OPCODE_CREATE_CQ,
+@@ -2112,7 +2122,6 @@ int bnxt_qplib_create_cq(struct bnxt_qplib_res *res, struct bnxt_qplib_cq *cq)
+ 
+ fail:
+ 	bnxt_qplib_free_hwq(res, &cq->hwq);
+-exit:
+ 	return rc;
+ }
+ 
+@@ -2790,11 +2799,8 @@ static int bnxt_qplib_cq_process_terminal(struct bnxt_qplib_cq *cq,
+ 
+ 	qp = (struct bnxt_qplib_qp *)((unsigned long)
+ 				      le64_to_cpu(hwcqe->qp_handle));
+-	if (!qp) {
+-		dev_err(&cq->hwq.pdev->dev,
+-			"FP: CQ Process terminal qp is NULL\n");
++	if (!qp)
+ 		return -EINVAL;
+-	}
+ 
+ 	/* Must block new posting of SQ and RQ */
+ 	qp->state = CMDQ_MODIFY_QP_NEW_STATE_ERR;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+index d74d5ead2e32a..a42820821c473 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+@@ -472,7 +472,7 @@ typedef int (*srqn_handler_t)(struct bnxt_qplib_nq *nq,
+ struct bnxt_qplib_nq {
+ 	struct pci_dev			*pdev;
+ 	struct bnxt_qplib_res		*res;
+-	char				name[32];
++	char				*name;
+ 	struct bnxt_qplib_hwq		hwq;
+ 	struct bnxt_qplib_nq_db		nq_db;
+ 	u16				ring_id;
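[The qplib_fp changes replace the fixed char name[32] with a heap string built by kasprintf() so the IRQ name can carry the PCI address, and they make the stop path idempotent. A userspace sketch of the same lifetime rules using asprintf()/free() — the PCI address below is made up:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

struct nq {
	char *name;
	int requested;
};

static int nq_start(struct nq *nq, int idx, const char *pci)
{
	if (asprintf(&nq->name, "bnxt_re-nq-%d@pci:%s", idx, pci) < 0)
		return -1;		/* -ENOMEM in the kernel */
	/* request_irq(..., nq->name, nq) would go here */
	nq->requested = 1;
	return 0;
}

static void nq_stop(struct nq *nq)
{
	if (!nq->requested)		/* stop is now safe to call twice */
		return;
	/* kernel order: free_irq() first, then kfree(nq->name) */
	free(nq->name);
	nq->name = NULL;
	nq->requested = 0;
}

int main(void)
{
	struct nq nq = { 0 };

	if (nq_start(&nq, 0, "0000:3b:00.0") == 0) {
		printf("%s\n", nq.name);
		nq_stop(&nq);
	}
	return 0;
}
]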
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+index de90691031773..c11b8e708844c 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+@@ -180,7 +180,7 @@ static int __send_message(struct bnxt_qplib_rcfw *rcfw,
+ 	} while (bsize > 0);
+ 	cmdq->seq_num++;
+ 
+-	cmdq_prod = hwq->prod;
++	cmdq_prod = hwq->prod & 0xFFFF;
+ 	if (test_bit(FIRMWARE_FIRST_FLAG, &cmdq->flags)) {
+ 		/* The very first doorbell write
+ 		 * is required to set this flag
+@@ -295,7 +295,8 @@ static int bnxt_qplib_process_func_event(struct bnxt_qplib_rcfw *rcfw,
+ }
+ 
+ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+-				       struct creq_qp_event *qp_event)
++				       struct creq_qp_event *qp_event,
++				       u32 *num_wait)
+ {
+ 	struct creq_qp_error_notification *err_event;
+ 	struct bnxt_qplib_hwq *hwq = &rcfw->cmdq.hwq;
+@@ -304,6 +305,7 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ 	u16 cbit, blocked = 0;
+ 	struct pci_dev *pdev;
+ 	unsigned long flags;
++	u32 wait_cmds = 0;
+ 	__le16  mcookie;
+ 	u16 cookie;
+ 	int rc = 0;
+@@ -363,9 +365,10 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ 		crsqe->req_size = 0;
+ 
+ 		if (!blocked)
+-			wake_up(&rcfw->cmdq.waitq);
++			wait_cmds++;
+ 		spin_unlock_irqrestore(&hwq->lock, flags);
+ 	}
++	*num_wait += wait_cmds;
+ 	return rc;
+ }
+ 
+@@ -379,6 +382,7 @@ static void bnxt_qplib_service_creq(struct tasklet_struct *t)
+ 	struct creq_base *creqe;
+ 	u32 sw_cons, raw_cons;
+ 	unsigned long flags;
++	u32 num_wakeup = 0;
+ 
+ 	/* Service the CREQ until budget is over */
+ 	spin_lock_irqsave(&hwq->lock, flags);
+@@ -397,7 +401,8 @@ static void bnxt_qplib_service_creq(struct tasklet_struct *t)
+ 		switch (type) {
+ 		case CREQ_BASE_TYPE_QP_EVENT:
+ 			bnxt_qplib_process_qp_event
+-				(rcfw, (struct creq_qp_event *)creqe);
++				(rcfw, (struct creq_qp_event *)creqe,
++				 &num_wakeup);
+ 			creq->stats.creq_qp_event_processed++;
+ 			break;
+ 		case CREQ_BASE_TYPE_FUNC_EVENT:
+@@ -425,6 +430,8 @@ static void bnxt_qplib_service_creq(struct tasklet_struct *t)
+ 				      rcfw->res->cctx, true);
+ 	}
+ 	spin_unlock_irqrestore(&hwq->lock, flags);
++	if (num_wakeup)
++		wake_up_nr(&rcfw->cmdq.waitq, num_wakeup);
+ }
+ 
+ static irqreturn_t bnxt_qplib_creq_irq(int irq, void *dev_instance)
+@@ -599,7 +606,7 @@ int bnxt_qplib_alloc_rcfw_channel(struct bnxt_qplib_res *res,
+ 		rcfw->cmdq_depth = BNXT_QPLIB_CMDQE_MAX_CNT_8192;
+ 
+ 	sginfo.pgsize = bnxt_qplib_cmdqe_page_size(rcfw->cmdq_depth);
+-	hwq_attr.depth = rcfw->cmdq_depth;
++	hwq_attr.depth = rcfw->cmdq_depth & 0x7FFFFFFF;
+ 	hwq_attr.stride = BNXT_QPLIB_CMDQE_UNITS;
+ 	hwq_attr.type = HWQ_TYPE_CTX;
+ 	if (bnxt_qplib_alloc_init_hwq(&cmdq->hwq, &hwq_attr)) {
+@@ -636,6 +643,10 @@ void bnxt_qplib_rcfw_stop_irq(struct bnxt_qplib_rcfw *rcfw, bool kill)
+ 	struct bnxt_qplib_creq_ctx *creq;
+ 
+ 	creq = &rcfw->creq;
++
++	if (!creq->requested)
++		return;
++
+ 	tasklet_disable(&creq->creq_tasklet);
+ 	/* Mask h/w interrupts */
+ 	bnxt_qplib_ring_nq_db(&creq->creq_db.dbinfo, rcfw->res->cctx, false);
+@@ -644,10 +655,10 @@ void bnxt_qplib_rcfw_stop_irq(struct bnxt_qplib_rcfw *rcfw, bool kill)
+ 	if (kill)
+ 		tasklet_kill(&creq->creq_tasklet);
+ 
+-	if (creq->requested) {
+-		free_irq(creq->msix_vec, rcfw);
+-		creq->requested = false;
+-	}
++	free_irq(creq->msix_vec, rcfw);
++	kfree(creq->irq_name);
++	creq->irq_name = NULL;
++	creq->requested = false;
+ }
+ 
+ void bnxt_qplib_disable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw)
+@@ -679,9 +690,11 @@ int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw, int msix_vector,
+ 			      bool need_init)
+ {
+ 	struct bnxt_qplib_creq_ctx *creq;
++	struct bnxt_qplib_res *res;
+ 	int rc;
+ 
+ 	creq = &rcfw->creq;
++	res = rcfw->res;
+ 
+ 	if (creq->requested)
+ 		return -EFAULT;
+@@ -691,13 +704,22 @@ int bnxt_qplib_rcfw_start_irq(struct bnxt_qplib_rcfw *rcfw, int msix_vector,
+ 		tasklet_setup(&creq->creq_tasklet, bnxt_qplib_service_creq);
+ 	else
+ 		tasklet_enable(&creq->creq_tasklet);
++
++	creq->irq_name = kasprintf(GFP_KERNEL, "bnxt_re-creq@pci:%s",
++				   pci_name(res->pdev));
++	if (!creq->irq_name)
++		return -ENOMEM;
+ 	rc = request_irq(creq->msix_vec, bnxt_qplib_creq_irq, 0,
+-			 "bnxt_qplib_creq", rcfw);
+-	if (rc)
++			 creq->irq_name, rcfw);
++	if (rc) {
++		kfree(creq->irq_name);
++		creq->irq_name = NULL;
++		tasklet_disable(&creq->creq_tasklet);
+ 		return rc;
++	}
+ 	creq->requested = true;
+ 
+-	bnxt_qplib_ring_nq_db(&creq->creq_db.dbinfo, rcfw->res->cctx, true);
++	bnxt_qplib_ring_nq_db(&creq->creq_db.dbinfo, res->cctx, true);
+ 
+ 	return 0;
+ }
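[The rcfw changes above also batch wakeups: completions are only counted while hwq->lock is held, and the waiters are woken once afterwards with wake_up_nr(). A pthread stand-in of the same count-then-notify pattern (compile with -lpthread):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t waitq = PTHREAD_COND_INITIALIZER;
static int completed;

static void service_queue(int nevents)
{
	int wakeups = 0;

	pthread_mutex_lock(&lock);
	for (int i = 0; i < nevents; i++) {
		completed++;		/* retire one command */
		wakeups++;		/* defer its wakeup */
	}
	pthread_mutex_unlock(&lock);

	/* one notification pass outside the lock, like wake_up_nr() */
	while (wakeups--)
		pthread_cond_signal(&waitq);
}

int main(void)
{
	service_queue(3);
	printf("completed %d\n", completed);
	return 0;
}
]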
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+index dd5651478bbb7..92f7a25533d3b 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+@@ -186,6 +186,7 @@ struct bnxt_qplib_creq_ctx {
+ 	u16				ring_id;
+ 	int				msix_vec;
+ 	bool				requested; /*irq handler installed */
++	char				*irq_name;
+ };
+ 
+ /* RCFW Communication Channels */
+diff --git a/drivers/infiniband/hw/hfi1/ipoib_tx.c b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+index 8973a081d641e..e7d831330278d 100644
+--- a/drivers/infiniband/hw/hfi1/ipoib_tx.c
++++ b/drivers/infiniband/hw/hfi1/ipoib_tx.c
+@@ -215,11 +215,11 @@ static int hfi1_ipoib_build_ulp_payload(struct ipoib_txreq *tx,
+ 		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ 
+ 		ret = sdma_txadd_page(dd,
+-				      NULL,
+ 				      txreq,
+ 				      skb_frag_page(frag),
+ 				      frag->bv_offset,
+-				      skb_frag_size(frag));
++				      skb_frag_size(frag),
++				      NULL, NULL, NULL);
+ 		if (unlikely(ret))
+ 			break;
+ 	}
+diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c
+index 1cea8b0c78e0f..a864423c256dd 100644
+--- a/drivers/infiniband/hw/hfi1/mmu_rb.c
++++ b/drivers/infiniband/hw/hfi1/mmu_rb.c
+@@ -19,8 +19,7 @@ static int mmu_notifier_range_start(struct mmu_notifier *,
+ 		const struct mmu_notifier_range *);
+ static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *,
+ 					   unsigned long, unsigned long);
+-static void do_remove(struct mmu_rb_handler *handler,
+-		      struct list_head *del_list);
++static void release_immediate(struct kref *refcount);
+ static void handle_remove(struct work_struct *work);
+ 
+ static const struct mmu_notifier_ops mn_opts = {
+@@ -106,7 +105,11 @@ void hfi1_mmu_rb_unregister(struct mmu_rb_handler *handler)
+ 	}
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+-	do_remove(handler, &del_list);
++	while (!list_empty(&del_list)) {
++		rbnode = list_first_entry(&del_list, struct mmu_rb_node, list);
++		list_del(&rbnode->list);
++		kref_put(&rbnode->refcount, release_immediate);
++	}
+ 
+ 	/* Now the mm may be freed. */
+ 	mmdrop(handler->mn.mm);
+@@ -134,12 +137,6 @@ int hfi1_mmu_rb_insert(struct mmu_rb_handler *handler,
+ 	}
+ 	__mmu_int_rb_insert(mnode, &handler->root);
+ 	list_add_tail(&mnode->list, &handler->lru_list);
+-
+-	ret = handler->ops->insert(handler->ops_arg, mnode);
+-	if (ret) {
+-		__mmu_int_rb_remove(mnode, &handler->root);
+-		list_del(&mnode->list); /* remove from LRU list */
+-	}
+ 	mnode->handler = handler;
+ unlock:
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+@@ -183,6 +180,48 @@ static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *handler,
+ 	return node;
+ }
+ 
++/*
++ * Must NOT call while holding mnode->handler->lock.
++ * mnode->handler->ops->remove() may sleep and mnode->handler->lock is a
++ * spinlock.
++ */
++static void release_immediate(struct kref *refcount)
++{
++	struct mmu_rb_node *mnode =
++		container_of(refcount, struct mmu_rb_node, refcount);
++	mnode->handler->ops->remove(mnode->handler->ops_arg, mnode);
++}
++
++/* Caller must hold mnode->handler->lock */
++static void release_nolock(struct kref *refcount)
++{
++	struct mmu_rb_node *mnode =
++		container_of(refcount, struct mmu_rb_node, refcount);
++	list_move(&mnode->list, &mnode->handler->del_list);
++	queue_work(mnode->handler->wq, &mnode->handler->del_work);
++}
++
++/*
++ * struct mmu_rb_node->refcount kref_put() callback.
++ * Adds mmu_rb_node to mmu_rb_node->handler->del_list and queues
++ * handler->del_work on handler->wq.
++ * Does not remove mmu_rb_node from handler->lru_list or handler->rb_root.
++ * Acquires mmu_rb_node->handler->lock; do not call while already holding
++ * handler->lock.
++ */
++void hfi1_mmu_rb_release(struct kref *refcount)
++{
++	struct mmu_rb_node *mnode =
++		container_of(refcount, struct mmu_rb_node, refcount);
++	struct mmu_rb_handler *handler = mnode->handler;
++	unsigned long flags;
++
++	spin_lock_irqsave(&handler->lock, flags);
++	list_move(&mnode->list, &mnode->handler->del_list);
++	spin_unlock_irqrestore(&handler->lock, flags);
++	queue_work(handler->wq, &handler->del_work);
++}
++
+ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg)
+ {
+ 	struct mmu_rb_node *rbnode, *ptr;
+@@ -197,6 +236,10 @@ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg)
+ 
+ 	spin_lock_irqsave(&handler->lock, flags);
+ 	list_for_each_entry_safe(rbnode, ptr, &handler->lru_list, list) {
++		/* refcount == 1 implies mmu_rb_handler has only rbnode ref */
++		if (kref_read(&rbnode->refcount) > 1)
++			continue;
++
+ 		if (handler->ops->evict(handler->ops_arg, rbnode, evict_arg,
+ 					&stop)) {
+ 			__mmu_int_rb_remove(rbnode, &handler->root);
+@@ -209,7 +252,7 @@ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg)
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+ 	list_for_each_entry_safe(rbnode, ptr, &del_list, list) {
+-		handler->ops->remove(handler->ops_arg, rbnode);
++		kref_put(&rbnode->refcount, release_immediate);
+ 	}
+ }
+ 
+@@ -221,7 +264,6 @@ static int mmu_notifier_range_start(struct mmu_notifier *mn,
+ 	struct rb_root_cached *root = &handler->root;
+ 	struct mmu_rb_node *node, *ptr = NULL;
+ 	unsigned long flags;
+-	bool added = false;
+ 
+ 	spin_lock_irqsave(&handler->lock, flags);
+ 	for (node = __mmu_int_rb_iter_first(root, range->start, range->end-1);
+@@ -230,38 +272,16 @@ static int mmu_notifier_range_start(struct mmu_notifier *mn,
+ 		ptr = __mmu_int_rb_iter_next(node, range->start,
+ 					     range->end - 1);
+ 		trace_hfi1_mmu_mem_invalidate(node->addr, node->len);
+-		if (handler->ops->invalidate(handler->ops_arg, node)) {
+-			__mmu_int_rb_remove(node, root);
+-			/* move from LRU list to delete list */
+-			list_move(&node->list, &handler->del_list);
+-			added = true;
+-		}
++		/* Remove from rb tree and lru_list. */
++		__mmu_int_rb_remove(node, root);
++		list_del_init(&node->list);
++		kref_put(&node->refcount, release_nolock);
+ 	}
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+-	if (added)
+-		queue_work(handler->wq, &handler->del_work);
+-
+ 	return 0;
+ }
+ 
+-/*
+- * Call the remove function for the given handler and the list.  This
+- * is expected to be called with a delete list extracted from handler.
+- * The caller should not be holding the handler lock.
+- */
+-static void do_remove(struct mmu_rb_handler *handler,
+-		      struct list_head *del_list)
+-{
+-	struct mmu_rb_node *node;
+-
+-	while (!list_empty(del_list)) {
+-		node = list_first_entry(del_list, struct mmu_rb_node, list);
+-		list_del(&node->list);
+-		handler->ops->remove(handler->ops_arg, node);
+-	}
+-}
+-
+ /*
+  * Work queue function to remove all nodes that have been queued up to
+  * be removed.  The key feature is that mm->mmap_lock is not being held
+@@ -274,11 +294,16 @@ static void handle_remove(struct work_struct *work)
+ 						del_work);
+ 	struct list_head del_list;
+ 	unsigned long flags;
++	struct mmu_rb_node *node;
+ 
+ 	/* remove anything that is queued to get removed */
+ 	spin_lock_irqsave(&handler->lock, flags);
+ 	list_replace_init(&handler->del_list, &del_list);
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+-	do_remove(handler, &del_list);
++	while (!list_empty(&del_list)) {
++		node = list_first_entry(&del_list, struct mmu_rb_node, list);
++		list_del(&node->list);
++		handler->ops->remove(handler->ops_arg, node);
++	}
+ }
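[The mmu_rb rework keys every teardown on a kref with two release callbacks: release_immediate() for callers that hold no lock (ops->remove() may sleep) and release_nolock()/hfi1_mmu_rb_release() for callers that do, which only queue deferred work. A minimal userspace model of picking the release path per call site — names illustrative:

#include <stdatomic.h>
#include <stdio.h>

struct node {
	atomic_int refcount;
};

/* like kref_put(): run the given release only on the final put */
static void put(struct node *n, void (*release)(struct node *))
{
	if (atomic_fetch_sub(&n->refcount, 1) == 1)
		release(n);
}

static void release_immediate(struct node *n)
{
	printf("remove now (no lock held, may sleep)\n");
}

static void release_deferred(struct node *n)
{
	printf("queue to workqueue (lock held, must not sleep)\n");
}

int main(void)
{
	struct node n = { 2 };

	put(&n, release_deferred);	/* not the last ref: no-op */
	put(&n, release_immediate);	/* last ref: released here */
	return 0;
}
]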
+diff --git a/drivers/infiniband/hw/hfi1/mmu_rb.h b/drivers/infiniband/hw/hfi1/mmu_rb.h
+index c4da064188c9d..82c505a04fc6d 100644
+--- a/drivers/infiniband/hw/hfi1/mmu_rb.h
++++ b/drivers/infiniband/hw/hfi1/mmu_rb.h
+@@ -16,6 +16,7 @@ struct mmu_rb_node {
+ 	struct rb_node node;
+ 	struct mmu_rb_handler *handler;
+ 	struct list_head list;
++	struct kref refcount;
+ };
+ 
+ /*
+@@ -61,6 +62,8 @@ int hfi1_mmu_rb_register(void *ops_arg,
+ void hfi1_mmu_rb_unregister(struct mmu_rb_handler *handler);
+ int hfi1_mmu_rb_insert(struct mmu_rb_handler *handler,
+ 		       struct mmu_rb_node *mnode);
++void hfi1_mmu_rb_release(struct kref *refcount);
++
+ void hfi1_mmu_rb_evict(struct mmu_rb_handler *handler, void *evict_arg);
+ struct mmu_rb_node *hfi1_mmu_rb_get_first(struct mmu_rb_handler *handler,
+ 					  unsigned long addr,
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index bb2552dd29c1e..26c62162759ba 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -1593,7 +1593,20 @@ static inline void sdma_unmap_desc(
+ 	struct hfi1_devdata *dd,
+ 	struct sdma_desc *descp)
+ {
+-	system_descriptor_complete(dd, descp);
++	switch (sdma_mapping_type(descp)) {
++	case SDMA_MAP_SINGLE:
++		dma_unmap_single(&dd->pcidev->dev, sdma_mapping_addr(descp),
++				 sdma_mapping_len(descp), DMA_TO_DEVICE);
++		break;
++	case SDMA_MAP_PAGE:
++		dma_unmap_page(&dd->pcidev->dev, sdma_mapping_addr(descp),
++			       sdma_mapping_len(descp), DMA_TO_DEVICE);
++		break;
++	}
++
++	if (descp->pinning_ctx && descp->ctx_put)
++		descp->ctx_put(descp->pinning_ctx);
++	descp->pinning_ctx = NULL;
+ }
+ 
+ /*
+@@ -3113,8 +3126,8 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx,
+ 
+ 		/* Add descriptor for coalesce buffer */
+ 		tx->desc_limit = MAX_DESC;
+-		return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, NULL, tx,
+-					 addr, tx->tlen);
++		return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, tx,
++					 addr, tx->tlen, NULL, NULL, NULL);
+ 	}
+ 
+ 	return 1;
+@@ -3157,9 +3170,9 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
+ 	make_tx_sdma_desc(
+ 		tx,
+ 		SDMA_MAP_NONE,
+-		NULL,
+ 		dd->sdma_pad_phys,
+-		sizeof(u32) - (tx->packet_len & (sizeof(u32) - 1)));
++		sizeof(u32) - (tx->packet_len & (sizeof(u32) - 1)),
++		NULL, NULL, NULL);
+ 	tx->num_desc++;
+ 	_sdma_close_tx(dd, tx);
+ 	return rval;
+diff --git a/drivers/infiniband/hw/hfi1/sdma.h b/drivers/infiniband/hw/hfi1/sdma.h
+index 95aaec14c6c28..7fdebab202c4f 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.h
++++ b/drivers/infiniband/hw/hfi1/sdma.h
+@@ -594,9 +594,11 @@ static inline dma_addr_t sdma_mapping_addr(struct sdma_desc *d)
+ static inline void make_tx_sdma_desc(
+ 	struct sdma_txreq *tx,
+ 	int type,
+-	void *pinning_ctx,
+ 	dma_addr_t addr,
+-	size_t len)
++	size_t len,
++	void *pinning_ctx,
++	void (*ctx_get)(void *),
++	void (*ctx_put)(void *))
+ {
+ 	struct sdma_desc *desc = &tx->descp[tx->num_desc];
+ 
+@@ -613,7 +615,11 @@ static inline void make_tx_sdma_desc(
+ 				<< SDMA_DESC0_PHY_ADDR_SHIFT) |
+ 			(((u64)len & SDMA_DESC0_BYTE_COUNT_MASK)
+ 				<< SDMA_DESC0_BYTE_COUNT_SHIFT);
++
+ 	desc->pinning_ctx = pinning_ctx;
++	desc->ctx_put = ctx_put;
++	if (pinning_ctx && ctx_get)
++		ctx_get(pinning_ctx);
+ }
+ 
+ /* helper to extend txreq */
+@@ -645,18 +651,20 @@ static inline void _sdma_close_tx(struct hfi1_devdata *dd,
+ static inline int _sdma_txadd_daddr(
+ 	struct hfi1_devdata *dd,
+ 	int type,
+-	void *pinning_ctx,
+ 	struct sdma_txreq *tx,
+ 	dma_addr_t addr,
+-	u16 len)
++	u16 len,
++	void *pinning_ctx,
++	void (*ctx_get)(void *),
++	void (*ctx_put)(void *))
+ {
+ 	int rval = 0;
+ 
+ 	make_tx_sdma_desc(
+ 		tx,
+ 		type,
+-		pinning_ctx,
+-		addr, len);
++		addr, len,
++		pinning_ctx, ctx_get, ctx_put);
+ 	WARN_ON(len > tx->tlen);
+ 	tx->num_desc++;
+ 	tx->tlen -= len;
+@@ -676,11 +684,18 @@ static inline int _sdma_txadd_daddr(
+ /**
+  * sdma_txadd_page() - add a page to the sdma_txreq
+  * @dd: the device to use for mapping
+- * @pinning_ctx: context to be released at descriptor retirement
+  * @tx: tx request to which the page is added
+  * @page: page to map
+  * @offset: offset within the page
+  * @len: length in bytes
++ * @pinning_ctx: context to be stored on struct sdma_desc .pinning_ctx. Not
++ *               added if coalesce buffer is used. E.g. pointer to pinned-page
++ *               cache entry for the sdma_desc.
++ * @ctx_get: optional function to take reference to @pinning_ctx. Not called if
++ *           @pinning_ctx is NULL.
++ * @ctx_put: optional function to release reference to @pinning_ctx after
++ *           sdma_desc completes. May be called in interrupt context so must
++ *           not sleep. Not called if @pinning_ctx is NULL.
+  *
+  * This is used to add a page/offset/length descriptor.
+  *
+@@ -692,11 +707,13 @@ static inline int _sdma_txadd_daddr(
+  */
+ static inline int sdma_txadd_page(
+ 	struct hfi1_devdata *dd,
+-	void *pinning_ctx,
+ 	struct sdma_txreq *tx,
+ 	struct page *page,
+ 	unsigned long offset,
+-	u16 len)
++	u16 len,
++	void *pinning_ctx,
++	void (*ctx_get)(void *),
++	void (*ctx_put)(void *))
+ {
+ 	dma_addr_t addr;
+ 	int rval;
+@@ -720,7 +737,8 @@ static inline int sdma_txadd_page(
+ 		return -ENOSPC;
+ 	}
+ 
+-	return _sdma_txadd_daddr(dd, SDMA_MAP_PAGE, pinning_ctx, tx, addr, len);
++	return _sdma_txadd_daddr(dd, SDMA_MAP_PAGE, tx, addr, len,
++				 pinning_ctx, ctx_get, ctx_put);
+ }
+ 
+ /**
+@@ -754,8 +772,8 @@ static inline int sdma_txadd_daddr(
+ 			return rval;
+ 	}
+ 
+-	return _sdma_txadd_daddr(dd, SDMA_MAP_NONE, NULL, tx,
+-				 addr, len);
++	return _sdma_txadd_daddr(dd, SDMA_MAP_NONE, tx, addr, len,
++				 NULL, NULL, NULL);
+ }
+ 
+ /**
+@@ -801,7 +819,8 @@ static inline int sdma_txadd_kvaddr(
+ 		return -ENOSPC;
+ 	}
+ 
+-	return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, NULL, tx, addr, len);
++	return _sdma_txadd_daddr(dd, SDMA_MAP_SINGLE, tx, addr, len,
++				 NULL, NULL, NULL);
+ }
+ 
+ struct iowait_work;
+@@ -1034,6 +1053,4 @@ u16 sdma_get_descq_cnt(void);
+ extern uint mod_num_sdma;
+ 
+ void sdma_update_lmc(struct hfi1_devdata *dd, u64 mask, u32 lid);
+-
+-void system_descriptor_complete(struct hfi1_devdata *dd, struct sdma_desc *descp);
+ #endif
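[The sdma_txadd_page() signature change threads optional ctx_get/ctx_put callbacks through to each descriptor, so every descriptor that stores a pinning context holds its own reference until retirement. A standalone model of that handshake — the types are stand-ins for struct sdma_desc and the pinned-page cache entry:

#include <stdatomic.h>
#include <stdio.h>

struct pin_ctx {
	atomic_int ref;
};

struct desc {
	struct pin_ctx *ctx;
	void (*ctx_put)(struct pin_ctx *);
};

static void ctx_get(struct pin_ctx *c)
{
	atomic_fetch_add(&c->ref, 1);
}

static void ctx_put(struct pin_ctx *c)
{
	if (atomic_fetch_sub(&c->ref, 1) == 1)
		printf("unpin pages\n");	/* final put frees the pinning */
}

static void desc_add(struct desc *d, struct pin_ctx *c)
{
	d->ctx = c;
	d->ctx_put = ctx_put;
	if (c)
		ctx_get(c);		/* the descriptor owns a reference */
}

static void desc_retire(struct desc *d)
{
	if (d->ctx && d->ctx_put)
		d->ctx_put(d->ctx);	/* interrupt-safe, never sleeps */
	d->ctx = NULL;
}

int main(void)
{
	struct pin_ctx c = { 1 };	/* the caller's "safety" reference */
	struct desc d = { 0 };

	desc_add(&d, &c);
	desc_retire(&d);		/* descriptor drops its reference */
	ctx_put(&c);			/* caller drops the safety reference */
	return 0;
}
]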
+diff --git a/drivers/infiniband/hw/hfi1/sdma_txreq.h b/drivers/infiniband/hw/hfi1/sdma_txreq.h
+index fad946cb5e0d8..85ae7293c2741 100644
+--- a/drivers/infiniband/hw/hfi1/sdma_txreq.h
++++ b/drivers/infiniband/hw/hfi1/sdma_txreq.h
+@@ -20,6 +20,8 @@ struct sdma_desc {
+ 	/* private:  don't use directly */
+ 	u64 qw[2];
+ 	void *pinning_ctx;
++	/* Release reference to @pinning_ctx. May be called in interrupt context. Must not sleep. */
++	void (*ctx_put)(void *ctx);
+ };
+ 
+ /**
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c
+index ae58b48afe074..02bd62b857b75 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.c
++++ b/drivers/infiniband/hw/hfi1/user_sdma.c
+@@ -62,18 +62,14 @@ static int defer_packet_queue(
+ static void activate_packet_queue(struct iowait *wait, int reason);
+ static bool sdma_rb_filter(struct mmu_rb_node *node, unsigned long addr,
+ 			   unsigned long len);
+-static int sdma_rb_insert(void *arg, struct mmu_rb_node *mnode);
+ static int sdma_rb_evict(void *arg, struct mmu_rb_node *mnode,
+ 			 void *arg2, bool *stop);
+ static void sdma_rb_remove(void *arg, struct mmu_rb_node *mnode);
+-static int sdma_rb_invalidate(void *arg, struct mmu_rb_node *mnode);
+ 
+ static struct mmu_rb_ops sdma_rb_ops = {
+ 	.filter = sdma_rb_filter,
+-	.insert = sdma_rb_insert,
+ 	.evict = sdma_rb_evict,
+ 	.remove = sdma_rb_remove,
+-	.invalidate = sdma_rb_invalidate
+ };
+ 
+ static int add_system_pages_to_sdma_packet(struct user_sdma_request *req,
+@@ -247,14 +243,14 @@ int hfi1_user_sdma_free_queues(struct hfi1_filedata *fd,
+ 		spin_unlock(&fd->pq_rcu_lock);
+ 		synchronize_srcu(&fd->pq_srcu);
+ 		/* at this point there can be no more new requests */
+-		if (pq->handler)
+-			hfi1_mmu_rb_unregister(pq->handler);
+ 		iowait_sdma_drain(&pq->busy);
+ 		/* Wait until all requests have been freed. */
+ 		wait_event_interruptible(
+ 			pq->wait,
+ 			!atomic_read(&pq->n_reqs));
+ 		kfree(pq->reqs);
++		if (pq->handler)
++			hfi1_mmu_rb_unregister(pq->handler);
+ 		bitmap_free(pq->req_in_use);
+ 		kmem_cache_destroy(pq->txreq_cache);
+ 		flush_pq_iowait(pq);
+@@ -1275,25 +1271,17 @@ static void free_system_node(struct sdma_mmu_node *node)
+ 	kfree(node);
+ }
+ 
+-static inline void acquire_node(struct sdma_mmu_node *node)
+-{
+-	atomic_inc(&node->refcount);
+-	WARN_ON(atomic_read(&node->refcount) < 0);
+-}
+-
+-static inline void release_node(struct mmu_rb_handler *handler,
+-				struct sdma_mmu_node *node)
+-{
+-	atomic_dec(&node->refcount);
+-	WARN_ON(atomic_read(&node->refcount) < 0);
+-}
+-
++/*
++ * kref_get()'s an additional kref on the returned rb_node to prevent rb_node
++ * from being released until after rb_node is assigned to an SDMA descriptor
++ * (struct sdma_desc) under add_system_iovec_to_sdma_packet(), even if the
++ * virtual address range for rb_node is invalidated between now and then.
++ */
+ static struct sdma_mmu_node *find_system_node(struct mmu_rb_handler *handler,
+ 					      unsigned long start,
+ 					      unsigned long end)
+ {
+ 	struct mmu_rb_node *rb_node;
+-	struct sdma_mmu_node *node;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&handler->lock, flags);
+@@ -1302,11 +1290,12 @@ static struct sdma_mmu_node *find_system_node(struct mmu_rb_handler *handler,
+ 		spin_unlock_irqrestore(&handler->lock, flags);
+ 		return NULL;
+ 	}
+-	node = container_of(rb_node, struct sdma_mmu_node, rb);
+-	acquire_node(node);
++
++	/* "safety" kref to prevent release before add_system_iovec_to_sdma_packet() */
++	kref_get(&rb_node->refcount);
+ 	spin_unlock_irqrestore(&handler->lock, flags);
+ 
+-	return node;
++	return container_of(rb_node, struct sdma_mmu_node, rb);
+ }
+ 
+ static int pin_system_pages(struct user_sdma_request *req,
+@@ -1355,6 +1344,13 @@ retry:
+ 	return 0;
+ }
+ 
++/*
++ * kref refcount on *node_p will be 2 on successful addition: one kref from
++ * kref_init() for mmu_rb_handler and one kref to prevent *node_p from being
++ * released until after *node_p is assigned to an SDMA descriptor (struct
++ * sdma_desc) under add_system_iovec_to_sdma_packet(), even if the virtual
++ * address range for *node_p is invalidated between now and then.
++ */
+ static int add_system_pinning(struct user_sdma_request *req,
+ 			      struct sdma_mmu_node **node_p,
+ 			      unsigned long start, unsigned long len)
+@@ -1368,6 +1364,12 @@ static int add_system_pinning(struct user_sdma_request *req,
+ 	if (!node)
+ 		return -ENOMEM;
+ 
++	/* First kref "moves" to mmu_rb_handler */
++	kref_init(&node->rb.refcount);
++
++	/* "safety" kref to prevent release before add_system_iovec_to_sdma_packet() */
++	kref_get(&node->rb.refcount);
++
+ 	node->pq = pq;
+ 	ret = pin_system_pages(req, start, len, node, PFN_DOWN(len));
+ 	if (ret == 0) {
+@@ -1431,15 +1433,15 @@ static int get_system_cache_entry(struct user_sdma_request *req,
+ 			return 0;
+ 		}
+ 
+-		SDMA_DBG(req, "prepend: node->rb.addr %lx, node->refcount %d",
+-			 node->rb.addr, atomic_read(&node->refcount));
++		SDMA_DBG(req, "prepend: node->rb.addr %lx, node->rb.refcount %d",
++			 node->rb.addr, kref_read(&node->rb.refcount));
+ 		prepend_len = node->rb.addr - start;
+ 
+ 		/*
+ 		 * This node will not be returned, instead a new node
+ 		 * will be. So release the reference.
+ 		 */
+-		release_node(handler, node);
++		kref_put(&node->rb.refcount, hfi1_mmu_rb_release);
+ 
+ 		/* Prepend a node to cover the beginning of the allocation */
+ 		ret = add_system_pinning(req, node_p, start, prepend_len);
+@@ -1451,6 +1453,20 @@ static int get_system_cache_entry(struct user_sdma_request *req,
+ 	}
+ }
+ 
++static void sdma_mmu_rb_node_get(void *ctx)
++{
++	struct mmu_rb_node *node = ctx;
++
++	kref_get(&node->refcount);
++}
++
++static void sdma_mmu_rb_node_put(void *ctx)
++{
++	struct sdma_mmu_node *node = ctx;
++
++	kref_put(&node->rb.refcount, hfi1_mmu_rb_release);
++}
++
+ static int add_mapping_to_sdma_packet(struct user_sdma_request *req,
+ 				      struct user_sdma_txreq *tx,
+ 				      struct sdma_mmu_node *cache_entry,
+@@ -1494,9 +1510,12 @@ static int add_mapping_to_sdma_packet(struct user_sdma_request *req,
+ 			ctx = cache_entry;
+ 		}
+ 
+-		ret = sdma_txadd_page(pq->dd, ctx, &tx->txreq,
++		ret = sdma_txadd_page(pq->dd, &tx->txreq,
+ 				      cache_entry->pages[page_index],
+-				      page_offset, from_this_page);
++				      page_offset, from_this_page,
++				      ctx,
++				      sdma_mmu_rb_node_get,
++				      sdma_mmu_rb_node_put);
+ 		if (ret) {
+ 			/*
+ 			 * When there's a failure, the entire request is freed by
+@@ -1518,8 +1537,6 @@ static int add_system_iovec_to_sdma_packet(struct user_sdma_request *req,
+ 					   struct user_sdma_iovec *iovec,
+ 					   size_t from_this_iovec)
+ {
+-	struct mmu_rb_handler *handler = req->pq->handler;
+-
+ 	while (from_this_iovec > 0) {
+ 		struct sdma_mmu_node *cache_entry;
+ 		size_t from_this_cache_entry;
+@@ -1540,15 +1557,15 @@ static int add_system_iovec_to_sdma_packet(struct user_sdma_request *req,
+ 
+ 		ret = add_mapping_to_sdma_packet(req, tx, cache_entry, start,
+ 						 from_this_cache_entry);
++
++		/*
++		 * Done adding cache_entry to zero or more sdma_desc. Can
++		 * kref_put() the "safety" kref taken under
++		 * get_system_cache_entry().
++		 */
++		kref_put(&cache_entry->rb.refcount, hfi1_mmu_rb_release);
++
+ 		if (ret) {
+-			/*
+-			 * We're guaranteed that there will be no descriptor
+-			 * completion callback that releases this node
+-			 * because only the last descriptor referencing it
+-			 * has a context attached, and a failure means the
+-			 * last descriptor was never added.
+-			 */
+-			release_node(handler, cache_entry);
+ 			SDMA_DBG(req, "add system segment failed %d", ret);
+ 			return ret;
+ 		}
+@@ -1599,42 +1616,12 @@ static int add_system_pages_to_sdma_packet(struct user_sdma_request *req,
+ 	return 0;
+ }
+ 
+-void system_descriptor_complete(struct hfi1_devdata *dd,
+-				struct sdma_desc *descp)
+-{
+-	switch (sdma_mapping_type(descp)) {
+-	case SDMA_MAP_SINGLE:
+-		dma_unmap_single(&dd->pcidev->dev, sdma_mapping_addr(descp),
+-				 sdma_mapping_len(descp), DMA_TO_DEVICE);
+-		break;
+-	case SDMA_MAP_PAGE:
+-		dma_unmap_page(&dd->pcidev->dev, sdma_mapping_addr(descp),
+-			       sdma_mapping_len(descp), DMA_TO_DEVICE);
+-		break;
+-	}
+-
+-	if (descp->pinning_ctx) {
+-		struct sdma_mmu_node *node = descp->pinning_ctx;
+-
+-		release_node(node->rb.handler, node);
+-	}
+-}
+-
+ static bool sdma_rb_filter(struct mmu_rb_node *node, unsigned long addr,
+ 			   unsigned long len)
+ {
+ 	return (bool)(node->addr == addr);
+ }
+ 
+-static int sdma_rb_insert(void *arg, struct mmu_rb_node *mnode)
+-{
+-	struct sdma_mmu_node *node =
+-		container_of(mnode, struct sdma_mmu_node, rb);
+-
+-	atomic_inc(&node->refcount);
+-	return 0;
+-}
+-
+ /*
+  * Return 1 to remove the node from the rb tree and call the remove op.
+  *
+@@ -1647,10 +1634,6 @@ static int sdma_rb_evict(void *arg, struct mmu_rb_node *mnode,
+ 		container_of(mnode, struct sdma_mmu_node, rb);
+ 	struct evict_data *evict_data = evict_arg;
+ 
+-	/* is this node still being used? */
+-	if (atomic_read(&node->refcount))
+-		return 0; /* keep this node */
+-
+ 	/* this node will be evicted, add its pages to our count */
+ 	evict_data->cleared += node->npages;
+ 
+@@ -1668,13 +1651,3 @@ static void sdma_rb_remove(void *arg, struct mmu_rb_node *mnode)
+ 
+ 	free_system_node(node);
+ }
+-
+-static int sdma_rb_invalidate(void *arg, struct mmu_rb_node *mnode)
+-{
+-	struct sdma_mmu_node *node =
+-		container_of(mnode, struct sdma_mmu_node, rb);
+-
+-	if (!atomic_read(&node->refcount))
+-		return 1;
+-	return 0;
+-}
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.h b/drivers/infiniband/hw/hfi1/user_sdma.h
+index a241836371dc1..548347d4c5bc2 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.h
++++ b/drivers/infiniband/hw/hfi1/user_sdma.h
+@@ -104,7 +104,6 @@ struct hfi1_user_sdma_comp_q {
+ struct sdma_mmu_node {
+ 	struct mmu_rb_node rb;
+ 	struct hfi1_user_sdma_pkt_q *pq;
+-	atomic_t refcount;
+ 	struct page **pages;
+ 	unsigned int npages;
+ };
+diff --git a/drivers/infiniband/hw/hfi1/vnic_sdma.c b/drivers/infiniband/hw/hfi1/vnic_sdma.c
+index 727eedfba332a..cc6324d2d1ddc 100644
+--- a/drivers/infiniband/hw/hfi1/vnic_sdma.c
++++ b/drivers/infiniband/hw/hfi1/vnic_sdma.c
+@@ -64,11 +64,11 @@ static noinline int build_vnic_ulp_payload(struct sdma_engine *sde,
+ 
+ 		/* combine physically continuous fragments later? */
+ 		ret = sdma_txadd_page(sde->dd,
+-				      NULL,
+ 				      &tx->txreq,
+ 				      skb_frag_page(frag),
+ 				      skb_frag_off(frag),
+-				      skb_frag_size(frag));
++				      skb_frag_size(frag),
++				      NULL, NULL, NULL);
+ 		if (unlikely(ret))
+ 			goto bail_txadd;
+ 	}
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index aa8a08d1c0145..f30274986c0da 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -595,11 +595,12 @@ int hns_roce_table_get(struct hns_roce_dev *hr_dev,
+ 	}
+ 
+ 	/* Set HEM base address(128K/page, pa) to Hardware */
+-	if (hr_dev->hw->set_hem(hr_dev, table, obj, HEM_HOP_STEP_DIRECT)) {
++	ret = hr_dev->hw->set_hem(hr_dev, table, obj, HEM_HOP_STEP_DIRECT);
++	if (ret) {
+ 		hns_roce_free_hem(hr_dev, table->hem[i]);
+ 		table->hem[i] = NULL;
+-		ret = -ENODEV;
+-		dev_err(dev, "set HEM base address to HW failed.\n");
++		dev_err(dev, "set HEM base address to HW failed, ret = %d.\n",
++			ret);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/infiniband/hw/irdma/uk.c b/drivers/infiniband/hw/irdma/uk.c
+index 16183e894da77..dd428d915c175 100644
+--- a/drivers/infiniband/hw/irdma/uk.c
++++ b/drivers/infiniband/hw/irdma/uk.c
+@@ -93,16 +93,18 @@ static int irdma_nop_1(struct irdma_qp_uk *qp)
+  */
+ void irdma_clr_wqes(struct irdma_qp_uk *qp, u32 qp_wqe_idx)
+ {
+-	__le64 *wqe;
++	struct irdma_qp_quanta *sq;
+ 	u32 wqe_idx;
+ 
+ 	if (!(qp_wqe_idx & 0x7F)) {
+ 		wqe_idx = (qp_wqe_idx + 128) % qp->sq_ring.size;
+-		wqe = qp->sq_base[wqe_idx].elem;
++		sq = qp->sq_base + wqe_idx;
+ 		if (wqe_idx)
+-			memset(wqe, qp->swqe_polarity ? 0 : 0xFF, 0x1000);
++			memset(sq, qp->swqe_polarity ? 0 : 0xFF,
++			       128 * sizeof(*sq));
+ 		else
+-			memset(wqe, qp->swqe_polarity ? 0xFF : 0, 0x1000);
++			memset(sq, qp->swqe_polarity ? 0xFF : 0,
++			       128 * sizeof(*sq));
+ 	}
+ }
+ 
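[The irdma hunk replaces the hard-coded 0x1000-byte memset with 128 * sizeof(*sq), so the cleared size tracks the quantum type instead of a magic number (by the hunk's own arithmetic, 0x1000 is 128 quanta of 32 bytes). Standalone illustration with a stand-in quanta type:

#include <stdio.h>
#include <string.h>

struct quanta {
	unsigned char elem[32];		/* stand-in: one 32-byte quantum */
};

int main(void)
{
	struct quanta sq[256];

	/* clear exactly 128 quanta, however large one quantum is */
	memset(sq, 0xFF, 128 * sizeof(*sq));
	printf("%zu bytes cleared\n", 128 * sizeof(*sq));
	return 0;
}
]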
+diff --git a/drivers/infiniband/sw/rxe/rxe_mw.c b/drivers/infiniband/sw/rxe/rxe_mw.c
+index afa5ce1a71166..a7ec57ab8fadd 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mw.c
++++ b/drivers/infiniband/sw/rxe/rxe_mw.c
+@@ -48,7 +48,7 @@ int rxe_dealloc_mw(struct ib_mw *ibmw)
+ }
+ 
+ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+-			 struct rxe_mw *mw, struct rxe_mr *mr)
++			 struct rxe_mw *mw, struct rxe_mr *mr, int access)
+ {
+ 	if (mw->ibmw.type == IB_MW_TYPE_1) {
+ 		if (unlikely(mw->state != RXE_MW_STATE_VALID)) {
+@@ -58,7 +58,7 @@ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+ 		}
+ 
+ 		/* o10-36.2.2 */
+-		if (unlikely((mw->access & IB_ZERO_BASED))) {
++		if (unlikely((access & IB_ZERO_BASED))) {
+ 			rxe_dbg_mw(mw, "attempt to bind a zero based type 1 MW\n");
+ 			return -EINVAL;
+ 		}
+@@ -104,7 +104,7 @@ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+ 	}
+ 
+ 	/* C10-74 */
+-	if (unlikely((mw->access &
++	if (unlikely((access &
+ 		      (IB_ACCESS_REMOTE_WRITE | IB_ACCESS_REMOTE_ATOMIC)) &&
+ 		     !(mr->access & IB_ACCESS_LOCAL_WRITE))) {
+ 		rxe_dbg_mw(mw,
+@@ -113,7 +113,7 @@ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+ 	}
+ 
+ 	/* C10-75 */
+-	if (mw->access & IB_ZERO_BASED) {
++	if (access & IB_ZERO_BASED) {
+ 		if (unlikely(wqe->wr.wr.mw.length > mr->ibmr.length)) {
+ 			rxe_dbg_mw(mw,
+ 				"attempt to bind a ZB MW outside of the MR\n");
+@@ -133,12 +133,12 @@ static int rxe_check_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+ }
+ 
+ static void rxe_do_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe,
+-		      struct rxe_mw *mw, struct rxe_mr *mr)
++		      struct rxe_mw *mw, struct rxe_mr *mr, int access)
+ {
+ 	u32 key = wqe->wr.wr.mw.rkey & 0xff;
+ 
+ 	mw->rkey = (mw->rkey & ~0xff) | key;
+-	mw->access = wqe->wr.wr.mw.access;
++	mw->access = access;
+ 	mw->state = RXE_MW_STATE_VALID;
+ 	mw->addr = wqe->wr.wr.mw.addr;
+ 	mw->length = wqe->wr.wr.mw.length;
+@@ -169,6 +169,7 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+ 	struct rxe_dev *rxe = to_rdev(qp->ibqp.device);
+ 	u32 mw_rkey = wqe->wr.wr.mw.mw_rkey;
+ 	u32 mr_lkey = wqe->wr.wr.mw.mr_lkey;
++	int access = wqe->wr.wr.mw.access;
+ 
+ 	mw = rxe_pool_get_index(&rxe->mw_pool, mw_rkey >> 8);
+ 	if (unlikely(!mw)) {
+@@ -198,11 +199,11 @@ int rxe_bind_mw(struct rxe_qp *qp, struct rxe_send_wqe *wqe)
+ 
+ 	spin_lock_bh(&mw->lock);
+ 
+-	ret = rxe_check_bind_mw(qp, wqe, mw, mr);
++	ret = rxe_check_bind_mw(qp, wqe, mw, mr, access);
+ 	if (ret)
+ 		goto err_unlock;
+ 
+-	rxe_do_bind_mw(qp, wqe, mw, mr);
++	rxe_do_bind_mw(qp, wqe, mw, mr, access);
+ err_unlock:
+ 	spin_unlock_bh(&mw->lock);
+ err_drop_mr:
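[The rxe_mw fix snapshots wqe->wr.wr.mw.access into a local once and uses the same value for both the checks and the bind, so a WQE rewritten concurrently from userspace cannot pass validation with one value and be bound with another. The pattern, reduced to a standalone sketch:

#include <stdio.h>

#define ACCESS_ZERO_BASED 0x1

struct wqe {
	volatile int access;		/* may be rewritten concurrently */
};

static int check(int access)
{
	return (access & ACCESS_ZERO_BASED) ? -1 : 0;
}

int main(void)
{
	struct wqe w = { 0 };
	int mw_access = 0;
	int access = w.access;		/* single snapshot */

	if (check(access) == 0)
		mw_access = access;	/* bind the value that was checked */
	(void)mw_access;
	return 0;
}
]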
+diff --git a/drivers/input/Kconfig b/drivers/input/Kconfig
+index 735f90b74ee5a..3bdbd34314b34 100644
+--- a/drivers/input/Kconfig
++++ b/drivers/input/Kconfig
+@@ -168,7 +168,7 @@ config INPUT_EVBUG
+ 
+ config INPUT_KUNIT_TEST
+ 	tristate "KUnit tests for Input" if !KUNIT_ALL_TESTS
+-	depends on INPUT && KUNIT=y
++	depends on INPUT && KUNIT
+ 	default KUNIT_ALL_TESTS
+ 	help
+ 	  Say Y here if you want to build the KUnit tests for the input
+diff --git a/drivers/input/misc/adxl34x.c b/drivers/input/misc/adxl34x.c
+index eecca671b5884..a3f45e0ee0c75 100644
+--- a/drivers/input/misc/adxl34x.c
++++ b/drivers/input/misc/adxl34x.c
+@@ -817,8 +817,7 @@ struct adxl34x *adxl34x_probe(struct device *dev, int irq,
+ 	AC_WRITE(ac, POWER_CTL, 0);
+ 
+ 	err = request_threaded_irq(ac->irq, NULL, adxl34x_irq,
+-				   IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
+-				   dev_name(dev), ac);
++				   IRQF_ONESHOT, dev_name(dev), ac);
+ 	if (err) {
+ 		dev_err(dev, "irq %d busy?\n", ac->irq);
+ 		goto err_free_mem;
+diff --git a/drivers/input/misc/drv260x.c b/drivers/input/misc/drv260x.c
+index 8a9ebfc04a2d9..85371fa1a03ed 100644
+--- a/drivers/input/misc/drv260x.c
++++ b/drivers/input/misc/drv260x.c
+@@ -435,6 +435,7 @@ static int drv260x_init(struct drv260x_data *haptics)
+ 	}
+ 
+ 	do {
++		usleep_range(15000, 15500);
+ 		error = regmap_read(haptics->regmap, DRV260X_GO, &cal_buf);
+ 		if (error) {
+ 			dev_err(&haptics->client->dev,
+diff --git a/drivers/input/misc/pm8941-pwrkey.c b/drivers/input/misc/pm8941-pwrkey.c
+index b6a27ebae977b..74d77d8aaeff2 100644
+--- a/drivers/input/misc/pm8941-pwrkey.c
++++ b/drivers/input/misc/pm8941-pwrkey.c
+@@ -50,7 +50,10 @@
+ #define  PON_RESIN_PULL_UP		BIT(0)
+ 
+ #define PON_DBC_CTL			0x71
+-#define  PON_DBC_DELAY_MASK		0x7
++#define  PON_DBC_DELAY_MASK_GEN1	0x7
++#define  PON_DBC_DELAY_MASK_GEN2	0xf
++#define  PON_DBC_SHIFT_GEN1		6
++#define  PON_DBC_SHIFT_GEN2		14
+ 
+ struct pm8941_data {
+ 	unsigned int	pull_up_bit;
+@@ -247,7 +250,7 @@ static int pm8941_pwrkey_probe(struct platform_device *pdev)
+ 	struct device *parent;
+ 	struct device_node *regmap_node;
+ 	const __be32 *addr;
+-	u32 req_delay;
++	u32 req_delay, mask, delay_shift;
+ 	int error;
+ 
+ 	if (of_property_read_u32(pdev->dev.of_node, "debounce", &req_delay))
+@@ -336,12 +339,20 @@ static int pm8941_pwrkey_probe(struct platform_device *pdev)
+ 	pwrkey->input->phys = pwrkey->data->phys;
+ 
+ 	if (pwrkey->data->supports_debounce_config) {
+-		req_delay = (req_delay << 6) / USEC_PER_SEC;
++		if (pwrkey->subtype >= PON_SUBTYPE_GEN2_PRIMARY) {
++			mask = PON_DBC_DELAY_MASK_GEN2;
++			delay_shift = PON_DBC_SHIFT_GEN2;
++		} else {
++			mask = PON_DBC_DELAY_MASK_GEN1;
++			delay_shift = PON_DBC_SHIFT_GEN1;
++		}
++
++		req_delay = (req_delay << delay_shift) / USEC_PER_SEC;
+ 		req_delay = ilog2(req_delay);
+ 
+ 		error = regmap_update_bits(pwrkey->regmap,
+ 					   pwrkey->baseaddr + PON_DBC_CTL,
+-					   PON_DBC_DELAY_MASK,
++					   mask,
+ 					   req_delay);
+ 		if (error) {
+ 			dev_err(&pdev->dev, "failed to set debounce: %d\n",
+diff --git a/drivers/input/tests/input_test.c b/drivers/input/tests/input_test.c
+index e5a6c1ad2167c..0540225f02886 100644
+--- a/drivers/input/tests/input_test.c
++++ b/drivers/input/tests/input_test.c
+@@ -43,8 +43,8 @@ static void input_test_exit(struct kunit *test)
+ {
+ 	struct input_dev *input_dev = test->priv;
+ 
+-	input_unregister_device(input_dev);
+-	input_free_device(input_dev);
++	if (input_dev)
++		input_unregister_device(input_dev);
+ }
+ 
+ static void input_test_poll(struct input_dev *input) { }
+@@ -87,7 +87,7 @@ static void input_test_timestamp(struct kunit *test)
+ static void input_test_match_device_id(struct kunit *test)
+ {
+ 	struct input_dev *input_dev = test->priv;
+-	struct input_device_id id;
++	struct input_device_id id = { 0 };
+ 
+ 	/*
+ 	 * Must match when the input device bus, vendor, product, version
+diff --git a/drivers/input/touchscreen/ads7846.c b/drivers/input/touchscreen/ads7846.c
+index bb1058b1e7fd4..faea40dd66d01 100644
+--- a/drivers/input/touchscreen/ads7846.c
++++ b/drivers/input/touchscreen/ads7846.c
+@@ -24,11 +24,8 @@
+ #include <linux/interrupt.h>
+ #include <linux/slab.h>
+ #include <linux/pm.h>
+-#include <linux/of.h>
+-#include <linux/of_gpio.h>
+-#include <linux/of_device.h>
++#include <linux/property.h>
+ #include <linux/gpio/consumer.h>
+-#include <linux/gpio.h>
+ #include <linux/spi/spi.h>
+ #include <linux/spi/ads7846.h>
+ #include <linux/regulator/consumer.h>
+@@ -140,7 +137,7 @@ struct ads7846 {
+ 	int			(*filter)(void *data, int data_idx, int *val);
+ 	void			*filter_data;
+ 	int			(*get_pendown_state)(void);
+-	int			gpio_pendown;
++	struct gpio_desc	*gpio_pendown;
+ 
+ 	void			(*wait_for_sync)(void);
+ };
+@@ -223,7 +220,7 @@ static int get_pendown_state(struct ads7846 *ts)
+ 	if (ts->get_pendown_state)
+ 		return ts->get_pendown_state();
+ 
+-	return !gpio_get_value(ts->gpio_pendown);
++	return gpiod_get_value(ts->gpio_pendown);
+ }
+ 
+ static void ads7846_report_pen_up(struct ads7846 *ts)
+@@ -989,8 +986,6 @@ static int ads7846_setup_pendown(struct spi_device *spi,
+ 				 struct ads7846 *ts,
+ 				 const struct ads7846_platform_data *pdata)
+ {
+-	int err;
+-
+ 	/*
+ 	 * REVISIT when the irq can be triggered active-low, or if for some
+ 	 * reason the touchscreen isn't hooked up, we don't need to access
+@@ -999,25 +994,15 @@ static int ads7846_setup_pendown(struct spi_device *spi,
+ 
+ 	if (pdata->get_pendown_state) {
+ 		ts->get_pendown_state = pdata->get_pendown_state;
+-	} else if (gpio_is_valid(pdata->gpio_pendown)) {
+-
+-		err = devm_gpio_request_one(&spi->dev, pdata->gpio_pendown,
+-					    GPIOF_IN, "ads7846_pendown");
+-		if (err) {
+-			dev_err(&spi->dev,
+-				"failed to request/setup pendown GPIO%d: %d\n",
+-				pdata->gpio_pendown, err);
+-			return err;
++	} else {
++		ts->gpio_pendown = gpiod_get(&spi->dev, "pendown", GPIOD_IN);
++		if (IS_ERR(ts->gpio_pendown)) {
++			dev_err(&spi->dev, "failed to request pendown GPIO\n");
++			return PTR_ERR(ts->gpio_pendown);
+ 		}
+-
+-		ts->gpio_pendown = pdata->gpio_pendown;
+-
+ 		if (pdata->gpio_pendown_debounce)
+-			gpiod_set_debounce(gpio_to_desc(ts->gpio_pendown),
++			gpiod_set_debounce(ts->gpio_pendown,
+ 					   pdata->gpio_pendown_debounce);
+-	} else {
+-		dev_err(&spi->dev, "no get_pendown_state nor gpio_pendown?\n");
+-		return -EINVAL;
+ 	}
+ 
+ 	return 0;
+@@ -1119,7 +1104,6 @@ static int ads7846_setup_spi_msg(struct ads7846 *ts,
+ 	return 0;
+ }
+ 
+-#ifdef CONFIG_OF
+ static const struct of_device_id ads7846_dt_ids[] = {
+ 	{ .compatible = "ti,tsc2046",	.data = (void *) 7846 },
+ 	{ .compatible = "ti,ads7843",	.data = (void *) 7843 },
+@@ -1130,82 +1114,60 @@ static const struct of_device_id ads7846_dt_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(of, ads7846_dt_ids);
+ 
+-static const struct ads7846_platform_data *ads7846_probe_dt(struct device *dev)
++static const struct ads7846_platform_data *ads7846_get_props(struct device *dev)
+ {
+ 	struct ads7846_platform_data *pdata;
+-	struct device_node *node = dev->of_node;
+-	const struct of_device_id *match;
+ 	u32 value;
+ 
+-	if (!node) {
+-		dev_err(dev, "Device does not have associated DT data\n");
+-		return ERR_PTR(-EINVAL);
+-	}
+-
+-	match = of_match_device(ads7846_dt_ids, dev);
+-	if (!match) {
+-		dev_err(dev, "Unknown device model\n");
+-		return ERR_PTR(-EINVAL);
+-	}
+-
+ 	pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
+ 	if (!pdata)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	pdata->model = (unsigned long)match->data;
++	pdata->model = (uintptr_t)device_get_match_data(dev);
+ 
+-	of_property_read_u16(node, "ti,vref-delay-usecs",
+-			     &pdata->vref_delay_usecs);
+-	of_property_read_u16(node, "ti,vref-mv", &pdata->vref_mv);
+-	pdata->keep_vref_on = of_property_read_bool(node, "ti,keep-vref-on");
++	device_property_read_u16(dev, "ti,vref-delay-usecs",
++				 &pdata->vref_delay_usecs);
++	device_property_read_u16(dev, "ti,vref-mv", &pdata->vref_mv);
++	pdata->keep_vref_on = device_property_read_bool(dev, "ti,keep-vref-on");
+ 
+-	pdata->swap_xy = of_property_read_bool(node, "ti,swap-xy");
++	pdata->swap_xy = device_property_read_bool(dev, "ti,swap-xy");
+ 
+-	of_property_read_u16(node, "ti,settle-delay-usec",
+-			     &pdata->settle_delay_usecs);
+-	of_property_read_u16(node, "ti,penirq-recheck-delay-usecs",
+-			     &pdata->penirq_recheck_delay_usecs);
++	device_property_read_u16(dev, "ti,settle-delay-usec",
++				 &pdata->settle_delay_usecs);
++	device_property_read_u16(dev, "ti,penirq-recheck-delay-usecs",
++				 &pdata->penirq_recheck_delay_usecs);
+ 
+-	of_property_read_u16(node, "ti,x-plate-ohms", &pdata->x_plate_ohms);
+-	of_property_read_u16(node, "ti,y-plate-ohms", &pdata->y_plate_ohms);
++	device_property_read_u16(dev, "ti,x-plate-ohms", &pdata->x_plate_ohms);
++	device_property_read_u16(dev, "ti,y-plate-ohms", &pdata->y_plate_ohms);
+ 
+-	of_property_read_u16(node, "ti,x-min", &pdata->x_min);
+-	of_property_read_u16(node, "ti,y-min", &pdata->y_min);
+-	of_property_read_u16(node, "ti,x-max", &pdata->x_max);
+-	of_property_read_u16(node, "ti,y-max", &pdata->y_max);
++	device_property_read_u16(dev, "ti,x-min", &pdata->x_min);
++	device_property_read_u16(dev, "ti,y-min", &pdata->y_min);
++	device_property_read_u16(dev, "ti,x-max", &pdata->x_max);
++	device_property_read_u16(dev, "ti,y-max", &pdata->y_max);
+ 
+ 	/*
+ 	 * touchscreen-max-pressure gets parsed during
+ 	 * touchscreen_parse_properties()
+ 	 */
+-	of_property_read_u16(node, "ti,pressure-min", &pdata->pressure_min);
+-	if (!of_property_read_u32(node, "touchscreen-min-pressure", &value))
++	device_property_read_u16(dev, "ti,pressure-min", &pdata->pressure_min);
++	if (!device_property_read_u32(dev, "touchscreen-min-pressure", &value))
+ 		pdata->pressure_min = (u16) value;
+-	of_property_read_u16(node, "ti,pressure-max", &pdata->pressure_max);
++	device_property_read_u16(dev, "ti,pressure-max", &pdata->pressure_max);
+ 
+-	of_property_read_u16(node, "ti,debounce-max", &pdata->debounce_max);
+-	if (!of_property_read_u32(node, "touchscreen-average-samples", &value))
++	device_property_read_u16(dev, "ti,debounce-max", &pdata->debounce_max);
++	if (!device_property_read_u32(dev, "touchscreen-average-samples", &value))
+ 		pdata->debounce_max = (u16) value;
+-	of_property_read_u16(node, "ti,debounce-tol", &pdata->debounce_tol);
+-	of_property_read_u16(node, "ti,debounce-rep", &pdata->debounce_rep);
++	device_property_read_u16(dev, "ti,debounce-tol", &pdata->debounce_tol);
++	device_property_read_u16(dev, "ti,debounce-rep", &pdata->debounce_rep);
+ 
+-	of_property_read_u32(node, "ti,pendown-gpio-debounce",
++	device_property_read_u32(dev, "ti,pendown-gpio-debounce",
+ 			     &pdata->gpio_pendown_debounce);
+ 
+-	pdata->wakeup = of_property_read_bool(node, "wakeup-source") ||
+-			of_property_read_bool(node, "linux,wakeup");
+-
+-	pdata->gpio_pendown = of_get_named_gpio(dev->of_node, "pendown-gpio", 0);
++	pdata->wakeup = device_property_read_bool(dev, "wakeup-source") ||
++			device_property_read_bool(dev, "linux,wakeup");
+ 
+ 	return pdata;
+ }
+-#else
+-static const struct ads7846_platform_data *ads7846_probe_dt(struct device *dev)
+-{
+-	dev_err(dev, "no platform data defined\n");
+-	return ERR_PTR(-EINVAL);
+-}
+-#endif
+ 
+ static void ads7846_regulator_disable(void *regulator)
+ {
+@@ -1269,7 +1231,7 @@ static int ads7846_probe(struct spi_device *spi)
+ 
+ 	pdata = dev_get_platdata(dev);
+ 	if (!pdata) {
+-		pdata = ads7846_probe_dt(dev);
++		pdata = ads7846_get_props(dev);
+ 		if (IS_ERR(pdata))
+ 			return PTR_ERR(pdata);
+ 	}
+@@ -1426,7 +1388,7 @@ static struct spi_driver ads7846_driver = {
+ 	.driver = {
+ 		.name	= "ads7846",
+ 		.pm	= pm_sleep_ptr(&ads7846_pm),
+-		.of_match_table = of_match_ptr(ads7846_dt_ids),
++		.of_match_table = ads7846_dt_ids,
+ 	},
+ 	.probe		= ads7846_probe,
+ 	.remove		= ads7846_remove,
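[The ads7846 conversion drops the legacy integer-GPIO and OF-only helpers for the descriptor and device-property APIs, which also work on ACPI systems. An in-tree style fragment (not a standalone program) of the consumer-side request that replaces devm_gpio_request_one():

#include <linux/gpio/consumer.h>
#include <linux/err.h>

static int example_request_pendown(struct device *dev,
				   struct gpio_desc **out)
{
	struct gpio_desc *gpiod;

	/* resolves "pendown-gpios" from DT/ACPI; polarity comes from the node */
	gpiod = gpiod_get(dev, "pendown", GPIOD_IN);
	if (IS_ERR(gpiod))
		return PTR_ERR(gpiod);

	*out = gpiod;
	return 0;
}
]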
+diff --git a/drivers/input/touchscreen/cyttsp4_core.c b/drivers/input/touchscreen/cyttsp4_core.c
+index 0cd6f626adec5..7cb26929dc732 100644
+--- a/drivers/input/touchscreen/cyttsp4_core.c
++++ b/drivers/input/touchscreen/cyttsp4_core.c
+@@ -1263,9 +1263,8 @@ static void cyttsp4_stop_wd_timer(struct cyttsp4 *cd)
+ 	 * Ensure we wait until the watchdog timer
+ 	 * running on a different CPU finishes
+ 	 */
+-	del_timer_sync(&cd->watchdog_timer);
++	timer_shutdown_sync(&cd->watchdog_timer);
+ 	cancel_work_sync(&cd->watchdog_work);
+-	del_timer_sync(&cd->watchdog_timer);
+ }
+ 
+ static void cyttsp4_watchdog_timer(struct timer_list *t)
+diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
+index 5341fa169dbf1..8d3138e8c1ee3 100644
+--- a/drivers/interconnect/qcom/icc-rpm.c
++++ b/drivers/interconnect/qcom/icc-rpm.c
+@@ -379,7 +379,7 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
+ 			return ret;
+ 	}
+ 
+-	for (i = 0; i < qp->num_clks; i++) {
++	for (i = 0; i < qp->num_bus_clks; i++) {
+ 		/*
+ 		 * Use WAKE bucket for active clock, otherwise, use SLEEP bucket
+ 		 * for other clocks.  If a platform doesn't set interconnect
+@@ -464,7 +464,7 @@ int qnoc_probe(struct platform_device *pdev)
+ 
+ 	for (i = 0; i < cd_num; i++)
+ 		qp->bus_clks[i].id = cds[i];
+-	qp->num_clks = cd_num;
++	qp->num_bus_clks = cd_num;
+ 
+ 	qp->type = desc->type;
+ 	qp->qos_offset = desc->qos_offset;
+@@ -494,11 +494,11 @@ int qnoc_probe(struct platform_device *pdev)
+ 	}
+ 
+ regmap_done:
+-	ret = devm_clk_bulk_get_optional(dev, qp->num_clks, qp->bus_clks);
++	ret = devm_clk_bulk_get(dev, qp->num_bus_clks, qp->bus_clks);
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = clk_bulk_prepare_enable(qp->num_clks, qp->bus_clks);
++	ret = clk_bulk_prepare_enable(qp->num_bus_clks, qp->bus_clks);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -551,7 +551,7 @@ err_deregister_provider:
+ 	icc_provider_deregister(provider);
+ err_remove_nodes:
+ 	icc_nodes_remove(provider);
+-	clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks);
++	clk_bulk_disable_unprepare(qp->num_bus_clks, qp->bus_clks);
+ 
+ 	return ret;
+ }
+@@ -563,7 +563,7 @@ int qnoc_remove(struct platform_device *pdev)
+ 
+ 	icc_provider_deregister(&qp->provider);
+ 	icc_nodes_remove(&qp->provider);
+-	clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks);
++	clk_bulk_disable_unprepare(qp->num_bus_clks, qp->bus_clks);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/interconnect/qcom/icc-rpm.h b/drivers/interconnect/qcom/icc-rpm.h
+index 22bdb1e4bb123..838f3fa82278e 100644
+--- a/drivers/interconnect/qcom/icc-rpm.h
++++ b/drivers/interconnect/qcom/icc-rpm.h
+@@ -23,7 +23,7 @@ enum qcom_icc_type {
+ /**
+  * struct qcom_icc_provider - Qualcomm specific interconnect provider
+  * @provider: generic interconnect provider
+- * @num_clks: the total number of clk_bulk_data entries
++ * @num_bus_clks: the total number of bus_clks clk_bulk_data entries
+  * @type: the ICC provider type
+  * @regmap: regmap for QoS registers read/write access
+  * @qos_offset: offset to QoS registers
+@@ -32,7 +32,7 @@ enum qcom_icc_type {
+  */
+ struct qcom_icc_provider {
+ 	struct icc_provider provider;
+-	int num_clks;
++	int num_bus_clks;
+ 	enum qcom_icc_type type;
+ 	struct regmap *regmap;
+ 	unsigned int qos_offset;
+diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
+index 4f9b2142274ce..29d05663d4d17 100644
+--- a/drivers/iommu/iommufd/device.c
++++ b/drivers/iommu/iommufd/device.c
+@@ -553,8 +553,8 @@ void iommufd_access_unpin_pages(struct iommufd_access *access,
+ 			iopt_area_iova_to_index(
+ 				area,
+ 				min(last_iova, iopt_area_last_iova(area))));
+-	up_read(&iopt->iova_rwsem);
+ 	WARN_ON(!iopt_area_contig_done(&iter));
++	up_read(&iopt->iova_rwsem);
+ }
+ EXPORT_SYMBOL_NS_GPL(iommufd_access_unpin_pages, IOMMUFD);
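[The one-line iommufd move above checks the iterator while iova_rwsem is still held, so the WARN_ON reads state the lock protects. The same before/after ordering with a pthread rwlock stand-in:

#include <assert.h>
#include <pthread.h>

static pthread_rwlock_t iova_rwsem = PTHREAD_RWLOCK_INITIALIZER;
static int iter_done;

int main(void)
{
	pthread_rwlock_rdlock(&iova_rwsem);
	iter_done = 1;			/* walk the areas under the lock */
	assert(iter_done);		/* check before dropping the lock */
	pthread_rwlock_unlock(&iova_rwsem);
	return 0;
}
]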
+ 
+diff --git a/drivers/iommu/iommufd/io_pagetable.c b/drivers/iommu/iommufd/io_pagetable.c
+index e0ae72b9e67f8..724c4c5742417 100644
+--- a/drivers/iommu/iommufd/io_pagetable.c
++++ b/drivers/iommu/iommufd/io_pagetable.c
+@@ -458,6 +458,7 @@ static int iopt_unmap_iova_range(struct io_pagetable *iopt, unsigned long start,
+ {
+ 	struct iopt_area *area;
+ 	unsigned long unmapped_bytes = 0;
++	unsigned int tries = 0;
+ 	int rc = -ENOENT;
+ 
+ 	/*
+@@ -484,19 +485,26 @@ again:
+ 			goto out_unlock_iova;
+ 		}
+ 
++		if (area_first != start)
++			tries = 0;
++
+ 		/*
+ 		 * num_accesses writers must hold the iova_rwsem too, so we can
+ 		 * safely read it under the write side of the iovam_rwsem
+ 		 * without the pages->mutex.
+ 		 */
+ 		if (area->num_accesses) {
++			size_t length = iopt_area_length(area);
++
+ 			start = area_first;
+ 			area->prevent_access = true;
+ 			up_write(&iopt->iova_rwsem);
+ 			up_read(&iopt->domains_rwsem);
+-			iommufd_access_notify_unmap(iopt, area_first,
+-						    iopt_area_length(area));
+-			if (WARN_ON(READ_ONCE(area->num_accesses)))
++
++			iommufd_access_notify_unmap(iopt, area_first, length);
++			/* Something is not responding to unmap requests. */
++			tries++;
++			if (WARN_ON(tries > 100))
+ 				return -EDEADLOCK;
+ 			goto again;
+ 		}
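[The io_pagetable change converts the hard assertion into a bounded retry: the unmap loop re-notifies access holders and only gives up with -EDEADLOCK after many attempts against the same area (the counter resets whenever progress moves to a different area). The retry bound as a standalone sketch:

#include <stdio.h>

int main(void)
{
	int num_accesses = 3;		/* pretend holders of the range */
	unsigned int tries = 0;

	while (num_accesses) {
		num_accesses--;		/* one holder responds per pass */
		if (++tries > 100) {	/* nothing is responding: bail */
			fprintf(stderr, "EDEADLOCK\n");
			return 1;
		}
	}
	printf("unmapped after %u tries\n", tries);
	return 0;
}
]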
+diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
+index 5b8fe9bfa9a5b..3551ed057774e 100644
+--- a/drivers/iommu/virtio-iommu.c
++++ b/drivers/iommu/virtio-iommu.c
+@@ -788,6 +788,29 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
+ 	return 0;
+ }
+ 
++static void viommu_detach_dev(struct viommu_endpoint *vdev)
++{
++	int i;
++	struct virtio_iommu_req_detach req;
++	struct viommu_domain *vdomain = vdev->vdomain;
++	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(vdev->dev);
++
++	if (!vdomain)
++		return;
++
++	req = (struct virtio_iommu_req_detach) {
++		.head.type	= VIRTIO_IOMMU_T_DETACH,
++		.domain		= cpu_to_le32(vdomain->id),
++	};
++
++	for (i = 0; i < fwspec->num_ids; i++) {
++		req.endpoint = cpu_to_le32(fwspec->ids[i]);
++		WARN_ON(viommu_send_req_sync(vdev->viommu, &req, sizeof(req)));
++	}
++	vdomain->nr_endpoints--;
++	vdev->vdomain = NULL;
++}
++
+ static int viommu_map_pages(struct iommu_domain *domain, unsigned long iova,
+ 			    phys_addr_t paddr, size_t pgsize, size_t pgcount,
+ 			    int prot, gfp_t gfp, size_t *mapped)
+@@ -810,25 +833,26 @@ static int viommu_map_pages(struct iommu_domain *domain, unsigned long iova,
+ 	if (ret)
+ 		return ret;
+ 
+-	map = (struct virtio_iommu_req_map) {
+-		.head.type	= VIRTIO_IOMMU_T_MAP,
+-		.domain		= cpu_to_le32(vdomain->id),
+-		.virt_start	= cpu_to_le64(iova),
+-		.phys_start	= cpu_to_le64(paddr),
+-		.virt_end	= cpu_to_le64(end),
+-		.flags		= cpu_to_le32(flags),
+-	};
+-
+-	if (!vdomain->nr_endpoints)
+-		return 0;
++	if (vdomain->nr_endpoints) {
++		map = (struct virtio_iommu_req_map) {
++			.head.type	= VIRTIO_IOMMU_T_MAP,
++			.domain		= cpu_to_le32(vdomain->id),
++			.virt_start	= cpu_to_le64(iova),
++			.phys_start	= cpu_to_le64(paddr),
++			.virt_end	= cpu_to_le64(end),
++			.flags		= cpu_to_le32(flags),
++		};
+ 
+-	ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
+-	if (ret)
+-		viommu_del_mappings(vdomain, iova, end);
+-	else if (mapped)
++		ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
++		if (ret) {
++			viommu_del_mappings(vdomain, iova, end);
++			return ret;
++		}
++	}
++	if (mapped)
+ 		*mapped = size;
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static size_t viommu_unmap_pages(struct iommu_domain *domain, unsigned long iova,
+@@ -990,6 +1014,7 @@ static void viommu_release_device(struct device *dev)
+ {
+ 	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
+ 
++	viommu_detach_dev(vdev);
+ 	iommu_put_resv_regions(dev, &vdev->resv_regions);
+ 	kfree(vdev);
+ }
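
The viommu_map_pages() rework above fixes a subtle early-return: with no endpoints attached, the old code returned before *mapped was set, so callers saw a successful map of zero bytes. A simplified sketch of the restructured flow (struct and function names are illustrative only):

	#include <stddef.h>
	#include <stdio.h>

	struct domain_like {
		int nr_endpoints;
	};

	static int send_map_request(struct domain_like *d)
	{
		(void)d;
		return 0; /* pretend the virtio request succeeded */
	}

	static void rollback_mappings(struct domain_like *d)
	{
		(void)d;
	}

	/* Only talk to the device when endpoints are attached, but report
	 * the mapped size on every successful path. */
	static int map_pages(struct domain_like *d, size_t size, size_t *mapped)
	{
		if (d->nr_endpoints) {
			int ret = send_map_request(d);

			if (ret) {
				rollback_mappings(d);
				return ret;
			}
		}
		if (mapped)
			*mapped = size;
		return 0;
	}

	int main(void)
	{
		struct domain_like d = { .nr_endpoints = 0 };
		size_t mapped = 0;

		map_pages(&d, 4096, &mapped);
		printf("mapped %zu bytes with no endpoints attached\n", mapped);
		return 0;
	}
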
+diff --git a/drivers/irqchip/irq-jcore-aic.c b/drivers/irqchip/irq-jcore-aic.c
+index 5f47d8ee4ae39..b9dcc8e78c750 100644
+--- a/drivers/irqchip/irq-jcore-aic.c
++++ b/drivers/irqchip/irq-jcore-aic.c
+@@ -68,6 +68,7 @@ static int __init aic_irq_of_init(struct device_node *node,
+ 	unsigned min_irq = JCORE_AIC2_MIN_HWIRQ;
+ 	unsigned dom_sz = JCORE_AIC_MAX_HWIRQ+1;
+ 	struct irq_domain *domain;
++	int ret;
+ 
+ 	pr_info("Initializing J-Core AIC\n");
+ 
+@@ -100,6 +101,12 @@ static int __init aic_irq_of_init(struct device_node *node,
+ 	jcore_aic.irq_unmask = noop;
+ 	jcore_aic.name = "AIC";
+ 
++	ret = irq_alloc_descs(-1, min_irq, dom_sz - min_irq,
++			      of_node_to_nid(node));
++
++	if (ret < 0)
++		return ret;
++
+ 	domain = irq_domain_add_legacy(node, dom_sz - min_irq, min_irq, min_irq,
+ 				       &jcore_aic_irqdomain_ops,
+ 				       &jcore_aic);
+diff --git a/drivers/irqchip/irq-loongson-eiointc.c b/drivers/irqchip/irq-loongson-eiointc.c
+index 71ef19f77a5a0..a7fcde3e3ecc7 100644
+--- a/drivers/irqchip/irq-loongson-eiointc.c
++++ b/drivers/irqchip/irq-loongson-eiointc.c
+@@ -314,7 +314,7 @@ static void eiointc_resume(void)
+ 			desc = irq_resolve_mapping(eiointc_priv[i]->eiointc_domain, j);
+ 			if (desc && desc->handle_irq && desc->handle_irq != handle_bad_irq) {
+ 				raw_spin_lock(&desc->lock);
+-				irq_data = &desc->irq_data;
++				irq_data = irq_domain_get_irq_data(eiointc_priv[i]->eiointc_domain, irq_desc_get_irq(desc));
+ 				eiointc_set_irq_affinity(irq_data, irq_data->common->affinity, 0);
+ 				raw_spin_unlock(&desc->lock);
+ 			}
+diff --git a/drivers/irqchip/irq-loongson-liointc.c b/drivers/irqchip/irq-loongson-liointc.c
+index 8d00a9ad5b005..5dd9db8f8fa8e 100644
+--- a/drivers/irqchip/irq-loongson-liointc.c
++++ b/drivers/irqchip/irq-loongson-liointc.c
+@@ -32,6 +32,10 @@
+ #define LIOINTC_REG_INTC_EN_STATUS	(LIOINTC_INTC_CHIP_START + 0x04)
+ #define LIOINTC_REG_INTC_ENABLE	(LIOINTC_INTC_CHIP_START + 0x08)
+ #define LIOINTC_REG_INTC_DISABLE	(LIOINTC_INTC_CHIP_START + 0x0c)
++/*
++ * The LIOINTC_REG_INTC_POL register is only valid on the Loongson-2K
++ * series; on the Loongson-3 series writes to it are no-ops.
++ */
+ #define LIOINTC_REG_INTC_POL	(LIOINTC_INTC_CHIP_START + 0x10)
+ #define LIOINTC_REG_INTC_EDGE	(LIOINTC_INTC_CHIP_START + 0x14)
+ 
+@@ -116,19 +120,19 @@ static int liointc_set_type(struct irq_data *data, unsigned int type)
+ 	switch (type) {
+ 	case IRQ_TYPE_LEVEL_HIGH:
+ 		liointc_set_bit(gc, LIOINTC_REG_INTC_EDGE, mask, false);
+-		liointc_set_bit(gc, LIOINTC_REG_INTC_POL, mask, true);
++		liointc_set_bit(gc, LIOINTC_REG_INTC_POL, mask, false);
+ 		break;
+ 	case IRQ_TYPE_LEVEL_LOW:
+ 		liointc_set_bit(gc, LIOINTC_REG_INTC_EDGE, mask, false);
+-		liointc_set_bit(gc, LIOINTC_REG_INTC_POL, mask, false);
++		liointc_set_bit(gc, LIOINTC_REG_INTC_POL, mask, true);
+ 		break;
+ 	case IRQ_TYPE_EDGE_RISING:
+ 		liointc_set_bit(gc, LIOINTC_REG_INTC_EDGE, mask, true);
+-		liointc_set_bit(gc, LIOINTC_REG_INTC_POL, mask, true);
++		liointc_set_bit(gc, LIOINTC_REG_INTC_POL, mask, false);
+ 		break;
+ 	case IRQ_TYPE_EDGE_FALLING:
+ 		liointc_set_bit(gc, LIOINTC_REG_INTC_EDGE, mask, true);
+-		liointc_set_bit(gc, LIOINTC_REG_INTC_POL, mask, false);
++		liointc_set_bit(gc, LIOINTC_REG_INTC_POL, mask, true);
+ 		break;
+ 	default:
+ 		irq_gc_unlock_irqrestore(gc, flags);
+diff --git a/drivers/irqchip/irq-loongson-pch-pic.c b/drivers/irqchip/irq-loongson-pch-pic.c
+index e5fe4d50be056..93a71f66efebf 100644
+--- a/drivers/irqchip/irq-loongson-pch-pic.c
++++ b/drivers/irqchip/irq-loongson-pch-pic.c
+@@ -164,7 +164,7 @@ static int pch_pic_domain_translate(struct irq_domain *d,
+ 		if (fwspec->param_count < 2)
+ 			return -EINVAL;
+ 
+-		*hwirq = fwspec->param[0] + priv->ht_vec_base;
++		*hwirq = fwspec->param[0];
+ 		*type = fwspec->param[1] & IRQ_TYPE_SENSE_MASK;
+ 	} else {
+ 		if (fwspec->param_count < 1)
+@@ -196,7 +196,7 @@ static int pch_pic_alloc(struct irq_domain *domain, unsigned int virq,
+ 
+ 	parent_fwspec.fwnode = domain->parent->fwnode;
+ 	parent_fwspec.param_count = 1;
+-	parent_fwspec.param[0] = hwirq;
++	parent_fwspec.param[0] = hwirq + priv->ht_vec_base;
+ 
+ 	err = irq_domain_alloc_irqs_parent(domain, virq, 1, &parent_fwspec);
+ 	if (err)
+@@ -401,14 +401,12 @@ static int __init acpi_cascade_irqdomain_init(void)
+ int __init pch_pic_acpi_init(struct irq_domain *parent,
+ 					struct acpi_madt_bio_pic *acpi_pchpic)
+ {
+-	int ret, vec_base;
++	int ret;
+ 	struct fwnode_handle *domain_handle;
+ 
+ 	if (find_pch_pic(acpi_pchpic->gsi_base) >= 0)
+ 		return 0;
+ 
+-	vec_base = acpi_pchpic->gsi_base - GSI_MIN_PCH_IRQ;
+-
+ 	domain_handle = irq_domain_alloc_fwnode(&acpi_pchpic->address);
+ 	if (!domain_handle) {
+ 		pr_err("Unable to allocate domain handle\n");
+@@ -416,7 +414,7 @@ int __init pch_pic_acpi_init(struct irq_domain *parent,
+ 	}
+ 
+ 	ret = pch_pic_init(acpi_pchpic->address, acpi_pchpic->size,
+-				vec_base, parent, domain_handle, acpi_pchpic->gsi_base);
++				0, parent, domain_handle, acpi_pchpic->gsi_base);
+ 
+ 	if (ret < 0) {
+ 		irq_domain_free_fwnode(domain_handle);
+diff --git a/drivers/irqchip/irq-stm32-exti.c b/drivers/irqchip/irq-stm32-exti.c
+index 6a3f7498ea8ea..8bbb2b114636c 100644
+--- a/drivers/irqchip/irq-stm32-exti.c
++++ b/drivers/irqchip/irq-stm32-exti.c
+@@ -173,6 +173,16 @@ static struct irq_chip stm32_exti_h_chip_direct;
+ #define EXTI_INVALID_IRQ       U8_MAX
+ #define STM32MP1_DESC_IRQ_SIZE (ARRAY_SIZE(stm32mp1_exti_banks) * IRQS_PER_BANK)
+ 
++/*
++ * Use some intentionally tricky logic here to initialize the whole array to
++ * EXTI_INVALID_IRQ, but then override certain fields, requiring us to indicate
++ * that we "know" that there are overrides in this structure, and we'll need to
++ * disable that warning from W=1 builds.
++ */
++__diag_push();
++__diag_ignore_all("-Woverride-init",
++		  "logic to initialize all and then override some is OK");
++
+ static const u8 stm32mp1_desc_irq[] = {
+ 	/* default value */
+ 	[0 ... (STM32MP1_DESC_IRQ_SIZE - 1)] = EXTI_INVALID_IRQ,
+@@ -266,6 +276,8 @@ static const u8 stm32mp13_desc_irq[] = {
+ 	[70] = 98,
+ };
+ 
++__diag_pop();
++
+ static const struct stm32_exti_drv_data stm32mp1_drv_data = {
+ 	.exti_banks = stm32mp1_exti_banks,
+ 	.bank_nr = ARRAY_SIZE(stm32mp1_exti_banks),
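
The __diag_push()/__diag_ignore_all() bracket added above exists because the table is built with a GNU range initializer that fills every slot and then overrides a few, which GCC's -Woverride-init (enabled by W=1 builds) flags. A compilable sketch of the same "initialize all, then override some" idiom, with made-up values:

	#include <stdio.h>

	#define EXTI_INVALID 0xff
	#define TABLE_SIZE 8

	/* The range initializer fills everything; the later designated
	 * entries override two slots and trigger -Woverride-init, which
	 * is exactly the warning the driver suppresses. */
	static const unsigned char desc_irq[TABLE_SIZE] = {
		[0 ... TABLE_SIZE - 1] = EXTI_INVALID,	/* default value */
		[2] = 10,
		[5] = 23,
	};

	int main(void)
	{
		for (int i = 0; i < TABLE_SIZE; i++)
			printf("desc_irq[%d] = %u\n", i, desc_irq[i]);
		return 0;
	}
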
+diff --git a/drivers/leds/trigger/ledtrig-netdev.c b/drivers/leds/trigger/ledtrig-netdev.c
+index d5e774d830215..f4d670ec30bcb 100644
+--- a/drivers/leds/trigger/ledtrig-netdev.c
++++ b/drivers/leds/trigger/ledtrig-netdev.c
+@@ -318,6 +318,9 @@ static int netdev_trig_notify(struct notifier_block *nb,
+ 	clear_bit(NETDEV_LED_MODE_LINKUP, &trigger_data->mode);
+ 	switch (evt) {
+ 	case NETDEV_CHANGENAME:
++		if (netif_carrier_ok(dev))
++			set_bit(NETDEV_LED_MODE_LINKUP, &trigger_data->mode);
++		fallthrough;
+ 	case NETDEV_REGISTER:
+ 		if (trigger_data->net_dev)
+ 			dev_put(trigger_data->net_dev);
+diff --git a/drivers/mailbox/ti-msgmgr.c b/drivers/mailbox/ti-msgmgr.c
+index ddac423ac1a91..03048cbda525e 100644
+--- a/drivers/mailbox/ti-msgmgr.c
++++ b/drivers/mailbox/ti-msgmgr.c
+@@ -430,14 +430,20 @@ static int ti_msgmgr_send_data(struct mbox_chan *chan, void *data)
+ 		/* Ensure all unused data is 0 */
+ 		data_trail &= 0xFFFFFFFF >> (8 * (sizeof(u32) - trail_bytes));
+ 		writel(data_trail, data_reg);
+-		data_reg++;
++		data_reg += sizeof(u32);
+ 	}
++
+ 	/*
+ 	 * 'data_reg' indicates next register to write. If we did not already
+ 	 * write on the tx complete reg (last reg), we must do so for transmit.
++	 * In addition, we also need to make sure all intermediate data
++	 * registers (if any are required) are reset to 0 for TISCI backward
++	 * compatibility to be maintained.
+ 	 */
+-	if (data_reg <= qinst->queue_buff_end)
+-		writel(0, qinst->queue_buff_end);
++	while (data_reg <= qinst->queue_buff_end) {
++		writel(0, data_reg);
++		data_reg += sizeof(u32);
++	}
+ 
+ 	/* If we are in polled mode, wait for a response before proceeding */
+ 	if (ti_msgmgr_chan_has_polled_queue_rx(message->chan_rx))
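
The data_reg++ to data_reg += sizeof(u32) change above matters because the cursor in that driver is byte-granular (void __iomem * arithmetic advances one byte under GCC's extension), so ++ stepped a single byte instead of one 32-bit register. A tiny userspace demonstration of the stride difference (plain pointers standing in for __iomem):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t regs[4] = { 0, 0, 0, 0 };
		/* Byte-granular cursor: a bare data_reg++ would advance one
		 * byte, landing inside the first register. */
		uint8_t *data_reg = (uint8_t *)regs;

		*(uint32_t *)data_reg = 0x11111111;
		data_reg += sizeof(uint32_t);	/* step to the next register */
		*(uint32_t *)data_reg = 0x22222222;

		printf("regs[0]=%#x regs[1]=%#x\n", regs[0], regs[1]);
		return 0;
	}
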
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 147c493a989a5..68b9d7ca864e2 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -885,7 +885,7 @@ static struct btree *mca_cannibalize(struct cache_set *c, struct btree_op *op,
+  * cannibalize_bucket() will take. This means every time we unlock the root of
+  * the btree, we need to release this lock if we have it held.
+  */
+-static void bch_cannibalize_unlock(struct cache_set *c)
++void bch_cannibalize_unlock(struct cache_set *c)
+ {
+ 	spin_lock(&c->btree_cannibalize_lock);
+ 	if (c->btree_cache_alloc_lock == current) {
+@@ -1090,10 +1090,12 @@ struct btree *__bch_btree_node_alloc(struct cache_set *c, struct btree_op *op,
+ 				     struct btree *parent)
+ {
+ 	BKEY_PADDED(key) k;
+-	struct btree *b = ERR_PTR(-EAGAIN);
++	struct btree *b;
+ 
+ 	mutex_lock(&c->bucket_lock);
+ retry:
++	/* return ERR_PTR(-EAGAIN) when it fails */
++	b = ERR_PTR(-EAGAIN);
+ 	if (__bch_bucket_alloc_set(c, RESERVE_BTREE, &k.key, wait))
+ 		goto err;
+ 
+@@ -1138,7 +1140,7 @@ static struct btree *btree_node_alloc_replacement(struct btree *b,
+ {
+ 	struct btree *n = bch_btree_node_alloc(b->c, op, b->level, b->parent);
+ 
+-	if (!IS_ERR_OR_NULL(n)) {
++	if (!IS_ERR(n)) {
+ 		mutex_lock(&n->write_lock);
+ 		bch_btree_sort_into(&b->keys, &n->keys, &b->c->sort);
+ 		bkey_copy_key(&n->key, &b->key);
+@@ -1340,7 +1342,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
+ 	memset(new_nodes, 0, sizeof(new_nodes));
+ 	closure_init_stack(&cl);
+ 
+-	while (nodes < GC_MERGE_NODES && !IS_ERR_OR_NULL(r[nodes].b))
++	while (nodes < GC_MERGE_NODES && !IS_ERR(r[nodes].b))
+ 		keys += r[nodes++].keys;
+ 
+ 	blocks = btree_default_blocks(b->c) * 2 / 3;
+@@ -1352,7 +1354,7 @@ static int btree_gc_coalesce(struct btree *b, struct btree_op *op,
+ 
+ 	for (i = 0; i < nodes; i++) {
+ 		new_nodes[i] = btree_node_alloc_replacement(r[i].b, NULL);
+-		if (IS_ERR_OR_NULL(new_nodes[i]))
++		if (IS_ERR(new_nodes[i]))
+ 			goto out_nocoalesce;
+ 	}
+ 
+@@ -1487,7 +1489,7 @@ out_nocoalesce:
+ 	bch_keylist_free(&keylist);
+ 
+ 	for (i = 0; i < nodes; i++)
+-		if (!IS_ERR_OR_NULL(new_nodes[i])) {
++		if (!IS_ERR(new_nodes[i])) {
+ 			btree_node_free(new_nodes[i]);
+ 			rw_unlock(true, new_nodes[i]);
+ 		}
+@@ -1669,7 +1671,7 @@ static int bch_btree_gc_root(struct btree *b, struct btree_op *op,
+ 	if (should_rewrite) {
+ 		n = btree_node_alloc_replacement(b, NULL);
+ 
+-		if (!IS_ERR_OR_NULL(n)) {
++		if (!IS_ERR(n)) {
+ 			bch_btree_node_write_sync(n);
+ 
+ 			bch_btree_set_root(n);
+@@ -1968,6 +1970,15 @@ static int bch_btree_check_thread(void *arg)
+ 			c->gc_stats.nodes++;
+ 			bch_btree_op_init(&op, 0);
+ 			ret = bcache_btree(check_recurse, p, c->root, &op);
++			/*
++			 * The op may be added to cache_set's btree_cache_wait
++			 * in mca_cannibalize(); we must ensure it is removed from
++			 * the list and that btree_cache_alloc_lock is released
++			 * before the op memory is freed.
++			 * Otherwise, the btree_cache_wait list will be corrupted.
++			 */
++			bch_cannibalize_unlock(c);
++			finish_wait(&c->btree_cache_wait, &(&op)->wait);
+ 			if (ret)
+ 				goto out;
+ 		}
+diff --git a/drivers/md/bcache/btree.h b/drivers/md/bcache/btree.h
+index 1b5fdbc0d83eb..a2920bbfcad56 100644
+--- a/drivers/md/bcache/btree.h
++++ b/drivers/md/bcache/btree.h
+@@ -282,6 +282,7 @@ void bch_initial_gc_finish(struct cache_set *c);
+ void bch_moving_gc(struct cache_set *c);
+ int bch_btree_check(struct cache_set *c);
+ void bch_initial_mark_key(struct cache_set *c, int level, struct bkey *k);
++void bch_cannibalize_unlock(struct cache_set *c);
+ 
+ static inline void wake_up_gc(struct cache_set *c)
+ {
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 7e9d19fd21ddd..077149c4050b9 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1723,7 +1723,7 @@ static void cache_set_flush(struct closure *cl)
+ 	if (!IS_ERR_OR_NULL(c->gc_thread))
+ 		kthread_stop(c->gc_thread);
+ 
+-	if (!IS_ERR_OR_NULL(c->root))
++	if (!IS_ERR(c->root))
+ 		list_add(&c->root->list, &c->btree_cache);
+ 
+ 	/*
+@@ -2087,7 +2087,7 @@ static int run_cache_set(struct cache_set *c)
+ 
+ 		err = "cannot allocate new btree root";
+ 		c->root = __bch_btree_node_alloc(c, NULL, 0, true, NULL);
+-		if (IS_ERR_OR_NULL(c->root))
++		if (IS_ERR(c->root))
+ 			goto err;
+ 
+ 		mutex_lock(&c->root->write_lock);
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index d4a5fc0650bb2..24c049067f61a 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -890,6 +890,16 @@ static int bch_root_node_dirty_init(struct cache_set *c,
+ 	if (ret < 0)
+ 		pr_warn("sectors dirty init failed, ret=%d!\n", ret);
+ 
++	/*
++	 * The op may be added to cache_set's btree_cache_wait
++	 * in mca_cannibalize(); we must ensure it is removed from
++	 * the list and that btree_cache_alloc_lock is released before
++	 * the op memory is freed.
++	 * Otherwise, the btree_cache_wait list will be corrupted.
++	 */
++	bch_cannibalize_unlock(c);
++	finish_wait(&c->btree_cache_wait, &(&op.op)->wait);
++
+ 	return ret;
+ }
+ 
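
Both bcache hunks close the same hazard: the on-stack btree_op may still be linked on a wait list when its frame is about to vanish, leaving the list pointing into dead stack memory. A compact sketch of why the unlink (finish_wait) must happen before the frame goes away (list code is illustrative, not the kernel's wait-queue implementation):

	#include <assert.h>
	#include <stddef.h>

	struct waiter {
		struct waiter *next;
	};

	static struct waiter *wait_list;	/* stand-in for btree_cache_wait */

	/* Stand-in for mca_cannibalize() parking the op on the list. */
	static void maybe_enqueue(struct waiter *w)
	{
		w->next = wait_list;
		wait_list = w;
	}

	/* Stand-in for finish_wait(): unlink before the stack frame dies. */
	static void finish_wait_like(struct waiter *w)
	{
		struct waiter **p = &wait_list;

		while (*p && *p != w)
			p = &(*p)->next;
		if (*p)
			*p = w->next;
	}

	int main(void)
	{
		{
			struct waiter op = { NULL };	/* on-stack, like btree_op */

			maybe_enqueue(&op);
			finish_wait_like(&op);	/* required before 'op' goes away */
		}
		assert(wait_list == NULL);	/* no dangling stack pointer */
		return 0;
	}
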
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index bc8d7565171d4..ea226a37b110a 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -54,14 +54,7 @@ __acquires(bitmap->lock)
+ {
+ 	unsigned char *mappage;
+ 
+-	if (page >= bitmap->pages) {
+-		/* This can happen if bitmap_start_sync goes beyond
+-		 * End-of-device while looking for a whole page.
+-		 * It is harmless.
+-		 */
+-		return -EINVAL;
+-	}
+-
++	WARN_ON_ONCE(page >= bitmap->pages);
+ 	if (bitmap->bp[page].hijacked) /* it's hijacked, don't try to alloc */
+ 		return 0;
+ 
+@@ -1023,7 +1016,6 @@ static int md_bitmap_file_test_bit(struct bitmap *bitmap, sector_t block)
+ 	return set;
+ }
+ 
+-
+ /* this gets called when the md device is ready to unplug its underlying
+  * (slave) device queues -- before we let any writes go down, we need to
+  * sync the dirty pages of the bitmap file to disk */
+@@ -1033,8 +1025,7 @@ void md_bitmap_unplug(struct bitmap *bitmap)
+ 	int dirty, need_write;
+ 	int writing = 0;
+ 
+-	if (!bitmap || !bitmap->storage.filemap ||
+-	    test_bit(BITMAP_STALE, &bitmap->flags))
++	if (!md_bitmap_enabled(bitmap))
+ 		return;
+ 
+ 	/* look at each page to see if there are any set bits that need to be
+@@ -1387,6 +1378,14 @@ __acquires(bitmap->lock)
+ 	sector_t csize;
+ 	int err;
+ 
++	if (page >= bitmap->pages) {
++		/*
++		 * This can happen if bitmap_start_sync goes beyond
++		 * End-of-device while looking for a whole page, or if the
++		 * user set a huge number via the sysfs bitmap_set_bits file.
++		 */
++		return NULL;
++	}
+ 	err = md_bitmap_checkpage(bitmap, page, create, 0);
+ 
+ 	if (bitmap->bp[page].hijacked ||
+diff --git a/drivers/md/md-bitmap.h b/drivers/md/md-bitmap.h
+index cfd7395de8fd3..3a4750952b3a7 100644
+--- a/drivers/md/md-bitmap.h
++++ b/drivers/md/md-bitmap.h
+@@ -273,6 +273,13 @@ int md_bitmap_copy_from_slot(struct mddev *mddev, int slot,
+ 			     sector_t *lo, sector_t *hi, bool clear_bits);
+ void md_bitmap_free(struct bitmap *bitmap);
+ void md_bitmap_wait_behind_writes(struct mddev *mddev);
++
++static inline bool md_bitmap_enabled(struct bitmap *bitmap)
++{
++	return bitmap && bitmap->storage.filemap &&
++	       !test_bit(BITMAP_STALE, &bitmap->flags);
++}
++
+ #endif
+ 
+ #endif
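
The new md_bitmap_enabled() inline consolidates a NULL / filemap / BITMAP_STALE triple check that callers such as md_bitmap_unplug() previously open-coded, so the predicate can no longer drift between call sites. An illustrative standalone version of the same consolidation (struct and flag names invented):

	#include <stdbool.h>
	#include <stdio.h>

	#define BITMAP_STALE 0x1UL

	struct bitmap_like {
		void *filemap;
		unsigned long flags;
	};

	/* One inline predicate replaces the repeated open-coded check. */
	static inline bool bitmap_enabled(const struct bitmap_like *b)
	{
		return b && b->filemap && !(b->flags & BITMAP_STALE);
	}

	int main(void)
	{
		struct bitmap_like stale = {
			.filemap = (void *)1,
			.flags = BITMAP_STALE,
		};

		printf("NULL bitmap: %d\n", bitmap_enabled(NULL));
		printf("stale bitmap: %d\n", bitmap_enabled(&stale));
		return 0;
	}
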
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 8e344b4b34446..350094f1cb09f 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -3794,8 +3794,9 @@ int strict_strtoul_scaled(const char *cp, unsigned long *res, int scale)
+ static ssize_t
+ safe_delay_show(struct mddev *mddev, char *page)
+ {
+-	int msec = (mddev->safemode_delay*1000)/HZ;
+-	return sprintf(page, "%d.%03d\n", msec/1000, msec%1000);
++	unsigned int msec = ((unsigned long)mddev->safemode_delay*1000)/HZ;
++
++	return sprintf(page, "%u.%03u\n", msec/1000, msec%1000);
+ }
+ static ssize_t
+ safe_delay_store(struct mddev *mddev, const char *cbuf, size_t len)
+@@ -3807,7 +3808,7 @@ safe_delay_store(struct mddev *mddev, const char *cbuf, size_t len)
+ 		return -EINVAL;
+ 	}
+ 
+-	if (strict_strtoul_scaled(cbuf, &msec, 3) < 0)
++	if (strict_strtoul_scaled(cbuf, &msec, 3) < 0 || msec > UINT_MAX / HZ)
+ 		return -EINVAL;
+ 	if (msec == 0)
+ 		mddev->safemode_delay = 0;
+@@ -4477,6 +4478,8 @@ max_corrected_read_errors_store(struct mddev *mddev, const char *buf, size_t len
+ 	rv = kstrtouint(buf, 10, &n);
+ 	if (rv < 0)
+ 		return rv;
++	if (n > INT_MAX)
++		return -EINVAL;
+ 	atomic_set(&mddev->max_corr_read_errors, n);
+ 	return len;
+ }
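
The safe_delay_store() hunk rejects msec values for which msec * HZ would overflow before the division by 1000, which previously wrapped and produced a bogus delay. A short sketch of the guard (HZ value illustrative; names simplified from the kernel code):

	#include <limits.h>
	#include <stdio.h>

	#define HZ 250	/* illustrative tick rate */

	/* Reject msec values whose conversion to ticks (msec * HZ / 1000)
	 * would overflow the multiplication. */
	static int set_safemode_delay(unsigned long msec, unsigned long *delay)
	{
		if (msec > UINT_MAX / HZ)
			return -1;
		*delay = (msec * HZ) / 1000;
		if (*delay == 0)
			*delay = 1;
		return 0;
	}

	int main(void)
	{
		unsigned long d = 0;

		printf("2000 ms -> rc=%d ticks=%lu\n",
		       set_safemode_delay(2000, &d), d);
		printf("huge value -> rc=%d\n",
		       set_safemode_delay(ULONG_MAX, &d));
		return 0;
	}
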
+diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c
+index e61f6cad4e08e..e0c8ac8146331 100644
+--- a/drivers/md/raid1-10.c
++++ b/drivers/md/raid1-10.c
+@@ -109,3 +109,45 @@ static void md_bio_reset_resync_pages(struct bio *bio, struct resync_pages *rp,
+ 		size -= len;
+ 	} while (idx++ < RESYNC_PAGES && size > 0);
+ }
++
++
++static inline void raid1_submit_write(struct bio *bio)
++{
++	struct md_rdev *rdev = (void *)bio->bi_bdev;
++
++	bio->bi_next = NULL;
++	bio_set_dev(bio, rdev->bdev);
++	if (test_bit(Faulty, &rdev->flags))
++		bio_io_error(bio);
++	else if (unlikely(bio_op(bio) ==  REQ_OP_DISCARD &&
++			  !bdev_max_discard_sectors(bio->bi_bdev)))
++		/* Just ignore it */
++		bio_endio(bio);
++	else
++		submit_bio_noacct(bio);
++}
++
++static inline bool raid1_add_bio_to_plug(struct mddev *mddev, struct bio *bio,
++				      blk_plug_cb_fn unplug)
++{
++	struct raid1_plug_cb *plug = NULL;
++	struct blk_plug_cb *cb;
++
++	/*
++	 * If the bitmap is not enabled, it's safe to submit the io directly,
++	 * which gives optimal performance.
++	 */
++	if (!md_bitmap_enabled(mddev->bitmap)) {
++		raid1_submit_write(bio);
++		return true;
++	}
++
++	cb = blk_check_plugged(unplug, mddev, sizeof(*plug));
++	if (!cb)
++		return false;
++
++	plug = container_of(cb, struct raid1_plug_cb, cb);
++	bio_list_add(&plug->pending, bio);
++
++	return true;
++}
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 68a9e2d9985b2..e51b77a3a8397 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -799,17 +799,8 @@ static void flush_bio_list(struct r1conf *conf, struct bio *bio)
+ 
+ 	while (bio) { /* submit pending writes */
+ 		struct bio *next = bio->bi_next;
+-		struct md_rdev *rdev = (void *)bio->bi_bdev;
+-		bio->bi_next = NULL;
+-		bio_set_dev(bio, rdev->bdev);
+-		if (test_bit(Faulty, &rdev->flags)) {
+-			bio_io_error(bio);
+-		} else if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
+-				    !bdev_max_discard_sectors(bio->bi_bdev)))
+-			/* Just ignore it */
+-			bio_endio(bio);
+-		else
+-			submit_bio_noacct(bio);
++
++		raid1_submit_write(bio);
+ 		bio = next;
+ 		cond_resched();
+ 	}
+@@ -1343,8 +1334,6 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ 	struct bitmap *bitmap = mddev->bitmap;
+ 	unsigned long flags;
+ 	struct md_rdev *blocked_rdev;
+-	struct blk_plug_cb *cb;
+-	struct raid1_plug_cb *plug = NULL;
+ 	int first_clone;
+ 	int max_sectors;
+ 	bool write_behind = false;
+@@ -1573,15 +1562,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ 					      r1_bio->sector);
+ 		/* flush_pending_writes() needs access to the rdev so...*/
+ 		mbio->bi_bdev = (void *)rdev;
+-
+-		cb = blk_check_plugged(raid1_unplug, mddev, sizeof(*plug));
+-		if (cb)
+-			plug = container_of(cb, struct raid1_plug_cb, cb);
+-		else
+-			plug = NULL;
+-		if (plug) {
+-			bio_list_add(&plug->pending, mbio);
+-		} else {
++		if (!raid1_add_bio_to_plug(mddev, mbio, raid1_unplug)) {
+ 			spin_lock_irqsave(&conf->device_lock, flags);
+ 			bio_list_add(&conf->pending_bio_list, mbio);
+ 			spin_unlock_irqrestore(&conf->device_lock, flags);
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 4fcfcb350d2b4..9d23963496194 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -325,7 +325,7 @@ static void raid_end_bio_io(struct r10bio *r10_bio)
+ 	if (!test_bit(R10BIO_Uptodate, &r10_bio->state))
+ 		bio->bi_status = BLK_STS_IOERR;
+ 
+-	if (blk_queue_io_stat(bio->bi_bdev->bd_disk->queue))
++	if (r10_bio->start_time)
+ 		bio_end_io_acct(bio, r10_bio->start_time);
+ 	bio_endio(bio);
+ 	/*
+@@ -779,8 +779,16 @@ static struct md_rdev *read_balance(struct r10conf *conf,
+ 		disk = r10_bio->devs[slot].devnum;
+ 		rdev = rcu_dereference(conf->mirrors[disk].replacement);
+ 		if (rdev == NULL || test_bit(Faulty, &rdev->flags) ||
+-		    r10_bio->devs[slot].addr + sectors > rdev->recovery_offset)
++		    r10_bio->devs[slot].addr + sectors >
++		    rdev->recovery_offset) {
++			/*
++			 * Read replacement first to prevent reading both rdev
++			 * and replacement as NULL while the replacement is
++			 * replacing the rdev.
++			 */
++			smp_mb();
+ 			rdev = rcu_dereference(conf->mirrors[disk].rdev);
++		}
+ 		if (rdev == NULL ||
+ 		    test_bit(Faulty, &rdev->flags))
+ 			continue;
+@@ -909,17 +917,8 @@ static void flush_pending_writes(struct r10conf *conf)
+ 
+ 		while (bio) { /* submit pending writes */
+ 			struct bio *next = bio->bi_next;
+-			struct md_rdev *rdev = (void*)bio->bi_bdev;
+-			bio->bi_next = NULL;
+-			bio_set_dev(bio, rdev->bdev);
+-			if (test_bit(Faulty, &rdev->flags)) {
+-				bio_io_error(bio);
+-			} else if (unlikely((bio_op(bio) ==  REQ_OP_DISCARD) &&
+-					    !bdev_max_discard_sectors(bio->bi_bdev)))
+-				/* Just ignore it */
+-				bio_endio(bio);
+-			else
+-				submit_bio_noacct(bio);
++
++			raid1_submit_write(bio);
+ 			bio = next;
+ 		}
+ 		blk_finish_plug(&plug);
+@@ -1130,17 +1129,8 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)
+ 
+ 	while (bio) { /* submit pending writes */
+ 		struct bio *next = bio->bi_next;
+-		struct md_rdev *rdev = (void*)bio->bi_bdev;
+-		bio->bi_next = NULL;
+-		bio_set_dev(bio, rdev->bdev);
+-		if (test_bit(Faulty, &rdev->flags)) {
+-			bio_io_error(bio);
+-		} else if (unlikely((bio_op(bio) ==  REQ_OP_DISCARD) &&
+-				    !bdev_max_discard_sectors(bio->bi_bdev)))
+-			/* Just ignore it */
+-			bio_endio(bio);
+-		else
+-			submit_bio_noacct(bio);
++
++		raid1_submit_write(bio);
+ 		bio = next;
+ 	}
+ 	kfree(plug);
+@@ -1282,8 +1272,6 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
+ 	const blk_opf_t do_sync = bio->bi_opf & REQ_SYNC;
+ 	const blk_opf_t do_fua = bio->bi_opf & REQ_FUA;
+ 	unsigned long flags;
+-	struct blk_plug_cb *cb;
+-	struct raid1_plug_cb *plug = NULL;
+ 	struct r10conf *conf = mddev->private;
+ 	struct md_rdev *rdev;
+ 	int devnum = r10_bio->devs[n_copy].devnum;
+@@ -1323,14 +1311,7 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
+ 
+ 	atomic_inc(&r10_bio->remaining);
+ 
+-	cb = blk_check_plugged(raid10_unplug, mddev, sizeof(*plug));
+-	if (cb)
+-		plug = container_of(cb, struct raid1_plug_cb, cb);
+-	else
+-		plug = NULL;
+-	if (plug) {
+-		bio_list_add(&plug->pending, mbio);
+-	} else {
++	if (!raid1_add_bio_to_plug(mddev, mbio, raid10_unplug)) {
+ 		spin_lock_irqsave(&conf->device_lock, flags);
+ 		bio_list_add(&conf->pending_bio_list, mbio);
+ 		spin_unlock_irqrestore(&conf->device_lock, flags);
+@@ -1479,9 +1460,15 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
+ 
+ 	for (i = 0;  i < conf->copies; i++) {
+ 		int d = r10_bio->devs[i].devnum;
+-		struct md_rdev *rdev = rcu_dereference(conf->mirrors[d].rdev);
+-		struct md_rdev *rrdev = rcu_dereference(
+-			conf->mirrors[d].replacement);
++		struct md_rdev *rdev, *rrdev;
++
++		rrdev = rcu_dereference(conf->mirrors[d].replacement);
++		/*
++		 * Read replacement first to prevent reading both rdev and
++		 * replacement as NULL while the replacement is replacing rdev.
++		 */
++		smp_mb();
++		rdev = rcu_dereference(conf->mirrors[d].rdev);
+ 		if (rdev == rrdev)
+ 			rrdev = NULL;
+ 		if (rdev && (test_bit(Faulty, &rdev->flags)))
+@@ -3438,7 +3425,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 			int must_sync;
+ 			int any_working;
+ 			int need_recover = 0;
+-			int need_replace = 0;
+ 			struct raid10_info *mirror = &conf->mirrors[i];
+ 			struct md_rdev *mrdev, *mreplace;
+ 
+@@ -3450,11 +3436,10 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 			    !test_bit(Faulty, &mrdev->flags) &&
+ 			    !test_bit(In_sync, &mrdev->flags))
+ 				need_recover = 1;
+-			if (mreplace != NULL &&
+-			    !test_bit(Faulty, &mreplace->flags))
+-				need_replace = 1;
++			if (mreplace && test_bit(Faulty, &mreplace->flags))
++				mreplace = NULL;
+ 
+-			if (!need_recover && !need_replace) {
++			if (!need_recover && !mreplace) {
+ 				rcu_read_unlock();
+ 				continue;
+ 			}
+@@ -3470,8 +3455,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 				rcu_read_unlock();
+ 				continue;
+ 			}
+-			if (mreplace && test_bit(Faulty, &mreplace->flags))
+-				mreplace = NULL;
+ 			/* Unless we are doing a full sync, or a replacement
+ 			 * we only need to recover the block if it is set in
+ 			 * the bitmap
+@@ -3594,11 +3577,11 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
+ 				bio = r10_bio->devs[1].repl_bio;
+ 				if (bio)
+ 					bio->bi_end_io = NULL;
+-				/* Note: if need_replace, then bio
++				/* Note: if mreplace is not NULL, then bio
+ 				 * cannot be NULL as r10buf_pool_alloc will
+ 				 * have allocated it.
+ 				 */
+-				if (!need_replace)
++				if (!mreplace)
+ 					break;
+ 				bio->bi_next = biolist;
+ 				biolist = bio;
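
The two smp_mb() hunks enforce a read order, replacement before rdev, so a reader cannot observe both slots as NULL while a writer migrates the device from one slot to the other. A much-simplified C11 sketch of that ordering (the real code also re-checks Faulty flags and recovery_offset, and uses rcu_dereference(); everything below is invented for illustration):

	#include <stdatomic.h>
	#include <stdio.h>

	struct mirror_like {
		_Atomic(void *) rdev;
		_Atomic(void *) replacement;
	};

	/* Load 'replacement' first, with a full fence before falling back
	 * to 'rdev'; paired with the writer's ordering this prevents the
	 * both-NULL window. */
	static void *pick_device(struct mirror_like *m)
	{
		void *dev = atomic_load(&m->replacement);

		atomic_thread_fence(memory_order_seq_cst);	/* smp_mb() */
		if (!dev)
			dev = atomic_load(&m->rdev);
		return dev;
	}

	int main(void)
	{
		int disk;
		struct mirror_like m;

		atomic_init(&m.rdev, &disk);
		atomic_init(&m.replacement, NULL);
		printf("picked %p\n", pick_device(&m));
		return 0;
	}
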
+diff --git a/drivers/media/cec/i2c/Kconfig b/drivers/media/cec/i2c/Kconfig
+index 70432a1d69186..d912d143fb312 100644
+--- a/drivers/media/cec/i2c/Kconfig
++++ b/drivers/media/cec/i2c/Kconfig
+@@ -5,6 +5,7 @@
+ config CEC_CH7322
+ 	tristate "Chrontel CH7322 CEC controller"
+ 	depends on I2C
++	select REGMAP
+ 	select REGMAP_I2C
+ 	select CEC_CORE
+ 	help
+diff --git a/drivers/media/common/saa7146/saa7146_core.c b/drivers/media/common/saa7146/saa7146_core.c
+index bcb957883044c..27c53eed8fe39 100644
+--- a/drivers/media/common/saa7146/saa7146_core.c
++++ b/drivers/media/common/saa7146/saa7146_core.c
+@@ -133,8 +133,8 @@ int saa7146_wait_for_debi_done(struct saa7146_dev *dev, int nobusyloop)
+  ****************************************************************************/
+ 
+ /* this is videobuf_vmalloc_to_sg() from videobuf-dma-sg.c
+-   make sure virt has been allocated with vmalloc_32(), otherwise the BUG()
+-   may be triggered on highmem machines */
++   make sure virt has been allocated with vmalloc_32(), otherwise NULL is
++   returned on highmem machines */
+ static struct scatterlist* vmalloc_to_sg(unsigned char *virt, int nr_pages)
+ {
+ 	struct scatterlist *sglist;
+@@ -150,7 +150,7 @@ static struct scatterlist* vmalloc_to_sg(unsigned char *virt, int nr_pages)
+ 		if (NULL == pg)
+ 			goto err;
+ 		if (WARN_ON(PageHighMem(pg)))
+-			return NULL;
++			goto err;
+ 		sg_set_page(&sglist[i], pg, PAGE_SIZE, 0);
+ 	}
+ 	return sglist;
+diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
+index 256d55bb2b1da..76d1ee3cc1bab 100644
+--- a/drivers/media/i2c/Kconfig
++++ b/drivers/media/i2c/Kconfig
+@@ -1292,6 +1292,7 @@ config VIDEO_TC358746
+ 	select VIDEO_V4L2_SUBDEV_API
+ 	select MEDIA_CONTROLLER
+ 	select V4L2_FWNODE
++	select GENERIC_PHY
+ 	select GENERIC_PHY_MIPI_DPHY
+ 	select REGMAP_I2C
+ 	help
+diff --git a/drivers/media/i2c/hi846.c b/drivers/media/i2c/hi846.c
+index 306dc35e925fd..f8709cdf28b39 100644
+--- a/drivers/media/i2c/hi846.c
++++ b/drivers/media/i2c/hi846.c
+@@ -1353,7 +1353,8 @@ static int hi846_set_ctrl(struct v4l2_ctrl *ctrl)
+ 					 exposure_max);
+ 	}
+ 
+-	if (!pm_runtime_get_if_in_use(&client->dev))
++	ret = pm_runtime_get_if_in_use(&client->dev);
++	if (!ret || ret == -EAGAIN)
+ 		return 0;
+ 
+ 	switch (ctrl->id) {
+diff --git a/drivers/media/i2c/imx296.c b/drivers/media/i2c/imx296.c
+index 4f22c0515ef8d..c3d6d52fc7727 100644
+--- a/drivers/media/i2c/imx296.c
++++ b/drivers/media/i2c/imx296.c
+@@ -922,10 +922,12 @@ static int imx296_read_temperature(struct imx296 *sensor, int *temp)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	tmdout = imx296_read(sensor, IMX296_TMDOUT) & IMX296_TMDOUT_MASK;
++	tmdout = imx296_read(sensor, IMX296_TMDOUT);
+ 	if (tmdout < 0)
+ 		return tmdout;
+ 
++	tmdout &= IMX296_TMDOUT_MASK;
++
+ 	/* T(°C) = 246.312 - 0.304 * TMDOUT */;
+ 	*temp = 246312 - 304 * tmdout;
+ 
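
The imx296 hunk reorders a sign test and a mask: imx296_read() returns either a register value or a negative errno, and masking with IMX296_TMDOUT_MASK before the `< 0` check destroyed the sign bit, so errors were never detected. A small sketch of the corrected read-test-mask order (register values and names made up):

	#include <stdio.h>

	#define TMDOUT_MASK 0x3ff
	#define EIO 5

	static int read_reg(int fail)
	{
		return fail ? -EIO : 0x1234;
	}

	/* Test the sign before masking: the old order masked first, which
	 * silently converted negative error codes into bogus readings. */
	static int read_temperature(int fail, int *out)
	{
		int tmdout = read_reg(fail);

		if (tmdout < 0)
			return tmdout;
		*out = tmdout & TMDOUT_MASK;
		return 0;
	}

	int main(void)
	{
		int v = 0;

		printf("ok: rc=%d raw=%#x\n", read_temperature(0, &v), v);
		printf("fail: rc=%d\n", read_temperature(1, &v));
		return 0;
	}
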
+diff --git a/drivers/media/i2c/st-mipid02.c b/drivers/media/i2c/st-mipid02.c
+index 31b89aff0e86a..f20f87562bf11 100644
+--- a/drivers/media/i2c/st-mipid02.c
++++ b/drivers/media/i2c/st-mipid02.c
+@@ -736,8 +736,13 @@ static void mipid02_set_fmt_source(struct v4l2_subdev *sd,
+ {
+ 	struct mipid02_dev *bridge = to_mipid02_dev(sd);
+ 
+-	/* source pad mirror active sink pad */
+-	format->format = bridge->fmt;
++	/* source pad mirror sink pad */
++	if (format->which == V4L2_SUBDEV_FORMAT_ACTIVE)
++		format->format = bridge->fmt;
++	else
++		format->format = *v4l2_subdev_get_try_format(sd, sd_state,
++							     MIPID02_SINK_0);
++
+ 	/* but code may need to be converted */
+ 	format->format.code = serial_to_parallel_code(format->format.code);
+ 
+diff --git a/drivers/media/platform/amphion/vdec.c b/drivers/media/platform/amphion/vdec.c
+index 3fa1a74a2e204..6515f3cdb7a74 100644
+--- a/drivers/media/platform/amphion/vdec.c
++++ b/drivers/media/platform/amphion/vdec.c
+@@ -279,6 +279,7 @@ static void vdec_handle_resolution_change(struct vpu_inst *inst)
+ 
+ 	vdec->source_change--;
+ 	vpu_notify_source_change(inst);
++	vpu_set_last_buffer_dequeued(inst, false);
+ }
+ 
+ static int vdec_update_state(struct vpu_inst *inst, enum vpu_codec_state state, u32 force)
+@@ -314,7 +315,7 @@ static void vdec_set_last_buffer_dequeued(struct vpu_inst *inst)
+ 		return;
+ 
+ 	if (vdec->eos_received) {
+-		if (!vpu_set_last_buffer_dequeued(inst)) {
++		if (!vpu_set_last_buffer_dequeued(inst, true)) {
+ 			vdec->eos_received--;
+ 			vdec_update_state(inst, VPU_CODEC_STATE_DRAIN, 0);
+ 		}
+@@ -569,7 +570,7 @@ static int vdec_drain(struct vpu_inst *inst)
+ 		return 0;
+ 
+ 	if (!vdec->params.frame_count) {
+-		vpu_set_last_buffer_dequeued(inst);
++		vpu_set_last_buffer_dequeued(inst, true);
+ 		return 0;
+ 	}
+ 
+@@ -608,7 +609,7 @@ static int vdec_cmd_stop(struct vpu_inst *inst)
+ 	vpu_trace(inst->dev, "[%d]\n", inst->id);
+ 
+ 	if (inst->state == VPU_CODEC_STATE_DEINIT) {
+-		vpu_set_last_buffer_dequeued(inst);
++		vpu_set_last_buffer_dequeued(inst, true);
+ 	} else {
+ 		vdec->drain = 1;
+ 		vdec_drain(inst);
+diff --git a/drivers/media/platform/amphion/venc.c b/drivers/media/platform/amphion/venc.c
+index e6e8fe45fc7c3..58480e2755ec4 100644
+--- a/drivers/media/platform/amphion/venc.c
++++ b/drivers/media/platform/amphion/venc.c
+@@ -458,7 +458,7 @@ static int venc_encoder_cmd(struct file *file, void *fh, struct v4l2_encoder_cmd
+ 	vpu_inst_lock(inst);
+ 	if (cmd->cmd == V4L2_ENC_CMD_STOP) {
+ 		if (inst->state == VPU_CODEC_STATE_DEINIT)
+-			vpu_set_last_buffer_dequeued(inst);
++			vpu_set_last_buffer_dequeued(inst, true);
+ 		else
+ 			venc_request_eos(inst);
+ 	}
+@@ -878,7 +878,7 @@ static void venc_set_last_buffer_dequeued(struct vpu_inst *inst)
+ 	struct venc_t *venc = inst->priv;
+ 
+ 	if (venc->stopped && list_empty(&venc->frames))
+-		vpu_set_last_buffer_dequeued(inst);
++		vpu_set_last_buffer_dequeued(inst, true);
+ }
+ 
+ static void venc_stop_done(struct vpu_inst *inst)
+diff --git a/drivers/media/platform/amphion/vpu_malone.c b/drivers/media/platform/amphion/vpu_malone.c
+index ef44bff9fbaf6..c1d6606ad7e57 100644
+--- a/drivers/media/platform/amphion/vpu_malone.c
++++ b/drivers/media/platform/amphion/vpu_malone.c
+@@ -1313,6 +1313,15 @@ static int vpu_malone_insert_scode_pic(struct malone_scode_t *scode, u32 codec_i
+ 	return sizeof(hdr);
+ }
+ 
++static int vpu_malone_insert_scode_vc1_g_seq(struct malone_scode_t *scode)
++{
++	if (!scode->inst->total_input_count)
++		return 0;
++	if (vpu_vb_is_codecconfig(to_vb2_v4l2_buffer(scode->vb)))
++		scode->need_data = 0;
++	return 0;
++}
++
+ static int vpu_malone_insert_scode_vc1_g_pic(struct malone_scode_t *scode)
+ {
+ 	struct vb2_v4l2_buffer *vbuf;
+@@ -1344,6 +1353,8 @@ static int vpu_malone_insert_scode_vc1_l_seq(struct malone_scode_t *scode)
+ 	int size = 0;
+ 	u8 rcv_seqhdr[MALONE_VC1_RCV_SEQ_HEADER_LEN];
+ 
++	if (vpu_vb_is_codecconfig(to_vb2_v4l2_buffer(scode->vb)))
++		scode->need_data = 0;
+ 	if (scode->inst->total_input_count)
+ 		return 0;
+ 	scode->need_data = 0;
+@@ -1458,6 +1469,7 @@ static const struct malone_scode_handler scode_handlers[] = {
+ 	},
+ 	{
+ 		.pixelformat = V4L2_PIX_FMT_VC1_ANNEX_G,
++		.insert_scode_seq = vpu_malone_insert_scode_vc1_g_seq,
+ 		.insert_scode_pic = vpu_malone_insert_scode_vc1_g_pic,
+ 	},
+ 	{
+diff --git a/drivers/media/platform/amphion/vpu_v4l2.c b/drivers/media/platform/amphion/vpu_v4l2.c
+index 6773b885597ce..810e93d2c954a 100644
+--- a/drivers/media/platform/amphion/vpu_v4l2.c
++++ b/drivers/media/platform/amphion/vpu_v4l2.c
+@@ -100,7 +100,7 @@ int vpu_notify_source_change(struct vpu_inst *inst)
+ 	return 0;
+ }
+ 
+-int vpu_set_last_buffer_dequeued(struct vpu_inst *inst)
++int vpu_set_last_buffer_dequeued(struct vpu_inst *inst, bool eos)
+ {
+ 	struct vb2_queue *q;
+ 
+@@ -116,7 +116,8 @@ int vpu_set_last_buffer_dequeued(struct vpu_inst *inst)
+ 	vpu_trace(inst->dev, "last buffer dequeued\n");
+ 	q->last_buffer_dequeued = true;
+ 	wake_up(&q->done_wq);
+-	vpu_notify_eos(inst);
++	if (eos)
++		vpu_notify_eos(inst);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/platform/amphion/vpu_v4l2.h b/drivers/media/platform/amphion/vpu_v4l2.h
+index ef5de6b66e474..60f43056a7a28 100644
+--- a/drivers/media/platform/amphion/vpu_v4l2.h
++++ b/drivers/media/platform/amphion/vpu_v4l2.h
+@@ -27,7 +27,7 @@ struct vb2_v4l2_buffer *vpu_find_buf_by_idx(struct vpu_inst *inst, u32 type, u32
+ void vpu_v4l2_set_error(struct vpu_inst *inst);
+ int vpu_notify_eos(struct vpu_inst *inst);
+ int vpu_notify_source_change(struct vpu_inst *inst);
+-int vpu_set_last_buffer_dequeued(struct vpu_inst *inst);
++int vpu_set_last_buffer_dequeued(struct vpu_inst *inst, bool eos);
+ void vpu_vb2_buffers_return(struct vpu_inst *inst, unsigned int type, enum vb2_buffer_state state);
+ int vpu_get_num_buffers(struct vpu_inst *inst, u32 type);
+ bool vpu_is_source_empty(struct vpu_inst *inst);
+diff --git a/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c b/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c
+index f3073d1e7f420..03f8d7cd8eddc 100644
+--- a/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c
++++ b/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.c
+@@ -71,7 +71,6 @@ static void vdec_msg_queue_dec(struct vdec_msg_queue *msg_queue, int hardware_in
+ int vdec_msg_queue_qbuf(struct vdec_msg_queue_ctx *msg_ctx, struct vdec_lat_buf *buf)
+ {
+ 	struct list_head *head;
+-	int status;
+ 
+ 	head = vdec_get_buf_list(msg_ctx->hardware_index, buf);
+ 	if (!head) {
+@@ -87,12 +86,9 @@ int vdec_msg_queue_qbuf(struct vdec_msg_queue_ctx *msg_ctx, struct vdec_lat_buf
+ 	if (msg_ctx->hardware_index != MTK_VDEC_CORE) {
+ 		wake_up_all(&msg_ctx->ready_to_use);
+ 	} else {
+-		if (buf->ctx->msg_queue.core_work_cnt <
+-			atomic_read(&buf->ctx->msg_queue.core_list_cnt)) {
+-			status = queue_work(buf->ctx->dev->core_workqueue,
+-					    &buf->ctx->msg_queue.core_work);
+-			if (status)
+-				buf->ctx->msg_queue.core_work_cnt++;
++		if (!(buf->ctx->msg_queue.status & CONTEXT_LIST_QUEUED)) {
++			queue_work(buf->ctx->dev->core_workqueue, &buf->ctx->msg_queue.core_work);
++			buf->ctx->msg_queue.status |= CONTEXT_LIST_QUEUED;
+ 		}
+ 	}
+ 
+@@ -261,7 +257,10 @@ static void vdec_msg_queue_core_work(struct work_struct *work)
+ 		container_of(msg_queue, struct mtk_vcodec_ctx, msg_queue);
+ 	struct mtk_vcodec_dev *dev = ctx->dev;
+ 	struct vdec_lat_buf *lat_buf;
+-	int status;
++
++	spin_lock(&ctx->dev->msg_queue_core_ctx.ready_lock);
++	ctx->msg_queue.status &= ~CONTEXT_LIST_QUEUED;
++	spin_unlock(&ctx->dev->msg_queue_core_ctx.ready_lock);
+ 
+ 	lat_buf = vdec_msg_queue_dqbuf(&dev->msg_queue_core_ctx);
+ 	if (!lat_buf)
+@@ -278,17 +277,13 @@ static void vdec_msg_queue_core_work(struct work_struct *work)
+ 	vdec_msg_queue_qbuf(&ctx->msg_queue.lat_ctx, lat_buf);
+ 
+ 	wake_up_all(&ctx->msg_queue.core_dec_done);
+-	spin_lock(&dev->msg_queue_core_ctx.ready_lock);
+-	lat_buf->ctx->msg_queue.core_work_cnt--;
+-
+-	if (lat_buf->ctx->msg_queue.core_work_cnt <
+-		atomic_read(&lat_buf->ctx->msg_queue.core_list_cnt)) {
+-		status = queue_work(lat_buf->ctx->dev->core_workqueue,
+-				    &lat_buf->ctx->msg_queue.core_work);
+-		if (status)
+-			lat_buf->ctx->msg_queue.core_work_cnt++;
++	if (!(ctx->msg_queue.status & CONTEXT_LIST_QUEUED) &&
++	    atomic_read(&msg_queue->core_list_cnt)) {
++		spin_lock(&ctx->dev->msg_queue_core_ctx.ready_lock);
++		ctx->msg_queue.status |= CONTEXT_LIST_QUEUED;
++		spin_unlock(&ctx->dev->msg_queue_core_ctx.ready_lock);
++		queue_work(ctx->dev->core_workqueue, &msg_queue->core_work);
+ 	}
+-	spin_unlock(&dev->msg_queue_core_ctx.ready_lock);
+ }
+ 
+ int vdec_msg_queue_init(struct vdec_msg_queue *msg_queue,
+@@ -303,13 +298,13 @@ int vdec_msg_queue_init(struct vdec_msg_queue *msg_queue,
+ 		return 0;
+ 
+ 	msg_queue->ctx = ctx;
+-	msg_queue->core_work_cnt = 0;
+ 	vdec_msg_queue_init_ctx(&msg_queue->lat_ctx, MTK_VDEC_LAT0);
+ 	INIT_WORK(&msg_queue->core_work, vdec_msg_queue_core_work);
+ 
+ 	atomic_set(&msg_queue->lat_list_cnt, 0);
+ 	atomic_set(&msg_queue->core_list_cnt, 0);
+ 	init_waitqueue_head(&msg_queue->core_dec_done);
++	msg_queue->status = CONTEXT_LIST_EMPTY;
+ 
+ 	msg_queue->wdma_addr.size =
+ 		vde_msg_queue_get_trans_size(ctx->picinfo.buf_w,
+diff --git a/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.h b/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.h
+index a5d44bc97c16b..8f82d14847726 100644
+--- a/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.h
++++ b/drivers/media/platform/mediatek/vcodec/vdec_msg_queue.h
+@@ -21,6 +21,18 @@ struct mtk_vcodec_ctx;
+ struct mtk_vcodec_dev;
+ typedef int (*core_decode_cb_t)(struct vdec_lat_buf *lat_buf);
+ 
++/**
++ * enum core_ctx_status - Context decode status for core hardware.
++ * @CONTEXT_LIST_EMPTY: No buffer queued on core hardware (must always be 0)
++ * @CONTEXT_LIST_QUEUED: Buffer queued to core work list
++ * @CONTEXT_LIST_DEC_DONE: Context decode done
++ */
++enum core_ctx_status {
++	CONTEXT_LIST_EMPTY = 0,
++	CONTEXT_LIST_QUEUED,
++	CONTEXT_LIST_DEC_DONE,
++};
++
+ /**
+  * struct vdec_msg_queue_ctx - represents a queue for buffers ready to be processed
+  * @ready_to_use: ready used queue used to signalize when get a job queue
+@@ -77,7 +89,7 @@ struct vdec_lat_buf {
+  * @lat_list_cnt: used to record each instance lat list count
+  * @core_list_cnt: used to record each instance core list count
+  * @core_dec_done: core work queue decode done event
+- * @core_work_cnt: the number of core work in work queue
++ * @status: current context decode status for core hardware
+  */
+ struct vdec_msg_queue {
+ 	struct vdec_lat_buf lat_buf[NUM_BUFFER_COUNT];
+@@ -93,7 +105,7 @@ struct vdec_msg_queue {
+ 	atomic_t lat_list_cnt;
+ 	atomic_t core_list_cnt;
+ 	wait_queue_head_t core_dec_done;
+-	int core_work_cnt;
++	int status;
+ };
+ 
+ /**
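
The vdec_msg_queue rework replaces the core_work_cnt counter, whose increments and decrements had to stay balanced across lock drops, with a CONTEXT_LIST_QUEUED status bit tested and set under the ready_lock, guaranteeing at most one outstanding work item per context. A lock-free single-threaded sketch of the test-and-set idiom (locking elided; all names illustrative):

	#include <stdio.h>

	#define LIST_QUEUED 0x1

	struct ctx_like {
		unsigned int status;
		int queued_works;
	};

	/* Queue the work item only if it is not already queued; in the
	 * driver this runs under msg_queue_core_ctx.ready_lock. */
	static void queue_core_work(struct ctx_like *c)
	{
		if (!(c->status & LIST_QUEUED)) {
			c->status |= LIST_QUEUED;
			c->queued_works++;	/* stand-in for queue_work() */
		}
	}

	/* The work function re-arms the flag before processing. */
	static void core_work_fn(struct ctx_like *c)
	{
		c->status &= ~LIST_QUEUED;
		/* ... decode one buffer ... */
	}

	int main(void)
	{
		struct ctx_like c = { 0, 0 };

		queue_core_work(&c);
		queue_core_work(&c);	/* no-op: already queued */
		printf("queued %d work item(s)\n", c.queued_works);
		core_work_fn(&c);
		queue_core_work(&c);	/* may queue again now */
		printf("queued %d work item(s)\n", c.queued_works);
		return 0;
	}
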
+diff --git a/drivers/media/platform/qcom/venus/helpers.c b/drivers/media/platform/qcom/venus/helpers.c
+index a2ceab7f9ddbf..a68389b0aae0a 100644
+--- a/drivers/media/platform/qcom/venus/helpers.c
++++ b/drivers/media/platform/qcom/venus/helpers.c
+@@ -1036,8 +1036,8 @@ static u32 get_framesize_raw_yuv420_tp10_ubwc(u32 width, u32 height)
+ 	u32 extradata = SZ_16K;
+ 	u32 size;
+ 
+-	y_stride = ALIGN(ALIGN(width, 192) * 4 / 3, 256);
+-	uv_stride = ALIGN(ALIGN(width, 192) * 4 / 3, 256);
++	y_stride = ALIGN(width * 4 / 3, 256);
++	uv_stride = ALIGN(width * 4 / 3, 256);
+ 	y_sclines = ALIGN(height, 16);
+ 	uv_sclines = ALIGN((height + 1) >> 1, 16);
+ 
+diff --git a/drivers/media/platform/renesas/rcar_fdp1.c b/drivers/media/platform/renesas/rcar_fdp1.c
+index f43e458590b8c..ab39cd2201c85 100644
+--- a/drivers/media/platform/renesas/rcar_fdp1.c
++++ b/drivers/media/platform/renesas/rcar_fdp1.c
+@@ -254,6 +254,8 @@ MODULE_PARM_DESC(debug, "activate debug info");
+ 
+ /* Internal Data (HW Version) */
+ #define FD1_IP_INTDATA			0x0800
++/* R-Car Gen2 HW manual says zero, but actual value matches R-Car H3 ES1.x */
++#define FD1_IP_GEN2			0x02010101
+ #define FD1_IP_M3W			0x02010202
+ #define FD1_IP_H3			0x02010203
+ #define FD1_IP_M3N			0x02010204
+@@ -2360,6 +2362,9 @@ static int fdp1_probe(struct platform_device *pdev)
+ 
+ 	hw_version = fdp1_read(fdp1, FD1_IP_INTDATA);
+ 	switch (hw_version) {
++	case FD1_IP_GEN2:
++		dprintk(fdp1, "FDP1 Version R-Car Gen2\n");
++		break;
+ 	case FD1_IP_M3W:
+ 		dprintk(fdp1, "FDP1 Version R-Car M3-W\n");
+ 		break;
+diff --git a/drivers/media/usb/dvb-usb-v2/az6007.c b/drivers/media/usb/dvb-usb-v2/az6007.c
+index 62ee09f28a0bc..7524c90f5da61 100644
+--- a/drivers/media/usb/dvb-usb-v2/az6007.c
++++ b/drivers/media/usb/dvb-usb-v2/az6007.c
+@@ -202,7 +202,8 @@ static int az6007_rc_query(struct dvb_usb_device *d)
+ 	unsigned code;
+ 	enum rc_proto proto;
+ 
+-	az6007_read(d, AZ6007_READ_IR, 0, 0, st->data, 10);
++	if (az6007_read(d, AZ6007_READ_IR, 0, 0, st->data, 10) < 0)
++		return -EIO;
+ 
+ 	if (st->data[1] == 0x44)
+ 		return 0;
+diff --git a/drivers/media/usb/siano/smsusb.c b/drivers/media/usb/siano/smsusb.c
+index 6f443c542c6da..640737d3b8aeb 100644
+--- a/drivers/media/usb/siano/smsusb.c
++++ b/drivers/media/usb/siano/smsusb.c
+@@ -179,7 +179,8 @@ static void smsusb_stop_streaming(struct smsusb_device_t *dev)
+ 
+ 	for (i = 0; i < MAX_URBS; i++) {
+ 		usb_kill_urb(&dev->surbs[i].urb);
+-		cancel_work_sync(&dev->surbs[i].wq);
++		if (dev->surbs[i].wq.func)
++			cancel_work_sync(&dev->surbs[i].wq);
+ 
+ 		if (dev->surbs[i].cb) {
+ 			smscore_putbuffer(dev->coredev, dev->surbs[i].cb);
+diff --git a/drivers/memory/brcmstb_dpfe.c b/drivers/memory/brcmstb_dpfe.c
+index 76c82e9c8fceb..9339f80b21c50 100644
+--- a/drivers/memory/brcmstb_dpfe.c
++++ b/drivers/memory/brcmstb_dpfe.c
+@@ -434,15 +434,17 @@ static void __finalize_command(struct brcmstb_dpfe_priv *priv)
+ static int __send_command(struct brcmstb_dpfe_priv *priv, unsigned int cmd,
+ 			  u32 result[])
+ {
+-	const u32 *msg = priv->dpfe_api->command[cmd];
+ 	void __iomem *regs = priv->regs;
+ 	unsigned int i, chksum, chksum_idx;
++	const u32 *msg;
+ 	int ret = 0;
+ 	u32 resp;
+ 
+ 	if (cmd >= DPFE_CMD_MAX)
+ 		return -1;
+ 
++	msg = priv->dpfe_api->command[cmd];
++
+ 	mutex_lock(&priv->lock);
+ 
+ 	/* Wait for DCPU to become ready */
+diff --git a/drivers/memstick/host/r592.c b/drivers/memstick/host/r592.c
+index 42bfc46842b82..461f5ffd02bc1 100644
+--- a/drivers/memstick/host/r592.c
++++ b/drivers/memstick/host/r592.c
+@@ -44,12 +44,10 @@ static const char *tpc_names[] = {
+  * memstick_debug_get_tpc_name - debug helper that returns string for
+  * a TPC number
+  */
+-const char *memstick_debug_get_tpc_name(int tpc)
++static __maybe_unused const char *memstick_debug_get_tpc_name(int tpc)
+ {
+ 	return tpc_names[tpc-1];
+ }
+-EXPORT_SYMBOL(memstick_debug_get_tpc_name);
+-
+ 
+ /* Read a register*/
+ static inline u32 r592_read_reg(struct r592_device *dev, int address)
+diff --git a/drivers/mfd/intel-lpss-acpi.c b/drivers/mfd/intel-lpss-acpi.c
+index a143c8dca2d93..212818aef93e2 100644
+--- a/drivers/mfd/intel-lpss-acpi.c
++++ b/drivers/mfd/intel-lpss-acpi.c
+@@ -183,6 +183,9 @@ static int intel_lpss_acpi_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	info->mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!info->mem)
++		return -ENODEV;
++
+ 	info->irq = platform_get_irq(pdev, 0);
+ 
+ 	ret = intel_lpss_probe(&pdev->dev, info);
+diff --git a/drivers/mfd/rt5033.c b/drivers/mfd/rt5033.c
+index a5e520fe50a14..8029d444b7942 100644
+--- a/drivers/mfd/rt5033.c
++++ b/drivers/mfd/rt5033.c
+@@ -40,9 +40,6 @@ static const struct mfd_cell rt5033_devs[] = {
+ 	{
+ 		.name = "rt5033-charger",
+ 		.of_compatible = "richtek,rt5033-charger",
+-	}, {
+-		.name = "rt5033-battery",
+-		.of_compatible = "richtek,rt5033-battery",
+ 	}, {
+ 		.name = "rt5033-led",
+ 		.of_compatible = "richtek,rt5033-led",
+diff --git a/drivers/mfd/stmfx.c b/drivers/mfd/stmfx.c
+index e281971ba54ed..76188212c66eb 100644
+--- a/drivers/mfd/stmfx.c
++++ b/drivers/mfd/stmfx.c
+@@ -330,9 +330,8 @@ static int stmfx_chip_init(struct i2c_client *client)
+ 	stmfx->vdd = devm_regulator_get_optional(&client->dev, "vdd");
+ 	ret = PTR_ERR_OR_ZERO(stmfx->vdd);
+ 	if (ret) {
+-		if (ret == -ENODEV)
+-			stmfx->vdd = NULL;
+-		else
++		stmfx->vdd = NULL;
++		if (ret != -ENODEV)
+ 			return dev_err_probe(&client->dev, ret, "Failed to get VDD regulator\n");
+ 	}
+ 
+@@ -387,7 +386,7 @@ static int stmfx_chip_init(struct i2c_client *client)
+ 
+ err:
+ 	if (stmfx->vdd)
+-		return regulator_disable(stmfx->vdd);
++		regulator_disable(stmfx->vdd);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/mfd/stmpe.c b/drivers/mfd/stmpe.c
+index a92301dfc7126..9c3cf58457a7d 100644
+--- a/drivers/mfd/stmpe.c
++++ b/drivers/mfd/stmpe.c
+@@ -1485,9 +1485,9 @@ int stmpe_probe(struct stmpe_client_info *ci, enum stmpe_partnum partnum)
+ 
+ void stmpe_remove(struct stmpe *stmpe)
+ {
+-	if (!IS_ERR(stmpe->vio))
++	if (!IS_ERR(stmpe->vio) && regulator_is_enabled(stmpe->vio))
+ 		regulator_disable(stmpe->vio);
+-	if (!IS_ERR(stmpe->vcc))
++	if (!IS_ERR(stmpe->vcc) && regulator_is_enabled(stmpe->vcc))
+ 		regulator_disable(stmpe->vcc);
+ 
+ 	__stmpe_disable(stmpe, STMPE_BLOCK_ADC);
+diff --git a/drivers/mfd/tps65010.c b/drivers/mfd/tps65010.c
+index fb733288cca3b..faea4ff44c6fe 100644
+--- a/drivers/mfd/tps65010.c
++++ b/drivers/mfd/tps65010.c
+@@ -506,12 +506,8 @@ static void tps65010_remove(struct i2c_client *client)
+ 	struct tps65010		*tps = i2c_get_clientdata(client);
+ 	struct tps65010_board	*board = dev_get_platdata(&client->dev);
+ 
+-	if (board && board->teardown) {
+-		int status = board->teardown(client, board->context);
+-		if (status < 0)
+-			dev_dbg(&client->dev, "board %s %s err %d\n",
+-				"teardown", client->name, status);
+-	}
++	if (board && board->teardown)
++		board->teardown(client, &tps->chip);
+ 	if (client->irq > 0)
+ 		free_irq(client->irq, tps);
+ 	cancel_delayed_work_sync(&tps->work);
+@@ -619,7 +615,7 @@ static int tps65010_probe(struct i2c_client *client)
+ 				tps, DEBUG_FOPS);
+ 
+ 	/* optionally register GPIOs */
+-	if (board && board->base != 0) {
++	if (board) {
+ 		tps->outmask = board->outmask;
+ 
+ 		tps->chip.label = client->name;
+@@ -632,7 +628,7 @@ static int tps65010_probe(struct i2c_client *client)
+ 		/* NOTE:  only partial support for inputs; nyet IRQs */
+ 		tps->chip.get = tps65010_gpio_get;
+ 
+-		tps->chip.base = board->base;
++		tps->chip.base = -1;
+ 		tps->chip.ngpio = 7;
+ 		tps->chip.can_sleep = 1;
+ 
+@@ -641,7 +637,7 @@ static int tps65010_probe(struct i2c_client *client)
+ 			dev_err(&client->dev, "can't add gpiochip, err %d\n",
+ 					status);
+ 		else if (board->setup) {
+-			status = board->setup(client, board->context);
++			status = board->setup(client, &tps->chip);
+ 			if (status < 0) {
+ 				dev_dbg(&client->dev,
+ 					"board %s %s err %d\n",
+diff --git a/drivers/mfd/wcd934x.c b/drivers/mfd/wcd934x.c
+index 07e884087f2c7..281470d6b0b99 100644
+--- a/drivers/mfd/wcd934x.c
++++ b/drivers/mfd/wcd934x.c
+@@ -258,8 +258,9 @@ static int wcd934x_slim_probe(struct slim_device *sdev)
+ 	usleep_range(600, 650);
+ 	reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
+ 	if (IS_ERR(reset_gpio)) {
+-		return dev_err_probe(dev, PTR_ERR(reset_gpio),
+-				"Failed to get reset gpio: err = %ld\n", PTR_ERR(reset_gpio));
++		ret = dev_err_probe(dev, PTR_ERR(reset_gpio),
++				    "Failed to get reset gpio\n");
++		goto err_disable_regulators;
+ 	}
+ 	msleep(20);
+ 	gpiod_set_value(reset_gpio, 1);
+@@ -269,6 +270,10 @@ static int wcd934x_slim_probe(struct slim_device *sdev)
+ 	dev_set_drvdata(dev, ddata);
+ 
+ 	return 0;
++
++err_disable_regulators:
++	regulator_bulk_disable(WCD934X_MAX_SUPPLY, ddata->supplies);
++	return ret;
+ }
+ 
+ static void wcd934x_slim_remove(struct slim_device *sdev)
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 30d4d0476248f..9051551d99373 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -2225,6 +2225,9 @@ static int fastrpc_device_register(struct device *dev, struct fastrpc_channel_ct
+ 	fdev->miscdev.fops = &fastrpc_fops;
+ 	fdev->miscdev.name = devm_kasprintf(dev, GFP_KERNEL, "fastrpc-%s%s",
+ 					    domain, is_secured ? "-secure" : "");
++	if (!fdev->miscdev.name)
++		return -ENOMEM;
++
+ 	err = misc_register(&fdev->miscdev);
+ 	if (!err) {
+ 		if (is_secured)
+diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
+index b4712ff196b4e..0772e4a4757e9 100644
+--- a/drivers/misc/lkdtm/core.c
++++ b/drivers/misc/lkdtm/core.c
+@@ -79,7 +79,7 @@ static struct crashpoint crashpoints[] = {
+ 	CRASHPOINT("INT_HARDWARE_ENTRY", "do_IRQ"),
+ 	CRASHPOINT("INT_HW_IRQ_EN",	 "handle_irq_event"),
+ 	CRASHPOINT("INT_TASKLET_ENTRY",	 "tasklet_action"),
+-	CRASHPOINT("FS_DEVRW",		 "ll_rw_block"),
++	CRASHPOINT("FS_SUBMIT_BH",		 "submit_bh"),
+ 	CRASHPOINT("MEM_SWAPOUT",	 "shrink_inactive_list"),
+ 	CRASHPOINT("TIMERADD",		 "hrtimer_start"),
+ 	CRASHPOINT("SCSI_QUEUE_RQ",	 "scsi_queue_rq"),
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index d920c41783893..e46330815484d 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -178,6 +178,7 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
+ 			       int recovery_mode,
+ 			       struct mmc_queue *mq);
+ static void mmc_blk_hsq_req_done(struct mmc_request *mrq);
++static int mmc_spi_err_check(struct mmc_card *card);
+ 
+ static struct mmc_blk_data *mmc_blk_get(struct gendisk *disk)
+ {
+@@ -608,6 +609,11 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 	if ((card->host->caps & MMC_CAP_WAIT_WHILE_BUSY) && use_r1b_resp)
+ 		return 0;
+ 
++	if (mmc_host_is_spi(card->host)) {
++		if (idata->ic.write_flag || r1b_resp || cmd.flags & MMC_RSP_SPI_BUSY)
++			return mmc_spi_err_check(card);
++		return err;
++	}
+ 	/* Ensure RPMB/R1B command has completed by polling with CMD13. */
+ 	if (idata->rpmb || r1b_resp)
+ 		err = mmc_poll_for_busy(card, busy_timeout_ms, false,
+diff --git a/drivers/mmc/core/card.h b/drivers/mmc/core/card.h
+index cfdd1ff40b865..4edf9057fa79d 100644
+--- a/drivers/mmc/core/card.h
++++ b/drivers/mmc/core/card.h
+@@ -53,6 +53,10 @@ struct mmc_fixup {
+ 	unsigned int manfid;
+ 	unsigned short oemid;
+ 
++	/* Manufacturing date */
++	unsigned short year;
++	unsigned char month;
++
+ 	/* SDIO-specific fields. You can use SDIO_ANY_ID here of course */
+ 	u16 cis_vendor, cis_device;
+ 
+@@ -68,6 +72,8 @@ struct mmc_fixup {
+ 
+ #define CID_MANFID_ANY (-1u)
+ #define CID_OEMID_ANY ((unsigned short) -1)
++#define CID_YEAR_ANY ((unsigned short) -1)
++#define CID_MONTH_ANY ((unsigned char) -1)
+ #define CID_NAME_ANY (NULL)
+ 
+ #define EXT_CSD_REV_ANY (-1u)
+@@ -81,17 +87,21 @@ struct mmc_fixup {
+ #define CID_MANFID_APACER       0x27
+ #define CID_MANFID_KINGSTON     0x70
+ #define CID_MANFID_HYNIX	0x90
++#define CID_MANFID_KINGSTON_SD	0x9F
+ #define CID_MANFID_NUMONYX	0xFE
+ 
+ #define END_FIXUP { NULL }
+ 
+-#define _FIXUP_EXT(_name, _manfid, _oemid, _rev_start, _rev_end,	\
+-		   _cis_vendor, _cis_device,				\
+-		   _fixup, _data, _ext_csd_rev)				\
++#define _FIXUP_EXT(_name, _manfid, _oemid, _year, _month,	\
++		   _rev_start, _rev_end,			\
++		   _cis_vendor, _cis_device,			\
++		   _fixup, _data, _ext_csd_rev)			\
+ 	{						\
+ 		.name = (_name),			\
+ 		.manfid = (_manfid),			\
+ 		.oemid = (_oemid),			\
++		.year = (_year),			\
++		.month = (_month),			\
+ 		.rev_start = (_rev_start),		\
+ 		.rev_end = (_rev_end),			\
+ 		.cis_vendor = (_cis_vendor),		\
+@@ -103,8 +113,8 @@ struct mmc_fixup {
+ 
+ #define MMC_FIXUP_REV(_name, _manfid, _oemid, _rev_start, _rev_end,	\
+ 		      _fixup, _data, _ext_csd_rev)			\
+-	_FIXUP_EXT(_name, _manfid,					\
+-		   _oemid, _rev_start, _rev_end,			\
++	_FIXUP_EXT(_name, _manfid, _oemid, CID_YEAR_ANY, CID_MONTH_ANY,	\
++		   _rev_start, _rev_end,				\
+ 		   SDIO_ANY_ID, SDIO_ANY_ID,				\
+ 		   _fixup, _data, _ext_csd_rev)				\
+ 
+@@ -118,8 +128,9 @@ struct mmc_fixup {
+ 		      _ext_csd_rev)
+ 
+ #define SDIO_FIXUP(_vendor, _device, _fixup, _data)			\
+-	_FIXUP_EXT(CID_NAME_ANY, CID_MANFID_ANY,			\
+-		    CID_OEMID_ANY, 0, -1ull,				\
++	_FIXUP_EXT(CID_NAME_ANY, CID_MANFID_ANY, CID_OEMID_ANY,		\
++		   CID_YEAR_ANY, CID_MONTH_ANY,				\
++		   0, -1ull,						\
+ 		   _vendor, _device,					\
+ 		   _fixup, _data, EXT_CSD_REV_ANY)			\
+ 
+@@ -264,4 +275,9 @@ static inline int mmc_card_broken_sd_discard(const struct mmc_card *c)
+ 	return c->quirks & MMC_QUIRK_BROKEN_SD_DISCARD;
+ }
+ 
++static inline int mmc_card_broken_sd_cache(const struct mmc_card *c)
++{
++	return c->quirks & MMC_QUIRK_BROKEN_SD_CACHE;
++}
++
+ #endif
+diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
+index 29b9497936df9..857315f185fcf 100644
+--- a/drivers/mmc/core/quirks.h
++++ b/drivers/mmc/core/quirks.h
+@@ -53,6 +53,15 @@ static const struct mmc_fixup __maybe_unused mmc_blk_fixups[] = {
+ 	MMC_FIXUP("MMC32G", CID_MANFID_TOSHIBA, CID_OEMID_ANY, add_quirk_mmc,
+ 		  MMC_QUIRK_BLK_NO_CMD23),
+ 
++	/*
++	 * Kingston Canvas Go! Plus microSD cards never finish SD cache flush.
++	 * This has so far only been observed on cards from 11/2019, while newer
++	 * cards from 2023/05 do not exhibit this behavior.
++	 */
++	_FIXUP_EXT("SD64G", CID_MANFID_KINGSTON_SD, 0x5449, 2019, 11,
++		   0, -1ull, SDIO_ANY_ID, SDIO_ANY_ID, add_quirk_sd,
++		   MMC_QUIRK_BROKEN_SD_CACHE, EXT_CSD_REV_ANY),
++
+ 	/*
+ 	 * Some SD cards lockup while using CMD23 multiblock transfers.
+ 	 */
+@@ -100,6 +109,20 @@ static const struct mmc_fixup __maybe_unused mmc_blk_fixups[] = {
+ 	MMC_FIXUP("V10016", CID_MANFID_KINGSTON, CID_OEMID_ANY, add_quirk_mmc,
+ 		  MMC_QUIRK_TRIM_BROKEN),
+ 
++	/*
++	 * Kingston EMMC04G-M627 advertises TRIM but it does not seem to
++	 * support being used to offload WRITE_ZEROES.
++	 */
++	MMC_FIXUP("M62704", CID_MANFID_KINGSTON, 0x0100, add_quirk_mmc,
++		  MMC_QUIRK_TRIM_BROKEN),
++
++	/*
++	 * Micron MTFC4GACAJCN-1M advertises TRIM but it does not seem to
++	 * support being used to offload WRITE_ZEROES.
++	 */
++	MMC_FIXUP("Q2J54A", CID_MANFID_MICRON, 0x014e, add_quirk_mmc,
++		  MMC_QUIRK_TRIM_BROKEN),
++
+ 	/*
+ 	 * Some SD cards reports discard support while they don't
+ 	 */
+@@ -209,6 +232,10 @@ static inline void mmc_fixup_device(struct mmc_card *card,
+ 		if (f->of_compatible &&
+ 		    !mmc_fixup_of_compatible_match(card, f->of_compatible))
+ 			continue;
++		if (f->year != CID_YEAR_ANY && f->year != card->cid.year)
++			continue;
++		if (f->month != CID_MONTH_ANY && f->month != card->cid.month)
++			continue;
+ 
+ 		dev_dbg(&card->dev, "calling %ps\n", f->vendor_fixup);
+ 		f->vendor_fixup(card, f->data);
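
The quirks changes above extend the fixup table with manufacturing-date fields: _FIXUP_EXT now carries a year and month, CID_YEAR_ANY / CID_MONTH_ANY act as wildcards, and mmc_fixup_device() skips entries whose date does not match the card's CID, which is how the Kingston cache quirk is pinned to 11/2019 cards. A standalone sketch of that matching logic (struct layouts and values invented for illustration):

	#include <stdbool.h>
	#include <stdio.h>

	#define YEAR_ANY  ((unsigned short)-1)
	#define MONTH_ANY ((unsigned char)-1)

	struct fixup_like {
		unsigned short year;
		unsigned char month;
		const char *quirk;
	};

	struct card_like {
		unsigned short year;
		unsigned char month;
	};

	/* Wildcards skip the comparison; concrete values must match. */
	static bool fixup_matches(const struct fixup_like *f,
				  const struct card_like *c)
	{
		if (f->year != YEAR_ANY && f->year != c->year)
			return false;
		if (f->month != MONTH_ANY && f->month != c->month)
			return false;
		return true;
	}

	int main(void)
	{
		struct fixup_like f = { 2019, 11, "MMC_QUIRK_BROKEN_SD_CACHE" };
		struct card_like old = { 2019, 11 }, fresh = { 2023, 5 };

		printf("11/2019 card matches: %d\n", fixup_matches(&f, &old));
		printf("05/2023 card matches: %d\n", fixup_matches(&f, &fresh));
		return 0;
	}
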
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index 72b664ed90cf6..246ce027ae0aa 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -1170,7 +1170,7 @@ static int sd_parse_ext_reg_perf(struct mmc_card *card, u8 fno, u8 page,
+ 		card->ext_perf.feature_support |= SD_EXT_PERF_HOST_MAINT;
+ 
+ 	/* Cache support at bit 0. */
+-	if (reg_buf[4] & BIT(0))
++	if ((reg_buf[4] & BIT(0)) && !mmc_card_broken_sd_cache(card))
+ 		card->ext_perf.feature_support |= SD_EXT_PERF_CACHE;
+ 
+ 	/* Command queue support indicated via queue depth bits (0 to 4). */
+diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
+index 696cbef3ff7de..f724bd0d2a612 100644
+--- a/drivers/mmc/host/mmci.c
++++ b/drivers/mmc/host/mmci.c
+@@ -2456,6 +2456,7 @@ static struct amba_driver mmci_driver = {
+ 	.drv		= {
+ 		.name	= DRIVER_NAME,
+ 		.pm	= &mmci_dev_pm_ops,
++		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
+ 	},
+ 	.probe		= mmci_probe,
+ 	.remove		= mmci_remove,
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 9785ec91654f7..97c42aacaf346 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -2707,7 +2707,7 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ 
+ 	/* Support for SDIO eint irq ? */
+ 	if ((mmc->pm_caps & MMC_PM_WAKE_SDIO_IRQ) && (mmc->pm_caps & MMC_PM_KEEP_POWER)) {
+-		host->eint_irq = platform_get_irq_byname(pdev, "sdio_wakeup");
++		host->eint_irq = platform_get_irq_byname_optional(pdev, "sdio_wakeup");
+ 		if (host->eint_irq > 0) {
+ 			host->pins_eint = pinctrl_lookup_state(host->pinctrl, "state_eint");
+ 			if (IS_ERR(host->pins_eint)) {
+diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c
+index 86454f1182bb1..6a259563690d6 100644
+--- a/drivers/mmc/host/omap.c
++++ b/drivers/mmc/host/omap.c
+@@ -26,6 +26,7 @@
+ #include <linux/clk.h>
+ #include <linux/scatterlist.h>
+ #include <linux/slab.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/platform_data/mmc-omap.h>
+ 
+ 
+@@ -111,6 +112,9 @@ struct mmc_omap_slot {
+ 	struct mmc_request      *mrq;
+ 	struct mmc_omap_host    *host;
+ 	struct mmc_host		*mmc;
++	struct gpio_desc	*vsd;
++	struct gpio_desc	*vio;
++	struct gpio_desc	*cover;
+ 	struct omap_mmc_slot_data *pdata;
+ };
+ 
+@@ -133,6 +137,7 @@ struct mmc_omap_host {
+ 	int			irq;
+ 	unsigned char		bus_mode;
+ 	unsigned int		reg_shift;
++	struct gpio_desc	*slot_switch;
+ 
+ 	struct work_struct	cmd_abort_work;
+ 	unsigned		abort:1;
+@@ -216,8 +221,13 @@ no_claim:
+ 
+ 	if (host->current_slot != slot) {
+ 		OMAP_MMC_WRITE(host, CON, slot->saved_con & 0xFC00);
+-		if (host->pdata->switch_slot != NULL)
+-			host->pdata->switch_slot(mmc_dev(slot->mmc), slot->id);
++		if (host->slot_switch)
++			/*
++			 * With two slots and a simple GPIO switch, setting
++			 * the GPIO to 0 selects slot ID 0 and setting it
++			 * to 1 selects slot ID 1.
++			 */
++			gpiod_set_value(host->slot_switch, slot->id);
+ 		host->current_slot = slot;
+ 	}
+ 
+@@ -297,6 +307,9 @@ static void mmc_omap_release_slot(struct mmc_omap_slot *slot, int clk_enabled)
+ static inline
+ int mmc_omap_cover_is_open(struct mmc_omap_slot *slot)
+ {
++	/* If we have a GPIO then use that */
++	if (slot->cover)
++		return gpiod_get_value(slot->cover);
+ 	if (slot->pdata->get_cover_state)
+ 		return slot->pdata->get_cover_state(mmc_dev(slot->mmc),
+ 						    slot->id);
+@@ -1106,6 +1119,11 @@ static void mmc_omap_set_power(struct mmc_omap_slot *slot, int power_on,
+ 
+ 	host = slot->host;
+ 
++	if (slot->vsd)
++		gpiod_set_value(slot->vsd, power_on);
++	if (slot->vio)
++		gpiod_set_value(slot->vio, power_on);
++
+ 	if (slot->pdata->set_power != NULL)
+ 		slot->pdata->set_power(mmc_dev(slot->mmc), slot->id, power_on,
+ 					vdd);
+@@ -1240,6 +1258,23 @@ static int mmc_omap_new_slot(struct mmc_omap_host *host, int id)
+ 	slot->power_mode = MMC_POWER_UNDEFINED;
+ 	slot->pdata = &host->pdata->slots[id];
+ 
++	/* Check for some optional GPIO controls */
++	slot->vsd = gpiod_get_index_optional(host->dev, "vsd",
++					     id, GPIOD_OUT_LOW);
++	if (IS_ERR(slot->vsd))
++		return dev_err_probe(host->dev, PTR_ERR(slot->vsd),
++				     "error looking up VSD GPIO\n");
++	slot->vio = gpiod_get_index_optional(host->dev, "vio",
++					     id, GPIOD_OUT_LOW);
++	if (IS_ERR(slot->vio))
++		return dev_err_probe(host->dev, PTR_ERR(slot->vio),
++				     "error looking up VIO GPIO\n");
++	slot->cover = gpiod_get_index_optional(host->dev, "cover",
++						id, GPIOD_IN);
++	if (IS_ERR(slot->cover))
++		return dev_err_probe(host->dev, PTR_ERR(slot->cover),
++				     "error looking up cover switch GPIO\n");
++
+ 	host->slots[id] = slot;
+ 
+ 	mmc->caps = 0;
+@@ -1349,6 +1384,13 @@ static int mmc_omap_probe(struct platform_device *pdev)
+ 	if (IS_ERR(host->virt_base))
+ 		return PTR_ERR(host->virt_base);
+ 
++	host->slot_switch = gpiod_get_optional(host->dev, "switch",
++					       GPIOD_OUT_LOW);
++	if (IS_ERR(host->slot_switch))
++		return dev_err_probe(host->dev, PTR_ERR(host->slot_switch),
++				     "error looking up slot switch GPIO\n");
++
++
+ 	INIT_WORK(&host->slot_release_work, mmc_omap_slot_release_work);
+ 	INIT_WORK(&host->send_stop_work, mmc_omap_send_stop_work);
+ 
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 3241916141d7d..ff41aa56564ea 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -1167,6 +1167,8 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
+ 		}
+ 	}
+ 
++	sdhci_config_dma(host);
++
+ 	if (host->flags & SDHCI_REQ_USE_DMA) {
+ 		int sg_cnt = sdhci_pre_dma_transfer(host, data, COOKIE_MAPPED);
+ 
+@@ -1186,8 +1188,6 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
+ 		}
+ 	}
+ 
+-	sdhci_config_dma(host);
+-
+ 	if (!(host->flags & SDHCI_REQ_USE_DMA)) {
+ 		int flags;
+ 
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index edbaa1444f8ec..091e035c76a6f 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -4197,7 +4197,7 @@ u32 bond_xmit_hash(struct bonding *bond, struct sk_buff *skb)
+ 		return skb->hash;
+ 
+ 	return __bond_xmit_hash(bond, skb, skb->data, skb->protocol,
+-				skb_mac_offset(skb), skb_network_offset(skb),
++				0, skb_network_offset(skb),
+ 				skb_headlen(skb));
+ }
+ 
+diff --git a/drivers/net/can/kvaser_pciefd.c b/drivers/net/can/kvaser_pciefd.c
+index be189edb256ce..37f4befca0345 100644
+--- a/drivers/net/can/kvaser_pciefd.c
++++ b/drivers/net/can/kvaser_pciefd.c
+@@ -538,6 +538,13 @@ static int kvaser_pciefd_set_tx_irq(struct kvaser_pciefd_can *can)
+ 	return 0;
+ }
+ 
++static inline void kvaser_pciefd_set_skb_timestamp(const struct kvaser_pciefd *pcie,
++						   struct sk_buff *skb, u64 timestamp)
++{
++	skb_hwtstamps(skb)->hwtstamp =
++		ns_to_ktime(div_u64(timestamp * 1000, pcie->freq_to_ticks_div));
++}
++
+ static void kvaser_pciefd_setup_controller(struct kvaser_pciefd_can *can)
+ {
+ 	u32 mode;
+@@ -1171,7 +1178,6 @@ static int kvaser_pciefd_handle_data_packet(struct kvaser_pciefd *pcie,
+ 	struct canfd_frame *cf;
+ 	struct can_priv *priv;
+ 	struct net_device_stats *stats;
+-	struct skb_shared_hwtstamps *shhwtstamps;
+ 	u8 ch_id = (p->header[1] >> KVASER_PCIEFD_PACKET_CHID_SHIFT) & 0x7;
+ 
+ 	if (ch_id >= pcie->nr_channels)
+@@ -1214,12 +1220,7 @@ static int kvaser_pciefd_handle_data_packet(struct kvaser_pciefd *pcie,
+ 		stats->rx_bytes += cf->len;
+ 	}
+ 	stats->rx_packets++;
+-
+-	shhwtstamps = skb_hwtstamps(skb);
+-
+-	shhwtstamps->hwtstamp =
+-		ns_to_ktime(div_u64(p->timestamp * 1000,
+-				    pcie->freq_to_ticks_div));
++	kvaser_pciefd_set_skb_timestamp(pcie, skb, p->timestamp);
+ 
+ 	return netif_rx(skb);
+ }
+@@ -1282,7 +1283,6 @@ static int kvaser_pciefd_rx_error_frame(struct kvaser_pciefd_can *can,
+ 	struct net_device *ndev = can->can.dev;
+ 	struct sk_buff *skb;
+ 	struct can_frame *cf = NULL;
+-	struct skb_shared_hwtstamps *shhwtstamps;
+ 	struct net_device_stats *stats = &ndev->stats;
+ 
+ 	old_state = can->can.state;
+@@ -1323,10 +1323,7 @@ static int kvaser_pciefd_rx_error_frame(struct kvaser_pciefd_can *can,
+ 		return -ENOMEM;
+ 	}
+ 
+-	shhwtstamps = skb_hwtstamps(skb);
+-	shhwtstamps->hwtstamp =
+-		ns_to_ktime(div_u64(p->timestamp * 1000,
+-				    can->kv_pcie->freq_to_ticks_div));
++	kvaser_pciefd_set_skb_timestamp(can->kv_pcie, skb, p->timestamp);
+ 	cf->can_id |= CAN_ERR_BUSERROR | CAN_ERR_CNT;
+ 
+ 	cf->data[6] = bec.txerr;
+@@ -1374,7 +1371,6 @@ static int kvaser_pciefd_handle_status_resp(struct kvaser_pciefd_can *can,
+ 		struct net_device *ndev = can->can.dev;
+ 		struct sk_buff *skb;
+ 		struct can_frame *cf;
+-		struct skb_shared_hwtstamps *shhwtstamps;
+ 
+ 		skb = alloc_can_err_skb(ndev, &cf);
+ 		if (!skb) {
+@@ -1394,10 +1390,7 @@ static int kvaser_pciefd_handle_status_resp(struct kvaser_pciefd_can *can,
+ 			cf->can_id |= CAN_ERR_RESTARTED;
+ 		}
+ 
+-		shhwtstamps = skb_hwtstamps(skb);
+-		shhwtstamps->hwtstamp =
+-			ns_to_ktime(div_u64(p->timestamp * 1000,
+-					    can->kv_pcie->freq_to_ticks_div));
++		kvaser_pciefd_set_skb_timestamp(can->kv_pcie, skb, p->timestamp);
+ 
+ 		cf->data[6] = bec.txerr;
+ 		cf->data[7] = bec.rxerr;
+@@ -1526,6 +1519,7 @@ static void kvaser_pciefd_handle_nack_packet(struct kvaser_pciefd_can *can,
+ 
+ 	if (skb) {
+ 		cf->can_id |= CAN_ERR_BUSERROR;
++		kvaser_pciefd_set_skb_timestamp(can->kv_pcie, skb, p->timestamp);
+ 		netif_rx(skb);
+ 	} else {
+ 		stats->rx_dropped++;
+@@ -1557,8 +1551,15 @@ static int kvaser_pciefd_handle_ack_packet(struct kvaser_pciefd *pcie,
+ 		netdev_dbg(can->can.dev, "Packet was flushed\n");
+ 	} else {
+ 		int echo_idx = p->header[0] & KVASER_PCIEFD_PACKET_SEQ_MSK;
+-		int dlc = can_get_echo_skb(can->can.dev, echo_idx, NULL);
+-		u8 count = ioread32(can->reg_base +
++		int dlc;
++		u8 count;
++		struct sk_buff *skb;
++
++		skb = can->can.echo_skb[echo_idx];
++		if (skb)
++			kvaser_pciefd_set_skb_timestamp(pcie, skb, p->timestamp);
++		dlc = can_get_echo_skb(can->can.dev, echo_idx, NULL);
++		count = ioread32(can->reg_base +
+ 				    KVASER_PCIEFD_KCAN_TX_NPACKETS_REG) & 0xff;
+ 
+ 		if (count < KVASER_PCIEFD_CAN_TX_MAX_COUNT &&
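The helper factored out above encodes one fixed conversion: nanoseconds = ticks * 1000 / freq_to_ticks_div. A minimal sketch of the arithmetic, assuming (as the driver appears to) that the divisor is the bus clock expressed in MHz, so an 80 MHz clock yields 12.5 ns per tick:

    /* hedged sketch; "div" is assumed to be clock_hz / 1000000 */
    static inline unsigned long long ticks_to_ns(unsigned long long ticks,
                                                 unsigned long long div)
    {
            return ticks * 1000ULL / div;   /* 80 MHz: 8 ticks -> 100 ns */
    }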
+diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
+index 80861ac090ae3..70c0e2b1936b3 100644
+--- a/drivers/net/dsa/ocelot/felix.c
++++ b/drivers/net/dsa/ocelot/felix.c
+@@ -1725,6 +1725,18 @@ static bool felix_rxtstamp(struct dsa_switch *ds, int port,
+ 	u32 tstamp_hi;
+ 	u64 tstamp;
+ 
++	switch (type & PTP_CLASS_PMASK) {
++	case PTP_CLASS_L2:
++		if (!(ocelot->ports[port]->trap_proto & OCELOT_PROTO_PTP_L2))
++			return false;
++		break;
++	case PTP_CLASS_IPV4:
++	case PTP_CLASS_IPV6:
++		if (!(ocelot->ports[port]->trap_proto & OCELOT_PROTO_PTP_L4))
++			return false;
++		break;
++	}
++
+ 	/* If the "no XTR IRQ" workaround is in use, tell DSA to defer this skb
+ 	 * for RX timestamping. Then free it, and poll for its copy through
+ 	 * MMIO in the CPU port module, and inject that into the stack from
+diff --git a/drivers/net/dsa/sja1105/sja1105.h b/drivers/net/dsa/sja1105/sja1105.h
+index fb1549a5fe321..dee35ba924ad2 100644
+--- a/drivers/net/dsa/sja1105/sja1105.h
++++ b/drivers/net/dsa/sja1105/sja1105.h
+@@ -252,6 +252,7 @@ struct sja1105_private {
+ 	unsigned long ucast_egress_floods;
+ 	unsigned long bcast_egress_floods;
+ 	unsigned long hwts_tx_en;
++	unsigned long hwts_rx_en;
+ 	const struct sja1105_info *info;
+ 	size_t max_xfer_len;
+ 	struct spi_device *spidev;
+@@ -289,7 +290,6 @@ struct sja1105_spi_message {
+ /* From sja1105_main.c */
+ enum sja1105_reset_reason {
+ 	SJA1105_VLAN_FILTERING = 0,
+-	SJA1105_RX_HWTSTAMPING,
+ 	SJA1105_AGEING_TIME,
+ 	SJA1105_SCHEDULING,
+ 	SJA1105_BEST_EFFORT_POLICING,
+diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
+index b70dcf32a26dc..947e8f7c09880 100644
+--- a/drivers/net/dsa/sja1105/sja1105_main.c
++++ b/drivers/net/dsa/sja1105/sja1105_main.c
+@@ -866,12 +866,12 @@ static int sja1105_init_general_params(struct sja1105_private *priv)
+ 		.hostprio = 7,
+ 		.mac_fltres1 = SJA1105_LINKLOCAL_FILTER_A,
+ 		.mac_flt1    = SJA1105_LINKLOCAL_FILTER_A_MASK,
+-		.incl_srcpt1 = false,
+-		.send_meta1  = false,
++		.incl_srcpt1 = true,
++		.send_meta1  = true,
+ 		.mac_fltres0 = SJA1105_LINKLOCAL_FILTER_B,
+ 		.mac_flt0    = SJA1105_LINKLOCAL_FILTER_B_MASK,
+-		.incl_srcpt0 = false,
+-		.send_meta0  = false,
++		.incl_srcpt0 = true,
++		.send_meta0  = true,
+ 		/* Default to an invalid value */
+ 		.mirr_port = priv->ds->num_ports,
+ 		/* No TTEthernet */
+@@ -2215,7 +2215,6 @@ static int sja1105_reload_cbs(struct sja1105_private *priv)
+ 
+ static const char * const sja1105_reset_reasons[] = {
+ 	[SJA1105_VLAN_FILTERING] = "VLAN filtering",
+-	[SJA1105_RX_HWTSTAMPING] = "RX timestamping",
+ 	[SJA1105_AGEING_TIME] = "Ageing time",
+ 	[SJA1105_SCHEDULING] = "Time-aware scheduling",
+ 	[SJA1105_BEST_EFFORT_POLICING] = "Best-effort policing",
+@@ -2407,11 +2406,6 @@ int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled,
+ 	general_params->tpid = tpid;
+ 	/* EtherType used to identify outer tagged (S-tag) VLAN traffic */
+ 	general_params->tpid2 = tpid2;
+-	/* When VLAN filtering is on, we need to at least be able to
+-	 * decode management traffic through the "backup plan".
+-	 */
+-	general_params->incl_srcpt1 = enabled;
+-	general_params->incl_srcpt0 = enabled;
+ 
+ 	for (port = 0; port < ds->num_ports; port++) {
+ 		if (dsa_is_unused_port(ds, port))
+diff --git a/drivers/net/dsa/sja1105/sja1105_ptp.c b/drivers/net/dsa/sja1105/sja1105_ptp.c
+index 30fb2cc40164b..a7d41e7813982 100644
+--- a/drivers/net/dsa/sja1105/sja1105_ptp.c
++++ b/drivers/net/dsa/sja1105/sja1105_ptp.c
+@@ -58,35 +58,10 @@ enum sja1105_ptp_clk_mode {
+ #define ptp_data_to_sja1105(d) \
+ 		container_of((d), struct sja1105_private, ptp_data)
+ 
+-/* Must be called only while the RX timestamping state of the tagger
+- * is turned off
+- */
+-static int sja1105_change_rxtstamping(struct sja1105_private *priv,
+-				      bool on)
+-{
+-	struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
+-	struct sja1105_general_params_entry *general_params;
+-	struct sja1105_table *table;
+-
+-	table = &priv->static_config.tables[BLK_IDX_GENERAL_PARAMS];
+-	general_params = table->entries;
+-	general_params->send_meta1 = on;
+-	general_params->send_meta0 = on;
+-
+-	ptp_cancel_worker_sync(ptp_data->clock);
+-	skb_queue_purge(&ptp_data->skb_txtstamp_queue);
+-	skb_queue_purge(&ptp_data->skb_rxtstamp_queue);
+-
+-	return sja1105_static_config_reload(priv, SJA1105_RX_HWTSTAMPING);
+-}
+-
+ int sja1105_hwtstamp_set(struct dsa_switch *ds, int port, struct ifreq *ifr)
+ {
+-	struct sja1105_tagger_data *tagger_data = sja1105_tagger_data(ds);
+ 	struct sja1105_private *priv = ds->priv;
+ 	struct hwtstamp_config config;
+-	bool rx_on;
+-	int rc;
+ 
+ 	if (copy_from_user(&config, ifr->ifr_data, sizeof(config)))
+ 		return -EFAULT;
+@@ -104,26 +79,13 @@ int sja1105_hwtstamp_set(struct dsa_switch *ds, int port, struct ifreq *ifr)
+ 
+ 	switch (config.rx_filter) {
+ 	case HWTSTAMP_FILTER_NONE:
+-		rx_on = false;
++		priv->hwts_rx_en &= ~BIT(port);
+ 		break;
+ 	default:
+-		rx_on = true;
++		priv->hwts_rx_en |= BIT(port);
+ 		break;
+ 	}
+ 
+-	if (rx_on != tagger_data->rxtstamp_get_state(ds)) {
+-		tagger_data->rxtstamp_set_state(ds, false);
+-
+-		rc = sja1105_change_rxtstamping(priv, rx_on);
+-		if (rc < 0) {
+-			dev_err(ds->dev,
+-				"Failed to change RX timestamping: %d\n", rc);
+-			return rc;
+-		}
+-		if (rx_on)
+-			tagger_data->rxtstamp_set_state(ds, true);
+-	}
+-
+ 	if (copy_to_user(ifr->ifr_data, &config, sizeof(config)))
+ 		return -EFAULT;
+ 	return 0;
+@@ -131,7 +93,6 @@ int sja1105_hwtstamp_set(struct dsa_switch *ds, int port, struct ifreq *ifr)
+ 
+ int sja1105_hwtstamp_get(struct dsa_switch *ds, int port, struct ifreq *ifr)
+ {
+-	struct sja1105_tagger_data *tagger_data = sja1105_tagger_data(ds);
+ 	struct sja1105_private *priv = ds->priv;
+ 	struct hwtstamp_config config;
+ 
+@@ -140,7 +101,7 @@ int sja1105_hwtstamp_get(struct dsa_switch *ds, int port, struct ifreq *ifr)
+ 		config.tx_type = HWTSTAMP_TX_ON;
+ 	else
+ 		config.tx_type = HWTSTAMP_TX_OFF;
+-	if (tagger_data->rxtstamp_get_state(ds))
++	if (priv->hwts_rx_en & BIT(port))
+ 		config.rx_filter = HWTSTAMP_FILTER_PTP_V2_L2_EVENT;
+ 	else
+ 		config.rx_filter = HWTSTAMP_FILTER_NONE;
+@@ -413,11 +374,10 @@ static long sja1105_rxtstamp_work(struct ptp_clock_info *ptp)
+ 
+ bool sja1105_rxtstamp(struct dsa_switch *ds, int port, struct sk_buff *skb)
+ {
+-	struct sja1105_tagger_data *tagger_data = sja1105_tagger_data(ds);
+ 	struct sja1105_private *priv = ds->priv;
+ 	struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
+ 
+-	if (!tagger_data->rxtstamp_get_state(ds))
++	if (!(priv->hwts_rx_en & BIT(port)))
+ 		return false;
+ 
+ 	/* We need to read the full PTP clock to reconstruct the Rx
+diff --git a/drivers/net/dsa/vitesse-vsc73xx-core.c b/drivers/net/dsa/vitesse-vsc73xx-core.c
+index ae55167ce0a6f..ef1a4a7c47b23 100644
+--- a/drivers/net/dsa/vitesse-vsc73xx-core.c
++++ b/drivers/net/dsa/vitesse-vsc73xx-core.c
+@@ -1025,17 +1025,17 @@ static int vsc73xx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
+ 	struct vsc73xx *vsc = ds->priv;
+ 
+ 	return vsc73xx_write(vsc, VSC73XX_BLOCK_MAC, port,
+-			     VSC73XX_MAXLEN, new_mtu);
++			     VSC73XX_MAXLEN, new_mtu + ETH_HLEN + ETH_FCS_LEN);
+ }
+ 
+ /* According to application note "VSC7398 Jumbo Frames" setting
+- * up the MTU to 9.6 KB does not affect the performance on standard
++ * up the frame size to 9.6 KB does not affect the performance on standard
+  * frames. It is clear from the application note that
+  * "9.6 kilobytes" == 9600 bytes.
+  */
+ static int vsc73xx_get_max_mtu(struct dsa_switch *ds, int port)
+ {
+-	return 9600;
++	return 9600 - ETH_HLEN - ETH_FCS_LEN;
+ }
+ 
+ static const struct dsa_switch_ops vsc73xx_ds_ops = {
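The pair of changes above is bookkeeping for Layer 2 overhead: the VSC73XX_MAXLEN register counts the whole frame, while the MTU excludes the Ethernet header and FCS. A quick arithmetic check under the standard values ETH_HLEN == 14 and ETH_FCS_LEN == 4 (VLAN tags left aside):

    #include <stdio.h>

    #define ETH_HLEN    14
    #define ETH_FCS_LEN 4

    int main(void)
    {
            int max_frame = 9600;                             /* app note limit */
            int max_mtu = max_frame - ETH_HLEN - ETH_FCS_LEN; /* 9582 */

            /* programming max_mtu writes max_frame back to MAXLEN */
            printf("max MTU %d -> MAXLEN %d\n",
                   max_mtu, max_mtu + ETH_HLEN + ETH_FCS_LEN);
            return 0;
    }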
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index 58747292521d8..a52cf9aae4988 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -224,6 +224,7 @@ MODULE_AUTHOR("David S. Miller (davem@redhat.com) and Jeff Garzik (jgarzik@pobox
+ MODULE_DESCRIPTION("Broadcom Tigon3 ethernet driver");
+ MODULE_LICENSE("GPL");
+ MODULE_FIRMWARE(FIRMWARE_TG3);
++MODULE_FIRMWARE(FIRMWARE_TG357766);
+ MODULE_FIRMWARE(FIRMWARE_TG3TSO);
+ MODULE_FIRMWARE(FIRMWARE_TG3TSO5);
+ 
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index c63d3ec9d3284..763d613adbcc0 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1816,7 +1816,14 @@ static int __ibmvnic_open(struct net_device *netdev)
+ 		if (prev_state == VNIC_CLOSED)
+ 			enable_irq(adapter->tx_scrq[i]->irq);
+ 		enable_scrq_irq(adapter, adapter->tx_scrq[i]);
+-		netdev_tx_reset_queue(netdev_get_tx_queue(netdev, i));
++		/* netdev_tx_reset_queue will reset dql stats. During NON_FATAL
++		 * resets, don't reset the stats because there could be batched
++		 * skb's waiting to be sent. If we reset dql stats, we risk
++		 * num_completed being greater than num_queued. This will cause
++		 * a BUG_ON in dql_completed().
++		 */
++		if (adapter->reset_reason != VNIC_RESET_NON_FATAL)
++			netdev_tx_reset_queue(netdev_get_tx_queue(netdev, i));
+ 	}
+ 
+ 	rc = set_link_state(adapter, IBMVNIC_LOGICAL_LNK_UP);
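The comment above rests on a DQL invariant: completions must never outrun enqueues. A minimal sketch of why zeroing the counters while packets are still in flight trips that check, with assert() standing in for the kernel's BUG_ON:

    #include <assert.h>

    struct dql_like {
            unsigned int num_queued;
            unsigned int num_completed;
    };

    static void complete_pkts(struct dql_like *d, unsigned int n)
    {
            d->num_completed += n;
            /* if a reset zeroed num_queued with n packets in flight,
             * this fires - the analogue of the BUG_ON in dql_completed() */
            assert(d->num_completed <= d->num_queued);
    }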
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index aa32111afd6ed..50ccde7942f2d 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -514,6 +514,12 @@ enum ice_pf_flags {
+ 	ICE_PF_FLAGS_NBITS		/* must be last */
+ };
+ 
++enum ice_misc_thread_tasks {
++	ICE_MISC_THREAD_EXTTS_EVENT,
++	ICE_MISC_THREAD_TX_TSTAMP,
++	ICE_MISC_THREAD_NBITS		/* must be last */
++};
++
+ struct ice_switchdev_info {
+ 	struct ice_vsi *control_vsi;
+ 	struct ice_vsi *uplink_vsi;
+@@ -556,6 +562,7 @@ struct ice_pf {
+ 	DECLARE_BITMAP(features, ICE_F_MAX);
+ 	DECLARE_BITMAP(state, ICE_STATE_NBITS);
+ 	DECLARE_BITMAP(flags, ICE_PF_FLAGS_NBITS);
++	DECLARE_BITMAP(misc_thread, ICE_MISC_THREAD_NBITS);
+ 	unsigned long *avail_txqs;	/* bitmap to track PF Tx queue usage */
+ 	unsigned long *avail_rxqs;	/* bitmap to track PF Rx queue usage */
+ 	unsigned long serv_tmr_period;
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 42c318ceff618..fcc027c938fda 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -3141,20 +3141,28 @@ static irqreturn_t ice_misc_intr(int __always_unused irq, void *data)
+ 
+ 	if (oicr & PFINT_OICR_TSYN_TX_M) {
+ 		ena_mask &= ~PFINT_OICR_TSYN_TX_M;
+-		if (!hw->reset_ongoing)
++		if (!hw->reset_ongoing) {
++			set_bit(ICE_MISC_THREAD_TX_TSTAMP, pf->misc_thread);
+ 			ret = IRQ_WAKE_THREAD;
++		}
+ 	}
+ 
+ 	if (oicr & PFINT_OICR_TSYN_EVNT_M) {
+ 		u8 tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned;
+ 		u32 gltsyn_stat = rd32(hw, GLTSYN_STAT(tmr_idx));
+ 
+-		/* Save EVENTs from GTSYN register */
+-		pf->ptp.ext_ts_irq |= gltsyn_stat & (GLTSYN_STAT_EVENT0_M |
+-						     GLTSYN_STAT_EVENT1_M |
+-						     GLTSYN_STAT_EVENT2_M);
+ 		ena_mask &= ~PFINT_OICR_TSYN_EVNT_M;
+-		kthread_queue_work(pf->ptp.kworker, &pf->ptp.extts_work);
++
++		if (hw->func_caps.ts_func_info.src_tmr_owned) {
++			/* Save EVENTs from GLTSYN register */
++			pf->ptp.ext_ts_irq |= gltsyn_stat &
++					      (GLTSYN_STAT_EVENT0_M |
++					       GLTSYN_STAT_EVENT1_M |
++					       GLTSYN_STAT_EVENT2_M);
++
++			set_bit(ICE_MISC_THREAD_EXTTS_EVENT, pf->misc_thread);
++			ret = IRQ_WAKE_THREAD;
++		}
+ 	}
+ 
+ #define ICE_AUX_CRIT_ERR (PFINT_OICR_PE_CRITERR_M | PFINT_OICR_HMC_ERR_M | PFINT_OICR_PE_PUSH_M)
+@@ -3198,8 +3206,13 @@ static irqreturn_t ice_misc_intr_thread_fn(int __always_unused irq, void *data)
+ 	if (ice_is_reset_in_progress(pf->state))
+ 		return IRQ_HANDLED;
+ 
+-	while (!ice_ptp_process_ts(pf))
+-		usleep_range(50, 100);
++	if (test_and_clear_bit(ICE_MISC_THREAD_EXTTS_EVENT, pf->misc_thread))
++		ice_ptp_extts_event(pf);
++
++	if (test_and_clear_bit(ICE_MISC_THREAD_TX_TSTAMP, pf->misc_thread)) {
++		while (!ice_ptp_process_ts(pf))
++			usleep_range(50, 100);
++	}
+ 
+ 	return IRQ_HANDLED;
+ }
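The scheme above keeps the hard IRQ handler cheap: it only records which tasks are pending in a bitmap and wakes the IRQ thread, which tests and clears each bit. A minimal userspace sketch of the same handoff, using C11 atomics in place of set_bit()/test_and_clear_bit():

    #include <stdatomic.h>

    enum { TASK_EXTTS, TASK_TX_TSTAMP };

    static atomic_uint pending;

    static void hard_irq_path(void)
    {
            atomic_fetch_or(&pending, 1u << TASK_TX_TSTAMP);
    }

    static void thread_fn(void)
    {
            /* grab-and-clear all bits at once; the kernel clears per bit */
            unsigned int bits = atomic_exchange(&pending, 0);

            if (bits & (1u << TASK_EXTTS)) {
                    /* process saved external timestamp events */
            }
            if (bits & (1u << TASK_TX_TSTAMP)) {
                    /* poll Tx timestamps until done */
            }
    }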
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index ac6f06f9a2ed0..e8507d09b8488 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -1458,15 +1458,11 @@ static int ice_ptp_adjfine(struct ptp_clock_info *info, long scaled_ppm)
+ }
+ 
+ /**
+- * ice_ptp_extts_work - Workqueue task function
+- * @work: external timestamp work structure
+- *
+- * Service for PTP external clock event
++ * ice_ptp_extts_event - Process PTP external clock event
++ * @pf: Board private structure
+  */
+-static void ice_ptp_extts_work(struct kthread_work *work)
++void ice_ptp_extts_event(struct ice_pf *pf)
+ {
+-	struct ice_ptp *ptp = container_of(work, struct ice_ptp, extts_work);
+-	struct ice_pf *pf = container_of(ptp, struct ice_pf, ptp);
+ 	struct ptp_clock_event event;
+ 	struct ice_hw *hw = &pf->hw;
+ 	u8 chan, tmr_idx;
+@@ -2558,7 +2554,6 @@ void ice_ptp_prepare_for_reset(struct ice_pf *pf)
+ 	ice_ptp_cfg_timestamp(pf, false);
+ 
+ 	kthread_cancel_delayed_work_sync(&ptp->work);
+-	kthread_cancel_work_sync(&ptp->extts_work);
+ 
+ 	if (test_bit(ICE_PFR_REQ, pf->state))
+ 		return;
+@@ -2656,7 +2651,6 @@ static int ice_ptp_init_work(struct ice_pf *pf, struct ice_ptp *ptp)
+ 
+ 	/* Initialize work functions */
+ 	kthread_init_delayed_work(&ptp->work, ice_ptp_periodic_work);
+-	kthread_init_work(&ptp->extts_work, ice_ptp_extts_work);
+ 
+ 	/* Allocate a kworker for handling work required for the ports
+ 	 * connected to the PTP hardware clock.
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.h b/drivers/net/ethernet/intel/ice/ice_ptp.h
+index 9cda2f43e0e56..9f8902c1e743d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.h
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.h
+@@ -169,7 +169,6 @@ struct ice_ptp_port {
+  * struct ice_ptp - data used for integrating with CONFIG_PTP_1588_CLOCK
+  * @port: data for the PHY port initialization procedure
+  * @work: delayed work function for periodic tasks
+- * @extts_work: work function for handling external Tx timestamps
+  * @cached_phc_time: a cached copy of the PHC time for timestamp extension
+  * @cached_phc_jiffies: jiffies when cached_phc_time was last updated
+  * @ext_ts_chan: the external timestamp channel in use
+@@ -190,7 +189,6 @@ struct ice_ptp_port {
+ struct ice_ptp {
+ 	struct ice_ptp_port port;
+ 	struct kthread_delayed_work work;
+-	struct kthread_work extts_work;
+ 	u64 cached_phc_time;
+ 	unsigned long cached_phc_jiffies;
+ 	u8 ext_ts_chan;
+@@ -256,6 +254,7 @@ int ice_ptp_get_ts_config(struct ice_pf *pf, struct ifreq *ifr);
+ void ice_ptp_cfg_timestamp(struct ice_pf *pf, bool ena);
+ int ice_get_ptp_clock_index(struct ice_pf *pf);
+ 
++void ice_ptp_extts_event(struct ice_pf *pf);
+ s8 ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb);
+ bool ice_ptp_process_ts(struct ice_pf *pf);
+ 
+@@ -284,6 +283,7 @@ static inline int ice_get_ptp_clock_index(struct ice_pf *pf)
+ 	return -1;
+ }
+ 
++static inline void ice_ptp_extts_event(struct ice_pf *pf) { }
+ static inline s8
+ ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb)
+ {
+diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
+index 34aebf00a5123..9dc9b982a7ea6 100644
+--- a/drivers/net/ethernet/intel/igc/igc.h
++++ b/drivers/net/ethernet/intel/igc/igc.h
+@@ -13,6 +13,7 @@
+ #include <linux/ptp_clock_kernel.h>
+ #include <linux/timecounter.h>
+ #include <linux/net_tstamp.h>
++#include <linux/bitfield.h>
+ 
+ #include "igc_hw.h"
+ 
+@@ -228,7 +229,10 @@ struct igc_adapter {
+ 
+ 	struct ptp_clock *ptp_clock;
+ 	struct ptp_clock_info ptp_caps;
+-	struct work_struct ptp_tx_work;
++	/* Access to ptp_tx_skb and ptp_tx_start are protected by the
++	 * ptp_tx_lock.
++	 */
++	spinlock_t ptp_tx_lock;
+ 	struct sk_buff *ptp_tx_skb;
+ 	struct hwtstamp_config tstamp_config;
+ 	unsigned long ptp_tx_start;
+@@ -311,6 +315,33 @@ extern char igc_driver_name[];
+ #define IGC_MRQC_RSS_FIELD_IPV4_UDP	0x00400000
+ #define IGC_MRQC_RSS_FIELD_IPV6_UDP	0x00800000
+ 
++/* RX-desc Write-Back format RSS Type's */
++enum igc_rss_type_num {
++	IGC_RSS_TYPE_NO_HASH		= 0,
++	IGC_RSS_TYPE_HASH_TCP_IPV4	= 1,
++	IGC_RSS_TYPE_HASH_IPV4		= 2,
++	IGC_RSS_TYPE_HASH_TCP_IPV6	= 3,
++	IGC_RSS_TYPE_HASH_IPV6_EX	= 4,
++	IGC_RSS_TYPE_HASH_IPV6		= 5,
++	IGC_RSS_TYPE_HASH_TCP_IPV6_EX	= 6,
++	IGC_RSS_TYPE_HASH_UDP_IPV4	= 7,
++	IGC_RSS_TYPE_HASH_UDP_IPV6	= 8,
++	IGC_RSS_TYPE_HASH_UDP_IPV6_EX	= 9,
++	IGC_RSS_TYPE_MAX		= 10,
++};
++#define IGC_RSS_TYPE_MAX_TABLE		16
++#define IGC_RSS_TYPE_MASK		GENMASK(3, 0) /* 4 bits (3:0) = mask 0x0F */
++
++/* igc_rss_type - Rx descriptor RSS type field */
++static inline u32 igc_rss_type(const union igc_adv_rx_desc *rx_desc)
++{
++	/* RSS type is a 4-bit field (3:0) with values 0-9; above 9 is reserved.
++	 * Accessing the same bits via u16 (wb.lower.lo_dword.hs_rss.pkt_info)
++	 * is slightly slower than via u32 (wb.lower.lo_dword.data)
++	 */
++	return le32_get_bits(rx_desc->wb.lower.lo_dword.data, IGC_RSS_TYPE_MASK);
++}
++
+ /* Interrupt defines */
+ #define IGC_START_ITR			648 /* ~6000 ints/sec */
+ #define IGC_4K_ITR			980
+@@ -401,7 +432,6 @@ enum igc_state_t {
+ 	__IGC_TESTING,
+ 	__IGC_RESETTING,
+ 	__IGC_DOWN,
+-	__IGC_PTP_TX_IN_PROGRESS,
+ };
+ 
+ enum igc_tx_flags {
+@@ -578,6 +608,7 @@ enum igc_ring_flags_t {
+ 	IGC_RING_FLAG_TX_CTX_IDX,
+ 	IGC_RING_FLAG_TX_DETECT_HANG,
+ 	IGC_RING_FLAG_AF_XDP_ZC,
++	IGC_RING_FLAG_TX_HWTSTAMP,
+ };
+ 
+ #define ring_uses_large_buffer(ring) \
+@@ -634,6 +665,7 @@ int igc_ptp_set_ts_config(struct net_device *netdev, struct ifreq *ifr);
+ int igc_ptp_get_ts_config(struct net_device *netdev, struct ifreq *ifr);
+ void igc_ptp_tx_hang(struct igc_adapter *adapter);
+ void igc_ptp_read(struct igc_adapter *adapter, struct timespec64 *ts);
++void igc_ptp_tx_tstamp_event(struct igc_adapter *adapter);
+ 
+ #define igc_rx_pg_size(_ring) (PAGE_SIZE << igc_rx_pg_order(_ring))
+ 
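Because igc_rss_type() masks the descriptor word down to four bits, the result is always in 0..15, which is why the 16-entry table added in igc_main.c needs no bounds check. A minimal sketch, with placeholder values standing in for the PKT_HASH_TYPE_* constants:

    #include <stdint.h>

    /* placeholder values, not the real PKT_HASH_TYPE_* constants */
    static const uint8_t hash_type_for_rss[16] = {
            [0] = 2 /* L2 */, [1] = 4 /* L4 */, [2] = 3 /* L3 */,
            /* entries 10..15 stay 0: reserved encodings hash as NONE */
    };

    static inline uint8_t lookup_hash_type(uint32_t desc_lo_dword)
    {
            return hash_type_for_rss[desc_lo_dword & 0xF]; /* 0..15 only */
    }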
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index fa764190f2705..5f2e8bcd75973 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -1585,14 +1585,16 @@ done:
+ 		}
+ 	}
+ 
+-	if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) {
++	if (unlikely(test_bit(IGC_RING_FLAG_TX_HWTSTAMP, &tx_ring->flags) &&
++		     skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) {
+ 		/* FIXME: add support for retrieving timestamps from
+ 		 * the other timer registers before skipping the
+ 		 * timestamping request.
+ 		 */
+-		if (adapter->tstamp_config.tx_type == HWTSTAMP_TX_ON &&
+-		    !test_and_set_bit_lock(__IGC_PTP_TX_IN_PROGRESS,
+-					   &adapter->state)) {
++		unsigned long flags;
++
++		spin_lock_irqsave(&adapter->ptp_tx_lock, flags);
++		if (!adapter->ptp_tx_skb) {
+ 			skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ 			tx_flags |= IGC_TX_FLAGS_TSTAMP;
+ 
+@@ -1601,6 +1603,8 @@ done:
+ 		} else {
+ 			adapter->tx_hwtstamp_skipped++;
+ 		}
++
++		spin_unlock_irqrestore(&adapter->ptp_tx_lock, flags);
+ 	}
+ 
+ 	if (skb_vlan_tag_present(skb)) {
+@@ -1697,14 +1701,36 @@ static void igc_rx_checksum(struct igc_ring *ring,
+ 		   le32_to_cpu(rx_desc->wb.upper.status_error));
+ }
+ 
++/* Mapping HW RSS Type to enum pkt_hash_types */
++static const enum pkt_hash_types igc_rss_type_table[IGC_RSS_TYPE_MAX_TABLE] = {
++	[IGC_RSS_TYPE_NO_HASH]		= PKT_HASH_TYPE_L2,
++	[IGC_RSS_TYPE_HASH_TCP_IPV4]	= PKT_HASH_TYPE_L4,
++	[IGC_RSS_TYPE_HASH_IPV4]	= PKT_HASH_TYPE_L3,
++	[IGC_RSS_TYPE_HASH_TCP_IPV6]	= PKT_HASH_TYPE_L4,
++	[IGC_RSS_TYPE_HASH_IPV6_EX]	= PKT_HASH_TYPE_L3,
++	[IGC_RSS_TYPE_HASH_IPV6]	= PKT_HASH_TYPE_L3,
++	[IGC_RSS_TYPE_HASH_TCP_IPV6_EX] = PKT_HASH_TYPE_L4,
++	[IGC_RSS_TYPE_HASH_UDP_IPV4]	= PKT_HASH_TYPE_L4,
++	[IGC_RSS_TYPE_HASH_UDP_IPV6]	= PKT_HASH_TYPE_L4,
++	[IGC_RSS_TYPE_HASH_UDP_IPV6_EX] = PKT_HASH_TYPE_L4,
++	[10] = PKT_HASH_TYPE_NONE, /* RSS Type above 9 "Reserved" by HW  */
++	[11] = PKT_HASH_TYPE_NONE, /* keep array sized for SW bit-mask   */
++	[12] = PKT_HASH_TYPE_NONE, /* to handle future HW revisions      */
++	[13] = PKT_HASH_TYPE_NONE,
++	[14] = PKT_HASH_TYPE_NONE,
++	[15] = PKT_HASH_TYPE_NONE,
++};
++
+ static inline void igc_rx_hash(struct igc_ring *ring,
+ 			       union igc_adv_rx_desc *rx_desc,
+ 			       struct sk_buff *skb)
+ {
+-	if (ring->netdev->features & NETIF_F_RXHASH)
+-		skb_set_hash(skb,
+-			     le32_to_cpu(rx_desc->wb.lower.hi_dword.rss),
+-			     PKT_HASH_TYPE_L3);
++	if (ring->netdev->features & NETIF_F_RXHASH) {
++		u32 rss_hash = le32_to_cpu(rx_desc->wb.lower.hi_dword.rss);
++		u32 rss_type = igc_rss_type(rx_desc);
++
++		skb_set_hash(skb, rss_hash, igc_rss_type_table[rss_type]);
++	}
+ }
+ 
+ static void igc_rx_vlan(struct igc_ring *rx_ring,
+@@ -5219,7 +5245,7 @@ static void igc_tsync_interrupt(struct igc_adapter *adapter)
+ 
+ 	if (tsicr & IGC_TSICR_TXTS) {
+ 		/* retrieve hardware timestamp */
+-		schedule_work(&adapter->ptp_tx_work);
++		igc_ptp_tx_tstamp_event(adapter);
+ 		ack |= IGC_TSICR_TXTS;
+ 	}
+ 
+@@ -6561,6 +6587,7 @@ static int igc_probe(struct pci_dev *pdev,
+ 	netdev->features |= NETIF_F_TSO;
+ 	netdev->features |= NETIF_F_TSO6;
+ 	netdev->features |= NETIF_F_TSO_ECN;
++	netdev->features |= NETIF_F_RXHASH;
+ 	netdev->features |= NETIF_F_RXCSUM;
+ 	netdev->features |= NETIF_F_HW_CSUM;
+ 	netdev->features |= NETIF_F_SCTP_CRC;
+diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c
+index 4e10ced736dbb..32ef112f8291a 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ptp.c
++++ b/drivers/net/ethernet/intel/igc/igc_ptp.c
+@@ -536,9 +536,34 @@ static void igc_ptp_enable_rx_timestamp(struct igc_adapter *adapter)
+ 	wr32(IGC_TSYNCRXCTL, val);
+ }
+ 
++static void igc_ptp_clear_tx_tstamp(struct igc_adapter *adapter)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&adapter->ptp_tx_lock, flags);
++
++	dev_kfree_skb_any(adapter->ptp_tx_skb);
++	adapter->ptp_tx_skb = NULL;
++
++	spin_unlock_irqrestore(&adapter->ptp_tx_lock, flags);
++}
++
+ static void igc_ptp_disable_tx_timestamp(struct igc_adapter *adapter)
+ {
+ 	struct igc_hw *hw = &adapter->hw;
++	int i;
++
++	/* Clear the flags first to avoid new packets to be enqueued
++	 * for TX timestamping.
++	 */
++	for (i = 0; i < adapter->num_tx_queues; i++) {
++		struct igc_ring *tx_ring = adapter->tx_ring[i];
++
++		clear_bit(IGC_RING_FLAG_TX_HWTSTAMP, &tx_ring->flags);
++	}
++
++	/* Now we can clean the pending TX timestamp requests. */
++	igc_ptp_clear_tx_tstamp(adapter);
+ 
+ 	wr32(IGC_TSYNCTXCTL, 0);
+ }
+@@ -546,12 +571,23 @@ static void igc_ptp_disable_tx_timestamp(struct igc_adapter *adapter)
+ static void igc_ptp_enable_tx_timestamp(struct igc_adapter *adapter)
+ {
+ 	struct igc_hw *hw = &adapter->hw;
++	int i;
+ 
+ 	wr32(IGC_TSYNCTXCTL, IGC_TSYNCTXCTL_ENABLED | IGC_TSYNCTXCTL_TXSYNSIG);
+ 
+ 	/* Read TXSTMP registers to discard any timestamp previously stored. */
+ 	rd32(IGC_TXSTMPL);
+ 	rd32(IGC_TXSTMPH);
++
++	/* The hardware is ready to accept TX timestamp requests;
++	 * notify the transmit path.
++	 */
++	for (i = 0; i < adapter->num_tx_queues; i++) {
++		struct igc_ring *tx_ring = adapter->tx_ring[i];
++
++		set_bit(IGC_RING_FLAG_TX_HWTSTAMP, &tx_ring->flags);
++	}
++
+ }
+ 
+ /**
+@@ -603,6 +639,7 @@ static int igc_ptp_set_timestamp_mode(struct igc_adapter *adapter,
+ 	return 0;
+ }
+ 
++/* Requires adapter->ptp_tx_lock held by caller. */
+ static void igc_ptp_tx_timeout(struct igc_adapter *adapter)
+ {
+ 	struct igc_hw *hw = &adapter->hw;
+@@ -610,7 +647,6 @@ static void igc_ptp_tx_timeout(struct igc_adapter *adapter)
+ 	dev_kfree_skb_any(adapter->ptp_tx_skb);
+ 	adapter->ptp_tx_skb = NULL;
+ 	adapter->tx_hwtstamp_timeouts++;
+-	clear_bit_unlock(__IGC_PTP_TX_IN_PROGRESS, &adapter->state);
+ 	/* Clear the tx valid bit in TSYNCTXCTL register to enable interrupt. */
+ 	rd32(IGC_TXSTMPH);
+ 	netdev_warn(adapter->netdev, "Tx timestamp timeout\n");
+@@ -618,20 +654,20 @@ static void igc_ptp_tx_timeout(struct igc_adapter *adapter)
+ 
+ void igc_ptp_tx_hang(struct igc_adapter *adapter)
+ {
+-	bool timeout = time_is_before_jiffies(adapter->ptp_tx_start +
+-					      IGC_PTP_TX_TIMEOUT);
++	unsigned long flags;
+ 
+-	if (!test_bit(__IGC_PTP_TX_IN_PROGRESS, &adapter->state))
+-		return;
++	spin_lock_irqsave(&adapter->ptp_tx_lock, flags);
+ 
+-	/* If we haven't received a timestamp within the timeout, it is
+-	 * reasonable to assume that it will never occur, so we can unlock the
+-	 * timestamp bit when this occurs.
+-	 */
+-	if (timeout) {
+-		cancel_work_sync(&adapter->ptp_tx_work);
+-		igc_ptp_tx_timeout(adapter);
+-	}
++	if (!adapter->ptp_tx_skb)
++		goto unlock;
++
++	if (time_is_after_jiffies(adapter->ptp_tx_start + IGC_PTP_TX_TIMEOUT))
++		goto unlock;
++
++	igc_ptp_tx_timeout(adapter);
++
++unlock:
++	spin_unlock_irqrestore(&adapter->ptp_tx_lock, flags);
+ }
+ 
+ /**
+@@ -641,20 +677,57 @@ void igc_ptp_tx_hang(struct igc_adapter *adapter)
+  * If we were asked to do hardware stamping and such a time stamp is
+  * available, then it must have been for this skb here because we only
+  * allow only one such packet into the queue.
++ *
++ * Context: Expects adapter->ptp_tx_lock to be held by caller.
+  */
+ static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter)
+ {
+ 	struct sk_buff *skb = adapter->ptp_tx_skb;
+ 	struct skb_shared_hwtstamps shhwtstamps;
+ 	struct igc_hw *hw = &adapter->hw;
++	u32 tsynctxctl;
+ 	int adjust = 0;
+ 	u64 regval;
+ 
+ 	if (WARN_ON_ONCE(!skb))
+ 		return;
+ 
+-	regval = rd32(IGC_TXSTMPL);
+-	regval |= (u64)rd32(IGC_TXSTMPH) << 32;
++	tsynctxctl = rd32(IGC_TSYNCTXCTL);
++	tsynctxctl &= IGC_TSYNCTXCTL_TXTT_0;
++	if (tsynctxctl) {
++		regval = rd32(IGC_TXSTMPL);
++		regval |= (u64)rd32(IGC_TXSTMPH) << 32;
++	} else {
++		/* There's a bug in the hardware that could cause
++		 * missing interrupts for TX timestamping. The issue
++		 * is that for new interrupts to be triggered, the
++		 * IGC_TXSTMPH_0 register must be read.
++		 *
++		 * To avoid discarding a valid timestamp that just
++		 * happened at the "wrong" time, we need to confirm
++		 * that there was no timestamp captured; we do that by
++		 * assuming that no two timestamps in sequence have
++		 * the same nanosecond value.
++		 *
++		 * So, we read the "low" register, read the "high"
++		 * register (to latch a new timestamp) and read the
++		 * "low" register again, if "old" and "new" versions
++		 * of the "low" register are different, a valid
++		 * timestamp was captured, we can read the "high"
++		 * register again.
++		 */
++		u32 txstmpl_old, txstmpl_new;
++
++		txstmpl_old = rd32(IGC_TXSTMPL);
++		rd32(IGC_TXSTMPH);
++		txstmpl_new = rd32(IGC_TXSTMPL);
++
++		if (txstmpl_old == txstmpl_new)
++			return;
++
++		regval = txstmpl_new;
++		regval |= (u64)rd32(IGC_TXSTMPH) << 32;
++	}
+ 	if (igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval))
+ 		return;
+ 
+@@ -676,13 +749,7 @@ static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter)
+ 	shhwtstamps.hwtstamp =
+ 		ktime_add_ns(shhwtstamps.hwtstamp, adjust);
+ 
+-	/* Clear the lock early before calling skb_tstamp_tx so that
+-	 * applications are not woken up before the lock bit is clear. We use
+-	 * a copy of the skb pointer to ensure other threads can't change it
+-	 * while we're notifying the stack.
+-	 */
+ 	adapter->ptp_tx_skb = NULL;
+-	clear_bit_unlock(__IGC_PTP_TX_IN_PROGRESS, &adapter->state);
+ 
+ 	/* Notify the stack and free the skb after we've unlocked */
+ 	skb_tstamp_tx(skb, &shhwtstamps);
+@@ -690,27 +757,25 @@ static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter)
+ }
+ 
+ /**
+- * igc_ptp_tx_work
+- * @work: pointer to work struct
++ * igc_ptp_tx_tstamp_event
++ * @adapter: board private structure
+  *
+- * This work function polls the TSYNCTXCTL valid bit to determine when a
+- * timestamp has been taken for the current stored skb.
++ * Called when a TX timestamp interrupt happens to retrieve the
++ * timestamp and send it up to the socket.
+  */
+-static void igc_ptp_tx_work(struct work_struct *work)
++void igc_ptp_tx_tstamp_event(struct igc_adapter *adapter)
+ {
+-	struct igc_adapter *adapter = container_of(work, struct igc_adapter,
+-						   ptp_tx_work);
+-	struct igc_hw *hw = &adapter->hw;
+-	u32 tsynctxctl;
++	unsigned long flags;
+ 
+-	if (!test_bit(__IGC_PTP_TX_IN_PROGRESS, &adapter->state))
+-		return;
++	spin_lock_irqsave(&adapter->ptp_tx_lock, flags);
+ 
+-	tsynctxctl = rd32(IGC_TSYNCTXCTL);
+-	if (WARN_ON_ONCE(!(tsynctxctl & IGC_TSYNCTXCTL_TXTT_0)))
+-		return;
++	if (!adapter->ptp_tx_skb)
++		goto unlock;
+ 
+ 	igc_ptp_tx_hwtstamp(adapter);
++
++unlock:
++	spin_unlock_irqrestore(&adapter->ptp_tx_lock, flags);
+ }
+ 
+ /**
+@@ -959,8 +1024,8 @@ void igc_ptp_init(struct igc_adapter *adapter)
+ 		return;
+ 	}
+ 
++	spin_lock_init(&adapter->ptp_tx_lock);
+ 	spin_lock_init(&adapter->tmreg_lock);
+-	INIT_WORK(&adapter->ptp_tx_work, igc_ptp_tx_work);
+ 
+ 	adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
+ 	adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF;
+@@ -1020,10 +1085,7 @@ void igc_ptp_suspend(struct igc_adapter *adapter)
+ 	if (!(adapter->ptp_flags & IGC_PTP_ENABLED))
+ 		return;
+ 
+-	cancel_work_sync(&adapter->ptp_tx_work);
+-	dev_kfree_skb_any(adapter->ptp_tx_skb);
+-	adapter->ptp_tx_skb = NULL;
+-	clear_bit_unlock(__IGC_PTP_TX_IN_PROGRESS, &adapter->state);
++	igc_ptp_clear_tx_tstamp(adapter);
+ 
+ 	if (pci_device_is_present(adapter->pdev)) {
+ 		igc_ptp_time_save(adapter);
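The double-read dance described in the igc_ptp_tx_hwtstamp() comment condenses to the sketch below; reg_read() and the register indices are hypothetical stand-ins for rd32() and the IGC_TXSTMPL/IGC_TXSTMPH registers:

    #include <stdbool.h>
    #include <stdint.h>

    extern uint32_t reg_read(int reg);  /* assumed MMIO accessor */
    enum { TXSTMPL, TXSTMPH };

    static bool read_tx_tstamp(uint64_t *ts)
    {
            uint32_t lo_old, lo_new;

            lo_old = reg_read(TXSTMPL);
            reg_read(TXSTMPH);          /* the high read latches a new sample */
            lo_new = reg_read(TXSTMPL);

            if (lo_old == lo_new)       /* no new timestamp was captured */
                    return false;

            *ts = ((uint64_t)reg_read(TXSTMPH) << 32) | lo_new;
            return true;
    }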
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index bd77152bb8d7c..592037f4e55b6 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -169,6 +169,9 @@ void cgx_lmac_write(int cgx_id, int lmac_id, u64 offset, u64 val)
+ {
+ 	struct cgx *cgx_dev = cgx_get_pdata(cgx_id);
+ 
++	/* Software must not access disabled LMAC registers */
++	if (!is_lmac_valid(cgx_dev, lmac_id))
++		return;
+ 	cgx_write(cgx_dev, lmac_id, offset, val);
+ }
+ 
+@@ -176,6 +179,10 @@ u64 cgx_lmac_read(int cgx_id, int lmac_id, u64 offset)
+ {
+ 	struct cgx *cgx_dev = cgx_get_pdata(cgx_id);
+ 
++	/* Software must not access disabled LMAC registers */
++	if (!is_lmac_valid(cgx_dev, lmac_id))
++		return 0;
++
+ 	return cgx_read(cgx_dev, lmac_id, offset);
+ }
+ 
+@@ -530,14 +537,15 @@ static u32 cgx_get_lmac_fifo_len(void *cgxd, int lmac_id)
+ int cgx_lmac_internal_loopback(void *cgxd, int lmac_id, bool enable)
+ {
+ 	struct cgx *cgx = cgxd;
+-	u8 lmac_type;
++	struct lmac *lmac;
+ 	u64 cfg;
+ 
+ 	if (!is_lmac_valid(cgx, lmac_id))
+ 		return -ENODEV;
+ 
+-	lmac_type = cgx->mac_ops->get_lmac_type(cgx, lmac_id);
+-	if (lmac_type == LMAC_MODE_SGMII || lmac_type == LMAC_MODE_QSGMII) {
++	lmac = lmac_pdata(lmac_id, cgx);
++	if (lmac->lmac_type == LMAC_MODE_SGMII ||
++	    lmac->lmac_type == LMAC_MODE_QSGMII) {
+ 		cfg = cgx_read(cgx, lmac_id, CGXX_GMP_PCS_MRX_CTL);
+ 		if (enable)
+ 			cfg |= CGXX_GMP_PCS_MRX_CTL_LBK;
+@@ -1556,6 +1564,23 @@ int cgx_lmac_linkup_start(void *cgxd)
+ 	return 0;
+ }
+ 
++int cgx_lmac_reset(void *cgxd, int lmac_id, u8 pf_req_flr)
++{
++	struct cgx *cgx = cgxd;
++	u64 cfg;
++
++	if (!is_lmac_valid(cgx, lmac_id))
++		return -ENODEV;
++
++	/* Resetting PFC related CSRs */
++	cfg = 0xff;
++	cgx_write(cgxd, lmac_id, CGXX_CMRX_RX_LOGL_XON, cfg);
++
++	if (pf_req_flr)
++		cgx_lmac_internal_loopback(cgxd, lmac_id, false);
++	return 0;
++}
++
+ static int cgx_configure_interrupt(struct cgx *cgx, struct lmac *lmac,
+ 				   int cnt, bool req_free)
+ {
+@@ -1675,6 +1700,7 @@ static int cgx_lmac_init(struct cgx *cgx)
+ 		cgx->lmac_idmap[lmac->lmac_id] = lmac;
+ 		set_bit(lmac->lmac_id, &cgx->lmac_bmap);
+ 		cgx->mac_ops->mac_pause_frm_config(cgx, lmac->lmac_id, true);
++		lmac->lmac_type = cgx->mac_ops->get_lmac_type(cgx, lmac->lmac_id);
+ 	}
+ 
+ 	return cgx_lmac_verify_fwi_version(cgx);
+@@ -1771,6 +1797,7 @@ static struct mac_ops	cgx_mac_ops    = {
+ 	.mac_tx_enable =		cgx_lmac_tx_enable,
+ 	.pfc_config =                   cgx_lmac_pfc_config,
+ 	.mac_get_pfc_frm_cfg   =        cgx_lmac_get_pfc_frm_cfg,
++	.mac_reset   =			cgx_lmac_reset,
+ };
+ 
+ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
+index 5a20d93004c71..5741141796880 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
+@@ -35,6 +35,7 @@
+ #define CGXX_CMRX_INT_ENA_W1S		0x058
+ #define CGXX_CMRX_RX_ID_MAP		0x060
+ #define CGXX_CMRX_RX_STAT0		0x070
++#define CGXX_CMRX_RX_LOGL_XON		0x100
+ #define CGXX_CMRX_RX_LMACS		0x128
+ #define CGXX_CMRX_RX_DMAC_CTL0		(0x1F8 + mac_ops->csr_offset)
+ #define CGX_DMAC_CTL0_CAM_ENABLE	BIT_ULL(3)
+@@ -181,4 +182,5 @@ int cgx_lmac_get_pfc_frm_cfg(void *cgxd, int lmac_id, u8 *tx_pause,
+ 			     u8 *rx_pause);
+ int verify_lmac_fc_cfg(void *cgxd, int lmac_id, u8 tx_pause, u8 rx_pause,
+ 		       int pfvf_idx);
++int cgx_lmac_reset(void *cgxd, int lmac_id, u8 pf_req_flr);
+ #endif /* CGX_H */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
+index 39aaf0e4467dc..0b4cba03f2e83 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
+@@ -24,6 +24,7 @@
+  * @cgx:		parent cgx port
+  * @mcast_filters_count:  Number of multicast filters installed
+  * @lmac_id:		lmac port id
++ * @lmac_type:	        lmac type like SGMII/XAUI
+  * @cmd_pend:		flag set before new command is started
+  *			flag cleared after command response is received
+  * @name:		lmac port name
+@@ -43,6 +44,7 @@ struct lmac {
+ 	struct cgx *cgx;
+ 	u8 mcast_filters_count;
+ 	u8 lmac_id;
++	u8 lmac_type;
+ 	bool cmd_pend;
+ 	char *name;
+ };
+@@ -125,6 +127,7 @@ struct mac_ops {
+ 
+ 	int                     (*mac_get_pfc_frm_cfg)(void *cgxd, int lmac_id,
+ 						       u8 *tx_pause, u8 *rx_pause);
++	int			(*mac_reset)(void *cgxd, int lmac_id, u8 pf_req_flr);
+ 
+ 	/* FEC stats */
+ 	int			(*get_fec_stats)(void *cgxd, int lmac_id,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
+index de0d88dd10d65..b4fcb20c3f4fd 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
+@@ -37,6 +37,7 @@ static struct mac_ops		rpm_mac_ops   = {
+ 	.mac_tx_enable =		rpm_lmac_tx_enable,
+ 	.pfc_config =                   rpm_lmac_pfc_config,
+ 	.mac_get_pfc_frm_cfg   =        rpm_lmac_get_pfc_frm_cfg,
++	.mac_reset   =			rpm_lmac_reset,
+ };
+ 
+ static struct mac_ops		rpm2_mac_ops   = {
+@@ -47,7 +48,7 @@ static struct mac_ops		rpm2_mac_ops   = {
+ 	.int_set_reg    =       RPM2_CMRX_SW_INT_ENA_W1S,
+ 	.irq_offset     =       1,
+ 	.int_ena_bit    =       BIT_ULL(0),
+-	.lmac_fwi	=	RPM_LMAC_FWI,
++	.lmac_fwi	=	RPM2_LMAC_FWI,
+ 	.non_contiguous_serdes_lane = true,
+ 	.rx_stats_cnt   =       43,
+ 	.tx_stats_cnt   =       34,
+@@ -68,6 +69,7 @@ static struct mac_ops		rpm2_mac_ops   = {
+ 	.mac_tx_enable =		rpm_lmac_tx_enable,
+ 	.pfc_config =                   rpm_lmac_pfc_config,
+ 	.mac_get_pfc_frm_cfg   =        rpm_lmac_get_pfc_frm_cfg,
++	.mac_reset   =			rpm_lmac_reset,
+ };
+ 
+ bool is_dev_rpm2(void *rpmd)
+@@ -537,14 +539,15 @@ u32 rpm2_get_lmac_fifo_len(void *rpmd, int lmac_id)
+ int rpm_lmac_internal_loopback(void *rpmd, int lmac_id, bool enable)
+ {
+ 	rpm_t *rpm = rpmd;
+-	u8 lmac_type;
++	struct lmac *lmac;
+ 	u64 cfg;
+ 
+ 	if (!is_lmac_valid(rpm, lmac_id))
+ 		return -ENODEV;
+-	lmac_type = rpm->mac_ops->get_lmac_type(rpm, lmac_id);
+ 
+-	if (lmac_type == LMAC_MODE_QSGMII || lmac_type == LMAC_MODE_SGMII) {
++	lmac = lmac_pdata(lmac_id, rpm);
++	if (lmac->lmac_type == LMAC_MODE_QSGMII ||
++	    lmac->lmac_type == LMAC_MODE_SGMII) {
+ 		dev_err(&rpm->pdev->dev, "loopback not supported for LPC mode\n");
+ 		return 0;
+ 	}
+@@ -713,3 +716,24 @@ int rpm_get_fec_stats(void *rpmd, int lmac_id, struct cgx_fec_stats_rsp *rsp)
+ 
+ 	return 0;
+ }
++
++int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr)
++{
++	u64 rx_logl_xon, cfg;
++	rpm_t *rpm = rpmd;
++
++	if (!is_lmac_valid(rpm, lmac_id))
++		return -ENODEV;
++
++	/* Resetting PFC related CSRs */
++	rx_logl_xon = is_dev_rpm2(rpm) ? RPM2_CMRX_RX_LOGL_XON :
++					 RPMX_CMRX_RX_LOGL_XON;
++	cfg = 0xff;
++
++	rpm_write(rpm, lmac_id, rx_logl_xon, cfg);
++
++	if (pf_req_flr)
++		rpm_lmac_internal_loopback(rpm, lmac_id, false);
++
++	return 0;
++}
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
+index 22147b4c21370..b79cfbc6f8770 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
+@@ -74,6 +74,7 @@
+ #define RPMX_MTI_MAC100X_CL01_PAUSE_QUANTA              0x80A8
+ #define RPMX_MTI_MAC100X_CL89_PAUSE_QUANTA		0x8108
+ #define RPM_DEFAULT_PAUSE_TIME                          0x7FF
++#define RPMX_CMRX_RX_LOGL_XON				0x4100
+ 
+ #define RPMX_MTI_MAC100X_XIF_MODE		        0x8100
+ #define RPMX_ONESTEP_ENABLE				BIT_ULL(5)
+@@ -94,7 +95,8 @@
+ 
+ /* CN10KB CSR Declaration */
+ #define  RPM2_CMRX_SW_INT				0x1b0
+-#define  RPM2_CMRX_SW_INT_ENA_W1S			0x1b8
++#define  RPM2_CMRX_SW_INT_ENA_W1S			0x1c8
++#define  RPM2_LMAC_FWI					0x12
+ #define  RPM2_CMR_CHAN_MSK_OR				0x3120
+ #define  RPM2_CMR_RX_OVR_BP_EN				BIT_ULL(2)
+ #define  RPM2_CMR_RX_OVR_BP_BP				BIT_ULL(1)
+@@ -131,4 +133,5 @@ int rpm_lmac_get_pfc_frm_cfg(void *rpmd, int lmac_id, u8 *tx_pause,
+ int rpm2_get_nr_lmacs(void *rpmd);
+ bool is_dev_rpm2(void *rpmd);
+ int rpm_get_fec_stats(void *cgxd, int lmac_id, struct cgx_fec_stats_rsp *rsp);
++int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr);
+ #endif /* RPM_H */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index 9f673bda9dbdd..b26b013216933 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -2629,6 +2629,7 @@ static void __rvu_flr_handler(struct rvu *rvu, u16 pcifunc)
+ 	 * Since LF is detached use LF number as -1.
+ 	 */
+ 	rvu_npc_free_mcam_entries(rvu, pcifunc, -1);
++	rvu_mac_reset(rvu, pcifunc);
+ 
+ 	mutex_unlock(&rvu->flr_lock);
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index d655bf04a483d..be279cd1fd729 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -23,6 +23,7 @@
+ #define	PCI_DEVID_OCTEONTX2_LBK			0xA061
+ 
+ /* Subsystem Device ID */
++#define PCI_SUBSYS_DEVID_98XX                  0xB100
+ #define PCI_SUBSYS_DEVID_96XX                  0xB200
+ #define PCI_SUBSYS_DEVID_CN10K_A	       0xB900
+ #define PCI_SUBSYS_DEVID_CNF10K_B              0xBC00
+@@ -669,6 +670,16 @@ static inline u16 rvu_nix_chan_cpt(struct rvu *rvu, u8 chan)
+ 	return rvu->hw->cpt_chan_base + chan;
+ }
+ 
++static inline bool is_rvu_supports_nix1(struct rvu *rvu)
++{
++	struct pci_dev *pdev = rvu->pdev;
++
++	if (pdev->subsystem_device == PCI_SUBSYS_DEVID_98XX)
++		return true;
++
++	return false;
++}
++
+ /* Function Prototypes
+  * RVU
+  */
+@@ -864,6 +875,7 @@ int rvu_cgx_config_tx(void *cgxd, int lmac_id, bool enable);
+ int rvu_cgx_prio_flow_ctrl_cfg(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_pause,
+ 			       u16 pfc_en);
+ int rvu_cgx_cfg_pause_frm(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_pause);
++void rvu_mac_reset(struct rvu *rvu, u16 pcifunc);
+ u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac);
+ int npc_get_nixlf_mcam_index(struct npc_mcam *mcam, u16 pcifunc, int nixlf,
+ 			     int type);
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+index 83b342fa8d753..095b2cc4a6999 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+@@ -114,7 +114,7 @@ static void rvu_map_cgx_nix_block(struct rvu *rvu, int pf,
+ 	p2x = cgx_lmac_get_p2x(cgx_id, lmac_id);
+ 	/* Firmware sets P2X_SELECT as either NIX0 or NIX1 */
+ 	pfvf->nix_blkaddr = BLKADDR_NIX0;
+-	if (p2x == CMR_P2X_SEL_NIX1)
++	if (is_rvu_supports_nix1(rvu) && p2x == CMR_P2X_SEL_NIX1)
+ 		pfvf->nix_blkaddr = BLKADDR_NIX1;
+ }
+ 
+@@ -763,7 +763,7 @@ static int rvu_cgx_ptp_rx_cfg(struct rvu *rvu, u16 pcifunc, bool enable)
+ 	cgxd = rvu_cgx_pdata(cgx_id, rvu);
+ 
+ 	mac_ops = get_mac_ops(cgxd);
+-	mac_ops->mac_enadis_ptp_config(cgxd, lmac_id, true);
++	mac_ops->mac_enadis_ptp_config(cgxd, lmac_id, enable);
+ 	/* If PTP is enabled then inform NPC that packets to be
+ 	 * parsed by this PF will have their data shifted by 8 bytes
+ 	 * and if PTP is disabled then no shift is required
+@@ -1250,3 +1250,21 @@ int rvu_mbox_handler_cgx_prio_flow_ctrl_cfg(struct rvu *rvu,
+ 	mac_ops->mac_get_pfc_frm_cfg(cgxd, lmac_id, &rsp->tx_pause, &rsp->rx_pause);
+ 	return err;
+ }
++
++void rvu_mac_reset(struct rvu *rvu, u16 pcifunc)
++{
++	int pf = rvu_get_pf(pcifunc);
++	struct mac_ops *mac_ops;
++	struct cgx *cgxd;
++	u8 cgx, lmac;
++
++	if (!is_pf_cgxmapped(rvu, pf))
++		return;
++
++	rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx, &lmac);
++	cgxd = rvu_cgx_pdata(cgx, rvu);
++	mac_ops = get_mac_ops(cgxd);
++
++	if (mac_ops->mac_reset(cgxd, lmac, !is_vf(pcifunc)))
++		dev_err(rvu->dev, "Failed to reset MAC\n");
++}
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/minimal.c b/drivers/net/ethernet/mellanox/mlxsw/minimal.c
+index 6b56eadd736e5..6b98c3287b497 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/minimal.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/minimal.c
+@@ -417,6 +417,7 @@ static int mlxsw_m_linecards_init(struct mlxsw_m *mlxsw_m)
+ err_kmalloc_array:
+ 	for (i--; i >= 0; i--)
+ 		kfree(mlxsw_m->line_cards[i]);
++	kfree(mlxsw_m->line_cards);
+ err_kcalloc:
+ 	kfree(mlxsw_m->ports);
+ 	return err;
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 1f5f00b304418..2fa833d041baa 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -2925,7 +2925,6 @@ int ocelot_init(struct ocelot *ocelot)
+ 		}
+ 	}
+ 
+-	mutex_init(&ocelot->ptp_lock);
+ 	mutex_init(&ocelot->mact_lock);
+ 	mutex_init(&ocelot->fwd_domain_lock);
+ 	mutex_init(&ocelot->tas_lock);
+diff --git a/drivers/net/ethernet/mscc/ocelot_ptp.c b/drivers/net/ethernet/mscc/ocelot_ptp.c
+index 2180ae94c7447..cb32234a5bf1b 100644
+--- a/drivers/net/ethernet/mscc/ocelot_ptp.c
++++ b/drivers/net/ethernet/mscc/ocelot_ptp.c
+@@ -439,8 +439,12 @@ static int ocelot_ipv6_ptp_trap_del(struct ocelot *ocelot, int port)
+ static int ocelot_setup_ptp_traps(struct ocelot *ocelot, int port,
+ 				  bool l2, bool l4)
+ {
++	struct ocelot_port *ocelot_port = ocelot->ports[port];
+ 	int err;
+ 
++	ocelot_port->trap_proto &= ~(OCELOT_PROTO_PTP_L2 |
++				     OCELOT_PROTO_PTP_L4);
++
+ 	if (l2)
+ 		err = ocelot_l2_ptp_trap_add(ocelot, port);
+ 	else
+@@ -464,6 +468,11 @@ static int ocelot_setup_ptp_traps(struct ocelot *ocelot, int port,
+ 	if (err)
+ 		return err;
+ 
++	if (l2)
++		ocelot_port->trap_proto |= OCELOT_PROTO_PTP_L2;
++	if (l4)
++		ocelot_port->trap_proto |= OCELOT_PROTO_PTP_L4;
++
+ 	return 0;
+ 
+ err_ipv6:
+@@ -474,10 +483,38 @@ err_ipv4:
+ 	return err;
+ }
+ 
++static int ocelot_traps_to_ptp_rx_filter(unsigned int proto)
++{
++	if ((proto & OCELOT_PROTO_PTP_L2) && (proto & OCELOT_PROTO_PTP_L4))
++		return HWTSTAMP_FILTER_PTP_V2_EVENT;
++	else if (proto & OCELOT_PROTO_PTP_L2)
++		return HWTSTAMP_FILTER_PTP_V2_L2_EVENT;
++	else if (proto & OCELOT_PROTO_PTP_L4)
++		return HWTSTAMP_FILTER_PTP_V2_L4_EVENT;
++
++	return HWTSTAMP_FILTER_NONE;
++}
++
+ int ocelot_hwstamp_get(struct ocelot *ocelot, int port, struct ifreq *ifr)
+ {
+-	return copy_to_user(ifr->ifr_data, &ocelot->hwtstamp_config,
+-			    sizeof(ocelot->hwtstamp_config)) ? -EFAULT : 0;
++	struct ocelot_port *ocelot_port = ocelot->ports[port];
++	struct hwtstamp_config cfg = {};
++
++	switch (ocelot_port->ptp_cmd) {
++	case IFH_REW_OP_TWO_STEP_PTP:
++		cfg.tx_type = HWTSTAMP_TX_ON;
++		break;
++	case IFH_REW_OP_ORIGIN_PTP:
++		cfg.tx_type = HWTSTAMP_TX_ONESTEP_SYNC;
++		break;
++	default:
++		cfg.tx_type = HWTSTAMP_TX_OFF;
++		break;
++	}
++
++	cfg.rx_filter = ocelot_traps_to_ptp_rx_filter(ocelot_port->trap_proto);
++
++	return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0;
+ }
+ EXPORT_SYMBOL(ocelot_hwstamp_get);
+ 
+@@ -509,8 +546,6 @@ int ocelot_hwstamp_set(struct ocelot *ocelot, int port, struct ifreq *ifr)
+ 		return -ERANGE;
+ 	}
+ 
+-	mutex_lock(&ocelot->ptp_lock);
+-
+ 	switch (cfg.rx_filter) {
+ 	case HWTSTAMP_FILTER_NONE:
+ 		break;
+@@ -531,28 +566,14 @@ int ocelot_hwstamp_set(struct ocelot *ocelot, int port, struct ifreq *ifr)
+ 		l4 = true;
+ 		break;
+ 	default:
+-		mutex_unlock(&ocelot->ptp_lock);
+ 		return -ERANGE;
+ 	}
+ 
+ 	err = ocelot_setup_ptp_traps(ocelot, port, l2, l4);
+-	if (err) {
+-		mutex_unlock(&ocelot->ptp_lock);
++	if (err)
+ 		return err;
+-	}
+ 
+-	if (l2 && l4)
+-		cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+-	else if (l2)
+-		cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_L2_EVENT;
+-	else if (l4)
+-		cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_L4_EVENT;
+-	else
+-		cfg.rx_filter = HWTSTAMP_FILTER_NONE;
+-
+-	/* Commit back the result & save it */
+-	memcpy(&ocelot->hwtstamp_config, &cfg, sizeof(cfg));
+-	mutex_unlock(&ocelot->ptp_lock);
++	cfg.rx_filter = ocelot_traps_to_ptp_rx_filter(ocelot_port->trap_proto);
+ 
+ 	return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0;
+ }
+@@ -824,11 +845,6 @@ int ocelot_init_timestamp(struct ocelot *ocelot,
+ 
+ 	ocelot_write(ocelot, PTP_CFG_MISC_PTP_EN, PTP_CFG_MISC);
+ 
+-	/* There is no device reconfiguration, PTP Rx stamping is always
+-	 * enabled.
+-	 */
+-	ocelot->hwtstamp_config.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+-
+ 	return 0;
+ }
+ EXPORT_SYMBOL(ocelot_init_timestamp);
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index b63e47af63655..8c019f382a7f3 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -1297,8 +1297,10 @@ static void efx_ef10_fini_nic(struct efx_nic *efx)
+ {
+ 	struct efx_ef10_nic_data *nic_data = efx->nic_data;
+ 
++	spin_lock_bh(&efx->stats_lock);
+ 	kfree(nic_data->mc_stats);
+ 	nic_data->mc_stats = NULL;
++	spin_unlock_bh(&efx->stats_lock);
+ }
+ 
+ static int efx_ef10_init_nic(struct efx_nic *efx)
+@@ -1852,9 +1854,14 @@ static size_t efx_ef10_update_stats_pf(struct efx_nic *efx, u64 *full_stats,
+ 
+ 	efx_ef10_get_stat_mask(efx, mask);
+ 
+-	efx_nic_copy_stats(efx, nic_data->mc_stats);
+-	efx_nic_update_stats(efx_ef10_stat_desc, EF10_STAT_COUNT,
+-			     mask, stats, nic_data->mc_stats, false);
++	/* If NIC was fini'd (probably resetting), then we can't read
++	 * updated stats right now.
++	 */
++	if (nic_data->mc_stats) {
++		efx_nic_copy_stats(efx, nic_data->mc_stats);
++		efx_nic_update_stats(efx_ef10_stat_desc, EF10_STAT_COUNT,
++				     mask, stats, nic_data->mc_stats, false);
++	}
+ 
+ 	/* Update derived statistics */
+ 	efx_nic_fix_nodesc_drop_stat(efx,
+diff --git a/drivers/net/ethernet/sfc/efx_devlink.c b/drivers/net/ethernet/sfc/efx_devlink.c
+index ef9971cbb695d..0384b134e1242 100644
+--- a/drivers/net/ethernet/sfc/efx_devlink.c
++++ b/drivers/net/ethernet/sfc/efx_devlink.c
+@@ -622,6 +622,9 @@ static struct devlink_port *ef100_set_devlink_port(struct efx_nic *efx, u32 idx)
+ 	u32 id;
+ 	int rc;
+ 
++	if (!efx->mae)
++		return NULL;
++
+ 	if (efx_mae_lookup_mport(efx, idx, &id)) {
+ 		/* This should not happen. */
+ 		if (idx == MAE_MPORT_DESC_VF_IDX_NULL)
+diff --git a/drivers/net/ethernet/sfc/tc.c b/drivers/net/ethernet/sfc/tc.c
+index c004443c1d58c..d7827ab3761f9 100644
+--- a/drivers/net/ethernet/sfc/tc.c
++++ b/drivers/net/ethernet/sfc/tc.c
+@@ -132,23 +132,6 @@ static void efx_tc_free_action_set_list(struct efx_nic *efx,
+ 	/* Don't kfree, as acts is embedded inside a struct efx_tc_flow_rule */
+ }
+ 
+-static void efx_tc_flow_free(void *ptr, void *arg)
+-{
+-	struct efx_tc_flow_rule *rule = ptr;
+-	struct efx_nic *efx = arg;
+-
+-	netif_err(efx, drv, efx->net_dev,
+-		  "tc rule %lx still present at teardown, removing\n",
+-		  rule->cookie);
+-
+-	efx_mae_delete_rule(efx, rule->fw_id);
+-
+-	/* Release entries in subsidiary tables */
+-	efx_tc_free_action_set_list(efx, &rule->acts, true);
+-
+-	kfree(rule);
+-}
+-
+ /* Boilerplate for the simple 'copy a field' cases */
+ #define _MAP_KEY_AND_MASK(_name, _type, _tcget, _tcfield, _field)	\
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_##_name)) {		\
+@@ -1451,6 +1434,21 @@ static void efx_tc_encap_match_free(void *ptr, void *__unused)
+ 	kfree(encap);
+ }
+ 
++static void efx_tc_flow_free(void *ptr, void *arg)
++{
++	struct efx_tc_flow_rule *rule = ptr;
++	struct efx_nic *efx = arg;
++
++	netif_err(efx, drv, efx->net_dev,
++		  "tc rule %lx still present at teardown, removing\n",
++		  rule->cookie);
++
++	/* Also releases entries in subsidiary tables */
++	efx_tc_delete_rule(efx, rule);
++
++	kfree(rule);
++}
++
+ int efx_init_struct_tc(struct efx_nic *efx)
+ {
+ 	int rc;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 87510951f4e80..b74946bbee3c3 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -7457,12 +7457,6 @@ void stmmac_dvr_remove(struct device *dev)
+ 	netif_carrier_off(ndev);
+ 	unregister_netdev(ndev);
+ 
+-	/* Serdes power down needs to happen after VLAN filter
+-	 * is deleted that is triggered by unregister_netdev().
+-	 */
+-	if (priv->plat->serdes_powerdown)
+-		priv->plat->serdes_powerdown(ndev, priv->plat->bsp_priv);
+-
+ #ifdef CONFIG_DEBUG_FS
+ 	stmmac_exit_fs(ndev);
+ #endif
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 3e310b55bce2b..734822321e0ab 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -2042,6 +2042,11 @@ static int axienet_probe(struct platform_device *pdev)
+ 		goto cleanup_clk;
+ 	}
+ 
++	/* Reset core now that clocks are enabled, prior to accessing MDIO */
++	ret = __axienet_device_reset(lp);
++	if (ret)
++		goto cleanup_clk;
++
+ 	/* Autodetect the need for 64-bit DMA pointers.
+ 	 * When the IP is configured for a bus width bigger than 32 bits,
+ 	 * writing the MSB registers is mandatory, even if they are all 0.
+@@ -2096,11 +2101,6 @@ static int axienet_probe(struct platform_device *pdev)
+ 	lp->coalesce_count_tx = XAXIDMA_DFT_TX_THRESHOLD;
+ 	lp->coalesce_usec_tx = XAXIDMA_DFT_TX_USEC;
+ 
+-	/* Reset core now that clocks are enabled, prior to accessing MDIO */
+-	ret = __axienet_device_reset(lp);
+-	if (ret)
+-		goto cleanup_clk;
+-
+ 	ret = axienet_mdio_setup(lp);
+ 	if (ret)
+ 		dev_warn(&pdev->dev,
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 15c7dc82107f4..acb20ad4e37eb 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -631,7 +631,9 @@ static void __gtp_encap_destroy(struct sock *sk)
+ 			gtp->sk1u = NULL;
+ 		udp_sk(sk)->encap_type = 0;
+ 		rcu_assign_sk_user_data(sk, NULL);
++		release_sock(sk);
+ 		sock_put(sk);
++		return;
+ 	}
+ 	release_sock(sk);
+ }
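
The gtp fix reorders teardown so the socket lock is released before the final sock_put(): sock_put() may drop the last reference and free the socket, after which the old fall-through release_sock() touched freed memory. A toy refcount model of why the unlock must come before the last put (all names invented):

#include <stdio.h>
#include <stdlib.h>

struct obj {
	int refs;
	int locked;
};

static void obj_unlock(struct obj *o) { o->locked = 0; }

static void obj_put(struct obj *o)
{
	if (--o->refs == 0) {
		printf("freeing object\n");
		free(o);	/* after this, any further access is use-after-free */
	}
}

int main(void)
{
	struct obj *o = malloc(sizeof(*o));

	o->refs = 1;
	o->locked = 1;
	/* Correct order: drop the lock while the object is still alive,
	 * then drop the (possibly final) reference and stop touching it. */
	obj_unlock(o);
	obj_put(o);
	return 0;
}
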
+diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
+index ab5133eb1d517..e45817caaee8d 100644
+--- a/drivers/net/ipvlan/ipvlan_core.c
++++ b/drivers/net/ipvlan/ipvlan_core.c
+@@ -585,7 +585,8 @@ static int ipvlan_xmit_mode_l3(struct sk_buff *skb, struct net_device *dev)
+ 				consume_skb(skb);
+ 				return NET_XMIT_DROP;
+ 			}
+-			return ipvlan_rcv_frame(addr, &skb, true);
++			ipvlan_rcv_frame(addr, &skb, true);
++			return NET_XMIT_SUCCESS;
+ 		}
+ 	}
+ out:
+@@ -611,7 +612,8 @@ static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
+ 					consume_skb(skb);
+ 					return NET_XMIT_DROP;
+ 				}
+-				return ipvlan_rcv_frame(addr, &skb, true);
++				ipvlan_rcv_frame(addr, &skb, true);
++				return NET_XMIT_SUCCESS;
+ 			}
+ 		}
+ 		skb = skb_share_check(skb, GFP_ATOMIC);
+@@ -623,7 +625,8 @@ static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
+ 		 * the skb for the main-dev. At the RX side we just return
+ 		 * RX_PASS for it to be processed further on the stack.
+ 		 */
+-		return dev_forward_skb(ipvlan->phy_dev, skb);
++		dev_forward_skb(ipvlan->phy_dev, skb);
++		return NET_XMIT_SUCCESS;
+ 
+ 	} else if (is_multicast_ether_addr(eth->h_dest)) {
+ 		skb_reset_mac_header(skb);
+diff --git a/drivers/net/ppp/pptp.c b/drivers/net/ppp/pptp.c
+index 0fe78826c8fa4..32183f24e63ff 100644
+--- a/drivers/net/ppp/pptp.c
++++ b/drivers/net/ppp/pptp.c
+@@ -24,6 +24,7 @@
+ #include <linux/in.h>
+ #include <linux/ip.h>
+ #include <linux/rcupdate.h>
++#include <linux/security.h>
+ #include <linux/spinlock.h>
+ 
+ #include <net/sock.h>
+@@ -128,6 +129,23 @@ static void del_chan(struct pppox_sock *sock)
+ 	spin_unlock(&chan_lock);
+ }
+ 
++static struct rtable *pptp_route_output(struct pppox_sock *po,
++					struct flowi4 *fl4)
++{
++	struct sock *sk = &po->sk;
++	struct net *net;
++
++	net = sock_net(sk);
++	flowi4_init_output(fl4, sk->sk_bound_dev_if, sk->sk_mark, 0,
++			   RT_SCOPE_UNIVERSE, IPPROTO_GRE, 0,
++			   po->proto.pptp.dst_addr.sin_addr.s_addr,
++			   po->proto.pptp.src_addr.sin_addr.s_addr,
++			   0, 0, sock_net_uid(net, sk));
++	security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4));
++
++	return ip_route_output_flow(net, fl4, sk);
++}
++
+ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+ {
+ 	struct sock *sk = (struct sock *) chan->private;
+@@ -151,11 +169,7 @@ static int pptp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+ 	if (sk_pppox(po)->sk_state & PPPOX_DEAD)
+ 		goto tx_error;
+ 
+-	rt = ip_route_output_ports(net, &fl4, NULL,
+-				   opt->dst_addr.sin_addr.s_addr,
+-				   opt->src_addr.sin_addr.s_addr,
+-				   0, 0, IPPROTO_GRE,
+-				   RT_TOS(0), sk->sk_bound_dev_if);
++	rt = pptp_route_output(po, &fl4);
+ 	if (IS_ERR(rt))
+ 		goto tx_error;
+ 
+@@ -438,12 +452,7 @@ static int pptp_connect(struct socket *sock, struct sockaddr *uservaddr,
+ 	po->chan.private = sk;
+ 	po->chan.ops = &pptp_chan_ops;
+ 
+-	rt = ip_route_output_ports(sock_net(sk), &fl4, sk,
+-				   opt->dst_addr.sin_addr.s_addr,
+-				   opt->src_addr.sin_addr.s_addr,
+-				   0, 0,
+-				   IPPROTO_GRE, RT_CONN_FLAGS(sk),
+-				   sk->sk_bound_dev_if);
++	rt = pptp_route_output(po, &fl4);
+ 	if (IS_ERR(rt)) {
+ 		error = -EHOSTUNREACH;
+ 		goto end;
+diff --git a/drivers/net/wireguard/netlink.c b/drivers/net/wireguard/netlink.c
+index 43c8c84e7ea82..6d1bd9f52d02a 100644
+--- a/drivers/net/wireguard/netlink.c
++++ b/drivers/net/wireguard/netlink.c
+@@ -546,6 +546,7 @@ static int wg_set_device(struct sk_buff *skb, struct genl_info *info)
+ 		u8 *private_key = nla_data(info->attrs[WGDEVICE_A_PRIVATE_KEY]);
+ 		u8 public_key[NOISE_PUBLIC_KEY_LEN];
+ 		struct wg_peer *peer, *temp;
++		bool send_staged_packets;
+ 
+ 		if (!crypto_memneq(wg->static_identity.static_private,
+ 				   private_key, NOISE_PUBLIC_KEY_LEN))
+@@ -564,14 +565,17 @@ static int wg_set_device(struct sk_buff *skb, struct genl_info *info)
+ 		}
+ 
+ 		down_write(&wg->static_identity.lock);
+-		wg_noise_set_static_identity_private_key(&wg->static_identity,
+-							 private_key);
+-		list_for_each_entry_safe(peer, temp, &wg->peer_list,
+-					 peer_list) {
++		send_staged_packets = !wg->static_identity.has_identity && netif_running(wg->dev);
++		wg_noise_set_static_identity_private_key(&wg->static_identity, private_key);
++		send_staged_packets = send_staged_packets && wg->static_identity.has_identity;
++
++		wg_cookie_checker_precompute_device_keys(&wg->cookie_checker);
++		list_for_each_entry_safe(peer, temp, &wg->peer_list, peer_list) {
+ 			wg_noise_precompute_static_static(peer);
+ 			wg_noise_expire_current_peer_keypairs(peer);
++			if (send_staged_packets)
++				wg_packet_send_staged_packets(peer);
+ 		}
+-		wg_cookie_checker_precompute_device_keys(&wg->cookie_checker);
+ 		up_write(&wg->static_identity.lock);
+ 	}
+ skip_set_private_key:
+diff --git a/drivers/net/wireguard/queueing.c b/drivers/net/wireguard/queueing.c
+index 8084e7408c0ae..26d235d152352 100644
+--- a/drivers/net/wireguard/queueing.c
++++ b/drivers/net/wireguard/queueing.c
+@@ -28,6 +28,7 @@ int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
+ 	int ret;
+ 
+ 	memset(queue, 0, sizeof(*queue));
++	queue->last_cpu = -1;
+ 	ret = ptr_ring_init(&queue->ring, len, GFP_KERNEL);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/net/wireguard/queueing.h b/drivers/net/wireguard/queueing.h
+index 125284b346a77..1ea4f874e367e 100644
+--- a/drivers/net/wireguard/queueing.h
++++ b/drivers/net/wireguard/queueing.h
+@@ -117,20 +117,17 @@ static inline int wg_cpumask_choose_online(int *stored_cpu, unsigned int id)
+ 	return cpu;
+ }
+ 
+-/* This function is racy, in the sense that next is unlocked, so it could return
+- * the same CPU twice. A race-free version of this would be to instead store an
+- * atomic sequence number, do an increment-and-return, and then iterate through
+- * every possible CPU until we get to that index -- choose_cpu. However that's
+- * a bit slower, and it doesn't seem like this potential race actually
+- * introduces any performance loss, so we live with it.
++/* This function is racy, in the sense that it's called while last_cpu is
++ * unlocked, so it could return the same CPU twice. Adding locking or using
++ * atomic sequence numbers is slower though, and the consequences of racing are
++ * harmless, so live with it.
+  */
+-static inline int wg_cpumask_next_online(int *next)
++static inline int wg_cpumask_next_online(int *last_cpu)
+ {
+-	int cpu = *next;
+-
+-	while (unlikely(!cpumask_test_cpu(cpu, cpu_online_mask)))
+-		cpu = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
+-	*next = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
++	int cpu = cpumask_next(*last_cpu, cpu_online_mask);
++	if (cpu >= nr_cpu_ids)
++		cpu = cpumask_first(cpu_online_mask);
++	*last_cpu = cpu;
+ 	return cpu;
+ }
+ 
+@@ -159,7 +156,7 @@ static inline void wg_prev_queue_drop_peeked(struct prev_queue *queue)
+ 
+ static inline int wg_queue_enqueue_per_device_and_peer(
+ 	struct crypt_queue *device_queue, struct prev_queue *peer_queue,
+-	struct sk_buff *skb, struct workqueue_struct *wq, int *next_cpu)
++	struct sk_buff *skb, struct workqueue_struct *wq)
+ {
+ 	int cpu;
+ 
+@@ -173,7 +170,7 @@ static inline int wg_queue_enqueue_per_device_and_peer(
+ 	/* Then we queue it up in the device queue, which consumes the
+ 	 * packet as soon as it can.
+ 	 */
+-	cpu = wg_cpumask_next_online(next_cpu);
++	cpu = wg_cpumask_next_online(&device_queue->last_cpu);
+ 	if (unlikely(ptr_ring_produce_bh(&device_queue->ring, skb)))
+ 		return -EPIPE;
+ 	queue_work_on(cpu, wq, &per_cpu_ptr(device_queue->worker, cpu)->work);
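
In the wireguard hunks, the round-robin cursor moves into the crypt_queue itself (initialized to -1 in wg_packet_queue_init), and wg_cpumask_next_online() becomes a plain advance-and-wrap over the online mask. A standalone model of that wrap logic, with an integer bitmask standing in for the kernel cpumask:

#include <stdio.h>

#define NR_CPUS 8

static int next_online(int *last_cpu, unsigned int online_mask)
{
	int cpu;

	for (cpu = *last_cpu + 1; cpu < NR_CPUS; cpu++)
		if (online_mask & (1u << cpu))
			goto found;
	for (cpu = 0; cpu < NR_CPUS; cpu++)	/* wrap to the first online CPU */
		if (online_mask & (1u << cpu))
			break;
found:
	*last_cpu = cpu;
	return cpu;
}

int main(void)
{
	unsigned int online = 0x2d;	/* CPUs 0, 2, 3, 5 "online" */
	int last = -1;			/* matches queue->last_cpu = -1 at init */

	for (int i = 0; i < 6; i++)
		printf("%d ", next_online(&last, online));
	printf("\n");			/* prints: 0 2 3 5 0 2 */
	return 0;
}

As with the kernel version, a racing caller can pick the same CPU twice; the comment in the patch argues that is harmless and cheaper than locking.
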
+diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
+index 7135d51d2d872..0b3f0c8435509 100644
+--- a/drivers/net/wireguard/receive.c
++++ b/drivers/net/wireguard/receive.c
+@@ -524,7 +524,7 @@ static void wg_packet_consume_data(struct wg_device *wg, struct sk_buff *skb)
+ 		goto err;
+ 
+ 	ret = wg_queue_enqueue_per_device_and_peer(&wg->decrypt_queue, &peer->rx_queue, skb,
+-						   wg->packet_crypt_wq, &wg->decrypt_queue.last_cpu);
++						   wg->packet_crypt_wq);
+ 	if (unlikely(ret == -EPIPE))
+ 		wg_queue_enqueue_per_peer_rx(skb, PACKET_STATE_DEAD);
+ 	if (likely(!ret || ret == -EPIPE)) {
+diff --git a/drivers/net/wireguard/send.c b/drivers/net/wireguard/send.c
+index 5368f7c35b4bf..95c853b59e1da 100644
+--- a/drivers/net/wireguard/send.c
++++ b/drivers/net/wireguard/send.c
+@@ -318,7 +318,7 @@ static void wg_packet_create_data(struct wg_peer *peer, struct sk_buff *first)
+ 		goto err;
+ 
+ 	ret = wg_queue_enqueue_per_device_and_peer(&wg->encrypt_queue, &peer->tx_queue, first,
+-						   wg->packet_crypt_wq, &wg->encrypt_queue.last_cpu);
++						   wg->packet_crypt_wq);
+ 	if (unlikely(ret == -EPIPE))
+ 		wg_queue_enqueue_per_peer_tx(first, PACKET_STATE_DEAD);
+ err:
+diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
+index 5eb131ab916fd..6cdb225b7eacc 100644
+--- a/drivers/net/wireless/ath/ath10k/core.c
++++ b/drivers/net/wireless/ath/ath10k/core.c
+@@ -2504,7 +2504,6 @@ EXPORT_SYMBOL(ath10k_core_napi_sync_disable);
+ static void ath10k_core_restart(struct work_struct *work)
+ {
+ 	struct ath10k *ar = container_of(work, struct ath10k, restart_work);
+-	struct ath10k_vif *arvif;
+ 	int ret;
+ 
+ 	set_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags);
+@@ -2543,14 +2542,6 @@ static void ath10k_core_restart(struct work_struct *work)
+ 		ar->state = ATH10K_STATE_RESTARTING;
+ 		ath10k_halt(ar);
+ 		ath10k_scan_finish(ar);
+-		if (ar->hw_params.hw_restart_disconnect) {
+-			list_for_each_entry(arvif, &ar->arvifs, list) {
+-				if (arvif->is_up &&
+-				    arvif->vdev_type == WMI_VDEV_TYPE_STA)
+-					ieee80211_hw_restart_disconnect(arvif->vif);
+-			}
+-		}
+-
+ 		ieee80211_restart_hw(ar->hw);
+ 		break;
+ 	case ATH10K_STATE_OFF:
+@@ -3643,6 +3634,9 @@ struct ath10k *ath10k_core_create(size_t priv_size, struct device *dev,
+ 	mutex_init(&ar->dump_mutex);
+ 	spin_lock_init(&ar->data_lock);
+ 
++	for (int ac = 0; ac < IEEE80211_NUM_ACS; ac++)
++		spin_lock_init(&ar->queue_lock[ac]);
++
+ 	INIT_LIST_HEAD(&ar->peers);
+ 	init_waitqueue_head(&ar->peer_mapping_wq);
+ 	init_waitqueue_head(&ar->htt.empty_tx_wq);
+diff --git a/drivers/net/wireless/ath/ath10k/core.h b/drivers/net/wireless/ath/ath10k/core.h
+index f5de8ce8fb456..4b5239de40184 100644
+--- a/drivers/net/wireless/ath/ath10k/core.h
++++ b/drivers/net/wireless/ath/ath10k/core.h
+@@ -1170,6 +1170,9 @@ struct ath10k {
+ 	/* protects shared structure data */
+ 	spinlock_t data_lock;
+ 
++	/* serialize wake_tx_queue calls per ac */
++	spinlock_t queue_lock[IEEE80211_NUM_ACS];
++
+ 	struct list_head arvifs;
+ 	struct list_head peers;
+ 	struct ath10k_peer *peer_map[ATH10K_MAX_NUM_PEER_IDS];
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 7675858f069bd..03e7bc5b6c0bd 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -4732,13 +4732,14 @@ static void ath10k_mac_op_wake_tx_queue(struct ieee80211_hw *hw,
+ {
+ 	struct ath10k *ar = hw->priv;
+ 	int ret;
+-	u8 ac;
++	u8 ac = txq->ac;
+ 
+ 	ath10k_htt_tx_txq_update(hw, txq);
+ 	if (ar->htt.tx_q_state.mode != HTT_TX_MODE_SWITCH_PUSH)
+ 		return;
+ 
+-	ac = txq->ac;
++	spin_lock_bh(&ar->queue_lock[ac]);
++
+ 	ieee80211_txq_schedule_start(hw, ac);
+ 	txq = ieee80211_next_txq(hw, ac);
+ 	if (!txq)
+@@ -4753,6 +4754,7 @@ static void ath10k_mac_op_wake_tx_queue(struct ieee80211_hw *hw,
+ 	ath10k_htt_tx_txq_update(hw, txq);
+ out:
+ 	ieee80211_txq_schedule_end(hw, ac);
++	spin_unlock_bh(&ar->queue_lock[ac]);
+ }
+ 
+ /* Must not be called with conf_mutex held as workers can use that also. */
+@@ -8107,6 +8109,7 @@ static void ath10k_reconfig_complete(struct ieee80211_hw *hw,
+ 				     enum ieee80211_reconfig_type reconfig_type)
+ {
+ 	struct ath10k *ar = hw->priv;
++	struct ath10k_vif *arvif;
+ 
+ 	if (reconfig_type != IEEE80211_RECONFIG_TYPE_RESTART)
+ 		return;
+@@ -8121,6 +8124,12 @@ static void ath10k_reconfig_complete(struct ieee80211_hw *hw,
+ 		ar->state = ATH10K_STATE_ON;
+ 		ieee80211_wake_queues(ar->hw);
+ 		clear_bit(ATH10K_FLAG_RESTARTING, &ar->dev_flags);
++		if (ar->hw_params.hw_restart_disconnect) {
++			list_for_each_entry(arvif, &ar->arvifs, list) {
++				if (arvif->is_up && arvif->vdev_type == WMI_VDEV_TYPE_STA)
++					ieee80211_hw_restart_disconnect(arvif->vif);
++			}
++		}
+ 	}
+ 
+ 	mutex_unlock(&ar->conf_mutex);
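
The ath10k change serializes wake_tx_queue per access category with a new spinlock array, so concurrent wakers cannot interleave inside ieee80211_txq_schedule_start()/ieee80211_next_txq() for the same AC; the STA disconnect loop also moves from the restart worker into reconfig_complete, after mac80211 is running again. The lock-per-class pattern in a minimal sketch (the four-way split mirrors IEEE80211_NUM_ACS; everything else is invented):

#include <pthread.h>

#define NUM_ACS 4

static pthread_mutex_t queue_lock[NUM_ACS] = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

static void schedule_ac(int ac)
{
	/* Two wakers for the same AC serialize; different ACs proceed
	 * in parallel, which is the point of one lock per class. */
	pthread_mutex_lock(&queue_lock[ac]);
	/* ... pull queues for this AC and push packets ... */
	pthread_mutex_unlock(&queue_lock[ac]);
}

int main(void)
{
	schedule_ac(0);
	schedule_ac(3);
	return 0;
}
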
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index 5cbba9a8b6ba9..396548e57022f 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -1127,6 +1127,7 @@ static int ath11k_ahb_probe(struct platform_device *pdev)
+ 	switch (hw_rev) {
+ 	case ATH11K_HW_IPQ8074:
+ 	case ATH11K_HW_IPQ6018_HW10:
++	case ATH11K_HW_IPQ5018_HW10:
+ 		hif_ops = &ath11k_ahb_hif_ops_ipq8074;
+ 		pci_ops = NULL;
+ 		break;
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index b1b90bd34d67e..9de23c11e18bb 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -664,6 +664,7 @@ static const struct ath11k_hw_params ath11k_hw_params[] = {
+ 		.hal_params = &ath11k_hw_hal_params_ipq8074,
+ 		.single_pdev_only = false,
+ 		.cold_boot_calib = true,
++		.cbcal_restart_fw = true,
+ 		.fix_l1ss = true,
+ 		.supports_dynamic_smps_6ghz = false,
+ 		.alloc_cacheable_memory = true,
+diff --git a/drivers/net/wireless/ath/ath11k/hw.c b/drivers/net/wireless/ath/ath11k/hw.c
+index eb995f9cf0fa1..72797289b33e2 100644
+--- a/drivers/net/wireless/ath/ath11k/hw.c
++++ b/drivers/net/wireless/ath/ath11k/hw.c
+@@ -1175,7 +1175,7 @@ const struct ath11k_hw_ops ipq5018_ops = {
+ 	.mpdu_info_get_peerid = ath11k_hw_ipq8074_mpdu_info_get_peerid,
+ 	.rx_desc_mac_addr2_valid = ath11k_hw_ipq9074_rx_desc_mac_addr2_valid,
+ 	.rx_desc_mpdu_start_addr2 = ath11k_hw_ipq9074_rx_desc_mpdu_start_addr2,
+-
++	.get_ring_selector = ath11k_hw_ipq8074_get_tcl_ring_selector,
+ };
+ 
+ #define ATH11K_TX_RING_MASK_0 BIT(0)
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
+index ab923e24b0a9c..2328b9447cf1b 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.c
++++ b/drivers/net/wireless/ath/ath11k/qmi.c
+@@ -2058,6 +2058,9 @@ static int ath11k_qmi_assign_target_mem_chunk(struct ath11k_base *ab)
+ 			ab->qmi.target_mem[idx].iaddr =
+ 				ioremap(ab->qmi.target_mem[idx].paddr,
+ 					ab->qmi.target_mem[i].size);
++			if (!ab->qmi.target_mem[idx].iaddr)
++				return -EIO;
++
+ 			ab->qmi.target_mem[idx].size = ab->qmi.target_mem[i].size;
+ 			host_ddr_sz = ab->qmi.target_mem[i].size;
+ 			ab->qmi.target_mem[idx].type = ab->qmi.target_mem[i].type;
+@@ -2083,6 +2086,8 @@ static int ath11k_qmi_assign_target_mem_chunk(struct ath11k_base *ab)
+ 					ab->qmi.target_mem[idx].iaddr =
+ 						ioremap(ab->qmi.target_mem[idx].paddr,
+ 							ab->qmi.target_mem[i].size);
++					if (!ab->qmi.target_mem[idx].iaddr)
++						return -EIO;
+ 				} else {
+ 					ab->qmi.target_mem[idx].paddr =
+ 						ATH11K_QMI_CALDB_ADDRESS;
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_hw.c b/drivers/net/wireless/ath/ath9k/ar9003_hw.c
+index 4f27a9fb1482b..e9bd13eeee92f 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_hw.c
++++ b/drivers/net/wireless/ath/ath9k/ar9003_hw.c
+@@ -1099,17 +1099,22 @@ static bool ath9k_hw_verify_hang(struct ath_hw *ah, unsigned int queue)
+ {
+ 	u32 dma_dbg_chain, dma_dbg_complete;
+ 	u8 dcu_chain_state, dcu_complete_state;
++	unsigned int dbg_reg, reg_offset;
+ 	int i;
+ 
+-	for (i = 0; i < NUM_STATUS_READS; i++) {
+-		if (queue < 6)
+-			dma_dbg_chain = REG_READ(ah, AR_DMADBG_4);
+-		else
+-			dma_dbg_chain = REG_READ(ah, AR_DMADBG_5);
++	if (queue < 6) {
++		dbg_reg = AR_DMADBG_4;
++		reg_offset = queue * 5;
++	} else {
++		dbg_reg = AR_DMADBG_5;
++		reg_offset = (queue - 6) * 5;
++	}
+ 
++	for (i = 0; i < NUM_STATUS_READS; i++) {
++		dma_dbg_chain = REG_READ(ah, dbg_reg);
+ 		dma_dbg_complete = REG_READ(ah, AR_DMADBG_6);
+ 
+-		dcu_chain_state = (dma_dbg_chain >> (5 * queue)) & 0x1f;
++		dcu_chain_state = (dma_dbg_chain >> reg_offset) & 0x1f;
+ 		dcu_complete_state = dma_dbg_complete & 0x3;
+ 
+ 		if ((dcu_chain_state != 0x6) || (dcu_complete_state != 0x1))
+@@ -1128,6 +1133,7 @@ static bool ar9003_hw_detect_mac_hang(struct ath_hw *ah)
+ 	u8 dcu_chain_state, dcu_complete_state;
+ 	bool dcu_wait_frdone = false;
+ 	unsigned long chk_dcu = 0;
++	unsigned int reg_offset;
+ 	unsigned int i = 0;
+ 
+ 	dma_dbg_4 = REG_READ(ah, AR_DMADBG_4);
+@@ -1139,12 +1145,15 @@ static bool ar9003_hw_detect_mac_hang(struct ath_hw *ah)
+ 		goto exit;
+ 
+ 	for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++) {
+-		if (i < 6)
++		if (i < 6) {
+ 			chk_dbg = dma_dbg_4;
+-		else
++			reg_offset = i * 5;
++		} else {
+ 			chk_dbg = dma_dbg_5;
++			reg_offset = (i - 6) * 5;
++		}
+ 
+-		dcu_chain_state = (chk_dbg >> (5 * i)) & 0x1f;
++		dcu_chain_state = (chk_dbg >> reg_offset) & 0x1f;
+ 		if (dcu_chain_state == 0x6) {
+ 			dcu_wait_frdone = true;
+ 			chk_dcu |= BIT(i);
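
The ath9k ar9003 fix corrects field indexing: the DCU chain-state debug space packs ten queues as 5-bit fields across two 32-bit registers, so queues 6-9 must be shifted relative to AR_DMADBG_5 as (queue - 6) * 5; the old 5 * queue shift ran past bit 31. The corrected indexing as a standalone function (register values made up for demonstration):

#include <stdio.h>

/* Queues 0-5 live in one 32-bit register, queues 6-9 in the next,
 * five bits per queue. */
static unsigned int dcu_chain_state(unsigned int q,
				    unsigned int dbg4, unsigned int dbg5)
{
	unsigned int reg = (q < 6) ? dbg4 : dbg5;
	unsigned int shift = (q < 6) ? q * 5 : (q - 6) * 5;

	return (reg >> shift) & 0x1f;
}

int main(void)
{
	/* Put state 0x6 into queue 7's field (bits 9:5 of the second reg). */
	unsigned int dbg4 = 0, dbg5 = 0x6u << 5;

	printf("queue 7 state: 0x%x\n", dcu_chain_state(7, dbg4, dbg5));
	return 0;
}
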
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index fe62ff668f757..99667aba289df 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -114,7 +114,13 @@ static void htc_process_conn_rsp(struct htc_target *target,
+ 
+ 	if (svc_rspmsg->status == HTC_SERVICE_SUCCESS) {
+ 		epid = svc_rspmsg->endpoint_id;
+-		if (epid < 0 || epid >= ENDPOINT_MAX)
++
++		/* Check that the received epid for the endpoint to attach
++		 * a new service is valid. ENDPOINT0 can't be used here as it
++		 * is already reserved for HTC_CTRL_RSVD_SVC service and thus
++		 * should not be modified.
++		 */
++		if (epid <= ENDPOINT0 || epid >= ENDPOINT_MAX)
+ 			return;
+ 
+ 		service_id = be16_to_cpu(svc_rspmsg->service_id);
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index a4197c14f0a92..6360d3356e256 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -203,7 +203,7 @@ void ath_cancel_work(struct ath_softc *sc)
+ void ath_restart_work(struct ath_softc *sc)
+ {
+ 	ieee80211_queue_delayed_work(sc->hw, &sc->hw_check_work,
+-				     ATH_HW_CHECK_POLL_INT);
++				     msecs_to_jiffies(ATH_HW_CHECK_POLL_INT));
+ 
+ 	if (AR_SREV_9340(sc->sc_ah) || AR_SREV_9330(sc->sc_ah))
+ 		ieee80211_queue_delayed_work(sc->hw, &sc->hw_pll_work,
+@@ -850,7 +850,7 @@ static bool ath9k_txq_list_has_key(struct list_head *txq_list, u32 keyix)
+ static bool ath9k_txq_has_key(struct ath_softc *sc, u32 keyix)
+ {
+ 	struct ath_hw *ah = sc->sc_ah;
+-	int i;
++	int i, j;
+ 	struct ath_txq *txq;
+ 	bool key_in_use = false;
+ 
+@@ -868,8 +868,9 @@ static bool ath9k_txq_has_key(struct ath_softc *sc, u32 keyix)
+ 		if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_EDMA) {
+ 			int idx = txq->txq_tailidx;
+ 
+-			while (!key_in_use &&
+-			       !list_empty(&txq->txq_fifo[idx])) {
++			for (j = 0; !key_in_use &&
++			     !list_empty(&txq->txq_fifo[idx]) &&
++			     j < ATH_TXFIFO_DEPTH; j++) {
+ 				key_in_use = ath9k_txq_list_has_key(
+ 					&txq->txq_fifo[idx], keyix);
+ 				INCR(idx, ATH_TXFIFO_DEPTH);
+@@ -2239,7 +2240,7 @@ void __ath9k_flush(struct ieee80211_hw *hw, u32 queues, bool drop,
+ 	}
+ 
+ 	ieee80211_queue_delayed_work(hw, &sc->hw_check_work,
+-				     ATH_HW_CHECK_POLL_INT);
++				     msecs_to_jiffies(ATH_HW_CHECK_POLL_INT));
+ }
+ 
+ static bool ath9k_tx_frames_pending(struct ieee80211_hw *hw)
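
In ath9k/main.c, ATH_HW_CHECK_POLL_INT is a millisecond constant, but ieee80211_queue_delayed_work() takes jiffies, so the raw value made the poll period scale with HZ; wrapping it in msecs_to_jiffies() restores the intended interval. The txq_fifo walk is also bounded by ATH_TXFIFO_DEPTH so a corrupted list cannot loop forever. A small model of the unit mix-up (the HZ value is an arbitrary example):

#include <stdio.h>

#define HZ 250

static unsigned long msecs_to_jiffies_model(unsigned int ms)
{
	return (unsigned long)ms * HZ / 1000;
}

int main(void)
{
	unsigned int poll_ms = 1000;	/* intended: check once per second */

	/* Bug: 1000 raw jiffies at HZ=250 is actually 4 seconds. */
	printf("raw constant:     %u jiffies = %u ms\n",
	       poll_ms, poll_ms * 1000 / HZ);
	printf("msecs_to_jiffies: %lu jiffies = 1000 ms\n",
	       msecs_to_jiffies_model(poll_ms));
	return 0;
}
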
+diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
+index 19345b8f7bfd5..d652c647d56b5 100644
+--- a/drivers/net/wireless/ath/ath9k/wmi.c
++++ b/drivers/net/wireless/ath/ath9k/wmi.c
+@@ -221,6 +221,10 @@ static void ath9k_wmi_ctrl_rx(void *priv, struct sk_buff *skb,
+ 	if (unlikely(wmi->stopped))
+ 		goto free_skb;
+ 
++	/* Validate the obtained SKB. */
++	if (unlikely(skb->len < sizeof(struct wmi_cmd_hdr)))
++		goto free_skb;
++
+ 	hdr = (struct wmi_cmd_hdr *) skb->data;
+ 	cmd_id = be16_to_cpu(hdr->command_id);
+ 
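
The WMI RX path now rejects SKBs shorter than struct wmi_cmd_hdr before casting skb->data, closing an out-of-bounds read on malformed traffic. The generic validate-before-parse pattern, with an invented header layout:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct cmd_hdr {
	uint16_t command_id;
	uint16_t seq_no;
};

static int parse_cmd(const uint8_t *buf, size_t len, uint16_t *cmd_id)
{
	struct cmd_hdr hdr;

	if (len < sizeof(hdr))		/* validate before touching fields */
		return -1;
	memcpy(&hdr, buf, sizeof(hdr));	/* avoids unaligned access, too */
	*cmd_id = hdr.command_id;
	return 0;
}

int main(void)
{
	uint8_t short_pkt[1] = { 0xff };
	uint16_t id;

	printf("short packet: %s\n",
	       parse_cmd(short_pkt, sizeof(short_pkt), &id) ? "rejected" : "parsed");
	return 0;
}
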
+diff --git a/drivers/net/wireless/atmel/atmel_cs.c b/drivers/net/wireless/atmel/atmel_cs.c
+index 453bb84cb3386..58bba9875d366 100644
+--- a/drivers/net/wireless/atmel/atmel_cs.c
++++ b/drivers/net/wireless/atmel/atmel_cs.c
+@@ -72,6 +72,7 @@ struct local_info {
+ static int atmel_probe(struct pcmcia_device *p_dev)
+ {
+ 	struct local_info *local;
++	int ret;
+ 
+ 	dev_dbg(&p_dev->dev, "atmel_attach()\n");
+ 
+@@ -82,8 +83,16 @@ static int atmel_probe(struct pcmcia_device *p_dev)
+ 
+ 	p_dev->priv = local;
+ 
+-	return atmel_config(p_dev);
+-} /* atmel_attach */
++	ret = atmel_config(p_dev);
++	if (ret)
++		goto err_free_priv;
++
++	return 0;
++
++err_free_priv:
++	kfree(p_dev->priv);
++	return ret;
++}
+ 
+ static void atmel_detach(struct pcmcia_device *link)
+ {
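
Several PCMCIA probes in this release (atmel_cs above, plus orinoco_cs, spectrum_cs, ray_cs and wl3501_cs below) get the same treatment: probe used to return the config helper's status directly, leaking the state allocated earlier when configuration failed; each now unwinds on the error path. The goto-unwind shape in a self-contained sketch (device_configure() and its failure mode are hypothetical):

#include <stdio.h>
#include <stdlib.h>

static int device_configure(void *priv)
{
	(void)priv;
	return -1;			/* pretend configuration failed */
}

static int probe(void)
{
	void *priv;
	int ret;

	priv = calloc(1, 64);
	if (!priv)
		return -1;

	ret = device_configure(priv);
	if (ret)
		goto err_free_priv;	/* unwind everything done so far */

	return 0;

err_free_priv:
	free(priv);			/* previously leaked on this path */
	return ret;
}

int main(void)
{
	printf("probe: %d\n", probe());
	return 0;
}
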
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/rs.h b/drivers/net/wireless/intel/iwlwifi/fw/api/rs.h
+index c9a48fc5fac88..a1a272433b09b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/rs.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/rs.h
+@@ -21,6 +21,7 @@
+  * @IWL_TLC_MNG_CFG_FLAGS_HE_DCM_NSS_2_MSK: enable HE Dual Carrier Modulation
+  *					    for BPSK (MCS 0) with 2 spatial
+  *					    streams
++ * @IWL_TLC_MNG_CFG_FLAGS_EHT_EXTRA_LTF_MSK: enable support for EHT extra LTF
+  */
+ enum iwl_tlc_mng_cfg_flags {
+ 	IWL_TLC_MNG_CFG_FLAGS_STBC_MSK			= BIT(0),
+@@ -28,6 +29,7 @@ enum iwl_tlc_mng_cfg_flags {
+ 	IWL_TLC_MNG_CFG_FLAGS_HE_STBC_160MHZ_MSK	= BIT(2),
+ 	IWL_TLC_MNG_CFG_FLAGS_HE_DCM_NSS_1_MSK		= BIT(3),
+ 	IWL_TLC_MNG_CFG_FLAGS_HE_DCM_NSS_2_MSK		= BIT(4),
++	IWL_TLC_MNG_CFG_FLAGS_EHT_EXTRA_LTF_MSK		= BIT(6),
+ };
+ 
+ /**
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dump.c b/drivers/net/wireless/intel/iwlwifi/fw/dump.c
+index f86f7b4baa181..f61f1ce7fe795 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dump.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dump.c
+@@ -507,11 +507,16 @@ void iwl_fwrt_dump_error_logs(struct iwl_fw_runtime *fwrt)
+ 	iwl_fwrt_dump_fseq_regs(fwrt);
+ 	if (fwrt->trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_22000) {
+ 		pc_data = fwrt->trans->dbg.pc_data;
++
++		if (!iwl_trans_grab_nic_access(fwrt->trans))
++			return;
+ 		for (count = 0; count < fwrt->trans->dbg.num_pc;
+ 		     count++, pc_data++)
+ 			IWL_ERR(fwrt, "%s: 0x%x\n",
+ 				pc_data->pc_name,
+-				pc_data->pc_address);
++				iwl_read_prph_no_grab(fwrt->trans,
++						      pc_data->pc_address));
++		iwl_trans_release_nic_access(fwrt->trans);
+ 	}
+ 
+ 	if (fwrt->trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+index 7dcb1c3ab7282..be0eb69f2248a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+@@ -975,6 +975,8 @@ iwl_nvm_fixup_sband_iftd(struct iwl_trans *trans,
+ 		iftype_data->eht_cap.eht_cap_elem.phy_cap_info[6] &=
+ 			~(IEEE80211_EHT_PHY_CAP6_MCS15_SUPP_MASK |
+ 			  IEEE80211_EHT_PHY_CAP6_EHT_DUP_6GHZ_SUPP);
++		iftype_data->eht_cap.eht_cap_elem.phy_cap_info[5] |=
++			IEEE80211_EHT_PHY_CAP5_SUPP_EXTRA_EHT_LTF;
+ 	}
+ 
+ 	if (fw_has_capa(&fw->ucode_capa, IWL_UCODE_TLV_CAPA_BROADCAST_TWT))
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 205c09bc98634..a6367909d7fe4 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -1699,9 +1699,11 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
+ 
+ 	if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) {
+ 		iwl_mvm_send_recovery_cmd(mvm, ERROR_RECOVERY_UPDATE_DB);
+-		iwl_mvm_time_sync_config(mvm, mvm->time_sync.peer_addr,
+-					 IWL_TIME_SYNC_PROTOCOL_TM |
+-					 IWL_TIME_SYNC_PROTOCOL_FTM);
++
++		if (mvm->time_sync.active)
++			iwl_mvm_time_sync_config(mvm, mvm->time_sync.peer_addr,
++						 IWL_TIME_SYNC_PROTOCOL_TM |
++						 IWL_TIME_SYNC_PROTOCOL_FTM);
+ 	}
+ 
+ 	if (!mvm->ptp_data.ptp_clock)
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 17f788a5ff6ba..f23cd100cf252 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -2285,8 +2285,7 @@ bool iwl_mvm_is_nic_ack_enabled(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 	 * so take it from one of them.
+ 	 */
+ 	sband = mvm->hw->wiphy->bands[NL80211_BAND_2GHZ];
+-	own_he_cap = ieee80211_get_he_iftype_cap(sband,
+-						 ieee80211_vif_type_p2p(vif));
++	own_he_cap = ieee80211_get_he_iftype_cap_vif(sband, vif);
+ 
+ 	return (own_he_cap && (own_he_cap->he_cap_elem.mac_cap_info[2] &
+ 			       IEEE80211_HE_MAC_CAP2_ACK_EN));
+@@ -3468,8 +3467,7 @@ static void iwl_mvm_reset_cca_40mhz_workaround(struct iwl_mvm *mvm,
+ 
+ 	sband->ht_cap.cap |= IEEE80211_HT_CAP_SUP_WIDTH_20_40;
+ 
+-	he_cap = ieee80211_get_he_iftype_cap(sband,
+-					     ieee80211_vif_type_p2p(vif));
++	he_cap = ieee80211_get_he_iftype_cap_vif(sband, vif);
+ 
+ 	if (he_cap) {
+ 		/* we know that ours is writable */
+@@ -3848,6 +3846,7 @@ int iwl_mvm_mac_sta_state_common(struct ieee80211_hw *hw,
+ 	struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+ 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ 	struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
++	struct ieee80211_link_sta *link_sta;
+ 	unsigned int link_id;
+ 	int ret;
+ 
+@@ -3889,7 +3888,7 @@ int iwl_mvm_mac_sta_state_common(struct ieee80211_hw *hw,
+ 	mutex_lock(&mvm->mutex);
+ 
+ 	/* this would be a mac80211 bug ... but don't crash */
+-	for_each_mvm_vif_valid_link(mvmvif, link_id) {
++	for_each_sta_active_link(vif, sta, link_sta, link_id) {
+ 		if (WARN_ON_ONCE(!mvmvif->link[link_id]->phy_ctxt)) {
+ 			mutex_unlock(&mvm->mutex);
+ 			return test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+index 32625bfacaaef..8a4415ef540d1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2012-2014, 2018-2020 Intel Corporation
++ * Copyright (C) 2012-2014, 2018-2023 Intel Corporation
+  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
+  * Copyright (C) 2016-2017 Intel Deutschland GmbH
+  */
+@@ -192,8 +192,7 @@ static void iwl_mvm_rx_monitor_notif(struct iwl_mvm *mvm,
+ 	WARN_ON(!(sband->ht_cap.cap & IEEE80211_HT_CAP_SUP_WIDTH_20_40));
+ 	sband->ht_cap.cap &= ~IEEE80211_HT_CAP_SUP_WIDTH_20_40;
+ 
+-	he_cap = ieee80211_get_he_iftype_cap(sband,
+-					     ieee80211_vif_type_p2p(vif));
++	he_cap = ieee80211_get_he_iftype_cap_vif(sband, vif);
+ 
+ 	if (he_cap) {
+ 		/* we know that ours is writable */
+@@ -1743,8 +1742,11 @@ static void iwl_mvm_queue_state_change(struct iwl_op_mode *op_mode,
+ 		else
+ 			set_bit(IWL_MVM_TXQ_STATE_STOP_FULL, &mvmtxq->state);
+ 
+-		if (start && mvmsta->sta_state != IEEE80211_STA_NOTEXIST)
++		if (start && mvmsta->sta_state != IEEE80211_STA_NOTEXIST) {
++			local_bh_disable();
+ 			iwl_mvm_mac_itxq_xmit(mvm->hw, txq);
++			local_bh_enable();
++		}
+ 	}
+ 
+ out:
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
+index c3a00bfbeef2c..680180b894794 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+  * Copyright (C) 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018-2022 Intel Corporation
++ * Copyright (C) 2018-2023 Intel Corporation
+  */
+ #include "rs.h"
+ #include "fw-api.h"
+@@ -63,12 +63,11 @@ static u8 rs_fw_sgi_cw_support(struct ieee80211_link_sta *link_sta)
+ static u16 rs_fw_get_config_flags(struct iwl_mvm *mvm,
+ 				  struct ieee80211_vif *vif,
+ 				  struct ieee80211_link_sta *link_sta,
+-				  struct ieee80211_supported_band *sband)
++				  const struct ieee80211_sta_he_cap *sband_he_cap)
+ {
+ 	struct ieee80211_sta_ht_cap *ht_cap = &link_sta->ht_cap;
+ 	struct ieee80211_sta_vht_cap *vht_cap = &link_sta->vht_cap;
+ 	struct ieee80211_sta_he_cap *he_cap = &link_sta->he_cap;
+-	const struct ieee80211_sta_he_cap *sband_he_cap;
+ 	bool vht_ena = vht_cap->vht_supported;
+ 	u16 flags = 0;
+ 
+@@ -94,8 +93,6 @@ static u16 rs_fw_get_config_flags(struct iwl_mvm *mvm,
+ 	    IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD))
+ 		flags |= IWL_TLC_MNG_CFG_FLAGS_LDPC_MSK;
+ 
+-	sband_he_cap = ieee80211_get_he_iftype_cap(sband,
+-						   ieee80211_vif_type_p2p(vif));
+ 	if (sband_he_cap &&
+ 	    !(sband_he_cap->he_cap_elem.phy_cap_info[1] &
+ 			IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD))
+@@ -197,16 +194,14 @@ static u16 rs_fw_he_ieee80211_mcs_to_rs_mcs(u16 mcs)
+ 
+ static void
+ rs_fw_he_set_enabled_rates(const struct ieee80211_link_sta *link_sta,
+-			   struct ieee80211_supported_band *sband,
++			   const struct ieee80211_sta_he_cap *sband_he_cap,
+ 			   struct iwl_tlc_config_cmd_v4 *cmd)
+ {
+ 	const struct ieee80211_sta_he_cap *he_cap = &link_sta->he_cap;
+ 	u16 mcs_160 = le16_to_cpu(he_cap->he_mcs_nss_supp.rx_mcs_160);
+ 	u16 mcs_80 = le16_to_cpu(he_cap->he_mcs_nss_supp.rx_mcs_80);
+-	u16 tx_mcs_80 =
+-		le16_to_cpu(sband->iftype_data->he_cap.he_mcs_nss_supp.tx_mcs_80);
+-	u16 tx_mcs_160 =
+-		le16_to_cpu(sband->iftype_data->he_cap.he_mcs_nss_supp.tx_mcs_160);
++	u16 tx_mcs_80 = le16_to_cpu(sband_he_cap->he_mcs_nss_supp.tx_mcs_80);
++	u16 tx_mcs_160 = le16_to_cpu(sband_he_cap->he_mcs_nss_supp.tx_mcs_160);
+ 	int i;
+ 	u8 nss = link_sta->rx_nss;
+ 
+@@ -289,7 +284,8 @@ rs_fw_rs_mcs2eht_mcs(enum IWL_TLC_MCS_PER_BW bw,
+ static void
+ rs_fw_eht_set_enabled_rates(struct ieee80211_vif *vif,
+ 			    const struct ieee80211_link_sta *link_sta,
+-			    struct ieee80211_supported_band *sband,
++			    const struct ieee80211_sta_he_cap *sband_he_cap,
++			    const struct ieee80211_sta_eht_cap *sband_eht_cap,
+ 			    struct iwl_tlc_config_cmd_v4 *cmd)
+ {
+ 	/* peer RX mcs capa */
+@@ -297,7 +293,7 @@ rs_fw_eht_set_enabled_rates(struct ieee80211_vif *vif,
+ 		&link_sta->eht_cap.eht_mcs_nss_supp;
+ 	/* our TX mcs capa */
+ 	const struct ieee80211_eht_mcs_nss_supp *eht_tx_mcs =
+-		&sband->iftype_data->eht_cap.eht_mcs_nss_supp;
++		&sband_eht_cap->eht_mcs_nss_supp;
+ 
+ 	enum IWL_TLC_MCS_PER_BW bw;
+ 	struct ieee80211_eht_mcs_nss_supp_20mhz_only mcs_rx_20;
+@@ -316,7 +312,7 @@ rs_fw_eht_set_enabled_rates(struct ieee80211_vif *vif,
+ 	}
+ 
+ 	/* nic is 20Mhz only */
+-	if (!(sband->iftype_data->he_cap.he_cap_elem.phy_cap_info[0] &
++	if (!(sband_he_cap->he_cap_elem.phy_cap_info[0] &
+ 	      IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_MASK_ALL)) {
+ 		mcs_tx_20 = eht_tx_mcs->only_20mhz;
+ 	} else {
+@@ -370,6 +366,8 @@ rs_fw_eht_set_enabled_rates(struct ieee80211_vif *vif,
+ static void rs_fw_set_supp_rates(struct ieee80211_vif *vif,
+ 				 struct ieee80211_link_sta *link_sta,
+ 				 struct ieee80211_supported_band *sband,
++				 const struct ieee80211_sta_he_cap *sband_he_cap,
++				 const struct ieee80211_sta_eht_cap *sband_eht_cap,
+ 				 struct iwl_tlc_config_cmd_v4 *cmd)
+ {
+ 	int i;
+@@ -388,12 +386,13 @@ static void rs_fw_set_supp_rates(struct ieee80211_vif *vif,
+ 	cmd->mode = IWL_TLC_MNG_MODE_NON_HT;
+ 
+ 	/* HT/VHT rates */
+-	if (link_sta->eht_cap.has_eht) {
++	if (link_sta->eht_cap.has_eht && sband_he_cap && sband_eht_cap) {
+ 		cmd->mode = IWL_TLC_MNG_MODE_EHT;
+-		rs_fw_eht_set_enabled_rates(vif, link_sta, sband, cmd);
+-	} else if (he_cap->has_he) {
++		rs_fw_eht_set_enabled_rates(vif, link_sta, sband_he_cap,
++					    sband_eht_cap, cmd);
++	} else if (he_cap->has_he && sband_he_cap) {
+ 		cmd->mode = IWL_TLC_MNG_MODE_HE;
+-		rs_fw_he_set_enabled_rates(link_sta, sband, cmd);
++		rs_fw_he_set_enabled_rates(link_sta, sband_he_cap, cmd);
+ 	} else if (vht_cap->vht_supported) {
+ 		cmd->mode = IWL_TLC_MNG_MODE_VHT;
+ 		rs_fw_vht_set_enabled_rates(link_sta, vht_cap, cmd);
+@@ -576,13 +575,17 @@ void iwl_mvm_rs_fw_rate_init(struct iwl_mvm *mvm,
+ 	u32 cmd_id = WIDE_ID(DATA_PATH_GROUP, TLC_MNG_CONFIG_CMD);
+ 	struct ieee80211_supported_band *sband = hw->wiphy->bands[band];
+ 	u16 max_amsdu_len = rs_fw_get_max_amsdu_len(sta, link_conf, link_sta);
++	const struct ieee80211_sta_he_cap *sband_he_cap =
++		ieee80211_get_he_iftype_cap_vif(sband, vif);
++	const struct ieee80211_sta_eht_cap *sband_eht_cap =
++		ieee80211_get_eht_iftype_cap_vif(sband, vif);
+ 	struct iwl_mvm_link_sta *mvm_link_sta;
+ 	struct iwl_lq_sta_rs_fw *lq_sta;
+ 	struct iwl_tlc_config_cmd_v4 cfg_cmd = {
+ 		.max_ch_width = mvmsta->authorized ?
+ 			rs_fw_bw_from_sta_bw(link_sta) : IWL_TLC_MNG_CH_WIDTH_20MHZ,
+ 		.flags = cpu_to_le16(rs_fw_get_config_flags(mvm, vif, link_sta,
+-							    sband)),
++							    sband_he_cap)),
+ 		.chains = rs_fw_set_active_chains(iwl_mvm_get_valid_tx_ant(mvm)),
+ 		.sgi_ch_width_supp = rs_fw_sgi_cw_support(link_sta),
+ 		.max_mpdu_len = iwl_mvm_is_csum_supported(mvm) ?
+@@ -592,6 +595,21 @@ void iwl_mvm_rs_fw_rate_init(struct iwl_mvm *mvm,
+ 	int cmd_ver;
+ 	int ret;
+ 
++	/* Enable external EHT LTF only for GL device and if there's
++	 * mutual support by AP and client
++	 */
++	if (CSR_HW_REV_TYPE(mvm->trans->hw_rev) == IWL_CFG_MAC_TYPE_GL &&
++	    sband_eht_cap &&
++	    sband_eht_cap->eht_cap_elem.phy_cap_info[5] &
++		IEEE80211_EHT_PHY_CAP5_SUPP_EXTRA_EHT_LTF &&
++	    link_sta->eht_cap.has_eht &&
++	    link_sta->eht_cap.eht_cap_elem.phy_cap_info[5] &
++	    IEEE80211_EHT_PHY_CAP5_SUPP_EXTRA_EHT_LTF) {
++		IWL_DEBUG_RATE(mvm, "Set support for Extra EHT LTF\n");
++		cfg_cmd.flags |=
++			cpu_to_le16(IWL_TLC_MNG_CFG_FLAGS_EHT_EXTRA_LTF_MSK);
++	}
++
+ 	rcu_read_lock();
+ 	mvm_link_sta = rcu_dereference(mvmsta->link[link_id]);
+ 	if (WARN_ON_ONCE(!mvm_link_sta)) {
+@@ -609,7 +627,9 @@ void iwl_mvm_rs_fw_rate_init(struct iwl_mvm *mvm,
+ #ifdef CONFIG_IWLWIFI_DEBUGFS
+ 	iwl_mvm_reset_frame_stats(mvm);
+ #endif
+-	rs_fw_set_supp_rates(vif, link_sta, sband, &cfg_cmd);
++	rs_fw_set_supp_rates(vif, link_sta, sband,
++			     sband_he_cap, sband_eht_cap,
++			     &cfg_cmd);
+ 
+ 	/*
+ 	 * since TLC offload works with one mode we can assume
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index 6226e4e54a51d..38f8d19f718ee 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -279,7 +279,8 @@ static void iwl_mvm_get_signal_strength(struct iwl_mvm *mvm,
+ static int iwl_mvm_rx_mgmt_prot(struct ieee80211_sta *sta,
+ 				struct ieee80211_hdr *hdr,
+ 				struct iwl_rx_mpdu_desc *desc,
+-				u32 status)
++				u32 status,
++				struct ieee80211_rx_status *stats)
+ {
+ 	struct iwl_mvm_sta *mvmsta;
+ 	struct iwl_mvm_vif *mvmvif;
+@@ -308,8 +309,10 @@ static int iwl_mvm_rx_mgmt_prot(struct ieee80211_sta *sta,
+ 
+ 	/* good cases */
+ 	if (likely(status & IWL_RX_MPDU_STATUS_MIC_OK &&
+-		   !(status & IWL_RX_MPDU_STATUS_REPLAY_ERROR)))
++		   !(status & IWL_RX_MPDU_STATUS_REPLAY_ERROR))) {
++		stats->flag |= RX_FLAG_DECRYPTED;
+ 		return 0;
++	}
+ 
+ 	if (!sta)
+ 		return -1;
+@@ -378,7 +381,7 @@ static int iwl_mvm_rx_crypto(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 
+ 	if (unlikely(ieee80211_is_mgmt(hdr->frame_control) &&
+ 		     !ieee80211_has_protected(hdr->frame_control)))
+-		return iwl_mvm_rx_mgmt_prot(sta, hdr, desc, status);
++		return iwl_mvm_rx_mgmt_prot(sta, hdr, desc, status, stats);
+ 
+ 	if (!ieee80211_has_protected(hdr->frame_control) ||
+ 	    (status & IWL_RX_MPDU_STATUS_SEC_MASK) ==
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index 05a54a69c1357..b85e363544f8b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -1859,6 +1859,8 @@ int iwl_mvm_add_sta(struct iwl_mvm *mvm,
+ 
+ 	ret = iwl_mvm_sta_init(mvm, vif, sta, sta_id,
+ 			       sta->tdls ? IWL_STA_TDLS_LINK : IWL_STA_LINK);
++	if (ret)
++		goto err;
+ 
+ update_fw:
+ 	ret = iwl_mvm_sta_send_to_fw(mvm, sta, sta_update, sta_flags);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index 0d7890f99a5fb..90a46faaaffdf 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -1636,14 +1636,14 @@ irqreturn_t iwl_pcie_irq_rx_msix_handler(int irq, void *dev_id)
+ 	struct msix_entry *entry = dev_id;
+ 	struct iwl_trans_pcie *trans_pcie = iwl_pcie_get_trans_pcie(entry);
+ 	struct iwl_trans *trans = trans_pcie->trans;
+-	struct iwl_rxq *rxq = &trans_pcie->rxq[entry->entry];
++	struct iwl_rxq *rxq;
+ 
+ 	trace_iwlwifi_dev_irq_msix(trans->dev, entry, false, 0, 0);
+ 
+ 	if (WARN_ON(entry->entry >= trans->num_rx_queues))
+ 		return IRQ_NONE;
+ 
+-	if (!rxq) {
++	if (!trans_pcie->rxq) {
+ 		if (net_ratelimit())
+ 			IWL_ERR(trans,
+ 				"[%d] Got MSI-X interrupt before we have Rx queues\n",
+@@ -1651,6 +1651,7 @@ irqreturn_t iwl_pcie_irq_rx_msix_handler(int irq, void *dev_id)
+ 		return IRQ_NONE;
+ 	}
+ 
++	rxq = &trans_pcie->rxq[entry->entry];
+ 	lock_map_acquire(&trans->sync_cmd_lockdep_map);
+ 	IWL_DEBUG_ISR(trans, "[%d] Got interrupt\n", entry->entry);
+ 
+diff --git a/drivers/net/wireless/intersil/orinoco/orinoco_cs.c b/drivers/net/wireless/intersil/orinoco/orinoco_cs.c
+index a956f965a1e5e..03bfd2482656c 100644
+--- a/drivers/net/wireless/intersil/orinoco/orinoco_cs.c
++++ b/drivers/net/wireless/intersil/orinoco/orinoco_cs.c
+@@ -96,6 +96,7 @@ orinoco_cs_probe(struct pcmcia_device *link)
+ {
+ 	struct orinoco_private *priv;
+ 	struct orinoco_pccard *card;
++	int ret;
+ 
+ 	priv = alloc_orinocodev(sizeof(*card), &link->dev,
+ 				orinoco_cs_hard_reset, NULL);
+@@ -107,8 +108,16 @@ orinoco_cs_probe(struct pcmcia_device *link)
+ 	card->p_dev = link;
+ 	link->priv = priv;
+ 
+-	return orinoco_cs_config(link);
+-}				/* orinoco_cs_attach */
++	ret = orinoco_cs_config(link);
++	if (ret)
++		goto err_free_orinocodev;
++
++	return 0;
++
++err_free_orinocodev:
++	free_orinocodev(priv);
++	return ret;
++}
+ 
+ static void orinoco_cs_detach(struct pcmcia_device *link)
+ {
+diff --git a/drivers/net/wireless/intersil/orinoco/spectrum_cs.c b/drivers/net/wireless/intersil/orinoco/spectrum_cs.c
+index 291ef97ed45ec..841d623c621ac 100644
+--- a/drivers/net/wireless/intersil/orinoco/spectrum_cs.c
++++ b/drivers/net/wireless/intersil/orinoco/spectrum_cs.c
+@@ -157,6 +157,7 @@ spectrum_cs_probe(struct pcmcia_device *link)
+ {
+ 	struct orinoco_private *priv;
+ 	struct orinoco_pccard *card;
++	int ret;
+ 
+ 	priv = alloc_orinocodev(sizeof(*card), &link->dev,
+ 				spectrum_cs_hard_reset,
+@@ -169,8 +170,16 @@ spectrum_cs_probe(struct pcmcia_device *link)
+ 	card->p_dev = link;
+ 	link->priv = priv;
+ 
+-	return spectrum_cs_config(link);
+-}				/* spectrum_cs_attach */
++	ret = spectrum_cs_config(link);
++	if (ret)
++		goto err_free_orinocodev;
++
++	return 0;
++
++err_free_orinocodev:
++	free_orinocodev(priv);
++	return ret;
++}
+ 
+ static void spectrum_cs_detach(struct pcmcia_device *link)
+ {
+diff --git a/drivers/net/wireless/legacy/ray_cs.c b/drivers/net/wireless/legacy/ray_cs.c
+index 1f57a0055bbd8..38782d4c4694a 100644
+--- a/drivers/net/wireless/legacy/ray_cs.c
++++ b/drivers/net/wireless/legacy/ray_cs.c
+@@ -270,13 +270,14 @@ static int ray_probe(struct pcmcia_device *p_dev)
+ {
+ 	ray_dev_t *local;
+ 	struct net_device *dev;
++	int ret;
+ 
+ 	dev_dbg(&p_dev->dev, "ray_attach()\n");
+ 
+ 	/* Allocate space for private device-specific data */
+ 	dev = alloc_etherdev(sizeof(ray_dev_t));
+ 	if (!dev)
+-		goto fail_alloc_dev;
++		return -ENOMEM;
+ 
+ 	local = netdev_priv(dev);
+ 	local->finder = p_dev;
+@@ -313,11 +314,16 @@ static int ray_probe(struct pcmcia_device *p_dev)
+ 	timer_setup(&local->timer, NULL, 0);
+ 
+ 	this_device = p_dev;
+-	return ray_config(p_dev);
++	ret = ray_config(p_dev);
++	if (ret)
++		goto err_free_dev;
++
++	return 0;
+ 
+-fail_alloc_dev:
+-	return -ENOMEM;
+-} /* ray_attach */
++err_free_dev:
++	free_netdev(dev);
++	return ret;
++}
+ 
+ static void ray_detach(struct pcmcia_device *link)
+ {
+diff --git a/drivers/net/wireless/legacy/wl3501_cs.c b/drivers/net/wireless/legacy/wl3501_cs.c
+index 7fb2f95134760..c45c4b7cbbaf1 100644
+--- a/drivers/net/wireless/legacy/wl3501_cs.c
++++ b/drivers/net/wireless/legacy/wl3501_cs.c
+@@ -1862,6 +1862,7 @@ static int wl3501_probe(struct pcmcia_device *p_dev)
+ {
+ 	struct net_device *dev;
+ 	struct wl3501_card *this;
++	int ret;
+ 
+ 	/* The io structure describes IO port mapping */
+ 	p_dev->resource[0]->end	= 16;
+@@ -1873,8 +1874,7 @@ static int wl3501_probe(struct pcmcia_device *p_dev)
+ 
+ 	dev = alloc_etherdev(sizeof(struct wl3501_card));
+ 	if (!dev)
+-		goto out_link;
+-
++		return -ENOMEM;
+ 
+ 	dev->netdev_ops		= &wl3501_netdev_ops;
+ 	dev->watchdog_timeo	= 5 * HZ;
+@@ -1887,9 +1887,15 @@ static int wl3501_probe(struct pcmcia_device *p_dev)
+ 	netif_stop_queue(dev);
+ 	p_dev->priv = dev;
+ 
+-	return wl3501_config(p_dev);
+-out_link:
+-	return -ENOMEM;
++	ret = wl3501_config(p_dev);
++	if (ret)
++		goto out_free_etherdev;
++
++	return 0;
++
++out_free_etherdev:
++	free_netdev(dev);
++	return ret;
+ }
+ 
+ static int wl3501_config(struct pcmcia_device *link)
+diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c
+index ac8001c842935..644b1e134b01c 100644
+--- a/drivers/net/wireless/marvell/mwifiex/scan.c
++++ b/drivers/net/wireless/marvell/mwifiex/scan.c
+@@ -2187,9 +2187,9 @@ int mwifiex_ret_802_11_scan(struct mwifiex_private *priv,
+ 
+ 	if (nd_config) {
+ 		adapter->nd_info =
+-			kzalloc(sizeof(struct cfg80211_wowlan_nd_match) +
+-				sizeof(struct cfg80211_wowlan_nd_match *) *
+-				scan_rsp->number_of_sets, GFP_ATOMIC);
++			kzalloc(struct_size(adapter->nd_info, matches,
++					    scan_rsp->number_of_sets),
++				GFP_ATOMIC);
+ 
+ 		if (adapter->nd_info)
+ 			adapter->nd_info->n_matches = scan_rsp->number_of_sets;
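
The mwifiex change replaces open-coded allocation arithmetic with struct_size(), which sizes a structure with a trailing flexible array member; the open-coded version had also sized the header using the wrong struct. A standalone version of the computation (without the kernel's overflow saturation):

#include <stdio.h>
#include <stdlib.h>

struct nd_match;

struct nd_info {
	int n_matches;
	struct nd_match *matches[];	/* trailing flexible array */
};

/* sizeof does not evaluate its operand, so passing a NULL pointer
 * through this macro is safe. */
#define STRUCT_SIZE(p, member, n) \
	(sizeof(*(p)) + (n) * sizeof((p)->member[0]))

int main(void)
{
	struct nd_info *info = NULL;
	size_t n = 4;

	info = calloc(1, STRUCT_SIZE(info, matches, n));
	if (!info)
		return 1;
	info->n_matches = (int)n;
	printf("allocated %zu bytes for %d matches\n",
	       STRUCT_SIZE(info, matches, n), info->n_matches);
	free(info);
	return 0;
}
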
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+index f0a80c2b476ab..4153cd6c2a01d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/dma.c
+@@ -231,10 +231,6 @@ int mt7921_dma_init(struct mt7921_dev *dev)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = mt7921_wfsys_reset(dev);
+-	if (ret)
+-		return ret;
+-
+ 	/* init tx queue */
+ 	ret = mt76_connac_init_tx_queues(dev->phy.mt76, MT7921_TXQ_BAND0,
+ 					 MT7921_TX_RING_SIZE,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+index c69ce6df49561..f55caa00ac69b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+@@ -476,12 +476,6 @@ static int mt7921_load_firmware(struct mt7921_dev *dev)
+ {
+ 	int ret;
+ 
+-	ret = mt76_get_field(dev, MT_CONN_ON_MISC, MT_TOP_MISC2_FW_N9_RDY);
+-	if (ret && mt76_is_mmio(&dev->mt76)) {
+-		dev_dbg(dev->mt76.dev, "Firmware is already download\n");
+-		goto fw_loaded;
+-	}
+-
+ 	ret = mt76_connac2_load_patch(&dev->mt76, mt7921_patch_name(dev));
+ 	if (ret)
+ 		return ret;
+@@ -504,8 +498,6 @@ static int mt7921_load_firmware(struct mt7921_dev *dev)
+ 		return -EIO;
+ 	}
+ 
+-fw_loaded:
+-
+ #ifdef CONFIG_PM
+ 	dev->mt76.hw->wiphy->wowlan = &mt76_connac_wowlan_support;
+ #endif /* CONFIG_PM */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+index ddb1fa4ee01d7..95610a117d2f0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+@@ -325,6 +325,10 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ 	bus_ops->rmw = mt7921_rmw;
+ 	dev->mt76.bus = bus_ops;
+ 
++	ret = mt7921e_mcu_fw_pmctrl(dev);
++	if (ret)
++		goto err_free_dev;
++
+ 	ret = __mt7921e_mcu_drv_pmctrl(dev);
+ 	if (ret)
+ 		goto err_free_dev;
+@@ -333,6 +337,10 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ 		    (mt7921_l1_rr(dev, MT_HW_REV) & 0xff);
+ 	dev_info(mdev->dev, "ASIC revision: %04x\n", mdev->rev);
+ 
++	ret = mt7921_wfsys_reset(dev);
++	if (ret)
++		goto err_free_dev;
++
+ 	mt76_wr(dev, MT_WFDMA0_HOST_INT_ENA, 0);
+ 
+ 	mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0xff);
+diff --git a/drivers/net/wireless/microchip/wilc1000/hif.c b/drivers/net/wireless/microchip/wilc1000/hif.c
+index 5adc69d5bcae3..a28da59384813 100644
+--- a/drivers/net/wireless/microchip/wilc1000/hif.c
++++ b/drivers/net/wireless/microchip/wilc1000/hif.c
+@@ -485,6 +485,9 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 		int rsn_ie_len = sizeof(struct element) + rsn_ie[1];
+ 		int offset = 8;
+ 
++		param->mode_802_11i = 2;
++		param->rsn_found = true;
++
+ 		/* extract RSN capabilities */
+ 		if (offset < rsn_ie_len) {
+ 			/* skip over pairwise suites */
+@@ -494,11 +497,8 @@ void *wilc_parse_join_bss_param(struct cfg80211_bss *bss,
+ 				/* skip over authentication suites */
+ 				offset += (rsn_ie[offset] * 4) + 2;
+ 
+-				if (offset + 1 < rsn_ie_len) {
+-					param->mode_802_11i = 2;
+-					param->rsn_found = true;
++				if (offset + 1 < rsn_ie_len)
+ 					memcpy(param->rsn_cap, &rsn_ie[offset], 2);
+-				}
+ 			}
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/realtek/rtw88/mac80211.c b/drivers/net/wireless/realtek/rtw88/mac80211.c
+index 144618bb94c86..09bcc2345bb05 100644
+--- a/drivers/net/wireless/realtek/rtw88/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw88/mac80211.c
+@@ -164,8 +164,10 @@ static int rtw_ops_add_interface(struct ieee80211_hw *hw,
+ 	mutex_lock(&rtwdev->mutex);
+ 
+ 	port = find_first_zero_bit(rtwdev->hw_port, RTW_PORT_NUM);
+-	if (port >= RTW_PORT_NUM)
++	if (port >= RTW_PORT_NUM) {
++		mutex_unlock(&rtwdev->mutex);
+ 		return -EINVAL;
++	}
+ 	set_bit(port, rtwdev->hw_port);
+ 
+ 	rtwvif->port = port;
+diff --git a/drivers/net/wireless/realtek/rtw88/usb.c b/drivers/net/wireless/realtek/rtw88/usb.c
+index 44a5fafb99055..976eafa739a2d 100644
+--- a/drivers/net/wireless/realtek/rtw88/usb.c
++++ b/drivers/net/wireless/realtek/rtw88/usb.c
+@@ -535,7 +535,7 @@ static void rtw_usb_rx_handler(struct work_struct *work)
+ 		}
+ 
+ 		if (skb_queue_len(&rtwusb->rx_queue) >= RTW_USB_MAX_RXQ_LEN) {
+-			rtw_err(rtwdev, "failed to get rx_queue, overflow\n");
++			dev_dbg_ratelimited(rtwdev->dev, "failed to get rx_queue, overflow\n");
+ 			dev_kfree_skb_any(skb);
+ 			continue;
+ 		}
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index bad864d56bd5c..5423f8ae187f1 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -3584,7 +3584,7 @@ static void rtw89_read_chip_ver(struct rtw89_dev *rtwdev)
+ 
+ 	if (chip->chip_id == RTL8852B || chip->chip_id == RTL8851B) {
+ 		ret = rtw89_mac_read_xtal_si(rtwdev, XTAL_SI_CV, &val);
+-		if (!ret)
++		if (ret)
+ 			return;
+ 
+ 		rtwdev->hal.acv = u8_get_bits(val, XTAL_SI_ACV_MASK);
+diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio.c b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+index d09998796ac08..1911fef3bbad6 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_sdio.c
++++ b/drivers/net/wireless/rsi/rsi_91x_sdio.c
+@@ -1463,10 +1463,8 @@ static void rsi_shutdown(struct device *dev)
+ 
+ 	rsi_dbg(ERR_ZONE, "SDIO Bus shutdown =====>\n");
+ 
+-	if (hw) {
+-		struct cfg80211_wowlan *wowlan = hw->wiphy->wowlan_config;
+-
+-		if (rsi_config_wowlan(adapter, wowlan))
++	if (hw && hw->wiphy && hw->wiphy->wowlan_config) {
++		if (rsi_config_wowlan(adapter, hw->wiphy->wowlan_config))
+ 			rsi_dbg(ERR_ZONE, "Failed to configure WoWLAN\n");
+ 	}
+ 
+@@ -1481,9 +1479,6 @@ static void rsi_shutdown(struct device *dev)
+ 	if (sdev->write_fail)
+ 		rsi_dbg(INFO_ZONE, "###### Device is not ready #######\n");
+ 
+-	if (rsi_set_sdio_pm_caps(adapter))
+-		rsi_dbg(INFO_ZONE, "Setting power management caps failed\n");
+-
+ 	rsi_dbg(INFO_ZONE, "***** RSI module shut down *****\n");
+ }
+ 
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 3ec38e2b91732..3395e27438393 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3872,8 +3872,10 @@ static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
+ 		int ret;
+ 
+ 		ret = nvme_auth_generate_key(dhchap_secret, &key);
+-		if (ret)
++		if (ret) {
++			kfree(dhchap_secret);
+ 			return ret;
++		}
+ 		kfree(opts->dhchap_secret);
+ 		opts->dhchap_secret = dhchap_secret;
+ 		host_key = ctrl->host_key;
+@@ -3881,7 +3883,8 @@ static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev,
+ 		ctrl->host_key = key;
+ 		mutex_unlock(&ctrl->dhchap_auth_mutex);
+ 		nvme_auth_free_key(host_key);
+-	}
++	} else
++		kfree(dhchap_secret);
+ 	/* Start re-authentication */
+ 	dev_info(ctrl->device, "re-authenticating controller\n");
+ 	queue_work(nvme_wq, &ctrl->dhchap_auth_work);
+@@ -3926,8 +3929,10 @@ static ssize_t nvme_ctrl_dhchap_ctrl_secret_store(struct device *dev,
+ 		int ret;
+ 
+ 		ret = nvme_auth_generate_key(dhchap_secret, &key);
+-		if (ret)
++		if (ret) {
++			kfree(dhchap_secret);
+ 			return ret;
++		}
+ 		kfree(opts->dhchap_ctrl_secret);
+ 		opts->dhchap_ctrl_secret = dhchap_secret;
+ 		ctrl_key = ctrl->ctrl_key;
+@@ -3935,7 +3940,8 @@ static ssize_t nvme_ctrl_dhchap_ctrl_secret_store(struct device *dev,
+ 		ctrl->ctrl_key = key;
+ 		mutex_unlock(&ctrl->dhchap_auth_mutex);
+ 		nvme_auth_free_key(ctrl_key);
+-	}
++	} else
++		kfree(dhchap_secret);
+ 	/* Start re-authentication */
+ 	dev_info(ctrl->device, "re-authenticating controller\n");
+ 	queue_work(nvme_wq, &ctrl->dhchap_auth_work);
+@@ -5243,6 +5249,8 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
+ 
+ 	return 0;
+ out_free_cdev:
++	nvme_fault_inject_fini(&ctrl->fault_inject);
++	dev_pm_qos_hide_latency_tolerance(ctrl->device);
+ 	cdev_device_del(&ctrl->cdev, ctrl->device);
+ out_free_name:
+ 	nvme_put_ctrl(ctrl);
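
Both dhchap sysfs stores now free the freshly duplicated secret on the error path and on the branch that never takes ownership of it, and nvme_init_ctrl's out_free_cdev path additionally tears down the fault-injection and PM QoS state it had already set up. The ownership rule behind the kfree placement, sketched with invented names:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *stored_secret;

static int generate_key(const char *secret)
{
	return secret[0] ? 0 : -1;	/* stand-in for key derivation */
}

static int secret_store(const char *input)
{
	char *dup = strdup(input);
	int ret;

	if (!dup)
		return -1;

	ret = generate_key(dup);
	if (ret) {
		free(dup);		/* error path: we still own the copy */
		return ret;
	}
	free(stored_secret);
	stored_secret = dup;		/* success: ownership transferred */
	return 0;
}

int main(void)
{
	printf("%d %d\n", secret_store("secret"), secret_store(""));
	free(stored_secret);
	return 0;
}
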
+diff --git a/drivers/nvmem/imx-ocotp.c b/drivers/nvmem/imx-ocotp.c
+index ac0edb6398f1e..c1af271052276 100644
+--- a/drivers/nvmem/imx-ocotp.c
++++ b/drivers/nvmem/imx-ocotp.c
+@@ -97,7 +97,6 @@ struct ocotp_params {
+ 	unsigned int bank_address_words;
+ 	void (*set_timing)(struct ocotp_priv *priv);
+ 	struct ocotp_ctrl_reg ctrl;
+-	bool reverse_mac_address;
+ };
+ 
+ static int imx_ocotp_wait_for_busy(struct ocotp_priv *priv, u32 flags)
+@@ -545,7 +544,6 @@ static const struct ocotp_params imx8mq_params = {
+ 	.bank_address_words = 0,
+ 	.set_timing = imx_ocotp_set_imx6_timing,
+ 	.ctrl = IMX_OCOTP_BM_CTRL_DEFAULT,
+-	.reverse_mac_address = true,
+ };
+ 
+ static const struct ocotp_params imx8mm_params = {
+@@ -553,7 +551,6 @@ static const struct ocotp_params imx8mm_params = {
+ 	.bank_address_words = 0,
+ 	.set_timing = imx_ocotp_set_imx6_timing,
+ 	.ctrl = IMX_OCOTP_BM_CTRL_DEFAULT,
+-	.reverse_mac_address = true,
+ };
+ 
+ static const struct ocotp_params imx8mn_params = {
+@@ -561,7 +558,6 @@ static const struct ocotp_params imx8mn_params = {
+ 	.bank_address_words = 0,
+ 	.set_timing = imx_ocotp_set_imx6_timing,
+ 	.ctrl = IMX_OCOTP_BM_CTRL_DEFAULT,
+-	.reverse_mac_address = true,
+ };
+ 
+ static const struct ocotp_params imx8mp_params = {
+@@ -569,7 +565,6 @@ static const struct ocotp_params imx8mp_params = {
+ 	.bank_address_words = 0,
+ 	.set_timing = imx_ocotp_set_imx6_timing,
+ 	.ctrl = IMX_OCOTP_BM_CTRL_8MP,
+-	.reverse_mac_address = true,
+ };
+ 
+ static const struct of_device_id imx_ocotp_dt_ids[] = {
+@@ -624,8 +619,7 @@ static int imx_ocotp_probe(struct platform_device *pdev)
+ 	imx_ocotp_nvmem_config.size = 4 * priv->params->nregs;
+ 	imx_ocotp_nvmem_config.dev = dev;
+ 	imx_ocotp_nvmem_config.priv = priv;
+-	if (priv->params->reverse_mac_address)
+-		imx_ocotp_nvmem_config.layout = &imx_ocotp_layout;
++	imx_ocotp_nvmem_config.layout = &imx_ocotp_layout;
+ 
+ 	priv->config = &imx_ocotp_nvmem_config;
+ 
+diff --git a/drivers/nvmem/rmem.c b/drivers/nvmem/rmem.c
+index 80cb187f14817..752d0bf4445ee 100644
+--- a/drivers/nvmem/rmem.c
++++ b/drivers/nvmem/rmem.c
+@@ -71,6 +71,7 @@ static int rmem_probe(struct platform_device *pdev)
+ 	config.dev = dev;
+ 	config.priv = priv;
+ 	config.name = "rmem";
++	config.id = NVMEM_DEVID_AUTO;
+ 	config.size = mem->size;
+ 	config.reg_read = rmem_read;
+ 
+diff --git a/drivers/nvmem/sunplus-ocotp.c b/drivers/nvmem/sunplus-ocotp.c
+index 52b928a7a6d58..f85350b17d672 100644
+--- a/drivers/nvmem/sunplus-ocotp.c
++++ b/drivers/nvmem/sunplus-ocotp.c
+@@ -192,9 +192,11 @@ static int sp_ocotp_probe(struct platform_device *pdev)
+ 	sp_ocotp_nvmem_config.dev = dev;
+ 
+ 	nvmem = devm_nvmem_register(dev, &sp_ocotp_nvmem_config);
+-	if (IS_ERR(nvmem))
+-		return dev_err_probe(&pdev->dev, PTR_ERR(nvmem),
++	if (IS_ERR(nvmem)) {
++		ret = dev_err_probe(&pdev->dev, PTR_ERR(nvmem),
+ 						"register nvmem device fail\n");
++		goto err;
++	}
+ 
+ 	platform_set_drvdata(pdev, nvmem);
+ 
+@@ -203,6 +205,9 @@ static int sp_ocotp_probe(struct platform_device *pdev)
+ 		(int)OTP_WORD_SIZE, (int)QAC628_OTP_SIZE);
+ 
+ 	return 0;
++err:
++	clk_unprepare(otp->clk);
++	return ret;
+ }
+ 
+ static const struct of_device_id sp_ocotp_dt_ids[] = {
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
+index 940c7dd701d68..5b14f7ee3c798 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
+@@ -12,6 +12,8 @@
+ 
+ #include "pcie-cadence.h"
+ 
++#define LINK_RETRAIN_TIMEOUT HZ
++
+ static u64 bar_max_size[] = {
+ 	[RP_BAR0] = _ULL(128 * SZ_2G),
+ 	[RP_BAR1] = SZ_2G,
+@@ -77,6 +79,27 @@ static struct pci_ops cdns_pcie_host_ops = {
+ 	.write		= pci_generic_config_write,
+ };
+ 
++static int cdns_pcie_host_training_complete(struct cdns_pcie *pcie)
++{
++	u32 pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
++	unsigned long end_jiffies;
++	u16 lnk_stat;
++
++	/* Wait for link training to complete. Exit after timeout. */
++	end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
++	do {
++		lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
++		if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
++			break;
++		usleep_range(0, 1000);
++	} while (time_before(jiffies, end_jiffies));
++
++	if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
++		return 0;
++
++	return -ETIMEDOUT;
++}
++
+ static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
+ {
+ 	struct device *dev = pcie->dev;
+@@ -118,6 +141,10 @@ static int cdns_pcie_retrain(struct cdns_pcie *pcie)
+ 		cdns_pcie_rp_writew(pcie, pcie_cap_off + PCI_EXP_LNKCTL,
+ 				    lnk_ctl);
+ 
++		ret = cdns_pcie_host_training_complete(pcie);
++		if (ret)
++			return ret;
++
+ 		ret = cdns_pcie_host_wait_for_link(pcie);
+ 	}
+ 	return ret;
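/*
 * Sketch of the idiom the cadence hunk above adds (illustrative, not
 * from this patch): bound a poll loop with jiffies and sleep between
 * reads instead of spinning.  The callback name my_done() is
 * hypothetical.
 */
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/types.h>

static int poll_done_or_timeout(bool (*my_done)(void *), void *ctx)
{
	unsigned long end_jiffies = jiffies + HZ;	/* 1 s budget */

	do {
		if (my_done(ctx))
			return 0;
		usleep_range(500, 1000);	/* sleep, don't busy-wait */
	} while (time_before(jiffies, end_jiffies));

	/* Final check so preemption at the deadline can't fake a timeout. */
	return my_done(ctx) ? 0 : -ETIMEDOUT;
}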
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 4ab30892f6efb..2783e9c3ef1ba 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -61,7 +61,6 @@
+ /* DBI registers */
+ #define AXI_MSTR_RESP_COMP_CTRL0		0x818
+ #define AXI_MSTR_RESP_COMP_CTRL1		0x81c
+-#define MISC_CONTROL_1_REG			0x8bc
+ 
+ /* MHI registers */
+ #define PARF_DEBUG_CNT_PM_LINKST_IN_L2		0xc04
+@@ -132,9 +131,6 @@
+ /* AXI_MSTR_RESP_COMP_CTRL1 register fields */
+ #define CFG_BRIDGE_SB_INIT			BIT(0)
+ 
+-/* MISC_CONTROL_1_REG register fields */
+-#define DBI_RO_WR_EN				1
+-
+ /* PCI_EXP_SLTCAP register fields */
+ #define PCIE_CAP_SLOT_POWER_LIMIT_VAL		FIELD_PREP(PCI_EXP_SLTCAP_SPLV, 250)
+ #define PCIE_CAP_SLOT_POWER_LIMIT_SCALE		FIELD_PREP(PCI_EXP_SLTCAP_SPLS, 1)
+@@ -826,7 +822,9 @@ static int qcom_pcie_post_init_2_3_3(struct qcom_pcie *pcie)
+ 	writel(0, pcie->parf + PARF_Q2A_FLUSH);
+ 
+ 	writel(PCI_COMMAND_MASTER, pci->dbi_base + PCI_COMMAND);
+-	writel(DBI_RO_WR_EN, pci->dbi_base + MISC_CONTROL_1_REG);
++
++	dw_pcie_dbi_ro_wr_en(pci);
++
+ 	writel(PCIE_CAP_SLOT_VAL, pci->dbi_base + offset + PCI_EXP_SLTCAP);
+ 
+ 	val = readl(pci->dbi_base + offset + PCI_EXP_LNKCAP);
+@@ -1136,6 +1134,7 @@ static int qcom_pcie_post_init_2_9_0(struct qcom_pcie *pcie)
+ 	writel(0, pcie->parf + PARF_Q2A_FLUSH);
+ 
+ 	dw_pcie_dbi_ro_wr_en(pci);
++
+ 	writel(PCIE_CAP_SLOT_VAL, pci->dbi_base + offset + PCI_EXP_SLTCAP);
+ 
+ 	val = readl(pci->dbi_base + offset + PCI_EXP_LNKCAP);
+@@ -1145,6 +1144,8 @@ static int qcom_pcie_post_init_2_9_0(struct qcom_pcie *pcie)
+ 	writel(PCI_EXP_DEVCTL2_COMP_TMOUT_DIS, pci->dbi_base + offset +
+ 			PCI_EXP_DEVCTL2);
+ 
++	dw_pcie_dbi_ro_wr_dis(pci);
++
+ 	for (i = 0; i < 256; i++)
+ 		writel(0, pcie->parf + PARF_BDF_TO_SID_TABLE_N + (4 * i));
+ 
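/*
 * Sketch (illustrative, not from this patch): the qcom hunks replace a
 * raw MISC_CONTROL_1_REG write with the DesignWare helpers, which
 * bracket updates to read-only DBI registers.  slot_cap is a stand-in
 * value; the driver computes its own.
 */
#include <linux/io.h>
#include <uapi/linux/pci_regs.h>
#include "pcie-designware.h"	/* dw_pcie_dbi_ro_wr_{en,dis}() */

static void sketch_set_slot_cap(struct dw_pcie *pci, u16 offset, u32 slot_cap)
{
	dw_pcie_dbi_ro_wr_en(pci);	/* unlock read-only DBI registers */
	writel(slot_cap, pci->dbi_base + offset + PCI_EXP_SLTCAP);
	dw_pcie_dbi_ro_wr_dis(pci);	/* re-lock before returning */
}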
+diff --git a/drivers/pci/controller/pci-ftpci100.c b/drivers/pci/controller/pci-ftpci100.c
+index ecd3009df586d..6e7981d2ed5e1 100644
+--- a/drivers/pci/controller/pci-ftpci100.c
++++ b/drivers/pci/controller/pci-ftpci100.c
+@@ -429,22 +429,12 @@ static int faraday_pci_probe(struct platform_device *pdev)
+ 	p->dev = dev;
+ 
+ 	/* Retrieve and enable optional clocks */
+-	clk = devm_clk_get(dev, "PCLK");
++	clk = devm_clk_get_enabled(dev, "PCLK");
+ 	if (IS_ERR(clk))
+ 		return PTR_ERR(clk);
+-	ret = clk_prepare_enable(clk);
+-	if (ret) {
+-		dev_err(dev, "could not prepare PCLK\n");
+-		return ret;
+-	}
+-	p->bus_clk = devm_clk_get(dev, "PCICLK");
++	p->bus_clk = devm_clk_get_enabled(dev, "PCICLK");
+ 	if (IS_ERR(p->bus_clk))
+ 		return PTR_ERR(p->bus_clk);
+-	ret = clk_prepare_enable(p->bus_clk);
+-	if (ret) {
+-		dev_err(dev, "could not prepare PCICLK\n");
+-		return ret;
+-	}
+ 
+ 	p->base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(p->base))
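/*
 * Sketch (illustrative): devm_clk_get_enabled(), added in v6.0, folds
 * devm_clk_get() + clk_prepare_enable() into one call and undoes both
 * automatically on unbind, which is why the two manual error legs
 * above can simply be deleted.
 */
#include <linux/clk.h>
#include <linux/device.h>

static int sketch_probe_clock(struct device *dev)
{
	struct clk *clk;

	/* Acquired, prepared and enabled in one step; disable/unprepare
	 * are registered as devres actions for driver detach. */
	clk = devm_clk_get_enabled(dev, "PCLK");
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	return 0;
}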
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index 990630ec57c6a..e718a816d4814 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -927,7 +927,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+ 		if (!list_empty(&child->devices)) {
+ 			dev = list_first_entry(&child->devices,
+ 					       struct pci_dev, bus_list);
+-			if (pci_reset_bus(dev))
++			ret = pci_reset_bus(dev);
++			if (ret)
+ 				pci_warn(dev, "can't reset device: %d\n", ret);
+ 
+ 			break;
+@@ -1036,6 +1037,13 @@ static void vmd_remove(struct pci_dev *dev)
+ 	ida_simple_remove(&vmd_instance_ida, vmd->instance);
+ }
+ 
++static void vmd_shutdown(struct pci_dev *dev)
++{
++	struct vmd_dev *vmd = pci_get_drvdata(dev);
++
++	vmd_remove_irq_domain(vmd);
++}
++
+ #ifdef CONFIG_PM_SLEEP
+ static int vmd_suspend(struct device *dev)
+ {
+@@ -1101,6 +1109,7 @@ static struct pci_driver vmd_drv = {
+ 	.id_table	= vmd_ids,
+ 	.probe		= vmd_probe,
+ 	.remove		= vmd_remove,
++	.shutdown	= vmd_shutdown,
+ 	.driver		= {
+ 		.pm	= &vmd_dev_pm_ops,
+ 	},
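/*
 * Sketch (handlers hypothetical): unlike .remove(), a pci_driver's
 * .shutdown() hook also runs on reboot and kexec, so tearing the VMD
 * IRQ domain down there keeps stale MSI remapping state out of a
 * soft-booted kernel.
 */
#include <linux/pci.h>

static int  sketch_probe(struct pci_dev *pdev, const struct pci_device_id *id);
static void sketch_remove(struct pci_dev *pdev);
static void sketch_shutdown(struct pci_dev *pdev);

static struct pci_driver sketch_pci_drv = {
	.name		= "sketch",
	.probe		= sketch_probe,
	.remove		= sketch_remove,	/* driver unbind only */
	.shutdown	= sketch_shutdown,	/* reboot and kexec too */
};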
+diff --git a/drivers/pci/endpoint/functions/Kconfig b/drivers/pci/endpoint/functions/Kconfig
+index 9fd5608868718..8efb6a869e7ce 100644
+--- a/drivers/pci/endpoint/functions/Kconfig
++++ b/drivers/pci/endpoint/functions/Kconfig
+@@ -27,7 +27,7 @@ config PCI_EPF_NTB
+ 	  If in doubt, say "N" to disable Endpoint NTB driver.
+ 
+ config PCI_EPF_VNTB
+-	tristate "PCI Endpoint NTB driver"
++	tristate "PCI Endpoint Virtual NTB driver"
+ 	depends on PCI_ENDPOINT
+ 	depends on NTB
+ 	select CONFIGFS_FS
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index 0f9d2ec822ac6..172e5ac0bd96c 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -112,7 +112,7 @@ static int pci_epf_test_data_transfer(struct pci_epf_test *epf_test,
+ 				      size_t len, dma_addr_t dma_remote,
+ 				      enum dma_transfer_direction dir)
+ {
+-	struct dma_chan *chan = (dir == DMA_DEV_TO_MEM) ?
++	struct dma_chan *chan = (dir == DMA_MEM_TO_DEV) ?
+ 				 epf_test->dma_chan_tx : epf_test->dma_chan_rx;
+ 	dma_addr_t dma_local = (dir == DMA_MEM_TO_DEV) ? dma_src : dma_dst;
+ 	enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
+diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c
+index 529c348084401..32baba1b7f131 100644
+--- a/drivers/pci/hotplug/pciehp_ctrl.c
++++ b/drivers/pci/hotplug/pciehp_ctrl.c
+@@ -256,6 +256,14 @@ void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events)
+ 	present = pciehp_card_present(ctrl);
+ 	link_active = pciehp_check_link_active(ctrl);
+ 	if (present <= 0 && link_active <= 0) {
++		if (ctrl->state == BLINKINGON_STATE) {
++			ctrl->state = OFF_STATE;
++			cancel_delayed_work(&ctrl->button_work);
++			pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
++					      INDICATOR_NOOP);
++			ctrl_info(ctrl, "Slot(%s): Card not present\n",
++				  slot_name(ctrl));
++		}
+ 		mutex_unlock(&ctrl->state_lock);
+ 		return;
+ 	}
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index 66d7514ca111b..db32335039d61 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -1010,21 +1010,24 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
+ 
+ 	down_read(&pci_bus_sem);
+ 	mutex_lock(&aspm_lock);
+-	/*
+-	 * All PCIe functions are in one slot, remove one function will remove
+-	 * the whole slot, so just wait until we are the last function left.
+-	 */
+-	if (!list_empty(&parent->subordinate->devices))
+-		goto out;
+ 
+ 	link = parent->link_state;
+ 	root = link->root;
+ 	parent_link = link->parent;
+ 
+-	/* All functions are removed, so just disable ASPM for the link */
++	/*
++	 * link->downstream is a pointer to the pci_dev of function 0.  If
++	 * we remove that function, the pci_dev is about to be deallocated,
++	 * so we can't use link->downstream again.  Free the link state to
++	 * avoid this.
++	 *
++	 * If we're removing a non-0 function, it's possible we could
++	 * retain the link state, but PCIe r6.0, sec 7.5.3.7, recommends
++	 * programming the same ASPM Control value for all functions of
++	 * multi-function devices, so disable ASPM for all of them.
++	 */
+ 	pcie_config_aspm_link(link, 0);
+ 	list_del(&link->sibling);
+-	/* Clock PM is for endpoint device */
+ 	free_link_state(link);
+ 
+ 	/* Recheck latencies and configure upstream links */
+@@ -1032,7 +1035,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
+ 		pcie_update_aspm_capable(root);
+ 		pcie_config_aspm_path(parent_link);
+ 	}
+-out:
++
+ 	mutex_unlock(&aspm_lock);
+ 	up_read(&pci_bus_sem);
+ }
+diff --git a/drivers/perf/apple_m1_cpu_pmu.c b/drivers/perf/apple_m1_cpu_pmu.c
+index 8574c6e58c83a..cd2de44b61b91 100644
+--- a/drivers/perf/apple_m1_cpu_pmu.c
++++ b/drivers/perf/apple_m1_cpu_pmu.c
+@@ -493,6 +493,17 @@ static int m1_pmu_map_event(struct perf_event *event)
+ 	return armpmu_map_event(event, &m1_pmu_perf_map, NULL, M1_PMU_CFG_EVENT);
+ }
+ 
++static int m2_pmu_map_event(struct perf_event *event)
++{
++	/*
++	 * Same deal as the above, except that M2 has 64bit counters.
++	 * Which, as far as we're concerned, actually means 63 bits.
++	 * Yes, this is getting awkward.
++	 */
++	event->hw.flags |= ARMPMU_EVT_63BIT;
++	return armpmu_map_event(event, &m1_pmu_perf_map, NULL, M1_PMU_CFG_EVENT);
++}
++
+ static void m1_pmu_reset(void *info)
+ {
+ 	int i;
+@@ -525,7 +536,7 @@ static int m1_pmu_set_event_filter(struct hw_perf_event *event,
+ 	return 0;
+ }
+ 
+-static int m1_pmu_init(struct arm_pmu *cpu_pmu)
++static int m1_pmu_init(struct arm_pmu *cpu_pmu, u32 flags)
+ {
+ 	cpu_pmu->handle_irq	  = m1_pmu_handle_irq;
+ 	cpu_pmu->enable		  = m1_pmu_enable_event;
+@@ -536,7 +547,14 @@ static int m1_pmu_init(struct arm_pmu *cpu_pmu)
+ 	cpu_pmu->clear_event_idx  = m1_pmu_clear_event_idx;
+ 	cpu_pmu->start		  = m1_pmu_start;
+ 	cpu_pmu->stop		  = m1_pmu_stop;
+-	cpu_pmu->map_event	  = m1_pmu_map_event;
++
++	if (flags & ARMPMU_EVT_47BIT)
++		cpu_pmu->map_event = m1_pmu_map_event;
++	else if (flags & ARMPMU_EVT_63BIT)
++		cpu_pmu->map_event = m2_pmu_map_event;
++	else
++		return WARN_ON(-EINVAL);
++
+ 	cpu_pmu->reset		  = m1_pmu_reset;
+ 	cpu_pmu->set_event_filter = m1_pmu_set_event_filter;
+ 
+@@ -550,25 +568,25 @@ static int m1_pmu_init(struct arm_pmu *cpu_pmu)
+ static int m1_pmu_ice_init(struct arm_pmu *cpu_pmu)
+ {
+ 	cpu_pmu->name = "apple_icestorm_pmu";
+-	return m1_pmu_init(cpu_pmu);
++	return m1_pmu_init(cpu_pmu, ARMPMU_EVT_47BIT);
+ }
+ 
+ static int m1_pmu_fire_init(struct arm_pmu *cpu_pmu)
+ {
+ 	cpu_pmu->name = "apple_firestorm_pmu";
+-	return m1_pmu_init(cpu_pmu);
++	return m1_pmu_init(cpu_pmu, ARMPMU_EVT_47BIT);
+ }
+ 
+ static int m2_pmu_avalanche_init(struct arm_pmu *cpu_pmu)
+ {
+ 	cpu_pmu->name = "apple_avalanche_pmu";
+-	return m1_pmu_init(cpu_pmu);
++	return m1_pmu_init(cpu_pmu, ARMPMU_EVT_63BIT);
+ }
+ 
+ static int m2_pmu_blizzard_init(struct arm_pmu *cpu_pmu)
+ {
+ 	cpu_pmu->name = "apple_blizzard_pmu";
+-	return m1_pmu_init(cpu_pmu);
++	return m1_pmu_init(cpu_pmu, ARMPMU_EVT_63BIT);
+ }
+ 
+ static const struct of_device_id m1_pmu_of_device_ids[] = {
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index 47d359f729579..89a685a09d848 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -1899,9 +1899,10 @@ static int arm_cmn_init_dtc(struct arm_cmn *cmn, struct arm_cmn_node *dn, int id
+ 	if (dtc->irq < 0)
+ 		return dtc->irq;
+ 
+-	writel_relaxed(0, dtc->base + CMN_DT_PMCR);
++	writel_relaxed(CMN_DT_DTC_CTL_DT_EN, dtc->base + CMN_DT_DTC_CTL);
++	writel_relaxed(CMN_DT_PMCR_PMU_EN | CMN_DT_PMCR_OVFL_INTR_EN, dtc->base + CMN_DT_PMCR);
++	writeq_relaxed(0, dtc->base + CMN_DT_PMCCNTR);
+ 	writel_relaxed(0x1ff, dtc->base + CMN_DT_PMOVSR_CLR);
+-	writel_relaxed(CMN_DT_PMCR_OVFL_INTR_EN, dtc->base + CMN_DT_PMCR);
+ 
+ 	return 0;
+ }
+@@ -1961,7 +1962,7 @@ static int arm_cmn_init_dtcs(struct arm_cmn *cmn)
+ 			dn->type = CMN_TYPE_CCLA;
+ 	}
+ 
+-	writel_relaxed(CMN_DT_DTC_CTL_DT_EN, cmn->dtc[0].base + CMN_DT_DTC_CTL);
++	arm_cmn_set_state(cmn, CMN_STATE_DISABLED);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/perf/arm_cspmu/arm_cspmu.c b/drivers/perf/arm_cspmu/arm_cspmu.c
+index a3f1c410b4173..e8bc8fc1fb9c0 100644
+--- a/drivers/perf/arm_cspmu/arm_cspmu.c
++++ b/drivers/perf/arm_cspmu/arm_cspmu.c
+@@ -189,10 +189,10 @@ static inline bool use_64b_counter_reg(const struct arm_cspmu *cspmu)
+ ssize_t arm_cspmu_sysfs_event_show(struct device *dev,
+ 				struct device_attribute *attr, char *buf)
+ {
+-	struct dev_ext_attribute *eattr =
+-		container_of(attr, struct dev_ext_attribute, attr);
+-	return sysfs_emit(buf, "event=0x%llx\n",
+-			  (unsigned long long)eattr->var);
++	struct perf_pmu_events_attr *pmu_attr;
++
++	pmu_attr = container_of(attr, typeof(*pmu_attr), attr);
++	return sysfs_emit(buf, "event=0x%llx\n", pmu_attr->id);
+ }
+ EXPORT_SYMBOL_GPL(arm_cspmu_sysfs_event_show);
+ 
+@@ -1232,7 +1232,8 @@ static struct platform_driver arm_cspmu_driver = {
+ static void arm_cspmu_set_active_cpu(int cpu, struct arm_cspmu *cspmu)
+ {
+ 	cpumask_set_cpu(cpu, &cspmu->active_cpu);
+-	WARN_ON(irq_set_affinity(cspmu->irq, &cspmu->active_cpu));
++	if (cspmu->irq)
++		WARN_ON(irq_set_affinity(cspmu->irq, &cspmu->active_cpu));
+ }
+ 
+ static int arm_cspmu_cpu_online(unsigned int cpu, struct hlist_node *node)
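/*
 * Sketch (names hypothetical): the arm_cspmu hunk above moves the
 * sysfs show routine to perf_pmu_events_attr, carrying the event id
 * in ->id instead of casting it out of a void pointer.
 */
#include <linux/perf_event.h>
#include <linux/sysfs.h>

static ssize_t sketch_event_show(struct device *dev,
				 struct device_attribute *attr, char *buf)
{
	struct perf_pmu_events_attr *pmu_attr =
		container_of(attr, struct perf_pmu_events_attr, attr);

	return sysfs_emit(buf, "event=0x%llx\n", pmu_attr->id);
}

/* Typically paired with the PMU_EVENT_ATTR_ID() helper: */
static struct attribute *sketch_event_attrs[] = {
	PMU_EVENT_ATTR_ID(cycles, sketch_event_show, 0x11),
	NULL,
};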
+diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
+index 15bd1e34a88ea..277e29fbd504f 100644
+--- a/drivers/perf/arm_pmu.c
++++ b/drivers/perf/arm_pmu.c
+@@ -109,6 +109,8 @@ static inline u64 arm_pmu_event_max_period(struct perf_event *event)
+ {
+ 	if (event->hw.flags & ARMPMU_EVT_64BIT)
+ 		return GENMASK_ULL(63, 0);
++	else if (event->hw.flags & ARMPMU_EVT_63BIT)
++		return GENMASK_ULL(62, 0);
+ 	else if (event->hw.flags & ARMPMU_EVT_47BIT)
+ 		return GENMASK_ULL(46, 0);
+ 	else
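/*
 * Sketch: GENMASK_ULL(h, 0) sets bits h..0, so the new ARMPMU_EVT_63BIT
 * case above yields a maximum period of 2^63 - 1.  Build-time checks:
 */
#include <linux/bits.h>
#include <linux/build_bug.h>

static_assert(GENMASK_ULL(62, 0) == (1ULL << 63) - 1);	/* 63-bit counter */
static_assert(GENMASK_ULL(46, 0) == (1ULL << 47) - 1);	/* 47-bit counter */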
+diff --git a/drivers/perf/hisilicon/hisi_pcie_pmu.c b/drivers/perf/hisilicon/hisi_pcie_pmu.c
+index 6fee0b6e163bb..e10fc7cb9493a 100644
+--- a/drivers/perf/hisilicon/hisi_pcie_pmu.c
++++ b/drivers/perf/hisilicon/hisi_pcie_pmu.c
+@@ -683,7 +683,7 @@ static int hisi_pcie_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
+ 
+ 	pcie_pmu->on_cpu = -1;
+ 	/* Choose a new CPU from all online cpus. */
+-	target = cpumask_first(cpu_online_mask);
++	target = cpumask_any_but(cpu_online_mask, cpu);
+ 	if (target >= nr_cpu_ids) {
+ 		pci_err(pcie_pmu->pdev, "There is no CPU to set\n");
+ 		return 0;
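/*
 * Sketch (types/names hypothetical): during a hotplug offline callback
 * the dying CPU is still in cpu_online_mask, so cpumask_first() could
 * hand the context straight back to it; cpumask_any_but() excludes it.
 */
#include <linux/cpumask.h>
#include <linux/perf_event.h>

struct sketch_pmu {
	struct pmu pmu;
	int on_cpu;	/* CPU currently owning the event context */
};

static int sketch_pmu_offline_cpu(unsigned int cpu, struct sketch_pmu *p)
{
	unsigned int target;

	if (p->on_cpu != cpu)
		return 0;

	target = cpumask_any_but(cpu_online_mask, cpu);
	if (target >= nr_cpu_ids)
		return 0;	/* last CPU standing, nothing to do */

	perf_pmu_migrate_context(&p->pmu, cpu, target);
	p->on_cpu = target;
	return 0;
}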
+diff --git a/drivers/phy/Kconfig b/drivers/phy/Kconfig
+index f46e3148d286d..8dba9596408f2 100644
+--- a/drivers/phy/Kconfig
++++ b/drivers/phy/Kconfig
+@@ -18,6 +18,7 @@ config GENERIC_PHY
+ 
+ config GENERIC_PHY_MIPI_DPHY
+ 	bool
++	depends on GENERIC_PHY
+ 	help
+ 	  Generic MIPI D-PHY support.
+ 
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-combo.c b/drivers/phy/qualcomm/phy-qcom-qmp-combo.c
+index 87b17e5877ab8..1fdcc81661ed8 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-combo.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-combo.c
+@@ -2142,6 +2142,7 @@ static void qmp_v4_configure_dp_tx(struct qmp_combo *qmp)
+ static int qmp_v456_configure_dp_phy(struct qmp_combo *qmp,
+ 				     unsigned int com_resetm_ctrl_reg,
+ 				     unsigned int com_c_ready_status_reg,
++				     unsigned int com_cmn_status_reg,
+ 				     unsigned int dp_phy_status_reg)
+ {
+ 	const struct phy_configure_opts_dp *dp_opts = &qmp->dp_opts;
+@@ -2198,14 +2199,14 @@ static int qmp_v456_configure_dp_phy(struct qmp_combo *qmp,
+ 			10000))
+ 		return -ETIMEDOUT;
+ 
+-	if (readl_poll_timeout(qmp->dp_serdes + QSERDES_V4_COM_CMN_STATUS,
++	if (readl_poll_timeout(qmp->dp_serdes + com_cmn_status_reg,
+ 			status,
+ 			((status & BIT(0)) > 0),
+ 			500,
+ 			10000))
+ 		return -ETIMEDOUT;
+ 
+-	if (readl_poll_timeout(qmp->dp_serdes + QSERDES_V4_COM_CMN_STATUS,
++	if (readl_poll_timeout(qmp->dp_serdes + com_cmn_status_reg,
+ 			status,
+ 			((status & BIT(1)) > 0),
+ 			500,
+@@ -2241,6 +2242,7 @@ static int qmp_v4_configure_dp_phy(struct qmp_combo *qmp)
+ 
+ 	ret = qmp_v456_configure_dp_phy(qmp, QSERDES_V4_COM_RESETSM_CNTRL,
+ 					QSERDES_V4_COM_C_READY_STATUS,
++					QSERDES_V4_COM_CMN_STATUS,
+ 					QSERDES_V4_DP_PHY_STATUS);
+ 	if (ret < 0)
+ 		return ret;
+@@ -2305,6 +2307,7 @@ static int qmp_v5_configure_dp_phy(struct qmp_combo *qmp)
+ 
+ 	ret = qmp_v456_configure_dp_phy(qmp, QSERDES_V4_COM_RESETSM_CNTRL,
+ 					QSERDES_V4_COM_C_READY_STATUS,
++					QSERDES_V4_COM_CMN_STATUS,
+ 					QSERDES_V4_DP_PHY_STATUS);
+ 	if (ret < 0)
+ 		return ret;
+@@ -2364,6 +2367,7 @@ static int qmp_v6_configure_dp_phy(struct qmp_combo *qmp)
+ 
+ 	ret = qmp_v456_configure_dp_phy(qmp, QSERDES_V6_COM_RESETSM_CNTRL,
+ 					QSERDES_V6_COM_C_READY_STATUS,
++					QSERDES_V6_COM_CMN_STATUS,
+ 					QSERDES_V6_DP_PHY_STATUS);
+ 	if (ret < 0)
+ 		return ret;
+diff --git a/drivers/phy/tegra/xusb.c b/drivers/phy/tegra/xusb.c
+index b55d4e9f42b5c..a296b87dced18 100644
+--- a/drivers/phy/tegra/xusb.c
++++ b/drivers/phy/tegra/xusb.c
+@@ -568,6 +568,7 @@ static void tegra_xusb_port_unregister(struct tegra_xusb_port *port)
+ 		usb_role_switch_unregister(port->usb_role_sw);
+ 		cancel_work_sync(&port->usb_phy_work);
+ 		usb_remove_phy(&port->usb_phy);
++		port->usb_phy.dev->driver = NULL;
+ 	}
+ 
+ 	if (port->ops->remove)
+@@ -675,6 +676,9 @@ static int tegra_xusb_setup_usb_role_switch(struct tegra_xusb_port *port)
+ 	port->dev.driver = devm_kzalloc(&port->dev,
+ 					sizeof(struct device_driver),
+ 					GFP_KERNEL);
++	if (!port->dev.driver)
++		return -ENOMEM;
++
+ 	port->dev.driver->owner	 = THIS_MODULE;
+ 
+ 	port->usb_role_sw = usb_role_switch_register(&port->dev,
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+index 7435173e10f43..1489191a213fe 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -376,10 +376,8 @@ static int bcm2835_add_pin_ranges_fallback(struct gpio_chip *gc)
+ 	if (!pctldev)
+ 		return 0;
+ 
+-	gpiochip_add_pin_range(gc, pinctrl_dev_get_devname(pctldev), 0, 0,
+-			       gc->ngpio);
+-
+-	return 0;
++	return gpiochip_add_pin_range(gc, pinctrl_dev_get_devname(pctldev), 0, 0,
++				      gc->ngpio);
+ }
+ 
+ static const struct gpio_chip bcm2835_gpio_chip = {
+diff --git a/drivers/pinctrl/freescale/pinctrl-scu.c b/drivers/pinctrl/freescale/pinctrl-scu.c
+index ea261b6e74581..3b252d684d723 100644
+--- a/drivers/pinctrl/freescale/pinctrl-scu.c
++++ b/drivers/pinctrl/freescale/pinctrl-scu.c
+@@ -90,7 +90,7 @@ int imx_pinconf_set_scu(struct pinctrl_dev *pctldev, unsigned pin_id,
+ 	struct imx_sc_msg_req_pad_set msg;
+ 	struct imx_sc_rpc_msg *hdr = &msg.hdr;
+ 	unsigned int mux = configs[0];
+-	unsigned int conf = configs[1];
++	unsigned int conf;
+ 	unsigned int val;
+ 	int ret;
+ 
+@@ -115,6 +115,7 @@ int imx_pinconf_set_scu(struct pinctrl_dev *pctldev, unsigned pin_id,
+ 	 * Set mux and conf together in one IPC call
+ 	 */
+ 	WARN_ON(num_configs != 2);
++	conf = configs[1];
+ 
+ 	val = conf | BM_PAD_CTL_IFMUX_ENABLE | BM_PAD_CTL_GP_ENABLE;
+ 	val |= mux << BP_PAD_CTL_IFMUX;
+diff --git a/drivers/pinctrl/intel/pinctrl-cherryview.c b/drivers/pinctrl/intel/pinctrl-cherryview.c
+index 722990e278361..87cf1e7403979 100644
+--- a/drivers/pinctrl/intel/pinctrl-cherryview.c
++++ b/drivers/pinctrl/intel/pinctrl-cherryview.c
+@@ -949,11 +949,6 @@ static int chv_config_get(struct pinctrl_dev *pctldev, unsigned int pin,
+ 
+ 		break;
+ 
+-	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
+-		if (!(ctrl1 & CHV_PADCTRL1_ODEN))
+-			return -EINVAL;
+-		break;
+-
+ 	case PIN_CONFIG_BIAS_HIGH_IMPEDANCE: {
+ 		u32 cfg;
+ 
+@@ -963,6 +958,16 @@ static int chv_config_get(struct pinctrl_dev *pctldev, unsigned int pin,
+ 			return -EINVAL;
+ 
+ 		break;
++
++	case PIN_CONFIG_DRIVE_PUSH_PULL:
++		if (ctrl1 & CHV_PADCTRL1_ODEN)
++			return -EINVAL;
++		break;
++
++	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
++		if (!(ctrl1 & CHV_PADCTRL1_ODEN))
++			return -EINVAL;
++		break;
+ 	}
+ 
+ 	default:
+diff --git a/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c b/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
+index 21e61c2a37988..843ffcd968774 100644
+--- a/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
++++ b/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
+@@ -1884,6 +1884,8 @@ static int npcm7xx_gpio_of(struct npcm7xx_pinctrl *pctrl)
+ 		}
+ 
+ 		pctrl->gpio_bank[id].base = ioremap(res.start, resource_size(&res));
++		if (!pctrl->gpio_bank[id].base)
++			return -EINVAL;
+ 
+ 		ret = bgpio_init(&pctrl->gpio_bank[id].gc, dev, 4,
+ 				 pctrl->gpio_bank[id].base + NPCM7XX_GP_N_DIN,
+diff --git a/drivers/pinctrl/pinctrl-at91-pio4.c b/drivers/pinctrl/pinctrl-at91-pio4.c
+index 2fe40acb6a3e5..98c2122817264 100644
+--- a/drivers/pinctrl/pinctrl-at91-pio4.c
++++ b/drivers/pinctrl/pinctrl-at91-pio4.c
+@@ -1146,6 +1146,8 @@ static int atmel_pinctrl_probe(struct platform_device *pdev)
+ 		/* Pin naming convention: P(bank_name)(bank_pin_number). */
+ 		pin_desc[i].name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "P%c%u",
+ 						  bank + 'A', line);
++		if (!pin_desc[i].name)
++			return -ENOMEM;
+ 
+ 		group->name = group_names[i] = pin_desc[i].name;
+ 		group->pin = pin_desc[i].number;
+diff --git a/drivers/pinctrl/pinctrl-at91.c b/drivers/pinctrl/pinctrl-at91.c
+index 871209c241532..39956d821ad75 100644
+--- a/drivers/pinctrl/pinctrl-at91.c
++++ b/drivers/pinctrl/pinctrl-at91.c
+@@ -1389,8 +1389,8 @@ static int at91_pinctrl_probe(struct platform_device *pdev)
+ 		char **names;
+ 
+ 		names = devm_kasprintf_strarray(dev, "pio", MAX_NB_GPIO_PER_BANK);
+-		if (!names)
+-			return -ENOMEM;
++		if (IS_ERR(names))
++			return PTR_ERR(names);
+ 
+ 		for (j = 0; j < MAX_NB_GPIO_PER_BANK; j++, k++) {
+ 			char *name = names[j];
+@@ -1870,8 +1870,8 @@ static int at91_gpio_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	names = devm_kasprintf_strarray(dev, "pio", chip->ngpio);
+-	if (!names)
+-		return -ENOMEM;
++	if (IS_ERR(names))
++		return PTR_ERR(names);
+ 
+ 	for (i = 0; i < chip->ngpio; i++)
+ 		strreplace(names[i], '-', alias_idx + 'A');
+diff --git a/drivers/pinctrl/pinctrl-microchip-sgpio.c b/drivers/pinctrl/pinctrl-microchip-sgpio.c
+index 4794602316e7d..666d8b7cdbad3 100644
+--- a/drivers/pinctrl/pinctrl-microchip-sgpio.c
++++ b/drivers/pinctrl/pinctrl-microchip-sgpio.c
+@@ -818,6 +818,9 @@ static int microchip_sgpio_register_bank(struct device *dev,
+ 	pctl_desc->name = devm_kasprintf(dev, GFP_KERNEL, "%s-%sput",
+ 					 dev_name(dev),
+ 					 bank->is_input ? "in" : "out");
++	if (!pctl_desc->name)
++		return -ENOMEM;
++
+ 	pctl_desc->pctlops = &sgpio_pctl_ops;
+ 	pctl_desc->pmxops = &sgpio_pmx_ops;
+ 	pctl_desc->confops = &sgpio_confops;
+diff --git a/drivers/pinctrl/sunplus/sppctl.c b/drivers/pinctrl/sunplus/sppctl.c
+index 6bbbab3a6fdf3..150996949ede7 100644
+--- a/drivers/pinctrl/sunplus/sppctl.c
++++ b/drivers/pinctrl/sunplus/sppctl.c
+@@ -834,11 +834,6 @@ static int sppctl_dt_node_to_map(struct pinctrl_dev *pctldev, struct device_node
+ 	int i, size = 0;
+ 
+ 	list = of_get_property(np_config, "sunplus,pins", &size);
+-
+-	if (nmG <= 0)
+-		nmG = 0;
+-
+-	parent = of_get_parent(np_config);
+ 	*num_maps = size / sizeof(*list);
+ 
+ 	/*
+@@ -866,10 +861,14 @@ static int sppctl_dt_node_to_map(struct pinctrl_dev *pctldev, struct device_node
+ 		}
+ 	}
+ 
++	if (nmG <= 0)
++		nmG = 0;
++
+ 	*map = kcalloc(*num_maps + nmG, sizeof(**map), GFP_KERNEL);
+-	if (*map == NULL)
++	if (!(*map))
+ 		return -ENOMEM;
+ 
++	parent = of_get_parent(np_config);
+ 	for (i = 0; i < (*num_maps); i++) {
+ 		dt_pin = be32_to_cpu(list[i]);
+ 		pin_num = FIELD_GET(GENMASK(31, 24), dt_pin);
+@@ -883,6 +882,8 @@ static int sppctl_dt_node_to_map(struct pinctrl_dev *pctldev, struct device_node
+ 			(*map)[i].data.configs.num_configs = 1;
+ 			(*map)[i].data.configs.group_or_pin = pin_get_name(pctldev, pin_num);
+ 			configs = kmalloc(sizeof(*configs), GFP_KERNEL);
++			if (!configs)
++				goto sppctl_map_err;
+ 			*configs = FIELD_GET(GENMASK(7, 0), dt_pin);
+ 			(*map)[i].data.configs.configs = configs;
+ 
+@@ -896,6 +897,8 @@ static int sppctl_dt_node_to_map(struct pinctrl_dev *pctldev, struct device_node
+ 			(*map)[i].data.configs.num_configs = 1;
+ 			(*map)[i].data.configs.group_or_pin = pin_get_name(pctldev, pin_num);
+ 			configs = kmalloc(sizeof(*configs), GFP_KERNEL);
++			if (!configs)
++				goto sppctl_map_err;
+ 			*configs = SPPCTL_IOP_CONFIGS;
+ 			(*map)[i].data.configs.configs = configs;
+ 
+@@ -965,6 +968,14 @@ static int sppctl_dt_node_to_map(struct pinctrl_dev *pctldev, struct device_node
+ 	of_node_put(parent);
+ 	dev_dbg(pctldev->dev, "%d pins mapped\n", *num_maps);
+ 	return 0;
++
++sppctl_map_err:
++	for (i = 0; i < (*num_maps); i++)
++		if ((*map)[i].type == PIN_MAP_TYPE_CONFIGS_PIN)
++			kfree((*map)[i].data.configs.configs);
++	kfree(*map);
++	of_node_put(parent);
++	return -ENOMEM;
+ }
+ 
+ static const struct pinctrl_ops sppctl_pctl_ops = {
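/*
 * Sketch (reduced types): the sppctl_map_err path above is the usual
 * "unwind what was built so far" shape for ->dt_node_to_map() -- free
 * every per-entry allocation already made, then the array itself.
 */
#include <linux/errno.h>
#include <linux/slab.h>

static int sketch_build_maps(unsigned int n, int ***out)
{
	int **maps;
	unsigned int i;

	maps = kcalloc(n, sizeof(*maps), GFP_KERNEL);
	if (!maps)
		return -ENOMEM;

	for (i = 0; i < n; i++) {
		maps[i] = kmalloc(sizeof(**maps), GFP_KERNEL);
		if (!maps[i])
			goto err;
	}
	*out = maps;
	return 0;

err:
	while (i--)	/* free only what was actually allocated */
		kfree(maps[i]);
	kfree(maps);
	return -ENOMEM;
}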
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.c b/drivers/pinctrl/tegra/pinctrl-tegra.c
+index 1729b7ddfa946..21e08fbd1df0e 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra.c
+@@ -232,7 +232,7 @@ static const char *tegra_pinctrl_get_func_name(struct pinctrl_dev *pctldev,
+ {
+ 	struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev);
+ 
+-	return pmx->soc->functions[function].name;
++	return pmx->functions[function].name;
+ }
+ 
+ static int tegra_pinctrl_get_func_groups(struct pinctrl_dev *pctldev,
+@@ -242,8 +242,8 @@ static int tegra_pinctrl_get_func_groups(struct pinctrl_dev *pctldev,
+ {
+ 	struct tegra_pmx *pmx = pinctrl_dev_get_drvdata(pctldev);
+ 
+-	*groups = pmx->soc->functions[function].groups;
+-	*num_groups = pmx->soc->functions[function].ngroups;
++	*groups = pmx->functions[function].groups;
++	*num_groups = pmx->functions[function].ngroups;
+ 
+ 	return 0;
+ }
+@@ -795,10 +795,17 @@ int tegra_pinctrl_probe(struct platform_device *pdev,
+ 	if (!pmx->group_pins)
+ 		return -ENOMEM;
+ 
++	pmx->functions = devm_kcalloc(&pdev->dev, pmx->soc->nfunctions,
++				      sizeof(*pmx->functions), GFP_KERNEL);
++	if (!pmx->functions)
++		return -ENOMEM;
++
+ 	group_pins = pmx->group_pins;
++
+ 	for (fn = 0; fn < soc_data->nfunctions; fn++) {
+-		struct tegra_function *func = &soc_data->functions[fn];
++		struct tegra_function *func = &pmx->functions[fn];
+ 
++		func->name = pmx->soc->functions[fn];
+ 		func->groups = group_pins;
+ 
+ 		for (gn = 0; gn < soc_data->ngroups; gn++) {
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.h b/drivers/pinctrl/tegra/pinctrl-tegra.h
+index 6130cba7cce54..b3289bdf727d8 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra.h
++++ b/drivers/pinctrl/tegra/pinctrl-tegra.h
+@@ -13,6 +13,7 @@ struct tegra_pmx {
+ 	struct pinctrl_dev *pctl;
+ 
+ 	const struct tegra_pinctrl_soc_data *soc;
++	struct tegra_function *functions;
+ 	const char **group_pins;
+ 
+ 	struct pinctrl_gpio_range gpio_range;
+@@ -191,7 +192,7 @@ struct tegra_pinctrl_soc_data {
+ 	const char *gpio_compatible;
+ 	const struct pinctrl_pin_desc *pins;
+ 	unsigned npins;
+-	struct tegra_function *functions;
++	const char * const *functions;
+ 	unsigned nfunctions;
+ 	const struct tegra_pingroup *groups;
+ 	unsigned ngroups;
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra114.c b/drivers/pinctrl/tegra/pinctrl-tegra114.c
+index e72ab1eb23983..3d425b2018e78 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra114.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra114.c
+@@ -1452,12 +1452,9 @@ enum tegra_mux {
+ 	TEGRA_MUX_VI_ALT3,
+ };
+ 
+-#define FUNCTION(fname)					\
+-	{						\
+-		.name = #fname,				\
+-	}
++#define FUNCTION(fname) #fname
+ 
+-static struct tegra_function tegra114_functions[] = {
++static const char * const tegra114_functions[] = {
+ 	FUNCTION(blink),
+ 	FUNCTION(cec),
+ 	FUNCTION(cldvfs),
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra124.c b/drivers/pinctrl/tegra/pinctrl-tegra124.c
+index 26096c6b967e2..2a50c5c7516c3 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra124.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra124.c
+@@ -1611,12 +1611,9 @@ enum tegra_mux {
+ 	TEGRA_MUX_VIMCLK2_ALT,
+ };
+ 
+-#define FUNCTION(fname)					\
+-	{						\
+-		.name = #fname,				\
+-	}
++#define FUNCTION(fname) #fname
+ 
+-static struct tegra_function tegra124_functions[] = {
++static const char * const tegra124_functions[] = {
+ 	FUNCTION(blink),
+ 	FUNCTION(ccla),
+ 	FUNCTION(cec),
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra194.c b/drivers/pinctrl/tegra/pinctrl-tegra194.c
+index 277973c884344..69f58df628977 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra194.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra194.c
+@@ -1189,12 +1189,9 @@ enum tegra_mux_dt {
+ };
+ 
+ /* Make list of each function name */
+-#define TEGRA_PIN_FUNCTION(lid)			\
+-	{					\
+-		.name = #lid,			\
+-	}
++#define TEGRA_PIN_FUNCTION(lid) #lid
+ 
+-static struct tegra_function tegra194_functions[] = {
++static const char * const tegra194_functions[] = {
+ 	TEGRA_PIN_FUNCTION(rsvd0),
+ 	TEGRA_PIN_FUNCTION(rsvd1),
+ 	TEGRA_PIN_FUNCTION(rsvd2),
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra20.c b/drivers/pinctrl/tegra/pinctrl-tegra20.c
+index 0dc2cf0d05b1e..737fc2000f66b 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra20.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra20.c
+@@ -1889,12 +1889,9 @@ enum tegra_mux {
+ 	TEGRA_MUX_XIO,
+ };
+ 
+-#define FUNCTION(fname)					\
+-	{						\
+-		.name = #fname,				\
+-	}
++#define FUNCTION(fname) #fname
+ 
+-static struct tegra_function tegra20_functions[] = {
++static const char * const tegra20_functions[] = {
+ 	FUNCTION(ahb_clk),
+ 	FUNCTION(apb_clk),
+ 	FUNCTION(audio_sync),
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra210.c b/drivers/pinctrl/tegra/pinctrl-tegra210.c
+index b480f607fa16f..9bb29146dfff7 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra210.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra210.c
+@@ -1185,12 +1185,9 @@ enum tegra_mux {
+ 	TEGRA_MUX_VIMCLK2,
+ };
+ 
+-#define FUNCTION(fname)					\
+-	{						\
+-		.name = #fname,				\
+-	}
++#define FUNCTION(fname) #fname
+ 
+-static struct tegra_function tegra210_functions[] = {
++static const char * const tegra210_functions[] = {
+ 	FUNCTION(aud),
+ 	FUNCTION(bcl),
+ 	FUNCTION(blink),
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra30.c b/drivers/pinctrl/tegra/pinctrl-tegra30.c
+index 7299a371827f1..de5aa2d4d28d3 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra30.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra30.c
+@@ -2010,12 +2010,9 @@ enum tegra_mux {
+ 	TEGRA_MUX_VI_ALT3,
+ };
+ 
+-#define FUNCTION(fname)					\
+-	{						\
+-		.name = #fname,				\
+-	}
++#define FUNCTION(fname) #fname
+ 
+-static struct tegra_function tegra30_functions[] = {
++static const char * const tegra30_functions[] = {
+ 	FUNCTION(blink),
+ 	FUNCTION(cec),
+ 	FUNCTION(clk_12m_out),
+diff --git a/drivers/platform/x86/dell/dell-rbtn.c b/drivers/platform/x86/dell/dell-rbtn.c
+index aa0e6c9074942..c8fcb537fd65d 100644
+--- a/drivers/platform/x86/dell/dell-rbtn.c
++++ b/drivers/platform/x86/dell/dell-rbtn.c
+@@ -395,16 +395,16 @@ static int rbtn_add(struct acpi_device *device)
+ 		return -EINVAL;
+ 	}
+ 
++	rbtn_data = devm_kzalloc(&device->dev, sizeof(*rbtn_data), GFP_KERNEL);
++	if (!rbtn_data)
++		return -ENOMEM;
++
+ 	ret = rbtn_acquire(device, true);
+ 	if (ret < 0) {
+ 		dev_err(&device->dev, "Cannot enable device\n");
+ 		return ret;
+ 	}
+ 
+-	rbtn_data = devm_kzalloc(&device->dev, sizeof(*rbtn_data), GFP_KERNEL);
+-	if (!rbtn_data)
+-		return -ENOMEM;
+-
+ 	rbtn_data->type = type;
+ 	device->driver_data = rbtn_data;
+ 
+@@ -420,10 +420,12 @@ static int rbtn_add(struct acpi_device *device)
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
++		break;
+ 	}
++	if (ret)
++		rbtn_acquire(device, false);
+ 
+ 	return ret;
+-
+ }
+ 
+ static void rbtn_remove(struct acpi_device *device)
+@@ -442,7 +444,6 @@ static void rbtn_remove(struct acpi_device *device)
+ 	}
+ 
+ 	rbtn_acquire(device, false);
+-	device->driver_data = NULL;
+ }
+ 
+ static void rbtn_notify(struct acpi_device *device, u32 event)
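/*
 * Sketch (names hypothetical): the rbtn_add() reordering above follows
 * the usual probe rule -- allocate memory before side-effectful
 * hardware enablement, so an -ENOMEM cannot leave the device half
 * enabled with nothing to undo it.
 */
#include <linux/device.h>
#include <linux/slab.h>

struct sketch_data { int type; };
static int sketch_hw_acquire(struct device *dev, bool enable);

static int sketch_add(struct device *dev)
{
	struct sketch_data *data;
	int ret;

	/* Memory first: failure here needs no cleanup. */
	data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
	if (!data)
		return -ENOMEM;

	/* Only then touch the hardware. */
	ret = sketch_hw_acquire(dev, true);
	if (ret < 0)
		return ret;

	dev_set_drvdata(dev, data);
	return 0;
}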
+diff --git a/drivers/platform/x86/intel/pmc/core.c b/drivers/platform/x86/intel/pmc/core.c
+index da6e7206d38b5..ed91ef9d1cf6c 100644
+--- a/drivers/platform/x86/intel/pmc/core.c
++++ b/drivers/platform/x86/intel/pmc/core.c
+@@ -1039,7 +1039,6 @@ static const struct x86_cpu_id intel_pmc_core_ids[] = {
+ 	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P,        tgl_core_init),
+ 	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE,		adl_core_init),
+ 	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_S,	adl_core_init),
+-	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE,          mtl_core_init),
+ 	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE_L,	mtl_core_init),
+ 	{}
+ };
+@@ -1223,11 +1222,11 @@ static inline bool pmc_core_is_s0ix_failed(struct pmc_dev *pmcdev)
+ 	return false;
+ }
+ 
+-static __maybe_unused int pmc_core_resume(struct device *dev)
++int pmc_core_resume_common(struct pmc_dev *pmcdev)
+ {
+-	struct pmc_dev *pmcdev = dev_get_drvdata(dev);
+ 	const struct pmc_bit_map **maps = pmcdev->map->lpm_sts;
+ 	int offset = pmcdev->map->lpm_status_offset;
++	struct device *dev = &pmcdev->pdev->dev;
+ 
+ 	/* Check if the suspend used S0ix */
+ 	if (pm_suspend_via_firmware())
+@@ -1257,6 +1256,16 @@ static __maybe_unused int pmc_core_resume(struct device *dev)
+ 	return 0;
+ }
+ 
++static __maybe_unused int pmc_core_resume(struct device *dev)
++{
++	struct pmc_dev *pmcdev = dev_get_drvdata(dev);
++
++	if (pmcdev->resume)
++		return pmcdev->resume(pmcdev);
++
++	return pmc_core_resume_common(pmcdev);
++}
++
+ static const struct dev_pm_ops pmc_core_pm_ops = {
+ 	SET_LATE_SYSTEM_SLEEP_PM_OPS(pmc_core_suspend, pmc_core_resume)
+ };
+diff --git a/drivers/platform/x86/intel/pmc/core.h b/drivers/platform/x86/intel/pmc/core.h
+index 9ca9b97467193..86d38270000a7 100644
+--- a/drivers/platform/x86/intel/pmc/core.h
++++ b/drivers/platform/x86/intel/pmc/core.h
+@@ -249,6 +249,14 @@ enum ppfear_regs {
+ #define MTL_LPM_STATUS_LATCH_EN_OFFSET		0x16F8
+ #define MTL_LPM_STATUS_OFFSET			0x1700
+ #define MTL_LPM_LIVE_STATUS_OFFSET		0x175C
++#define MTL_PMC_LTR_IOE_PMC			0x1C0C
++#define MTL_PMC_LTR_ESE				0x1BAC
++#define MTL_SOCM_NUM_IP_IGN_ALLOWED		25
++#define MTL_SOC_PMC_MMIO_REG_LEN		0x2708
++#define MTL_PMC_LTR_SPG				0x1B74
++
++/* Meteor Lake PGD PFET Enable Ack Status */
++#define MTL_SOCM_PPFEAR_NUM_ENTRIES		8
+ 
+ extern const char *pmc_lpm_modes[];
+ 
+@@ -327,6 +335,7 @@ struct pmc_reg_map {
+  * @lpm_en_modes:	Array of enabled modes from lowest to highest priority
+  * @lpm_req_regs:	List of substate requirements
+  * @core_configure:	Function pointer to configure the platform
++ * @resume:		Function to perform platform specific resume
+  *
+  * pmc_dev contains info about power management controller device.
+  */
+@@ -345,6 +354,7 @@ struct pmc_dev {
+ 	int lpm_en_modes[LPM_MAX_NUM_MODES];
+ 	u32 *lpm_req_regs;
+ 	void (*core_configure)(struct pmc_dev *pmcdev);
++	int (*resume)(struct pmc_dev *pmcdev);
+ };
+ 
+ extern const struct pmc_bit_map msr_map[];
+@@ -393,11 +403,30 @@ extern const struct pmc_bit_map adl_vnn_req_status_3_map[];
+ extern const struct pmc_bit_map adl_vnn_misc_status_map[];
+ extern const struct pmc_bit_map *adl_lpm_maps[];
+ extern const struct pmc_reg_map adl_reg_map;
+-extern const struct pmc_reg_map mtl_reg_map;
++extern const struct pmc_bit_map mtl_socm_pfear_map[];
++extern const struct pmc_bit_map *ext_mtl_socm_pfear_map[];
++extern const struct pmc_bit_map mtl_socm_ltr_show_map[];
++extern const struct pmc_bit_map mtl_socm_clocksource_status_map[];
++extern const struct pmc_bit_map mtl_socm_power_gating_status_0_map[];
++extern const struct pmc_bit_map mtl_socm_power_gating_status_1_map[];
++extern const struct pmc_bit_map mtl_socm_power_gating_status_2_map[];
++extern const struct pmc_bit_map mtl_socm_d3_status_0_map[];
++extern const struct pmc_bit_map mtl_socm_d3_status_1_map[];
++extern const struct pmc_bit_map mtl_socm_d3_status_2_map[];
++extern const struct pmc_bit_map mtl_socm_d3_status_3_map[];
++extern const struct pmc_bit_map mtl_socm_vnn_req_status_0_map[];
++extern const struct pmc_bit_map mtl_socm_vnn_req_status_1_map[];
++extern const struct pmc_bit_map mtl_socm_vnn_req_status_2_map[];
++extern const struct pmc_bit_map mtl_socm_vnn_req_status_3_map[];
++extern const struct pmc_bit_map mtl_socm_vnn_misc_status_map[];
++extern const struct pmc_bit_map mtl_socm_signal_status_map[];
++extern const struct pmc_bit_map *mtl_socm_lpm_maps[];
++extern const struct pmc_reg_map mtl_socm_reg_map;
+ 
+ extern void pmc_core_get_tgl_lpm_reqs(struct platform_device *pdev);
+ extern int pmc_core_send_ltr_ignore(struct pmc_dev *pmcdev, u32 value);
+ 
++int pmc_core_resume_common(struct pmc_dev *pmcdev);
+ void spt_core_init(struct pmc_dev *pmcdev);
+ void cnp_core_init(struct pmc_dev *pmcdev);
+ void icl_core_init(struct pmc_dev *pmcdev);
+diff --git a/drivers/platform/x86/intel/pmc/mtl.c b/drivers/platform/x86/intel/pmc/mtl.c
+index e8cc156412ce5..cdcf743b5e2c7 100644
+--- a/drivers/platform/x86/intel/pmc/mtl.c
++++ b/drivers/platform/x86/intel/pmc/mtl.c
+@@ -11,28 +11,458 @@
+ #include <linux/pci.h>
+ #include "core.h"
+ 
+-const struct pmc_reg_map mtl_reg_map = {
+-	.pfear_sts = ext_tgl_pfear_map,
++/*
++ * Die Mapping to Product.
++ * Product SOCDie IOEDie PCHDie
++ * MTL-M   SOC-M  IOE-M  None
++ * MTL-P   SOC-M  IOE-P  None
++ * MTL-S   SOC-S  IOE-P  PCH-S
++ */
++
++const struct pmc_bit_map mtl_socm_pfear_map[] = {
++	{"PMC",                 BIT(0)},
++	{"OPI",                 BIT(1)},
++	{"SPI",                 BIT(2)},
++	{"XHCI",                BIT(3)},
++	{"SPA",                 BIT(4)},
++	{"SPB",                 BIT(5)},
++	{"SPC",                 BIT(6)},
++	{"GBE",                 BIT(7)},
++
++	{"SATA",                BIT(0)},
++	{"DSP0",                BIT(1)},
++	{"DSP1",                BIT(2)},
++	{"DSP2",                BIT(3)},
++	{"DSP3",                BIT(4)},
++	{"SPD",                 BIT(5)},
++	{"LPSS",                BIT(6)},
++	{"LPC",                 BIT(7)},
++
++	{"SMB",                 BIT(0)},
++	{"ISH",                 BIT(1)},
++	{"P2SB",                BIT(2)},
++	{"NPK_VNN",             BIT(3)},
++	{"SDX",                 BIT(4)},
++	{"SPE",                 BIT(5)},
++	{"FUSE",                BIT(6)},
++	{"SBR8",                BIT(7)},
++
++	{"RSVD24",              BIT(0)},
++	{"OTG",                 BIT(1)},
++	{"EXI",                 BIT(2)},
++	{"CSE",                 BIT(3)},
++	{"CSME_KVM",            BIT(4)},
++	{"CSME_PMT",            BIT(5)},
++	{"CSME_CLINK",          BIT(6)},
++	{"CSME_PTIO",           BIT(7)},
++
++	{"CSME_USBR",           BIT(0)},
++	{"CSME_SUSRAM",         BIT(1)},
++	{"CSME_SMT1",           BIT(2)},
++	{"RSVD35",              BIT(3)},
++	{"CSME_SMS2",           BIT(4)},
++	{"CSME_SMS",            BIT(5)},
++	{"CSME_RTC",            BIT(6)},
++	{"CSME_PSF",            BIT(7)},
++
++	{"SBR0",                BIT(0)},
++	{"SBR1",                BIT(1)},
++	{"SBR2",                BIT(2)},
++	{"SBR3",                BIT(3)},
++	{"SBR4",                BIT(4)},
++	{"SBR5",                BIT(5)},
++	{"RSVD46",              BIT(6)},
++	{"PSF1",                BIT(7)},
++
++	{"PSF2",                BIT(0)},
++	{"PSF3",                BIT(1)},
++	{"PSF4",                BIT(2)},
++	{"CNVI",                BIT(3)},
++	{"UFSX2",               BIT(4)},
++	{"EMMC",                BIT(5)},
++	{"SPF",                 BIT(6)},
++	{"SBR6",                BIT(7)},
++
++	{"SBR7",                BIT(0)},
++	{"NPK_AON",             BIT(1)},
++	{"HDA4",                BIT(2)},
++	{"HDA5",                BIT(3)},
++	{"HDA6",                BIT(4)},
++	{"PSF6",                BIT(5)},
++	{"RSVD62",              BIT(6)},
++	{"RSVD63",              BIT(7)},
++	{}
++};
++
++const struct pmc_bit_map *ext_mtl_socm_pfear_map[] = {
++	mtl_socm_pfear_map,
++	NULL
++};
++
++const struct pmc_bit_map mtl_socm_ltr_show_map[] = {
++	{"SOUTHPORT_A",		CNP_PMC_LTR_SPA},
++	{"SOUTHPORT_B",		CNP_PMC_LTR_SPB},
++	{"SATA",		CNP_PMC_LTR_SATA},
++	{"GIGABIT_ETHERNET",	CNP_PMC_LTR_GBE},
++	{"XHCI",		CNP_PMC_LTR_XHCI},
++	{"SOUTHPORT_F",		ADL_PMC_LTR_SPF},
++	{"ME",			CNP_PMC_LTR_ME},
++	{"SATA1",		CNP_PMC_LTR_EVA},
++	{"SOUTHPORT_C",		CNP_PMC_LTR_SPC},
++	{"HD_AUDIO",		CNP_PMC_LTR_AZ},
++	{"CNV",			CNP_PMC_LTR_CNV},
++	{"LPSS",		CNP_PMC_LTR_LPSS},
++	{"SOUTHPORT_D",		CNP_PMC_LTR_SPD},
++	{"SOUTHPORT_E",		CNP_PMC_LTR_SPE},
++	{"SATA2",		CNP_PMC_LTR_CAM},
++	{"ESPI",		CNP_PMC_LTR_ESPI},
++	{"SCC",			CNP_PMC_LTR_SCC},
++	{"ISH",                 CNP_PMC_LTR_ISH},
++	{"UFSX2",		CNP_PMC_LTR_UFSX2},
++	{"EMMC",		CNP_PMC_LTR_EMMC},
++	{"WIGIG",		ICL_PMC_LTR_WIGIG},
++	{"THC0",		TGL_PMC_LTR_THC0},
++	{"THC1",		TGL_PMC_LTR_THC1},
++	{"SOUTHPORT_G",		MTL_PMC_LTR_SPG},
++	{"ESE",                 MTL_PMC_LTR_ESE},
++	{"IOE_PMC",		MTL_PMC_LTR_IOE_PMC},
++
++	/* Below two cannot be used for LTR_IGNORE */
++	{"CURRENT_PLATFORM",	CNP_PMC_LTR_CUR_PLT},
++	{"AGGREGATED_SYSTEM",	CNP_PMC_LTR_CUR_ASLT},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_clocksource_status_map[] = {
++	{"AON2_OFF_STS",                 BIT(0)},
++	{"AON3_OFF_STS",                 BIT(1)},
++	{"AON4_OFF_STS",                 BIT(2)},
++	{"AON5_OFF_STS",                 BIT(3)},
++	{"AON1_OFF_STS",                 BIT(4)},
++	{"XTAL_LVM_OFF_STS",             BIT(5)},
++	{"MPFPW1_0_PLL_OFF_STS",         BIT(6)},
++	{"MPFPW1_1_PLL_OFF_STS",         BIT(7)},
++	{"USB3_PLL_OFF_STS",             BIT(8)},
++	{"AON3_SPL_OFF_STS",             BIT(9)},
++	{"MPFPW2_0_PLL_OFF_STS",         BIT(12)},
++	{"MPFPW3_0_PLL_OFF_STS",         BIT(13)},
++	{"XTAL_AGGR_OFF_STS",            BIT(17)},
++	{"USB2_PLL_OFF_STS",             BIT(18)},
++	{"FILTER_PLL_OFF_STS",           BIT(22)},
++	{"ACE_PLL_OFF_STS",              BIT(24)},
++	{"FABRIC_PLL_OFF_STS",           BIT(25)},
++	{"SOC_PLL_OFF_STS",              BIT(26)},
++	{"PCIFAB_PLL_OFF_STS",           BIT(27)},
++	{"REF_PLL_OFF_STS",              BIT(28)},
++	{"IMG_PLL_OFF_STS",              BIT(29)},
++	{"RTC_PLL_OFF_STS",              BIT(31)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_power_gating_status_0_map[] = {
++	{"PMC_PGD0_PG_STS",              BIT(0)},
++	{"DMI_PGD0_PG_STS",              BIT(1)},
++	{"ESPISPI_PGD0_PG_STS",          BIT(2)},
++	{"XHCI_PGD0_PG_STS",             BIT(3)},
++	{"SPA_PGD0_PG_STS",              BIT(4)},
++	{"SPB_PGD0_PG_STS",              BIT(5)},
++	{"SPC_PGD0_PG_STS",              BIT(6)},
++	{"GBE_PGD0_PG_STS",              BIT(7)},
++	{"SATA_PGD0_PG_STS",             BIT(8)},
++	{"PSF13_PGD0_PG_STS",            BIT(9)},
++	{"SOC_D2D_PGD3_PG_STS",          BIT(10)},
++	{"MPFPW3_PGD0_PG_STS",           BIT(11)},
++	{"ESE_PGD0_PG_STS",              BIT(12)},
++	{"SPD_PGD0_PG_STS",              BIT(13)},
++	{"LPSS_PGD0_PG_STS",             BIT(14)},
++	{"LPC_PGD0_PG_STS",              BIT(15)},
++	{"SMB_PGD0_PG_STS",              BIT(16)},
++	{"ISH_PGD0_PG_STS",              BIT(17)},
++	{"P2S_PGD0_PG_STS",              BIT(18)},
++	{"NPK_PGD0_PG_STS",              BIT(19)},
++	{"DBG_SBR_PGD0_PG_STS",          BIT(20)},
++	{"SBRG_PGD0_PG_STS",             BIT(21)},
++	{"FUSE_PGD0_PG_STS",             BIT(22)},
++	{"SBR8_PGD0_PG_STS",             BIT(23)},
++	{"SOC_D2D_PGD2_PG_STS",          BIT(24)},
++	{"XDCI_PGD0_PG_STS",             BIT(25)},
++	{"EXI_PGD0_PG_STS",              BIT(26)},
++	{"CSE_PGD0_PG_STS",              BIT(27)},
++	{"KVMCC_PGD0_PG_STS",            BIT(28)},
++	{"PMT_PGD0_PG_STS",              BIT(29)},
++	{"CLINK_PGD0_PG_STS",            BIT(30)},
++	{"PTIO_PGD0_PG_STS",             BIT(31)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_power_gating_status_1_map[] = {
++	{"USBR0_PGD0_PG_STS",            BIT(0)},
++	{"SUSRAM_PGD0_PG_STS",           BIT(1)},
++	{"SMT1_PGD0_PG_STS",             BIT(2)},
++	{"FIACPCB_U_PGD0_PG_STS",        BIT(3)},
++	{"SMS2_PGD0_PG_STS",             BIT(4)},
++	{"SMS1_PGD0_PG_STS",             BIT(5)},
++	{"CSMERTC_PGD0_PG_STS",          BIT(6)},
++	{"CSMEPSF_PGD0_PG_STS",          BIT(7)},
++	{"SBR0_PGD0_PG_STS",             BIT(8)},
++	{"SBR1_PGD0_PG_STS",             BIT(9)},
++	{"SBR2_PGD0_PG_STS",             BIT(10)},
++	{"SBR3_PGD0_PG_STS",             BIT(11)},
++	{"U3FPW1_PGD0_PG_STS",           BIT(12)},
++	{"SBR5_PGD0_PG_STS",             BIT(13)},
++	{"MPFPW1_PGD0_PG_STS",           BIT(14)},
++	{"UFSPW1_PGD0_PG_STS",           BIT(15)},
++	{"FIA_X_PGD0_PG_STS",            BIT(16)},
++	{"SOC_D2D_PGD0_PG_STS",          BIT(17)},
++	{"MPFPW2_PGD0_PG_STS",           BIT(18)},
++	{"CNVI_PGD0_PG_STS",             BIT(19)},
++	{"UFSX2_PGD0_PG_STS",            BIT(20)},
++	{"ENDBG_PGD0_PG_STS",            BIT(21)},
++	{"DBG_PSF_PGD0_PG_STS",          BIT(22)},
++	{"SBR6_PGD0_PG_STS",             BIT(23)},
++	{"SBR7_PGD0_PG_STS",             BIT(24)},
++	{"NPK_PGD1_PG_STS",              BIT(25)},
++	{"FIACPCB_X_PGD0_PG_STS",        BIT(26)},
++	{"DBC_PGD0_PG_STS",              BIT(27)},
++	{"FUSEGPSB_PGD0_PG_STS",         BIT(28)},
++	{"PSF6_PGD0_PG_STS",             BIT(29)},
++	{"PSF7_PGD0_PG_STS",             BIT(30)},
++	{"GBETSN1_PGD0_PG_STS",          BIT(31)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_power_gating_status_2_map[] = {
++	{"PSF8_PGD0_PG_STS",             BIT(0)},
++	{"FIA_PGD0_PG_STS",              BIT(1)},
++	{"SOC_D2D_PGD1_PG_STS",          BIT(2)},
++	{"FIA_U_PGD0_PG_STS",            BIT(3)},
++	{"TAM_PGD0_PG_STS",              BIT(4)},
++	{"GBETSN_PGD0_PG_STS",           BIT(5)},
++	{"TBTLSX_PGD0_PG_STS",           BIT(6)},
++	{"THC0_PGD0_PG_STS",             BIT(7)},
++	{"THC1_PGD0_PG_STS",             BIT(8)},
++	{"PMC_PGD1_PG_STS",              BIT(9)},
++	{"GNA_PGD0_PG_STS",              BIT(10)},
++	{"ACE_PGD0_PG_STS",              BIT(11)},
++	{"ACE_PGD1_PG_STS",              BIT(12)},
++	{"ACE_PGD2_PG_STS",              BIT(13)},
++	{"ACE_PGD3_PG_STS",              BIT(14)},
++	{"ACE_PGD4_PG_STS",              BIT(15)},
++	{"ACE_PGD5_PG_STS",              BIT(16)},
++	{"ACE_PGD6_PG_STS",              BIT(17)},
++	{"ACE_PGD7_PG_STS",              BIT(18)},
++	{"ACE_PGD8_PG_STS",              BIT(19)},
++	{"FIA_PGS_PGD0_PG_STS",          BIT(20)},
++	{"FIACPCB_PGS_PGD0_PG_STS",      BIT(21)},
++	{"FUSEPMSB_PGD0_PG_STS",         BIT(22)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_d3_status_0_map[] = {
++	{"LPSS_D3_STS",                  BIT(3)},
++	{"XDCI_D3_STS",                  BIT(4)},
++	{"XHCI_D3_STS",                  BIT(5)},
++	{"SPA_D3_STS",                   BIT(12)},
++	{"SPB_D3_STS",                   BIT(13)},
++	{"SPC_D3_STS",                   BIT(14)},
++	{"SPD_D3_STS",                   BIT(15)},
++	{"ESPISPI_D3_STS",               BIT(18)},
++	{"SATA_D3_STS",                  BIT(20)},
++	{"PSTH_D3_STS",                  BIT(21)},
++	{"DMI_D3_STS",                   BIT(22)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_d3_status_1_map[] = {
++	{"GBETSN1_D3_STS",               BIT(14)},
++	{"GBE_D3_STS",                   BIT(19)},
++	{"ITSS_D3_STS",                  BIT(23)},
++	{"P2S_D3_STS",                   BIT(24)},
++	{"CNVI_D3_STS",                  BIT(27)},
++	{"UFSX2_D3_STS",                 BIT(28)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_d3_status_2_map[] = {
++	{"GNA_D3_STS",                   BIT(0)},
++	{"CSMERTC_D3_STS",               BIT(1)},
++	{"SUSRAM_D3_STS",                BIT(2)},
++	{"CSE_D3_STS",                   BIT(4)},
++	{"KVMCC_D3_STS",                 BIT(5)},
++	{"USBR0_D3_STS",                 BIT(6)},
++	{"ISH_D3_STS",                   BIT(7)},
++	{"SMT1_D3_STS",                  BIT(8)},
++	{"SMT2_D3_STS",                  BIT(9)},
++	{"SMT3_D3_STS",                  BIT(10)},
++	{"CLINK_D3_STS",                 BIT(14)},
++	{"PTIO_D3_STS",                  BIT(16)},
++	{"PMT_D3_STS",                   BIT(17)},
++	{"SMS1_D3_STS",                  BIT(18)},
++	{"SMS2_D3_STS",                  BIT(19)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_d3_status_3_map[] = {
++	{"ESE_D3_STS",                   BIT(2)},
++	{"GBETSN_D3_STS",                BIT(13)},
++	{"THC0_D3_STS",                  BIT(14)},
++	{"THC1_D3_STS",                  BIT(15)},
++	{"ACE_D3_STS",                   BIT(23)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_vnn_req_status_0_map[] = {
++	{"LPSS_VNN_REQ_STS",             BIT(3)},
++	{"FIA_VNN_REQ_STS",              BIT(17)},
++	{"ESPISPI_VNN_REQ_STS",          BIT(18)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_vnn_req_status_1_map[] = {
++	{"NPK_VNN_REQ_STS",              BIT(4)},
++	{"DFXAGG_VNN_REQ_STS",           BIT(8)},
++	{"EXI_VNN_REQ_STS",              BIT(9)},
++	{"P2D_VNN_REQ_STS",              BIT(18)},
++	{"GBE_VNN_REQ_STS",              BIT(19)},
++	{"SMB_VNN_REQ_STS",              BIT(25)},
++	{"LPC_VNN_REQ_STS",              BIT(26)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_vnn_req_status_2_map[] = {
++	{"CSMERTC_VNN_REQ_STS",          BIT(1)},
++	{"CSE_VNN_REQ_STS",              BIT(4)},
++	{"ISH_VNN_REQ_STS",              BIT(7)},
++	{"SMT1_VNN_REQ_STS",             BIT(8)},
++	{"CLINK_VNN_REQ_STS",            BIT(14)},
++	{"SMS1_VNN_REQ_STS",             BIT(18)},
++	{"SMS2_VNN_REQ_STS",             BIT(19)},
++	{"GPIOCOM4_VNN_REQ_STS",         BIT(20)},
++	{"GPIOCOM3_VNN_REQ_STS",         BIT(21)},
++	{"GPIOCOM2_VNN_REQ_STS",         BIT(22)},
++	{"GPIOCOM1_VNN_REQ_STS",         BIT(23)},
++	{"GPIOCOM0_VNN_REQ_STS",         BIT(24)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_vnn_req_status_3_map[] = {
++	{"ESE_VNN_REQ_STS",              BIT(2)},
++	{"DTS0_VNN_REQ_STS",             BIT(7)},
++	{"GPIOCOM5_VNN_REQ_STS",         BIT(11)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_vnn_misc_status_map[] = {
++	{"CPU_C10_REQ_STS",              BIT(0)},
++	{"TS_OFF_REQ_STS",               BIT(1)},
++	{"PNDE_MET_REQ_STS",             BIT(2)},
++	{"PCIE_DEEP_PM_REQ_STS",         BIT(3)},
++	{"PMC_CLK_THROTTLE_EN_REQ_STS",  BIT(4)},
++	{"NPK_VNNAON_REQ_STS",           BIT(5)},
++	{"VNN_SOC_REQ_STS",              BIT(6)},
++	{"ISH_VNNAON_REQ_STS",           BIT(7)},
++	{"IOE_COND_MET_S02I2_0_REQ_STS", BIT(8)},
++	{"IOE_COND_MET_S02I2_1_REQ_STS", BIT(9)},
++	{"IOE_COND_MET_S02I2_2_REQ_STS", BIT(10)},
++	{"PLT_GREATER_REQ_STS",          BIT(11)},
++	{"PCIE_CLKREQ_REQ_STS",          BIT(12)},
++	{"PMC_IDLE_FB_OCP_REQ_STS",      BIT(13)},
++	{"PM_SYNC_STATES_REQ_STS",       BIT(14)},
++	{"EA_REQ_STS",                   BIT(15)},
++	{"MPHY_CORE_OFF_REQ_STS",        BIT(16)},
++	{"BRK_EV_EN_REQ_STS",            BIT(17)},
++	{"AUTO_DEMO_EN_REQ_STS",         BIT(18)},
++	{"ITSS_CLK_SRC_REQ_STS",         BIT(19)},
++	{"LPC_CLK_SRC_REQ_STS",          BIT(20)},
++	{"ARC_IDLE_REQ_STS",             BIT(21)},
++	{"MPHY_SUS_REQ_STS",             BIT(22)},
++	{"FIA_DEEP_PM_REQ_STS",          BIT(23)},
++	{"UXD_CONNECTED_REQ_STS",        BIT(24)},
++	{"ARC_INTERRUPT_WAKE_REQ_STS",   BIT(25)},
++	{"USB2_VNNAON_ACT_REQ_STS",      BIT(26)},
++	{"PRE_WAKE0_REQ_STS",            BIT(27)},
++	{"PRE_WAKE1_REQ_STS",            BIT(28)},
++	{"PRE_WAKE2_EN_REQ_STS",         BIT(29)},
++	{"WOV_REQ_STS",                  BIT(30)},
++	{"CNVI_V1P05_REQ_STS",           BIT(31)},
++	{}
++};
++
++const struct pmc_bit_map mtl_socm_signal_status_map[] = {
++	{"LSX_Wake0_En_STS",             BIT(0)},
++	{"LSX_Wake0_Pol_STS",            BIT(1)},
++	{"LSX_Wake1_En_STS",             BIT(2)},
++	{"LSX_Wake1_Pol_STS",            BIT(3)},
++	{"LSX_Wake2_En_STS",             BIT(4)},
++	{"LSX_Wake2_Pol_STS",            BIT(5)},
++	{"LSX_Wake3_En_STS",             BIT(6)},
++	{"LSX_Wake3_Pol_STS",            BIT(7)},
++	{"LSX_Wake4_En_STS",             BIT(8)},
++	{"LSX_Wake4_Pol_STS",            BIT(9)},
++	{"LSX_Wake5_En_STS",             BIT(10)},
++	{"LSX_Wake5_Pol_STS",            BIT(11)},
++	{"LSX_Wake6_En_STS",             BIT(12)},
++	{"LSX_Wake6_Pol_STS",            BIT(13)},
++	{"LSX_Wake7_En_STS",             BIT(14)},
++	{"LSX_Wake7_Pol_STS",            BIT(15)},
++	{"LPSS_Wake0_En_STS",            BIT(16)},
++	{"LPSS_Wake0_Pol_STS",           BIT(17)},
++	{"LPSS_Wake1_En_STS",            BIT(18)},
++	{"LPSS_Wake1_Pol_STS",           BIT(19)},
++	{"Int_Timer_SS_Wake0_En_STS",    BIT(20)},
++	{"Int_Timer_SS_Wake0_Pol_STS",   BIT(21)},
++	{"Int_Timer_SS_Wake1_En_STS",    BIT(22)},
++	{"Int_Timer_SS_Wake1_Pol_STS",   BIT(23)},
++	{"Int_Timer_SS_Wake2_En_STS",    BIT(24)},
++	{"Int_Timer_SS_Wake2_Pol_STS",   BIT(25)},
++	{"Int_Timer_SS_Wake3_En_STS",    BIT(26)},
++	{"Int_Timer_SS_Wake3_Pol_STS",   BIT(27)},
++	{"Int_Timer_SS_Wake4_En_STS",    BIT(28)},
++	{"Int_Timer_SS_Wake4_Pol_STS",   BIT(29)},
++	{"Int_Timer_SS_Wake5_En_STS",    BIT(30)},
++	{"Int_Timer_SS_Wake5_Pol_STS",   BIT(31)},
++	{}
++};
++
++const struct pmc_bit_map *mtl_socm_lpm_maps[] = {
++	mtl_socm_clocksource_status_map,
++	mtl_socm_power_gating_status_0_map,
++	mtl_socm_power_gating_status_1_map,
++	mtl_socm_power_gating_status_2_map,
++	mtl_socm_d3_status_0_map,
++	mtl_socm_d3_status_1_map,
++	mtl_socm_d3_status_2_map,
++	mtl_socm_d3_status_3_map,
++	mtl_socm_vnn_req_status_0_map,
++	mtl_socm_vnn_req_status_1_map,
++	mtl_socm_vnn_req_status_2_map,
++	mtl_socm_vnn_req_status_3_map,
++	mtl_socm_vnn_misc_status_map,
++	mtl_socm_signal_status_map,
++	NULL
++};
++
++const struct pmc_reg_map mtl_socm_reg_map = {
++	.pfear_sts = ext_mtl_socm_pfear_map,
+ 	.slp_s0_offset = CNP_PMC_SLP_S0_RES_COUNTER_OFFSET,
+ 	.slp_s0_res_counter_step = TGL_PMC_SLP_S0_RES_COUNTER_STEP,
+-	.ltr_show_sts = adl_ltr_show_map,
++	.ltr_show_sts = mtl_socm_ltr_show_map,
+ 	.msr_sts = msr_map,
+ 	.ltr_ignore_offset = CNP_PMC_LTR_IGNORE_OFFSET,
+-	.regmap_length = CNP_PMC_MMIO_REG_LEN,
++	.regmap_length = MTL_SOC_PMC_MMIO_REG_LEN,
+ 	.ppfear0_offset = CNP_PMC_HOST_PPFEAR0A,
+-	.ppfear_buckets = ICL_PPFEAR_NUM_ENTRIES,
++	.ppfear_buckets = MTL_SOCM_PPFEAR_NUM_ENTRIES,
+ 	.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
+ 	.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
+-	.ltr_ignore_max = ADL_NUM_IP_IGN_ALLOWED,
+-	.lpm_num_modes = ADL_LPM_NUM_MODES,
+ 	.lpm_num_maps = ADL_LPM_NUM_MAPS,
++	.ltr_ignore_max = MTL_SOCM_NUM_IP_IGN_ALLOWED,
+ 	.lpm_res_counter_step_x2 = TGL_PMC_LPM_RES_COUNTER_STEP_X2,
+ 	.etr3_offset = ETR3_OFFSET,
+ 	.lpm_sts_latch_en_offset = MTL_LPM_STATUS_LATCH_EN_OFFSET,
+ 	.lpm_priority_offset = MTL_LPM_PRI_OFFSET,
+ 	.lpm_en_offset = MTL_LPM_EN_OFFSET,
+ 	.lpm_residency_offset = MTL_LPM_RESIDENCY_OFFSET,
+-	.lpm_sts = adl_lpm_maps,
++	.lpm_sts = mtl_socm_lpm_maps,
+ 	.lpm_status_offset = MTL_LPM_STATUS_OFFSET,
+ 	.lpm_live_status_offset = MTL_LPM_LIVE_STATUS_OFFSET,
+ };
+@@ -68,16 +498,29 @@ static void mtl_set_device_d3(unsigned int device)
+ 	}
+ }
+ 
+-void mtl_core_init(struct pmc_dev *pmcdev)
++/*
++ * Set power state of select devices that do not have drivers to D3
++ * so that they do not block Package C entry.
++ */
++static void mtl_d3_fixup(void)
+ {
+-	pmcdev->map = &mtl_reg_map;
+-	pmcdev->core_configure = mtl_core_configure;
+-
+-	/*
+-	 * Set power state of select devices that do not have drivers to D3
+-	 * so that they do not block Package C entry.
+-	 */
+ 	mtl_set_device_d3(MTL_GNA_PCI_DEV);
+ 	mtl_set_device_d3(MTL_IPU_PCI_DEV);
+ 	mtl_set_device_d3(MTL_VPU_PCI_DEV);
+ }
++
++static int mtl_resume(struct pmc_dev *pmcdev)
++{
++	mtl_d3_fixup();
++	return pmc_core_resume_common(pmcdev);
++}
++
++void mtl_core_init(struct pmc_dev *pmcdev)
++{
++	pmcdev->map = &mtl_socm_reg_map;
++	pmcdev->core_configure = mtl_core_configure;
++
++	mtl_d3_fixup();
++
++	pmcdev->resume = mtl_resume;
++}
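
The mtl.c hunk above factors the D3 fixup out of mtl_core_init() into mtl_d3_fixup() and re-runs it from a resume hook, because the PCI power state of these driverless devices reverts across suspend. The pattern in isolation, as a minimal sketch (the example_* names are placeholders; pmc_dev and pmc_core_resume_common() mirror the driver's own types):

    /* Sketch: re-apply a boot-time hardware fixup after every resume. */
    static void example_d3_fixup(void)
    {
            example_set_device_d3(EXAMPLE_PCI_DEVID); /* hypothetical */
    }

    static int example_resume(struct pmc_dev *pmcdev)
    {
            example_d3_fixup(); /* device state was lost during suspend */
            return pmc_core_resume_common(pmcdev);
    }

    void example_core_init(struct pmc_dev *pmcdev)
    {
            example_d3_fixup();              /* once at probe time */
            pmcdev->resume = example_resume; /* and after each resume */
    }
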
+diff --git a/drivers/platform/x86/lenovo-yogabook-wmi.c b/drivers/platform/x86/lenovo-yogabook-wmi.c
+index 5f4bd1eec38a9..d57fcc8388519 100644
+--- a/drivers/platform/x86/lenovo-yogabook-wmi.c
++++ b/drivers/platform/x86/lenovo-yogabook-wmi.c
+@@ -2,7 +2,6 @@
+ /* WMI driver for Lenovo Yoga Book YB1-X90* / -X91* tablets */
+ 
+ #include <linux/acpi.h>
+-#include <linux/devm-helpers.h>
+ #include <linux/gpio/consumer.h>
+ #include <linux/gpio/machine.h>
+ #include <linux/interrupt.h>
+@@ -248,10 +247,7 @@ static int yogabook_wmi_probe(struct wmi_device *wdev, const void *context)
+ 	data->brightness = YB_KBD_BL_DEFAULT;
+ 	set_bit(YB_KBD_IS_ON, &data->flags);
+ 	set_bit(YB_DIGITIZER_IS_ON, &data->flags);
+-
+-	r = devm_work_autocancel(&wdev->dev, &data->work, yogabook_wmi_work);
+-	if (r)
+-		return r;
++	INIT_WORK(&data->work, yogabook_wmi_work);
+ 
+ 	data->kbd_adev = acpi_dev_get_first_match_dev("GDIX1001", NULL, -1);
+ 	if (!data->kbd_adev) {
+@@ -299,10 +295,12 @@ static int yogabook_wmi_probe(struct wmi_device *wdev, const void *context)
+ 	}
+ 	data->backside_hall_irq = r;
+ 
+-	r = devm_request_irq(&wdev->dev, data->backside_hall_irq,
+-			     yogabook_backside_hall_irq,
+-			     IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
+-			     "backside_hall_sw", data);
++	/* Set default brightness before enabling the IRQ */
++	yogabook_wmi_set_kbd_backlight(data->wdev, YB_KBD_BL_DEFAULT);
++
++	r = request_irq(data->backside_hall_irq, yogabook_backside_hall_irq,
++			IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
++			"backside_hall_sw", data);
+ 	if (r) {
+ 		dev_err_probe(&wdev->dev, r, "Requesting backside_hall_sw IRQ\n");
+ 		goto error_put_devs;
+@@ -318,11 +316,14 @@ static int yogabook_wmi_probe(struct wmi_device *wdev, const void *context)
+ 	r = devm_led_classdev_register(&wdev->dev, &data->kbd_bl_led);
+ 	if (r < 0) {
+ 		dev_err_probe(&wdev->dev, r, "Registering backlight LED device\n");
+-		goto error_put_devs;
++		goto error_free_irq;
+ 	}
+ 
+ 	return 0;
+ 
++error_free_irq:
++	free_irq(data->backside_hall_irq, data);
++	cancel_work_sync(&data->work);
+ error_put_devs:
+ 	put_device(data->dig_dev);
+ 	put_device(data->kbd_dev);
+@@ -334,6 +335,19 @@ error_put_devs:
+ static void yogabook_wmi_remove(struct wmi_device *wdev)
+ {
+ 	struct yogabook_wmi *data = dev_get_drvdata(&wdev->dev);
++	int r = 0;
++
++	free_irq(data->backside_hall_irq, data);
++	cancel_work_sync(&data->work);
++
++	if (!test_bit(YB_KBD_IS_ON, &data->flags))
++		r |= device_reprobe(data->kbd_dev);
++
++	if (!test_bit(YB_DIGITIZER_IS_ON, &data->flags))
++		r |= device_reprobe(data->dig_dev);
++
++	if (r)
++		dev_warn(&wdev->dev, "Reprobe of devices failed\n");
+ 
+ 	put_device(data->dig_dev);
+ 	put_device(data->kbd_dev);
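
Dropping devm_work_autocancel() and devm_request_irq() in favor of manual management makes the teardown order explicit, which matters here: the IRQ must be freed before the work is canceled, otherwise the handler could requeue the work after the cancel. A minimal sketch of that ordering (all names illustrative, not the driver's):

    /* Sketch: teardown when an IRQ handler queues deferred work. */
    static void example_remove(struct wmi_device *wdev)
    {
            struct example_data *data = dev_get_drvdata(&wdev->dev);

            free_irq(data->irq, data);     /* no new work can be queued */
            cancel_work_sync(&data->work); /* so this cannot race */
    }
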
+diff --git a/drivers/platform/x86/think-lmi.c b/drivers/platform/x86/think-lmi.c
+index 1138f770149d9..e4047ee0a7546 100644
+--- a/drivers/platform/x86/think-lmi.c
++++ b/drivers/platform/x86/think-lmi.c
+@@ -14,6 +14,7 @@
+ #include <linux/acpi.h>
+ #include <linux/errno.h>
+ #include <linux/fs.h>
++#include <linux/mutex.h>
+ #include <linux/string.h>
+ #include <linux/types.h>
+ #include <linux/dmi.h>
+@@ -171,7 +172,7 @@ MODULE_PARM_DESC(debug_support, "Enable debug command support");
+ #define TLMI_POP_PWD (1 << 0)
+ #define TLMI_PAP_PWD (1 << 1)
+ #define TLMI_HDD_PWD (1 << 2)
+-#define TLMI_SYS_PWD (1 << 3)
++#define TLMI_SMP_PWD (1 << 6) /* System Management */
+ #define TLMI_CERT    (1 << 7)
+ 
+ #define to_tlmi_pwd_setting(kobj)  container_of(kobj, struct tlmi_pwd_setting, kobj)
+@@ -195,6 +196,7 @@ static const char * const level_options[] = {
+ };
+ static struct think_lmi tlmi_priv;
+ static struct class *fw_attr_class;
++static DEFINE_MUTEX(tlmi_mutex);
+ 
+ /* ------ Utility functions ------------*/
+ /* Strip out CR if one is present */
+@@ -437,6 +439,9 @@ static ssize_t new_password_store(struct kobject *kobj,
+ 	/* Strip out CR if one is present, setting password won't work if it is present */
+ 	strip_cr(new_pwd);
+ 
++	/* Use lock in case multiple WMI operations needed */
++	mutex_lock(&tlmi_mutex);
++
+ 	pwdlen = strlen(new_pwd);
+ 	/* pwdlen == 0 is allowed to clear the password */
+ 	if (pwdlen && ((pwdlen < setting->minlen) || (pwdlen > setting->maxlen))) {
+@@ -456,9 +461,9 @@ static ssize_t new_password_store(struct kobject *kobj,
+ 				sprintf(pwd_type, "mhdp%d", setting->index);
+ 		} else if (setting == tlmi_priv.pwd_nvme) {
+ 			if (setting->level == TLMI_LEVEL_USER)
+-				sprintf(pwd_type, "unvp%d", setting->index);
++				sprintf(pwd_type, "udrp%d", setting->index);
+ 			else
+-				sprintf(pwd_type, "mnvp%d", setting->index);
++				sprintf(pwd_type, "adrp%d", setting->index);
+ 		} else {
+ 			sprintf(pwd_type, "%s", setting->pwd_type);
+ 		}
+@@ -493,6 +498,7 @@ static ssize_t new_password_store(struct kobject *kobj,
+ 		kfree(auth_str);
+ 	}
+ out:
++	mutex_unlock(&tlmi_mutex);
+ 	kfree(new_pwd);
+ 	return ret ?: count;
+ }
+@@ -981,6 +987,9 @@ static ssize_t current_value_store(struct kobject *kobj,
+ 	/* Strip out CR if one is present */
+ 	strip_cr(new_setting);
+ 
++	/* Use lock in case multiple WMI operations needed */
++	mutex_lock(&tlmi_mutex);
++
+ 	/* Check if certificate authentication is enabled and active */
+ 	if (tlmi_priv.certificate_support && tlmi_priv.pwd_admin->cert_installed) {
+ 		if (!tlmi_priv.pwd_admin->signature || !tlmi_priv.pwd_admin->save_signature) {
+@@ -1039,6 +1048,7 @@ static ssize_t current_value_store(struct kobject *kobj,
+ 		kobject_uevent(&tlmi_priv.class_dev->kobj, KOBJ_CHANGE);
+ 	}
+ out:
++	mutex_unlock(&tlmi_mutex);
+ 	kfree(auth_str);
+ 	kfree(set_str);
+ 	kfree(new_setting);
+@@ -1483,11 +1493,11 @@ static int tlmi_analyze(void)
+ 		tlmi_priv.pwd_power->valid = true;
+ 
+ 	if (tlmi_priv.opcode_support) {
+-		tlmi_priv.pwd_system = tlmi_create_auth("sys", "system");
++		tlmi_priv.pwd_system = tlmi_create_auth("smp", "system");
+ 		if (!tlmi_priv.pwd_system)
+ 			goto fail_clear_attr;
+ 
+-		if (tlmi_priv.pwdcfg.core.password_state & TLMI_SYS_PWD)
++		if (tlmi_priv.pwdcfg.core.password_state & TLMI_SMP_PWD)
+ 			tlmi_priv.pwd_system->valid = true;
+ 
+ 		tlmi_priv.pwd_hdd = tlmi_create_auth("hdd", "hdd");
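
The new tlmi_mutex turns each sysfs store into an atomic sequence of WMI calls, so two concurrent writers cannot interleave the opcode steps of a password update. The shape of the pattern, sketched with hypothetical helpers:

    static DEFINE_MUTEX(example_mutex);

    static ssize_t example_store(struct kobject *kobj,
                                 struct kobj_attribute *attr,
                                 const char *buf, size_t count)
    {
            int ret;

            mutex_lock(&example_mutex); /* serialize the WMI sequence */
            ret = example_wmi_step_one(); /* hypothetical */
            if (ret)
                    goto out;
            ret = example_wmi_step_two(); /* hypothetical */
    out:
            mutex_unlock(&example_mutex);
            return ret ?: count;
    }
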
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index b3808ad77278d..187018ffb0686 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -10524,8 +10524,8 @@ unlock:
+ static void dytc_profile_refresh(void)
+ {
+ 	enum platform_profile_option profile;
+-	int output, err = 0;
+-	int perfmode, funcmode;
++	int output = 0, err = 0;
++	int perfmode, funcmode = 0;
+ 
+ 	mutex_lock(&dytc_mutex);
+ 	if (dytc_capabilities & BIT(DYTC_FC_MMC)) {
+@@ -10538,6 +10538,8 @@ static void dytc_profile_refresh(void)
+ 		err = dytc_command(DYTC_CMD_GET, &output);
+ 		/* Check if we are PSC mode, or have AMT enabled */
+ 		funcmode = (output >> DYTC_GET_FUNCTION_BIT) & 0xF;
++	} else { /* Unknown profile mode */
++		err = -ENODEV;
+ 	}
+ 	mutex_unlock(&dytc_mutex);
+ 	if (err)
+diff --git a/drivers/power/supply/rt9467-charger.c b/drivers/power/supply/rt9467-charger.c
+index ea33693b69779..b0b9ff8667459 100644
+--- a/drivers/power/supply/rt9467-charger.c
++++ b/drivers/power/supply/rt9467-charger.c
+@@ -1192,7 +1192,7 @@ static int rt9467_charger_probe(struct i2c_client *i2c)
+ 	i2c_set_clientdata(i2c, data);
+ 
+ 	/* Default pull charge enable gpio to make 'CHG_EN' by SW control only */
+-	ceb_gpio = devm_gpiod_get_optional(dev, "charge-enable", GPIOD_OUT_LOW);
++	ceb_gpio = devm_gpiod_get_optional(dev, "charge-enable", GPIOD_OUT_HIGH);
+ 	if (IS_ERR(ceb_gpio))
+ 		return dev_err_probe(dev, PTR_ERR(ceb_gpio),
+ 				     "Failed to config charge enable gpio\n");
+diff --git a/drivers/powercap/Kconfig b/drivers/powercap/Kconfig
+index 90d33cd1b670a..b063f75117738 100644
+--- a/drivers/powercap/Kconfig
++++ b/drivers/powercap/Kconfig
+@@ -18,10 +18,12 @@ if POWERCAP
+ # Client driver configurations go here.
+ config INTEL_RAPL_CORE
+ 	tristate
++	depends on PCI
++	select IOSF_MBI
+ 
+ config INTEL_RAPL
+ 	tristate "Intel RAPL Support via MSR Interface"
+-	depends on X86 && IOSF_MBI
++	depends on X86 && PCI
+ 	select INTEL_RAPL_CORE
+ 	help
+ 	  This enables support for the Intel Running Average Power Limit (RAPL)
+diff --git a/drivers/powercap/intel_rapl_msr.c b/drivers/powercap/intel_rapl_msr.c
+index a27673706c3d6..9ea4797d70b44 100644
+--- a/drivers/powercap/intel_rapl_msr.c
++++ b/drivers/powercap/intel_rapl_msr.c
+@@ -22,7 +22,6 @@
+ #include <linux/processor.h>
+ #include <linux/platform_device.h>
+ 
+-#include <asm/iosf_mbi.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/intel-family.h>
+ 
+@@ -137,14 +136,14 @@ static int rapl_msr_write_raw(int cpu, struct reg_action *ra)
+ 
+ /* List of verified CPUs. */
+ static const struct x86_cpu_id pl4_support_ids[] = {
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_TIGERLAKE_L, X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ALDERLAKE, X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ALDERLAKE_L, X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ALDERLAKE_N, X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_RAPTORLAKE, X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_RAPTORLAKE_P, X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_METEORLAKE, X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_METEORLAKE_L, X86_FEATURE_ANY },
++	X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L, NULL),
++	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, NULL),
++	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, NULL),
++	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_N, NULL),
++	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, NULL),
++	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P, NULL),
++	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE, NULL),
++	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE_L, NULL),
+ 	{}
+ };
+ 
+diff --git a/drivers/pwm/pwm-ab8500.c b/drivers/pwm/pwm-ab8500.c
+index 507ff0d5f7bd8..583a7d69c7415 100644
+--- a/drivers/pwm/pwm-ab8500.c
++++ b/drivers/pwm/pwm-ab8500.c
+@@ -190,7 +190,7 @@ static int ab8500_pwm_probe(struct platform_device *pdev)
+ 	int err;
+ 
+ 	if (pdev->id < 1 || pdev->id > 31)
+-		return dev_err_probe(&pdev->dev, EINVAL, "Invalid device id %d\n", pdev->id);
++		return dev_err_probe(&pdev->dev, -EINVAL, "Invalid device id %d\n", pdev->id);
+ 
+ 	/*
+ 	 * Nothing to be done in probe, this is required to get the
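
The one-character pwm-ab8500 fix matters because dev_err_probe() expects a negative errno: it returns the value unchanged for probe to propagate, and only a negative -EPROBE_DEFER gets its special quiet treatment. The idiom in isolation:

    /* dev_err_probe() logs, records the reason, and returns err;
     * err must be a negative errno such as -EINVAL. */
    if (pdev->id < 1 || pdev->id > 31)
            return dev_err_probe(&pdev->dev, -EINVAL,
                                 "Invalid device id %d\n", pdev->id);
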
+diff --git a/drivers/pwm/pwm-imx-tpm.c b/drivers/pwm/pwm-imx-tpm.c
+index 5e2b452ee5f2e..98ab65c896850 100644
+--- a/drivers/pwm/pwm-imx-tpm.c
++++ b/drivers/pwm/pwm-imx-tpm.c
+@@ -397,6 +397,13 @@ static int __maybe_unused pwm_imx_tpm_suspend(struct device *dev)
+ 	if (tpm->enable_count > 0)
+ 		return -EBUSY;
+ 
++	/*
++	 * Force 'real_period' to zero so that the period update code
++	 * runs again after system resume, since suspend causes the
++	 * period-related registers to revert to their reset values.
++	 */
++	tpm->real_period = 0;
++
+ 	clk_disable_unprepare(tpm->clk);
+ 
+ 	return 0;
+diff --git a/drivers/pwm/pwm-mtk-disp.c b/drivers/pwm/pwm-mtk-disp.c
+index 79e321e96f56a..2401b67332417 100644
+--- a/drivers/pwm/pwm-mtk-disp.c
++++ b/drivers/pwm/pwm-mtk-disp.c
+@@ -79,14 +79,11 @@ static int mtk_disp_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 	if (state->polarity != PWM_POLARITY_NORMAL)
+ 		return -EINVAL;
+ 
+-	if (!state->enabled) {
+-		mtk_disp_pwm_update_bits(mdp, DISP_PWM_EN, mdp->data->enable_mask,
+-					 0x0);
+-
+-		if (mdp->enabled) {
+-			clk_disable_unprepare(mdp->clk_mm);
+-			clk_disable_unprepare(mdp->clk_main);
+-		}
++	if (!state->enabled && mdp->enabled) {
++		mtk_disp_pwm_update_bits(mdp, DISP_PWM_EN,
++					 mdp->data->enable_mask, 0x0);
++		clk_disable_unprepare(mdp->clk_mm);
++		clk_disable_unprepare(mdp->clk_main);
+ 
+ 		mdp->enabled = false;
+ 		return 0;
+diff --git a/drivers/pwm/sysfs.c b/drivers/pwm/sysfs.c
+index 1a106ec329392..8d1254761e4dd 100644
+--- a/drivers/pwm/sysfs.c
++++ b/drivers/pwm/sysfs.c
+@@ -424,6 +424,13 @@ static int pwm_class_resume_npwm(struct device *parent, unsigned int npwm)
+ 		if (!export)
+ 			continue;
+ 
++		/* If pwmchip was not enabled before suspend, do nothing. */
++		if (!export->suspend.enabled) {
++			/* release lock taken in pwm_class_get_state */
++			mutex_unlock(&export->lock);
++			continue;
++		}
++
+ 		state.enabled = export->suspend.enabled;
+ 		ret = pwm_class_apply_state(export, pwm, &state);
+ 		if (ret < 0)
+@@ -448,7 +455,17 @@ static int pwm_class_suspend(struct device *parent)
+ 		if (!export)
+ 			continue;
+ 
++		/*
++		 * If pwmchip was not enabled before suspend, save
++		 * state for resume time and do nothing else.
++		 */
+ 		export->suspend = state;
++		if (!state.enabled) {
++			/* release lock taken in pwm_class_get_state */
++			mutex_unlock(&export->lock);
++			continue;
++		}
++
+ 		state.enabled = false;
+ 		ret = pwm_class_apply_state(export, pwm, &state);
+ 		if (ret < 0) {
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 698ab7f5004bf..d8e1caaf207e1 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -1911,19 +1911,17 @@ static struct regulator *create_regulator(struct regulator_dev *rdev,
+ 
+ 	if (err != -EEXIST)
+ 		regulator->debugfs = debugfs_create_dir(supply_name, rdev->debugfs);
+-	if (!regulator->debugfs) {
++	if (IS_ERR(regulator->debugfs))
+ 		rdev_dbg(rdev, "Failed to create debugfs directory\n");
+-	} else {
+-		debugfs_create_u32("uA_load", 0444, regulator->debugfs,
+-				   &regulator->uA_load);
+-		debugfs_create_u32("min_uV", 0444, regulator->debugfs,
+-				   &regulator->voltage[PM_SUSPEND_ON].min_uV);
+-		debugfs_create_u32("max_uV", 0444, regulator->debugfs,
+-				   &regulator->voltage[PM_SUSPEND_ON].max_uV);
+-		debugfs_create_file("constraint_flags", 0444,
+-				    regulator->debugfs, regulator,
+-				    &constraint_flags_fops);
+-	}
++
++	debugfs_create_u32("uA_load", 0444, regulator->debugfs,
++			   &regulator->uA_load);
++	debugfs_create_u32("min_uV", 0444, regulator->debugfs,
++			   &regulator->voltage[PM_SUSPEND_ON].min_uV);
++	debugfs_create_u32("max_uV", 0444, regulator->debugfs,
++			   &regulator->voltage[PM_SUSPEND_ON].max_uV);
++	debugfs_create_file("constraint_flags", 0444, regulator->debugfs,
++			    regulator, &constraint_flags_fops);
+ 
+ 	/*
+ 	 * Check now if the regulator is an always on regulator - if
+@@ -5256,10 +5254,8 @@ static void rdev_init_debugfs(struct regulator_dev *rdev)
+ 	}
+ 
+ 	rdev->debugfs = debugfs_create_dir(rname, debugfs_root);
+-	if (IS_ERR(rdev->debugfs)) {
+-		rdev_warn(rdev, "Failed to create debugfs directory\n");
+-		return;
+-	}
++	if (IS_ERR(rdev->debugfs))
++		rdev_dbg(rdev, "Failed to create debugfs directory\n");
+ 
+ 	debugfs_create_u32("use_count", 0444, rdev->debugfs,
+ 			   &rdev->use_count);
+@@ -6179,7 +6175,7 @@ static int __init regulator_init(void)
+ 
+ 	debugfs_root = debugfs_create_dir("regulator", NULL);
+ 	if (IS_ERR(debugfs_root))
+-		pr_warn("regulator: Failed to create debugfs directory\n");
++		pr_debug("regulator: Failed to create debugfs directory\n");
+ 
+ #ifdef CONFIG_DEBUG_FS
+ 	debugfs_create_file("supply_map", 0444, debugfs_root, NULL,
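
These regulator hunks rely on two debugfs properties: creation functions return an ERR_PTR rather than NULL on failure, so IS_ERR() is the correct check, and creating children under an error-valued parent is a harmless no-op, so the code can log at debug level and continue unconditionally. A sketch:

    #include <linux/debugfs.h>

    static struct dentry *example_dir;
    static u32 example_counter;

    static void example_debugfs_init(void)
    {
            example_dir = debugfs_create_dir("example", NULL);
            if (IS_ERR(example_dir)) /* never NULL on failure */
                    pr_debug("example: no debugfs directory\n");

            /* Safe even if example_dir holds an error pointer. */
            debugfs_create_u32("counter", 0444, example_dir,
                               &example_counter);
    }
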
+diff --git a/drivers/regulator/rk808-regulator.c b/drivers/regulator/rk808-regulator.c
+index 3637e81654a8e..80ba782d89239 100644
+--- a/drivers/regulator/rk808-regulator.c
++++ b/drivers/regulator/rk808-regulator.c
+@@ -1336,6 +1336,7 @@ static int rk808_regulator_probe(struct platform_device *pdev)
+ 
+ 	config.dev = &pdev->dev;
+ 	config.dev->of_node = pdev->dev.parent->of_node;
++	config.dev->of_node_reused = true;
+ 	config.driver_data = pdata;
+ 	config.regmap = regmap;
+ 
+diff --git a/drivers/regulator/tps65219-regulator.c b/drivers/regulator/tps65219-regulator.c
+index b1719ee990ab4..8971b507a79ac 100644
+--- a/drivers/regulator/tps65219-regulator.c
++++ b/drivers/regulator/tps65219-regulator.c
+@@ -289,13 +289,13 @@ static irqreturn_t tps65219_regulator_irq_handler(int irq, void *data)
+ 
+ static int tps65219_get_rdev_by_name(const char *regulator_name,
+ 				     struct regulator_dev *rdevtbl[7],
+-				     struct regulator_dev *dev)
++				     struct regulator_dev **dev)
+ {
+ 	int i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(regulators); i++) {
+ 		if (strcmp(regulator_name, regulators[i].name) == 0) {
+-			dev = rdevtbl[i];
++			*dev = rdevtbl[i];
+ 			return 0;
+ 		}
+ 	}
+@@ -348,7 +348,7 @@ static int tps65219_regulator_probe(struct platform_device *pdev)
+ 		irq_data[i].dev = tps->dev;
+ 		irq_data[i].type = irq_type;
+ 
+-		tps65219_get_rdev_by_name(irq_type->regulator_name, rdevtbl, rdev);
++		tps65219_get_rdev_by_name(irq_type->regulator_name, rdevtbl, &rdev);
+ 		if (IS_ERR(rdev)) {
+ 			dev_err(tps->dev, "Failed to get rdev for %s\n",
+ 				irq_type->regulator_name);
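
The tps65219 change fixes a classic out-parameter bug: assigning to a struct regulator_dev *dev parameter only updates the callee's local copy, so the caller's rdev was never set; passing struct regulator_dev **dev and writing through it is the fix. The same bug in miniature, as plain, runnable C:

    #include <stdio.h>

    static int table[] = { 10, 20, 30 };

    /* Wrong: 'out' is a copy of the caller's pointer. */
    static void lookup_broken(int idx, int *out)
    {
            out = &table[idx]; /* lost when the function returns */
    }

    /* Right: write through a pointer-to-pointer. */
    static void lookup_fixed(int idx, int **out)
    {
            *out = &table[idx];
    }

    int main(void)
    {
            int *p = NULL;

            lookup_broken(1, p);
            printf("broken: %p\n", (void *)p); /* still (nil) */

            lookup_fixed(1, &p);
            printf("fixed: %d\n", *p); /* prints 20 */
            return 0;
    }
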
+diff --git a/drivers/rtc/rtc-st-lpc.c b/drivers/rtc/rtc-st-lpc.c
+index 0f8e4231098ef..d04d46f9cc65a 100644
+--- a/drivers/rtc/rtc-st-lpc.c
++++ b/drivers/rtc/rtc-st-lpc.c
+@@ -228,7 +228,7 @@ static int st_rtc_probe(struct platform_device *pdev)
+ 	enable_irq_wake(rtc->irq);
+ 	disable_irq(rtc->irq);
+ 
+-	rtc->clk = clk_get(&pdev->dev, NULL);
++	rtc->clk = devm_clk_get(&pdev->dev, NULL);
+ 	if (IS_ERR(rtc->clk)) {
+ 		dev_err(&pdev->dev, "Unable to request clock\n");
+ 		return PTR_ERR(rtc->clk);
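
Moving rtc-st-lpc from clk_get() to devm_clk_get() ties the clock reference to the device's lifetime, so later probe failures and remove() no longer need a matching clk_put(). A sketch of the contrast (struct my_rtc and the surrounding names are hypothetical):

    /* Unmanaged: each later failure must undo the get. */
    static int probe_unmanaged(struct platform_device *pdev, struct my_rtc *rtc)
    {
            int ret;

            rtc->clk = clk_get(&pdev->dev, NULL);
            if (IS_ERR(rtc->clk))
                    return PTR_ERR(rtc->clk);

            ret = clk_prepare_enable(rtc->clk);
            if (ret) {
                    clk_put(rtc->clk); /* easy to forget */
                    return ret;
            }
            return 0;
    }

    /* Managed: the reference is dropped automatically when probe
     * fails later or when the device is unbound. */
    static int probe_managed(struct platform_device *pdev, struct my_rtc *rtc)
    {
            rtc->clk = devm_clk_get(&pdev->dev, NULL);
            if (IS_ERR(rtc->clk))
                    return PTR_ERR(rtc->clk);

            return clk_prepare_enable(rtc->clk);
    }
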
+diff --git a/drivers/s390/net/qeth_l3_sys.c b/drivers/s390/net/qeth_l3_sys.c
+index 9f90a860ca2c9..a6b64228ead25 100644
+--- a/drivers/s390/net/qeth_l3_sys.c
++++ b/drivers/s390/net/qeth_l3_sys.c
+@@ -625,7 +625,7 @@ static QETH_DEVICE_ATTR(vipa_add4, add4, 0644,
+ static ssize_t qeth_l3_dev_vipa_del4_store(struct device *dev,
+ 		struct device_attribute *attr, const char *buf, size_t count)
+ {
+-	return qeth_l3_vipa_store(dev, buf, true, count, QETH_PROT_IPV4);
++	return qeth_l3_vipa_store(dev, buf, false, count, QETH_PROT_IPV4);
+ }
+ 
+ static QETH_DEVICE_ATTR(vipa_del4, del4, 0200, NULL,
+diff --git a/drivers/scsi/3w-xxxx.c b/drivers/scsi/3w-xxxx.c
+index 36c34ced0cc18..f39c9ec2e7810 100644
+--- a/drivers/scsi/3w-xxxx.c
++++ b/drivers/scsi/3w-xxxx.c
+@@ -2305,8 +2305,10 @@ static int tw_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+ 	TW_DISABLE_INTERRUPTS(tw_dev);
+ 
+ 	/* Initialize the card */
+-	if (tw_reset_sequence(tw_dev))
++	if (tw_reset_sequence(tw_dev)) {
++		retval = -EINVAL;
+ 		goto out_release_mem_region;
++	}
+ 
+ 	/* Set host specific parameters */
+ 	host->max_id = TW_MAX_UNITS;
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 6a15f879e5173..a5e5b7bff59b4 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -5468,9 +5468,19 @@ out:
+ 				ndlp->nlp_flag &= ~NLP_RELEASE_RPI;
+ 				spin_unlock_irq(&ndlp->lock);
+ 			}
++			lpfc_drop_node(vport, ndlp);
++		} else if (ndlp->nlp_state != NLP_STE_PLOGI_ISSUE &&
++			   ndlp->nlp_state != NLP_STE_REG_LOGIN_ISSUE &&
++			   ndlp->nlp_state != NLP_STE_PRLI_ISSUE) {
++			/* Drop the ndlp if there is no planned or outstanding
++			 * PRLI.
++			 *
++			 * In cases when the ndlp is acting as both an initiator
++			 * and target function, let our issued PRLI determine
++			 * the final ndlp kref drop.
++			 */
++			lpfc_drop_node(vport, ndlp);
+ 		}
+-
+-		lpfc_drop_node(vport, ndlp);
+ 	}
+ 
+ 	/* Release the originating I/O reference. */
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 3b64de81ea0d3..2a31ddc99dde5 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -3041,9 +3041,8 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf)
+ 	 * addresses of our queues
+ 	 */
+ 	if (!qedf->p_cpuq) {
+-		status = -EINVAL;
+ 		QEDF_ERR(&qedf->dbg_ctx, "p_cpuq is NULL.\n");
+-		goto mem_alloc_failure;
++		return -EINVAL;
+ 	}
+ 
+ 	qedf->global_queues = kzalloc((sizeof(struct global_queue *)
+diff --git a/drivers/soc/amlogic/meson-secure-pwrc.c b/drivers/soc/amlogic/meson-secure-pwrc.c
+index e935187635267..25b4b71df9b89 100644
+--- a/drivers/soc/amlogic/meson-secure-pwrc.c
++++ b/drivers/soc/amlogic/meson-secure-pwrc.c
+@@ -105,7 +105,7 @@ static struct meson_secure_pwrc_domain_desc a1_pwrc_domains[] = {
+ 	SEC_PD(ACODEC,	0),
+ 	SEC_PD(AUDIO,	0),
+ 	SEC_PD(OTP,	0),
+-	SEC_PD(DMA,	0),
++	SEC_PD(DMA,	GENPD_FLAG_ALWAYS_ON | GENPD_FLAG_IRQ_SAFE),
+ 	SEC_PD(SD_EMMC,	0),
+ 	SEC_PD(RAMA,	0),
+ 	/* SRAMB is used as ATF runtime memory, and should be always on */
+diff --git a/drivers/soc/fsl/qe/Kconfig b/drivers/soc/fsl/qe/Kconfig
+index e0d096607fefb..fa9ffbed0e929 100644
+--- a/drivers/soc/fsl/qe/Kconfig
++++ b/drivers/soc/fsl/qe/Kconfig
+@@ -62,6 +62,7 @@ config QE_TDM
+ 
+ config QE_USB
+ 	bool
++	depends on QUICC_ENGINE
+ 	default y if USB_FSL_QE
+ 	help
+ 	  QE USB Controller support
+diff --git a/drivers/soc/mediatek/mtk-svs.c b/drivers/soc/mediatek/mtk-svs.c
+index 81585733c8a99..3a2f97cd52720 100644
+--- a/drivers/soc/mediatek/mtk-svs.c
++++ b/drivers/soc/mediatek/mtk-svs.c
+@@ -2061,9 +2061,9 @@ static int svs_mt8192_platform_probe(struct svs_platform *svsp)
+ 		svsb = &svsp->banks[idx];
+ 
+ 		if (svsb->type == SVSB_HIGH)
+-			svsb->opp_dev = svs_add_device_link(svsp, "mali");
++			svsb->opp_dev = svs_add_device_link(svsp, "gpu");
+ 		else if (svsb->type == SVSB_LOW)
+-			svsb->opp_dev = svs_get_subsys_device(svsp, "mali");
++			svsb->opp_dev = svs_get_subsys_device(svsp, "gpu");
+ 
+ 		if (IS_ERR(svsb->opp_dev))
+ 			return dev_err_probe(svsp->dev, PTR_ERR(svsb->opp_dev),
+diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
+index 795a2e1d59b3a..dd50a255fa6cb 100644
+--- a/drivers/soc/qcom/qcom-geni-se.c
++++ b/drivers/soc/qcom/qcom-geni-se.c
+@@ -682,6 +682,30 @@ EXPORT_SYMBOL(geni_se_clk_freq_match);
+ #define GENI_SE_DMA_EOT_EN BIT(1)
+ #define GENI_SE_DMA_AHB_ERR_EN BIT(2)
+ #define GENI_SE_DMA_EOT_BUF BIT(0)
++
++/**
++ * geni_se_tx_init_dma() - Initiate TX DMA transfer on the serial engine
++ * @se:			Pointer to the concerned serial engine.
++ * @iova:		Mapped DMA address.
++ * @len:		Length of the TX buffer.
++ *
++ * This function is used to initiate DMA TX transfer.
++ */
++void geni_se_tx_init_dma(struct geni_se *se, dma_addr_t iova, size_t len)
++{
++	u32 val;
++
++	val = GENI_SE_DMA_DONE_EN;
++	val |= GENI_SE_DMA_EOT_EN;
++	val |= GENI_SE_DMA_AHB_ERR_EN;
++	writel_relaxed(val, se->base + SE_DMA_TX_IRQ_EN_SET);
++	writel_relaxed(lower_32_bits(iova), se->base + SE_DMA_TX_PTR_L);
++	writel_relaxed(upper_32_bits(iova), se->base + SE_DMA_TX_PTR_H);
++	writel_relaxed(GENI_SE_DMA_EOT_BUF, se->base + SE_DMA_TX_ATTR);
++	writel(len, se->base + SE_DMA_TX_LEN);
++}
++EXPORT_SYMBOL(geni_se_tx_init_dma);
++
+ /**
+  * geni_se_tx_dma_prep() - Prepare the serial engine for TX DMA transfer
+  * @se:			Pointer to the concerned serial engine.
+@@ -697,7 +721,6 @@ int geni_se_tx_dma_prep(struct geni_se *se, void *buf, size_t len,
+ 			dma_addr_t *iova)
+ {
+ 	struct geni_wrapper *wrapper = se->wrapper;
+-	u32 val;
+ 
+ 	if (!wrapper)
+ 		return -EINVAL;
+@@ -706,17 +729,34 @@ int geni_se_tx_dma_prep(struct geni_se *se, void *buf, size_t len,
+ 	if (dma_mapping_error(wrapper->dev, *iova))
+ 		return -EIO;
+ 
++	geni_se_tx_init_dma(se, *iova, len);
++	return 0;
++}
++EXPORT_SYMBOL(geni_se_tx_dma_prep);
++
++/**
++ * geni_se_rx_init_dma() - Initiate RX DMA transfer on the serial engine
++ * @se:			Pointer to the concerned serial engine.
++ * @iova:		Mapped DMA address.
++ * @len:		Length of the RX buffer.
++ *
++ * This function is used to initiate DMA RX transfer.
++ */
++void geni_se_rx_init_dma(struct geni_se *se, dma_addr_t iova, size_t len)
++{
++	u32 val;
++
+ 	val = GENI_SE_DMA_DONE_EN;
+ 	val |= GENI_SE_DMA_EOT_EN;
+ 	val |= GENI_SE_DMA_AHB_ERR_EN;
+-	writel_relaxed(val, se->base + SE_DMA_TX_IRQ_EN_SET);
+-	writel_relaxed(lower_32_bits(*iova), se->base + SE_DMA_TX_PTR_L);
+-	writel_relaxed(upper_32_bits(*iova), se->base + SE_DMA_TX_PTR_H);
+-	writel_relaxed(GENI_SE_DMA_EOT_BUF, se->base + SE_DMA_TX_ATTR);
+-	writel(len, se->base + SE_DMA_TX_LEN);
+-	return 0;
++	writel_relaxed(val, se->base + SE_DMA_RX_IRQ_EN_SET);
++	writel_relaxed(lower_32_bits(iova), se->base + SE_DMA_RX_PTR_L);
++	writel_relaxed(upper_32_bits(iova), se->base + SE_DMA_RX_PTR_H);
++	/* RX does not have EOT buffer type bit. So just reset RX_ATTR */
++	writel_relaxed(0, se->base + SE_DMA_RX_ATTR);
++	writel(len, se->base + SE_DMA_RX_LEN);
+ }
+-EXPORT_SYMBOL(geni_se_tx_dma_prep);
++EXPORT_SYMBOL(geni_se_rx_init_dma);
+ 
+ /**
+  * geni_se_rx_dma_prep() - Prepare the serial engine for RX DMA transfer
+@@ -733,7 +773,6 @@ int geni_se_rx_dma_prep(struct geni_se *se, void *buf, size_t len,
+ 			dma_addr_t *iova)
+ {
+ 	struct geni_wrapper *wrapper = se->wrapper;
+-	u32 val;
+ 
+ 	if (!wrapper)
+ 		return -EINVAL;
+@@ -742,15 +781,7 @@ int geni_se_rx_dma_prep(struct geni_se *se, void *buf, size_t len,
+ 	if (dma_mapping_error(wrapper->dev, *iova))
+ 		return -EIO;
+ 
+-	val = GENI_SE_DMA_DONE_EN;
+-	val |= GENI_SE_DMA_EOT_EN;
+-	val |= GENI_SE_DMA_AHB_ERR_EN;
+-	writel_relaxed(val, se->base + SE_DMA_RX_IRQ_EN_SET);
+-	writel_relaxed(lower_32_bits(*iova), se->base + SE_DMA_RX_PTR_L);
+-	writel_relaxed(upper_32_bits(*iova), se->base + SE_DMA_RX_PTR_H);
+-	/* RX does not have EOT buffer type bit. So just reset RX_ATTR */
+-	writel_relaxed(0, se->base + SE_DMA_RX_ATTR);
+-	writel(len, se->base + SE_DMA_RX_LEN);
++	geni_se_rx_init_dma(se, *iova, len);
+ 	return 0;
+ }
+ EXPORT_SYMBOL(geni_se_rx_dma_prep);
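
The qcom-geni-se refactor splits each *_dma_prep() into its two concerns: DMA-mapping the buffer, and programming the transfer registers via the new geni_se_*x_init_dma() helpers. Callers whose buffers are already mapped by the core (as spi-geni-qcom becomes below) can now call the init helper directly. Condensed, the TX side layers like this (a sketch, not the full function body):

    int geni_se_tx_dma_prep(struct geni_se *se, void *buf, size_t len,
                            dma_addr_t *iova)
    {
            *iova = dma_map_single(se->wrapper->dev, buf, len,
                                   DMA_TO_DEVICE);
            if (dma_mapping_error(se->wrapper->dev, *iova))
                    return -EIO;

            geni_se_tx_init_dma(se, *iova, len); /* shared register setup */
            return 0;
    }
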
+diff --git a/drivers/soc/xilinx/xlnx_event_manager.c b/drivers/soc/xilinx/xlnx_event_manager.c
+index c76381899ef49..f9d9b82b562da 100644
+--- a/drivers/soc/xilinx/xlnx_event_manager.c
++++ b/drivers/soc/xilinx/xlnx_event_manager.c
+@@ -192,11 +192,12 @@ static int xlnx_remove_cb_for_suspend(event_cb_func_t cb_fun)
+ 	struct registered_event_data *eve_data;
+ 	struct agent_cb *cb_pos;
+ 	struct agent_cb *cb_next;
++	struct hlist_node *tmp;
+ 
+ 	is_need_to_unregister = false;
+ 
+ 	/* Check for existing entry in hash table for given cb_type */
+-	hash_for_each_possible(reg_driver_map, eve_data, hentry, PM_INIT_SUSPEND_CB) {
++	hash_for_each_possible_safe(reg_driver_map, eve_data, tmp, hentry, PM_INIT_SUSPEND_CB) {
+ 		if (eve_data->cb_type == PM_INIT_SUSPEND_CB) {
+ 			/* Delete the list of callback */
+ 			list_for_each_entry_safe(cb_pos, cb_next, &eve_data->cb_list_head, list) {
+@@ -228,11 +229,12 @@ static int xlnx_remove_cb_for_notify_event(const u32 node_id, const u32 event,
+ 	u64 key = ((u64)node_id << 32U) | (u64)event;
+ 	struct agent_cb *cb_pos;
+ 	struct agent_cb *cb_next;
++	struct hlist_node *tmp;
+ 
+ 	is_need_to_unregister = false;
+ 
+ 	/* Check for existing entry in hash table for given key id */
+-	hash_for_each_possible(reg_driver_map, eve_data, hentry, key) {
++	hash_for_each_possible_safe(reg_driver_map, eve_data, tmp, hentry, key) {
+ 		if (eve_data->key == key) {
+ 			/* Delete the list of callback */
+ 			list_for_each_entry_safe(cb_pos, cb_next, &eve_data->cb_list_head, list) {
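
Both xlnx_event_manager hunks switch to hash_for_each_possible_safe() because the loop body may hash_del() the very entry being visited; the extra struct hlist_node *tmp caches the next pointer before the current node is unlinked. The general shape, sketched:

    #include <linux/hashtable.h>
    #include <linux/slab.h>

    struct example_entry {
            u64 key;
            struct hlist_node hentry;
    };

    static DEFINE_HASHTABLE(example_map, 4);

    static void example_remove(u64 key)
    {
            struct example_entry *e;
            struct hlist_node *tmp;

            /* _safe caches e->hentry.next, so hash_del() is fine here */
            hash_for_each_possible_safe(example_map, e, tmp, hentry, key) {
                    if (e->key == key) {
                            hash_del(&e->hentry);
                            kfree(e);
                    }
            }
    }
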
+diff --git a/drivers/soundwire/debugfs.c b/drivers/soundwire/debugfs.c
+index dea782e0edc4b..c3a1a359ee5c3 100644
+--- a/drivers/soundwire/debugfs.c
++++ b/drivers/soundwire/debugfs.c
+@@ -56,8 +56,9 @@ static int sdw_slave_reg_show(struct seq_file *s_file, void *data)
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+-	ret = pm_runtime_resume_and_get(&slave->dev);
++	ret = pm_runtime_get_sync(&slave->dev);
+ 	if (ret < 0 && ret != -EACCES) {
++		pm_runtime_put_noidle(&slave->dev);
+ 		kfree(buf);
+ 		return ret;
+ 	}
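
This hunk (and the matching ones in qcom.c below) moves back to pm_runtime_get_sync(), whose failure contract differs from pm_runtime_resume_and_get(): get_sync leaves the usage counter raised even when it fails, so every error path must drop it with pm_runtime_put_noidle(). The pairing, sketched:

    #include <linux/pm_runtime.h>

    static int example_access(struct device *dev)
    {
            int ret;

            ret = pm_runtime_get_sync(dev);
            if (ret < 0 && ret != -EACCES) {
                    /* get_sync counted the reference even on failure */
                    pm_runtime_put_noidle(dev);
                    return ret;
            }

            /* safe to touch the hardware here */

            pm_runtime_mark_last_busy(dev);
            pm_runtime_put_autosuspend(dev);
            return 0;
    }
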
+diff --git a/drivers/soundwire/qcom.c b/drivers/soundwire/qcom.c
+index 280455f047a36..bd39e78788590 100644
+--- a/drivers/soundwire/qcom.c
++++ b/drivers/soundwire/qcom.c
+@@ -278,14 +278,14 @@ static u32 swrm_get_packed_reg_val(u8 *cmd_id, u8 cmd_data,
+ 	return val;
+ }
+ 
+-static int swrm_wait_for_rd_fifo_avail(struct qcom_swrm_ctrl *swrm)
++static int swrm_wait_for_rd_fifo_avail(struct qcom_swrm_ctrl *ctrl)
+ {
+ 	u32 fifo_outstanding_data, value;
+ 	int fifo_retry_count = SWR_OVERFLOW_RETRY_COUNT;
+ 
+ 	do {
+ 		/* Check for fifo underflow during read */
+-		swrm->reg_read(swrm, SWRM_CMD_FIFO_STATUS, &value);
++		ctrl->reg_read(ctrl, SWRM_CMD_FIFO_STATUS, &value);
+ 		fifo_outstanding_data = FIELD_GET(SWRM_RD_CMD_FIFO_CNT_MASK, value);
+ 
+ 		/* Check if read data is available in read fifo */
+@@ -296,39 +296,39 @@ static int swrm_wait_for_rd_fifo_avail(struct qcom_swrm_ctrl *swrm)
+ 	} while (fifo_retry_count--);
+ 
+ 	if (fifo_outstanding_data == 0) {
+-		dev_err_ratelimited(swrm->dev, "%s err read underflow\n", __func__);
++		dev_err_ratelimited(ctrl->dev, "%s err read underflow\n", __func__);
+ 		return -EIO;
+ 	}
+ 
+ 	return 0;
+ }
+ 
+-static int swrm_wait_for_wr_fifo_avail(struct qcom_swrm_ctrl *swrm)
++static int swrm_wait_for_wr_fifo_avail(struct qcom_swrm_ctrl *ctrl)
+ {
+ 	u32 fifo_outstanding_cmds, value;
+ 	int fifo_retry_count = SWR_OVERFLOW_RETRY_COUNT;
+ 
+ 	do {
+ 		/* Check for fifo overflow during write */
+-		swrm->reg_read(swrm, SWRM_CMD_FIFO_STATUS, &value);
++		ctrl->reg_read(ctrl, SWRM_CMD_FIFO_STATUS, &value);
+ 		fifo_outstanding_cmds = FIELD_GET(SWRM_WR_CMD_FIFO_CNT_MASK, value);
+ 
+ 		/* Check for space in write fifo before writing */
+-		if (fifo_outstanding_cmds < swrm->wr_fifo_depth)
++		if (fifo_outstanding_cmds < ctrl->wr_fifo_depth)
+ 			return 0;
+ 
+ 		usleep_range(500, 510);
+ 	} while (fifo_retry_count--);
+ 
+-	if (fifo_outstanding_cmds == swrm->wr_fifo_depth) {
+-		dev_err_ratelimited(swrm->dev, "%s err write overflow\n", __func__);
++	if (fifo_outstanding_cmds == ctrl->wr_fifo_depth) {
++		dev_err_ratelimited(ctrl->dev, "%s err write overflow\n", __func__);
+ 		return -EIO;
+ 	}
+ 
+ 	return 0;
+ }
+ 
+-static int qcom_swrm_cmd_fifo_wr_cmd(struct qcom_swrm_ctrl *swrm, u8 cmd_data,
++static int qcom_swrm_cmd_fifo_wr_cmd(struct qcom_swrm_ctrl *ctrl, u8 cmd_data,
+ 				     u8 dev_addr, u16 reg_addr)
+ {
+ 
+@@ -341,20 +341,20 @@ static int qcom_swrm_cmd_fifo_wr_cmd(struct qcom_swrm_ctrl *swrm, u8 cmd_data,
+ 		val = swrm_get_packed_reg_val(&cmd_id, cmd_data,
+ 					      dev_addr, reg_addr);
+ 	} else {
+-		val = swrm_get_packed_reg_val(&swrm->wcmd_id, cmd_data,
++		val = swrm_get_packed_reg_val(&ctrl->wcmd_id, cmd_data,
+ 					      dev_addr, reg_addr);
+ 	}
+ 
+-	if (swrm_wait_for_wr_fifo_avail(swrm))
++	if (swrm_wait_for_wr_fifo_avail(ctrl))
+ 		return SDW_CMD_FAIL_OTHER;
+ 
+ 	if (cmd_id == SWR_BROADCAST_CMD_ID)
+-		reinit_completion(&swrm->broadcast);
++		reinit_completion(&ctrl->broadcast);
+ 
+ 	/* Its assumed that write is okay as we do not get any status back */
+-	swrm->reg_write(swrm, SWRM_CMD_FIFO_WR_CMD, val);
++	ctrl->reg_write(ctrl, SWRM_CMD_FIFO_WR_CMD, val);
+ 
+-	if (swrm->version <= SWRM_VERSION_1_3_0)
++	if (ctrl->version <= SWRM_VERSION_1_3_0)
+ 		usleep_range(150, 155);
+ 
+ 	if (cmd_id == SWR_BROADCAST_CMD_ID) {
+@@ -362,7 +362,7 @@ static int qcom_swrm_cmd_fifo_wr_cmd(struct qcom_swrm_ctrl *swrm, u8 cmd_data,
+ 		 * sleep for 10ms for MSM soundwire variant to allow broadcast
+ 		 * command to complete.
+ 		 */
+-		ret = wait_for_completion_timeout(&swrm->broadcast,
++		ret = wait_for_completion_timeout(&ctrl->broadcast,
+ 						  msecs_to_jiffies(TIMEOUT_MS));
+ 		if (!ret)
+ 			ret = SDW_CMD_IGNORED;
+@@ -375,41 +375,41 @@ static int qcom_swrm_cmd_fifo_wr_cmd(struct qcom_swrm_ctrl *swrm, u8 cmd_data,
+ 	return ret;
+ }
+ 
+-static int qcom_swrm_cmd_fifo_rd_cmd(struct qcom_swrm_ctrl *swrm,
++static int qcom_swrm_cmd_fifo_rd_cmd(struct qcom_swrm_ctrl *ctrl,
+ 				     u8 dev_addr, u16 reg_addr,
+ 				     u32 len, u8 *rval)
+ {
+ 	u32 cmd_data, cmd_id, val, retry_attempt = 0;
+ 
+-	val = swrm_get_packed_reg_val(&swrm->rcmd_id, len, dev_addr, reg_addr);
++	val = swrm_get_packed_reg_val(&ctrl->rcmd_id, len, dev_addr, reg_addr);
+ 
+ 	/*
+ 	 * Check for outstanding cmd wrt. write fifo depth to avoid
+ 	 * overflow as read will also increase write fifo cnt.
+ 	 */
+-	swrm_wait_for_wr_fifo_avail(swrm);
++	swrm_wait_for_wr_fifo_avail(ctrl);
+ 
+ 	/* wait for FIFO RD to complete to avoid overflow */
+ 	usleep_range(100, 105);
+-	swrm->reg_write(swrm, SWRM_CMD_FIFO_RD_CMD, val);
++	ctrl->reg_write(ctrl, SWRM_CMD_FIFO_RD_CMD, val);
+ 	/* wait for FIFO RD CMD complete to avoid overflow */
+ 	usleep_range(250, 255);
+ 
+-	if (swrm_wait_for_rd_fifo_avail(swrm))
++	if (swrm_wait_for_rd_fifo_avail(ctrl))
+ 		return SDW_CMD_FAIL_OTHER;
+ 
+ 	do {
+-		swrm->reg_read(swrm, SWRM_CMD_FIFO_RD_FIFO_ADDR, &cmd_data);
++		ctrl->reg_read(ctrl, SWRM_CMD_FIFO_RD_FIFO_ADDR, &cmd_data);
+ 		rval[0] = cmd_data & 0xFF;
+ 		cmd_id = FIELD_GET(SWRM_RD_FIFO_CMD_ID_MASK, cmd_data);
+ 
+-		if (cmd_id != swrm->rcmd_id) {
++		if (cmd_id != ctrl->rcmd_id) {
+ 			if (retry_attempt < (MAX_FIFO_RD_RETRY - 1)) {
+ 				/* wait 500 us before retry on fifo read failure */
+ 				usleep_range(500, 505);
+-				swrm->reg_write(swrm, SWRM_CMD_FIFO_CMD,
++				ctrl->reg_write(ctrl, SWRM_CMD_FIFO_CMD,
+ 						SWRM_CMD_FIFO_FLUSH);
+-				swrm->reg_write(swrm, SWRM_CMD_FIFO_RD_CMD, val);
++				ctrl->reg_write(ctrl, SWRM_CMD_FIFO_RD_CMD, val);
+ 			}
+ 			retry_attempt++;
+ 		} else {
+@@ -418,9 +418,9 @@ static int qcom_swrm_cmd_fifo_rd_cmd(struct qcom_swrm_ctrl *swrm,
+ 
+ 	} while (retry_attempt < MAX_FIFO_RD_RETRY);
+ 
+-	dev_err(swrm->dev, "failed to read fifo: reg: 0x%x, rcmd_id: 0x%x,\
++	dev_err(ctrl->dev, "failed to read fifo: reg: 0x%x, rcmd_id: 0x%x,\
+ 		dev_num: 0x%x, cmd_data: 0x%x\n",
+-		reg_addr, swrm->rcmd_id, dev_addr, cmd_data);
++		reg_addr, ctrl->rcmd_id, dev_addr, cmd_data);
+ 
+ 	return SDW_CMD_IGNORED;
+ }
+@@ -532,39 +532,40 @@ static int qcom_swrm_enumerate(struct sdw_bus *bus)
+ 
+ static irqreturn_t qcom_swrm_wake_irq_handler(int irq, void *dev_id)
+ {
+-	struct qcom_swrm_ctrl *swrm = dev_id;
++	struct qcom_swrm_ctrl *ctrl = dev_id;
+ 	int ret;
+ 
+-	ret = pm_runtime_resume_and_get(swrm->dev);
++	ret = pm_runtime_get_sync(ctrl->dev);
+ 	if (ret < 0 && ret != -EACCES) {
+-		dev_err_ratelimited(swrm->dev,
+-				    "pm_runtime_resume_and_get failed in %s, ret %d\n",
++		dev_err_ratelimited(ctrl->dev,
++				    "pm_runtime_get_sync failed in %s, ret %d\n",
+ 				    __func__, ret);
++		pm_runtime_put_noidle(ctrl->dev);
+ 		return ret;
+ 	}
+ 
+-	if (swrm->wake_irq > 0) {
+-		if (!irqd_irq_disabled(irq_get_irq_data(swrm->wake_irq)))
+-			disable_irq_nosync(swrm->wake_irq);
++	if (ctrl->wake_irq > 0) {
++		if (!irqd_irq_disabled(irq_get_irq_data(ctrl->wake_irq)))
++			disable_irq_nosync(ctrl->wake_irq);
+ 	}
+ 
+-	pm_runtime_mark_last_busy(swrm->dev);
+-	pm_runtime_put_autosuspend(swrm->dev);
++	pm_runtime_mark_last_busy(ctrl->dev);
++	pm_runtime_put_autosuspend(ctrl->dev);
+ 
+ 	return IRQ_HANDLED;
+ }
+ 
+ static irqreturn_t qcom_swrm_irq_handler(int irq, void *dev_id)
+ {
+-	struct qcom_swrm_ctrl *swrm = dev_id;
++	struct qcom_swrm_ctrl *ctrl = dev_id;
+ 	u32 value, intr_sts, intr_sts_masked, slave_status;
+ 	u32 i;
+ 	int devnum;
+ 	int ret = IRQ_HANDLED;
+-	clk_prepare_enable(swrm->hclk);
++	clk_prepare_enable(ctrl->hclk);
+ 
+-	swrm->reg_read(swrm, SWRM_INTERRUPT_STATUS, &intr_sts);
+-	intr_sts_masked = intr_sts & swrm->intr_mask;
++	ctrl->reg_read(ctrl, SWRM_INTERRUPT_STATUS, &intr_sts);
++	intr_sts_masked = intr_sts & ctrl->intr_mask;
+ 
+ 	do {
+ 		for (i = 0; i < SWRM_INTERRUPT_MAX; i++) {
+@@ -574,80 +575,80 @@ static irqreturn_t qcom_swrm_irq_handler(int irq, void *dev_id)
+ 
+ 			switch (value) {
+ 			case SWRM_INTERRUPT_STATUS_SLAVE_PEND_IRQ:
+-				devnum = qcom_swrm_get_alert_slave_dev_num(swrm);
++				devnum = qcom_swrm_get_alert_slave_dev_num(ctrl);
+ 				if (devnum < 0) {
+-					dev_err_ratelimited(swrm->dev,
++					dev_err_ratelimited(ctrl->dev,
+ 					    "no slave alert found.spurious interrupt\n");
+ 				} else {
+-					sdw_handle_slave_status(&swrm->bus, swrm->status);
++					sdw_handle_slave_status(&ctrl->bus, ctrl->status);
+ 				}
+ 
+ 				break;
+ 			case SWRM_INTERRUPT_STATUS_NEW_SLAVE_ATTACHED:
+ 			case SWRM_INTERRUPT_STATUS_CHANGE_ENUM_SLAVE_STATUS:
+-				dev_dbg_ratelimited(swrm->dev, "SWR new slave attached\n");
+-				swrm->reg_read(swrm, SWRM_MCP_SLV_STATUS, &slave_status);
+-				if (swrm->slave_status == slave_status) {
+-					dev_dbg(swrm->dev, "Slave status not changed %x\n",
++				dev_dbg_ratelimited(ctrl->dev, "SWR new slave attached\n");
++				ctrl->reg_read(ctrl, SWRM_MCP_SLV_STATUS, &slave_status);
++				if (ctrl->slave_status == slave_status) {
++					dev_dbg(ctrl->dev, "Slave status not changed %x\n",
+ 						slave_status);
+ 				} else {
+-					qcom_swrm_get_device_status(swrm);
+-					qcom_swrm_enumerate(&swrm->bus);
+-					sdw_handle_slave_status(&swrm->bus, swrm->status);
++					qcom_swrm_get_device_status(ctrl);
++					qcom_swrm_enumerate(&ctrl->bus);
++					sdw_handle_slave_status(&ctrl->bus, ctrl->status);
+ 				}
+ 				break;
+ 			case SWRM_INTERRUPT_STATUS_MASTER_CLASH_DET:
+-				dev_err_ratelimited(swrm->dev,
++				dev_err_ratelimited(ctrl->dev,
+ 						"%s: SWR bus clsh detected\n",
+ 						__func__);
+-				swrm->intr_mask &= ~SWRM_INTERRUPT_STATUS_MASTER_CLASH_DET;
+-				swrm->reg_write(swrm, SWRM_INTERRUPT_CPU_EN, swrm->intr_mask);
++				ctrl->intr_mask &= ~SWRM_INTERRUPT_STATUS_MASTER_CLASH_DET;
++				ctrl->reg_write(ctrl, SWRM_INTERRUPT_CPU_EN, ctrl->intr_mask);
+ 				break;
+ 			case SWRM_INTERRUPT_STATUS_RD_FIFO_OVERFLOW:
+-				swrm->reg_read(swrm, SWRM_CMD_FIFO_STATUS, &value);
+-				dev_err_ratelimited(swrm->dev,
++				ctrl->reg_read(ctrl, SWRM_CMD_FIFO_STATUS, &value);
++				dev_err_ratelimited(ctrl->dev,
+ 					"%s: SWR read FIFO overflow fifo status 0x%x\n",
+ 					__func__, value);
+ 				break;
+ 			case SWRM_INTERRUPT_STATUS_RD_FIFO_UNDERFLOW:
+-				swrm->reg_read(swrm, SWRM_CMD_FIFO_STATUS, &value);
+-				dev_err_ratelimited(swrm->dev,
++				ctrl->reg_read(ctrl, SWRM_CMD_FIFO_STATUS, &value);
++				dev_err_ratelimited(ctrl->dev,
+ 					"%s: SWR read FIFO underflow fifo status 0x%x\n",
+ 					__func__, value);
+ 				break;
+ 			case SWRM_INTERRUPT_STATUS_WR_CMD_FIFO_OVERFLOW:
+-				swrm->reg_read(swrm, SWRM_CMD_FIFO_STATUS, &value);
+-				dev_err(swrm->dev,
++				ctrl->reg_read(ctrl, SWRM_CMD_FIFO_STATUS, &value);
++				dev_err(ctrl->dev,
+ 					"%s: SWR write FIFO overflow fifo status %x\n",
+ 					__func__, value);
+-				swrm->reg_write(swrm, SWRM_CMD_FIFO_CMD, 0x1);
++				ctrl->reg_write(ctrl, SWRM_CMD_FIFO_CMD, 0x1);
+ 				break;
+ 			case SWRM_INTERRUPT_STATUS_CMD_ERROR:
+-				swrm->reg_read(swrm, SWRM_CMD_FIFO_STATUS, &value);
+-				dev_err_ratelimited(swrm->dev,
++				ctrl->reg_read(ctrl, SWRM_CMD_FIFO_STATUS, &value);
++				dev_err_ratelimited(ctrl->dev,
+ 					"%s: SWR CMD error, fifo status 0x%x, flushing fifo\n",
+ 					__func__, value);
+-				swrm->reg_write(swrm, SWRM_CMD_FIFO_CMD, 0x1);
++				ctrl->reg_write(ctrl, SWRM_CMD_FIFO_CMD, 0x1);
+ 				break;
+ 			case SWRM_INTERRUPT_STATUS_DOUT_PORT_COLLISION:
+-				dev_err_ratelimited(swrm->dev,
++				dev_err_ratelimited(ctrl->dev,
+ 						"%s: SWR Port collision detected\n",
+ 						__func__);
+-				swrm->intr_mask &= ~SWRM_INTERRUPT_STATUS_DOUT_PORT_COLLISION;
+-				swrm->reg_write(swrm,
+-					SWRM_INTERRUPT_CPU_EN, swrm->intr_mask);
++				ctrl->intr_mask &= ~SWRM_INTERRUPT_STATUS_DOUT_PORT_COLLISION;
++				ctrl->reg_write(ctrl,
++					SWRM_INTERRUPT_CPU_EN, ctrl->intr_mask);
+ 				break;
+ 			case SWRM_INTERRUPT_STATUS_READ_EN_RD_VALID_MISMATCH:
+-				dev_err_ratelimited(swrm->dev,
++				dev_err_ratelimited(ctrl->dev,
+ 					"%s: SWR read enable valid mismatch\n",
+ 					__func__);
+-				swrm->intr_mask &=
++				ctrl->intr_mask &=
+ 					~SWRM_INTERRUPT_STATUS_READ_EN_RD_VALID_MISMATCH;
+-				swrm->reg_write(swrm,
+-					SWRM_INTERRUPT_CPU_EN, swrm->intr_mask);
++				ctrl->reg_write(ctrl,
++					SWRM_INTERRUPT_CPU_EN, ctrl->intr_mask);
+ 				break;
+ 			case SWRM_INTERRUPT_STATUS_SPECIAL_CMD_ID_FINISHED:
+-				complete(&swrm->broadcast);
++				complete(&ctrl->broadcast);
+ 				break;
+ 			case SWRM_INTERRUPT_STATUS_BUS_RESET_FINISHED_V2:
+ 				break;
+@@ -656,19 +657,19 @@ static irqreturn_t qcom_swrm_irq_handler(int irq, void *dev_id)
+ 			case SWRM_INTERRUPT_STATUS_EXT_CLK_STOP_WAKEUP:
+ 				break;
+ 			default:
+-				dev_err_ratelimited(swrm->dev,
++				dev_err_ratelimited(ctrl->dev,
+ 						"%s: SWR unknown interrupt value: %d\n",
+ 						__func__, value);
+ 				ret = IRQ_NONE;
+ 				break;
+ 			}
+ 		}
+-		swrm->reg_write(swrm, SWRM_INTERRUPT_CLEAR, intr_sts);
+-		swrm->reg_read(swrm, SWRM_INTERRUPT_STATUS, &intr_sts);
+-		intr_sts_masked = intr_sts & swrm->intr_mask;
++		ctrl->reg_write(ctrl, SWRM_INTERRUPT_CLEAR, intr_sts);
++		ctrl->reg_read(ctrl, SWRM_INTERRUPT_STATUS, &intr_sts);
++		intr_sts_masked = intr_sts & ctrl->intr_mask;
+ 	} while (intr_sts_masked);
+ 
+-	clk_disable_unprepare(swrm->hclk);
++	clk_disable_unprepare(ctrl->hclk);
+ 	return ret;
+ }
+ 
+@@ -1090,11 +1091,12 @@ static int qcom_swrm_startup(struct snd_pcm_substream *substream,
+ 	struct snd_soc_dai *codec_dai;
+ 	int ret, i;
+ 
+-	ret = pm_runtime_resume_and_get(ctrl->dev);
++	ret = pm_runtime_get_sync(ctrl->dev);
+ 	if (ret < 0 && ret != -EACCES) {
+ 		dev_err_ratelimited(ctrl->dev,
+-				    "pm_runtime_resume_and_get failed in %s, ret %d\n",
++				    "pm_runtime_get_sync failed in %s, ret %d\n",
+ 				    __func__, ret);
++		pm_runtime_put_noidle(ctrl->dev);
+ 		return ret;
+ 	}
+ 
+@@ -1292,23 +1294,24 @@ static int qcom_swrm_get_port_config(struct qcom_swrm_ctrl *ctrl)
+ #ifdef CONFIG_DEBUG_FS
+ static int swrm_reg_show(struct seq_file *s_file, void *data)
+ {
+-	struct qcom_swrm_ctrl *swrm = s_file->private;
++	struct qcom_swrm_ctrl *ctrl = s_file->private;
+ 	int reg, reg_val, ret;
+ 
+-	ret = pm_runtime_resume_and_get(swrm->dev);
++	ret = pm_runtime_get_sync(ctrl->dev);
+ 	if (ret < 0 && ret != -EACCES) {
+-		dev_err_ratelimited(swrm->dev,
+-				    "pm_runtime_resume_and_get failed in %s, ret %d\n",
++		dev_err_ratelimited(ctrl->dev,
++				    "pm_runtime_get_sync failed in %s, ret %d\n",
+ 				    __func__, ret);
++		pm_runtime_put_noidle(ctrl->dev);
+ 		return ret;
+ 	}
+ 
+ 	for (reg = 0; reg <= SWR_MSTR_MAX_REG_ADDR; reg += 4) {
+-		swrm->reg_read(swrm, reg, &reg_val);
++		ctrl->reg_read(ctrl, reg, &reg_val);
+ 		seq_printf(s_file, "0x%.3x: 0x%.2x\n", reg, reg_val);
+ 	}
+-	pm_runtime_mark_last_busy(swrm->dev);
+-	pm_runtime_put_autosuspend(swrm->dev);
++	pm_runtime_mark_last_busy(ctrl->dev);
++	pm_runtime_put_autosuspend(ctrl->dev);
+ 
+ 
+ 	return 0;
+@@ -1489,13 +1492,13 @@ static int qcom_swrm_remove(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static bool swrm_wait_for_frame_gen_enabled(struct qcom_swrm_ctrl *swrm)
++static bool swrm_wait_for_frame_gen_enabled(struct qcom_swrm_ctrl *ctrl)
+ {
+ 	int retry = SWRM_LINK_STATUS_RETRY_CNT;
+ 	int comp_sts;
+ 
+ 	do {
+-		swrm->reg_read(swrm, SWRM_COMP_STATUS, &comp_sts);
++		ctrl->reg_read(ctrl, SWRM_COMP_STATUS, &comp_sts);
+ 
+ 		if (comp_sts & SWRM_FRM_GEN_ENABLED)
+ 			return true;
+@@ -1503,7 +1506,7 @@ static bool swrm_wait_for_frame_gen_enabled(struct qcom_swrm_ctrl *swrm)
+ 		usleep_range(500, 510);
+ 	} while (retry--);
+ 
+-	dev_err(swrm->dev, "%s: link status not %s\n", __func__,
++	dev_err(ctrl->dev, "%s: link status not %s\n", __func__,
+ 		comp_sts & SWRM_FRM_GEN_ENABLED ? "connected" : "disconnected");
+ 
+ 	return false;
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index 6b46a3b67c416..d91dfbe47aa50 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -1543,13 +1543,9 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ 		res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ 						   "mspi");
+ 
+-	if (res) {
+-		qspi->base[MSPI]  = devm_ioremap_resource(dev, res);
+-		if (IS_ERR(qspi->base[MSPI]))
+-			return PTR_ERR(qspi->base[MSPI]);
+-	} else {
+-		return 0;
+-	}
++	qspi->base[MSPI]  = devm_ioremap_resource(dev, res);
++	if (IS_ERR(qspi->base[MSPI]))
++		return PTR_ERR(qspi->base[MSPI]);
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "bspi");
+ 	if (res) {
+diff --git a/drivers/spi/spi-dw-core.c b/drivers/spi/spi-dw-core.c
+index ae3108c70f508..7778b19bcb6c6 100644
+--- a/drivers/spi/spi-dw-core.c
++++ b/drivers/spi/spi-dw-core.c
+@@ -426,7 +426,10 @@ static int dw_spi_transfer_one(struct spi_controller *master,
+ 	int ret;
+ 
+ 	dws->dma_mapped = 0;
+-	dws->n_bytes = DIV_ROUND_UP(transfer->bits_per_word, BITS_PER_BYTE);
++	dws->n_bytes =
++		roundup_pow_of_two(DIV_ROUND_UP(transfer->bits_per_word,
++						BITS_PER_BYTE));
++
+ 	dws->tx = (void *)transfer->tx_buf;
+ 	dws->tx_len = transfer->len / dws->n_bytes;
+ 	dws->rx = transfer->rx_buf;
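
The spi-dw-core fix rounds the per-word byte count up to a power of two because the FIFO is accessed in 1-, 2- or 4-byte units: for bits_per_word = 24, DIV_ROUND_UP(24, 8) = 3, which is not a valid access size, so it becomes 4. In isolation:

    /* bits_per_word = 24:
     *   DIV_ROUND_UP(24, 8)   = 3  (no 3-byte FIFO access exists)
     *   roundup_pow_of_two(3) = 4  (access the FIFO 4 bytes at a time)
     */
    dws->n_bytes = roundup_pow_of_two(DIV_ROUND_UP(transfer->bits_per_word,
                                                   BITS_PER_BYTE));
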
+diff --git a/drivers/spi/spi-geni-qcom.c b/drivers/spi/spi-geni-qcom.c
+index b293428760bc6..1df9d4844a68d 100644
+--- a/drivers/spi/spi-geni-qcom.c
++++ b/drivers/spi/spi-geni-qcom.c
+@@ -35,7 +35,7 @@
+ #define CS_DEMUX_OUTPUT_SEL	GENMASK(3, 0)
+ 
+ #define SE_SPI_TRANS_CFG	0x25c
+-#define CS_TOGGLE		BIT(0)
++#define CS_TOGGLE		BIT(1)
+ 
+ #define SE_SPI_WORD_LEN		0x268
+ #define WORD_LEN_MSK		GENMASK(9, 0)
+@@ -97,8 +97,6 @@ struct spi_geni_master {
+ 	struct dma_chan *tx;
+ 	struct dma_chan *rx;
+ 	int cur_xfer_mode;
+-	dma_addr_t tx_se_dma;
+-	dma_addr_t rx_se_dma;
+ };
+ 
+ static int get_spi_clk_cfg(unsigned int speed_hz,
+@@ -174,7 +172,7 @@ static void handle_se_timeout(struct spi_master *spi,
+ unmap_if_dma:
+ 	if (mas->cur_xfer_mode == GENI_SE_DMA) {
+ 		if (xfer) {
+-			if (xfer->tx_buf && mas->tx_se_dma) {
++			if (xfer->tx_buf) {
+ 				spin_lock_irq(&mas->lock);
+ 				reinit_completion(&mas->tx_reset_done);
+ 				writel(1, se->base + SE_DMA_TX_FSM_RST);
+@@ -182,9 +180,8 @@ unmap_if_dma:
+ 				time_left = wait_for_completion_timeout(&mas->tx_reset_done, HZ);
+ 				if (!time_left)
+ 					dev_err(mas->dev, "DMA TX RESET failed\n");
+-				geni_se_tx_dma_unprep(se, mas->tx_se_dma, xfer->len);
+ 			}
+-			if (xfer->rx_buf && mas->rx_se_dma) {
++			if (xfer->rx_buf) {
+ 				spin_lock_irq(&mas->lock);
+ 				reinit_completion(&mas->rx_reset_done);
+ 				writel(1, se->base + SE_DMA_RX_FSM_RST);
+@@ -192,7 +189,6 @@ unmap_if_dma:
+ 				time_left = wait_for_completion_timeout(&mas->rx_reset_done, HZ);
+ 				if (!time_left)
+ 					dev_err(mas->dev, "DMA RX RESET failed\n");
+-				geni_se_rx_dma_unprep(se, mas->rx_se_dma, xfer->len);
+ 			}
+ 		} else {
+ 			/*
+@@ -523,17 +519,36 @@ static int setup_gsi_xfer(struct spi_transfer *xfer, struct spi_geni_master *mas
+ 	return 1;
+ }
+ 
++static u32 get_xfer_len_in_words(struct spi_transfer *xfer,
++				struct spi_geni_master *mas)
++{
++	u32 len;
++
++	if (!(mas->cur_bits_per_word % MIN_WORD_LEN))
++		len = xfer->len * BITS_PER_BYTE / mas->cur_bits_per_word;
++	else
++		len = xfer->len / (mas->cur_bits_per_word / BITS_PER_BYTE + 1);
++	len &= TRANS_LEN_MSK;
++
++	return len;
++}
++
+ static bool geni_can_dma(struct spi_controller *ctlr,
+ 			 struct spi_device *slv, struct spi_transfer *xfer)
+ {
+ 	struct spi_geni_master *mas = spi_master_get_devdata(slv->master);
++	u32 len, fifo_size;
+ 
+-	/*
+-	 * Return true if transfer needs to be mapped prior to
+-	 * calling transfer_one which is the case only for GPI_DMA.
+-	 * For SE_DMA mode, map/unmap is done in geni_se_*x_dma_prep.
+-	 */
+-	return mas->cur_xfer_mode == GENI_GPI_DMA;
++	if (mas->cur_xfer_mode == GENI_GPI_DMA)
++		return true;
++
++	len = get_xfer_len_in_words(xfer, mas);
++	fifo_size = mas->tx_fifo_depth * mas->fifo_width_bits / mas->cur_bits_per_word;
++
++	if (len > fifo_size)
++		return true;
++	else
++		return false;
+ }
+ 
+ static int spi_geni_prepare_message(struct spi_master *spi,
+@@ -774,7 +789,7 @@ static int setup_se_xfer(struct spi_transfer *xfer,
+ 				u16 mode, struct spi_master *spi)
+ {
+ 	u32 m_cmd = 0;
+-	u32 len, fifo_size;
++	u32 len;
+ 	struct geni_se *se = &mas->se;
+ 	int ret;
+ 
+@@ -806,11 +821,7 @@ static int setup_se_xfer(struct spi_transfer *xfer,
+ 	mas->tx_rem_bytes = 0;
+ 	mas->rx_rem_bytes = 0;
+ 
+-	if (!(mas->cur_bits_per_word % MIN_WORD_LEN))
+-		len = xfer->len * BITS_PER_BYTE / mas->cur_bits_per_word;
+-	else
+-		len = xfer->len / (mas->cur_bits_per_word / BITS_PER_BYTE + 1);
+-	len &= TRANS_LEN_MSK;
++	len = get_xfer_len_in_words(xfer, mas);
+ 
+ 	mas->cur_xfer = xfer;
+ 	if (xfer->tx_buf) {
+@@ -825,9 +836,20 @@ static int setup_se_xfer(struct spi_transfer *xfer,
+ 		mas->rx_rem_bytes = xfer->len;
+ 	}
+ 
+-	/* Select transfer mode based on transfer length */
+-	fifo_size = mas->tx_fifo_depth * mas->fifo_width_bits / mas->cur_bits_per_word;
+-	mas->cur_xfer_mode = (len <= fifo_size) ? GENI_SE_FIFO : GENI_SE_DMA;
++	/*
++	 * Select DMA mode if an sg table is present and has only one entry.
++	 * This is not a serious limitation because the xfer buffers are
++	 * expected to fit into one entry almost always, and if any doesn't
++	 * for any reason we fall back to FIFO mode anyway.
++	 */
++	if (!xfer->tx_sg.nents && !xfer->rx_sg.nents)
++		mas->cur_xfer_mode = GENI_SE_FIFO;
++	else if (xfer->tx_sg.nents > 1 || xfer->rx_sg.nents > 1) {
++		dev_warn_once(mas->dev, "Doing FIFO, cannot handle tx_nents-%d, rx_nents-%d\n",
++			xfer->tx_sg.nents, xfer->rx_sg.nents);
++		mas->cur_xfer_mode = GENI_SE_FIFO;
++	} else
++		mas->cur_xfer_mode = GENI_SE_DMA;
+ 	geni_se_select_mode(se, mas->cur_xfer_mode);
+ 
+ 	/*
+@@ -838,35 +860,17 @@ static int setup_se_xfer(struct spi_transfer *xfer,
+ 	geni_se_setup_m_cmd(se, m_cmd, FRAGMENTATION);
+ 
+ 	if (mas->cur_xfer_mode == GENI_SE_DMA) {
+-		if (m_cmd & SPI_RX_ONLY) {
+-			ret =  geni_se_rx_dma_prep(se, xfer->rx_buf,
+-				xfer->len, &mas->rx_se_dma);
+-			if (ret) {
+-				dev_err(mas->dev, "Failed to setup Rx dma %d\n", ret);
+-				mas->rx_se_dma = 0;
+-				goto unlock_and_return;
+-			}
+-		}
+-		if (m_cmd & SPI_TX_ONLY) {
+-			ret =  geni_se_tx_dma_prep(se, (void *)xfer->tx_buf,
+-				xfer->len, &mas->tx_se_dma);
+-			if (ret) {
+-				dev_err(mas->dev, "Failed to setup Tx dma %d\n", ret);
+-				mas->tx_se_dma = 0;
+-				if (m_cmd & SPI_RX_ONLY) {
+-					/* Unmap rx buffer if duplex transfer */
+-					geni_se_rx_dma_unprep(se, mas->rx_se_dma, xfer->len);
+-					mas->rx_se_dma = 0;
+-				}
+-				goto unlock_and_return;
+-			}
+-		}
++		if (m_cmd & SPI_RX_ONLY)
++			geni_se_rx_init_dma(se, sg_dma_address(xfer->rx_sg.sgl),
++				sg_dma_len(xfer->rx_sg.sgl));
++		if (m_cmd & SPI_TX_ONLY)
++			geni_se_tx_init_dma(se, sg_dma_address(xfer->tx_sg.sgl),
++				sg_dma_len(xfer->tx_sg.sgl));
+ 	} else if (m_cmd & SPI_TX_ONLY) {
+ 		if (geni_spi_handle_tx(mas))
+ 			writel(mas->tx_wm, se->base + SE_GENI_TX_WATERMARK_REG);
+ 	}
+ 
+-unlock_and_return:
+ 	spin_unlock_irq(&mas->lock);
+ 	return ret;
+ }
+@@ -967,14 +971,6 @@ static irqreturn_t geni_spi_isr(int irq, void *data)
+ 		if (dma_rx_status & RX_RESET_DONE)
+ 			complete(&mas->rx_reset_done);
+ 		if (!mas->tx_rem_bytes && !mas->rx_rem_bytes && xfer) {
+-			if (xfer->tx_buf && mas->tx_se_dma) {
+-				geni_se_tx_dma_unprep(se, mas->tx_se_dma, xfer->len);
+-				mas->tx_se_dma = 0;
+-			}
+-			if (xfer->rx_buf && mas->rx_se_dma) {
+-				geni_se_rx_dma_unprep(se, mas->rx_se_dma, xfer->len);
+-				mas->rx_se_dma = 0;
+-			}
+ 			spi_finalize_current_transfer(spi);
+ 			mas->cur_xfer = NULL;
+ 		}
+@@ -1059,6 +1055,7 @@ static int spi_geni_probe(struct platform_device *pdev)
+ 	spi->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32);
+ 	spi->num_chipselect = 4;
+ 	spi->max_speed_hz = 50000000;
++	spi->max_dma_len = 0xffff0; /* 24 bits for tx/rx dma length */
+ 	spi->prepare_message = spi_geni_prepare_message;
+ 	spi->transfer_one = spi_geni_transfer_one;
+ 	spi->can_dma = geni_can_dma;
+@@ -1100,6 +1097,12 @@ static int spi_geni_probe(struct platform_device *pdev)
+ 	if (mas->cur_xfer_mode == GENI_SE_FIFO)
+ 		spi->set_cs = spi_geni_set_cs;
+ 
++	/*
++	 * TX is required per GSI spec, see setup_gsi_xfer().
++	 */
++	if (mas->cur_xfer_mode == GENI_GPI_DMA)
++		spi->flags = SPI_CONTROLLER_MUST_TX;
++
+ 	ret = request_irq(mas->irq, geni_spi_isr, 0, dev_name(dev), spi);
+ 	if (ret)
+ 		goto spi_geni_release_dma;
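
With this rework spi-geni-qcom no longer maps DMA buffers itself: returning true from the can_dma() callback tells the SPI core to map the transfer into xfer->tx_sg/rx_sg before transfer_one() runs, and the driver then programs the single-entry scatterlist address. The contract, sketched with placeholder names (EXAMPLE_FIFO_BYTES and example_rx_dma() are hypothetical):

    static bool example_can_dma(struct spi_controller *ctlr,
                                struct spi_device *spi,
                                struct spi_transfer *xfer)
    {
            /* true => core DMA-maps xfer into xfer->tx_sg / xfer->rx_sg */
            return xfer->len > EXAMPLE_FIFO_BYTES;
    }

    static int example_transfer_one(struct spi_controller *ctlr,
                                    struct spi_device *spi,
                                    struct spi_transfer *xfer)
    {
            if (xfer->rx_sg.nents == 1)
                    example_rx_dma(sg_dma_address(xfer->rx_sg.sgl),
                                   sg_dma_len(xfer->rx_sg.sgl));
            return 1; /* in flight; completed from the IRQ handler */
    }
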
+diff --git a/drivers/staging/media/atomisp/i2c/atomisp-gc0310.c b/drivers/staging/media/atomisp/i2c/atomisp-gc0310.c
+index 273155308fe36..eb6db1571dc0d 100644
+--- a/drivers/staging/media/atomisp/i2c/atomisp-gc0310.c
++++ b/drivers/staging/media/atomisp/i2c/atomisp-gc0310.c
+@@ -377,8 +377,8 @@ static void gc0310_remove(struct i2c_client *client)
+ 	v4l2_device_unregister_subdev(sd);
+ 	media_entity_cleanup(&dev->sd.entity);
+ 	v4l2_ctrl_handler_free(&dev->ctrls.handler);
++	mutex_destroy(&dev->input_lock);
+ 	pm_runtime_disable(&client->dev);
+-	kfree(dev);
+ }
+ 
+ static int gc0310_probe(struct i2c_client *client)
+diff --git a/drivers/staging/media/atomisp/i2c/atomisp-ov2680.c b/drivers/staging/media/atomisp/i2c/atomisp-ov2680.c
+index c079368019e87..3a6bc3e56b10e 100644
+--- a/drivers/staging/media/atomisp/i2c/atomisp-ov2680.c
++++ b/drivers/staging/media/atomisp/i2c/atomisp-ov2680.c
+@@ -239,27 +239,21 @@ static void ov2680_calc_mode(struct ov2680_device *sensor, int width, int height
+ static int ov2680_set_mode(struct ov2680_device *sensor)
+ {
+ 	struct i2c_client *client = sensor->client;
+-	u8 pll_div, unknown, inc, fmt1, fmt2;
++	u8 unknown, inc, fmt1, fmt2;
+ 	int ret;
+ 
+ 	if (sensor->mode.binning) {
+-		pll_div = 1;
+ 		unknown = 0x23;
+ 		inc = 0x31;
+ 		fmt1 = 0xc2;
+ 		fmt2 = 0x01;
+ 	} else {
+-		pll_div = 0;
+ 		unknown = 0x21;
+ 		inc = 0x11;
+ 		fmt1 = 0xc0;
+ 		fmt2 = 0x00;
+ 	}
+ 
+-	ret = ov_write_reg8(client, 0x3086, pll_div);
+-	if (ret)
+-		return ret;
+-
+ 	ret = ov_write_reg8(client, 0x370a, unknown);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/staging/media/atomisp/i2c/ov2680.h b/drivers/staging/media/atomisp/i2c/ov2680.h
+index baf49eb0659e3..eed18d6943370 100644
+--- a/drivers/staging/media/atomisp/i2c/ov2680.h
++++ b/drivers/staging/media/atomisp/i2c/ov2680.h
+@@ -172,6 +172,7 @@ static struct ov2680_reg const ov2680_global_setting[] = {
+ 	{0x3082, 0x45},
+ 	{0x3084, 0x09},
+ 	{0x3085, 0x04},
++	{0x3086, 0x00},
+ 	{0x3503, 0x03},
+ 	{0x350b, 0x36},
+ 	{0x3600, 0xb4},
+diff --git a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+index c718a74ea70a3..88d4499233b98 100644
+--- a/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
++++ b/drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
+@@ -1357,7 +1357,7 @@ static int gmin_get_config_dsm_var(struct device *dev,
+ 	dev_info(dev, "found _DSM entry for '%s': %s\n", var,
+ 		 cur->string.pointer);
+ 	strscpy(out, cur->string.pointer, *out_len);
+-	*out_len = strlen(cur->string.pointer);
++	*out_len = strlen(out);
+ 
+ 	ACPI_FREE(obj);
+ 	return 0;
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index 90a3958d1f297..aa2313f3bcab8 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -415,7 +415,7 @@ free_pagelist(struct vchiq_instance *instance, struct vchiq_pagelist_info *pagel
+ 	pagelistinfo->scatterlist_mapped = 0;
+ 
+ 	/* Deal with any partial cache lines (fragments) */
+-	if (pagelist->type >= PAGELIST_READ_WITH_FRAGMENTS) {
++	if (pagelist->type >= PAGELIST_READ_WITH_FRAGMENTS && g_fragments_base) {
+ 		char *fragments = g_fragments_base +
+ 			(pagelist->type - PAGELIST_READ_WITH_FRAGMENTS) *
+ 			g_fragments_size;
+@@ -462,7 +462,7 @@ free_pagelist(struct vchiq_instance *instance, struct vchiq_pagelist_info *pagel
+ 	cleanup_pagelistinfo(instance, pagelistinfo);
+ }
+ 
+-int vchiq_platform_init(struct platform_device *pdev, struct vchiq_state *state)
++static int vchiq_platform_init(struct platform_device *pdev, struct vchiq_state *state)
+ {
+ 	struct device *dev = &pdev->dev;
+ 	struct vchiq_drvdata *drvdata = platform_get_drvdata(pdev);
+diff --git a/drivers/thermal/qcom/tsens-v0_1.c b/drivers/thermal/qcom/tsens-v0_1.c
+index e89c6f39a3aea..e9ce7b62b3818 100644
+--- a/drivers/thermal/qcom/tsens-v0_1.c
++++ b/drivers/thermal/qcom/tsens-v0_1.c
+@@ -243,6 +243,18 @@ static int calibrate_8974(struct tsens_priv *priv)
+ 	return 0;
+ }
+ 
++static int __init init_8226(struct tsens_priv *priv)
++{
++	priv->sensor[0].slope = 2901;
++	priv->sensor[1].slope = 2846;
++	priv->sensor[2].slope = 3038;
++	priv->sensor[3].slope = 2955;
++	priv->sensor[4].slope = 2901;
++	priv->sensor[5].slope = 2846;
++
++	return init_common(priv);
++}
++
+ static int __init init_8939(struct tsens_priv *priv) {
+ 	priv->sensor[0].slope = 2911;
+ 	priv->sensor[1].slope = 2789;
+@@ -258,7 +270,28 @@ static int __init init_8939(struct tsens_priv *priv) {
+ 	return init_common(priv);
+ }
+ 
+-/* v0.1: 8916, 8939, 8974, 9607 */
++static int __init init_9607(struct tsens_priv *priv)
++{
++	int i;
++
++	for (i = 0; i < priv->num_sensors; ++i)
++		priv->sensor[i].slope = 3000;
++
++	priv->sensor[0].p1_calib_offset = 1;
++	priv->sensor[0].p2_calib_offset = 1;
++	priv->sensor[1].p1_calib_offset = -4;
++	priv->sensor[1].p2_calib_offset = -2;
++	priv->sensor[2].p1_calib_offset = 4;
++	priv->sensor[2].p2_calib_offset = 8;
++	priv->sensor[3].p1_calib_offset = -3;
++	priv->sensor[3].p2_calib_offset = -5;
++	priv->sensor[4].p1_calib_offset = -4;
++	priv->sensor[4].p2_calib_offset = -4;
++
++	return init_common(priv);
++}
++
++/* v0.1: 8226, 8916, 8939, 8974, 9607 */
+ 
+ static struct tsens_features tsens_v0_1_feat = {
+ 	.ver_major	= VER_0_1,
+@@ -313,6 +346,19 @@ static const struct tsens_ops ops_v0_1 = {
+ 	.get_temp	= get_temp_common,
+ };
+ 
++static const struct tsens_ops ops_8226 = {
++	.init		= init_8226,
++	.calibrate	= tsens_calibrate_common,
++	.get_temp	= get_temp_common,
++};
++
++struct tsens_plat_data data_8226 = {
++	.num_sensors	= 6,
++	.ops		= &ops_8226,
++	.feat		= &tsens_v0_1_feat,
++	.fields	= tsens_v0_1_regfields,
++};
++
+ static const struct tsens_ops ops_8916 = {
+ 	.init		= init_common,
+ 	.calibrate	= calibrate_8916,
+@@ -356,9 +402,15 @@ struct tsens_plat_data data_8974 = {
+ 	.fields	= tsens_v0_1_regfields,
+ };
+ 
++static const struct tsens_ops ops_9607 = {
++	.init		= init_9607,
++	.calibrate	= tsens_calibrate_common,
++	.get_temp	= get_temp_common,
++};
++
+ struct tsens_plat_data data_9607 = {
+ 	.num_sensors	= 5,
+-	.ops		= &ops_v0_1,
++	.ops		= &ops_9607,
+ 	.feat		= &tsens_v0_1_feat,
+ 	.fields	= tsens_v0_1_regfields,
+ };
+diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c
+index d3218127e617d..9dd5e4b709117 100644
+--- a/drivers/thermal/qcom/tsens.c
++++ b/drivers/thermal/qcom/tsens.c
+@@ -134,10 +134,12 @@ int tsens_read_calibration(struct tsens_priv *priv, int shift, u32 *p1, u32 *p2,
+ 			p1[i] = p1[i] + (base1 << shift);
+ 		break;
+ 	case TWO_PT_CALIB:
++	case TWO_PT_CALIB_NO_OFFSET:
+ 		for (i = 0; i < priv->num_sensors; i++)
+ 			p2[i] = (p2[i] + base2) << shift;
+ 		fallthrough;
+ 	case ONE_PT_CALIB2:
++	case ONE_PT_CALIB2_NO_OFFSET:
+ 		for (i = 0; i < priv->num_sensors; i++)
+ 			p1[i] = (p1[i] + base1) << shift;
+ 		break;
+@@ -149,6 +151,18 @@ int tsens_read_calibration(struct tsens_priv *priv, int shift, u32 *p1, u32 *p2,
+ 		}
+ 	}
+ 
++	/* Apply calibration offset workaround except for _NO_OFFSET modes */
++	switch (mode) {
++	case TWO_PT_CALIB:
++		for (i = 0; i < priv->num_sensors; i++)
++			p2[i] += priv->sensor[i].p2_calib_offset;
++		fallthrough;
++	case ONE_PT_CALIB2:
++		for (i = 0; i < priv->num_sensors; i++)
++			p1[i] += priv->sensor[i].p1_calib_offset;
++		break;
++	}
++
+ 	return mode;
+ }
+ 
+@@ -254,7 +268,7 @@ void compute_intercept_slope(struct tsens_priv *priv, u32 *p1,
+ 
+ 		if (!priv->sensor[i].slope)
+ 			priv->sensor[i].slope = SLOPE_DEFAULT;
+-		if (mode == TWO_PT_CALIB) {
++		if (mode == TWO_PT_CALIB || mode == TWO_PT_CALIB_NO_OFFSET) {
+ 			/*
+ 			 * slope (m) = adc_code2 - adc_code1 (y2 - y1)/
+ 			 *	temp_120_degc - temp_30_degc (x2 - x1)
+@@ -1095,6 +1109,9 @@ static const struct of_device_id tsens_table[] = {
+ 	}, {
+ 		.compatible = "qcom,mdm9607-tsens",
+ 		.data = &data_9607,
++	}, {
++		.compatible = "qcom,msm8226-tsens",
++		.data = &data_8226,
+ 	}, {
+ 		.compatible = "qcom,msm8916-tsens",
+ 		.data = &data_8916,
+diff --git a/drivers/thermal/qcom/tsens.h b/drivers/thermal/qcom/tsens.h
+index dba9cd38f637c..1cd8f4fe0971f 100644
+--- a/drivers/thermal/qcom/tsens.h
++++ b/drivers/thermal/qcom/tsens.h
+@@ -10,6 +10,8 @@
+ #define ONE_PT_CALIB		0x1
+ #define ONE_PT_CALIB2		0x2
+ #define TWO_PT_CALIB		0x3
++#define ONE_PT_CALIB2_NO_OFFSET	0x6
++#define TWO_PT_CALIB_NO_OFFSET	0x7
+ #define CAL_DEGC_PT1		30
+ #define CAL_DEGC_PT2		120
+ #define SLOPE_FACTOR		1000
+@@ -57,6 +59,8 @@ struct tsens_sensor {
+ 	unsigned int			hw_id;
+ 	int				slope;
+ 	u32				status;
++	int				p1_calib_offset;
++	int				p2_calib_offset;
+ };
+ 
+ /**
+@@ -635,7 +639,7 @@ int get_temp_common(const struct tsens_sensor *s, int *temp);
+ extern struct tsens_plat_data data_8960;
+ 
+ /* TSENS v0.1 targets */
+-extern struct tsens_plat_data data_8916, data_8939, data_8974, data_9607;
++extern struct tsens_plat_data data_8226, data_8916, data_8939, data_8974, data_9607;
+ 
+ /* TSENS v1 targets */
+ extern struct tsens_plat_data data_tsens_v1, data_8976, data_8956;
+diff --git a/drivers/thermal/qoriq_thermal.c b/drivers/thermal/qoriq_thermal.c
+index e58756323457e..3eca7085d9efe 100644
+--- a/drivers/thermal/qoriq_thermal.c
++++ b/drivers/thermal/qoriq_thermal.c
+@@ -31,7 +31,6 @@
+ #define TMR_DISABLE	0x0
+ #define TMR_ME		0x80000000
+ #define TMR_ALPF	0x0c000000
+-#define TMR_MSITE_ALL	GENMASK(15, 0)
+ 
+ #define REGS_TMTMIR	0x008	/* Temperature measurement interval Register */
+ #define TMTMIR_DEFAULT	0x0000000f
+@@ -105,6 +104,11 @@ static int tmu_get_temp(struct thermal_zone_device *tz, int *temp)
+ 	 * within sensor range. TEMP is a 9-bit value representing the
+ 	 * temperature in Kelvin.
+ 	 */
++
++	regmap_read(qdata->regmap, REGS_TMR, &val);
++	if (!(val & TMR_ME))
++		return -EAGAIN;
++
+ 	if (regmap_read_poll_timeout(qdata->regmap,
+ 				     REGS_TRITSR(qsensor->id),
+ 				     val,
+@@ -128,15 +132,7 @@ static const struct thermal_zone_device_ops tmu_tz_ops = {
+ static int qoriq_tmu_register_tmu_zone(struct device *dev,
+ 				       struct qoriq_tmu_data *qdata)
+ {
+-	int id;
+-
+-	if (qdata->ver == TMU_VER1) {
+-		regmap_write(qdata->regmap, REGS_TMR,
+-			     TMR_MSITE_ALL | TMR_ME | TMR_ALPF);
+-	} else {
+-		regmap_write(qdata->regmap, REGS_V2_TMSR, TMR_MSITE_ALL);
+-		regmap_write(qdata->regmap, REGS_TMR, TMR_ME | TMR_ALPF_V2);
+-	}
++	int id, sites = 0;
+ 
+ 	for (id = 0; id < SITES_MAX; id++) {
+ 		struct thermal_zone_device *tzd;
+@@ -153,14 +149,26 @@ static int qoriq_tmu_register_tmu_zone(struct device *dev,
+ 			if (ret == -ENODEV)
+ 				continue;
+ 
+-			regmap_write(qdata->regmap, REGS_TMR, TMR_DISABLE);
+ 			return ret;
+ 		}
+ 
++		if (qdata->ver == TMU_VER1)
++			sites |= 0x1 << (15 - id);
++		else
++			sites |= 0x1 << id;
++
+ 		if (devm_thermal_add_hwmon_sysfs(dev, tzd))
+ 			dev_warn(dev,
+ 				 "Failed to add hwmon sysfs attributes\n");
++	}
+ 
++	if (sites) {
++		if (qdata->ver == TMU_VER1) {
++			regmap_write(qdata->regmap, REGS_TMR, TMR_ME | TMR_ALPF | sites);
++		} else {
++			regmap_write(qdata->regmap, REGS_V2_TMSR, sites);
++			regmap_write(qdata->regmap, REGS_TMR, TMR_ME | TMR_ALPF_V2);
++		}
+ 	}
+ 
+ 	return 0;
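The rework above defers enabling monitoring (TMR_ME) until at least one sensor has registered, accumulating a per-version site mask along the way. A sketch of the bit layout used in the loop, with an illustrative helper name (TMU v1 numbers monitoring sites from the MSB down, v2 from bit 0):

#include <linux/bits.h>

/* Which REGS_TMR/REGS_V2_TMSR bit enables monitoring site 'id'.
 * TMU v1 uses bit (15 - id); TMU v2 uses bit id directly. */
static u32 example_tmu_site_bit(int ver, int id)
{
	return (ver == 1) ? BIT(15 - id) : BIT(id);
}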
+diff --git a/drivers/thermal/sun8i_thermal.c b/drivers/thermal/sun8i_thermal.c
+index 793ddce72132f..d4d241686c810 100644
+--- a/drivers/thermal/sun8i_thermal.c
++++ b/drivers/thermal/sun8i_thermal.c
+@@ -319,6 +319,11 @@ out:
+ 	return ret;
+ }
+ 
++static void sun8i_ths_reset_control_assert(void *data)
++{
++	reset_control_assert(data);
++}
++
+ static int sun8i_ths_resource_init(struct ths_device *tmdev)
+ {
+ 	struct device *dev = tmdev->dev;
+@@ -339,47 +344,35 @@ static int sun8i_ths_resource_init(struct ths_device *tmdev)
+ 		if (IS_ERR(tmdev->reset))
+ 			return PTR_ERR(tmdev->reset);
+ 
+-		tmdev->bus_clk = devm_clk_get(&pdev->dev, "bus");
++		ret = reset_control_deassert(tmdev->reset);
++		if (ret)
++			return ret;
++
++		ret = devm_add_action_or_reset(dev, sun8i_ths_reset_control_assert,
++					       tmdev->reset);
++		if (ret)
++			return ret;
++
++		tmdev->bus_clk = devm_clk_get_enabled(&pdev->dev, "bus");
+ 		if (IS_ERR(tmdev->bus_clk))
+ 			return PTR_ERR(tmdev->bus_clk);
+ 	}
+ 
+ 	if (tmdev->chip->has_mod_clk) {
+-		tmdev->mod_clk = devm_clk_get(&pdev->dev, "mod");
++		tmdev->mod_clk = devm_clk_get_enabled(&pdev->dev, "mod");
+ 		if (IS_ERR(tmdev->mod_clk))
+ 			return PTR_ERR(tmdev->mod_clk);
+ 	}
+ 
+-	ret = reset_control_deassert(tmdev->reset);
+-	if (ret)
+-		return ret;
+-
+-	ret = clk_prepare_enable(tmdev->bus_clk);
+-	if (ret)
+-		goto assert_reset;
+-
+ 	ret = clk_set_rate(tmdev->mod_clk, 24000000);
+ 	if (ret)
+-		goto bus_disable;
+-
+-	ret = clk_prepare_enable(tmdev->mod_clk);
+-	if (ret)
+-		goto bus_disable;
++		return ret;
+ 
+ 	ret = sun8i_ths_calibrate(tmdev);
+ 	if (ret)
+-		goto mod_disable;
++		return ret;
+ 
+ 	return 0;
+-
+-mod_disable:
+-	clk_disable_unprepare(tmdev->mod_clk);
+-bus_disable:
+-	clk_disable_unprepare(tmdev->bus_clk);
+-assert_reset:
+-	reset_control_assert(tmdev->reset);
+-
+-	return ret;
+ }
+ 
+ static int sun8i_h3_thermal_init(struct ths_device *tmdev)
+@@ -530,17 +523,6 @@ static int sun8i_ths_probe(struct platform_device *pdev)
+ 	return 0;
+ }
+ 
+-static int sun8i_ths_remove(struct platform_device *pdev)
+-{
+-	struct ths_device *tmdev = platform_get_drvdata(pdev);
+-
+-	clk_disable_unprepare(tmdev->mod_clk);
+-	clk_disable_unprepare(tmdev->bus_clk);
+-	reset_control_assert(tmdev->reset);
+-
+-	return 0;
+-}
+-
+ static const struct ths_thermal_chip sun8i_a83t_ths = {
+ 	.sensor_num = 3,
+ 	.scale = 705,
+@@ -642,7 +624,6 @@ MODULE_DEVICE_TABLE(of, of_ths_match);
+ 
+ static struct platform_driver ths_driver = {
+ 	.probe = sun8i_ths_probe,
+-	.remove = sun8i_ths_remove,
+ 	.driver = {
+ 		.name = "sun8i-thermal",
+ 		.of_match_table = of_ths_match,
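Both this driver and the dwc2 change later in the patch move to the same devres idiom, which is what allows the .remove callback to be dropped entirely: undo actions registered with devm_add_action_or_reset() run automatically, in reverse order, on probe failure or unbind. A minimal sketch under those assumptions (names illustrative):

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/reset.h>

static void example_reset_assert(void *data)
{
	reset_control_assert(data);
}

/* Deassert the reset and enable the clock; devres re-asserts and
 * disables them automatically when the device is unbound or a later
 * probe step fails. */
static int example_hw_init(struct device *dev, struct reset_control *rst)
{
	struct clk *clk;
	int ret;

	ret = reset_control_deassert(rst);
	if (ret)
		return ret;

	ret = devm_add_action_or_reset(dev, example_reset_assert, rst);
	if (ret)
		return ret;

	clk = devm_clk_get_enabled(dev, "bus");
	return PTR_ERR_OR_ZERO(clk);
}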
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 734f092ef839a..b758e9b613c74 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -649,6 +649,8 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ 	if ((lsr & UART_LSR_OE) && up->overrun_backoff_time_ms > 0) {
+ 		unsigned long delay;
+ 
++		/* Synchronize UART_IER access against the console. */
++		spin_lock(&port->lock);
+ 		up->ier = port->serial_in(port, UART_IER);
+ 		if (up->ier & (UART_IER_RLSI | UART_IER_RDI)) {
+ 			port->ops->stop_rx(port);
+@@ -658,6 +660,7 @@ static irqreturn_t omap8250_irq(int irq, void *dev_id)
+ 			 */
+ 			cancel_delayed_work(&up->overrun_backoff);
+ 		}
++		spin_unlock(&port->lock);
+ 
+ 		delay = msecs_to_jiffies(up->overrun_backoff_time_ms);
+ 		schedule_delayed_work(&up->overrun_backoff, delay);
+@@ -1532,7 +1535,9 @@ static int omap8250_probe(struct platform_device *pdev)
+ err:
+ 	pm_runtime_dont_use_autosuspend(&pdev->dev);
+ 	pm_runtime_put_sync(&pdev->dev);
++	flush_work(&priv->qos_work);
+ 	pm_runtime_disable(&pdev->dev);
++	cpu_latency_qos_remove_request(&priv->pm_qos_request);
+ 	return ret;
+ }
+ 
+@@ -1579,25 +1584,35 @@ static int omap8250_suspend(struct device *dev)
+ {
+ 	struct omap8250_priv *priv = dev_get_drvdata(dev);
+ 	struct uart_8250_port *up = serial8250_get_port(priv->line);
++	int err;
+ 
+ 	serial8250_suspend_port(priv->line);
+ 
+-	pm_runtime_get_sync(dev);
++	err = pm_runtime_resume_and_get(dev);
++	if (err)
++		return err;
+ 	if (!device_may_wakeup(dev))
+ 		priv->wer = 0;
+ 	serial_out(up, UART_OMAP_WER, priv->wer);
+-	pm_runtime_mark_last_busy(dev);
+-	pm_runtime_put_autosuspend(dev);
+-
++	err = pm_runtime_force_suspend(dev);
+ 	flush_work(&priv->qos_work);
+-	return 0;
++
++	return err;
+ }
+ 
+ static int omap8250_resume(struct device *dev)
+ {
+ 	struct omap8250_priv *priv = dev_get_drvdata(dev);
++	int err;
+ 
++	err = pm_runtime_force_resume(dev);
++	if (err)
++		return err;
+ 	serial8250_resume_port(priv->line);
++	/* Paired with pm_runtime_resume_and_get() in omap8250_suspend() */
++	pm_runtime_mark_last_busy(dev);
++	pm_runtime_put_autosuspend(dev);
++
+ 	return 0;
+ }
+ #else
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 7fd30fcc10c62..f38606b750967 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -2676,6 +2676,7 @@ OF_EARLYCON_DECLARE(lpuart, "fsl,vf610-lpuart", lpuart_early_console_setup);
+ OF_EARLYCON_DECLARE(lpuart32, "fsl,ls1021a-lpuart", lpuart32_early_console_setup);
+ OF_EARLYCON_DECLARE(lpuart32, "fsl,ls1028a-lpuart", ls1028a_early_console_setup);
+ OF_EARLYCON_DECLARE(lpuart32, "fsl,imx7ulp-lpuart", lpuart32_imx_early_console_setup);
++OF_EARLYCON_DECLARE(lpuart32, "fsl,imx8ulp-lpuart", lpuart32_imx_early_console_setup);
+ OF_EARLYCON_DECLARE(lpuart32, "fsl,imx8qxp-lpuart", lpuart32_imx_early_console_setup);
+ OF_EARLYCON_DECLARE(lpuart32, "fsl,imxrt1050-lpuart", lpuart32_imx_early_console_setup);
+ EARLYCON_DECLARE(lpuart, lpuart_early_console_setup);
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 54e82f476a2cc..ea4a70055ad9f 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -2333,8 +2333,11 @@ int uart_suspend_port(struct uart_driver *drv, struct uart_port *uport)
+ 	 * able to Re-start_rx later.
+ 	 */
+ 	if (!console_suspend_enabled && uart_console(uport)) {
+-		if (uport->ops->start_rx)
++		if (uport->ops->start_rx) {
++			spin_lock_irq(&uport->lock);
+ 			uport->ops->stop_rx(uport);
++			spin_unlock_irq(&uport->lock);
++		}
+ 		goto unlock;
+ 	}
+ 
+@@ -2427,8 +2430,11 @@ int uart_resume_port(struct uart_driver *drv, struct uart_port *uport)
+ 		if (console_suspend_enabled)
+ 			uart_change_pm(state, UART_PM_STATE_ON);
+ 		uport->ops->set_termios(uport, &termios, NULL);
+-		if (!console_suspend_enabled && uport->ops->start_rx)
++		if (!console_suspend_enabled && uport->ops->start_rx) {
++			spin_lock_irq(&uport->lock);
+ 			uport->ops->start_rx(uport);
++			spin_unlock_irq(&uport->lock);
++		}
+ 		if (console_suspend_enabled)
+ 			console_start(uport->cons);
+ 	}
+diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h
+index d53b93c21a0c6..8f58c21693985 100644
+--- a/drivers/ufs/core/ufshcd-priv.h
++++ b/drivers/ufs/core/ufshcd-priv.h
+@@ -84,9 +84,6 @@ unsigned long ufshcd_mcq_poll_cqe_lock(struct ufs_hba *hba,
+ int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index,
+ 			    u8 **buf, bool ascii);
+ 
+-int ufshcd_hold(struct ufs_hba *hba, bool async);
+-void ufshcd_release(struct ufs_hba *hba);
+-
+ int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd);
+ 
+ int ufshcd_exec_raw_upiu_cmd(struct ufs_hba *hba,
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index e7e79f515e141..6d8ef80d9cbc4 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -2945,7 +2945,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+ 		(hba->clk_gating.state != CLKS_ON));
+ 
+ 	lrbp = &hba->lrb[tag];
+-	WARN_ON(lrbp->cmd);
+ 	lrbp->cmd = cmd;
+ 	lrbp->task_tag = tag;
+ 	lrbp->lun = ufshcd_scsi_to_upiu_lun(cmd->device->lun);
+@@ -2961,7 +2960,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
+ 
+ 	err = ufshcd_map_sg(hba, lrbp);
+ 	if (err) {
+-		lrbp->cmd = NULL;
+ 		ufshcd_release(hba);
+ 		goto out;
+ 	}
+@@ -3099,7 +3097,7 @@ retry:
+ 		 * not trigger any race conditions.
+ 		 */
+ 		hba->dev_cmd.complete = NULL;
+-		err = ufshcd_get_tr_ocs(lrbp, hba->dev_cmd.cqe);
++		err = ufshcd_get_tr_ocs(lrbp, NULL);
+ 		if (!err)
+ 			err = ufshcd_dev_cmd_completion(hba, lrbp);
+ 	} else {
+@@ -3180,13 +3178,12 @@ static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
+ 	down_read(&hba->clk_scaling_lock);
+ 
+ 	lrbp = &hba->lrb[tag];
+-	WARN_ON(lrbp->cmd);
++	lrbp->cmd = NULL;
+ 	err = ufshcd_compose_dev_cmd(hba, lrbp, cmd_type, tag);
+ 	if (unlikely(err))
+ 		goto out;
+ 
+ 	hba->dev_cmd.complete = &wait;
+-	hba->dev_cmd.cqe = NULL;
+ 
+ 	ufshcd_add_query_upiu_trace(hba, UFS_QUERY_SEND, lrbp->ucd_req_ptr);
+ 
+@@ -5422,7 +5419,6 @@ static void ufshcd_release_scsi_cmd(struct ufs_hba *hba,
+ 	struct scsi_cmnd *cmd = lrbp->cmd;
+ 
+ 	scsi_dma_unmap(cmd);
+-	lrbp->cmd = NULL;	/* Mark the command as completed. */
+ 	ufshcd_release(hba);
+ 	ufshcd_clk_scaling_update_busy(hba);
+ }
+@@ -5438,6 +5434,7 @@ void ufshcd_compl_one_cqe(struct ufs_hba *hba, int task_tag,
+ {
+ 	struct ufshcd_lrb *lrbp;
+ 	struct scsi_cmnd *cmd;
++	enum utp_ocs ocs;
+ 
+ 	lrbp = &hba->lrb[task_tag];
+ 	lrbp->compl_time_stamp = ktime_get();
+@@ -5453,8 +5450,11 @@ void ufshcd_compl_one_cqe(struct ufs_hba *hba, int task_tag,
+ 	} else if (lrbp->command_type == UTP_CMD_TYPE_DEV_MANAGE ||
+ 		   lrbp->command_type == UTP_CMD_TYPE_UFS_STORAGE) {
+ 		if (hba->dev_cmd.complete) {
+-			hba->dev_cmd.cqe = cqe;
+-			ufshcd_add_command_trace(hba, task_tag, UFS_DEV_COMP);
++			if (cqe) {
++				ocs = le32_to_cpu(cqe->status) & MASK_OCS;
++				lrbp->utr_descriptor_ptr->header.dword_2 =
++					cpu_to_le32(ocs);
++			}
+ 			complete(hba->dev_cmd.complete);
+ 			ufshcd_clk_scaling_update_busy(hba);
+ 		}
+@@ -7037,7 +7037,6 @@ static int ufshcd_issue_devman_upiu_cmd(struct ufs_hba *hba,
+ 	down_read(&hba->clk_scaling_lock);
+ 
+ 	lrbp = &hba->lrb[tag];
+-	WARN_ON(lrbp->cmd);
+ 	lrbp->cmd = NULL;
+ 	lrbp->task_tag = tag;
+ 	lrbp->lun = 0;
+@@ -7209,7 +7208,6 @@ int ufshcd_advanced_rpmb_req_handler(struct ufs_hba *hba, struct utp_upiu_req *r
+ 	down_read(&hba->clk_scaling_lock);
+ 
+ 	lrbp = &hba->lrb[tag];
+-	WARN_ON(lrbp->cmd);
+ 	lrbp->cmd = NULL;
+ 	lrbp->task_tag = tag;
+ 	lrbp->lun = UFS_UPIU_RPMB_WLUN;
+@@ -9184,7 +9182,8 @@ static int ufshcd_execute_start_stop(struct scsi_device *sdev,
+ 	};
+ 
+ 	return scsi_execute_cmd(sdev, cdb, REQ_OP_DRV_IN, /*buffer=*/NULL,
+-			/*bufflen=*/0, /*timeout=*/HZ, /*retries=*/0, &args);
++			/*bufflen=*/0, /*timeout=*/10 * HZ, /*retries=*/0,
++			&args);
+ }
+ 
+ /**
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index fcf68818e9992..cbad2af5fd882 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -746,6 +746,7 @@ static int driver_resume(struct usb_interface *intf)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_PM
+ /* The following routines apply to the entire device, not interfaces */
+ void usbfs_notify_suspend(struct usb_device *udev)
+ {
+@@ -764,6 +765,7 @@ void usbfs_notify_resume(struct usb_device *udev)
+ 	}
+ 	mutex_unlock(&usbfs_mutex);
+ }
++#endif
+ 
+ struct usb_driver usbfs_driver = {
+ 	.name =		"usbfs",
+diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c
+index ab2f3737764e4..990280688b254 100644
+--- a/drivers/usb/core/hcd-pci.c
++++ b/drivers/usb/core/hcd-pci.c
+@@ -415,12 +415,15 @@ static int check_root_hub_suspended(struct device *dev)
+ 	return 0;
+ }
+ 
+-static int suspend_common(struct device *dev, bool do_wakeup)
++static int suspend_common(struct device *dev, pm_message_t msg)
+ {
+ 	struct pci_dev		*pci_dev = to_pci_dev(dev);
+ 	struct usb_hcd		*hcd = pci_get_drvdata(pci_dev);
++	bool			do_wakeup;
+ 	int			retval;
+ 
++	do_wakeup = PMSG_IS_AUTO(msg) ? true : device_may_wakeup(dev);
++
+ 	/* Root hub suspend should have stopped all downstream traffic,
+ 	 * and all bus master traffic.  And done so for both the interface
+ 	 * and the stub usb_device (which we check here).  But maybe it
+@@ -447,7 +450,7 @@ static int suspend_common(struct device *dev, bool do_wakeup)
+ 				(retval == 0 && do_wakeup && hcd->shared_hcd &&
+ 				 HCD_WAKEUP_PENDING(hcd->shared_hcd))) {
+ 			if (hcd->driver->pci_resume)
+-				hcd->driver->pci_resume(hcd, false);
++				hcd->driver->pci_resume(hcd, msg);
+ 			retval = -EBUSY;
+ 		}
+ 		if (retval)
+@@ -470,7 +473,7 @@ static int suspend_common(struct device *dev, bool do_wakeup)
+ 	return retval;
+ }
+ 
+-static int resume_common(struct device *dev, int event)
++static int resume_common(struct device *dev, pm_message_t msg)
+ {
+ 	struct pci_dev		*pci_dev = to_pci_dev(dev);
+ 	struct usb_hcd		*hcd = pci_get_drvdata(pci_dev);
+@@ -498,12 +501,11 @@ static int resume_common(struct device *dev, int event)
+ 		 * No locking is needed because PCI controller drivers do not
+ 		 * get unbound during system resume.
+ 		 */
+-		if (pci_dev->class == CL_EHCI && event != PM_EVENT_AUTO_RESUME)
++		if (pci_dev->class == CL_EHCI && msg.event != PM_EVENT_AUTO_RESUME)
+ 			for_each_companion(pci_dev, hcd,
+ 					ehci_wait_for_companions);
+ 
+-		retval = hcd->driver->pci_resume(hcd,
+-				event == PM_EVENT_RESTORE);
++		retval = hcd->driver->pci_resume(hcd, msg);
+ 		if (retval) {
+ 			dev_err(dev, "PCI post-resume error %d!\n", retval);
+ 			usb_hc_died(hcd);
+@@ -516,7 +518,7 @@ static int resume_common(struct device *dev, int event)
+ 
+ static int hcd_pci_suspend(struct device *dev)
+ {
+-	return suspend_common(dev, device_may_wakeup(dev));
++	return suspend_common(dev, PMSG_SUSPEND);
+ }
+ 
+ static int hcd_pci_suspend_noirq(struct device *dev)
+@@ -577,12 +579,12 @@ static int hcd_pci_resume_noirq(struct device *dev)
+ 
+ static int hcd_pci_resume(struct device *dev)
+ {
+-	return resume_common(dev, PM_EVENT_RESUME);
++	return resume_common(dev, PMSG_RESUME);
+ }
+ 
+ static int hcd_pci_restore(struct device *dev)
+ {
+-	return resume_common(dev, PM_EVENT_RESTORE);
++	return resume_common(dev, PMSG_RESTORE);
+ }
+ 
+ #else
+@@ -600,7 +602,7 @@ static int hcd_pci_runtime_suspend(struct device *dev)
+ {
+ 	int	retval;
+ 
+-	retval = suspend_common(dev, true);
++	retval = suspend_common(dev, PMSG_AUTO_SUSPEND);
+ 	if (retval == 0)
+ 		powermac_set_asic(to_pci_dev(dev), 0);
+ 	dev_dbg(dev, "hcd_pci_runtime_suspend: %d\n", retval);
+@@ -612,7 +614,7 @@ static int hcd_pci_runtime_resume(struct device *dev)
+ 	int	retval;
+ 
+ 	powermac_set_asic(to_pci_dev(dev), 1);
+-	retval = resume_common(dev, PM_EVENT_AUTO_RESUME);
++	retval = resume_common(dev, PMSG_AUTO_RESUME);
+ 	dev_dbg(dev, "hcd_pci_runtime_resume: %d\n", retval);
+ 	return retval;
+ }
+diff --git a/drivers/usb/dwc2/platform.c b/drivers/usb/dwc2/platform.c
+index 5aee284018c00..5cf025511cce6 100644
+--- a/drivers/usb/dwc2/platform.c
++++ b/drivers/usb/dwc2/platform.c
+@@ -203,6 +203,11 @@ int dwc2_lowlevel_hw_disable(struct dwc2_hsotg *hsotg)
+ 	return ret;
+ }
+ 
++static void dwc2_reset_control_assert(void *data)
++{
++	reset_control_assert(data);
++}
++
+ static int dwc2_lowlevel_hw_init(struct dwc2_hsotg *hsotg)
+ {
+ 	int i, ret;
+@@ -213,6 +218,10 @@ static int dwc2_lowlevel_hw_init(struct dwc2_hsotg *hsotg)
+ 				     "error getting reset control\n");
+ 
+ 	reset_control_deassert(hsotg->reset);
++	ret = devm_add_action_or_reset(hsotg->dev, dwc2_reset_control_assert,
++				       hsotg->reset);
++	if (ret)
++		return ret;
+ 
+ 	hsotg->reset_ecc = devm_reset_control_get_optional(hsotg->dev, "dwc2-ecc");
+ 	if (IS_ERR(hsotg->reset_ecc))
+@@ -220,6 +229,10 @@ static int dwc2_lowlevel_hw_init(struct dwc2_hsotg *hsotg)
+ 				     "error getting reset control for ecc\n");
+ 
+ 	reset_control_deassert(hsotg->reset_ecc);
++	ret = devm_add_action_or_reset(hsotg->dev, dwc2_reset_control_assert,
++				       hsotg->reset_ecc);
++	if (ret)
++		return ret;
+ 
+ 	/*
+ 	 * Attempt to find a generic PHY, then look for an old style
+@@ -339,9 +352,6 @@ static int dwc2_driver_remove(struct platform_device *dev)
+ 	if (hsotg->ll_hw_enabled)
+ 		dwc2_lowlevel_hw_disable(hsotg);
+ 
+-	reset_control_assert(hsotg->reset);
+-	reset_control_assert(hsotg->reset_ecc);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/dwc3/dwc3-meson-g12a.c b/drivers/usb/dwc3/dwc3-meson-g12a.c
+index b282ad0e69c6d..eaea944ebd2ce 100644
+--- a/drivers/usb/dwc3/dwc3-meson-g12a.c
++++ b/drivers/usb/dwc3/dwc3-meson-g12a.c
+@@ -805,7 +805,7 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+ 
+ 	ret = dwc3_meson_g12a_otg_init(pdev, priv);
+ 	if (ret)
+-		goto err_phys_power;
++		goto err_plat_depopulate;
+ 
+ 	pm_runtime_set_active(dev);
+ 	pm_runtime_enable(dev);
+@@ -813,6 +813,9 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
++err_plat_depopulate:
++	of_platform_depopulate(dev);
++
+ err_phys_power:
+ 	for (i = 0 ; i < PHY_COUNT ; ++i)
+ 		phy_power_off(priv->phys[i]);
+diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
+index 79b22abf97276..72c22851d7eef 100644
+--- a/drivers/usb/dwc3/dwc3-qcom.c
++++ b/drivers/usb/dwc3/dwc3-qcom.c
+@@ -800,6 +800,7 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ 	struct device		*dev = &pdev->dev;
+ 	struct dwc3_qcom	*qcom;
+ 	struct resource		*res, *parent_res = NULL;
++	struct resource		local_res;
+ 	int			ret, i;
+ 	bool			ignore_pipe_clk;
+ 	bool			wakeup_source;
+@@ -851,9 +852,8 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ 	if (np) {
+ 		parent_res = res;
+ 	} else {
+-		parent_res = kmemdup(res, sizeof(struct resource), GFP_KERNEL);
+-		if (!parent_res)
+-			return -ENOMEM;
++		memcpy(&local_res, res, sizeof(struct resource));
++		parent_res = &local_res;
+ 
+ 		parent_res->start = res->start +
+ 			qcom->acpi_pdata->qscratch_base_offset;
+@@ -865,9 +865,10 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
+ 			if (IS_ERR_OR_NULL(qcom->urs_usb)) {
+ 				dev_err(dev, "failed to create URS USB platdev\n");
+ 				if (!qcom->urs_usb)
+-					return -ENODEV;
++					ret = -ENODEV;
+ 				else
+-					return PTR_ERR(qcom->urs_usb);
++					ret = PTR_ERR(qcom->urs_usb);
++				goto clk_disable;
+ 			}
+ 		}
+ 	}
+@@ -950,11 +951,15 @@ reset_assert:
+ static int dwc3_qcom_remove(struct platform_device *pdev)
+ {
+ 	struct dwc3_qcom *qcom = platform_get_drvdata(pdev);
++	struct device_node *np = pdev->dev.of_node;
+ 	struct device *dev = &pdev->dev;
+ 	int i;
+ 
+ 	device_remove_software_node(&qcom->dwc3->dev);
+-	of_platform_depopulate(dev);
++	if (np)
++		of_platform_depopulate(&pdev->dev);
++	else
++		platform_device_put(pdev);
+ 
+ 	for (i = qcom->num_clocks - 1; i >= 0; i--) {
+ 		clk_disable_unprepare(qcom->clks[i]);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index b78599dd705c2..550dc8f4d16ad 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2744,7 +2744,9 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
+ 	ret = pm_runtime_get_sync(dwc->dev);
+ 	if (!ret || ret < 0) {
+ 		pm_runtime_put(dwc->dev);
+-		return 0;
++		if (ret < 0)
++			pm_runtime_set_suspended(dwc->dev);
++		return ret;
+ 	}
+ 
+ 	if (dwc->pullups_connected == is_on) {
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index a0ca47fbff0fc..e5d522d54f6a3 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -1420,10 +1420,19 @@ EXPORT_SYMBOL_GPL(gserial_disconnect);
+ 
+ void gserial_suspend(struct gserial *gser)
+ {
+-	struct gs_port	*port = gser->ioport;
++	struct gs_port	*port;
+ 	unsigned long	flags;
+ 
+-	spin_lock_irqsave(&port->port_lock, flags);
++	spin_lock_irqsave(&serial_port_lock, flags);
++	port = gser->ioport;
++
++	if (!port) {
++		spin_unlock_irqrestore(&serial_port_lock, flags);
++		return;
++	}
++
++	spin_lock(&port->port_lock);
++	spin_unlock(&serial_port_lock);
+ 	port->suspended = true;
+ 	spin_unlock_irqrestore(&port->port_lock, flags);
+ }
+diff --git a/drivers/usb/host/ehci-pci.c b/drivers/usb/host/ehci-pci.c
+index 4b148fe5e43b2..889dc44262711 100644
+--- a/drivers/usb/host/ehci-pci.c
++++ b/drivers/usb/host/ehci-pci.c
+@@ -354,10 +354,11 @@ done:
+  * Also they depend on separate root hub suspend/resume.
+  */
+ 
+-static int ehci_pci_resume(struct usb_hcd *hcd, bool hibernated)
++static int ehci_pci_resume(struct usb_hcd *hcd, pm_message_t msg)
+ {
+ 	struct ehci_hcd		*ehci = hcd_to_ehci(hcd);
+ 	struct pci_dev		*pdev = to_pci_dev(hcd->self.controller);
++	bool			hibernated = (msg.event == PM_EVENT_RESTORE);
+ 
+ 	if (ehci_resume(hcd, hibernated) != 0)
+ 		(void) ehci_pci_reinit(ehci, pdev);
+diff --git a/drivers/usb/host/ohci-pci.c b/drivers/usb/host/ohci-pci.c
+index d7b4f40f9ff4e..900ea0d368e03 100644
+--- a/drivers/usb/host/ohci-pci.c
++++ b/drivers/usb/host/ohci-pci.c
+@@ -301,6 +301,12 @@ static struct pci_driver ohci_pci_driver = {
+ #endif
+ };
+ 
++#ifdef CONFIG_PM
++static int ohci_pci_resume(struct usb_hcd *hcd, pm_message_t msg)
++{
++	return ohci_resume(hcd, msg.event == PM_EVENT_RESTORE);
++}
++#endif
+ static int __init ohci_pci_init(void)
+ {
+ 	if (usb_disabled())
+@@ -311,7 +317,7 @@ static int __init ohci_pci_init(void)
+ #ifdef	CONFIG_PM
+ 	/* Entries for the PCI suspend/resume callbacks are special */
+ 	ohci_pci_hc_driver.pci_suspend = ohci_suspend;
+-	ohci_pci_hc_driver.pci_resume = ohci_resume;
++	ohci_pci_hc_driver.pci_resume = ohci_pci_resume;
+ #endif
+ 
+ 	return pci_register_driver(&ohci_pci_driver);
+diff --git a/drivers/usb/host/uhci-pci.c b/drivers/usb/host/uhci-pci.c
+index 7bd2fddde770a..5edf6a08cf82c 100644
+--- a/drivers/usb/host/uhci-pci.c
++++ b/drivers/usb/host/uhci-pci.c
+@@ -169,7 +169,7 @@ static void uhci_shutdown(struct pci_dev *pdev)
+ 
+ #ifdef CONFIG_PM
+ 
+-static int uhci_pci_resume(struct usb_hcd *hcd, bool hibernated);
++static int uhci_pci_resume(struct usb_hcd *hcd, pm_message_t state);
+ 
+ static int uhci_pci_suspend(struct usb_hcd *hcd, bool do_wakeup)
+ {
+@@ -204,14 +204,15 @@ done_okay:
+ 
+ 	/* Check for race with a wakeup request */
+ 	if (do_wakeup && HCD_WAKEUP_PENDING(hcd)) {
+-		uhci_pci_resume(hcd, false);
++		uhci_pci_resume(hcd, PMSG_SUSPEND);
+ 		rc = -EBUSY;
+ 	}
+ 	return rc;
+ }
+ 
+-static int uhci_pci_resume(struct usb_hcd *hcd, bool hibernated)
++static int uhci_pci_resume(struct usb_hcd *hcd, pm_message_t msg)
+ {
++	bool hibernated = (msg.event == PM_EVENT_RESTORE);
+ 	struct uhci_hcd *uhci = hcd_to_uhci(hcd);
+ 
+ 	dev_dbg(uhci_dev(uhci), "%s\n", __func__);
+diff --git a/drivers/usb/host/xhci-histb.c b/drivers/usb/host/xhci-histb.c
+index 08369857686e7..91ce97821de51 100644
+--- a/drivers/usb/host/xhci-histb.c
++++ b/drivers/usb/host/xhci-histb.c
+@@ -367,7 +367,7 @@ static int __maybe_unused xhci_histb_resume(struct device *dev)
+ 	if (!device_may_wakeup(dev))
+ 		xhci_histb_host_enable(histb);
+ 
+-	return xhci_resume(xhci, 0);
++	return xhci_resume(xhci, PMSG_RESUME);
+ }
+ 
+ static const struct dev_pm_ops xhci_histb_pm_ops = {
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 79b3691f373f3..69a5cb7eba381 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -832,7 +832,7 @@ static int xhci_pci_suspend(struct usb_hcd *hcd, bool do_wakeup)
+ 	return ret;
+ }
+ 
+-static int xhci_pci_resume(struct usb_hcd *hcd, bool hibernated)
++static int xhci_pci_resume(struct usb_hcd *hcd, pm_message_t msg)
+ {
+ 	struct xhci_hcd		*xhci = hcd_to_xhci(hcd);
+ 	struct pci_dev		*pdev = to_pci_dev(hcd->self.controller);
+@@ -867,7 +867,7 @@ static int xhci_pci_resume(struct usb_hcd *hcd, bool hibernated)
+ 	if (xhci->quirks & XHCI_PME_STUCK_QUIRK)
+ 		xhci_pme_quirk(hcd);
+ 
+-	retval = xhci_resume(xhci, hibernated);
++	retval = xhci_resume(xhci, msg);
+ 	return retval;
+ }
+ 
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index b0c8e8efc43b6..f36633fa83624 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -478,7 +478,7 @@ static int __maybe_unused xhci_plat_resume(struct device *dev)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = xhci_resume(xhci, 0);
++	ret = xhci_resume(xhci, PMSG_RESUME);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -507,7 +507,7 @@ static int __maybe_unused xhci_plat_runtime_resume(struct device *dev)
+ 	struct usb_hcd  *hcd = dev_get_drvdata(dev);
+ 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ 
+-	return xhci_resume(xhci, 0);
++	return xhci_resume(xhci, PMSG_AUTO_RESUME);
+ }
+ 
+ const struct dev_pm_ops xhci_plat_pm_ops = {
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index c75d932441436..8a9c7deb7686e 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -2272,7 +2272,7 @@ static int tegra_xusb_exit_elpg(struct tegra_xusb *tegra, bool runtime)
+ 	if (wakeup)
+ 		tegra_xhci_disable_phy_sleepwalk(tegra);
+ 
+-	err = xhci_resume(xhci, 0);
++	err = xhci_resume(xhci, runtime ? PMSG_AUTO_RESUME : PMSG_RESUME);
+ 	if (err < 0) {
+ 		dev_err(tegra->dev, "failed to resume XHCI: %d\n", err);
+ 		goto disable_phy;
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 78790dc13c5f1..b81313ffeb768 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -960,8 +960,9 @@ EXPORT_SYMBOL_GPL(xhci_suspend);
+  * This is called when the machine transitions from S3/S4 mode.
+  *
+  */
+-int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
++int xhci_resume(struct xhci_hcd *xhci, pm_message_t msg)
+ {
++	bool			hibernated = (msg.event == PM_EVENT_RESTORE);
+ 	u32			command, temp = 0;
+ 	struct usb_hcd		*hcd = xhci_to_hcd(xhci);
+ 	int			retval = 0;
+@@ -1116,7 +1117,7 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 		 * the first wake signalling failed, give it that chance.
+ 		 */
+ 		pending_portevent = xhci_pending_portevent(xhci);
+-		if (!pending_portevent) {
++		if (!pending_portevent && msg.event == PM_EVENT_AUTO_RESUME) {
+ 			msleep(120);
+ 			pending_portevent = xhci_pending_portevent(xhci);
+ 		}
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 6b690ec91ff3a..f845c15073ba4 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -2140,7 +2140,7 @@ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id);
+ int xhci_ext_cap_init(struct xhci_hcd *xhci);
+ 
+ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup);
+-int xhci_resume(struct xhci_hcd *xhci, bool hibernated);
++int xhci_resume(struct xhci_hcd *xhci, pm_message_t msg);
+ 
+ irqreturn_t xhci_irq(struct usb_hcd *hcd);
+ irqreturn_t xhci_msi_irq(int irq, void *hcd);
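With the message plumbed all the way down, xhci_resume() can make the extra 120 ms wait for late port events conditional on runtime resume, where the wake signal itself may still be propagating; system resume paths no longer pay that penalty unconditionally. A compact restatement of the new policy (illustrative name):

#include <linux/pm.h>

/* Retry reading port events only on runtime (auto) resume. */
static bool example_retry_port_events(pm_message_t msg, bool pending)
{
	return !pending && msg.event == PM_EVENT_AUTO_RESUME;
}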
+diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
+index d162afbbe19f7..ecbd3784bec36 100644
+--- a/drivers/usb/musb/musb_core.c
++++ b/drivers/usb/musb/musb_core.c
+@@ -2330,7 +2330,6 @@ musb_init_controller(struct device *dev, int nIrq, void __iomem *ctrl)
+ 
+ 	spin_lock_init(&musb->lock);
+ 	spin_lock_init(&musb->list_lock);
+-	musb->board_set_power = plat->set_power;
+ 	musb->min_power = plat->min_power;
+ 	musb->ops = plat->platform_ops;
+ 	musb->port_mode = plat->mode;
+diff --git a/drivers/usb/musb/musb_core.h b/drivers/usb/musb/musb_core.h
+index b7588d11cfc59..91b5b6b66f963 100644
+--- a/drivers/usb/musb/musb_core.h
++++ b/drivers/usb/musb/musb_core.h
+@@ -352,8 +352,6 @@ struct musb {
+ 	u16 epmask;
+ 	u8 nr_endpoints;
+ 
+-	int			(*board_set_power)(int state);
+-
+ 	u8			min_power;	/* vbus for periph, in mA/2 */
+ 
+ 	enum musb_mode		port_mode;
+diff --git a/drivers/usb/musb/tusb6010.c b/drivers/usb/musb/tusb6010.c
+index a1f29dbc62e6e..cbc707fe570fa 100644
+--- a/drivers/usb/musb/tusb6010.c
++++ b/drivers/usb/musb/tusb6010.c
+@@ -11,6 +11,8 @@
+  *   interface.
+  */
+ 
++#include <linux/gpio/consumer.h>
++#include <linux/delay.h>
+ #include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/errno.h>
+@@ -30,6 +32,8 @@ struct tusb6010_glue {
+ 	struct device		*dev;
+ 	struct platform_device	*musb;
+ 	struct platform_device	*phy;
++	struct gpio_desc	*enable;
++	struct gpio_desc	*intpin;
+ };
+ 
+ static void tusb_musb_set_vbus(struct musb *musb, int is_on);
+@@ -1021,16 +1025,29 @@ static void tusb_setup_cpu_interface(struct musb *musb)
+ 
+ static int tusb_musb_start(struct musb *musb)
+ {
++	struct tusb6010_glue *glue = dev_get_drvdata(musb->controller->parent);
+ 	void __iomem	*tbase = musb->ctrl_base;
+-	int		ret = 0;
+ 	unsigned long	flags;
+ 	u32		reg;
++	int		i;
+ 
+-	if (musb->board_set_power)
+-		ret = musb->board_set_power(1);
+-	if (ret != 0) {
+-		printk(KERN_ERR "tusb: Cannot enable TUSB6010\n");
+-		return ret;
++	/*
++	 * Enable or disable power to TUSB6010. When enabling, turn on the 3.3 V
++	 * and 1.5 V voltage regulators of the PM companion chip. The companion
++	 * chip will then provide the PGOOD signal to TUSB6010, which releases
++	 * it from reset.
++	 */
++	gpiod_set_value(glue->enable, 1);
++	msleep(1);
++
++	/* Wait up to 100 ms for TUSB6010 to pull the INT pin down */
++	i = 100;
++	while (i && gpiod_get_value(glue->intpin)) {
++		msleep(1);
++		i--;
++	}
++	if (!i) {
++		pr_err("tusb: Powerup response failed\n");
++		return -ENODEV;
+ 	}
+ 
+ 	spin_lock_irqsave(&musb->lock, flags);
+@@ -1083,8 +1100,8 @@ static int tusb_musb_start(struct musb *musb)
+ err:
+ 	spin_unlock_irqrestore(&musb->lock, flags);
+ 
+-	if (musb->board_set_power)
+-		musb->board_set_power(0);
++	gpiod_set_value(glue->enable, 0);
++	msleep(10);
+ 
+ 	return -ENODEV;
+ }
+@@ -1158,11 +1175,13 @@ done:
+ 
+ static int tusb_musb_exit(struct musb *musb)
+ {
++	struct tusb6010_glue *glue = dev_get_drvdata(musb->controller->parent);
++
+ 	del_timer_sync(&musb->dev_timer);
+ 	the_musb = NULL;
+ 
+-	if (musb->board_set_power)
+-		musb->board_set_power(0);
++	gpiod_set_value(glue->enable, 0);
++	msleep(10);
+ 
+ 	iounmap(musb->sync_va);
+ 
+@@ -1218,6 +1237,15 @@ static int tusb_probe(struct platform_device *pdev)
+ 
+ 	glue->dev			= &pdev->dev;
+ 
++	glue->enable = devm_gpiod_get(glue->dev, "enable", GPIOD_OUT_LOW);
++	if (IS_ERR(glue->enable))
++		return dev_err_probe(glue->dev, PTR_ERR(glue->enable),
++				     "could not obtain power on/off GPIO\n");
++	glue->intpin = devm_gpiod_get(glue->dev, "int", GPIOD_IN);
++	if (IS_ERR(glue->intpin))
++		return dev_err_probe(glue->dev, PTR_ERR(glue->intpin),
++				     "could not obtain INT GPIO\n");
++
+ 	pdata->platform_ops		= &tusb_ops;
+ 
+ 	usb_phy_generic_register();
+@@ -1236,10 +1264,7 @@ static int tusb_probe(struct platform_device *pdev)
+ 	musb_resources[1].end = pdev->resource[1].end;
+ 	musb_resources[1].flags = pdev->resource[1].flags;
+ 
+-	musb_resources[2].name = pdev->resource[2].name;
+-	musb_resources[2].start = pdev->resource[2].start;
+-	musb_resources[2].end = pdev->resource[2].end;
+-	musb_resources[2].flags = pdev->resource[2].flags;
++	musb_resources[2] = DEFINE_RES_IRQ_NAMED(gpiod_to_irq(glue->intpin), "mc");
+ 
+ 	pinfo = tusb_dev_info;
+ 	pinfo.parent = &pdev->dev;
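The open-coded INT poll above can also be expressed with the generic read_poll_timeout() helper; a sketch under the same timings (1 ms steps, 100 ms budget), with illustrative names:

#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/iopoll.h>

/* Power the chip up via the enable GPIO, then wait for TUSB6010 to pull
 * the INT line low as its powerup acknowledgement. */
static int example_tusb_power_on(struct gpio_desc *enable,
				 struct gpio_desc *intpin)
{
	int val;

	gpiod_set_value(enable, 1);
	msleep(1);

	/* Poll every 1000 us, give up after 100000 us (-ETIMEDOUT). */
	return read_poll_timeout(gpiod_get_value, val, !val, 1000, 100000,
				 false, intpin);
}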
+diff --git a/drivers/usb/phy/phy-tahvo.c b/drivers/usb/phy/phy-tahvo.c
+index 47562d49dfc1b..5cac31c6029b3 100644
+--- a/drivers/usb/phy/phy-tahvo.c
++++ b/drivers/usb/phy/phy-tahvo.c
+@@ -391,7 +391,7 @@ static int tahvo_usb_probe(struct platform_device *pdev)
+ 
+ 	tu->irq = ret = platform_get_irq(pdev, 0);
+ 	if (ret < 0)
+-		return ret;
++		goto err_remove_phy;
+ 	ret = request_threaded_irq(tu->irq, NULL, tahvo_usb_vbus_interrupt,
+ 				   IRQF_ONESHOT,
+ 				   "tahvo-vbus", tu);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index fd42e3a0bd187..288a96a742661 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1151,6 +1151,10 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x90fa),
+ 	  .driver_info = RSVD(3) },
+ 	/* u-blox products */
++	{ USB_DEVICE(UBLOX_VENDOR_ID, 0x1311) },	/* u-blox LARA-R6 01B */
++	{ USB_DEVICE(UBLOX_VENDOR_ID, 0x1312),		/* u-blox LARA-R6 01B (RMNET) */
++	  .driver_info = RSVD(4) },
++	{ USB_DEVICE_INTERFACE_CLASS(UBLOX_VENDOR_ID, 0x1313, 0xff) },	/* u-blox LARA-R6 01B (ECM) */
+ 	{ USB_DEVICE(UBLOX_VENDOR_ID, 0x1341) },	/* u-blox LARA-L6 */
+ 	{ USB_DEVICE(UBLOX_VENDOR_ID, 0x1342),		/* u-blox LARA-L6 (RMNET) */
+ 	  .driver_info = RSVD(4) },
+diff --git a/drivers/usb/typec/ucsi/psy.c b/drivers/usb/typec/ucsi/psy.c
+index 56bf56517f75a..384b42267f1fc 100644
+--- a/drivers/usb/typec/ucsi/psy.c
++++ b/drivers/usb/typec/ucsi/psy.c
+@@ -27,8 +27,20 @@ static enum power_supply_property ucsi_psy_props[] = {
+ 	POWER_SUPPLY_PROP_VOLTAGE_NOW,
+ 	POWER_SUPPLY_PROP_CURRENT_MAX,
+ 	POWER_SUPPLY_PROP_CURRENT_NOW,
++	POWER_SUPPLY_PROP_SCOPE,
+ };
+ 
++static int ucsi_psy_get_scope(struct ucsi_connector *con,
++			      union power_supply_propval *val)
++{
++	u8 scope = POWER_SUPPLY_SCOPE_UNKNOWN;
++	struct device *dev = con->ucsi->dev;
++
++	device_property_read_u8(dev, "scope", &scope);
++	val->intval = scope;
++	return 0;
++}
++
+ static int ucsi_psy_get_online(struct ucsi_connector *con,
+ 			       union power_supply_propval *val)
+ {
+@@ -194,6 +206,8 @@ static int ucsi_psy_get_prop(struct power_supply *psy,
+ 		return ucsi_psy_get_current_max(con, val);
+ 	case POWER_SUPPLY_PROP_CURRENT_NOW:
+ 		return ucsi_psy_get_current_now(con, val);
++	case POWER_SUPPLY_PROP_SCOPE:
++		return ucsi_psy_get_scope(con, val);
+ 	default:
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
+index 5f5c21674fdce..0d84e6a9c3cca 100644
+--- a/drivers/vdpa/vdpa_user/vduse_dev.c
++++ b/drivers/vdpa/vdpa_user/vduse_dev.c
+@@ -726,7 +726,11 @@ static int vduse_vdpa_set_vq_affinity(struct vdpa_device *vdpa, u16 idx,
+ {
+ 	struct vduse_dev *dev = vdpa_to_vduse(vdpa);
+ 
+-	cpumask_copy(&dev->vqs[idx]->irq_affinity, cpu_mask);
++	if (cpu_mask)
++		cpumask_copy(&dev->vqs[idx]->irq_affinity, cpu_mask);
++	else
++		cpumask_setall(&dev->vqs[idx]->irq_affinity);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/vfio/mdev/mdev_core.c b/drivers/vfio/mdev/mdev_core.c
+index 58f91b3bd670c..ed4737de45289 100644
+--- a/drivers/vfio/mdev/mdev_core.c
++++ b/drivers/vfio/mdev/mdev_core.c
+@@ -72,12 +72,6 @@ int mdev_register_parent(struct mdev_parent *parent, struct device *dev,
+ 	parent->nr_types = nr_types;
+ 	atomic_set(&parent->available_instances, mdev_driver->max_instances);
+ 
+-	if (!mdev_bus_compat_class) {
+-		mdev_bus_compat_class = class_compat_register("mdev_bus");
+-		if (!mdev_bus_compat_class)
+-			return -ENOMEM;
+-	}
+-
+ 	ret = parent_create_sysfs_files(parent);
+ 	if (ret)
+ 		return ret;
+@@ -251,13 +245,24 @@ int mdev_device_remove(struct mdev_device *mdev)
+ 
+ static int __init mdev_init(void)
+ {
+-	return bus_register(&mdev_bus_type);
++	int ret;
++
++	ret = bus_register(&mdev_bus_type);
++	if (ret)
++		return ret;
++
++	mdev_bus_compat_class = class_compat_register("mdev_bus");
++	if (!mdev_bus_compat_class) {
++		bus_unregister(&mdev_bus_type);
++		return -ENOMEM;
++	}
++
++	return 0;
+ }
+ 
+ static void __exit mdev_exit(void)
+ {
+-	if (mdev_bus_compat_class)
+-		class_compat_unregister(mdev_bus_compat_class);
++	class_compat_unregister(mdev_bus_compat_class);
+ 	bus_unregister(&mdev_bus_type);
+ }
+ 
+diff --git a/drivers/video/fbdev/omap/lcd_mipid.c b/drivers/video/fbdev/omap/lcd_mipid.c
+index 03cff39d392db..a0fc4570403b8 100644
+--- a/drivers/video/fbdev/omap/lcd_mipid.c
++++ b/drivers/video/fbdev/omap/lcd_mipid.c
+@@ -7,6 +7,7 @@
+  */
+ #include <linux/device.h>
+ #include <linux/delay.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/slab.h>
+ #include <linux/workqueue.h>
+ #include <linux/spi/spi.h>
+@@ -41,6 +42,7 @@ struct mipid_device {
+ 						   when we can issue the
+ 						   next sleep in/out command */
+ 	unsigned long	hw_guard_wait;		/* max guard time in jiffies */
++	struct gpio_desc	*reset;
+ 
+ 	struct omapfb_device	*fbdev;
+ 	struct spi_device	*spi;
+@@ -556,6 +558,12 @@ static int mipid_spi_probe(struct spi_device *spi)
+ 		return -ENOMEM;
+ 	}
+ 
++	/* This will de-assert RESET if active */
++	md->reset = gpiod_get(&spi->dev, "reset", GPIOD_OUT_LOW);
++	if (IS_ERR(md->reset))
++		return dev_err_probe(&spi->dev, PTR_ERR(md->reset),
++				     "no reset GPIO line\n");
++
+ 	spi->mode = SPI_MODE_0;
+ 	md->spi = spi;
+ 	dev_set_drvdata(&spi->dev, md);
+@@ -563,17 +571,23 @@ static int mipid_spi_probe(struct spi_device *spi)
+ 
+ 	r = mipid_detect(md);
+ 	if (r < 0)
+-		return r;
++		goto free_md;
+ 
+ 	omapfb_register_panel(&md->panel);
+ 
+ 	return 0;
++
++free_md:
++	kfree(md);
++	return r;
+ }
+ 
+ static void mipid_spi_remove(struct spi_device *spi)
+ {
+ 	struct mipid_device *md = dev_get_drvdata(&spi->dev);
+ 
++	/* Asserts RESET */
++	gpiod_set_value(md->reset, 1);
+ 	mipid_disable(&md->panel);
+ 	kfree(md);
+ }
+diff --git a/drivers/virt/coco/sev-guest/Kconfig b/drivers/virt/coco/sev-guest/Kconfig
+index f9db0799ae67c..da2d7ca531f0f 100644
+--- a/drivers/virt/coco/sev-guest/Kconfig
++++ b/drivers/virt/coco/sev-guest/Kconfig
+@@ -2,6 +2,7 @@ config SEV_GUEST
+ 	tristate "AMD SEV Guest driver"
+ 	default m
+ 	depends on AMD_MEM_ENCRYPT
++	select CRYPTO
+ 	select CRYPTO_AEAD2
+ 	select CRYPTO_GCM
+ 	help
+diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
+index eb6aee8c06b2c..989e2d7184ce4 100644
+--- a/drivers/virtio/virtio_vdpa.c
++++ b/drivers/virtio/virtio_vdpa.c
+@@ -385,7 +385,9 @@ static int virtio_vdpa_find_vqs(struct virtio_device *vdev, unsigned int nvqs,
+ 			err = PTR_ERR(vqs[i]);
+ 			goto err_setup_vq;
+ 		}
+-		ops->set_vq_affinity(vdpa, i, &masks[i]);
++
++		if (ops->set_vq_affinity)
++			ops->set_vq_affinity(vdpa, i, &masks[i]);
+ 	}
+ 
+ 	cb.callback = virtio_vdpa_config_cb;
+diff --git a/drivers/w1/slaves/w1_therm.c b/drivers/w1/slaves/w1_therm.c
+index 067692626cf07..99c58bd9d2df0 100644
+--- a/drivers/w1/slaves/w1_therm.c
++++ b/drivers/w1/slaves/w1_therm.c
+@@ -1159,29 +1159,26 @@ static int convert_t(struct w1_slave *sl, struct therm_info *info)
+ 
+ 			w1_write_8(dev_master, W1_CONVERT_TEMP);
+ 
+-			if (strong_pullup) { /*some device need pullup */
++			if (SLAVE_FEATURES(sl) & W1_THERM_POLL_COMPLETION) {
++				ret = w1_poll_completion(dev_master, W1_POLL_CONVERT_TEMP);
++				if (ret) {
++					dev_dbg(&sl->dev, "%s: Timeout\n", __func__);
++					goto mt_unlock;
++				}
++				mutex_unlock(&dev_master->bus_mutex);
++			} else if (!strong_pullup) { /* no device needs a pullup */
+ 				sleep_rem = msleep_interruptible(t_conv);
+ 				if (sleep_rem != 0) {
+ 					ret = -EINTR;
+ 					goto mt_unlock;
+ 				}
+ 				mutex_unlock(&dev_master->bus_mutex);
+-			} else { /*no device need pullup */
+-				if (SLAVE_FEATURES(sl) & W1_THERM_POLL_COMPLETION) {
+-					ret = w1_poll_completion(dev_master, W1_POLL_CONVERT_TEMP);
+-					if (ret) {
+-						dev_dbg(&sl->dev, "%s: Timeout\n", __func__);
+-						goto mt_unlock;
+-					}
+-					mutex_unlock(&dev_master->bus_mutex);
+-				} else {
+-					/* Fixed delay */
+-					mutex_unlock(&dev_master->bus_mutex);
+-					sleep_rem = msleep_interruptible(t_conv);
+-					if (sleep_rem != 0) {
+-						ret = -EINTR;
+-						goto dec_refcnt;
+-					}
++			} else { /* some devices need a pullup */
++				mutex_unlock(&dev_master->bus_mutex);
++				sleep_rem = msleep_interruptible(t_conv);
++				if (sleep_rem != 0) {
++					ret = -EINTR;
++					goto dec_refcnt;
+ 				}
+ 			}
+ 			ret = read_scratchpad(sl, info);
+diff --git a/drivers/w1/w1.c b/drivers/w1/w1.c
+index 9d199fed96287..2c766bdc68cc5 100644
+--- a/drivers/w1/w1.c
++++ b/drivers/w1/w1.c
+@@ -1263,10 +1263,10 @@ err_out_exit_init:
+ 
+ static void __exit w1_fini(void)
+ {
+-	struct w1_master *dev;
++	struct w1_master *dev, *n;
+ 
+ 	/* Set netlink removal messages and some cleanup */
+-	list_for_each_entry(dev, &w1_masters, w1_master_entry)
++	list_for_each_entry_safe(dev, n, &w1_masters, w1_master_entry)
+ 		__w1_remove_master_device(dev);
+ 
+ 	w1_fini_netlink();
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index 8750b99c3f566..c1f4391ccd7c6 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -413,17 +413,19 @@ static int afs_store_data(struct afs_vnode *vnode, struct iov_iter *iter, loff_t
+ 	afs_op_set_vnode(op, 0, vnode);
+ 	op->file[0].dv_delta = 1;
+ 	op->file[0].modification = true;
+-	op->store.write_iter = iter;
+ 	op->store.pos = pos;
+ 	op->store.size = size;
+-	op->store.i_size = max(pos + size, vnode->netfs.remote_i_size);
+ 	op->store.laundering = laundering;
+-	op->mtime = vnode->netfs.inode.i_mtime;
+ 	op->flags |= AFS_OPERATION_UNINTR;
+ 	op->ops = &afs_store_data_operation;
+ 
+ try_next_key:
+ 	afs_begin_vnode_operation(op);
++
++	op->store.write_iter = iter;
++	op->store.i_size = max(pos + size, vnode->netfs.remote_i_size);
++	op->mtime = vnode->netfs.inode.i_mtime;
++
+ 	afs_wait_for_operation(op);
+ 
+ 	switch (op->error) {
+diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c
+index b3ad0f51e6162..b86faf8126e77 100644
+--- a/fs/btrfs/bio.c
++++ b/fs/btrfs/bio.c
+@@ -95,8 +95,7 @@ static struct btrfs_bio *btrfs_split_bio(struct btrfs_fs_info *fs_info,
+ 	btrfs_bio_init(bbio, fs_info, NULL, orig_bbio);
+ 	bbio->inode = orig_bbio->inode;
+ 	bbio->file_offset = orig_bbio->file_offset;
+-	if (!(orig_bbio->bio.bi_opf & REQ_BTRFS_ONE_ORDERED))
+-		orig_bbio->file_offset += map_length;
++	orig_bbio->file_offset += map_length;
+ 
+ 	atomic_inc(&orig_bbio->pending_ios);
+ 	return bbio;
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index e97af2e510c37..e2b3448476490 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -95,14 +95,21 @@ static u64 btrfs_reduce_alloc_profile(struct btrfs_fs_info *fs_info, u64 flags)
+ 	}
+ 	allowed &= flags;
+ 
+-	if (allowed & BTRFS_BLOCK_GROUP_RAID6)
++	/* Select the highest-redundancy RAID level. */
++	if (allowed & BTRFS_BLOCK_GROUP_RAID1C4)
++		allowed = BTRFS_BLOCK_GROUP_RAID1C4;
++	else if (allowed & BTRFS_BLOCK_GROUP_RAID6)
+ 		allowed = BTRFS_BLOCK_GROUP_RAID6;
++	else if (allowed & BTRFS_BLOCK_GROUP_RAID1C3)
++		allowed = BTRFS_BLOCK_GROUP_RAID1C3;
+ 	else if (allowed & BTRFS_BLOCK_GROUP_RAID5)
+ 		allowed = BTRFS_BLOCK_GROUP_RAID5;
+ 	else if (allowed & BTRFS_BLOCK_GROUP_RAID10)
+ 		allowed = BTRFS_BLOCK_GROUP_RAID10;
+ 	else if (allowed & BTRFS_BLOCK_GROUP_RAID1)
+ 		allowed = BTRFS_BLOCK_GROUP_RAID1;
++	else if (allowed & BTRFS_BLOCK_GROUP_DUP)
++		allowed = BTRFS_BLOCK_GROUP_DUP;
+ 	else if (allowed & BTRFS_BLOCK_GROUP_RAID0)
+ 		allowed = BTRFS_BLOCK_GROUP_RAID0;
+ 
+@@ -1791,8 +1798,15 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ 		}
+ 		spin_unlock(&bg->lock);
+ 
+-		/* Get out fast, in case we're unmounting the filesystem */
+-		if (btrfs_fs_closing(fs_info)) {
++		/*
++		 * Get out fast, in case we're read-only or unmounting the
++		 * filesystem. It is OK to drop block groups from the list even
++		 * for the read-only case. As we did sb_start_write(),
++		 * "mount -o remount,ro" won't happen and a read-only filesystem
++		 * means it was forced read-only due to a fatal error. So, it
++		 * never gets back to read-write to let us reclaim again.
++		 */
++		if (btrfs_need_cleaner_sleep(fs_info)) {
+ 			up_write(&space_info->groups_sem);
+ 			goto next;
+ 		}
+@@ -1823,11 +1837,27 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
+ 		}
+ 
+ next:
++		if (ret)
++			btrfs_mark_bg_to_reclaim(bg);
+ 		btrfs_put_block_group(bg);
++
++		mutex_unlock(&fs_info->reclaim_bgs_lock);
++		/*
++		 * Reclaiming all the block groups in the list can take really
++		 * long.  Prioritize cleaning up unused block groups.
++		 */
++		btrfs_delete_unused_bgs(fs_info);
++		/*
++		 * If we are interrupted by a balance, we can just bail out. The
++		 * cleaner thread will restart again if necessary.
++		 */
++		if (!mutex_trylock(&fs_info->reclaim_bgs_lock))
++			goto end;
+ 		spin_lock(&fs_info->unused_bgs_lock);
+ 	}
+ 	spin_unlock(&fs_info->unused_bgs_lock);
+ 	mutex_unlock(&fs_info->reclaim_bgs_lock);
++end:
+ 	btrfs_exclop_finish(fs_info);
+ 	sb_end_write(fs_info->sb);
+ }
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 2ff2961b11830..4912d624ca3d3 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -583,9 +583,14 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,
+ 		    btrfs_header_backref_rev(buf) < BTRFS_MIXED_BACKREF_REV)
+ 			parent_start = buf->start;
+ 
+-		atomic_inc(&cow->refs);
+ 		ret = btrfs_tree_mod_log_insert_root(root->node, cow, true);
+-		BUG_ON(ret < 0);
++		if (ret < 0) {
++			btrfs_tree_unlock(cow);
++			free_extent_buffer(cow);
++			btrfs_abort_transaction(trans, ret);
++			return ret;
++		}
++		atomic_inc(&cow->refs);
+ 		rcu_assign_pointer(root->node, cow);
+ 
+ 		btrfs_free_tree_block(trans, btrfs_root_id(root), buf,
+@@ -594,8 +599,14 @@ static noinline int __btrfs_cow_block(struct btrfs_trans_handle *trans,
+ 		add_root_to_dirty_list(root);
+ 	} else {
+ 		WARN_ON(trans->transid != btrfs_header_generation(parent));
+-		btrfs_tree_mod_log_insert_key(parent, parent_slot,
+-					      BTRFS_MOD_LOG_KEY_REPLACE);
++		ret = btrfs_tree_mod_log_insert_key(parent, parent_slot,
++						    BTRFS_MOD_LOG_KEY_REPLACE);
++		if (ret) {
++			btrfs_tree_unlock(cow);
++			free_extent_buffer(cow);
++			btrfs_abort_transaction(trans, ret);
++			return ret;
++		}
+ 		btrfs_set_node_blockptr(parent, parent_slot,
+ 					cow->start);
+ 		btrfs_set_node_ptr_generation(parent, parent_slot,
+@@ -1042,7 +1053,12 @@ static noinline int balance_level(struct btrfs_trans_handle *trans,
+ 		}
+ 
+ 		ret = btrfs_tree_mod_log_insert_root(root->node, child, true);
+-		BUG_ON(ret < 0);
++		if (ret < 0) {
++			btrfs_tree_unlock(child);
++			free_extent_buffer(child);
++			btrfs_abort_transaction(trans, ret);
++			goto enospc;
++		}
+ 		rcu_assign_pointer(root->node, child);
+ 
+ 		add_root_to_dirty_list(root);
+@@ -1130,7 +1146,10 @@ static noinline int balance_level(struct btrfs_trans_handle *trans,
+ 			btrfs_node_key(right, &right_key, 0);
+ 			ret = btrfs_tree_mod_log_insert_key(parent, pslot + 1,
+ 					BTRFS_MOD_LOG_KEY_REPLACE);
+-			BUG_ON(ret < 0);
++			if (ret < 0) {
++				btrfs_abort_transaction(trans, ret);
++				goto enospc;
++			}
+ 			btrfs_set_node_key(parent, &right_key, pslot + 1);
+ 			btrfs_mark_buffer_dirty(parent);
+ 		}
+@@ -1176,7 +1195,10 @@ static noinline int balance_level(struct btrfs_trans_handle *trans,
+ 		btrfs_node_key(mid, &mid_key, 0);
+ 		ret = btrfs_tree_mod_log_insert_key(parent, pslot,
+ 						    BTRFS_MOD_LOG_KEY_REPLACE);
+-		BUG_ON(ret < 0);
++		if (ret < 0) {
++			btrfs_abort_transaction(trans, ret);
++			goto enospc;
++		}
+ 		btrfs_set_node_key(parent, &mid_key, pslot);
+ 		btrfs_mark_buffer_dirty(parent);
+ 	}
+@@ -2703,8 +2725,8 @@ static int push_node_left(struct btrfs_trans_handle *trans,
+ 
+ 	if (push_items < src_nritems) {
+ 		/*
+-		 * Don't call btrfs_tree_mod_log_insert_move() here, key removal
+-		 * was already fully logged by btrfs_tree_mod_log_eb_copy() above.
++		 * btrfs_tree_mod_log_eb_copy handles logging the move, so we
++		 * don't need to do an explicit tree mod log operation for it.
+ 		 */
+ 		memmove_extent_buffer(src, btrfs_node_key_ptr_offset(src, 0),
+ 				      btrfs_node_key_ptr_offset(src, push_items),
+@@ -2765,8 +2787,11 @@ static int balance_node_right(struct btrfs_trans_handle *trans,
+ 		btrfs_abort_transaction(trans, ret);
+ 		return ret;
+ 	}
+-	ret = btrfs_tree_mod_log_insert_move(dst, push_items, 0, dst_nritems);
+-	BUG_ON(ret < 0);
++
++	/*
++	 * btrfs_tree_mod_log_eb_copy handles logging the move, so we don't
++	 * need to do an explicit tree mod log operation for it.
++	 */
+ 	memmove_extent_buffer(dst, btrfs_node_key_ptr_offset(dst, push_items),
+ 				      btrfs_node_key_ptr_offset(dst, 0),
+ 				      (dst_nritems) *
+@@ -2962,6 +2987,8 @@ static noinline int split_node(struct btrfs_trans_handle *trans,
+ 
+ 	ret = btrfs_tree_mod_log_eb_copy(split, c, 0, mid, c_nritems - mid);
+ 	if (ret) {
++		btrfs_tree_unlock(split);
++		free_extent_buffer(split);
+ 		btrfs_abort_transaction(trans, ret);
+ 		return ret;
+ 	}
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index dabc79c1af1bd..fc59eb4024438 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -4683,7 +4683,6 @@ void btrfs_mark_buffer_dirty(struct extent_buffer *buf)
+ {
+ 	struct btrfs_fs_info *fs_info = buf->fs_info;
+ 	u64 transid = btrfs_header_generation(buf);
+-	int was_dirty;
+ 
+ #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+ 	/*
+@@ -4698,11 +4697,7 @@ void btrfs_mark_buffer_dirty(struct extent_buffer *buf)
+ 	if (transid != fs_info->generation)
+ 		WARN(1, KERN_CRIT "btrfs transid mismatch buffer %llu, found %llu running %llu\n",
+ 			buf->start, transid, fs_info->generation);
+-	was_dirty = set_extent_buffer_dirty(buf);
+-	if (!was_dirty)
+-		percpu_counter_add_batch(&fs_info->dirty_metadata_bytes,
+-					 buf->len,
+-					 fs_info->dirty_metadata_batch);
++	set_extent_buffer_dirty(buf);
+ #ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
+ 	/*
+ 	 * Since btrfs_mark_buffer_dirty() can be called with item pointer set
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index a1adadd5d25dd..e3ae55d8bae14 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -98,33 +98,16 @@ void btrfs_extent_buffer_leak_debug_check(struct btrfs_fs_info *fs_info)
+  */
+ struct btrfs_bio_ctrl {
+ 	struct btrfs_bio *bbio;
+-	int mirror_num;
+ 	enum btrfs_compression_type compress_type;
+ 	u32 len_to_oe_boundary;
+ 	blk_opf_t opf;
+ 	btrfs_bio_end_io_t end_io_func;
+ 	struct writeback_control *wbc;
+-
+-	/*
+-	 * This is for metadata read, to provide the extra needed verification
+-	 * info.  This has to be provided for submit_one_bio(), as
+-	 * submit_one_bio() can submit a bio if it ends at stripe boundary.  If
+-	 * no such parent_check is provided, the metadata can hit false alert at
+-	 * endio time.
+-	 */
+-	struct btrfs_tree_parent_check *parent_check;
+-
+-	/*
+-	 * Tell writepage not to lock the state bits for this range, it still
+-	 * does the unlocking.
+-	 */
+-	bool extent_locked;
+ };
+ 
+ static void submit_one_bio(struct btrfs_bio_ctrl *bio_ctrl)
+ {
+ 	struct btrfs_bio *bbio = bio_ctrl->bbio;
+-	int mirror_num = bio_ctrl->mirror_num;
+ 
+ 	if (!bbio)
+ 		return;
+@@ -132,25 +115,14 @@ static void submit_one_bio(struct btrfs_bio_ctrl *bio_ctrl)
+ 	/* Caller should ensure the bio has at least some range added */
+ 	ASSERT(bbio->bio.bi_iter.bi_size);
+ 
+-	if (!is_data_inode(&bbio->inode->vfs_inode)) {
+-		if (btrfs_op(&bbio->bio) != BTRFS_MAP_WRITE) {
+-			/*
+-			 * For metadata read, we should have the parent_check,
+-			 * and copy it to bbio for metadata verification.
+-			 */
+-			ASSERT(bio_ctrl->parent_check);
+-			memcpy(&bbio->parent_check,
+-			       bio_ctrl->parent_check,
+-			       sizeof(struct btrfs_tree_parent_check));
+-		}
++	if (!is_data_inode(&bbio->inode->vfs_inode))
+ 		bbio->bio.bi_opf |= REQ_META;
+-	}
+ 
+ 	if (btrfs_op(&bbio->bio) == BTRFS_MAP_READ &&
+ 	    bio_ctrl->compress_type != BTRFS_COMPRESS_NONE)
+-		btrfs_submit_compressed_read(bbio, mirror_num);
++		btrfs_submit_compressed_read(bbio, 0);
+ 	else
+-		btrfs_submit_bio(bbio, mirror_num);
++		btrfs_submit_bio(bbio, 0);
+ 
+ 	/* The bbio is owned by the end_io handler now */
+ 	bio_ctrl->bbio = NULL;
+@@ -1572,7 +1544,6 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl
+ {
+ 	struct folio *folio = page_folio(page);
+ 	struct inode *inode = page->mapping->host;
+-	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
+ 	const u64 page_start = page_offset(page);
+ 	const u64 page_end = page_start + PAGE_SIZE - 1;
+ 	int ret;
+@@ -1605,13 +1576,11 @@ static int __extent_writepage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl
+ 		goto done;
+ 	}
+ 
+-	if (!bio_ctrl->extent_locked) {
+-		ret = writepage_delalloc(BTRFS_I(inode), page, bio_ctrl->wbc);
+-		if (ret == 1)
+-			return 0;
+-		if (ret)
+-			goto done;
+-	}
++	ret = writepage_delalloc(BTRFS_I(inode), page, bio_ctrl->wbc);
++	if (ret == 1)
++		return 0;
++	if (ret)
++		goto done;
+ 
+ 	ret = __extent_writepage_io(BTRFS_I(inode), page, bio_ctrl, i_size, &nr);
+ 	if (ret == 1)
+@@ -1656,21 +1625,7 @@ done:
+ 	 */
+ 	if (PageError(page))
+ 		end_extent_writepage(page, ret, page_start, page_end);
+-	if (bio_ctrl->extent_locked) {
+-		struct writeback_control *wbc = bio_ctrl->wbc;
+-
+-		/*
+-		 * If bio_ctrl->extent_locked, it's from extent_write_locked_range(),
+-		 * the page can either be locked by lock_page() or
+-		 * process_one_page().
+-		 * Let btrfs_page_unlock_writer() handle both cases.
+-		 */
+-		ASSERT(wbc);
+-		btrfs_page_unlock_writer(fs_info, page, wbc->range_start,
+-					 wbc->range_end + 1 - wbc->range_start);
+-	} else {
+-		unlock_page(page);
+-	}
++	unlock_page(page);
+ 	ASSERT(ret <= 0);
+ 	return ret;
+ }
+@@ -1691,42 +1646,24 @@ static void end_extent_buffer_writeback(struct extent_buffer *eb)
+ /*
+  * Lock extent buffer status and pages for writeback.
+  *
+- * May try to flush write bio if we can't get the lock.
+- *
+- * Return  0 if the extent buffer doesn't need to be submitted.
+- *           (E.g. the extent buffer is not dirty)
+- * Return >0 is the extent buffer is submitted to bio.
+- * Return <0 if something went wrong, no page is locked.
++ * Return %false if the extent buffer doesn't need to be submitted (e.g. the
++ * extent buffer is not dirty)
++ * Return %true if the extent buffer is submitted to bio.
+  */
+-static noinline_for_stack int lock_extent_buffer_for_io(struct extent_buffer *eb,
+-			  struct btrfs_bio_ctrl *bio_ctrl)
++static noinline_for_stack bool lock_extent_buffer_for_io(struct extent_buffer *eb,
++			  struct writeback_control *wbc)
+ {
+ 	struct btrfs_fs_info *fs_info = eb->fs_info;
+-	int i, num_pages;
+-	int flush = 0;
+-	int ret = 0;
+-
+-	if (!btrfs_try_tree_write_lock(eb)) {
+-		submit_write_bio(bio_ctrl, 0);
+-		flush = 1;
+-		btrfs_tree_lock(eb);
+-	}
++	bool ret = false;
++	int i;
+ 
+-	if (test_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags)) {
++	btrfs_tree_lock(eb);
++	while (test_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags)) {
+ 		btrfs_tree_unlock(eb);
+-		if (bio_ctrl->wbc->sync_mode != WB_SYNC_ALL)
+-			return 0;
+-		if (!flush) {
+-			submit_write_bio(bio_ctrl, 0);
+-			flush = 1;
+-		}
+-		while (1) {
+-			wait_on_extent_buffer_writeback(eb);
+-			btrfs_tree_lock(eb);
+-			if (!test_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags))
+-				break;
+-			btrfs_tree_unlock(eb);
+-		}
++		if (wbc->sync_mode != WB_SYNC_ALL)
++			return false;
++		wait_on_extent_buffer_writeback(eb);
++		btrfs_tree_lock(eb);
+ 	}
+ 
+ 	/*
+@@ -1742,7 +1679,7 @@ static noinline_for_stack int lock_extent_buffer_for_io(struct extent_buffer *eb
+ 		percpu_counter_add_batch(&fs_info->dirty_metadata_bytes,
+ 					 -eb->len,
+ 					 fs_info->dirty_metadata_batch);
+-		ret = 1;
++		ret = true;
+ 	} else {
+ 		spin_unlock(&eb->refs_lock);
+ 	}
+@@ -1758,19 +1695,8 @@ static noinline_for_stack int lock_extent_buffer_for_io(struct extent_buffer *eb
+ 	if (!ret || fs_info->nodesize < PAGE_SIZE)
+ 		return ret;
+ 
+-	num_pages = num_extent_pages(eb);
+-	for (i = 0; i < num_pages; i++) {
+-		struct page *p = eb->pages[i];
+-
+-		if (!trylock_page(p)) {
+-			if (!flush) {
+-				submit_write_bio(bio_ctrl, 0);
+-				flush = 1;
+-			}
+-			lock_page(p);
+-		}
+-	}
+-
++	for (i = 0; i < num_extent_pages(eb); i++)
++		lock_page(eb->pages[i]);
+ 	return ret;
+ }
+ 
+@@ -2000,11 +1926,16 @@ static void prepare_eb_write(struct extent_buffer *eb)
+  * Page locking is only utilized at minimum to keep the VMM code happy.
+  */
+ static void write_one_subpage_eb(struct extent_buffer *eb,
+-				 struct btrfs_bio_ctrl *bio_ctrl)
++				 struct writeback_control *wbc)
+ {
+ 	struct btrfs_fs_info *fs_info = eb->fs_info;
+ 	struct page *page = eb->pages[0];
+ 	bool no_dirty_ebs = false;
++	struct btrfs_bio_ctrl bio_ctrl = {
++		.wbc = wbc,
++		.opf = REQ_OP_WRITE | wbc_to_write_flags(wbc),
++		.end_io_func = end_bio_subpage_eb_writepage,
++	};
+ 
+ 	prepare_eb_write(eb);
+ 
+@@ -2018,40 +1949,43 @@ static void write_one_subpage_eb(struct extent_buffer *eb,
+ 	if (no_dirty_ebs)
+ 		clear_page_dirty_for_io(page);
+ 
+-	bio_ctrl->end_io_func = end_bio_subpage_eb_writepage;
+-
+-	submit_extent_page(bio_ctrl, eb->start, page, eb->len,
++	submit_extent_page(&bio_ctrl, eb->start, page, eb->len,
+ 			   eb->start - page_offset(page));
+ 	unlock_page(page);
++	submit_one_bio(&bio_ctrl);
+ 	/*
+ 	 * Submission finished without problem, if no range of the page is
+ 	 * dirty anymore, we have submitted a page.  Update nr_written in wbc.
+ 	 */
+ 	if (no_dirty_ebs)
+-		bio_ctrl->wbc->nr_to_write--;
++		wbc->nr_to_write--;
+ }
+ 
+ static noinline_for_stack void write_one_eb(struct extent_buffer *eb,
+-			struct btrfs_bio_ctrl *bio_ctrl)
++					    struct writeback_control *wbc)
+ {
+ 	u64 disk_bytenr = eb->start;
+ 	int i, num_pages;
++	struct btrfs_bio_ctrl bio_ctrl = {
++		.wbc = wbc,
++		.opf = REQ_OP_WRITE | wbc_to_write_flags(wbc),
++		.end_io_func = end_bio_extent_buffer_writepage,
++	};
+ 
+ 	prepare_eb_write(eb);
+ 
+-	bio_ctrl->end_io_func = end_bio_extent_buffer_writepage;
+-
+ 	num_pages = num_extent_pages(eb);
+ 	for (i = 0; i < num_pages; i++) {
+ 		struct page *p = eb->pages[i];
+ 
+ 		clear_page_dirty_for_io(p);
+ 		set_page_writeback(p);
+-		submit_extent_page(bio_ctrl, disk_bytenr, p, PAGE_SIZE, 0);
++		submit_extent_page(&bio_ctrl, disk_bytenr, p, PAGE_SIZE, 0);
+ 		disk_bytenr += PAGE_SIZE;
+-		bio_ctrl->wbc->nr_to_write--;
++		wbc->nr_to_write--;
+ 		unlock_page(p);
+ 	}
++	submit_one_bio(&bio_ctrl);
+ }
+ 
+ /*
+@@ -2068,14 +2002,13 @@ static noinline_for_stack void write_one_eb(struct extent_buffer *eb,
+  * Return >=0 for the number of submitted extent buffers.
+  * Return <0 for fatal error.
+  */
+-static int submit_eb_subpage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl)
++static int submit_eb_subpage(struct page *page, struct writeback_control *wbc)
+ {
+ 	struct btrfs_fs_info *fs_info = btrfs_sb(page->mapping->host->i_sb);
+ 	int submitted = 0;
+ 	u64 page_start = page_offset(page);
+ 	int bit_start = 0;
+ 	int sectors_per_node = fs_info->nodesize >> fs_info->sectorsize_bits;
+-	int ret;
+ 
+ 	/* Lock and write each dirty extent buffers in the range */
+ 	while (bit_start < fs_info->subpage_info->bitmap_nr_bits) {
+@@ -2121,25 +2054,13 @@ static int submit_eb_subpage(struct page *page, struct btrfs_bio_ctrl *bio_ctrl)
+ 		if (!eb)
+ 			continue;
+ 
+-		ret = lock_extent_buffer_for_io(eb, bio_ctrl);
+-		if (ret == 0) {
+-			free_extent_buffer(eb);
+-			continue;
++		if (lock_extent_buffer_for_io(eb, wbc)) {
++			write_one_subpage_eb(eb, wbc);
++			submitted++;
+ 		}
+-		if (ret < 0) {
+-			free_extent_buffer(eb);
+-			goto cleanup;
+-		}
+-		write_one_subpage_eb(eb, bio_ctrl);
+ 		free_extent_buffer(eb);
+-		submitted++;
+ 	}
+ 	return submitted;
+-
+-cleanup:
+-	/* We hit error, end bio for the submitted extent buffers */
+-	submit_write_bio(bio_ctrl, ret);
+-	return ret;
+ }
+ 
+ /*
+@@ -2162,7 +2083,7 @@ cleanup:
+  * previous call.
+  * Return <0 for fatal error.
+  */
+-static int submit_eb_page(struct page *page, struct btrfs_bio_ctrl *bio_ctrl,
++static int submit_eb_page(struct page *page, struct writeback_control *wbc,
+ 			  struct extent_buffer **eb_context)
+ {
+ 	struct address_space *mapping = page->mapping;
+@@ -2174,7 +2095,7 @@ static int submit_eb_page(struct page *page, struct btrfs_bio_ctrl *bio_ctrl,
+ 		return 0;
+ 
+ 	if (btrfs_sb(page->mapping->host->i_sb)->nodesize < PAGE_SIZE)
+-		return submit_eb_subpage(page, bio_ctrl);
++		return submit_eb_subpage(page, wbc);
+ 
+ 	spin_lock(&mapping->private_lock);
+ 	if (!PagePrivate(page)) {
+@@ -2207,8 +2128,7 @@ static int submit_eb_page(struct page *page, struct btrfs_bio_ctrl *bio_ctrl,
+ 		 * If for_sync, this hole will be filled with
+ 		 * transaction commit.
+ 		 */
+-		if (bio_ctrl->wbc->sync_mode == WB_SYNC_ALL &&
+-		    !bio_ctrl->wbc->for_sync)
++		if (wbc->sync_mode == WB_SYNC_ALL && !wbc->for_sync)
+ 			ret = -EAGAIN;
+ 		else
+ 			ret = 0;
+@@ -2218,13 +2138,12 @@ static int submit_eb_page(struct page *page, struct btrfs_bio_ctrl *bio_ctrl,
+ 
+ 	*eb_context = eb;
+ 
+-	ret = lock_extent_buffer_for_io(eb, bio_ctrl);
+-	if (ret <= 0) {
++	if (!lock_extent_buffer_for_io(eb, wbc)) {
+ 		btrfs_revert_meta_write_pointer(cache, eb);
+ 		if (cache)
+ 			btrfs_put_block_group(cache);
+ 		free_extent_buffer(eb);
+-		return ret;
++		return 0;
+ 	}
+ 	if (cache) {
+ 		/*
+@@ -2233,7 +2152,7 @@ static int submit_eb_page(struct page *page, struct btrfs_bio_ctrl *bio_ctrl,
+ 		btrfs_schedule_zone_finish_bg(cache, eb);
+ 		btrfs_put_block_group(cache);
+ 	}
+-	write_one_eb(eb, bio_ctrl);
++	write_one_eb(eb, wbc);
+ 	free_extent_buffer(eb);
+ 	return 1;
+ }
+@@ -2242,11 +2161,6 @@ int btree_write_cache_pages(struct address_space *mapping,
+ 				   struct writeback_control *wbc)
+ {
+ 	struct extent_buffer *eb_context = NULL;
+-	struct btrfs_bio_ctrl bio_ctrl = {
+-		.wbc = wbc,
+-		.opf = REQ_OP_WRITE | wbc_to_write_flags(wbc),
+-		.extent_locked = 0,
+-	};
+ 	struct btrfs_fs_info *fs_info = BTRFS_I(mapping->host)->root->fs_info;
+ 	int ret = 0;
+ 	int done = 0;
+@@ -2288,7 +2202,7 @@ retry:
+ 		for (i = 0; i < nr_folios; i++) {
+ 			struct folio *folio = fbatch.folios[i];
+ 
+-			ret = submit_eb_page(&folio->page, &bio_ctrl, &eb_context);
++			ret = submit_eb_page(&folio->page, wbc, &eb_context);
+ 			if (ret == 0)
+ 				continue;
+ 			if (ret < 0) {
+@@ -2349,8 +2263,6 @@ retry:
+ 		ret = 0;
+ 	if (!ret && BTRFS_FS_ERROR(fs_info))
+ 		ret = -EROFS;
+-	submit_write_bio(&bio_ctrl, ret);
+-
+ 	btrfs_zoned_meta_io_unlock(fs_info);
+ 	return ret;
+ }
+@@ -2520,38 +2432,31 @@ retry:
+  * already been ran (aka, ordered extent inserted) and all pages are still
+  * locked.
+  */
+-int extent_write_locked_range(struct inode *inode, u64 start, u64 end)
++int extent_write_locked_range(struct inode *inode, u64 start, u64 end,
++			      struct writeback_control *wbc)
+ {
+ 	bool found_error = false;
+ 	int first_error = 0;
+ 	int ret = 0;
+ 	struct address_space *mapping = inode->i_mapping;
+-	struct page *page;
++	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
++	const u32 sectorsize = fs_info->sectorsize;
++	loff_t i_size = i_size_read(inode);
+ 	u64 cur = start;
+-	unsigned long nr_pages;
+-	const u32 sectorsize = btrfs_sb(inode->i_sb)->sectorsize;
+-	struct writeback_control wbc_writepages = {
+-		.sync_mode	= WB_SYNC_ALL,
+-		.range_start	= start,
+-		.range_end	= end + 1,
+-		.no_cgroup_owner = 1,
+-	};
+ 	struct btrfs_bio_ctrl bio_ctrl = {
+-		.wbc = &wbc_writepages,
+-		/* We're called from an async helper function */
+-		.opf = REQ_OP_WRITE | REQ_BTRFS_CGROUP_PUNT |
+-			wbc_to_write_flags(&wbc_writepages),
+-		.extent_locked = 1,
++		.wbc = wbc,
++		.opf = REQ_OP_WRITE | wbc_to_write_flags(wbc),
+ 	};
+ 
++	if (wbc->no_cgroup_owner)
++		bio_ctrl.opf |= REQ_BTRFS_CGROUP_PUNT;
++
+ 	ASSERT(IS_ALIGNED(start, sectorsize) && IS_ALIGNED(end + 1, sectorsize));
+-	nr_pages = (round_up(end, PAGE_SIZE) - round_down(start, PAGE_SIZE)) >>
+-		   PAGE_SHIFT;
+-	wbc_writepages.nr_to_write = nr_pages * 2;
+ 
+-	wbc_attach_fdatawrite_inode(&wbc_writepages, inode);
+ 	while (cur <= end) {
+ 		u64 cur_end = min(round_down(cur, PAGE_SIZE) + PAGE_SIZE - 1, end);
++		struct page *page;
++		int nr = 0;
+ 
+ 		page = find_get_page(mapping, cur >> PAGE_SHIFT);
+ 		/*
+@@ -2562,19 +2467,31 @@ int extent_write_locked_range(struct inode *inode, u64 start, u64 end)
+ 		ASSERT(PageLocked(page));
+ 		ASSERT(PageDirty(page));
+ 		clear_page_dirty_for_io(page);
+-		ret = __extent_writepage(page, &bio_ctrl);
+-		ASSERT(ret <= 0);
++
++		ret = __extent_writepage_io(BTRFS_I(inode), page, &bio_ctrl,
++					    i_size, &nr);
++		if (ret == 1)
++			goto next_page;
++
++		/* Make sure the mapping tag for page dirty gets cleared. */
++		if (nr == 0) {
++			set_page_writeback(page);
++			end_page_writeback(page);
++		}
++		if (ret)
++			end_extent_writepage(page, ret, cur, cur_end);
++		btrfs_page_unlock_writer(fs_info, page, cur, cur_end + 1 - cur);
+ 		if (ret < 0) {
+ 			found_error = true;
+ 			first_error = ret;
+ 		}
++next_page:
+ 		put_page(page);
+ 		cur = cur_end + 1;
+ 	}
+ 
+ 	submit_write_bio(&bio_ctrl, found_error ? ret : 0);
+ 
+-	wbc_detach_inode(&wbc_writepages);
+ 	if (found_error)
+ 		return first_error;
+ 	return ret;
+@@ -2588,7 +2505,6 @@ int extent_writepages(struct address_space *mapping,
+ 	struct btrfs_bio_ctrl bio_ctrl = {
+ 		.wbc = wbc,
+ 		.opf = REQ_OP_WRITE | wbc_to_write_flags(wbc),
+-		.extent_locked = 0,
+ 	};
+ 
+ 	/*
+@@ -4148,7 +4064,7 @@ void btrfs_clear_buffer_dirty(struct btrfs_trans_handle *trans,
+ 	WARN_ON(atomic_read(&eb->refs) == 0);
+ }
+ 
+-bool set_extent_buffer_dirty(struct extent_buffer *eb)
++void set_extent_buffer_dirty(struct extent_buffer *eb)
+ {
+ 	int i;
+ 	int num_pages;
+@@ -4183,13 +4099,14 @@ bool set_extent_buffer_dirty(struct extent_buffer *eb)
+ 					     eb->start, eb->len);
+ 		if (subpage)
+ 			unlock_page(eb->pages[0]);
++		percpu_counter_add_batch(&eb->fs_info->dirty_metadata_bytes,
++					 eb->len,
++					 eb->fs_info->dirty_metadata_batch);
+ 	}
+ #ifdef CONFIG_BTRFS_DEBUG
+ 	for (i = 0; i < num_pages; i++)
+ 		ASSERT(PageDirty(eb->pages[i]));
+ #endif
+-
+-	return was_dirty;
+ }
+ 
+ void clear_extent_buffer_uptodate(struct extent_buffer *eb)
+@@ -4242,6 +4159,36 @@ void set_extent_buffer_uptodate(struct extent_buffer *eb)
+ 	}
+ }
+ 
++static void __read_extent_buffer_pages(struct extent_buffer *eb, int mirror_num,
++				       struct btrfs_tree_parent_check *check)
++{
++	int num_pages = num_extent_pages(eb), i;
++	struct btrfs_bio *bbio;
++
++	clear_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags);
++	eb->read_mirror = 0;
++	atomic_set(&eb->io_pages, num_pages);
++	check_buffer_tree_ref(eb);
++
++	bbio = btrfs_bio_alloc(INLINE_EXTENT_BUFFER_PAGES,
++			       REQ_OP_READ | REQ_META, eb->fs_info,
++			       end_bio_extent_readpage, NULL);
++	bbio->bio.bi_iter.bi_sector = eb->start >> SECTOR_SHIFT;
++	bbio->inode = BTRFS_I(eb->fs_info->btree_inode);
++	bbio->file_offset = eb->start;
++	memcpy(&bbio->parent_check, check, sizeof(*check));
++	if (eb->fs_info->nodesize < PAGE_SIZE) {
++		__bio_add_page(&bbio->bio, eb->pages[0], eb->len,
++			       eb->start - page_offset(eb->pages[0]));
++	} else {
++		for (i = 0; i < num_pages; i++) {
++			ClearPageError(eb->pages[i]);
++			__bio_add_page(&bbio->bio, eb->pages[i], PAGE_SIZE, 0);
++		}
++	}
++	btrfs_submit_bio(bbio, mirror_num);
++}
++
+ static int read_extent_buffer_subpage(struct extent_buffer *eb, int wait,
+ 				      int mirror_num,
+ 				      struct btrfs_tree_parent_check *check)
+@@ -4250,11 +4197,6 @@ static int read_extent_buffer_subpage(struct extent_buffer *eb, int wait,
+ 	struct extent_io_tree *io_tree;
+ 	struct page *page = eb->pages[0];
+ 	struct extent_state *cached_state = NULL;
+-	struct btrfs_bio_ctrl bio_ctrl = {
+-		.opf = REQ_OP_READ,
+-		.mirror_num = mirror_num,
+-		.parent_check = check,
+-	};
+ 	int ret;
+ 
+ 	ASSERT(!test_bit(EXTENT_BUFFER_UNMAPPED, &eb->bflags));
+@@ -4282,18 +4224,10 @@ static int read_extent_buffer_subpage(struct extent_buffer *eb, int wait,
+ 		return 0;
+ 	}
+ 
+-	clear_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags);
+-	eb->read_mirror = 0;
+-	atomic_set(&eb->io_pages, 1);
+-	check_buffer_tree_ref(eb);
+-	bio_ctrl.end_io_func = end_bio_extent_readpage;
+-
+ 	btrfs_subpage_clear_error(fs_info, page, eb->start, eb->len);
+-
+ 	btrfs_subpage_start_reader(fs_info, page, eb->start, eb->len);
+-	submit_extent_page(&bio_ctrl, eb->start, page, eb->len,
+-			   eb->start - page_offset(page));
+-	submit_one_bio(&bio_ctrl);
++
++	__read_extent_buffer_pages(eb, mirror_num, check);
+ 	if (wait != WAIT_COMPLETE) {
+ 		free_extent_state(cached_state);
+ 		return 0;
+@@ -4314,12 +4248,6 @@ int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num,
+ 	int locked_pages = 0;
+ 	int all_uptodate = 1;
+ 	int num_pages;
+-	unsigned long num_reads = 0;
+-	struct btrfs_bio_ctrl bio_ctrl = {
+-		.opf = REQ_OP_READ,
+-		.mirror_num = mirror_num,
+-		.parent_check = check,
+-	};
+ 
+ 	if (test_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags))
+ 		return 0;
+@@ -4360,10 +4288,8 @@ int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num,
+ 	 */
+ 	for (i = 0; i < num_pages; i++) {
+ 		page = eb->pages[i];
+-		if (!PageUptodate(page)) {
+-			num_reads++;
++		if (!PageUptodate(page))
+ 			all_uptodate = 0;
+-		}
+ 	}
+ 
+ 	if (all_uptodate) {
+@@ -4371,28 +4297,7 @@ int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num,
+ 		goto unlock_exit;
+ 	}
+ 
+-	clear_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags);
+-	eb->read_mirror = 0;
+-	atomic_set(&eb->io_pages, num_reads);
+-	/*
+-	 * It is possible for release_folio to clear the TREE_REF bit before we
+-	 * set io_pages. See check_buffer_tree_ref for a more detailed comment.
+-	 */
+-	check_buffer_tree_ref(eb);
+-	bio_ctrl.end_io_func = end_bio_extent_readpage;
+-	for (i = 0; i < num_pages; i++) {
+-		page = eb->pages[i];
+-
+-		if (!PageUptodate(page)) {
+-			ClearPageError(page);
+-			submit_extent_page(&bio_ctrl, page_offset(page), page,
+-					   PAGE_SIZE, 0);
+-		} else {
+-			unlock_page(page);
+-		}
+-	}
+-
+-	submit_one_bio(&bio_ctrl);
++	__read_extent_buffer_pages(eb, mirror_num, check);
+ 
+ 	if (wait != WAIT_COMPLETE)
+ 		return 0;
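
The new __read_extent_buffer_pages() helper above builds a single read bio for the whole buffer: one sub-page segment when nodesize < PAGE_SIZE, otherwise one full page per iteration. A small sketch of the two segment layouts it produces (illustrative types, not the kernel bio API):

#include <stdio.h>

#define PAGE_SIZE 4096UL

struct seg { unsigned long off; unsigned long len; };

/* Model of the two layouts __read_extent_buffer_pages() handles: one
 * sub-page segment (offset within the page, nodesize bytes) or one
 * full page per iteration. Returns the number of segments "added". */
static int build_segments(unsigned long nodesize, unsigned long start,
			  struct seg *segs)
{
	if (nodesize < PAGE_SIZE) {
		segs[0].off = start % PAGE_SIZE;	/* eb->start - page_offset() */
		segs[0].len = nodesize;
		return 1;
	}
	int n = nodesize / PAGE_SIZE;
	for (int i = 0; i < n; i++) {
		segs[i].off = 0;
		segs[i].len = PAGE_SIZE;
	}
	return n;
}

int main(void)
{
	struct seg segs[16];

	printf("subpage: %d segs\n", build_segments(2048, 6144, segs));
	printf("full:    %d segs\n", build_segments(16384, 0, segs));
	return 0;
}
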
+diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
+index 4341ad978fb8e..c368e9f02c74e 100644
+--- a/fs/btrfs/extent_io.h
++++ b/fs/btrfs/extent_io.h
+@@ -179,7 +179,8 @@ int try_release_extent_mapping(struct page *page, gfp_t mask);
+ int try_release_extent_buffer(struct page *page);
+ 
+ int btrfs_read_folio(struct file *file, struct folio *folio);
+-int extent_write_locked_range(struct inode *inode, u64 start, u64 end);
++int extent_write_locked_range(struct inode *inode, u64 start, u64 end,
++			      struct writeback_control *wbc);
+ int extent_writepages(struct address_space *mapping,
+ 		      struct writeback_control *wbc);
+ int btree_write_cache_pages(struct address_space *mapping,
+@@ -262,7 +263,7 @@ void extent_buffer_bitmap_set(const struct extent_buffer *eb, unsigned long star
+ void extent_buffer_bitmap_clear(const struct extent_buffer *eb,
+ 				unsigned long start, unsigned long pos,
+ 				unsigned long len);
+-bool set_extent_buffer_dirty(struct extent_buffer *eb);
++void set_extent_buffer_dirty(struct extent_buffer *eb);
+ void set_extent_buffer_uptodate(struct extent_buffer *eb);
+ void clear_extent_buffer_uptodate(struct extent_buffer *eb);
+ int extent_buffer_under_io(const struct extent_buffer *eb);
+diff --git a/fs/btrfs/free-space-tree.c b/fs/btrfs/free-space-tree.c
+index b21da1446f2aa..045ddce32eca4 100644
+--- a/fs/btrfs/free-space-tree.c
++++ b/fs/btrfs/free-space-tree.c
+@@ -1280,7 +1280,10 @@ int btrfs_delete_free_space_tree(struct btrfs_fs_info *fs_info)
+ 		goto abort;
+ 
+ 	btrfs_global_root_delete(free_space_root);
++
++	spin_lock(&fs_info->trans_lock);
+ 	list_del(&free_space_root->dirty_list);
++	spin_unlock(&fs_info->trans_lock);
+ 
+ 	btrfs_tree_lock(free_space_root->node);
+ 	btrfs_clear_buffer_dirty(trans, free_space_root->node);
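
This hunk and the matching qgroup.c hunk below apply the same one-line rule: a list that other contexts walk under fs_info->trans_lock must also be modified under it. A userspace model with a mutex standing in for the spinlock (names illustrative):

#include <pthread.h>
#include <stdio.h>

struct node { struct node *prev, *next; };

static pthread_mutex_t trans_lock = PTHREAD_MUTEX_INITIALIZER;

/* Unlinking from a list that other threads traverse under trans_lock
 * must itself hold trans_lock, or a walker can step through a node
 * that is mid-unhook. Mirrors the spin_lock(&fs_info->trans_lock)
 * now wrapped around list_del(&root->dirty_list). */
static void list_del_locked(struct node *n)
{
	pthread_mutex_lock(&trans_lock);
	n->prev->next = n->next;
	n->next->prev = n->prev;
	pthread_mutex_unlock(&trans_lock);
}

int main(void)
{
	struct node head, a;

	head.next = &a; head.prev = &a;
	a.next = &head; a.prev = &head;
	list_del_locked(&a);
	printf("empty: %d\n", head.next == &head);
	return 0;
}
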
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 7fcafcc5292c8..2e6eed4b1b3cc 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -934,6 +934,12 @@ static int submit_uncompressed_range(struct btrfs_inode *inode,
+ 	unsigned long nr_written = 0;
+ 	int page_started = 0;
+ 	int ret;
++	struct writeback_control wbc = {
++		.sync_mode		= WB_SYNC_ALL,
++		.range_start		= start,
++		.range_end		= end,
++		.no_cgroup_owner	= 1,
++	};
+ 
+ 	/*
+ 	 * Call cow_file_range() to run the delalloc range directly, since we
+@@ -965,7 +971,10 @@ static int submit_uncompressed_range(struct btrfs_inode *inode,
+ 	}
+ 
+ 	/* All pages will be unlocked, including @locked_page */
+-	return extent_write_locked_range(&inode->vfs_inode, start, end);
++	wbc_attach_fdatawrite_inode(&wbc, &inode->vfs_inode);
++	ret = extent_write_locked_range(&inode->vfs_inode, start, end, &wbc);
++	wbc_detach_inode(&wbc);
++	return ret;
+ }
+ 
+ static int submit_one_async_extent(struct btrfs_inode *inode,
+@@ -1521,58 +1530,36 @@ static noinline void async_cow_free(struct btrfs_work *work)
+ 		kvfree(async_cow);
+ }
+ 
+-static int cow_file_range_async(struct btrfs_inode *inode,
+-				struct writeback_control *wbc,
+-				struct page *locked_page,
+-				u64 start, u64 end, int *page_started,
+-				unsigned long *nr_written)
++static bool cow_file_range_async(struct btrfs_inode *inode,
++				 struct writeback_control *wbc,
++				 struct page *locked_page,
++				 u64 start, u64 end, int *page_started,
++				 unsigned long *nr_written)
+ {
+ 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ 	struct cgroup_subsys_state *blkcg_css = wbc_blkcg_css(wbc);
+ 	struct async_cow *ctx;
+ 	struct async_chunk *async_chunk;
+ 	unsigned long nr_pages;
+-	u64 cur_end;
+ 	u64 num_chunks = DIV_ROUND_UP(end - start, SZ_512K);
+ 	int i;
+-	bool should_compress;
+ 	unsigned nofs_flag;
+ 	const blk_opf_t write_flags = wbc_to_write_flags(wbc);
+ 
+-	unlock_extent(&inode->io_tree, start, end, NULL);
+-
+-	if (inode->flags & BTRFS_INODE_NOCOMPRESS &&
+-	    !btrfs_test_opt(fs_info, FORCE_COMPRESS)) {
+-		num_chunks = 1;
+-		should_compress = false;
+-	} else {
+-		should_compress = true;
+-	}
+-
+ 	nofs_flag = memalloc_nofs_save();
+ 	ctx = kvmalloc(struct_size(ctx, chunks, num_chunks), GFP_KERNEL);
+ 	memalloc_nofs_restore(nofs_flag);
++	if (!ctx)
++		return false;
+ 
+-	if (!ctx) {
+-		unsigned clear_bits = EXTENT_LOCKED | EXTENT_DELALLOC |
+-			EXTENT_DELALLOC_NEW | EXTENT_DEFRAG |
+-			EXTENT_DO_ACCOUNTING;
+-		unsigned long page_ops = PAGE_UNLOCK | PAGE_START_WRITEBACK |
+-					 PAGE_END_WRITEBACK | PAGE_SET_ERROR;
+-
+-		extent_clear_unlock_delalloc(inode, start, end, locked_page,
+-					     clear_bits, page_ops);
+-		return -ENOMEM;
+-	}
++	unlock_extent(&inode->io_tree, start, end, NULL);
++	set_bit(BTRFS_INODE_HAS_ASYNC_EXTENT, &inode->runtime_flags);
+ 
+ 	async_chunk = ctx->chunks;
+ 	atomic_set(&ctx->num_chunks, num_chunks);
+ 
+ 	for (i = 0; i < num_chunks; i++) {
+-		if (should_compress)
+-			cur_end = min(end, start + SZ_512K - 1);
+-		else
+-			cur_end = end;
++		u64 cur_end = min(end, start + SZ_512K - 1);
+ 
+ 		/*
+ 		 * igrab is called higher up in the call chain, take only the
+@@ -1633,13 +1620,14 @@ static int cow_file_range_async(struct btrfs_inode *inode,
+ 		start = cur_end + 1;
+ 	}
+ 	*page_started = 1;
+-	return 0;
++	return true;
+ }
+ 
+ static noinline int run_delalloc_zoned(struct btrfs_inode *inode,
+ 				       struct page *locked_page, u64 start,
+ 				       u64 end, int *page_started,
+-				       unsigned long *nr_written)
++				       unsigned long *nr_written,
++				       struct writeback_control *wbc)
+ {
+ 	u64 done_offset = end;
+ 	int ret;
+@@ -1671,8 +1659,8 @@ static noinline int run_delalloc_zoned(struct btrfs_inode *inode,
+ 			account_page_redirty(locked_page);
+ 		}
+ 		locked_page_done = true;
+-		extent_write_locked_range(&inode->vfs_inode, start, done_offset);
+-
++		extent_write_locked_range(&inode->vfs_inode, start, done_offset,
++					  wbc);
+ 		start = done_offset + 1;
+ 	}
+ 
+@@ -2214,7 +2202,7 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
+ 		u64 start, u64 end, int *page_started, unsigned long *nr_written,
+ 		struct writeback_control *wbc)
+ {
+-	int ret;
++	int ret = 0;
+ 	const bool zoned = btrfs_is_zoned(inode->root->fs_info);
+ 
+ 	/*
+@@ -2235,19 +2223,23 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
+ 		ASSERT(!zoned || btrfs_is_data_reloc_root(inode->root));
+ 		ret = run_delalloc_nocow(inode, locked_page, start, end,
+ 					 page_started, nr_written);
+-	} else if (!btrfs_inode_can_compress(inode) ||
+-		   !inode_need_compress(inode, start, end)) {
+-		if (zoned)
+-			ret = run_delalloc_zoned(inode, locked_page, start, end,
+-						 page_started, nr_written);
+-		else
+-			ret = cow_file_range(inode, locked_page, start, end,
+-					     page_started, nr_written, 1, NULL);
+-	} else {
+-		set_bit(BTRFS_INODE_HAS_ASYNC_EXTENT, &inode->runtime_flags);
+-		ret = cow_file_range_async(inode, wbc, locked_page, start, end,
+-					   page_started, nr_written);
++		goto out;
+ 	}
++
++	if (btrfs_inode_can_compress(inode) &&
++	    inode_need_compress(inode, start, end) &&
++	    cow_file_range_async(inode, wbc, locked_page, start,
++				 end, page_started, nr_written))
++		goto out;
++
++	if (zoned)
++		ret = run_delalloc_zoned(inode, locked_page, start, end,
++					 page_started, nr_written, wbc);
++	else
++		ret = cow_file_range(inode, locked_page, start, end,
++				     page_started, nr_written, 1, NULL);
++
++out:
+ 	ASSERT(ret <= 0);
+ 	if (ret)
+ 		btrfs_cleanup_ordered_extents(inode, locked_page, start,
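
After the inode.c rework, cow_file_range_async() reports via its bool return whether it took ownership of the range, and btrfs_run_delalloc_range() falls through to the zoned or plain COW path whenever it declines — including on context allocation failure, which previously errored out the whole write. A control-flow sketch (stand-in names, not the btrfs functions):

#include <stdbool.h>
#include <stdio.h>

/* Only the dispatch logic is modeled here. */
static bool try_async_compress(bool can_compress, bool ctx_alloc_ok)
{
	if (!can_compress)
		return false;
	if (!ctx_alloc_ok)
		return false;	/* -ENOMEM no longer fails the write */
	return true;		/* range handed off to async workers */
}

static int run_delalloc(bool can_compress, bool ctx_alloc_ok, bool zoned)
{
	if (try_async_compress(can_compress, ctx_alloc_ok))
		return 0;
	/* fallback: synchronous COW, zoned variant if needed */
	return zoned ? 1 /* run_delalloc_zoned */ : 2 /* cow_file_range */;
}

int main(void)
{
	printf("%d\n", run_delalloc(true, false, false)); /* 2: ENOMEM falls back */
	printf("%d\n", run_delalloc(true, true, false));  /* 0: async path taken */
	return 0;
}
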
+diff --git a/fs/btrfs/locking.c b/fs/btrfs/locking.c
+index 3a496b0d3d2b0..7979449a58d6b 100644
+--- a/fs/btrfs/locking.c
++++ b/fs/btrfs/locking.c
+@@ -57,8 +57,8 @@
+ 
+ static struct btrfs_lockdep_keyset {
+ 	u64			id;		/* root objectid */
+-	/* Longest entry: btrfs-free-space-00 */
+-	char			names[BTRFS_MAX_LEVEL][20];
++	/* Longest entry: btrfs-block-group-00 */
++	char			names[BTRFS_MAX_LEVEL][24];
+ 	struct lock_class_key	keys[BTRFS_MAX_LEVEL];
+ } btrfs_lockdep_keysets[] = {
+ 	{ .id = BTRFS_ROOT_TREE_OBJECTID,	DEFINE_NAME("root")	},
+@@ -72,6 +72,7 @@ static struct btrfs_lockdep_keyset {
+ 	{ .id = BTRFS_DATA_RELOC_TREE_OBJECTID,	DEFINE_NAME("dreloc")	},
+ 	{ .id = BTRFS_UUID_TREE_OBJECTID,	DEFINE_NAME("uuid")	},
+ 	{ .id = BTRFS_FREE_SPACE_TREE_OBJECTID,	DEFINE_NAME("free-space") },
++	{ .id = BTRFS_BLOCK_GROUP_TREE_OBJECTID, DEFINE_NAME("block-group") },
+ 	{ .id = 0,				DEFINE_NAME("tree")	},
+ };
+ 
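
The widened names array has to hold the longest generated lockdep name — "btrfs-block-group-00" is 20 characters, per the comment in the hunk — plus its NUL terminator. A quick runnable check of that sizing (NAME_LEN mirrors the new bound; the "btrfs-<stem>-NN" format follows the hunk's own comment):

#include <assert.h>
#include <stdio.h>

#define NAME_LEN 24	/* names[BTRFS_MAX_LEVEL][24] after the patch */

int main(void)
{
	char buf[NAME_LEN];
	/* longest stem now in the table, plus a two-digit level suffix */
	int n = snprintf(buf, sizeof(buf), "btrfs-%s-%02d", "block-group", 7);

	assert(n > 0 && (size_t)n < sizeof(buf));	/* no truncation */
	printf("%s (%d chars)\n", buf, n);		/* 20 chars, fits */
	return 0;
}
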
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index f41da7ac360d8..f8735b31da16f 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1301,7 +1301,9 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
+ 		goto out;
+ 	}
+ 
++	spin_lock(&fs_info->trans_lock);
+ 	list_del(&quota_root->dirty_list);
++	spin_unlock(&fs_info->trans_lock);
+ 
+ 	btrfs_tree_lock(quota_root->node);
+ 	btrfs_clear_buffer_dirty(trans, quota_root->node);
+diff --git a/fs/btrfs/tree-mod-log.c b/fs/btrfs/tree-mod-log.c
+index a555baa0143ac..07c086f9e35e9 100644
+--- a/fs/btrfs/tree-mod-log.c
++++ b/fs/btrfs/tree-mod-log.c
+@@ -248,6 +248,26 @@ int btrfs_tree_mod_log_insert_key(struct extent_buffer *eb, int slot,
+ 	return ret;
+ }
+ 
++static struct tree_mod_elem *tree_mod_log_alloc_move(struct extent_buffer *eb,
++						     int dst_slot, int src_slot,
++						     int nr_items)
++{
++	struct tree_mod_elem *tm;
++
++	tm = kzalloc(sizeof(*tm), GFP_NOFS);
++	if (!tm)
++		return ERR_PTR(-ENOMEM);
++
++	tm->logical = eb->start;
++	tm->slot = src_slot;
++	tm->move.dst_slot = dst_slot;
++	tm->move.nr_items = nr_items;
++	tm->op = BTRFS_MOD_LOG_MOVE_KEYS;
++	RB_CLEAR_NODE(&tm->node);
++
++	return tm;
++}
++
+ int btrfs_tree_mod_log_insert_move(struct extent_buffer *eb,
+ 				   int dst_slot, int src_slot,
+ 				   int nr_items)
+@@ -265,18 +285,13 @@ int btrfs_tree_mod_log_insert_move(struct extent_buffer *eb,
+ 	if (!tm_list)
+ 		return -ENOMEM;
+ 
+-	tm = kzalloc(sizeof(*tm), GFP_NOFS);
+-	if (!tm) {
+-		ret = -ENOMEM;
++	tm = tree_mod_log_alloc_move(eb, dst_slot, src_slot, nr_items);
++	if (IS_ERR(tm)) {
++		ret = PTR_ERR(tm);
++		tm = NULL;
+ 		goto free_tms;
+ 	}
+ 
+-	tm->logical = eb->start;
+-	tm->slot = src_slot;
+-	tm->move.dst_slot = dst_slot;
+-	tm->move.nr_items = nr_items;
+-	tm->op = BTRFS_MOD_LOG_MOVE_KEYS;
+-
+ 	for (i = 0; i + dst_slot < src_slot && i < nr_items; i++) {
+ 		tm_list[i] = alloc_tree_mod_elem(eb, i + dst_slot,
+ 				BTRFS_MOD_LOG_KEY_REMOVE_WHILE_MOVING);
+@@ -489,6 +504,10 @@ int btrfs_tree_mod_log_eb_copy(struct extent_buffer *dst,
+ 	struct tree_mod_elem **tm_list_add, **tm_list_rem;
+ 	int i;
+ 	bool locked = false;
++	struct tree_mod_elem *dst_move_tm = NULL;
++	struct tree_mod_elem *src_move_tm = NULL;
++	u32 dst_move_nr_items = btrfs_header_nritems(dst) - dst_offset;
++	u32 src_move_nr_items = btrfs_header_nritems(src) - (src_offset + nr_items);
+ 
+ 	if (!tree_mod_need_log(fs_info, NULL))
+ 		return 0;
+@@ -501,6 +520,26 @@ int btrfs_tree_mod_log_eb_copy(struct extent_buffer *dst,
+ 	if (!tm_list)
+ 		return -ENOMEM;
+ 
++	if (dst_move_nr_items) {
++		dst_move_tm = tree_mod_log_alloc_move(dst, dst_offset + nr_items,
++						      dst_offset, dst_move_nr_items);
++		if (IS_ERR(dst_move_tm)) {
++			ret = PTR_ERR(dst_move_tm);
++			dst_move_tm = NULL;
++			goto free_tms;
++		}
++	}
++	if (src_move_nr_items) {
++		src_move_tm = tree_mod_log_alloc_move(src, src_offset,
++						      src_offset + nr_items,
++						      src_move_nr_items);
++		if (IS_ERR(src_move_tm)) {
++			ret = PTR_ERR(src_move_tm);
++			src_move_tm = NULL;
++			goto free_tms;
++		}
++	}
++
+ 	tm_list_add = tm_list;
+ 	tm_list_rem = tm_list + nr_items;
+ 	for (i = 0; i < nr_items; i++) {
+@@ -523,6 +562,11 @@ int btrfs_tree_mod_log_eb_copy(struct extent_buffer *dst,
+ 		goto free_tms;
+ 	locked = true;
+ 
++	if (dst_move_tm) {
++		ret = tree_mod_log_insert(fs_info, dst_move_tm);
++		if (ret)
++			goto free_tms;
++	}
+ 	for (i = 0; i < nr_items; i++) {
+ 		ret = tree_mod_log_insert(fs_info, tm_list_rem[i]);
+ 		if (ret)
+@@ -531,6 +575,11 @@ int btrfs_tree_mod_log_eb_copy(struct extent_buffer *dst,
+ 		if (ret)
+ 			goto free_tms;
+ 	}
++	if (src_move_tm) {
++		ret = tree_mod_log_insert(fs_info, src_move_tm);
++		if (ret)
++			goto free_tms;
++	}
+ 
+ 	write_unlock(&fs_info->tree_mod_log_lock);
+ 	kfree(tm_list);
+@@ -538,6 +587,12 @@ int btrfs_tree_mod_log_eb_copy(struct extent_buffer *dst,
+ 	return 0;
+ 
+ free_tms:
++	if (dst_move_tm && !RB_EMPTY_NODE(&dst_move_tm->node))
++		rb_erase(&dst_move_tm->node, &fs_info->tree_mod_log);
++	kfree(dst_move_tm);
++	if (src_move_tm && !RB_EMPTY_NODE(&src_move_tm->node))
++		rb_erase(&src_move_tm->node, &fs_info->tree_mod_log);
++	kfree(src_move_tm);
+ 	for (i = 0; i < nr_items * 2; i++) {
+ 		if (tm_list[i] && !RB_EMPTY_NODE(&tm_list[i]->node))
+ 			rb_erase(&tm_list[i]->node, &fs_info->tree_mod_log);
+@@ -664,10 +719,27 @@ static void tree_mod_log_rewind(struct btrfs_fs_info *fs_info,
+ 	unsigned long o_dst;
+ 	unsigned long o_src;
+ 	unsigned long p_size = sizeof(struct btrfs_key_ptr);
++	/*
++	 * max_slot tracks the maximum valid slot of the rewind eb at every
++	 * step of the rewind. This is in contrast with 'n' which eventually
++	 * matches the number of items, but can be wrong during moves or if
++	 * removes overlap on already valid slots (which is probably separately
++	 * a bug). We do this to validate the offsets of memmoves for rewinding
++	 * moves and detect invalid memmoves.
++	 *
++	 * Since a rewind eb can start empty, max_slot is a signed integer with
++	 * a special meaning for -1, which is that no slot is valid to move out
++	 * of. Any other negative value is invalid.
++	 */
++	int max_slot;
++	int move_src_end_slot;
++	int move_dst_end_slot;
+ 
+ 	n = btrfs_header_nritems(eb);
++	max_slot = n - 1;
+ 	read_lock(&fs_info->tree_mod_log_lock);
+ 	while (tm && tm->seq >= time_seq) {
++		ASSERT(max_slot >= -1);
+ 		/*
+ 		 * All the operations are recorded with the operator used for
+ 		 * the modification. As we're going backwards, we do the
+@@ -684,6 +756,8 @@ static void tree_mod_log_rewind(struct btrfs_fs_info *fs_info,
+ 			btrfs_set_node_ptr_generation(eb, tm->slot,
+ 						      tm->generation);
+ 			n++;
++			if (tm->slot > max_slot)
++				max_slot = tm->slot;
+ 			break;
+ 		case BTRFS_MOD_LOG_KEY_REPLACE:
+ 			BUG_ON(tm->slot >= n);
+@@ -693,14 +767,37 @@ static void tree_mod_log_rewind(struct btrfs_fs_info *fs_info,
+ 						      tm->generation);
+ 			break;
+ 		case BTRFS_MOD_LOG_KEY_ADD:
++			/*
++			 * It is possible we could have already removed keys
++			 * behind the known max slot, so this will be an
++			 * overestimate. In practice, the copy operation
++			 * inserts them in increasing order, and overestimating
++			 * just means we miss some warnings, so it's OK. It
++			 * isn't worth carefully tracking the full array of
++			 * valid slots to check against when moving.
++			 */
++			if (tm->slot == max_slot)
++				max_slot--;
+ 			/* if a move operation is needed it's in the log */
+ 			n--;
+ 			break;
+ 		case BTRFS_MOD_LOG_MOVE_KEYS:
++			ASSERT(tm->move.nr_items > 0);
++			move_src_end_slot = tm->move.dst_slot + tm->move.nr_items - 1;
++			move_dst_end_slot = tm->slot + tm->move.nr_items - 1;
+ 			o_dst = btrfs_node_key_ptr_offset(eb, tm->slot);
+ 			o_src = btrfs_node_key_ptr_offset(eb, tm->move.dst_slot);
++			if (WARN_ON(move_src_end_slot > max_slot ||
++				    tm->move.nr_items <= 0)) {
++				btrfs_warn(fs_info,
++"move from invalid tree mod log slot eb %llu slot %d dst_slot %d nr_items %d seq %llu n %u max_slot %d",
++					   eb->start, tm->slot,
++					   tm->move.dst_slot, tm->move.nr_items,
++					   tm->seq, n, max_slot);
++			}
+ 			memmove_extent_buffer(eb, o_dst, o_src,
+ 					      tm->move.nr_items * p_size);
++			max_slot = move_dst_end_slot;
+ 			break;
+ 		case BTRFS_MOD_LOG_ROOT_REPLACE:
+ 			/*
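
The rewind validation above tracks max_slot, the highest slot whose contents are currently valid, and warns when a logged move would read from beyond it. A toy rewind of one MOVE_KEYS record over a plain array (illustrative, with the kernel's WARN_ON reduced to an fprintf):

#include <stdio.h>
#include <string.h>

/* Toy rewind of one MOVE_KEYS record: keys[] is the node being rewound
 * and max_slot the highest slot currently valid. A move whose source
 * range ends past max_slot would memmove() garbage, which is what the
 * new WARN_ON in tree_mod_log_rewind() flags. */
static int rewind_move(int *keys, int *max_slot,
		       int dst_slot, int src_slot, int nr_items)
{
	int src_end = src_slot + nr_items - 1;

	if (nr_items <= 0 || src_end > *max_slot) {
		fprintf(stderr, "invalid move: src end %d > max_slot %d\n",
			src_end, *max_slot);
		return -1;
	}
	memmove(&keys[dst_slot], &keys[src_slot], nr_items * sizeof(keys[0]));
	*max_slot = dst_slot + nr_items - 1;
	return 0;
}

int main(void)
{
	int keys[8] = { 10, 20, 30, 40 };
	int max_slot = 3;

	rewind_move(keys, &max_slot, 0, 1, 3);	/* ok: slots 1..3 valid */
	rewind_move(keys, &max_slot, 0, 2, 3);	/* rejected: 2..4 > max 2 */
	return 0;
}
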
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index 160b3da43aecd..502893e3da010 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -94,11 +94,8 @@ struct z_erofs_pcluster {
+ 
+ /* let's avoid the valid 32-bit kernel addresses */
+ 
+-/* the chained workgroup has't submitted io (still open) */
++/* the end of a chain of pclusters */
+ #define Z_EROFS_PCLUSTER_TAIL           ((void *)0x5F0ECAFE)
+-/* the chained workgroup has already submitted io */
+-#define Z_EROFS_PCLUSTER_TAIL_CLOSED    ((void *)0x5F0EDEAD)
+-
+ #define Z_EROFS_PCLUSTER_NIL            (NULL)
+ 
+ struct z_erofs_decompressqueue {
+@@ -499,20 +496,6 @@ out_error_pcluster_pool:
+ 
+ enum z_erofs_pclustermode {
+ 	Z_EROFS_PCLUSTER_INFLIGHT,
+-	/*
+-	 * The current pclusters was the tail of an exist chain, in addition
+-	 * that the previous processed chained pclusters are all decided to
+-	 * be hooked up to it.
+-	 * A new chain will be created for the remaining pclusters which are
+-	 * not processed yet, so different from Z_EROFS_PCLUSTER_FOLLOWED,
+-	 * the next pcluster cannot reuse the whole page safely for inplace I/O
+-	 * in the following scenario:
+-	 *  ________________________________________________________________
+-	 * |      tail (partial) page     |       head (partial) page       |
+-	 * |   (belongs to the next pcl)  |   (belongs to the current pcl)  |
+-	 * |_______PCLUSTER_FOLLOWED______|________PCLUSTER_HOOKED__________|
+-	 */
+-	Z_EROFS_PCLUSTER_HOOKED,
+ 	/*
+ 	 * a weak form of Z_EROFS_PCLUSTER_FOLLOWED, the difference is that it
+ 	 * could be dispatched into bypass queue later due to uptodated managed
+@@ -530,8 +513,8 @@ enum z_erofs_pclustermode {
+ 	 *  ________________________________________________________________
+ 	 * |  tail (partial) page |          head (partial) page           |
+ 	 * |  (of the current cl) |      (of the previous collection)      |
+-	 * | PCLUSTER_FOLLOWED or |                                        |
+-	 * |_____PCLUSTER_HOOKED__|___________PCLUSTER_FOLLOWED____________|
++	 * |                      |                                        |
++	 * |__PCLUSTER_FOLLOWED___|___________PCLUSTER_FOLLOWED____________|
+ 	 *
+ 	 * [  (*) the above page can be used as inplace I/O.               ]
+ 	 */
+@@ -544,7 +527,7 @@ struct z_erofs_decompress_frontend {
+ 	struct z_erofs_bvec_iter biter;
+ 
+ 	struct page *candidate_bvpage;
+-	struct z_erofs_pcluster *pcl, *tailpcl;
++	struct z_erofs_pcluster *pcl;
+ 	z_erofs_next_pcluster_t owned_head;
+ 	enum z_erofs_pclustermode mode;
+ 
+@@ -750,19 +733,7 @@ static void z_erofs_try_to_claim_pcluster(struct z_erofs_decompress_frontend *f)
+ 		return;
+ 	}
+ 
+-	/*
+-	 * type 2, link to the end of an existing open chain, be careful
+-	 * that its submission is controlled by the original attached chain.
+-	 */
+-	if (*owned_head != &pcl->next && pcl != f->tailpcl &&
+-	    cmpxchg(&pcl->next, Z_EROFS_PCLUSTER_TAIL,
+-		    *owned_head) == Z_EROFS_PCLUSTER_TAIL) {
+-		*owned_head = Z_EROFS_PCLUSTER_TAIL;
+-		f->mode = Z_EROFS_PCLUSTER_HOOKED;
+-		f->tailpcl = NULL;
+-		return;
+-	}
+-	/* type 3, it belongs to a chain, but it isn't the end of the chain */
++	/* type 2, it belongs to an ongoing chain */
+ 	f->mode = Z_EROFS_PCLUSTER_INFLIGHT;
+ }
+ 
+@@ -823,9 +794,6 @@ static int z_erofs_register_pcluster(struct z_erofs_decompress_frontend *fe)
+ 			goto err_out;
+ 		}
+ 	}
+-	/* used to check tail merging loop due to corrupted images */
+-	if (fe->owned_head == Z_EROFS_PCLUSTER_TAIL)
+-		fe->tailpcl = pcl;
+ 	fe->owned_head = &pcl->next;
+ 	fe->pcl = pcl;
+ 	return 0;
+@@ -846,7 +814,6 @@ static int z_erofs_collector_begin(struct z_erofs_decompress_frontend *fe)
+ 
+ 	/* must be Z_EROFS_PCLUSTER_TAIL or pointed to previous pcluster */
+ 	DBG_BUGON(fe->owned_head == Z_EROFS_PCLUSTER_NIL);
+-	DBG_BUGON(fe->owned_head == Z_EROFS_PCLUSTER_TAIL_CLOSED);
+ 
+ 	if (!(map->m_flags & EROFS_MAP_META)) {
+ 		grp = erofs_find_workgroup(fe->inode->i_sb,
+@@ -865,10 +832,6 @@ static int z_erofs_collector_begin(struct z_erofs_decompress_frontend *fe)
+ 
+ 	if (ret == -EEXIST) {
+ 		mutex_lock(&fe->pcl->lock);
+-		/* used to check tail merging loop due to corrupted images */
+-		if (fe->owned_head == Z_EROFS_PCLUSTER_TAIL)
+-			fe->tailpcl = fe->pcl;
+-
+ 		z_erofs_try_to_claim_pcluster(fe);
+ 	} else if (ret) {
+ 		return ret;
+@@ -1025,8 +988,7 @@ hitted:
+ 	 * those chains are handled asynchronously thus the page cannot be used
+ 	 * for inplace I/O or bvpage (should be processed in a strict order.)
+ 	 */
+-	tight &= (fe->mode >= Z_EROFS_PCLUSTER_HOOKED &&
+-		  fe->mode != Z_EROFS_PCLUSTER_FOLLOWED_NOINPLACE);
++	tight &= (fe->mode > Z_EROFS_PCLUSTER_FOLLOWED_NOINPLACE);
+ 
+ 	cur = end - min_t(unsigned int, offset + end - map->m_la, end);
+ 	if (!(map->m_flags & EROFS_MAP_MAPPED)) {
+@@ -1404,10 +1366,7 @@ static void z_erofs_decompress_queue(const struct z_erofs_decompressqueue *io,
+ 	};
+ 	z_erofs_next_pcluster_t owned = io->head;
+ 
+-	while (owned != Z_EROFS_PCLUSTER_TAIL_CLOSED) {
+-		/* impossible that 'owned' equals Z_EROFS_WORK_TPTR_TAIL */
+-		DBG_BUGON(owned == Z_EROFS_PCLUSTER_TAIL);
+-		/* impossible that 'owned' equals Z_EROFS_PCLUSTER_NIL */
++	while (owned != Z_EROFS_PCLUSTER_TAIL) {
+ 		DBG_BUGON(owned == Z_EROFS_PCLUSTER_NIL);
+ 
+ 		be.pcl = container_of(owned, struct z_erofs_pcluster, next);
+@@ -1424,7 +1383,7 @@ static void z_erofs_decompressqueue_work(struct work_struct *work)
+ 		container_of(work, struct z_erofs_decompressqueue, u.work);
+ 	struct page *pagepool = NULL;
+ 
+-	DBG_BUGON(bgq->head == Z_EROFS_PCLUSTER_TAIL_CLOSED);
++	DBG_BUGON(bgq->head == Z_EROFS_PCLUSTER_TAIL);
+ 	z_erofs_decompress_queue(bgq, &pagepool);
+ 	erofs_release_pages(&pagepool);
+ 	kvfree(bgq);
+@@ -1612,7 +1571,7 @@ fg_out:
+ 		q->sync = true;
+ 	}
+ 	q->sb = sb;
+-	q->head = Z_EROFS_PCLUSTER_TAIL_CLOSED;
++	q->head = Z_EROFS_PCLUSTER_TAIL;
+ 	return q;
+ }
+ 
+@@ -1630,11 +1589,7 @@ static void move_to_bypass_jobqueue(struct z_erofs_pcluster *pcl,
+ 	z_erofs_next_pcluster_t *const submit_qtail = qtail[JQ_SUBMIT];
+ 	z_erofs_next_pcluster_t *const bypass_qtail = qtail[JQ_BYPASS];
+ 
+-	DBG_BUGON(owned_head == Z_EROFS_PCLUSTER_TAIL_CLOSED);
+-	if (owned_head == Z_EROFS_PCLUSTER_TAIL)
+-		owned_head = Z_EROFS_PCLUSTER_TAIL_CLOSED;
+-
+-	WRITE_ONCE(pcl->next, Z_EROFS_PCLUSTER_TAIL_CLOSED);
++	WRITE_ONCE(pcl->next, Z_EROFS_PCLUSTER_TAIL);
+ 
+ 	WRITE_ONCE(*submit_qtail, owned_head);
+ 	WRITE_ONCE(*bypass_qtail, &pcl->next);
+@@ -1705,15 +1660,10 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
+ 		unsigned int i = 0;
+ 		bool bypass = true;
+ 
+-		/* no possible 'owned_head' equals the following */
+-		DBG_BUGON(owned_head == Z_EROFS_PCLUSTER_TAIL_CLOSED);
+ 		DBG_BUGON(owned_head == Z_EROFS_PCLUSTER_NIL);
+-
+ 		pcl = container_of(owned_head, struct z_erofs_pcluster, next);
++		owned_head = READ_ONCE(pcl->next);
+ 
+-		/* close the main owned chain at first */
+-		owned_head = cmpxchg(&pcl->next, Z_EROFS_PCLUSTER_TAIL,
+-				     Z_EROFS_PCLUSTER_TAIL_CLOSED);
+ 		if (z_erofs_is_inline_pcluster(pcl)) {
+ 			move_to_bypass_jobqueue(pcl, qtail, owned_head);
+ 			continue;
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index d37c5c89c7287..920fb4dbc731c 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -129,7 +129,7 @@ static int unpack_compacted_index(struct z_erofs_maprecorder *m,
+ 	u8 *in, type;
+ 	bool big_pcluster;
+ 
+-	if (1 << amortizedshift == 4)
++	if (1 << amortizedshift == 4 && lclusterbits <= 14)
+ 		vcnt = 2;
+ 	else if (1 << amortizedshift == 2 && lclusterbits == 12)
+ 		vcnt = 16;
+@@ -231,7 +231,6 @@ static int compacted_load_cluster_from_disk(struct z_erofs_maprecorder *m,
+ {
+ 	struct inode *const inode = m->inode;
+ 	struct erofs_inode *const vi = EROFS_I(inode);
+-	const unsigned int lclusterbits = vi->z_logical_clusterbits;
+ 	const erofs_off_t ebase = sizeof(struct z_erofs_map_header) +
+ 		ALIGN(erofs_iloc(inode) + vi->inode_isize + vi->xattr_isize, 8);
+ 	unsigned int totalidx = erofs_iblks(inode);
+@@ -239,9 +238,6 @@ static int compacted_load_cluster_from_disk(struct z_erofs_maprecorder *m,
+ 	unsigned int amortizedshift;
+ 	erofs_off_t pos;
+ 
+-	if (lclusterbits != 12)
+-		return -EOPNOTSUPP;
+-
+ 	if (lcn >= totalidx)
+ 		return -EINVAL;
+ 
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 45b579805c954..0caf6c730ce34 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -3834,19 +3834,10 @@ static int ext4_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 			return retval;
+ 	}
+ 
+-	/*
+-	 * We need to protect against old.inode directory getting converted
+-	 * from inline directory format into a normal one.
+-	 */
+-	if (S_ISDIR(old.inode->i_mode))
+-		inode_lock_nested(old.inode, I_MUTEX_NONDIR2);
+-
+ 	old.bh = ext4_find_entry(old.dir, &old.dentry->d_name, &old.de,
+ 				 &old.inlined);
+-	if (IS_ERR(old.bh)) {
+-		retval = PTR_ERR(old.bh);
+-		goto unlock_moved_dir;
+-	}
++	if (IS_ERR(old.bh))
++		return PTR_ERR(old.bh);
+ 
+ 	/*
+ 	 *  Check for inode number is _not_ due to possible IO errors.
+@@ -4043,10 +4034,6 @@ release_bh:
+ 	brelse(old.bh);
+ 	brelse(new.bh);
+ 
+-unlock_moved_dir:
+-	if (S_ISDIR(old.inode->i_mode))
+-		inode_unlock(old.inode);
+-
+ 	return retval;
+ }
+ 
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 64b3860f50ee5..8fd3b7f9fb88e 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -30,12 +30,9 @@ void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io,
+ 						unsigned char reason)
+ {
+ 	f2fs_build_fault_attr(sbi, 0, 0);
+-	set_ckpt_flags(sbi, CP_ERROR_FLAG);
+-	if (!end_io) {
++	if (!end_io)
+ 		f2fs_flush_merged_writes(sbi);
+-
+-		f2fs_handle_stop(sbi, reason);
+-	}
++	f2fs_handle_critical_error(sbi, reason, end_io);
+ }
+ 
+ /*
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index 11653fa792897..1132d3cd8f337 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -743,8 +743,8 @@ void f2fs_decompress_cluster(struct decompress_io_ctx *dic, bool in_task)
+ 		ret = -EFSCORRUPTED;
+ 
+ 		/* Avoid f2fs_commit_super in irq context */
+-		if (in_task)
+-			f2fs_save_errors(sbi, ERROR_FAIL_DECOMPRESSION);
++		if (!in_task)
++			f2fs_handle_error_async(sbi, ERROR_FAIL_DECOMPRESSION);
+ 		else
+ 			f2fs_handle_error(sbi, ERROR_FAIL_DECOMPRESSION);
+ 		goto out_release;
+@@ -1215,6 +1215,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
+ 	unsigned int last_index = cc->cluster_size - 1;
+ 	loff_t psize;
+ 	int i, err;
++	bool quota_inode = IS_NOQUOTA(inode);
+ 
+ 	/* we should bypass data pages to proceed the kworker jobs */
+ 	if (unlikely(f2fs_cp_error(sbi))) {
+@@ -1222,7 +1223,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
+ 		goto out_free;
+ 	}
+ 
+-	if (IS_NOQUOTA(inode)) {
++	if (quota_inode) {
+ 		/*
+ 		 * We need to wait for node_write to avoid block allocation during
+ 		 * checkpoint. This can only happen to quota writes which can cause
+@@ -1344,7 +1345,7 @@ unlock_continue:
+ 		set_inode_flag(inode, FI_FIRST_BLOCK_WRITTEN);
+ 
+ 	f2fs_put_dnode(&dn);
+-	if (IS_NOQUOTA(inode))
++	if (quota_inode)
+ 		f2fs_up_read(&sbi->node_write);
+ 	else
+ 		f2fs_unlock_op(sbi);
+@@ -1370,7 +1371,7 @@ out_put_cic:
+ out_put_dnode:
+ 	f2fs_put_dnode(&dn);
+ out_unlock_op:
+-	if (IS_NOQUOTA(inode))
++	if (quota_inode)
+ 		f2fs_up_read(&sbi->node_write);
+ 	else
+ 		f2fs_unlock_op(sbi);
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 7165b1202f539..15b6dc2e06410 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2775,6 +2775,7 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
+ 	loff_t psize = (loff_t)(page->index + 1) << PAGE_SHIFT;
+ 	unsigned offset = 0;
+ 	bool need_balance_fs = false;
++	bool quota_inode = IS_NOQUOTA(inode);
+ 	int err = 0;
+ 	struct f2fs_io_info fio = {
+ 		.sbi = sbi,
+@@ -2807,6 +2808,10 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
+ 		if (S_ISDIR(inode->i_mode) &&
+ 				!is_sbi_flag_set(sbi, SBI_IS_CLOSE))
+ 			goto redirty_out;
++
++		/* keep data pages in remount-ro mode */
++		if (F2FS_OPTION(sbi).errors == MOUNT_ERRORS_READONLY)
++			goto redirty_out;
+ 		goto out;
+ 	}
+ 
+@@ -2832,19 +2837,19 @@ write:
+ 		goto out;
+ 
+ 	/* Dentry/quota blocks are controlled by checkpoint */
+-	if (S_ISDIR(inode->i_mode) || IS_NOQUOTA(inode)) {
++	if (S_ISDIR(inode->i_mode) || quota_inode) {
+ 		/*
+ 		 * We need to wait for node_write to avoid block allocation during
+ 		 * checkpoint. This can only happen to quota writes which can cause
+ 		 * the below discard race condition.
+ 		 */
+-		if (IS_NOQUOTA(inode))
++		if (quota_inode)
+ 			f2fs_down_read(&sbi->node_write);
+ 
+ 		fio.need_lock = LOCK_DONE;
+ 		err = f2fs_do_write_data_page(&fio);
+ 
+-		if (IS_NOQUOTA(inode))
++		if (quota_inode)
+ 			f2fs_up_read(&sbi->node_write);
+ 
+ 		goto done;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index d211ee89c1586..d867056a01f65 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -162,6 +162,7 @@ struct f2fs_mount_info {
+ 	int fs_mode;			/* fs mode: LFS or ADAPTIVE */
+ 	int bggc_mode;			/* bggc mode: off, on or sync */
+ 	int memory_mode;		/* memory mode */
++	int errors;			/* errors parameter */
+ 	int discard_unit;		/*
+ 					 * discard command's offset/size should
+ 					 * be aligned to this unit: block,
+@@ -1370,6 +1371,12 @@ enum {
+ 	MEMORY_MODE_LOW,	/* memory mode for low memory devices */
+ };
+ 
++enum errors_option {
++	MOUNT_ERRORS_READONLY,	/* remount fs ro on errors */
++	MOUNT_ERRORS_CONTINUE,	/* continue on errors */
++	MOUNT_ERRORS_PANIC,	/* panic on errors */
++};
++
+ static inline int f2fs_test_bit(unsigned int nr, char *addr);
+ static inline void f2fs_set_bit(unsigned int nr, char *addr);
+ static inline void f2fs_clear_bit(unsigned int nr, char *addr);
+@@ -1721,8 +1728,14 @@ struct f2fs_sb_info {
+ 
+ 	struct workqueue_struct *post_read_wq;	/* post read workqueue */
+ 
+-	unsigned char errors[MAX_F2FS_ERRORS];	/* error flags */
+-	spinlock_t error_lock;			/* protect errors array */
++	/*
++	 * If we are in irq context, let's update error information into
++	 * on-disk superblock in the work.
++	 */
++	struct work_struct s_error_work;
++	unsigned char errors[MAX_F2FS_ERRORS];		/* error flags */
++	unsigned char stop_reason[MAX_STOP_REASON];	/* stop reason */
++	spinlock_t error_lock;			/* protect errors/stop_reason array */
+ 	bool error_dirty;			/* errors of sb is dirty */
+ 
+ 	struct kmem_cache *inline_xattr_slab;	/* inline xattr entry */
+@@ -3541,9 +3554,11 @@ int f2fs_enable_quota_files(struct f2fs_sb_info *sbi, bool rdonly);
+ int f2fs_quota_sync(struct super_block *sb, int type);
+ loff_t max_file_blocks(struct inode *inode);
+ void f2fs_quota_off_umount(struct super_block *sb);
+-void f2fs_handle_stop(struct f2fs_sb_info *sbi, unsigned char reason);
+ void f2fs_save_errors(struct f2fs_sb_info *sbi, unsigned char flag);
++void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
++							bool irq_context);
+ void f2fs_handle_error(struct f2fs_sb_info *sbi, unsigned char error);
++void f2fs_handle_error_async(struct f2fs_sb_info *sbi, unsigned char error);
+ int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover);
+ int f2fs_sync_fs(struct super_block *sb, int sync);
+ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi);
+@@ -3815,7 +3830,7 @@ void f2fs_stop_gc_thread(struct f2fs_sb_info *sbi);
+ block_t f2fs_start_bidx_of_node(unsigned int node_ofs, struct inode *inode);
+ int f2fs_gc(struct f2fs_sb_info *sbi, struct f2fs_gc_control *gc_control);
+ void f2fs_build_gc_manager(struct f2fs_sb_info *sbi);
+-int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count);
++int f2fs_resize_fs(struct file *filp, __u64 block_count);
+ int __init f2fs_create_garbage_collection_cache(void);
+ void f2fs_destroy_garbage_collection_cache(void);
+ /* victim selection function for cleaning and SSR */
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 5ac53d2627d20..015ed274dc312 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -2225,7 +2225,6 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+ 				ret = 0;
+ 				f2fs_stop_checkpoint(sbi, false,
+ 						STOP_CP_REASON_SHUTDOWN);
+-				set_sbi_flag(sbi, SBI_IS_SHUTDOWN);
+ 				trace_f2fs_shutdown(sbi, in, ret);
+ 			}
+ 			return ret;
+@@ -2238,7 +2237,6 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+ 		if (ret)
+ 			goto out;
+ 		f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_SHUTDOWN);
+-		set_sbi_flag(sbi, SBI_IS_SHUTDOWN);
+ 		thaw_bdev(sb->s_bdev);
+ 		break;
+ 	case F2FS_GOING_DOWN_METASYNC:
+@@ -2247,16 +2245,13 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
+ 		if (ret)
+ 			goto out;
+ 		f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_SHUTDOWN);
+-		set_sbi_flag(sbi, SBI_IS_SHUTDOWN);
+ 		break;
+ 	case F2FS_GOING_DOWN_NOSYNC:
+ 		f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_SHUTDOWN);
+-		set_sbi_flag(sbi, SBI_IS_SHUTDOWN);
+ 		break;
+ 	case F2FS_GOING_DOWN_METAFLUSH:
+ 		f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_META_IO);
+ 		f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_SHUTDOWN);
+-		set_sbi_flag(sbi, SBI_IS_SHUTDOWN);
+ 		break;
+ 	case F2FS_GOING_DOWN_NEED_FSCK:
+ 		set_sbi_flag(sbi, SBI_NEED_FSCK);
+@@ -2593,6 +2588,11 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
+ 
+ 	inode_lock(inode);
+ 
++	if (is_inode_flag_set(inode, FI_COMPRESS_RELEASED)) {
++		err = -EINVAL;
++		goto unlock_out;
++	}
++
+ 	/* if in-place-update policy is enabled, don't waste time here */
+ 	set_inode_flag(inode, FI_OPU_WRITE);
+ 	if (f2fs_should_update_inplace(inode, NULL)) {
+@@ -2717,6 +2717,7 @@ clear_out:
+ 	clear_inode_flag(inode, FI_SKIP_WRITES);
+ out:
+ 	clear_inode_flag(inode, FI_OPU_WRITE);
++unlock_out:
+ 	inode_unlock(inode);
+ 	if (!err)
+ 		range->len = (u64)total << PAGE_SHIFT;
+@@ -3278,7 +3279,7 @@ static int f2fs_ioc_resize_fs(struct file *filp, unsigned long arg)
+ 			   sizeof(block_count)))
+ 		return -EFAULT;
+ 
+-	return f2fs_resize_fs(sbi, block_count);
++	return f2fs_resize_fs(filp, block_count);
+ }
+ 
+ static int f2fs_ioc_enable_verity(struct file *filp, unsigned long arg)
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 61c5f9d26018e..719b1ba32a78b 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -59,7 +59,7 @@ static int gc_thread_func(void *data)
+ 		if (gc_th->gc_wake)
+ 			gc_th->gc_wake = false;
+ 
+-		if (try_to_freeze()) {
++		if (try_to_freeze() || f2fs_readonly(sbi->sb)) {
+ 			stat_other_skip_bggc_count(sbi);
+ 			continue;
+ 		}
+@@ -2099,8 +2099,9 @@ static void update_fs_metadata(struct f2fs_sb_info *sbi, int secs)
+ 	}
+ }
+ 
+-int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count)
++int f2fs_resize_fs(struct file *filp, __u64 block_count)
+ {
++	struct f2fs_sb_info *sbi = F2FS_I_SB(file_inode(filp));
+ 	__u64 old_block_count, shrunk_blocks;
+ 	struct cp_control cpc = { CP_RESIZE, 0, 0, 0 };
+ 	unsigned int secs;
+@@ -2138,12 +2139,18 @@ int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count)
+ 		return -EINVAL;
+ 	}
+ 
++	err = mnt_want_write_file(filp);
++	if (err)
++		return err;
++
+ 	shrunk_blocks = old_block_count - block_count;
+ 	secs = div_u64(shrunk_blocks, BLKS_PER_SEC(sbi));
+ 
+ 	/* stop other GC */
+-	if (!f2fs_down_write_trylock(&sbi->gc_lock))
+-		return -EAGAIN;
++	if (!f2fs_down_write_trylock(&sbi->gc_lock)) {
++		err = -EAGAIN;
++		goto out_drop_write;
++	}
+ 
+ 	/* stop CP to protect MAIN_SEC in free_segment_range */
+ 	f2fs_lock_op(sbi);
+@@ -2163,10 +2170,20 @@ int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count)
+ out_unlock:
+ 	f2fs_unlock_op(sbi);
+ 	f2fs_up_write(&sbi->gc_lock);
++out_drop_write:
++	mnt_drop_write_file(filp);
+ 	if (err)
+ 		return err;
+ 
+-	freeze_super(sbi->sb);
++	err = freeze_super(sbi->sb);
++	if (err)
++		return err;
++
++	if (f2fs_readonly(sbi->sb)) {
++		thaw_super(sbi->sb);
++		return -EROFS;
++	}
++
+ 	f2fs_down_write(&sbi->gc_lock);
+ 	f2fs_down_write(&sbi->cp_global_sem);
+ 
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 77a71276ecb15..ad597b417fea5 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -995,20 +995,12 @@ static int f2fs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 			goto out;
+ 	}
+ 
+-	/*
+-	 * Copied from ext4_rename: we need to protect against old.inode
+-	 * directory getting converted from inline directory format into
+-	 * a normal one.
+-	 */
+-	if (S_ISDIR(old_inode->i_mode))
+-		inode_lock_nested(old_inode, I_MUTEX_NONDIR2);
+-
+ 	err = -ENOENT;
+ 	old_entry = f2fs_find_entry(old_dir, &old_dentry->d_name, &old_page);
+ 	if (!old_entry) {
+ 		if (IS_ERR(old_page))
+ 			err = PTR_ERR(old_page);
+-		goto out_unlock_old;
++		goto out;
+ 	}
+ 
+ 	if (S_ISDIR(old_inode->i_mode)) {
+@@ -1116,9 +1108,6 @@ static int f2fs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 
+ 	f2fs_unlock_op(sbi);
+ 
+-	if (S_ISDIR(old_inode->i_mode))
+-		inode_unlock(old_inode);
+-
+ 	if (IS_DIRSYNC(old_dir) || IS_DIRSYNC(new_dir))
+ 		f2fs_sync_fs(sbi->sb, 1);
+ 
+@@ -1133,9 +1122,6 @@ out_dir:
+ 		f2fs_put_page(old_dir_page, 0);
+ out_old:
+ 	f2fs_put_page(old_page, 0);
+-out_unlock_old:
+-	if (S_ISDIR(old_inode->i_mode))
+-		inode_unlock(old_inode);
+ out:
+ 	iput(whiteout);
+ 	return err;
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index bd1dad5237967..6bdb1bed29ec9 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -943,8 +943,10 @@ static int truncate_dnode(struct dnode_of_data *dn)
+ 	dn->ofs_in_node = 0;
+ 	f2fs_truncate_data_blocks(dn);
+ 	err = truncate_node(dn);
+-	if (err)
++	if (err) {
++		f2fs_put_page(page, 1);
+ 		return err;
++	}
+ 
+ 	return 1;
+ }
+@@ -1596,6 +1598,9 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
+ 	trace_f2fs_writepage(page, NODE);
+ 
+ 	if (unlikely(f2fs_cp_error(sbi))) {
++		/* keep node pages in remount-ro mode */
++		if (F2FS_OPTION(sbi).errors == MOUNT_ERRORS_READONLY)
++			goto redirty_out;
+ 		ClearPageUptodate(page);
+ 		dec_page_count(sbi, F2FS_DIRTY_NODES);
+ 		unlock_page(page);
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 9f15b03037dba..17082dc3c1a34 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -164,6 +164,7 @@ enum {
+ 	Opt_discard_unit,
+ 	Opt_memory_mode,
+ 	Opt_age_extent_cache,
++	Opt_errors,
+ 	Opt_err,
+ };
+ 
+@@ -243,6 +244,7 @@ static match_table_t f2fs_tokens = {
+ 	{Opt_discard_unit, "discard_unit=%s"},
+ 	{Opt_memory_mode, "memory=%s"},
+ 	{Opt_age_extent_cache, "age_extent_cache"},
++	{Opt_errors, "errors=%s"},
+ 	{Opt_err, NULL},
+ };
+ 
+@@ -1268,6 +1270,25 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
+ 		case Opt_age_extent_cache:
+ 			set_opt(sbi, AGE_EXTENT_CACHE);
+ 			break;
++		case Opt_errors:
++			name = match_strdup(&args[0]);
++			if (!name)
++				return -ENOMEM;
++			if (!strcmp(name, "remount-ro")) {
++				F2FS_OPTION(sbi).errors =
++						MOUNT_ERRORS_READONLY;
++			} else if (!strcmp(name, "continue")) {
++				F2FS_OPTION(sbi).errors =
++						MOUNT_ERRORS_CONTINUE;
++			} else if (!strcmp(name, "panic")) {
++				F2FS_OPTION(sbi).errors =
++						MOUNT_ERRORS_PANIC;
++			} else {
++				kfree(name);
++				return -EINVAL;
++			}
++			kfree(name);
++			break;
+ 		default:
+ 			f2fs_err(sbi, "Unrecognized mount option \"%s\" or missing value",
+ 				 p);
+@@ -1622,6 +1643,9 @@ static void f2fs_put_super(struct super_block *sb)
+ 	f2fs_destroy_node_manager(sbi);
+ 	f2fs_destroy_segment_manager(sbi);
+ 
++	/* flush s_error_work before sbi destroy */
++	flush_work(&sbi->s_error_work);
++
+ 	f2fs_destroy_post_read_wq(sbi);
+ 
+ 	kvfree(sbi->ckpt);
+@@ -2052,6 +2076,13 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
+ 	else if (F2FS_OPTION(sbi).memory_mode == MEMORY_MODE_LOW)
+ 		seq_printf(seq, ",memory=%s", "low");
+ 
++	if (F2FS_OPTION(sbi).errors == MOUNT_ERRORS_READONLY)
++		seq_printf(seq, ",errors=%s", "remount-ro");
++	else if (F2FS_OPTION(sbi).errors == MOUNT_ERRORS_CONTINUE)
++		seq_printf(seq, ",errors=%s", "continue");
++	else if (F2FS_OPTION(sbi).errors == MOUNT_ERRORS_PANIC)
++		seq_printf(seq, ",errors=%s", "panic");
++
+ 	return 0;
+ }
+ 
+@@ -2080,6 +2111,7 @@ static void default_options(struct f2fs_sb_info *sbi)
+ 	}
+ 	F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_ON;
+ 	F2FS_OPTION(sbi).memory_mode = MEMORY_MODE_NORMAL;
++	F2FS_OPTION(sbi).errors = MOUNT_ERRORS_CONTINUE;
+ 
+ 	sbi->sb->s_flags &= ~SB_INLINECRYPT;
+ 
+@@ -2281,6 +2313,9 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
+ 	if (err)
+ 		goto restore_opts;
+ 
++	/* flush outstanding errors before changing fs state */
++	flush_work(&sbi->s_error_work);
++
+ 	/*
+ 	 * Previous and new state of filesystem is RO,
+ 	 * so skip checking GC and FLUSH_MERGE conditions.
+@@ -3926,55 +3961,73 @@ int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover)
+ 	return err;
+ }
+ 
+-void f2fs_handle_stop(struct f2fs_sb_info *sbi, unsigned char reason)
++static void save_stop_reason(struct f2fs_sb_info *sbi, unsigned char reason)
++{
++	unsigned long flags;
++
++	spin_lock_irqsave(&sbi->error_lock, flags);
++	if (sbi->stop_reason[reason] < GENMASK(BITS_PER_BYTE - 1, 0))
++		sbi->stop_reason[reason]++;
++	spin_unlock_irqrestore(&sbi->error_lock, flags);
++}
++
++static void f2fs_record_stop_reason(struct f2fs_sb_info *sbi)
+ {
+ 	struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi);
++	unsigned long flags;
+ 	int err;
+ 
+ 	f2fs_down_write(&sbi->sb_lock);
+ 
+-	if (raw_super->s_stop_reason[reason] < GENMASK(BITS_PER_BYTE - 1, 0))
+-		raw_super->s_stop_reason[reason]++;
++	spin_lock_irqsave(&sbi->error_lock, flags);
++	if (sbi->error_dirty) {
++		memcpy(F2FS_RAW_SUPER(sbi)->s_errors, sbi->errors,
++							MAX_F2FS_ERRORS);
++		sbi->error_dirty = false;
++	}
++	memcpy(raw_super->s_stop_reason, sbi->stop_reason, MAX_STOP_REASON);
++	spin_unlock_irqrestore(&sbi->error_lock, flags);
+ 
+ 	err = f2fs_commit_super(sbi, false);
+-	if (err)
+-		f2fs_err(sbi, "f2fs_commit_super fails to record reason:%u err:%d",
+-								reason, err);
++
+ 	f2fs_up_write(&sbi->sb_lock);
++	if (err)
++		f2fs_err(sbi, "f2fs_commit_super fails to record err:%d", err);
+ }
+ 
+ void f2fs_save_errors(struct f2fs_sb_info *sbi, unsigned char flag)
+ {
+-	spin_lock(&sbi->error_lock);
++	unsigned long flags;
++
++	spin_lock_irqsave(&sbi->error_lock, flags);
+ 	if (!test_bit(flag, (unsigned long *)sbi->errors)) {
+ 		set_bit(flag, (unsigned long *)sbi->errors);
+ 		sbi->error_dirty = true;
+ 	}
+-	spin_unlock(&sbi->error_lock);
++	spin_unlock_irqrestore(&sbi->error_lock, flags);
+ }
+ 
+ static bool f2fs_update_errors(struct f2fs_sb_info *sbi)
+ {
++	unsigned long flags;
+ 	bool need_update = false;
+ 
+-	spin_lock(&sbi->error_lock);
++	spin_lock_irqsave(&sbi->error_lock, flags);
+ 	if (sbi->error_dirty) {
+ 		memcpy(F2FS_RAW_SUPER(sbi)->s_errors, sbi->errors,
+ 							MAX_F2FS_ERRORS);
+ 		sbi->error_dirty = false;
+ 		need_update = true;
+ 	}
+-	spin_unlock(&sbi->error_lock);
++	spin_unlock_irqrestore(&sbi->error_lock, flags);
+ 
+ 	return need_update;
+ }
+ 
+-void f2fs_handle_error(struct f2fs_sb_info *sbi, unsigned char error)
++static void f2fs_record_errors(struct f2fs_sb_info *sbi, unsigned char error)
+ {
+ 	int err;
+ 
+-	f2fs_save_errors(sbi, error);
+-
+ 	f2fs_down_write(&sbi->sb_lock);
+ 
+ 	if (!f2fs_update_errors(sbi))
+@@ -3988,6 +4041,83 @@ out_unlock:
+ 	f2fs_up_write(&sbi->sb_lock);
+ }
+ 
++void f2fs_handle_error(struct f2fs_sb_info *sbi, unsigned char error)
++{
++	f2fs_save_errors(sbi, error);
++	f2fs_record_errors(sbi, error);
++}
++
++void f2fs_handle_error_async(struct f2fs_sb_info *sbi, unsigned char error)
++{
++	f2fs_save_errors(sbi, error);
++
++	if (!sbi->error_dirty)
++		return;
++	if (!test_bit(error, (unsigned long *)sbi->errors))
++		return;
++	schedule_work(&sbi->s_error_work);
++}
++
++static bool system_going_down(void)
++{
++	return system_state == SYSTEM_HALT || system_state == SYSTEM_POWER_OFF
++		|| system_state == SYSTEM_RESTART;
++}
++
++void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
++							bool irq_context)
++{
++	struct super_block *sb = sbi->sb;
++	bool shutdown = reason == STOP_CP_REASON_SHUTDOWN;
++	bool continue_fs = !shutdown &&
++			F2FS_OPTION(sbi).errors == MOUNT_ERRORS_CONTINUE;
++
++	set_ckpt_flags(sbi, CP_ERROR_FLAG);
++
++	if (!f2fs_hw_is_readonly(sbi)) {
++		save_stop_reason(sbi, reason);
++
++		if (irq_context && !shutdown)
++			schedule_work(&sbi->s_error_work);
++		else
++			f2fs_record_stop_reason(sbi);
++	}
++
++	/*
++	 * We force ERRORS_RO behavior when system is rebooting. Otherwise we
++	 * could panic during 'reboot -f' as the underlying device got already
++	 * disabled.
++	 */
++	if (F2FS_OPTION(sbi).errors == MOUNT_ERRORS_PANIC &&
++				!shutdown && !system_going_down() &&
++				!is_sbi_flag_set(sbi, SBI_IS_SHUTDOWN))
++		panic("F2FS-fs (device %s): panic forced after error\n",
++							sb->s_id);
++
++	if (shutdown)
++		set_sbi_flag(sbi, SBI_IS_SHUTDOWN);
++
++	/* continue filesystem operations if errors=continue */
++	if (continue_fs || f2fs_readonly(sb))
++		return;
++
++	f2fs_warn(sbi, "Remounting filesystem read-only");
++	/*
++	 * Make sure updated value of ->s_mount_flags will be visible before
++	 * ->s_flags update
++	 */
++	smp_wmb();
++	sb->s_flags |= SB_RDONLY;
++}
++
++static void f2fs_record_error_work(struct work_struct *work)
++{
++	struct f2fs_sb_info *sbi = container_of(work,
++					struct f2fs_sb_info, s_error_work);
++
++	f2fs_record_stop_reason(sbi);
++}
++
+ static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
+ {
+ 	struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi);
+@@ -4218,7 +4348,9 @@ try_onemore:
+ 	sb->s_fs_info = sbi;
+ 	sbi->raw_super = raw_super;
+ 
++	INIT_WORK(&sbi->s_error_work, f2fs_record_error_work);
+ 	memcpy(sbi->errors, raw_super->s_errors, MAX_F2FS_ERRORS);
++	memcpy(sbi->stop_reason, raw_super->s_stop_reason, MAX_STOP_REASON);
+ 
+ 	/* precompute checksum seed for metadata */
+ 	if (f2fs_sb_has_inode_chksum(sbi))
+@@ -4615,6 +4747,8 @@ free_sm:
+ 	f2fs_destroy_segment_manager(sbi);
+ stop_ckpt_thread:
+ 	f2fs_stop_ckpt_thread(sbi);
++	/* flush s_error_work before sbi destroy */
++	flush_work(&sbi->s_error_work);
+ 	f2fs_destroy_post_read_wq(sbi);
+ free_devices:
+ 	destroy_device_list(sbi);
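The new errors= mount option maps three policy strings onto the MOUNT_ERRORS_* constants and rejects anything else. A sketch of that string-to-enum dispatch, reduced to plain C (the enum names mirror the kernel's but the parser itself is illustrative only):

#include <stdio.h>
#include <string.h>

enum mount_errors { ERRORS_READONLY, ERRORS_CONTINUE, ERRORS_PANIC };

static int parse_errors_opt(const char *name, enum mount_errors *out)
{
	if (!strcmp(name, "remount-ro"))
		*out = ERRORS_READONLY;
	else if (!strcmp(name, "continue"))
		*out = ERRORS_CONTINUE;	/* also the default policy */
	else if (!strcmp(name, "panic"))
		*out = ERRORS_PANIC;
	else
		return -22;		/* -EINVAL */
	return 0;
}

int main(void)
{
	enum mount_errors policy;

	if (parse_errors_opt("remount-ro", &policy) == 0)
		printf("policy=%d\n", policy);
	return 0;
}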
+diff --git a/fs/fs_context.c b/fs/fs_context.c
+index 24ce12f0db32e..851214d1d013d 100644
+--- a/fs/fs_context.c
++++ b/fs/fs_context.c
+@@ -561,7 +561,8 @@ static int legacy_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 			return -ENOMEM;
+ 	}
+ 
+-	ctx->legacy_data[size++] = ',';
++	if (size)
++		ctx->legacy_data[size++] = ',';
+ 	len = strlen(param->key);
+ 	memcpy(ctx->legacy_data + size, param->key, len);
+ 	size += len;
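The fs_context fix above boils down to a classic join bug: a separator must only be written between items, never before the first one, otherwise the legacy options string starts with a stray comma. A small sketch of the corrected append logic:

#include <stdio.h>
#include <string.h>

static size_t append_opt(char *buf, size_t size, const char *key)
{
	if (size)			/* separator only between options */
		buf[size++] = ',';
	memcpy(buf + size, key, strlen(key));
	size += strlen(key);
	buf[size] = '\0';
	return size;
}

int main(void)
{
	char buf[64];
	size_t size = 0;

	size = append_opt(buf, size, "noatime");
	size = append_opt(buf, size, "ro");
	printf("%s\n", buf);		/* "noatime,ro", no leading comma */
	return 0;
}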
+diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
+index cb62c8f07d1e7..21335d1b67bf2 100644
+--- a/fs/gfs2/file.c
++++ b/fs/gfs2/file.c
+@@ -1030,8 +1030,8 @@ static ssize_t gfs2_file_buffered_write(struct kiocb *iocb,
+ 	}
+ 
+ 	gfs2_holder_init(ip->i_gl, LM_ST_EXCLUSIVE, 0, gh);
+-retry:
+ 	if (should_fault_in_pages(from, iocb, &prev_count, &window_size)) {
++retry:
+ 		window_size -= fault_in_iov_iter_readable(from, window_size);
+ 		if (!window_size) {
+ 			ret = -EFAULT;
+diff --git a/fs/inode.c b/fs/inode.c
+index 577799b7855f6..b9d4980322700 100644
+--- a/fs/inode.c
++++ b/fs/inode.c
+@@ -1103,6 +1103,48 @@ void discard_new_inode(struct inode *inode)
+ }
+ EXPORT_SYMBOL(discard_new_inode);
+ 
++/**
++ * lock_two_inodes - lock two inodes (may be regular files but also dirs)
++ *
++ * Lock any non-NULL argument. The caller must make sure that if two
++ * directories are passed in, one is not an ancestor of the other.  Zero, one or two
++ * objects may be locked by this function.
++ *
++ * @inode1: first inode to lock
++ * @inode2: second inode to lock
++ * @subclass1: inode lock subclass for the first lock obtained
++ * @subclass2: inode lock subclass for the second lock obtained
++ */
++void lock_two_inodes(struct inode *inode1, struct inode *inode2,
++		     unsigned subclass1, unsigned subclass2)
++{
++	if (!inode1 || !inode2) {
++		/*
++		 * Make sure @subclass1 will be used for the acquired lock.
++		 * This is not strictly necessary (no current caller cares) but
++		 * let's keep things consistent.
++		 */
++		if (!inode1)
++			swap(inode1, inode2);
++		goto lock;
++	}
++
++	/*
++	 * If one object is directory and the other is not, we must make sure
++	 * to lock directory first as the other object may be its child.
++	 */
++	if (S_ISDIR(inode2->i_mode) == S_ISDIR(inode1->i_mode)) {
++		if (inode1 > inode2)
++			swap(inode1, inode2);
++	} else if (!S_ISDIR(inode1->i_mode))
++		swap(inode1, inode2);
++lock:
++	if (inode1)
++		inode_lock_nested(inode1, subclass1);
++	if (inode2 && inode2 != inode1)
++		inode_lock_nested(inode2, subclass2);
++}
++
+ /**
+  * lock_two_nondirectories - take two i_mutexes on non-directory objects
+  *
+diff --git a/fs/internal.h b/fs/internal.h
+index bd3b2810a36b6..377030a50aca6 100644
+--- a/fs/internal.h
++++ b/fs/internal.h
+@@ -152,6 +152,8 @@ extern long prune_icache_sb(struct super_block *sb, struct shrink_control *sc);
+ int dentry_needs_remove_privs(struct mnt_idmap *, struct dentry *dentry);
+ bool in_group_or_capable(struct mnt_idmap *idmap,
+ 			 const struct inode *inode, vfsgid_t vfsgid);
++void lock_two_inodes(struct inode *inode1, struct inode *inode2,
++		     unsigned subclass1, unsigned subclass2);
+ 
+ /*
+  * fs-writeback.c
+diff --git a/fs/jffs2/build.c b/fs/jffs2/build.c
+index 837cd55fd4c5e..6ae9d6fefb861 100644
+--- a/fs/jffs2/build.c
++++ b/fs/jffs2/build.c
+@@ -211,7 +211,10 @@ static int jffs2_build_filesystem(struct jffs2_sb_info *c)
+ 		ic->scan_dents = NULL;
+ 		cond_resched();
+ 	}
+-	jffs2_build_xattr_subsystem(c);
++	ret = jffs2_build_xattr_subsystem(c);
++	if (ret)
++		goto exit;
++
+ 	c->flags &= ~JFFS2_SB_FLAG_BUILDING;
+ 
+ 	dbg_fsbuild("FS build complete\n");
+diff --git a/fs/jffs2/xattr.c b/fs/jffs2/xattr.c
+index aa4048a27f31f..3b6bdc9a49e1b 100644
+--- a/fs/jffs2/xattr.c
++++ b/fs/jffs2/xattr.c
+@@ -772,10 +772,10 @@ void jffs2_clear_xattr_subsystem(struct jffs2_sb_info *c)
+ }
+ 
+ #define XREF_TMPHASH_SIZE	(128)
+-void jffs2_build_xattr_subsystem(struct jffs2_sb_info *c)
++int jffs2_build_xattr_subsystem(struct jffs2_sb_info *c)
+ {
+ 	struct jffs2_xattr_ref *ref, *_ref;
+-	struct jffs2_xattr_ref *xref_tmphash[XREF_TMPHASH_SIZE];
++	struct jffs2_xattr_ref **xref_tmphash;
+ 	struct jffs2_xattr_datum *xd, *_xd;
+ 	struct jffs2_inode_cache *ic;
+ 	struct jffs2_raw_node_ref *raw;
+@@ -784,9 +784,12 @@ void jffs2_build_xattr_subsystem(struct jffs2_sb_info *c)
+ 
+ 	BUG_ON(!(c->flags & JFFS2_SB_FLAG_BUILDING));
+ 
++	xref_tmphash = kcalloc(XREF_TMPHASH_SIZE,
++			       sizeof(struct jffs2_xattr_ref *), GFP_KERNEL);
++	if (!xref_tmphash)
++		return -ENOMEM;
++
+ 	/* Phase.1 : Merge same xref */
+-	for (i=0; i < XREF_TMPHASH_SIZE; i++)
+-		xref_tmphash[i] = NULL;
+ 	for (ref=c->xref_temp; ref; ref=_ref) {
+ 		struct jffs2_xattr_ref *tmp;
+ 
+@@ -884,6 +887,8 @@ void jffs2_build_xattr_subsystem(struct jffs2_sb_info *c)
+ 		     "%u of xref (%u dead, %u orphan) found.\n",
+ 		     xdatum_count, xdatum_unchecked_count, xdatum_orphan_count,
+ 		     xref_count, xref_dead_count, xref_orphan_count);
++	kfree(xref_tmphash);
++	return 0;
+ }
+ 
+ struct jffs2_xattr_datum *jffs2_setup_xattr_datum(struct jffs2_sb_info *c,
+diff --git a/fs/jffs2/xattr.h b/fs/jffs2/xattr.h
+index 720007b2fd65d..1b5030a3349db 100644
+--- a/fs/jffs2/xattr.h
++++ b/fs/jffs2/xattr.h
+@@ -71,7 +71,7 @@ static inline int is_xattr_ref_dead(struct jffs2_xattr_ref *ref)
+ #ifdef CONFIG_JFFS2_FS_XATTR
+ 
+ extern void jffs2_init_xattr_subsystem(struct jffs2_sb_info *c);
+-extern void jffs2_build_xattr_subsystem(struct jffs2_sb_info *c);
++extern int jffs2_build_xattr_subsystem(struct jffs2_sb_info *c);
+ extern void jffs2_clear_xattr_subsystem(struct jffs2_sb_info *c);
+ 
+ extern struct jffs2_xattr_datum *jffs2_setup_xattr_datum(struct jffs2_sb_info *c,
+@@ -103,7 +103,7 @@ extern ssize_t jffs2_listxattr(struct dentry *, char *, size_t);
+ #else
+ 
+ #define jffs2_init_xattr_subsystem(c)
+-#define jffs2_build_xattr_subsystem(c)
++#define jffs2_build_xattr_subsystem(c)		(0)
+ #define jffs2_clear_xattr_subsystem(c)
+ 
+ #define jffs2_xattr_do_crccheck_inode(c, ic)
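The jffs2 change replaces a 128-pointer on-stack hash table with a heap allocation and propagates allocation failure up through the new int return type. The userspace equivalent of the before/after, with a hypothetical build_index() standing in for the xattr builder:

#include <stdlib.h>

#define TMPHASH_SIZE 128

static int build_index(void)
{
	/* before: void *tmphash[TMPHASH_SIZE]; -- 1KiB of kernel stack */
	void **tmphash = calloc(TMPHASH_SIZE, sizeof(*tmphash));

	if (!tmphash)
		return -12;	/* -ENOMEM, now reported to the caller */

	/* ... hash work; calloc already zeroed every bucket ... */

	free(tmphash);
	return 0;
}

int main(void)
{
	return build_index() ? 1 : 0;
}

Note that kcalloc() also makes the explicit NULL-initialization loop in the old code unnecessary, which is why the patch drops it.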
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index 45b6919903e6b..5a1a4af9d3d29 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -655,7 +655,9 @@ static struct kernfs_node *__kernfs_new_node(struct kernfs_root *root,
+ 	return kn;
+ 
+  err_out3:
++	spin_lock(&kernfs_idr_lock);
+ 	idr_remove(&root->ino_idr, (u32)kernfs_ino(kn));
++	spin_unlock(&kernfs_idr_lock);
+  err_out2:
+ 	kmem_cache_free(kernfs_node_cache, kn);
+  err_out1:
+diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
+index 04ba95b83d168..22d3ff3818f5f 100644
+--- a/fs/lockd/svc.c
++++ b/fs/lockd/svc.c
+@@ -355,7 +355,6 @@ static int lockd_get(void)
+ 	int error;
+ 
+ 	if (nlmsvc_serv) {
+-		svc_get(nlmsvc_serv);
+ 		nlmsvc_users++;
+ 		return 0;
+ 	}
+diff --git a/fs/namei.c b/fs/namei.c
+index e4fe0879ae553..7e5cb92feab3f 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -3028,8 +3028,8 @@ static struct dentry *lock_two_directories(struct dentry *p1, struct dentry *p2)
+ 		return p;
+ 	}
+ 
+-	inode_lock_nested(p1->d_inode, I_MUTEX_PARENT);
+-	inode_lock_nested(p2->d_inode, I_MUTEX_PARENT2);
++	lock_two_inodes(p1->d_inode, p2->d_inode,
++			I_MUTEX_PARENT, I_MUTEX_PARENT2);
+ 	return NULL;
+ }
+ 
+@@ -4731,7 +4731,7 @@ SYSCALL_DEFINE2(link, const char __user *, oldname, const char __user *, newname
+  *	   sb->s_vfs_rename_mutex. We might be more accurate, but that's another
+  *	   story.
+  *	c) we have to lock _four_ objects - parents and victim (if it exists),
+- *	   and source (if it is not a directory).
++ *	   and source.
+  *	   And that - after we got ->i_mutex on parents (until then we don't know
+  *	   whether the target exists).  Solution: try to be smart with locking
+  *	   order for inodes.  We rely on the fact that tree topology may change
+@@ -4815,10 +4815,16 @@ int vfs_rename(struct renamedata *rd)
+ 
+ 	take_dentry_name_snapshot(&old_name, old_dentry);
+ 	dget(new_dentry);
+-	if (!is_dir || (flags & RENAME_EXCHANGE))
+-		lock_two_nondirectories(source, target);
+-	else if (target)
+-		inode_lock(target);
++	/*
++	 * Lock all moved children. Moved directories may need to change their
++	 * parent pointer, so they need the lock to protect against concurrent
++	 * directory changes moving the parent pointer. For regular files we've
++	 * historically always done this. The lockdep locking subclasses are
++	 * somewhat arbitrary but RENAME_EXCHANGE in particular can swap
++	 * regular files and directories so it's difficult to tell which
++	 * subclasses to use.
++	 */
++	lock_two_inodes(source, target, I_MUTEX_NORMAL, I_MUTEX_NONDIR2);
+ 
+ 	error = -EPERM;
+ 	if (IS_SWAPFILE(source) || (target && IS_SWAPFILE(target)))
+@@ -4866,9 +4872,8 @@ int vfs_rename(struct renamedata *rd)
+ 			d_exchange(old_dentry, new_dentry);
+ 	}
+ out:
+-	if (!is_dir || (flags & RENAME_EXCHANGE))
+-		unlock_two_nondirectories(source, target);
+-	else if (target)
++	inode_unlock(source);
++	if (target)
+ 		inode_unlock(target);
+ 	dput(new_dentry);
+ 	if (!error) {
+diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c
+index 76ae118342066..911f634ba3da7 100644
+--- a/fs/nfs/nfs42xattr.c
++++ b/fs/nfs/nfs42xattr.c
+@@ -991,6 +991,29 @@ static void nfs4_xattr_cache_init_once(void *p)
+ 	INIT_LIST_HEAD(&cache->dispose);
+ }
+ 
++static int nfs4_xattr_shrinker_init(struct shrinker *shrinker,
++				    struct list_lru *lru, const char *name)
++{
++	int ret = 0;
++
++	ret = register_shrinker(shrinker, name);
++	if (ret)
++		return ret;
++
++	ret = list_lru_init_memcg(lru, shrinker);
++	if (ret)
++		unregister_shrinker(shrinker);
++
++	return ret;
++}
++
++static void nfs4_xattr_shrinker_destroy(struct shrinker *shrinker,
++					struct list_lru *lru)
++{
++	unregister_shrinker(shrinker);
++	list_lru_destroy(lru);
++}
++
+ int __init nfs4_xattr_cache_init(void)
+ {
+ 	int ret = 0;
+@@ -1002,44 +1025,30 @@ int __init nfs4_xattr_cache_init(void)
+ 	if (nfs4_xattr_cache_cachep == NULL)
+ 		return -ENOMEM;
+ 
+-	ret = list_lru_init_memcg(&nfs4_xattr_large_entry_lru,
+-	    &nfs4_xattr_large_entry_shrinker);
+-	if (ret)
+-		goto out4;
+-
+-	ret = list_lru_init_memcg(&nfs4_xattr_entry_lru,
+-	    &nfs4_xattr_entry_shrinker);
+-	if (ret)
+-		goto out3;
+-
+-	ret = list_lru_init_memcg(&nfs4_xattr_cache_lru,
+-	    &nfs4_xattr_cache_shrinker);
+-	if (ret)
+-		goto out2;
+-
+-	ret = register_shrinker(&nfs4_xattr_cache_shrinker, "nfs-xattr_cache");
++	ret = nfs4_xattr_shrinker_init(&nfs4_xattr_cache_shrinker,
++				       &nfs4_xattr_cache_lru,
++				       "nfs-xattr_cache");
+ 	if (ret)
+ 		goto out1;
+ 
+-	ret = register_shrinker(&nfs4_xattr_entry_shrinker, "nfs-xattr_entry");
++	ret = nfs4_xattr_shrinker_init(&nfs4_xattr_entry_shrinker,
++				       &nfs4_xattr_entry_lru,
++				       "nfs-xattr_entry");
+ 	if (ret)
+-		goto out;
++		goto out2;
+ 
+-	ret = register_shrinker(&nfs4_xattr_large_entry_shrinker,
+-				"nfs-xattr_large_entry");
++	ret = nfs4_xattr_shrinker_init(&nfs4_xattr_large_entry_shrinker,
++				       &nfs4_xattr_large_entry_lru,
++				       "nfs-xattr_large_entry");
+ 	if (!ret)
+ 		return 0;
+ 
+-	unregister_shrinker(&nfs4_xattr_entry_shrinker);
+-out:
+-	unregister_shrinker(&nfs4_xattr_cache_shrinker);
+-out1:
+-	list_lru_destroy(&nfs4_xattr_cache_lru);
++	nfs4_xattr_shrinker_destroy(&nfs4_xattr_entry_shrinker,
++				    &nfs4_xattr_entry_lru);
+ out2:
+-	list_lru_destroy(&nfs4_xattr_entry_lru);
+-out3:
+-	list_lru_destroy(&nfs4_xattr_large_entry_lru);
+-out4:
++	nfs4_xattr_shrinker_destroy(&nfs4_xattr_cache_shrinker,
++				    &nfs4_xattr_cache_lru);
++out1:
+ 	kmem_cache_destroy(nfs4_xattr_cache_cachep);
+ 
+ 	return ret;
+@@ -1047,11 +1056,11 @@ out4:
+ 
+ void nfs4_xattr_cache_exit(void)
+ {
+-	unregister_shrinker(&nfs4_xattr_large_entry_shrinker);
+-	unregister_shrinker(&nfs4_xattr_entry_shrinker);
+-	unregister_shrinker(&nfs4_xattr_cache_shrinker);
+-	list_lru_destroy(&nfs4_xattr_large_entry_lru);
+-	list_lru_destroy(&nfs4_xattr_entry_lru);
+-	list_lru_destroy(&nfs4_xattr_cache_lru);
++	nfs4_xattr_shrinker_destroy(&nfs4_xattr_large_entry_shrinker,
++				    &nfs4_xattr_large_entry_lru);
++	nfs4_xattr_shrinker_destroy(&nfs4_xattr_entry_shrinker,
++				    &nfs4_xattr_entry_lru);
++	nfs4_xattr_shrinker_destroy(&nfs4_xattr_cache_shrinker,
++				    &nfs4_xattr_cache_lru);
+ 	kmem_cache_destroy(nfs4_xattr_cache_cachep);
+ }
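The nfs refactor above pairs two operations that must succeed or fail together behind one init/destroy helper, so each caller needs a single goto label instead of two and teardown becomes symmetric. A generic sketch of the idea with hypothetical step functions:

#include <stdio.h>

static int step_a_init(void)   { return 0; }
static void step_a_fini(void)  { puts("a undone"); }
static int step_b_init(void)   { return 0; }
static void step_b_fini(void)  { puts("b undone"); }

static int pair_init(void)
{
	int ret = step_a_init();

	if (ret)
		return ret;
	ret = step_b_init();
	if (ret)
		step_a_fini();	/* roll back the half-done pair */
	return ret;
}

static void pair_fini(void)
{
	step_b_fini();
	step_a_fini();
}

int main(void)
{
	if (pair_init())
		return 1;
	pair_fini();
	return 0;
}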
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index d3665390c4cb8..9faba2dac11dd 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -921,6 +921,7 @@ out:
+ out_noaction:
+ 	return ret;
+ session_recover:
++	set_bit(NFS4_SLOT_TBL_DRAINING, &session->fc_slot_table.slot_tbl_state);
+ 	nfs4_schedule_session_recovery(session, status);
+ 	dprintk("%s ERROR: %d Reset session\n", __func__, status);
+ 	nfs41_sequence_free_slot(res);
+diff --git a/fs/nfsd/cache.h b/fs/nfsd/cache.h
+index f21259ead64bb..4c9b87850ab12 100644
+--- a/fs/nfsd/cache.h
++++ b/fs/nfsd/cache.h
+@@ -80,6 +80,8 @@ enum {
+ 
+ int	nfsd_drc_slab_create(void);
+ void	nfsd_drc_slab_free(void);
++int	nfsd_net_reply_cache_init(struct nfsd_net *nn);
++void	nfsd_net_reply_cache_destroy(struct nfsd_net *nn);
+ int	nfsd_reply_cache_init(struct nfsd_net *);
+ void	nfsd_reply_cache_shutdown(struct nfsd_net *);
+ int	nfsd_cache_lookup(struct svc_rqst *);
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 76db2fe296244..ee1a24debd60c 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3956,7 +3956,7 @@ nfsd4_encode_open(struct nfsd4_compoundres *resp, __be32 nfserr,
+ 		p = xdr_reserve_space(xdr, 32);
+ 		if (!p)
+ 			return nfserr_resource;
+-		*p++ = cpu_to_be32(0);
++		*p++ = cpu_to_be32(open->op_recall);
+ 
+ 		/*
+ 		 * TODO: space_limit's in delegations
+diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
+index 041faa13b852e..a8eda1c85829e 100644
+--- a/fs/nfsd/nfscache.c
++++ b/fs/nfsd/nfscache.c
+@@ -148,12 +148,23 @@ void nfsd_drc_slab_free(void)
+ 	kmem_cache_destroy(drc_slab);
+ }
+ 
+-static int nfsd_reply_cache_stats_init(struct nfsd_net *nn)
++/**
++ * nfsd_net_reply_cache_init - per net namespace reply cache set-up
++ * @nn: nfsd_net being initialized
++ *
++ * Returns zero on success; otherwise a negative errno is returned.
++ */
++int nfsd_net_reply_cache_init(struct nfsd_net *nn)
+ {
+ 	return nfsd_percpu_counters_init(nn->counter, NFSD_NET_COUNTERS_NUM);
+ }
+ 
+-static void nfsd_reply_cache_stats_destroy(struct nfsd_net *nn)
++/**
++ * nfsd_net_reply_cache_destroy - per net namespace reply cache tear-down
++ * @nn: nfsd_net being freed
++ *
++ */
++void nfsd_net_reply_cache_destroy(struct nfsd_net *nn)
+ {
+ 	nfsd_percpu_counters_destroy(nn->counter, NFSD_NET_COUNTERS_NUM);
+ }
+@@ -169,17 +180,13 @@ int nfsd_reply_cache_init(struct nfsd_net *nn)
+ 	hashsize = nfsd_hashsize(nn->max_drc_entries);
+ 	nn->maskbits = ilog2(hashsize);
+ 
+-	status = nfsd_reply_cache_stats_init(nn);
+-	if (status)
+-		goto out_nomem;
+-
+ 	nn->nfsd_reply_cache_shrinker.scan_objects = nfsd_reply_cache_scan;
+ 	nn->nfsd_reply_cache_shrinker.count_objects = nfsd_reply_cache_count;
+ 	nn->nfsd_reply_cache_shrinker.seeks = 1;
+ 	status = register_shrinker(&nn->nfsd_reply_cache_shrinker,
+ 				   "nfsd-reply:%s", nn->nfsd_name);
+ 	if (status)
+-		goto out_stats_destroy;
++		return status;
+ 
+ 	nn->drc_hashtbl = kvzalloc(array_size(hashsize,
+ 				sizeof(*nn->drc_hashtbl)), GFP_KERNEL);
+@@ -195,9 +202,6 @@ int nfsd_reply_cache_init(struct nfsd_net *nn)
+ 	return 0;
+ out_shrinker:
+ 	unregister_shrinker(&nn->nfsd_reply_cache_shrinker);
+-out_stats_destroy:
+-	nfsd_reply_cache_stats_destroy(nn);
+-out_nomem:
+ 	printk(KERN_ERR "nfsd: failed to allocate reply cache\n");
+ 	return -ENOMEM;
+ }
+@@ -217,7 +221,6 @@ void nfsd_reply_cache_shutdown(struct nfsd_net *nn)
+ 									rp, nn);
+ 		}
+ 	}
+-	nfsd_reply_cache_stats_destroy(nn);
+ 
+ 	kvfree(nn->drc_hashtbl);
+ 	nn->drc_hashtbl = NULL;
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index b4fd7a7062d5e..7effd7db0b858 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1488,6 +1488,9 @@ static __net_init int nfsd_init_net(struct net *net)
+ 	retval = nfsd_idmap_init(net);
+ 	if (retval)
+ 		goto out_idmap_error;
++	retval = nfsd_net_reply_cache_init(nn);
++	if (retval)
++		goto out_repcache_error;
+ 	nn->nfsd_versions = NULL;
+ 	nn->nfsd4_minorversions = NULL;
+ 	nfsd4_init_leases_net(nn);
+@@ -1496,6 +1499,8 @@ static __net_init int nfsd_init_net(struct net *net)
+ 
+ 	return 0;
+ 
++out_repcache_error:
++	nfsd_idmap_shutdown(net);
+ out_idmap_error:
+ 	nfsd_export_shutdown(net);
+ out_export_error:
+@@ -1504,9 +1509,12 @@ out_export_error:
+ 
+ static __net_exit void nfsd_exit_net(struct net *net)
+ {
++	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
++
++	nfsd_net_reply_cache_destroy(nn);
+ 	nfsd_idmap_shutdown(net);
+ 	nfsd_export_shutdown(net);
+-	nfsd_netns_free_versions(net_generic(net, nfsd_net_id));
++	nfsd_netns_free_versions(nn);
+ }
+ 
+ static struct pernet_operations nfsd_net_ops = {
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index db67f8e19344a..0016bcc04a59d 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -388,7 +388,9 @@ nfsd_sanitize_attrs(struct inode *inode, struct iattr *iap)
+ 				iap->ia_mode &= ~S_ISGID;
+ 		} else {
+ 			/* set ATTR_KILL_* bits and let VFS handle it */
+-			iap->ia_valid |= (ATTR_KILL_SUID | ATTR_KILL_SGID);
++			iap->ia_valid |= ATTR_KILL_SUID;
++			iap->ia_valid |=
++				setattr_should_drop_sgid(&nop_mnt_idmap, inode);
+ 		}
+ 	}
+ }
+diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
+index 22fb1cf7e1fc5..f7e11ac763907 100644
+--- a/fs/notify/fanotify/fanotify_user.c
++++ b/fs/notify/fanotify/fanotify_user.c
+@@ -1623,6 +1623,20 @@ static int fanotify_events_supported(struct fsnotify_group *group,
+ 	    path->mnt->mnt_sb->s_type->fs_flags & FS_DISALLOW_NOTIFY_PERM)
+ 		return -EINVAL;
+ 
++	/*
++	 * mount and sb marks are not allowed on kernel internal pseudo fs,
++	 * like pipe_mnt, because that would subscribe to events on all the
++	 * anonymous pipes in the system.
++	 *
++	 * SB_NOUSER covers all of the internal pseudo fs whose objects are not
++	 * exposed to user's mount namespace, but there are other SB_KERNMOUNT
++	 * fs, like nsfs, debugfs, for which the value of allowing sb and mount
++	 * mark is questionable. For now we leave them alone.
++	 */
++	if (mark_type != FAN_MARK_INODE &&
++	    path->mnt->mnt_sb->s_flags & SB_NOUSER)
++		return -EINVAL;
++
+ 	/*
+ 	 * We shouldn't have allowed setting dirent events and the directory
+ 	 * flags FAN_ONDIR and FAN_EVENT_ON_CHILD in mask of non-dir inode,
+diff --git a/fs/ntfs3/xattr.c b/fs/ntfs3/xattr.c
+index c3de60a4543fa..fd02fcf4d4091 100644
+--- a/fs/ntfs3/xattr.c
++++ b/fs/ntfs3/xattr.c
+@@ -214,6 +214,9 @@ static ssize_t ntfs_list_ea(struct ntfs_inode *ni, char *buffer,
+ 		ea = Add2Ptr(ea_all, off);
+ 		ea_size = unpacked_ea_size(ea);
+ 
++		if (!ea->name_len)
++			break;
++
+ 		if (buffer) {
+ 			if (ret + ea->name_len + 1 > bytes_per_buffer) {
+ 				err = -ERANGE;
+diff --git a/fs/ocfs2/cluster/tcp.c b/fs/ocfs2/cluster/tcp.c
+index aecbd712a00cf..929a1133bc180 100644
+--- a/fs/ocfs2/cluster/tcp.c
++++ b/fs/ocfs2/cluster/tcp.c
+@@ -2087,18 +2087,24 @@ void o2net_stop_listening(struct o2nm_node *node)
+ 
+ int o2net_init(void)
+ {
++	struct folio *folio;
++	void *p;
+ 	unsigned long i;
+ 
+ 	o2quo_init();
+-
+ 	o2net_debugfs_init();
+ 
+-	o2net_hand = kzalloc(sizeof(struct o2net_handshake), GFP_KERNEL);
+-	o2net_keep_req = kzalloc(sizeof(struct o2net_msg), GFP_KERNEL);
+-	o2net_keep_resp = kzalloc(sizeof(struct o2net_msg), GFP_KERNEL);
+-	if (!o2net_hand || !o2net_keep_req || !o2net_keep_resp)
++	folio = folio_alloc(GFP_KERNEL | __GFP_ZERO, 0);
++	if (!folio)
+ 		goto out;
+ 
++	p = folio_address(folio);
++	o2net_hand = p;
++	p += sizeof(struct o2net_handshake);
++	o2net_keep_req = p;
++	p += sizeof(struct o2net_msg);
++	o2net_keep_resp = p;
++
+ 	o2net_hand->protocol_version = cpu_to_be64(O2NET_PROTOCOL_VERSION);
+ 	o2net_hand->connector_id = cpu_to_be64(1);
+ 
+@@ -2124,9 +2130,6 @@ int o2net_init(void)
+ 	return 0;
+ 
+ out:
+-	kfree(o2net_hand);
+-	kfree(o2net_keep_req);
+-	kfree(o2net_keep_resp);
+ 	o2net_debugfs_exit();
+ 	o2quo_exit();
+ 	return -ENOMEM;
+@@ -2135,8 +2138,6 @@ out:
+ void o2net_exit(void)
+ {
+ 	o2quo_exit();
+-	kfree(o2net_hand);
+-	kfree(o2net_keep_req);
+-	kfree(o2net_keep_resp);
+ 	o2net_debugfs_exit();
++	folio_put(virt_to_folio(o2net_hand));
+ }
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index f658cc8ea4920..95dce240ba17a 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -575,6 +575,7 @@ static int ovl_link_up(struct ovl_copy_up_ctx *c)
+ 			/* Restore timestamps on parent (best effort) */
+ 			ovl_set_timestamps(ofs, upperdir, &c->pstat);
+ 			ovl_dentry_set_upper_alias(c->dentry);
++			ovl_dentry_update_reval(c->dentry, upper);
+ 		}
+ 	}
+ 	inode_unlock(udir);
+@@ -894,6 +895,7 @@ static int ovl_do_copy_up(struct ovl_copy_up_ctx *c)
+ 		inode_unlock(udir);
+ 
+ 		ovl_dentry_set_upper_alias(c->dentry);
++		ovl_dentry_update_reval(c->dentry, ovl_dentry_upper(c->dentry));
+ 	}
+ 
+ out:
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index fc25fb95d5fc0..9be52d8013c83 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -269,8 +269,7 @@ static int ovl_instantiate(struct dentry *dentry, struct inode *inode,
+ 
+ 	ovl_dir_modified(dentry->d_parent, false);
+ 	ovl_dentry_set_upper_alias(dentry);
+-	ovl_dentry_update_reval(dentry, newdentry,
+-			DCACHE_OP_REVALIDATE | DCACHE_OP_WEAK_REVALIDATE);
++	ovl_dentry_init_reval(dentry, newdentry);
+ 
+ 	if (!hardlink) {
+ 		/*
+diff --git a/fs/overlayfs/export.c b/fs/overlayfs/export.c
+index defd4e231ad2c..5c36fb3a7bab1 100644
+--- a/fs/overlayfs/export.c
++++ b/fs/overlayfs/export.c
+@@ -326,8 +326,7 @@ static struct dentry *ovl_obtain_alias(struct super_block *sb,
+ 	if (upper_alias)
+ 		ovl_dentry_set_upper_alias(dentry);
+ 
+-	ovl_dentry_update_reval(dentry, upper,
+-			DCACHE_OP_REVALIDATE | DCACHE_OP_WEAK_REVALIDATE);
++	ovl_dentry_init_reval(dentry, upper);
+ 
+ 	return d_instantiate_anon(dentry, inode);
+ 
+diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
+index 541cf3717fc2b..e7e888dea6341 100644
+--- a/fs/overlayfs/inode.c
++++ b/fs/overlayfs/inode.c
+@@ -288,8 +288,8 @@ int ovl_permission(struct mnt_idmap *idmap,
+ 	int err;
+ 
+ 	/* Careful in RCU walk mode */
+-	ovl_i_path_real(inode, &realpath);
+-	if (!realpath.dentry) {
++	realinode = ovl_i_path_real(inode, &realpath);
++	if (!realinode) {
+ 		WARN_ON(!(mask & MAY_NOT_BLOCK));
+ 		return -ECHILD;
+ 	}
+@@ -302,7 +302,6 @@ int ovl_permission(struct mnt_idmap *idmap,
+ 	if (err)
+ 		return err;
+ 
+-	realinode = d_inode(realpath.dentry);
+ 	old_cred = ovl_override_creds(inode->i_sb);
+ 	if (!upperinode &&
+ 	    !special_file(realinode->i_mode) && mask & MAY_WRITE) {
+@@ -559,20 +558,20 @@ struct posix_acl *do_ovl_get_acl(struct mnt_idmap *idmap,
+ 				 struct inode *inode, int type,
+ 				 bool rcu, bool noperm)
+ {
+-	struct inode *realinode = ovl_inode_real(inode);
++	struct inode *realinode;
+ 	struct posix_acl *acl;
+ 	struct path realpath;
+ 
+-	if (!IS_POSIXACL(realinode))
+-		return NULL;
+-
+ 	/* Careful in RCU walk mode */
+-	ovl_i_path_real(inode, &realpath);
+-	if (!realpath.dentry) {
++	realinode = ovl_i_path_real(inode, &realpath);
++	if (!realinode) {
+ 		WARN_ON(!rcu);
+ 		return ERR_PTR(-ECHILD);
+ 	}
+ 
++	if (!IS_POSIXACL(realinode))
++		return NULL;
++
+ 	if (rcu) {
+ 		/*
+ 		 * If the layer is idmapped drop out of RCU path walk
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index cfb3420b7df0e..100a492d2b2a6 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -1122,8 +1122,7 @@ struct dentry *ovl_lookup(struct inode *dir, struct dentry *dentry,
+ 			ovl_set_flag(OVL_UPPERDATA, inode);
+ 	}
+ 
+-	ovl_dentry_update_reval(dentry, upperdentry,
+-			DCACHE_OP_REVALIDATE | DCACHE_OP_WEAK_REVALIDATE);
++	ovl_dentry_init_reval(dentry, upperdentry);
+ 
+ 	revert_creds(old_cred);
+ 	if (origin_path) {
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 4d0b278f5630e..2b79a9398c132 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -375,14 +375,16 @@ bool ovl_index_all(struct super_block *sb);
+ bool ovl_verify_lower(struct super_block *sb);
+ struct ovl_entry *ovl_alloc_entry(unsigned int numlower);
+ bool ovl_dentry_remote(struct dentry *dentry);
+-void ovl_dentry_update_reval(struct dentry *dentry, struct dentry *upperdentry,
+-			     unsigned int mask);
++void ovl_dentry_update_reval(struct dentry *dentry, struct dentry *realdentry);
++void ovl_dentry_init_reval(struct dentry *dentry, struct dentry *upperdentry);
++void ovl_dentry_init_flags(struct dentry *dentry, struct dentry *upperdentry,
++			   unsigned int mask);
+ bool ovl_dentry_weird(struct dentry *dentry);
+ enum ovl_path_type ovl_path_type(struct dentry *dentry);
+ void ovl_path_upper(struct dentry *dentry, struct path *path);
+ void ovl_path_lower(struct dentry *dentry, struct path *path);
+ void ovl_path_lowerdata(struct dentry *dentry, struct path *path);
+-void ovl_i_path_real(struct inode *inode, struct path *path);
++struct inode *ovl_i_path_real(struct inode *inode, struct path *path);
+ enum ovl_path_type ovl_path_real(struct dentry *dentry, struct path *path);
+ enum ovl_path_type ovl_path_realdata(struct dentry *dentry, struct path *path);
+ struct dentry *ovl_dentry_upper(struct dentry *dentry);
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index f97ad8b40dbbd..ae1058fbfb5b2 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -1877,7 +1877,7 @@ static struct dentry *ovl_get_root(struct super_block *sb,
+ 	ovl_dentry_set_flag(OVL_E_CONNECTED, root);
+ 	ovl_set_upperdata(d_inode(root));
+ 	ovl_inode_init(d_inode(root), &oip, ino, fsid);
+-	ovl_dentry_update_reval(root, upperdentry, DCACHE_OP_WEAK_REVALIDATE);
++	ovl_dentry_init_flags(root, upperdentry, DCACHE_OP_WEAK_REVALIDATE);
+ 
+ 	return root;
+ }
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index 923d66d131c16..fb12e7fa85486 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -94,14 +94,30 @@ struct ovl_entry *ovl_alloc_entry(unsigned int numlower)
+ 	return oe;
+ }
+ 
++#define OVL_D_REVALIDATE (DCACHE_OP_REVALIDATE | DCACHE_OP_WEAK_REVALIDATE)
++
+ bool ovl_dentry_remote(struct dentry *dentry)
+ {
+-	return dentry->d_flags &
+-		(DCACHE_OP_REVALIDATE | DCACHE_OP_WEAK_REVALIDATE);
++	return dentry->d_flags & OVL_D_REVALIDATE;
++}
++
++void ovl_dentry_update_reval(struct dentry *dentry, struct dentry *realdentry)
++{
++	if (!ovl_dentry_remote(realdentry))
++		return;
++
++	spin_lock(&dentry->d_lock);
++	dentry->d_flags |= realdentry->d_flags & OVL_D_REVALIDATE;
++	spin_unlock(&dentry->d_lock);
++}
++
++void ovl_dentry_init_reval(struct dentry *dentry, struct dentry *upperdentry)
++{
++	return ovl_dentry_init_flags(dentry, upperdentry, OVL_D_REVALIDATE);
+ }
+ 
+-void ovl_dentry_update_reval(struct dentry *dentry, struct dentry *upperdentry,
+-			     unsigned int mask)
++void ovl_dentry_init_flags(struct dentry *dentry, struct dentry *upperdentry,
++			   unsigned int mask)
+ {
+ 	struct ovl_entry *oe = OVL_E(dentry);
+ 	unsigned int i, flags = 0;
+@@ -250,7 +266,7 @@ struct dentry *ovl_i_dentry_upper(struct inode *inode)
+ 	return ovl_upperdentry_dereference(OVL_I(inode));
+ }
+ 
+-void ovl_i_path_real(struct inode *inode, struct path *path)
++struct inode *ovl_i_path_real(struct inode *inode, struct path *path)
+ {
+ 	path->dentry = ovl_i_dentry_upper(inode);
+ 	if (!path->dentry) {
+@@ -259,6 +275,8 @@ void ovl_i_path_real(struct inode *inode, struct path *path)
+ 	} else {
+ 		path->mnt = ovl_upper_mnt(OVL_FS(inode->i_sb));
+ 	}
++
++	return path->dentry ? d_inode_rcu(path->dentry) : NULL;
+ }
+ 
+ struct inode *ovl_inode_upper(struct inode *inode)
+@@ -1105,8 +1123,7 @@ void ovl_copyattr(struct inode *inode)
+ 	vfsuid_t vfsuid;
+ 	vfsgid_t vfsgid;
+ 
+-	ovl_i_path_real(inode, &realpath);
+-	realinode = d_inode(realpath.dentry);
++	realinode = ovl_i_path_real(inode, &realpath);
+ 	real_idmap = mnt_idmap(realpath.mnt);
+ 
+ 	vfsuid = i_uid_into_vfsuid(real_idmap, realinode);
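The ovl_i_path_real() change makes the helper hand back the resolved inode directly, so RCU-walk callers test a single return value instead of dereferencing path.dentry again (which can go NULL under them). A sketch of the API shape with toy struct definitions, not the real VFS types:

#include <stdio.h>

struct inode  { int mode; };
struct dentry { struct inode *d_inode; };
struct path   { struct dentry *dentry; };

/* before: void fill_path(struct path *); caller re-derived the inode */
static struct inode *path_real(struct path *path, struct dentry *upper)
{
	path->dentry = upper;
	return path->dentry ? path->dentry->d_inode : NULL;
}

int main(void)
{
	struct inode i = { 0755 };
	struct dentry d = { &i };
	struct path p;
	struct inode *real = path_real(&p, &d);

	if (!real)		/* single check covers the racy case */
		return 1;
	printf("mode=%o\n", real->mode);
	return 0;
}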
+diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
+index 966191d3a5ba2..85aaf0fc6d7d1 100644
+--- a/fs/pstore/ram_core.c
++++ b/fs/pstore/ram_core.c
+@@ -599,6 +599,8 @@ struct persistent_ram_zone *persistent_ram_new(phys_addr_t start, size_t size,
+ 	raw_spin_lock_init(&prz->buffer_lock);
+ 	prz->flags = flags;
+ 	prz->label = kstrdup(label, GFP_KERNEL);
++	if (!prz->label)
++		goto err;
+ 
+ 	ret = persistent_ram_buffer_map(start, size, prz, memtype);
+ 	if (ret)
+diff --git a/fs/ramfs/inode.c b/fs/ramfs/inode.c
+index 5ba580c78835f..fef477c781073 100644
+--- a/fs/ramfs/inode.c
++++ b/fs/ramfs/inode.c
+@@ -278,7 +278,7 @@ int ramfs_init_fs_context(struct fs_context *fc)
+ 	return 0;
+ }
+ 
+-static void ramfs_kill_sb(struct super_block *sb)
++void ramfs_kill_sb(struct super_block *sb)
+ {
+ 	kfree(sb->s_fs_info);
+ 	kill_litter_super(sb);
+diff --git a/fs/reiserfs/xattr_security.c b/fs/reiserfs/xattr_security.c
+index 6e0a099dd7886..078dd8cc312fc 100644
+--- a/fs/reiserfs/xattr_security.c
++++ b/fs/reiserfs/xattr_security.c
+@@ -67,6 +67,7 @@ int reiserfs_security_init(struct inode *dir, struct inode *inode,
+ 
+ 	sec->name = NULL;
+ 	sec->value = NULL;
++	sec->length = 0;
+ 
+ 	/* Don't add selinux attributes on xattrs - they'll never get used */
+ 	if (IS_PRIVATE(dir))
+diff --git a/fs/smb/client/cifs_debug.c b/fs/smb/client/cifs_debug.c
+index b279f745466e4..ed0f71137584f 100644
+--- a/fs/smb/client/cifs_debug.c
++++ b/fs/smb/client/cifs_debug.c
+@@ -122,6 +122,12 @@ static void cifs_debug_tcon(struct seq_file *m, struct cifs_tcon *tcon)
+ 		seq_puts(m, " nosparse");
+ 	if (tcon->need_reconnect)
+ 		seq_puts(m, "\tDISCONNECTED ");
++	spin_lock(&tcon->tc_lock);
++	if (tcon->origin_fullpath) {
++		seq_printf(m, "\n\tDFS origin fullpath: %s",
++			   tcon->origin_fullpath);
++	}
++	spin_unlock(&tcon->tc_lock);
+ 	seq_putc(m, '\n');
+ }
+ 
+@@ -427,13 +433,9 @@ skip_rdma:
+ 		seq_printf(m, "\nIn Send: %d In MaxReq Wait: %d",
+ 				atomic_read(&server->in_send),
+ 				atomic_read(&server->num_waiters));
+-		if (IS_ENABLED(CONFIG_CIFS_DFS_UPCALL)) {
+-			if (server->origin_fullpath)
+-				seq_printf(m, "\nDFS origin full path: %s",
+-					   server->origin_fullpath);
+-			if (server->leaf_fullpath)
+-				seq_printf(m, "\nDFS leaf full path:   %s",
+-					   server->leaf_fullpath);
++		if (server->leaf_fullpath) {
++			seq_printf(m, "\nDFS leaf full path: %s",
++				   server->leaf_fullpath);
+ 		}
+ 
+ 		seq_printf(m, "\n\n\tSessions: ");
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index b212a4e16b39b..ca2da713c5fe9 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -736,23 +736,20 @@ struct TCP_Server_Info {
+ #endif
+ 	struct mutex refpath_lock; /* protects leaf_fullpath */
+ 	/*
+-	 * origin_fullpath: Canonical copy of smb3_fs_context::source.
+-	 *                  It is used for matching existing DFS tcons.
+-	 *
+ 	 * leaf_fullpath: Canonical DFS referral path related to this
+ 	 *                connection.
+ 	 *                It is used in DFS cache refresher, reconnect and may
+ 	 *                change due to nested DFS links.
+ 	 *
+-	 * Both protected by @refpath_lock and @srv_lock.  The @refpath_lock is
+-	 * mosly used for not requiring a copy of @leaf_fullpath when getting
++	 * Protected by @refpath_lock and @srv_lock.  The @refpath_lock is
++	 * mostly used for not requiring a copy of @leaf_fullpath when getting
+ 	 * cached or new DFS referrals (which might also sleep during I/O).
+ 	 * While @srv_lock is held for making string and NULL comparions against
+ 	 * both fields as in mount(2) and cache refresh.
+ 	 *
+ 	 * format: \\HOST\SHARE[\OPTIONAL PATH]
+ 	 */
+-	char *origin_fullpath, *leaf_fullpath;
++	char *leaf_fullpath;
+ };
+ 
+ static inline bool is_smb1(struct TCP_Server_Info *server)
+@@ -1205,6 +1202,7 @@ struct cifs_tcon {
+ 	struct delayed_work dfs_cache_work;
+ #endif
+ 	struct delayed_work	query_interfaces; /* query interfaces workqueue job */
++	char *origin_fullpath; /* canonical copy of smb3_fs_context::source */
+ };
+ 
+ /*
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index d127aded2f287..94ab6402965c5 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -650,7 +650,7 @@ int smb2_parse_query_directory(struct cifs_tcon *tcon, struct kvec *rsp_iov,
+ 			       int resp_buftype,
+ 			       struct cifs_search_info *srch_inf);
+ 
+-struct super_block *cifs_get_tcp_super(struct TCP_Server_Info *server);
++struct super_block *cifs_get_dfs_tcon_super(struct cifs_tcon *tcon);
+ void cifs_put_tcp_super(struct super_block *sb);
+ int cifs_update_super_prepath(struct cifs_sb_info *cifs_sb, char *prefix);
+ char *extract_hostname(const char *unc);
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index 9d16626e7a669..d9f0b3b94f007 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -996,7 +996,6 @@ static void clean_demultiplex_info(struct TCP_Server_Info *server)
+ 		 */
+ 	}
+ 
+-	kfree(server->origin_fullpath);
+ 	kfree(server->leaf_fullpath);
+ 	kfree(server);
+ 
+@@ -1436,7 +1435,9 @@ match_security(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ }
+ 
+ /* this function must be called with srv_lock held */
+-static int match_server(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
++static int match_server(struct TCP_Server_Info *server,
++			struct smb3_fs_context *ctx,
++			bool match_super)
+ {
+ 	struct sockaddr *addr = (struct sockaddr *)&ctx->dstaddr;
+ 
+@@ -1467,36 +1468,38 @@ static int match_server(struct TCP_Server_Info *server, struct smb3_fs_context *
+ 			       (struct sockaddr *)&server->srcaddr))
+ 		return 0;
+ 	/*
+-	 * - Match for an DFS tcon (@server->origin_fullpath).
+-	 * - Match for an DFS root server connection (@server->leaf_fullpath).
+-	 * - If none of the above and @ctx->leaf_fullpath is set, then
+-	 *   it is a new DFS connection.
+-	 * - If 'nodfs' mount option was passed, then match only connections
+-	 *   that have no DFS referrals set
+-	 *   (e.g. can't failover to other targets).
++	 * When matching cifs.ko superblocks (@match_super == true), we can't
++	 * really match either @server->leaf_fullpath or @server->dstaddr
++	 * directly since this @server might belong to a completely different
++	 * server -- in case of domain-based DFS referrals or DFS links -- as
++	 * provided earlier by mount(2) through 'source' and 'ip' options.
++	 *
++	 * Otherwise, match the DFS referral in @server->leaf_fullpath or the
++	 * destination address in @server->dstaddr.
++	 *
++	 * When using 'nodfs' mount option, we avoid sharing it with DFS
++	 * connections as they might failover.
+ 	 */
+-	if (!ctx->nodfs) {
+-		if (ctx->source && server->origin_fullpath) {
+-			if (!dfs_src_pathname_equal(ctx->source,
+-						    server->origin_fullpath))
++	if (!match_super) {
++		if (!ctx->nodfs) {
++			if (server->leaf_fullpath) {
++				if (!ctx->leaf_fullpath ||
++				    strcasecmp(server->leaf_fullpath,
++					       ctx->leaf_fullpath))
++					return 0;
++			} else if (ctx->leaf_fullpath) {
+ 				return 0;
++			}
+ 		} else if (server->leaf_fullpath) {
+-			if (!ctx->leaf_fullpath ||
+-			    strcasecmp(server->leaf_fullpath,
+-				       ctx->leaf_fullpath))
+-				return 0;
+-		} else if (ctx->leaf_fullpath) {
+ 			return 0;
+ 		}
+-	} else if (server->origin_fullpath || server->leaf_fullpath) {
+-		return 0;
+ 	}
+ 
+ 	/*
+ 	 * Match for a regular connection (address/hostname/port) which has no
+ 	 * DFS referrals set.
+ 	 */
+-	if (!server->origin_fullpath && !server->leaf_fullpath &&
++	if (!server->leaf_fullpath &&
+ 	    (strcasecmp(server->hostname, ctx->server_hostname) ||
+ 	     !match_server_address(server, addr) ||
+ 	     !match_port(server, addr)))
+@@ -1532,7 +1535,8 @@ cifs_find_tcp_session(struct smb3_fs_context *ctx)
+ 		 * Skip ses channels since they're only handled in lower layers
+ 		 * (e.g. cifs_send_recv).
+ 		 */
+-		if (CIFS_SERVER_IS_CHAN(server) || !match_server(server, ctx)) {
++		if (CIFS_SERVER_IS_CHAN(server) ||
++		    !match_server(server, ctx, false)) {
+ 			spin_unlock(&server->srv_lock);
+ 			continue;
+ 		}
+@@ -2320,10 +2324,16 @@ static int match_tcon(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ 
+ 	if (tcon->status == TID_EXITING)
+ 		return 0;
+-	/* Skip UNC validation when matching DFS connections or superblocks */
+-	if (!server->origin_fullpath && !server->leaf_fullpath &&
+-	    strncmp(tcon->tree_name, ctx->UNC, MAX_TREE_SIZE))
++
++	if (tcon->origin_fullpath) {
++		if (!ctx->source ||
++		    !dfs_src_pathname_equal(ctx->source,
++					    tcon->origin_fullpath))
++			return 0;
++	} else if (!server->leaf_fullpath &&
++		   strncmp(tcon->tree_name, ctx->UNC, MAX_TREE_SIZE)) {
+ 		return 0;
++	}
+ 	if (tcon->seal != ctx->seal)
+ 		return 0;
+ 	if (tcon->snapshot_time != ctx->snapshot_time)
+@@ -2722,7 +2732,7 @@ compare_mount_options(struct super_block *sb, struct cifs_mnt_data *mnt_data)
+ }
+ 
+ static int match_prepath(struct super_block *sb,
+-			 struct TCP_Server_Info *server,
++			 struct cifs_tcon *tcon,
+ 			 struct cifs_mnt_data *mnt_data)
+ {
+ 	struct smb3_fs_context *ctx = mnt_data->ctx;
+@@ -2733,8 +2743,8 @@ static int match_prepath(struct super_block *sb,
+ 	bool new_set = (new->mnt_cifs_flags & CIFS_MOUNT_USE_PREFIX_PATH) &&
+ 		new->prepath;
+ 
+-	if (server->origin_fullpath &&
+-	    dfs_src_pathname_equal(server->origin_fullpath, ctx->source))
++	if (tcon->origin_fullpath &&
++	    dfs_src_pathname_equal(tcon->origin_fullpath, ctx->source))
+ 		return 1;
+ 
+ 	if (old_set && new_set && !strcmp(new->prepath, old->prepath))
+@@ -2782,10 +2792,10 @@ cifs_match_super(struct super_block *sb, void *data)
+ 	spin_lock(&ses->ses_lock);
+ 	spin_lock(&ses->chan_lock);
+ 	spin_lock(&tcon->tc_lock);
+-	if (!match_server(tcp_srv, ctx) ||
++	if (!match_server(tcp_srv, ctx, true) ||
+ 	    !match_session(ses, ctx) ||
+ 	    !match_tcon(tcon, ctx) ||
+-	    !match_prepath(sb, tcp_srv, mnt_data)) {
++	    !match_prepath(sb, tcon, mnt_data)) {
+ 		rc = 0;
+ 		goto out;
+ 	}
+diff --git a/fs/smb/client/dfs.c b/fs/smb/client/dfs.c
+index 2390b2fedd6a3..267536a7531df 100644
+--- a/fs/smb/client/dfs.c
++++ b/fs/smb/client/dfs.c
+@@ -249,14 +249,12 @@ static int __dfs_mount_share(struct cifs_mount_ctx *mnt_ctx)
+ 		server = mnt_ctx->server;
+ 		tcon = mnt_ctx->tcon;
+ 
+-		mutex_lock(&server->refpath_lock);
+-		spin_lock(&server->srv_lock);
+-		if (!server->origin_fullpath) {
+-			server->origin_fullpath = origin_fullpath;
++		spin_lock(&tcon->tc_lock);
++		if (!tcon->origin_fullpath) {
++			tcon->origin_fullpath = origin_fullpath;
+ 			origin_fullpath = NULL;
+ 		}
+-		spin_unlock(&server->srv_lock);
+-		mutex_unlock(&server->refpath_lock);
++		spin_unlock(&tcon->tc_lock);
+ 
+ 		if (list_empty(&tcon->dfs_ses_list)) {
+ 			list_replace_init(&mnt_ctx->dfs_ses_list,
+@@ -279,18 +277,13 @@ int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
+ {
+ 	struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
+ 	struct cifs_ses *ses;
+-	char *source = ctx->source;
+ 	bool nodfs = ctx->nodfs;
+ 	int rc;
+ 
+ 	*isdfs = false;
+-	/* Temporarily set @ctx->source to NULL as we're not matching DFS
+-	 * superblocks yet.  See cifs_match_super() and match_server().
+-	 */
+-	ctx->source = NULL;
+ 	rc = get_session(mnt_ctx, NULL);
+ 	if (rc)
+-		goto out;
++		return rc;
+ 
+ 	ctx->dfs_root_ses = mnt_ctx->ses;
+ 	/*
+@@ -304,7 +297,7 @@ int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
+ 		rc = dfs_get_referral(mnt_ctx, ctx->UNC + 1, NULL, NULL);
+ 		if (rc) {
+ 			if (rc != -ENOENT && rc != -EOPNOTSUPP && rc != -EIO)
+-				goto out;
++				return rc;
+ 			nodfs = true;
+ 		}
+ 	}
+@@ -312,7 +305,7 @@ int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
+ 		rc = cifs_mount_get_tcon(mnt_ctx);
+ 		if (!rc)
+ 			rc = cifs_is_path_remote(mnt_ctx);
+-		goto out;
++		return rc;
+ 	}
+ 
+ 	*isdfs = true;
+@@ -328,12 +321,7 @@ int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
+ 	rc = __dfs_mount_share(mnt_ctx);
+ 	if (ses == ctx->dfs_root_ses)
+ 		cifs_put_smb_ses(ses);
+-out:
+-	/*
+-	 * Restore previous value of @ctx->source so DFS superblock can be
+-	 * matched in cifs_match_super().
+-	 */
+-	ctx->source = source;
++
+ 	return rc;
+ }
+ 
+@@ -567,11 +555,11 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ 	int rc;
+ 	struct TCP_Server_Info *server = tcon->ses->server;
+ 	const struct smb_version_operations *ops = server->ops;
+-	struct super_block *sb = NULL;
+-	struct cifs_sb_info *cifs_sb;
+ 	struct dfs_cache_tgt_list tl = DFS_CACHE_TGT_LIST_INIT(tl);
+-	char *tree;
++	struct cifs_sb_info *cifs_sb = NULL;
++	struct super_block *sb = NULL;
+ 	struct dfs_info3_param ref = {0};
++	char *tree;
+ 
+ 	/* only send once per connect */
+ 	spin_lock(&tcon->tc_lock);
+@@ -603,19 +591,18 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ 		goto out;
+ 	}
+ 
+-	sb = cifs_get_tcp_super(server);
+-	if (IS_ERR(sb)) {
+-		rc = PTR_ERR(sb);
+-		cifs_dbg(VFS, "%s: could not find superblock: %d\n", __func__, rc);
+-		goto out;
+-	}
+-
+-	cifs_sb = CIFS_SB(sb);
++	sb = cifs_get_dfs_tcon_super(tcon);
++	if (!IS_ERR(sb))
++		cifs_sb = CIFS_SB(sb);
+ 
+-	/* If it is not dfs or there was no cached dfs referral, then reconnect to same share */
+-	if (!server->leaf_fullpath ||
++	/*
++	 * Tree connect to the last share in @tcon->tree_name when no dfs super
++	 * or cached dfs referral was found.
++	 */
++	if (!cifs_sb || !server->leaf_fullpath ||
+ 	    dfs_cache_noreq_find(server->leaf_fullpath + 1, &ref, &tl)) {
+-		rc = ops->tree_connect(xid, tcon->ses, tcon->tree_name, tcon, cifs_sb->local_nls);
++		rc = ops->tree_connect(xid, tcon->ses, tcon->tree_name, tcon,
++				       cifs_sb ? cifs_sb->local_nls : nlsc);
+ 		goto out;
+ 	}
+ 
+diff --git a/fs/smb/client/dfs.h b/fs/smb/client/dfs.h
+index 1c90df5ecfbda..98e9d2aca6a7a 100644
+--- a/fs/smb/client/dfs.h
++++ b/fs/smb/client/dfs.h
+@@ -39,16 +39,15 @@ static inline char *dfs_get_automount_devname(struct dentry *dentry, void *page)
+ {
+ 	struct cifs_sb_info *cifs_sb = CIFS_SB(dentry->d_sb);
+ 	struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
+-	struct TCP_Server_Info *server = tcon->ses->server;
+ 	size_t len;
+ 	char *s;
+ 
+-	spin_lock(&server->srv_lock);
+-	if (unlikely(!server->origin_fullpath)) {
+-		spin_unlock(&server->srv_lock);
++	spin_lock(&tcon->tc_lock);
++	if (unlikely(!tcon->origin_fullpath)) {
++		spin_unlock(&tcon->tc_lock);
+ 		return ERR_PTR(-EREMOTE);
+ 	}
+-	spin_unlock(&server->srv_lock);
++	spin_unlock(&tcon->tc_lock);
+ 
+ 	s = dentry_path_raw(dentry, page, PATH_MAX);
+ 	if (IS_ERR(s))
+@@ -57,16 +56,16 @@ static inline char *dfs_get_automount_devname(struct dentry *dentry, void *page)
+ 	if (!s[1])
+ 		s++;
+ 
+-	spin_lock(&server->srv_lock);
+-	len = strlen(server->origin_fullpath);
++	spin_lock(&tcon->tc_lock);
++	len = strlen(tcon->origin_fullpath);
+ 	if (s < (char *)page + len) {
+-		spin_unlock(&server->srv_lock);
++		spin_unlock(&tcon->tc_lock);
+ 		return ERR_PTR(-ENAMETOOLONG);
+ 	}
+ 
+ 	s -= len;
+-	memcpy(s, server->origin_fullpath, len);
+-	spin_unlock(&server->srv_lock);
++	memcpy(s, tcon->origin_fullpath, len);
++	spin_unlock(&tcon->tc_lock);
+ 	convert_delimiter(s, '/');
+ 
+ 	return s;
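dfs_get_automount_devname() builds the full source right-to-left: the dentry path already sits at the end of the page, and the origin prefix is copied in just before it after an overlap check. A userspace sketch of the same trick with made-up path strings:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char page[64];
	const char *prefix = "\\\\host\\share";
	const char *tail = "/dir/file";
	size_t len = strlen(prefix);

	/* place the tail at the end of the buffer first */
	char *s = page + sizeof(page) - strlen(tail) - 1;
	memcpy(s, tail, strlen(tail) + 1);

	if (s < page + len)	/* would the prefix overlap the start? */
		return 1;	/* -ENAMETOOLONG in the kernel */

	s -= len;
	memcpy(s, prefix, len);	/* no NUL: the tail follows directly */
	printf("%s\n", s);	/* \\host\share/dir/file */
	return 0;
}

Building backwards avoids a second buffer and a final concatenation, at the cost of the explicit length check before the prefix copy.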
+diff --git a/fs/smb/client/dfs_cache.c b/fs/smb/client/dfs_cache.c
+index 1513b2709889b..33adf43a01f1d 100644
+--- a/fs/smb/client/dfs_cache.c
++++ b/fs/smb/client/dfs_cache.c
+@@ -1248,18 +1248,20 @@ static int refresh_tcon(struct cifs_tcon *tcon, bool force_refresh)
+ int dfs_cache_remount_fs(struct cifs_sb_info *cifs_sb)
+ {
+ 	struct cifs_tcon *tcon;
+-	struct TCP_Server_Info *server;
+ 
+ 	if (!cifs_sb || !cifs_sb->master_tlink)
+ 		return -EINVAL;
+ 
+ 	tcon = cifs_sb_master_tcon(cifs_sb);
+-	server = tcon->ses->server;
+ 
+-	if (!server->origin_fullpath) {
++	spin_lock(&tcon->tc_lock);
++	if (!tcon->origin_fullpath) {
++		spin_unlock(&tcon->tc_lock);
+ 		cifs_dbg(FYI, "%s: not a dfs mount\n", __func__);
+ 		return 0;
+ 	}
++	spin_unlock(&tcon->tc_lock);
++
+ 	/*
+ 	 * After reconnecting to a different server, unique ids won't match anymore, so we disable
+ 	 * serverino. This prevents dentry revalidation to think the dentry are stale (ESTALE).
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index 051283386e229..1a854dc204823 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -4936,20 +4936,19 @@ oplock_break_ack:
+ 
+ 	_cifsFileInfo_put(cfile, false /* do not wait for ourself */, false);
+ 	/*
+-	 * releasing stale oplock after recent reconnect of smb session using
+-	 * a now incorrect file handle is not a data integrity issue but do
+-	 * not bother sending an oplock release if session to server still is
+-	 * disconnected since oplock already released by the server
++	 * MS-SMB2 3.2.5.19.1 and 3.2.5.19.2 (and MS-CIFS 3.2.5.42) do not require
++	 * an acknowledgment to be sent when the file has already been closed.
++	 * check for server null, since can race with kill_sb calling tree disconnect.
+ 	 */
+-	if (!oplock_break_cancelled) {
+-		/* check for server null since can race with kill_sb calling tree disconnect */
+-		if (tcon->ses && tcon->ses->server) {
+-			rc = tcon->ses->server->ops->oplock_response(tcon, persistent_fid,
+-				volatile_fid, net_fid, cinode);
+-			cifs_dbg(FYI, "Oplock release rc = %d\n", rc);
+-		} else
+-			pr_warn_once("lease break not sent for unmounted share\n");
+-	}
++	spin_lock(&cinode->open_file_lock);
++	if (tcon->ses && tcon->ses->server && !oplock_break_cancelled &&
++					!list_empty(&cinode->openFileList)) {
++		spin_unlock(&cinode->open_file_lock);
++		rc = tcon->ses->server->ops->oplock_response(tcon, persistent_fid,
++						volatile_fid, net_fid, cinode);
++		cifs_dbg(FYI, "Oplock release rc = %d\n", rc);
++	} else
++		spin_unlock(&cinode->open_file_lock);
+ 
+ 	cifs_done_oplock_break(cinode);
+ }
+diff --git a/fs/smb/client/misc.c b/fs/smb/client/misc.c
+index cd914be905b24..b0dedc26643b6 100644
+--- a/fs/smb/client/misc.c
++++ b/fs/smb/client/misc.c
+@@ -156,6 +156,7 @@ tconInfoFree(struct cifs_tcon *tcon)
+ #ifdef CONFIG_CIFS_DFS_UPCALL
+ 	dfs_put_root_smb_sessions(&tcon->dfs_ses_list);
+ #endif
++	kfree(tcon->origin_fullpath);
+ 	kfree(tcon);
+ }
+ 
+@@ -1106,20 +1107,25 @@ struct super_cb_data {
+ 	struct super_block *sb;
+ };
+ 
+-static void tcp_super_cb(struct super_block *sb, void *arg)
++static void tcon_super_cb(struct super_block *sb, void *arg)
+ {
+ 	struct super_cb_data *sd = arg;
+-	struct TCP_Server_Info *server = sd->data;
+ 	struct cifs_sb_info *cifs_sb;
+-	struct cifs_tcon *tcon;
++	struct cifs_tcon *t1 = sd->data, *t2;
+ 
+ 	if (sd->sb)
+ 		return;
+ 
+ 	cifs_sb = CIFS_SB(sb);
+-	tcon = cifs_sb_master_tcon(cifs_sb);
+-	if (tcon->ses->server == server)
++	t2 = cifs_sb_master_tcon(cifs_sb);
++
++	spin_lock(&t2->tc_lock);
++	if (t1->ses == t2->ses &&
++	    t1->ses->server == t2->ses->server &&
++	    t2->origin_fullpath &&
++	    dfs_src_pathname_equal(t2->origin_fullpath, t1->origin_fullpath))
+ 		sd->sb = sb;
++	spin_unlock(&t2->tc_lock);
+ }
+ 
+ static struct super_block *__cifs_get_super(void (*f)(struct super_block *, void *),
+@@ -1145,6 +1151,7 @@ static struct super_block *__cifs_get_super(void (*f)(struct super_block *, void
+ 			return sd.sb;
+ 		}
+ 	}
++	pr_warn_once("%s: could not find dfs superblock\n", __func__);
+ 	return ERR_PTR(-EINVAL);
+ }
+ 
+@@ -1154,9 +1161,15 @@ static void __cifs_put_super(struct super_block *sb)
+ 		cifs_sb_deactive(sb);
+ }
+ 
+-struct super_block *cifs_get_tcp_super(struct TCP_Server_Info *server)
++struct super_block *cifs_get_dfs_tcon_super(struct cifs_tcon *tcon)
+ {
+-	return __cifs_get_super(tcp_super_cb, server);
++	spin_lock(&tcon->tc_lock);
++	if (!tcon->origin_fullpath) {
++		spin_unlock(&tcon->tc_lock);
++		return ERR_PTR(-ENOENT);
++	}
++	spin_unlock(&tcon->tc_lock);
++	return __cifs_get_super(tcon_super_cb, tcon);
+ }
+ 
+ void cifs_put_tcp_super(struct super_block *sb)
+@@ -1238,9 +1251,16 @@ int cifs_inval_name_dfs_link_error(const unsigned int xid,
+ 	 */
+ 	if (strlen(full_path) < 2 || !cifs_sb ||
+ 	    (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS) ||
+-	    !is_tcon_dfs(tcon) || !ses->server->origin_fullpath)
++	    !is_tcon_dfs(tcon))
+ 		return 0;
+ 
++	spin_lock(&tcon->tc_lock);
++	if (!tcon->origin_fullpath) {
++		spin_unlock(&tcon->tc_lock);
++		return 0;
++	}
++	spin_unlock(&tcon->tc_lock);
++
+ 	/*
+ 	 * Slow path - tcon is DFS and @full_path has prefix path, so attempt
+ 	 * to get a referral to figure out whether it is an DFS link.
+@@ -1264,7 +1284,7 @@ int cifs_inval_name_dfs_link_error(const unsigned int xid,
+ 
+ 		/*
+ 		 * XXX: we are not using dfs_cache_find() here because we might
+-		 * end filling all the DFS cache and thus potentially
++		 * end up filling all the DFS cache and thus potentially
+ 		 * removing cached DFS targets that the client would eventually
+ 		 * need during failover.
+ 		 */
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index 163a03298430d..8e696fbd72fa8 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -398,9 +398,6 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 					rsp_iov);
+ 
+  finished:
+-	if (cfile)
+-		cifsFileInfo_put(cfile);
+-
+ 	SMB2_open_free(&rqst[0]);
+ 	if (rc == -EREMCHG) {
+ 		pr_warn_once("server share %s deleted\n", tcon->tree_name);
+@@ -529,6 +526,9 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 		break;
+ 	}
+ 
++	if (cfile)
++		cifsFileInfo_put(cfile);
++
+ 	if (rc && err_iov && err_buftype) {
+ 		memcpy(err_iov, rsp_iov, 3 * sizeof(*err_iov));
+ 		memcpy(err_buftype, resp_buftype, 3 * sizeof(*err_buftype));
+@@ -609,9 +609,6 @@ int smb2_query_path_info(const unsigned int xid, struct cifs_tcon *tcon,
+ 			if (islink)
+ 				rc = -EREMOTE;
+ 		}
+-		if (rc == -EREMOTE && IS_ENABLED(CONFIG_CIFS_DFS_UPCALL) && cifs_sb &&
+-		    (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS))
+-			rc = -EOPNOTSUPP;
+ 	}
+ 
+ out:
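Moving cifsFileInfo_put() from the finished: label to after the switch
keeps the cached-file reference held across the whole response-processing
block. A standalone sketch of the underlying rule, drop the reference only
after the last use (hypothetical names):

#include <stdio.h>
#include <stdlib.h>

struct obj {
	int refs;
	int data;
};

static void put(struct obj *o)
{
	if (--o->refs == 0)
		free(o);
}

int main(void)
{
	struct obj *o = calloc(1, sizeof(*o));

	if (!o)
		return 1;
	o->refs = 1;
	o->data = 42;
	printf("%d\n", o->data);	/* last use of o */
	put(o);				/* put only after the final use */
	return 0;
}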
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index a8bb9d00d33ad..3bac586e8a8eb 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -211,6 +211,16 @@ smb2_wait_mtu_credits(struct TCP_Server_Info *server, unsigned int size,
+ 
+ 	spin_lock(&server->req_lock);
+ 	while (1) {
++		spin_unlock(&server->req_lock);
++
++		spin_lock(&server->srv_lock);
++		if (server->tcpStatus == CifsExiting) {
++			spin_unlock(&server->srv_lock);
++			return -ENOENT;
++		}
++		spin_unlock(&server->srv_lock);
++
++		spin_lock(&server->req_lock);
+ 		if (server->credits <= 0) {
+ 			spin_unlock(&server->req_lock);
+ 			cifs_num_waiters_inc(server);
+@@ -221,15 +231,6 @@ smb2_wait_mtu_credits(struct TCP_Server_Info *server, unsigned int size,
+ 				return rc;
+ 			spin_lock(&server->req_lock);
+ 		} else {
+-			spin_unlock(&server->req_lock);
+-			spin_lock(&server->srv_lock);
+-			if (server->tcpStatus == CifsExiting) {
+-				spin_unlock(&server->srv_lock);
+-				return -ENOENT;
+-			}
+-			spin_unlock(&server->srv_lock);
+-
+-			spin_lock(&server->req_lock);
+ 			scredits = server->credits;
+ 			/* can deadlock with reopen */
+ 			if (scredits <= 8) {
+diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c
+index 0474d0bba0a2e..f280502a2aee8 100644
+--- a/fs/smb/client/transport.c
++++ b/fs/smb/client/transport.c
+@@ -522,6 +522,16 @@ wait_for_free_credits(struct TCP_Server_Info *server, const int num_credits,
+ 	}
+ 
+ 	while (1) {
++		spin_unlock(&server->req_lock);
++
++		spin_lock(&server->srv_lock);
++		if (server->tcpStatus == CifsExiting) {
++			spin_unlock(&server->srv_lock);
++			return -ENOENT;
++		}
++		spin_unlock(&server->srv_lock);
++
++		spin_lock(&server->req_lock);
+ 		if (*credits < num_credits) {
+ 			scredits = *credits;
+ 			spin_unlock(&server->req_lock);
+@@ -547,15 +557,6 @@ wait_for_free_credits(struct TCP_Server_Info *server, const int num_credits,
+ 				return -ERESTARTSYS;
+ 			spin_lock(&server->req_lock);
+ 		} else {
+-			spin_unlock(&server->req_lock);
+-
+-			spin_lock(&server->srv_lock);
+-			if (server->tcpStatus == CifsExiting) {
+-				spin_unlock(&server->srv_lock);
+-				return -ENOENT;
+-			}
+-			spin_unlock(&server->srv_lock);
+-
+ 			/*
+ 			 * For normal commands, reserve the last MAX_COMPOUND
+ 			 * credits to compound requests.
+@@ -569,7 +570,6 @@ wait_for_free_credits(struct TCP_Server_Info *server, const int num_credits,
+ 			 * for servers that are slow to hand out credits on
+ 			 * new sessions.
+ 			 */
+-			spin_lock(&server->req_lock);
+ 			if (!optype && num_credits == 1 &&
+ 			    server->in_flight > 2 * MAX_COMPOUND &&
+ 			    *credits <= MAX_COMPOUND) {
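Both wait loops above are restructured so that every iteration re-checks
server->tcpStatus (under srv_lock, with req_lock dropped) before looking at
the credit count, instead of only on the credits-available branch. A
userspace analogue of that invariant, re-testing the exit condition on
every pass (illustrative names; build with gcc -pthread):

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t req_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t request_q = PTHREAD_COND_INITIALIZER;
static int credits;
static bool exiting = true;	/* simulates tcpStatus == CifsExiting */

static int wait_for_free_credits(int num)
{
	pthread_mutex_lock(&req_lock);
	while (1) {
		if (exiting) {			/* checked on every pass */
			pthread_mutex_unlock(&req_lock);
			return -ENOENT;
		}
		if (credits < num) {
			pthread_cond_wait(&request_q, &req_lock);
			continue;		/* re-test exiting first */
		}
		credits -= num;
		pthread_mutex_unlock(&req_lock);
		return 0;
	}
}

int main(void)
{
	printf("rc = %d\n", wait_for_free_credits(1));
	return 0;
}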
+diff --git a/fs/smb/server/smb_common.c b/fs/smb/server/smb_common.c
+index 569e5eecdf3db..3e391a7d5a3ab 100644
+--- a/fs/smb/server/smb_common.c
++++ b/fs/smb/server/smb_common.c
+@@ -536,7 +536,7 @@ int ksmbd_extract_shortname(struct ksmbd_conn *conn, const char *longname,
+ 	out[baselen + 3] = PERIOD;
+ 
+ 	if (dot_present)
+-		memcpy(&out[baselen + 4], extension, 4);
++		memcpy(out + baselen + 4, extension, 4);
+ 	else
+ 		out[baselen + 4] = '\0';
+ 	smbConvertToUTF16((__le16 *)shortname, out, PATH_MAX,
+diff --git a/fs/splice.c b/fs/splice.c
+index 3e06611d19ae5..030e162985b5d 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -355,7 +355,6 @@ ssize_t direct_splice_read(struct file *in, loff_t *ppos,
+ 		reclaim -= ret;
+ 		remain = ret;
+ 		*ppos = kiocb.ki_pos;
+-		file_accessed(in);
+ 	} else if (ret < 0) {
+ 		/*
+ 		 * callers of ->splice_read() expect -EAGAIN on
+diff --git a/fs/udf/namei.c b/fs/udf/namei.c
+index fd20423d3ed24..fd29a66e7241f 100644
+--- a/fs/udf/namei.c
++++ b/fs/udf/namei.c
+@@ -793,11 +793,6 @@ static int udf_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 			if (!empty_dir(new_inode))
+ 				goto out_oiter;
+ 		}
+-		/*
+-		 * We need to protect against old_inode getting converted from
+-		 * ICB to normal directory.
+-		 */
+-		inode_lock_nested(old_inode, I_MUTEX_NONDIR2);
+ 		retval = udf_fiiter_find_entry(old_inode, &dotdot_name,
+ 					       &diriter);
+ 		if (retval == -ENOENT) {
+@@ -806,10 +801,8 @@ static int udf_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 				old_inode->i_ino);
+ 			retval = -EFSCORRUPTED;
+ 		}
+-		if (retval) {
+-			inode_unlock(old_inode);
++		if (retval)
+ 			goto out_oiter;
+-		}
+ 		has_diriter = true;
+ 		tloc = lelb_to_cpu(diriter.fi.icb.extLocation);
+ 		if (udf_get_lb_pblock(old_inode->i_sb, &tloc, 0) !=
+@@ -889,7 +882,6 @@ static int udf_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 			       udf_dir_entry_len(&diriter.fi));
+ 		udf_fiiter_write_fi(&diriter, NULL);
+ 		udf_fiiter_release(&diriter);
+-		inode_unlock(old_inode);
+ 
+ 		inode_dec_link_count(old_dir);
+ 		if (new_inode)
+@@ -901,10 +893,8 @@ static int udf_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ 	}
+ 	return 0;
+ out_oiter:
+-	if (has_diriter) {
++	if (has_diriter)
+ 		udf_fiiter_release(&diriter);
+-		inode_unlock(old_inode);
+-	}
+ 	udf_fiiter_release(&oiter);
+ 
+ 	return retval;
+diff --git a/fs/verity/enable.c b/fs/verity/enable.c
+index fc4c50e5219dc..bd86b25ac084b 100644
+--- a/fs/verity/enable.c
++++ b/fs/verity/enable.c
+@@ -7,6 +7,7 @@
+ 
+ #include "fsverity_private.h"
+ 
++#include <crypto/hash.h>
+ #include <linux/mount.h>
+ #include <linux/sched/signal.h>
+ #include <linux/uaccess.h>
+@@ -20,7 +21,7 @@ struct block_buffer {
+ /* Hash a block, writing the result to the next level's pending block buffer. */
+ static int hash_one_block(struct inode *inode,
+ 			  const struct merkle_tree_params *params,
+-			  struct ahash_request *req, struct block_buffer *cur)
++			  struct block_buffer *cur)
+ {
+ 	struct block_buffer *next = cur + 1;
+ 	int err;
+@@ -36,8 +37,7 @@ static int hash_one_block(struct inode *inode,
+ 	/* Zero-pad the block if it's shorter than the block size. */
+ 	memset(&cur->data[cur->filled], 0, params->block_size - cur->filled);
+ 
+-	err = fsverity_hash_block(params, inode, req, virt_to_page(cur->data),
+-				  offset_in_page(cur->data),
++	err = fsverity_hash_block(params, inode, cur->data,
+ 				  &next->data[next->filled]);
+ 	if (err)
+ 		return err;
+@@ -76,7 +76,6 @@ static int build_merkle_tree(struct file *filp,
+ 	struct inode *inode = file_inode(filp);
+ 	const u64 data_size = inode->i_size;
+ 	const int num_levels = params->num_levels;
+-	struct ahash_request *req;
+ 	struct block_buffer _buffers[1 + FS_VERITY_MAX_LEVELS + 1] = {};
+ 	struct block_buffer *buffers = &_buffers[1];
+ 	unsigned long level_offset[FS_VERITY_MAX_LEVELS];
+@@ -90,9 +89,6 @@ static int build_merkle_tree(struct file *filp,
+ 		return 0;
+ 	}
+ 
+-	/* This allocation never fails, since it's mempool-backed. */
+-	req = fsverity_alloc_hash_request(params->hash_alg, GFP_KERNEL);
+-
+ 	/*
+ 	 * Allocate the block buffers.  Buffer "-1" is for data blocks.
+ 	 * Buffers 0 <= level < num_levels are for the actual tree levels.
+@@ -130,7 +126,7 @@ static int build_merkle_tree(struct file *filp,
+ 			fsverity_err(inode, "Short read of file data");
+ 			goto out;
+ 		}
+-		err = hash_one_block(inode, params, req, &buffers[-1]);
++		err = hash_one_block(inode, params, &buffers[-1]);
+ 		if (err)
+ 			goto out;
+ 		for (level = 0; level < num_levels; level++) {
+@@ -141,8 +137,7 @@ static int build_merkle_tree(struct file *filp,
+ 			}
+ 			/* Next block at @level is full */
+ 
+-			err = hash_one_block(inode, params, req,
+-					     &buffers[level]);
++			err = hash_one_block(inode, params, &buffers[level]);
+ 			if (err)
+ 				goto out;
+ 			err = write_merkle_tree_block(inode,
+@@ -162,8 +157,7 @@ static int build_merkle_tree(struct file *filp,
+ 	/* Finish all nonempty pending tree blocks. */
+ 	for (level = 0; level < num_levels; level++) {
+ 		if (buffers[level].filled != 0) {
+-			err = hash_one_block(inode, params, req,
+-					     &buffers[level]);
++			err = hash_one_block(inode, params, &buffers[level]);
+ 			if (err)
+ 				goto out;
+ 			err = write_merkle_tree_block(inode,
+@@ -183,7 +177,6 @@ static int build_merkle_tree(struct file *filp,
+ out:
+ 	for (level = -1; level < num_levels; level++)
+ 		kfree(buffers[level].data);
+-	fsverity_free_hash_request(params->hash_alg, req);
+ 	return err;
+ }
+ 
+diff --git a/fs/verity/fsverity_private.h b/fs/verity/fsverity_private.h
+index d34dcc033d723..8527beca2a454 100644
+--- a/fs/verity/fsverity_private.h
++++ b/fs/verity/fsverity_private.h
+@@ -11,9 +11,6 @@
+ #define pr_fmt(fmt) "fs-verity: " fmt
+ 
+ #include <linux/fsverity.h>
+-#include <linux/mempool.h>
+-
+-struct ahash_request;
+ 
+ /*
+  * Implementation limit: maximum depth of the Merkle tree.  For now 8 is plenty;
+@@ -23,11 +20,10 @@ struct ahash_request;
+ 
+ /* A hash algorithm supported by fs-verity */
+ struct fsverity_hash_alg {
+-	struct crypto_ahash *tfm; /* hash tfm, allocated on demand */
++	struct crypto_shash *tfm; /* hash tfm, allocated on demand */
+ 	const char *name;	  /* crypto API name, e.g. sha256 */
+ 	unsigned int digest_size; /* digest size in bytes, e.g. 32 for SHA-256 */
+ 	unsigned int block_size;  /* block size in bytes, e.g. 64 for SHA-256 */
+-	mempool_t req_pool;	  /* mempool with a preallocated hash request */
+ 	/*
+ 	 * The HASH_ALGO_* constant for this algorithm.  This is different from
+ 	 * FS_VERITY_HASH_ALG_*, which uses a different numbering scheme.
+@@ -85,15 +81,10 @@ extern struct fsverity_hash_alg fsverity_hash_algs[];
+ 
+ struct fsverity_hash_alg *fsverity_get_hash_alg(const struct inode *inode,
+ 						unsigned int num);
+-struct ahash_request *fsverity_alloc_hash_request(struct fsverity_hash_alg *alg,
+-						  gfp_t gfp_flags);
+-void fsverity_free_hash_request(struct fsverity_hash_alg *alg,
+-				struct ahash_request *req);
+ const u8 *fsverity_prepare_hash_state(struct fsverity_hash_alg *alg,
+ 				      const u8 *salt, size_t salt_size);
+ int fsverity_hash_block(const struct merkle_tree_params *params,
+-			const struct inode *inode, struct ahash_request *req,
+-			struct page *page, unsigned int offset, u8 *out);
++			const struct inode *inode, const void *data, u8 *out);
+ int fsverity_hash_buffer(struct fsverity_hash_alg *alg,
+ 			 const void *data, size_t size, u8 *out);
+ void __init fsverity_check_hash_algs(void);
+diff --git a/fs/verity/hash_algs.c b/fs/verity/hash_algs.c
+index ea00dbedf756b..e7e982412e23a 100644
+--- a/fs/verity/hash_algs.c
++++ b/fs/verity/hash_algs.c
+@@ -8,7 +8,6 @@
+ #include "fsverity_private.h"
+ 
+ #include <crypto/hash.h>
+-#include <linux/scatterlist.h>
+ 
+ /* The hash algorithms supported by fs-verity */
+ struct fsverity_hash_alg fsverity_hash_algs[] = {
+@@ -44,7 +43,7 @@ struct fsverity_hash_alg *fsverity_get_hash_alg(const struct inode *inode,
+ 						unsigned int num)
+ {
+ 	struct fsverity_hash_alg *alg;
+-	struct crypto_ahash *tfm;
++	struct crypto_shash *tfm;
+ 	int err;
+ 
+ 	if (num >= ARRAY_SIZE(fsverity_hash_algs) ||
+@@ -63,11 +62,7 @@ struct fsverity_hash_alg *fsverity_get_hash_alg(const struct inode *inode,
+ 	if (alg->tfm != NULL)
+ 		goto out_unlock;
+ 
+-	/*
+-	 * Using the shash API would make things a bit simpler, but the ahash
+-	 * API is preferable as it allows the use of crypto accelerators.
+-	 */
+-	tfm = crypto_alloc_ahash(alg->name, 0, 0);
++	tfm = crypto_alloc_shash(alg->name, 0, 0);
+ 	if (IS_ERR(tfm)) {
+ 		if (PTR_ERR(tfm) == -ENOENT) {
+ 			fsverity_warn(inode,
+@@ -84,68 +79,26 @@ struct fsverity_hash_alg *fsverity_get_hash_alg(const struct inode *inode,
+ 	}
+ 
+ 	err = -EINVAL;
+-	if (WARN_ON_ONCE(alg->digest_size != crypto_ahash_digestsize(tfm)))
++	if (WARN_ON_ONCE(alg->digest_size != crypto_shash_digestsize(tfm)))
+ 		goto err_free_tfm;
+-	if (WARN_ON_ONCE(alg->block_size != crypto_ahash_blocksize(tfm)))
+-		goto err_free_tfm;
+-
+-	err = mempool_init_kmalloc_pool(&alg->req_pool, 1,
+-					sizeof(struct ahash_request) +
+-					crypto_ahash_reqsize(tfm));
+-	if (err)
++	if (WARN_ON_ONCE(alg->block_size != crypto_shash_blocksize(tfm)))
+ 		goto err_free_tfm;
+ 
+ 	pr_info("%s using implementation \"%s\"\n",
+-		alg->name, crypto_ahash_driver_name(tfm));
++		alg->name, crypto_shash_driver_name(tfm));
+ 
+ 	/* pairs with smp_load_acquire() above */
+ 	smp_store_release(&alg->tfm, tfm);
+ 	goto out_unlock;
+ 
+ err_free_tfm:
+-	crypto_free_ahash(tfm);
++	crypto_free_shash(tfm);
+ 	alg = ERR_PTR(err);
+ out_unlock:
+ 	mutex_unlock(&fsverity_hash_alg_init_mutex);
+ 	return alg;
+ }
+ 
+-/**
+- * fsverity_alloc_hash_request() - allocate a hash request object
+- * @alg: the hash algorithm for which to allocate the request
+- * @gfp_flags: memory allocation flags
+- *
+- * This is mempool-backed, so this never fails if __GFP_DIRECT_RECLAIM is set in
+- * @gfp_flags.  However, in that case this might need to wait for all
+- * previously-allocated requests to be freed.  So to avoid deadlocks, callers
+- * must never need multiple requests at a time to make forward progress.
+- *
+- * Return: the request object on success; NULL on failure (but see above)
+- */
+-struct ahash_request *fsverity_alloc_hash_request(struct fsverity_hash_alg *alg,
+-						  gfp_t gfp_flags)
+-{
+-	struct ahash_request *req = mempool_alloc(&alg->req_pool, gfp_flags);
+-
+-	if (req)
+-		ahash_request_set_tfm(req, alg->tfm);
+-	return req;
+-}
+-
+-/**
+- * fsverity_free_hash_request() - free a hash request object
+- * @alg: the hash algorithm
+- * @req: the hash request object to free
+- */
+-void fsverity_free_hash_request(struct fsverity_hash_alg *alg,
+-				struct ahash_request *req)
+-{
+-	if (req) {
+-		ahash_request_zero(req);
+-		mempool_free(req, &alg->req_pool);
+-	}
+-}
+-
+ /**
+  * fsverity_prepare_hash_state() - precompute the initial hash state
+  * @alg: hash algorithm
+@@ -159,23 +112,20 @@ const u8 *fsverity_prepare_hash_state(struct fsverity_hash_alg *alg,
+ 				      const u8 *salt, size_t salt_size)
+ {
+ 	u8 *hashstate = NULL;
+-	struct ahash_request *req = NULL;
++	SHASH_DESC_ON_STACK(desc, alg->tfm);
+ 	u8 *padded_salt = NULL;
+ 	size_t padded_salt_size;
+-	struct scatterlist sg;
+-	DECLARE_CRYPTO_WAIT(wait);
+ 	int err;
+ 
++	desc->tfm = alg->tfm;
++
+ 	if (salt_size == 0)
+ 		return NULL;
+ 
+-	hashstate = kmalloc(crypto_ahash_statesize(alg->tfm), GFP_KERNEL);
++	hashstate = kmalloc(crypto_shash_statesize(alg->tfm), GFP_KERNEL);
+ 	if (!hashstate)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	/* This allocation never fails, since it's mempool-backed. */
+-	req = fsverity_alloc_hash_request(alg, GFP_KERNEL);
+-
+ 	/*
+ 	 * Zero-pad the salt to the next multiple of the input size of the hash
+ 	 * algorithm's compression function, e.g. 64 bytes for SHA-256 or 128
+@@ -190,26 +140,18 @@ const u8 *fsverity_prepare_hash_state(struct fsverity_hash_alg *alg,
+ 		goto err_free;
+ 	}
+ 	memcpy(padded_salt, salt, salt_size);
+-
+-	sg_init_one(&sg, padded_salt, padded_salt_size);
+-	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP |
+-					CRYPTO_TFM_REQ_MAY_BACKLOG,
+-				   crypto_req_done, &wait);
+-	ahash_request_set_crypt(req, &sg, NULL, padded_salt_size);
+-
+-	err = crypto_wait_req(crypto_ahash_init(req), &wait);
++	err = crypto_shash_init(desc);
+ 	if (err)
+ 		goto err_free;
+ 
+-	err = crypto_wait_req(crypto_ahash_update(req), &wait);
++	err = crypto_shash_update(desc, padded_salt, padded_salt_size);
+ 	if (err)
+ 		goto err_free;
+ 
+-	err = crypto_ahash_export(req, hashstate);
++	err = crypto_shash_export(desc, hashstate);
+ 	if (err)
+ 		goto err_free;
+ out:
+-	fsverity_free_hash_request(alg, req);
+ 	kfree(padded_salt);
+ 	return hashstate;
+ 
+@@ -223,9 +165,7 @@ err_free:
+  * fsverity_hash_block() - hash a single data or hash block
+  * @params: the Merkle tree's parameters
+  * @inode: inode for which the hashing is being done
+- * @req: preallocated hash request
+- * @page: the page containing the block to hash
+- * @offset: the offset of the block within @page
++ * @data: virtual address of a buffer containing the block to hash
+  * @out: output digest, size 'params->digest_size' bytes
+  *
+  * Hash a single data or hash block.  The hash is salted if a salt is specified
+@@ -234,33 +174,24 @@ err_free:
+  * Return: 0 on success, -errno on failure
+  */
+ int fsverity_hash_block(const struct merkle_tree_params *params,
+-			const struct inode *inode, struct ahash_request *req,
+-			struct page *page, unsigned int offset, u8 *out)
++			const struct inode *inode, const void *data, u8 *out)
+ {
+-	struct scatterlist sg;
+-	DECLARE_CRYPTO_WAIT(wait);
++	SHASH_DESC_ON_STACK(desc, params->hash_alg->tfm);
+ 	int err;
+ 
+-	sg_init_table(&sg, 1);
+-	sg_set_page(&sg, page, params->block_size, offset);
+-	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP |
+-					CRYPTO_TFM_REQ_MAY_BACKLOG,
+-				   crypto_req_done, &wait);
+-	ahash_request_set_crypt(req, &sg, out, params->block_size);
++	desc->tfm = params->hash_alg->tfm;
+ 
+ 	if (params->hashstate) {
+-		err = crypto_ahash_import(req, params->hashstate);
++		err = crypto_shash_import(desc, params->hashstate);
+ 		if (err) {
+ 			fsverity_err(inode,
+ 				     "Error %d importing hash state", err);
+ 			return err;
+ 		}
+-		err = crypto_ahash_finup(req);
++		err = crypto_shash_finup(desc, data, params->block_size, out);
+ 	} else {
+-		err = crypto_ahash_digest(req);
++		err = crypto_shash_digest(desc, data, params->block_size, out);
+ 	}
+-
+-	err = crypto_wait_req(err, &wait);
+ 	if (err)
+ 		fsverity_err(inode, "Error %d computing block hash", err);
+ 	return err;
+@@ -273,32 +204,12 @@ int fsverity_hash_block(const struct merkle_tree_params *params,
+  * @size: size of data to hash, in bytes
+  * @out: output digest, size 'alg->digest_size' bytes
+  *
+- * Hash some data which is located in physically contiguous memory (i.e. memory
+- * allocated by kmalloc(), not by vmalloc()).  No salt is used.
+- *
+  * Return: 0 on success, -errno on failure
+  */
+ int fsverity_hash_buffer(struct fsverity_hash_alg *alg,
+ 			 const void *data, size_t size, u8 *out)
+ {
+-	struct ahash_request *req;
+-	struct scatterlist sg;
+-	DECLARE_CRYPTO_WAIT(wait);
+-	int err;
+-
+-	/* This allocation never fails, since it's mempool-backed. */
+-	req = fsverity_alloc_hash_request(alg, GFP_KERNEL);
+-
+-	sg_init_one(&sg, data, size);
+-	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP |
+-					CRYPTO_TFM_REQ_MAY_BACKLOG,
+-				   crypto_req_done, &wait);
+-	ahash_request_set_crypt(req, &sg, out, size);
+-
+-	err = crypto_wait_req(crypto_ahash_digest(req), &wait);
+-
+-	fsverity_free_hash_request(alg, req);
+-	return err;
++	return crypto_shash_tfm_digest(alg->tfm, data, size, out);
+ }
+ 
+ void __init fsverity_check_hash_algs(void)
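The conversion above replaces the ahash scatterlist/completion machinery
with the synchronous shash API: the salted hash state is computed once and
export()ed, then each block import()s that state and finup()s over the
block. A condensed kernel-context sketch of that flow (not a standalone
program; error paths trimmed, names illustrative):

#include <crypto/hash.h>
#include <linux/slab.h>

static int sha256_salted_block(const u8 *salt, size_t salt_size,
			       const void *block, size_t block_size,
			       u8 *out)
{
	struct crypto_shash *tfm = crypto_alloc_shash("sha256", 0, 0);
	u8 *state;
	int err;

	if (IS_ERR(tfm))
		return PTR_ERR(tfm);
	state = kmalloc(crypto_shash_statesize(tfm), GFP_KERNEL);
	if (!state) {
		crypto_free_shash(tfm);
		return -ENOMEM;
	}

	{
		SHASH_DESC_ON_STACK(desc, tfm);

		desc->tfm = tfm;
		/* Precompute the hash state over the salt once... */
		err = crypto_shash_init(desc) ?:
		      crypto_shash_update(desc, salt, salt_size) ?:
		      crypto_shash_export(desc, state);
		/* ...then reuse it for each block. */
		if (!err)
			err = crypto_shash_import(desc, state) ?:
			      crypto_shash_finup(desc, block, block_size, out);
	}

	kfree(state);
	crypto_free_shash(tfm);
	return err;
}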
+diff --git a/fs/verity/verify.c b/fs/verity/verify.c
+index e2508222750b3..cf40e2fe6ace7 100644
+--- a/fs/verity/verify.c
++++ b/fs/verity/verify.c
+@@ -29,21 +29,6 @@ static inline int cmp_hashes(const struct fsverity_info *vi,
+ 	return -EBADMSG;
+ }
+ 
+-static bool data_is_zeroed(struct inode *inode, struct page *page,
+-			   unsigned int len, unsigned int offset)
+-{
+-	void *virt = kmap_local_page(page);
+-
+-	if (memchr_inv(virt + offset, 0, len)) {
+-		kunmap_local(virt);
+-		fsverity_err(inode,
+-			     "FILE CORRUPTED!  Data past EOF is not zeroed");
+-		return false;
+-	}
+-	kunmap_local(virt);
+-	return true;
+-}
+-
+ /*
+  * Returns true if the hash block with index @hblock_idx in the tree, located in
+  * @hpage, has already been verified.
+@@ -122,9 +107,7 @@ static bool is_hash_block_verified(struct fsverity_info *vi, struct page *hpage,
+  */
+ static bool
+ verify_data_block(struct inode *inode, struct fsverity_info *vi,
+-		  struct ahash_request *req, struct page *data_page,
+-		  u64 data_pos, unsigned int dblock_offset_in_page,
+-		  unsigned long max_ra_pages)
++		  const void *data, u64 data_pos, unsigned long max_ra_pages)
+ {
+ 	const struct merkle_tree_params *params = &vi->tree_params;
+ 	const unsigned int hsize = params->digest_size;
+@@ -136,11 +119,11 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
+ 	struct {
+ 		/* Page containing the hash block */
+ 		struct page *page;
++		/* Mapped address of the hash block (will be within @page) */
++		const void *addr;
+ 		/* Index of the hash block in the tree overall */
+ 		unsigned long index;
+-		/* Byte offset of the hash block within @page */
+-		unsigned int offset_in_page;
+-		/* Byte offset of the wanted hash within @page */
++		/* Byte offset of the wanted hash relative to @addr */
+ 		unsigned int hoffset;
+ 	} hblocks[FS_VERITY_MAX_LEVELS];
+ 	/*
+@@ -150,6 +133,9 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
+ 	u64 hidx = data_pos >> params->log_blocksize;
+ 	int err;
+ 
++	/* Up to 1 + FS_VERITY_MAX_LEVELS pages may be mapped at once */
++	BUILD_BUG_ON(1 + FS_VERITY_MAX_LEVELS > KM_MAX_IDX);
++
+ 	if (unlikely(data_pos >= inode->i_size)) {
+ 		/*
+ 		 * This can happen in the data page spanning EOF when the Merkle
+@@ -159,8 +145,12 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
+ 		 * any part past EOF should be all zeroes.  Therefore, we need
+ 		 * to verify that any data blocks fully past EOF are all zeroes.
+ 		 */
+-		return data_is_zeroed(inode, data_page, params->block_size,
+-				      dblock_offset_in_page);
++		if (memchr_inv(data, 0, params->block_size)) {
++			fsverity_err(inode,
++				     "FILE CORRUPTED!  Data past EOF is not zeroed");
++			return false;
++		}
++		return true;
+ 	}
+ 
+ 	/*
+@@ -175,6 +165,7 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
+ 		unsigned int hblock_offset_in_page;
+ 		unsigned int hoffset;
+ 		struct page *hpage;
++		const void *haddr;
+ 
+ 		/*
+ 		 * The index of the block in the current level; also the index
+@@ -192,10 +183,9 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
+ 		hblock_offset_in_page =
+ 			(hblock_idx << params->log_blocksize) & ~PAGE_MASK;
+ 
+-		/* Byte offset of the hash within the page */
+-		hoffset = hblock_offset_in_page +
+-			  ((hidx << params->log_digestsize) &
+-			   (params->block_size - 1));
++		/* Byte offset of the hash within the block */
++		hoffset = (hidx << params->log_digestsize) &
++			  (params->block_size - 1);
+ 
+ 		hpage = inode->i_sb->s_vop->read_merkle_tree_page(inode,
+ 				hpage_idx, level == 0 ? min(max_ra_pages,
+@@ -207,15 +197,17 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi,
+ 				     err, hpage_idx);
+ 			goto out;
+ 		}
++		haddr = kmap_local_page(hpage) + hblock_offset_in_page;
+ 		if (is_hash_block_verified(vi, hpage, hblock_idx)) {
+-			memcpy_from_page(_want_hash, hpage, hoffset, hsize);
++			memcpy(_want_hash, haddr + hoffset, hsize);
+ 			want_hash = _want_hash;
++			kunmap_local(haddr);
+ 			put_page(hpage);
+ 			goto descend;
+ 		}
+ 		hblocks[level].page = hpage;
++		hblocks[level].addr = haddr;
+ 		hblocks[level].index = hblock_idx;
+-		hblocks[level].offset_in_page = hblock_offset_in_page;
+ 		hblocks[level].hoffset = hoffset;
+ 		hidx = next_hidx;
+ 	}
+@@ -225,13 +217,11 @@ descend:
+ 	/* Descend the tree verifying hash blocks. */
+ 	for (; level > 0; level--) {
+ 		struct page *hpage = hblocks[level - 1].page;
++		const void *haddr = hblocks[level - 1].addr;
+ 		unsigned long hblock_idx = hblocks[level - 1].index;
+-		unsigned int hblock_offset_in_page =
+-			hblocks[level - 1].offset_in_page;
+ 		unsigned int hoffset = hblocks[level - 1].hoffset;
+ 
+-		err = fsverity_hash_block(params, inode, req, hpage,
+-					  hblock_offset_in_page, real_hash);
++		err = fsverity_hash_block(params, inode, haddr, real_hash);
+ 		if (err)
+ 			goto out;
+ 		err = cmp_hashes(vi, want_hash, real_hash, data_pos, level - 1);
+@@ -246,29 +236,31 @@ descend:
+ 			set_bit(hblock_idx, vi->hash_block_verified);
+ 		else
+ 			SetPageChecked(hpage);
+-		memcpy_from_page(_want_hash, hpage, hoffset, hsize);
++		memcpy(_want_hash, haddr + hoffset, hsize);
+ 		want_hash = _want_hash;
++		kunmap_local(haddr);
+ 		put_page(hpage);
+ 	}
+ 
+ 	/* Finally, verify the data block. */
+-	err = fsverity_hash_block(params, inode, req, data_page,
+-				  dblock_offset_in_page, real_hash);
++	err = fsverity_hash_block(params, inode, data, real_hash);
+ 	if (err)
+ 		goto out;
+ 	err = cmp_hashes(vi, want_hash, real_hash, data_pos, -1);
+ out:
+-	for (; level > 0; level--)
++	for (; level > 0; level--) {
++		kunmap_local(hblocks[level - 1].addr);
+ 		put_page(hblocks[level - 1].page);
+-
++	}
+ 	return err == 0;
+ }
+ 
+ static bool
+-verify_data_blocks(struct inode *inode, struct fsverity_info *vi,
+-		   struct ahash_request *req, struct folio *data_folio,
+-		   size_t len, size_t offset, unsigned long max_ra_pages)
++verify_data_blocks(struct folio *data_folio, size_t len, size_t offset,
++		   unsigned long max_ra_pages)
+ {
++	struct inode *inode = data_folio->mapping->host;
++	struct fsverity_info *vi = inode->i_verity_info;
+ 	const unsigned int block_size = vi->tree_params.block_size;
+ 	u64 pos = (u64)data_folio->index << PAGE_SHIFT;
+ 
+@@ -278,11 +270,14 @@ verify_data_blocks(struct inode *inode, struct fsverity_info *vi,
+ 			 folio_test_uptodate(data_folio)))
+ 		return false;
+ 	do {
+-		struct page *data_page =
+-			folio_page(data_folio, offset >> PAGE_SHIFT);
+-
+-		if (!verify_data_block(inode, vi, req, data_page, pos + offset,
+-				       offset & ~PAGE_MASK, max_ra_pages))
++		void *data;
++		bool valid;
++
++		data = kmap_local_folio(data_folio, offset);
++		valid = verify_data_block(inode, vi, data, pos + offset,
++					  max_ra_pages);
++		kunmap_local(data);
++		if (!valid)
+ 			return false;
+ 		offset += block_size;
+ 		len -= block_size;
+@@ -304,19 +299,7 @@ verify_data_blocks(struct inode *inode, struct fsverity_info *vi,
+  */
+ bool fsverity_verify_blocks(struct folio *folio, size_t len, size_t offset)
+ {
+-	struct inode *inode = folio->mapping->host;
+-	struct fsverity_info *vi = inode->i_verity_info;
+-	struct ahash_request *req;
+-	bool valid;
+-
+-	/* This allocation never fails, since it's mempool-backed. */
+-	req = fsverity_alloc_hash_request(vi->tree_params.hash_alg, GFP_NOFS);
+-
+-	valid = verify_data_blocks(inode, vi, req, folio, len, offset, 0);
+-
+-	fsverity_free_hash_request(vi->tree_params.hash_alg, req);
+-
+-	return valid;
++	return verify_data_blocks(folio, len, offset, 0);
+ }
+ EXPORT_SYMBOL_GPL(fsverity_verify_blocks);
+ 
+@@ -337,15 +320,9 @@ EXPORT_SYMBOL_GPL(fsverity_verify_blocks);
+  */
+ void fsverity_verify_bio(struct bio *bio)
+ {
+-	struct inode *inode = bio_first_page_all(bio)->mapping->host;
+-	struct fsverity_info *vi = inode->i_verity_info;
+-	struct ahash_request *req;
+ 	struct folio_iter fi;
+ 	unsigned long max_ra_pages = 0;
+ 
+-	/* This allocation never fails, since it's mempool-backed. */
+-	req = fsverity_alloc_hash_request(vi->tree_params.hash_alg, GFP_NOFS);
+-
+ 	if (bio->bi_opf & REQ_RAHEAD) {
+ 		/*
+ 		 * If this bio is for data readahead, then we also do readahead
+@@ -360,14 +337,12 @@ void fsverity_verify_bio(struct bio *bio)
+ 	}
+ 
+ 	bio_for_each_folio_all(fi, bio) {
+-		if (!verify_data_blocks(inode, vi, req, fi.folio, fi.length,
+-					fi.offset, max_ra_pages)) {
++		if (!verify_data_blocks(fi.folio, fi.length, fi.offset,
++					max_ra_pages)) {
+ 			bio->bi_status = BLK_STS_IOERR;
+ 			break;
+ 		}
+ 	}
+-
+-	fsverity_free_hash_request(vi->tree_params.hash_alg, req);
+ }
+ EXPORT_SYMBOL_GPL(fsverity_verify_bio);
+ #endif /* CONFIG_BLOCK */
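The verify.c changes lean on kmap_local_page() semantics: mappings are
cheap and per-task, only a limited number may be live at once (hence the
BUILD_BUG_ON against KM_MAX_IDX), and they must be released in reverse
order of creation, which is why the out: loop unmaps hash blocks from the
innermost level outward. A kernel-context sketch of that nesting
discipline (illustrative, not from the patch):

#include <linux/highmem.h>
#include <linux/string.h>

static void copy_between_pages(struct page *dst, struct page *src,
			       size_t len)
{
	void *d = kmap_local_page(dst);		/* mapping 1 */
	void *s = kmap_local_page(src);		/* mapping 2, nested */

	memcpy(d, s, len);

	kunmap_local(s);			/* LIFO: release 2 first... */
	kunmap_local(d);			/* ...then 1 */
}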
+diff --git a/include/drm/bridge/samsung-dsim.h b/include/drm/bridge/samsung-dsim.h
+index ba5484de2b30e..a1a5b2b89a7ab 100644
+--- a/include/drm/bridge/samsung-dsim.h
++++ b/include/drm/bridge/samsung-dsim.h
+@@ -54,11 +54,14 @@ struct samsung_dsim_driver_data {
+ 	unsigned int has_freqband:1;
+ 	unsigned int has_clklane_stop:1;
+ 	unsigned int num_clks;
++	unsigned int min_freq;
+ 	unsigned int max_freq;
+ 	unsigned int wait_for_reset;
+ 	unsigned int num_bits_resol;
+ 	unsigned int pll_p_offset;
+ 	const unsigned int *reg_values;
++	u16 m_min;
++	u16 m_max;
+ };
+ 
+ struct samsung_dsim_host_ops {
+diff --git a/include/drm/drm_fixed.h b/include/drm/drm_fixed.h
+index 255645c1f9a89..6ea339d5de088 100644
+--- a/include/drm/drm_fixed.h
++++ b/include/drm/drm_fixed.h
+@@ -71,6 +71,7 @@ static inline u32 dfixed_div(fixed20_12 A, fixed20_12 B)
+ }
+ 
+ #define DRM_FIXED_POINT		32
++#define DRM_FIXED_POINT_HALF	16
+ #define DRM_FIXED_ONE		(1ULL << DRM_FIXED_POINT)
+ #define DRM_FIXED_DECIMAL_MASK	(DRM_FIXED_ONE - 1)
+ #define DRM_FIXED_DIGITS_MASK	(~DRM_FIXED_DECIMAL_MASK)
+@@ -87,6 +88,11 @@ static inline int drm_fixp2int(s64 a)
+ 	return ((s64)a) >> DRM_FIXED_POINT;
+ }
+ 
++static inline int drm_fixp2int_round(s64 a)
++{
++	return drm_fixp2int(a + (DRM_FIXED_ONE >> 1));
++}
++
+ static inline int drm_fixp2int_ceil(s64 a)
+ {
+ 	if (a > 0)
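Rounding to nearest in 32.32 fixed point means adding one half, i.e.
1ULL << 31 (DRM_FIXED_ONE >> 1), before truncating. A standalone check of
the arithmetic (positive values; for negatives the arithmetic shift
truncates toward minus infinity):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int64_t half = 1LL << 31;		/* 0.5 in 32.32 format */
	int64_t a = (2LL << 32) | (1LL << 31);	/* 2.5 in 32.32 format */

	/* (2.5 + 0.5) truncated to an integer is 3 */
	printf("%lld\n", (long long)((a + half) >> 32));
	return 0;
}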
+diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
+index 7d6d73b781472..03644237e1efb 100644
+--- a/include/linux/bitmap.h
++++ b/include/linux/bitmap.h
+@@ -302,12 +302,10 @@ void bitmap_to_arr32(u32 *buf, const unsigned long *bitmap,
+ #endif
+ 
+ /*
+- * On 64-bit systems bitmaps are represented as u64 arrays internally. On LE32
+- * machines the order of hi and lo parts of numbers match the bitmap structure.
+- * In both cases conversion is not needed when copying data from/to arrays of
+- * u64.
++ * On 64-bit systems bitmaps are represented as u64 arrays internally. So,
++ * the conversion is not needed when copying data from/to arrays of u64.
+  */
+-#if (BITS_PER_LONG == 32) && defined(__BIG_ENDIAN)
++#if BITS_PER_LONG == 32
+ void bitmap_from_arr64(unsigned long *bitmap, const u64 *buf, unsigned int nbits);
+ void bitmap_to_arr64(u64 *buf, const unsigned long *bitmap, unsigned int nbits);
+ #else
+diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
+index 06caacd77ed66..710d122472641 100644
+--- a/include/linux/blk-mq.h
++++ b/include/linux/blk-mq.h
+@@ -746,8 +746,7 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
+ struct blk_mq_tags {
+ 	unsigned int nr_tags;
+ 	unsigned int nr_reserved_tags;
+-
+-	atomic_t active_queues;
++	unsigned int active_queues;
+ 
+ 	struct sbitmap_queue bitmap_tags;
+ 	struct sbitmap_queue breserved_tags;
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index c0ffe203a6022..67e942d776bd8 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -392,6 +392,7 @@ struct request_queue {
+ 
+ 	struct blk_queue_stats	*stats;
+ 	struct rq_qos		*rq_qos;
++	struct mutex		rq_qos_mutex;
+ 
+ 	const struct blk_mq_ops	*mq_ops;
+ 
+@@ -1282,7 +1283,7 @@ static inline unsigned int bdev_zone_no(struct block_device *bdev, sector_t sec)
+ }
+ 
+ static inline bool bdev_op_is_zoned_write(struct block_device *bdev,
+-					  blk_opf_t op)
++					  enum req_op op)
+ {
+ 	if (!bdev_is_zoned(bdev))
+ 		return false;
+diff --git a/include/linux/blktrace_api.h b/include/linux/blktrace_api.h
+index cfbda114348c9..122c62e561fc7 100644
+--- a/include/linux/blktrace_api.h
++++ b/include/linux/blktrace_api.h
+@@ -85,10 +85,14 @@ extern int blk_trace_remove(struct request_queue *q);
+ # define blk_add_driver_data(rq, data, len)		do {} while (0)
+ # define blk_trace_setup(q, name, dev, bdev, arg)	(-ENOTTY)
+ # define blk_trace_startstop(q, start)			(-ENOTTY)
+-# define blk_trace_remove(q)				(-ENOTTY)
+ # define blk_add_trace_msg(q, fmt, ...)			do { } while (0)
+ # define blk_add_cgroup_trace_msg(q, cg, fmt, ...)	do { } while (0)
+ # define blk_trace_note_message_enabled(q)		(false)
++
++static inline int blk_trace_remove(struct request_queue *q)
++{
++	return -ENOTTY;
++}
+ #endif /* CONFIG_BLK_DEV_IO_TRACE */
+ 
+ #ifdef CONFIG_COMPAT
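Turning the disabled-case stub from an object-like macro into a static
inline keeps the call type-checked and usable anywhere a real function is,
while still compiling away (this rationale is inferred, not quoted from
the changelog). Standalone sketch:

#include <errno.h>
#include <stdio.h>

struct request_queue;

static inline int blk_trace_remove(struct request_queue *q)
{
	(void)q;		/* argument is still type-checked */
	return -ENOTTY;
}

int main(void)
{
	printf("%d\n", blk_trace_remove(NULL));
	return 0;
}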
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index e53ceee1df370..1ad211acf1d25 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1125,7 +1125,6 @@ struct bpf_trampoline {
+ 	int progs_cnt[BPF_TRAMP_MAX];
+ 	/* Executable image of trampoline */
+ 	struct bpf_tramp_image *cur_image;
+-	u64 selector;
+ 	struct module *mod;
+ };
+ 
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index 3dd29a53b7112..f70f9ac884d24 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -18,8 +18,11 @@
+  * that converting umax_value to int cannot overflow.
+  */
+ #define BPF_MAX_VAR_SIZ	(1 << 29)
+-/* size of type_str_buf in bpf_verifier. */
+-#define TYPE_STR_BUF_LEN 128
++/* size of tmp_str_buf in bpf_verifier.
++ * we need at least 306 bytes to fit full stack mask representation
++ * (in the "-8,-16,...,-512" form)
++ */
++#define TMP_STR_BUF_LEN 320
+ 
+ /* Liveness marks, used for registers and spilled-regs (in stack slots).
+  * Read marks propagate upwards until they find a write mark; they record that
+@@ -238,6 +241,10 @@ enum bpf_stack_slot_type {
+ 
+ #define BPF_REG_SIZE 8	/* size of eBPF register in bytes */
+ 
++#define BPF_REGMASK_ARGS ((1 << BPF_REG_1) | (1 << BPF_REG_2) | \
++			  (1 << BPF_REG_3) | (1 << BPF_REG_4) | \
++			  (1 << BPF_REG_5))
++
+ #define BPF_DYNPTR_SIZE		sizeof(struct bpf_dynptr_kern)
+ #define BPF_DYNPTR_NR_SLOTS		(BPF_DYNPTR_SIZE / BPF_REG_SIZE)
+ 
+@@ -306,11 +313,6 @@ struct bpf_idx_pair {
+ 	u32 idx;
+ };
+ 
+-struct bpf_id_pair {
+-	u32 old;
+-	u32 cur;
+-};
+-
+ #define MAX_CALL_FRAMES 8
+ /* Maximum number of register states that can exist at once */
+ #define BPF_ID_MAP_SIZE ((MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE) * MAX_CALL_FRAMES)
+@@ -541,6 +543,30 @@ struct bpf_subprog_info {
+ 	bool is_async_cb;
+ };
+ 
++struct bpf_verifier_env;
++
++struct backtrack_state {
++	struct bpf_verifier_env *env;
++	u32 frame;
++	u32 reg_masks[MAX_CALL_FRAMES];
++	u64 stack_masks[MAX_CALL_FRAMES];
++};
++
++struct bpf_id_pair {
++	u32 old;
++	u32 cur;
++};
++
++struct bpf_idmap {
++	u32 tmp_id_gen;
++	struct bpf_id_pair map[BPF_ID_MAP_SIZE];
++};
++
++struct bpf_idset {
++	u32 count;
++	u32 ids[BPF_ID_MAP_SIZE];
++};
++
+ /* single container for all structs
+  * one verifier_env per bpf_check() call
+  */
+@@ -572,12 +598,16 @@ struct bpf_verifier_env {
+ 	const struct bpf_line_info *prev_linfo;
+ 	struct bpf_verifier_log log;
+ 	struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1];
+-	struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
++	union {
++		struct bpf_idmap idmap_scratch;
++		struct bpf_idset idset_scratch;
++	};
+ 	struct {
+ 		int *insn_state;
+ 		int *insn_stack;
+ 		int cur_stack;
+ 	} cfg;
++	struct backtrack_state bt;
+ 	u32 pass_cnt; /* number of times do_check() was called */
+ 	u32 subprog_cnt;
+ 	/* number of instructions analyzed by the verifier */
+@@ -606,8 +636,10 @@ struct bpf_verifier_env {
+ 	/* Same as scratched_regs but for stack slots */
+ 	u64 scratched_stack_slots;
+ 	u64 prev_log_pos, prev_insn_print_pos;
+-	/* buffer used in reg_type_str() to generate reg_type string */
+-	char type_str_buf[TYPE_STR_BUF_LEN];
++	/* buffer used to generate temporary string representations,
++	 * e.g., in reg_type_str() to generate reg_type string
++	 */
++	char tmp_str_buf[TMP_STR_BUF_LEN];
+ };
+ 
+ __printf(2, 0) void bpf_verifier_vlog(struct bpf_verifier_log *log,
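backtrack_state above keeps one u32 register mask and one u64 stack-slot
mask per call frame. A plain-C illustration of that bookkeeping shape
(standalone; the verifier's actual precision-tracking semantics are more
involved):

#include <stdint.h>
#include <stdio.h>

#define MAX_CALL_FRAMES 8

struct backtrack_state {
	uint32_t reg_masks[MAX_CALL_FRAMES];	/* one bit per register */
	uint64_t stack_masks[MAX_CALL_FRAMES];	/* one bit per 8-byte slot */
};

static void bt_set_reg(struct backtrack_state *bt, int frame, int reg)
{
	bt->reg_masks[frame] |= 1u << reg;
}

static int bt_is_reg_set(const struct backtrack_state *bt, int frame, int reg)
{
	return (bt->reg_masks[frame] >> reg) & 1;
}

int main(void)
{
	struct backtrack_state bt = {0};

	bt_set_reg(&bt, 0, 1);	/* track r1 in frame 0 */
	printf("%d\n", bt_is_reg_set(&bt, 0, 1));
	return 0;
}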
+diff --git a/include/linux/can/length.h b/include/linux/can/length.h
+index 6995092b774ec..ef1fd32cef16b 100644
+--- a/include/linux/can/length.h
++++ b/include/linux/can/length.h
+@@ -69,17 +69,18 @@
+  * Error Status Indicator (ESI)		1
+  * Data length code (DLC)		4
+  * Data field				0...512
+- * Stuff Bit Count (SBC)		0...16: 4 20...64:5
++ * Stuff Bit Count (SBC)		4
+  * CRC					0...16: 17 20...64:21
+  * CRC delimiter (CD)			1
++ * Fixed Stuff bits (FSB)		0...16: 6 20...64:7
+  * ACK slot (AS)			1
+  * ACK delimiter (AD)			1
+  * End-of-frame (EOF)			7
+  * Inter frame spacing			3
+  *
+- * assuming CRC21, rounded up and ignoring bitstuffing
++ * assuming CRC21, rounded up and ignoring dynamic bitstuffing
+  */
+-#define CANFD_FRAME_OVERHEAD_SFF DIV_ROUND_UP(61, 8)
++#define CANFD_FRAME_OVERHEAD_SFF DIV_ROUND_UP(67, 8)
+ 
+ /*
+  * Size of a CAN-FD Extended Frame
+@@ -98,17 +99,18 @@
+  * Error Status Indicator (ESI)		1
+  * Data length code (DLC)		4
+  * Data field				0...512
+- * Stuff Bit Count (SBC)		0...16: 4 20...64:5
++ * Stuff Bit Count (SBC)		4
+  * CRC					0...16: 17 20...64:21
+  * CRC delimiter (CD)			1
++ * Fixed Stuff bits (FSB)		0...16: 6 20...64:7
+  * ACK slot (AS)			1
+  * ACK delimiter (AD)			1
+  * End-of-frame (EOF)			7
+  * Inter frame spacing			3
+  *
+- * assuming CRC21, rounded up and ignoring bitstuffing
++ * assuming CRC21, rounded up and ignoring dynamic bitstuffing
+  */
+-#define CANFD_FRAME_OVERHEAD_EFF DIV_ROUND_UP(80, 8)
++#define CANFD_FRAME_OVERHEAD_EFF DIV_ROUND_UP(86, 8)
+ 
+ /*
+  * Maximum size of a Classical CAN frame
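The updated overhead numbers can be checked by summing the listed field
widths: for the SFF case with CRC21 they total 67 bits, and
DIV_ROUND_UP(67, 8) = 9 bytes; the EFF case adds SRR plus the 18 extra
identifier bits for 86 bits and 11 bytes. A standalone check:

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	/* SOF, ID, RRS, IDE, FDF, res, BRS, ESI, DLC, SBC,
	 * CRC21, CD, FSB, AS, AD, EOF, IFS */
	int sff = 1 + 11 + 1 + 1 + 1 + 1 + 1 + 1 + 4 + 4 +
		  21 + 1 + 7 + 1 + 1 + 7 + 3;
	int eff = sff + 1 + 18;	/* SRR + 18 extra identifier bits */

	printf("SFF: %d bits -> %d bytes\n", sff, DIV_ROUND_UP(sff, 8));
	printf("EFF: %d bits -> %d bytes\n", eff, DIV_ROUND_UP(eff, 8));
	return 0;
}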
+diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
+index e659cb6fded39..84864767a56ae 100644
+--- a/include/linux/compiler_attributes.h
++++ b/include/linux/compiler_attributes.h
+@@ -255,6 +255,18 @@
+  */
+ #define __noreturn                      __attribute__((__noreturn__))
+ 
++/*
++ * Optional: only supported since GCC >= 11.1, clang >= 7.0.
++ *
++ *   gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-no_005fstack_005fprotector-function-attribute
++ *   clang: https://clang.llvm.org/docs/AttributeReference.html#no-stack-protector-safebuffers
++ */
++#if __has_attribute(__no_stack_protector__)
++# define __no_stack_protector		__attribute__((__no_stack_protector__))
++#else
++# define __no_stack_protector
++#endif
++
+ /*
+  * Optional: not supported by gcc.
+  *
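A standalone example of how the new attribute is meant to be used,
following the same __has_attribute fallback pattern as the header
(hypothetical function name; compile with -fstack-protector-strong to see
the per-function opt-out matter):

#if defined(__has_attribute)
# if __has_attribute(__no_stack_protector__)
#  define __no_stack_protector __attribute__((__no_stack_protector__))
# endif
#endif
#ifndef __no_stack_protector
# define __no_stack_protector
#endif

#include <stdio.h>

static __no_stack_protector void early_setup(void)
{
	/* e.g. code that runs before the stack canary is initialized */
}

int main(void)
{
	early_setup();
	puts("ok");
	return 0;
}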
+diff --git a/include/linux/dsa/sja1105.h b/include/linux/dsa/sja1105.h
+index 159e43171cccf..c177322f793d6 100644
+--- a/include/linux/dsa/sja1105.h
++++ b/include/linux/dsa/sja1105.h
+@@ -48,13 +48,9 @@ struct sja1105_deferred_xmit_work {
+ 
+ /* Global tagger data */
+ struct sja1105_tagger_data {
+-	/* Tagger to switch */
+ 	void (*xmit_work_fn)(struct kthread_work *work);
+ 	void (*meta_tstamp_handler)(struct dsa_switch *ds, int port, u8 ts_id,
+ 				    enum sja1110_meta_tstamp dir, u64 tstamp);
+-	/* Switch to tagger */
+-	bool (*rxtstamp_get_state)(struct dsa_switch *ds);
+-	void (*rxtstamp_set_state)(struct dsa_switch *ds, bool on);
+ };
+ 
+ struct sja1105_skb_cb {
+diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h
+index c4cf296e7eafe..4cda32ac3116a 100644
+--- a/include/linux/ieee80211.h
++++ b/include/linux/ieee80211.h
+@@ -2856,6 +2856,7 @@ ieee80211_he_spr_size(const u8 *he_spr_ie)
+ 
+ /* Maximum number of supported EHT LTF is split */
+ #define IEEE80211_EHT_PHY_CAP5_MAX_NUM_SUPP_EHT_LTF_MASK	0xc0
++#define IEEE80211_EHT_PHY_CAP5_SUPP_EXTRA_EHT_LTF		0x40
+ #define IEEE80211_EHT_PHY_CAP6_MAX_NUM_SUPP_EHT_LTF_MASK	0x07
+ 
+ #define IEEE80211_EHT_PHY_CAP6_MCS15_SUPP_MASK			0x78
+@@ -4611,15 +4612,12 @@ static inline u8 ieee80211_mle_common_size(const u8 *data)
+ 	case IEEE80211_ML_CONTROL_TYPE_BASIC:
+ 	case IEEE80211_ML_CONTROL_TYPE_PREQ:
+ 	case IEEE80211_ML_CONTROL_TYPE_TDLS:
++	case IEEE80211_ML_CONTROL_TYPE_RECONF:
+ 		/*
+ 		 * The length is the first octet pointed by mle->variable so no
+ 		 * need to add anything
+ 		 */
+ 		break;
+-	case IEEE80211_ML_CONTROL_TYPE_RECONF:
+-		if (control & IEEE80211_MLC_RECONF_PRES_MLD_MAC_ADDR)
+-			common += ETH_ALEN;
+-		return common;
+ 	case IEEE80211_ML_CONTROL_TYPE_PRIO_ACCESS:
+ 		if (control & IEEE80211_MLC_PRIO_ACCESS_PRES_AP_MLD_MAC_ADDR)
+ 			common += ETH_ALEN;
+diff --git a/include/linux/mfd/tps65010.h b/include/linux/mfd/tps65010.h
+index a1fb9bc5311de..5edf1aef11185 100644
+--- a/include/linux/mfd/tps65010.h
++++ b/include/linux/mfd/tps65010.h
+@@ -28,6 +28,8 @@
+ #ifndef __LINUX_I2C_TPS65010_H
+ #define __LINUX_I2C_TPS65010_H
+ 
++struct gpio_chip;
++
+ /*
+  * ----------------------------------------------------------------------------
+  * Registers, all 8 bits
+@@ -176,12 +178,10 @@ struct i2c_client;
+ 
+ /**
+  * struct tps65010_board - packages GPIO and LED lines
+- * @base: the GPIO number to assign to GPIO-1
+  * @outmask: bit (N-1) is set to allow GPIO-N to be used as an
+  *	(open drain) output
+  * @setup: optional callback issued once the GPIOs are valid
+  * @teardown: optional callback issued before the GPIOs are invalidated
+- * @context: optional parameter passed to setup() and teardown()
+  *
+  * Board data may be used to package the GPIO (and LED) lines for use
+  * in by the generic GPIO and LED frameworks.  The first four GPIOs
+@@ -193,12 +193,9 @@ struct i2c_client;
+  * devices in their initial states using these GPIOs.
+  */
+ struct tps65010_board {
+-	int				base;
+ 	unsigned			outmask;
+-
+-	int		(*setup)(struct i2c_client *client, void *context);
+-	int		(*teardown)(struct i2c_client *client, void *context);
+-	void		*context;
++	int		(*setup)(struct i2c_client *client, struct gpio_chip *gc);
++	void		(*teardown)(struct i2c_client *client, struct gpio_chip *gc);
+ };
+ 
+ #endif /*  __LINUX_I2C_TPS65010_H */
+diff --git a/include/linux/mfd/twl.h b/include/linux/mfd/twl.h
+index 6e3d99b7a0ee6..c062d91a67d92 100644
+--- a/include/linux/mfd/twl.h
++++ b/include/linux/mfd/twl.h
+@@ -593,9 +593,6 @@ struct twl4030_gpio_platform_data {
+ 	 */
+ 	u32		pullups;
+ 	u32		pulldowns;
+-
+-	int		(*setup)(struct device *dev,
+-				unsigned gpio, unsigned ngpio);
+ };
+ 
+ struct twl4030_madc_platform_data {
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 306a3d1a0fa65..de10fc797c8e9 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -583,6 +583,21 @@ struct mm_cid {
+ struct kioctx_table;
+ struct mm_struct {
+ 	struct {
++		/*
++		 * Fields which are often written to are placed in a separate
++		 * cache line.
++		 */
++		struct {
++			/**
++			 * @mm_count: The number of references to &struct
++			 * mm_struct (@mm_users count as 1).
++			 *
++			 * Use mmgrab()/mmdrop() to modify. When this drops to
++			 * 0, the &struct mm_struct is freed.
++			 */
++			atomic_t mm_count;
++		} ____cacheline_aligned_in_smp;
++
+ 		struct maple_tree mm_mt;
+ #ifdef CONFIG_MMU
+ 		unsigned long (*get_unmapped_area) (struct file *filp,
+@@ -620,14 +635,6 @@ struct mm_struct {
+ 		 */
+ 		atomic_t mm_users;
+ 
+-		/**
+-		 * @mm_count: The number of references to &struct mm_struct
+-		 * (@mm_users count as 1).
+-		 *
+-		 * Use mmgrab()/mmdrop() to modify. When this drops to 0, the
+-		 * &struct mm_struct is freed.
+-		 */
+-		atomic_t mm_count;
+ #ifdef CONFIG_SCHED_MM_CID
+ 		/**
+ 		 * @pcpu_cid: Per-cpu current cid.
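The mm_struct change wraps the write-hot mm_count in its own
____cacheline_aligned_in_smp struct so refcount writers stop bouncing the
cacheline that holds read-mostly fields. A userspace illustration of the
same layout trick (assumes a 64-byte cacheline, the common kernel case):

#include <stdalign.h>
#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

struct mm_like {
	struct {
		alignas(64) atomic_int mm_count;	/* write-hot */
	} hot;
	unsigned long flags;	/* read-mostly; starts a new cacheline */
};

int main(void)
{
	/* prints 64: flags does not share a cacheline with mm_count */
	printf("flags offset: %zu\n", offsetof(struct mm_like, flags));
	return 0;
}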
+diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
+index c726ea7812552..daa2f40d9ce65 100644
+--- a/include/linux/mmc/card.h
++++ b/include/linux/mmc/card.h
+@@ -294,6 +294,7 @@ struct mmc_card {
+ #define MMC_QUIRK_TRIM_BROKEN	(1<<12)		/* Skip trim */
+ #define MMC_QUIRK_BROKEN_HPI	(1<<13)		/* Disable broken HPI support */
+ #define MMC_QUIRK_BROKEN_SD_DISCARD	(1<<14)	/* Disable broken SD discard support */
++#define MMC_QUIRK_BROKEN_SD_CACHE	(1<<15)	/* Disable broken SD cache support */
+ 
+ 	bool			reenable_cmdq;	/* Re-enable Command Queue */
+ 
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index c2f0c6002a84b..68adc8af29efb 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -5093,6 +5093,15 @@ static inline bool netif_is_l3_slave(const struct net_device *dev)
+ 	return dev->priv_flags & IFF_L3MDEV_SLAVE;
+ }
+ 
++static inline int dev_sdif(const struct net_device *dev)
++{
++#ifdef CONFIG_NET_L3_MASTER_DEV
++	if (netif_is_l3_slave(dev))
++		return dev->ifindex;
++#endif
++	return 0;
++}
++
+ static inline bool netif_is_bridge_master(const struct net_device *dev)
+ {
+ 	return dev->priv_flags & IFF_EBRIDGE;
+diff --git a/include/linux/nmi.h b/include/linux/nmi.h
+index 048c0b9aa623d..d54b9ba9c8247 100644
+--- a/include/linux/nmi.h
++++ b/include/linux/nmi.h
+@@ -13,13 +13,11 @@
+ 
+ #ifdef CONFIG_LOCKUP_DETECTOR
+ void lockup_detector_init(void);
++void lockup_detector_retry_init(void);
+ void lockup_detector_soft_poweroff(void);
+ void lockup_detector_cleanup(void);
+-bool is_hardlockup(void);
+ 
+ extern int watchdog_user_enabled;
+-extern int nmi_watchdog_user_enabled;
+-extern int soft_watchdog_user_enabled;
+ extern int watchdog_thresh;
+ extern unsigned long watchdog_enabled;
+ 
+@@ -35,6 +33,7 @@ extern int sysctl_hardlockup_all_cpu_backtrace;
+ 
+ #else /* CONFIG_LOCKUP_DETECTOR */
+ static inline void lockup_detector_init(void) { }
++static inline void lockup_detector_retry_init(void) { }
+ static inline void lockup_detector_soft_poweroff(void) { }
+ static inline void lockup_detector_cleanup(void) { }
+ #endif /* !CONFIG_LOCKUP_DETECTOR */
+@@ -69,17 +68,17 @@ static inline void reset_hung_task_detector(void) { }
+  * 'watchdog_enabled' variable. Each lockup detector has its dedicated bit -
+  * bit 0 for the hard lockup detector and bit 1 for the soft lockup detector.
+  *
+- * 'watchdog_user_enabled', 'nmi_watchdog_user_enabled' and
+- * 'soft_watchdog_user_enabled' are variables that are only used as an
++ * 'watchdog_user_enabled', 'watchdog_hardlockup_user_enabled' and
++ * 'watchdog_softlockup_user_enabled' are variables that are only used as an
+  * 'interface' between the parameters in /proc/sys/kernel and the internal
+  * state bits in 'watchdog_enabled'. The 'watchdog_thresh' variable is
+  * handled differently because its value is not boolean, and the lockup
+  * detectors are 'suspended' while 'watchdog_thresh' is equal zero.
+  */
+-#define NMI_WATCHDOG_ENABLED_BIT   0
+-#define SOFT_WATCHDOG_ENABLED_BIT  1
+-#define NMI_WATCHDOG_ENABLED      (1 << NMI_WATCHDOG_ENABLED_BIT)
+-#define SOFT_WATCHDOG_ENABLED     (1 << SOFT_WATCHDOG_ENABLED_BIT)
++#define WATCHDOG_HARDLOCKUP_ENABLED_BIT  0
++#define WATCHDOG_SOFTOCKUP_ENABLED_BIT   1
++#define WATCHDOG_HARDLOCKUP_ENABLED     (1 << WATCHDOG_HARDLOCKUP_ENABLED_BIT)
++#define WATCHDOG_SOFTOCKUP_ENABLED      (1 << WATCHDOG_SOFTOCKUP_ENABLED_BIT)
+ 
+ #if defined(CONFIG_HARDLOCKUP_DETECTOR)
+ extern void hardlockup_detector_disable(void);
+@@ -88,10 +87,8 @@ extern unsigned int hardlockup_panic;
+ static inline void hardlockup_detector_disable(void) {}
+ #endif
+ 
+-#if defined(CONFIG_HAVE_NMI_WATCHDOG) || defined(CONFIG_HARDLOCKUP_DETECTOR)
+-# define NMI_WATCHDOG_SYSCTL_PERM	0644
+-#else
+-# define NMI_WATCHDOG_SYSCTL_PERM	0444
++#if defined(CONFIG_HARDLOCKUP_DETECTOR_PERF)
++void watchdog_hardlockup_check(struct pt_regs *regs);
+ #endif
+ 
+ #if defined(CONFIG_HARDLOCKUP_DETECTOR_PERF)
+@@ -116,11 +113,11 @@ static inline int hardlockup_detector_perf_init(void) { return 0; }
+ # endif
+ #endif
+ 
+-void watchdog_nmi_stop(void);
+-void watchdog_nmi_start(void);
+-int watchdog_nmi_probe(void);
+-int watchdog_nmi_enable(unsigned int cpu);
+-void watchdog_nmi_disable(unsigned int cpu);
++void watchdog_hardlockup_stop(void);
++void watchdog_hardlockup_start(void);
++int watchdog_hardlockup_probe(void);
++void watchdog_hardlockup_enable(unsigned int cpu);
++void watchdog_hardlockup_disable(unsigned int cpu);
+ 
+ void lockup_detector_reconfigure(void);
+ 
+@@ -197,7 +194,7 @@ u64 hw_nmi_get_sample_period(int watchdog_thresh);
+ #endif
+ 
+ #if defined(CONFIG_HARDLOCKUP_CHECK_TIMESTAMP) && \
+-    defined(CONFIG_HARDLOCKUP_DETECTOR)
++    defined(CONFIG_HARDLOCKUP_DETECTOR_PERF)
+ void watchdog_update_hrtimer_threshold(u64 period);
+ #else
+ static inline void watchdog_update_hrtimer_threshold(u64 period) { }
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 60b8772b5bd45..c69a2cc1f4123 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -1903,6 +1903,7 @@ static inline int pci_dev_present(const struct pci_device_id *ids)
+ #define pci_dev_put(dev)	do { } while (0)
+ 
+ static inline void pci_set_master(struct pci_dev *dev) { }
++static inline void pci_clear_master(struct pci_dev *dev) { }
+ static inline int pci_enable_device(struct pci_dev *dev) { return -EIO; }
+ static inline void pci_disable_device(struct pci_dev *dev) { }
+ static inline int pcim_enable_device(struct pci_dev *pdev) { return -EIO; }
+diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
+index 525b5d64e3948..c0e4baf940dce 100644
+--- a/include/linux/perf/arm_pmu.h
++++ b/include/linux/perf/arm_pmu.h
+@@ -26,9 +26,11 @@
+  */
+ #define ARMPMU_EVT_64BIT		0x00001 /* Event uses a 64bit counter */
+ #define ARMPMU_EVT_47BIT		0x00002 /* Event uses a 47bit counter */
++#define ARMPMU_EVT_63BIT		0x00004 /* Event uses a 63bit counter */
+ 
+ static_assert((PERF_EVENT_FLAG_ARCH & ARMPMU_EVT_64BIT) == ARMPMU_EVT_64BIT);
+ static_assert((PERF_EVENT_FLAG_ARCH & ARMPMU_EVT_47BIT) == ARMPMU_EVT_47BIT);
++static_assert((PERF_EVENT_FLAG_ARCH & ARMPMU_EVT_63BIT) == ARMPMU_EVT_63BIT);
+ 
+ #define HW_OP_UNSUPPORTED		0xFFFF
+ #define C(_x)				PERF_COUNT_HW_CACHE_##_x
+diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
+index d2c3f16cf6b18..02e0086b10f6f 100644
+--- a/include/linux/pipe_fs_i.h
++++ b/include/linux/pipe_fs_i.h
+@@ -261,18 +261,14 @@ void generic_pipe_buf_release(struct pipe_inode_info *, struct pipe_buffer *);
+ 
+ extern const struct pipe_buf_operations nosteal_pipe_buf_ops;
+ 
+-#ifdef CONFIG_WATCH_QUEUE
+ unsigned long account_pipe_buffers(struct user_struct *user,
+ 				   unsigned long old, unsigned long new);
+ bool too_many_pipe_buffers_soft(unsigned long user_bufs);
+ bool too_many_pipe_buffers_hard(unsigned long user_bufs);
+ bool pipe_is_unprivileged_user(void);
+-#endif
+ 
+ /* for F_SETPIPE_SZ and F_GETPIPE_SZ */
+-#ifdef CONFIG_WATCH_QUEUE
+ int pipe_resize_ring(struct pipe_inode_info *pipe, unsigned int nr_slots);
+-#endif
+ long pipe_fcntl(struct file *, unsigned int, unsigned long arg);
+ struct pipe_inode_info *get_pipe_info(struct file *file, bool for_splice);
+ 
+diff --git a/include/linux/platform_data/lcd-mipid.h b/include/linux/platform_data/lcd-mipid.h
+index 63f05eb238274..4927cfc5158c6 100644
+--- a/include/linux/platform_data/lcd-mipid.h
++++ b/include/linux/platform_data/lcd-mipid.h
+@@ -15,10 +15,8 @@ enum mipid_test_result {
+ #ifdef __KERNEL__
+ 
+ struct mipid_platform_data {
+-	int	nreset_gpio;
+ 	int	data_lines;
+ 
+-	void	(*shutdown)(struct mipid_platform_data *pdata);
+ 	void	(*set_bklight_level)(struct mipid_platform_data *pdata,
+ 				     int level);
+ 	int	(*get_bklight_level)(struct mipid_platform_data *pdata);
+diff --git a/include/linux/platform_data/mmc-omap.h b/include/linux/platform_data/mmc-omap.h
+index 91051e9907f34..054d0c3c5ec58 100644
+--- a/include/linux/platform_data/mmc-omap.h
++++ b/include/linux/platform_data/mmc-omap.h
+@@ -20,8 +20,6 @@ struct omap_mmc_platform_data {
+ 	 * maximum frequency on the MMC bus */
+ 	unsigned int max_freq;
+ 
+-	/* switch the bus to a new slot */
+-	int (*switch_slot)(struct device *dev, int slot);
+ 	/* initialize board-specific MMC functionality, can be NULL if
+ 	 * not supported */
+ 	int (*init)(struct device *dev);
+diff --git a/include/linux/ramfs.h b/include/linux/ramfs.h
+index 917528d102c4e..d506dc63dd47c 100644
+--- a/include/linux/ramfs.h
++++ b/include/linux/ramfs.h
+@@ -7,6 +7,7 @@
+ struct inode *ramfs_get_inode(struct super_block *sb, const struct inode *dir,
+ 	 umode_t mode, dev_t dev);
+ extern int ramfs_init_fs_context(struct fs_context *fc);
++extern void ramfs_kill_sb(struct super_block *sb);
+ 
+ #ifdef CONFIG_MMU
+ static inline int
+diff --git a/include/linux/sh_intc.h b/include/linux/sh_intc.h
+index 37ad81058d6ae..27ae79191bdc3 100644
+--- a/include/linux/sh_intc.h
++++ b/include/linux/sh_intc.h
+@@ -13,9 +13,9 @@
+ /*
+  * Convert back and forth between INTEVT and IRQ values.
+  */
+-#ifdef CONFIG_CPU_HAS_INTEVT
+-#define evt2irq(evt)		(((evt) >> 5) - 16)
+-#define irq2evt(irq)		(((irq) + 16) << 5)
++#ifdef CONFIG_CPU_HAS_INTEVT	/* Avoid IRQ0 (invalid for platform devices) */
++#define evt2irq(evt)		((evt) >> 5)
++#define irq2evt(irq)		((irq) << 5)
+ #else
+ #define evt2irq(evt)		(evt)
+ #define irq2evt(irq)		(irq)
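The INTEVT change can be checked numerically: with the old macro, e.g.
INTEVT 0x200 mapped to IRQ 0, which platform devices interpret as "no
IRQ"; without the -16 bias it maps to IRQ 16. Standalone check:

#include <stdio.h>

#define old_evt2irq(evt) (((evt) >> 5) - 16)
#define new_evt2irq(evt) ((evt) >> 5)

int main(void)
{
	/* prints "old=0 new=16" */
	printf("old=%d new=%d\n", old_evt2irq(0x200), new_evt2irq(0x200));
	return 0;
}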
+diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
+index c55a0bc8cb0e9..821a19135bb66 100644
+--- a/include/linux/soc/qcom/geni-se.h
++++ b/include/linux/soc/qcom/geni-se.h
+@@ -490,9 +490,13 @@ int geni_se_clk_freq_match(struct geni_se *se, unsigned long req_freq,
+ 			   unsigned int *index, unsigned long *res_freq,
+ 			   bool exact);
+ 
++void geni_se_tx_init_dma(struct geni_se *se, dma_addr_t iova, size_t len);
++
+ int geni_se_tx_dma_prep(struct geni_se *se, void *buf, size_t len,
+ 			dma_addr_t *iova);
+ 
++void geni_se_rx_init_dma(struct geni_se *se, dma_addr_t iova, size_t len);
++
+ int geni_se_rx_dma_prep(struct geni_se *se, void *buf, size_t len,
+ 			dma_addr_t *iova);
+ 
+diff --git a/include/linux/spi/ads7846.h b/include/linux/spi/ads7846.h
+index d424c1aadf382..a04c1c34c3443 100644
+--- a/include/linux/spi/ads7846.h
++++ b/include/linux/spi/ads7846.h
+@@ -35,8 +35,6 @@ struct ads7846_platform_data {
+ 	u16	debounce_tol;		/* tolerance used for filtering */
+ 	u16	debounce_rep;		/* additional consecutive good readings
+ 					 * required after the first two */
+-	int	gpio_pendown;		/* the GPIO used to decide the pendown
+-					 * state if get_pendown_state == NULL */
+ 	int	gpio_pendown_debounce;	/* platform specific debounce time for
+ 					 * the gpio_pendown */
+ 	int	(*get_pendown_state)(void);
+diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
+index 0c7eff91adf4e..4e9623e8492b3 100644
+--- a/include/linux/usb/hcd.h
++++ b/include/linux/usb/hcd.h
+@@ -267,7 +267,7 @@ struct hc_driver {
+ 	int	(*pci_suspend)(struct usb_hcd *hcd, bool do_wakeup);
+ 
+ 	/* called after entering D0 (etc), before resuming the hub */
+-	int	(*pci_resume)(struct usb_hcd *hcd, bool hibernated);
++	int	(*pci_resume)(struct usb_hcd *hcd, pm_message_t state);
+ 
+ 	/* called just before hibernate final D3 state, allows host to poweroff parts */
+ 	int	(*pci_poweroff_late)(struct usb_hcd *hcd, bool do_wakeup);
+diff --git a/include/linux/usb/musb.h b/include/linux/usb/musb.h
+index e4a3ad3c800f5..3963e55e88a31 100644
+--- a/include/linux/usb/musb.h
++++ b/include/linux/usb/musb.h
+@@ -99,9 +99,6 @@ struct musb_hdrc_platform_data {
+ 	/* (HOST or OTG) program PHY for external Vbus */
+ 	unsigned	extvbus:1;
+ 
+-	/* Power the device on or off */
+-	int		(*set_power)(int state);
+-
+ 	/* MUSB configuration-specific details */
+ 	const struct musb_hdrc_config *config;
+ 
+@@ -135,14 +132,4 @@ static inline int musb_mailbox(enum musb_vbus_id_status status)
+ #define	TUSB6010_REFCLK_24	41667	/* psec/clk @ 24.0 MHz XI */
+ #define	TUSB6010_REFCLK_19	52083	/* psec/clk @ 19.2 MHz CLKIN */
+ 
+-#ifdef	CONFIG_ARCH_OMAP2
+-
+-extern int __init tusb6010_setup_interface(
+-		struct musb_hdrc_platform_data *data,
+-		unsigned ps_refclk, unsigned waitpin,
+-		unsigned async_cs, unsigned sync_cs,
+-		unsigned irq, unsigned dmachan);
+-
+-#endif	/* OMAP2 */
+-
+ #endif /* __LINUX_USB_MUSB_H */
+diff --git a/include/linux/watch_queue.h b/include/linux/watch_queue.h
+index fc6bba20273bd..45cd42f55d492 100644
+--- a/include/linux/watch_queue.h
++++ b/include/linux/watch_queue.h
+@@ -38,7 +38,7 @@ struct watch_filter {
+ struct watch_queue {
+ 	struct rcu_head		rcu;
+ 	struct watch_filter __rcu *filter;
+-	struct pipe_inode_info	*pipe;		/* The pipe we're using as a buffer */
++	struct pipe_inode_info	*pipe;		/* Pipe we use as a buffer, NULL if queue closed */
+ 	struct hlist_head	watches;	/* Contributory watches */
+ 	struct page		**notes;	/* Preallocated notifications */
+ 	unsigned long		*notes_bitmap;	/* Allocation bitmap for notes */
+@@ -46,7 +46,6 @@ struct watch_queue {
+ 	spinlock_t		lock;
+ 	unsigned int		nr_notes;	/* Number of notes */
+ 	unsigned int		nr_pages;	/* Number of pages in notes[] */
+-	bool			defunct;	/* T when queues closed */
+ };
+ 
+ /*
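
With the defunct flag removed, "queue closed" is now encoded as wqueue->pipe being NULL, checked under wqueue->lock by the paths that previously tested ->defunct. A minimal sketch of the new invariant (an assumption for illustration, not a verbatim kernel excerpt):

/* A watch queue whose pipe pointer has been cleared is torn down;
 * readers that used to check wqueue->defunct now test the pointer.
 */
static bool demo_queue_closed(const struct watch_queue *wqueue)
{
	return wqueue->pipe == NULL;
}
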
+diff --git a/include/net/bluetooth/mgmt.h b/include/net/bluetooth/mgmt.h
+index a5801649f6196..5e68b3dd44222 100644
+--- a/include/net/bluetooth/mgmt.h
++++ b/include/net/bluetooth/mgmt.h
+@@ -979,6 +979,7 @@ struct mgmt_ev_auth_failed {
+ #define MGMT_DEV_FOUND_NOT_CONNECTABLE		BIT(2)
+ #define MGMT_DEV_FOUND_INITIATED_CONN		BIT(3)
+ #define MGMT_DEV_FOUND_NAME_REQUEST_FAILED	BIT(4)
++#define MGMT_DEV_FOUND_SCAN_RSP			BIT(5)
+ 
+ #define MGMT_EV_DEVICE_FOUND		0x0012
+ struct mgmt_ev_device_found {
+diff --git a/include/net/dsa.h b/include/net/dsa.h
+index ab0f0a5b08602..197c5a6ca8f7f 100644
+--- a/include/net/dsa.h
++++ b/include/net/dsa.h
+@@ -314,9 +314,17 @@ struct dsa_port {
+ 	struct list_head	fdbs;
+ 	struct list_head	mdbs;
+ 
+-	/* List of VLANs that CPU and DSA ports are members of. */
+ 	struct mutex		vlans_lock;
+-	struct list_head	vlans;
++	union {
++		/* List of VLANs that CPU and DSA ports are members of.
++		 * Access to this is serialized by the sleepable @vlans_lock.
++		 */
++		struct list_head	vlans;
++		/* List of VLANs that user ports are members of.
++		 * Access to this is serialized by netif_addr_lock_bh().
++		 */
++		struct list_head	user_vlans;
++	};
+ };
+ 
+ /* TODO: ideally DSA ports would have a single dp->link_dp member,
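
The union above aliases the same storage: a CPU or DSA port walks dp->vlans under vlans_lock, while a user port walks dp->user_vlans under netif_addr_lock_bh(), and the two lists are never live on the same port. A hedged sketch of the selection rule (demo_port_vlan_list is illustrative, not from the patch):

static struct list_head *demo_port_vlan_list(struct dsa_port *dp)
{
	if (dsa_port_is_user(dp))
		return &dp->user_vlans;	/* serialized by netif_addr_lock_bh() */
	return &dp->vlans;		/* serialized by dp->vlans_lock */
}
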
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index ac0370e768749..65510cfda37af 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -7,7 +7,7 @@
+  * Copyright 2007-2010	Johannes Berg <johannes@sipsolutions.net>
+  * Copyright 2013-2014  Intel Mobile Communications GmbH
+  * Copyright (C) 2015 - 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018 - 2022 Intel Corporation
++ * Copyright (C) 2018 - 2023 Intel Corporation
+  */
+ 
+ #ifndef MAC80211_H
+@@ -6861,6 +6861,48 @@ ieee80211_vif_type_p2p(struct ieee80211_vif *vif)
+ 	return ieee80211_iftype_p2p(vif->type, vif->p2p);
+ }
+ 
++/**
++ * ieee80211_get_he_iftype_cap_vif - return HE capabilities for sband/vif
++ * @sband: the sband to search for the iftype on
++ * @vif: the vif to get the iftype from
++ *
++ * Return: pointer to the struct ieee80211_sta_he_cap, or %NULL if none found
++ */
++static inline const struct ieee80211_sta_he_cap *
++ieee80211_get_he_iftype_cap_vif(const struct ieee80211_supported_band *sband,
++				struct ieee80211_vif *vif)
++{
++	return ieee80211_get_he_iftype_cap(sband, ieee80211_vif_type_p2p(vif));
++}
++
++/**
++ * ieee80211_get_he_6ghz_capa_vif - return HE 6 GHz capabilities
++ * @sband: the sband to search for the STA on
++ * @vif: the vif to get the iftype from
++ *
++ * Return: the 6GHz capabilities
++ */
++static inline __le16
++ieee80211_get_he_6ghz_capa_vif(const struct ieee80211_supported_band *sband,
++			       struct ieee80211_vif *vif)
++{
++	return ieee80211_get_he_6ghz_capa(sband, ieee80211_vif_type_p2p(vif));
++}
++
++/**
++ * ieee80211_get_eht_iftype_cap_vif - return EHT capabilities for sband/vif
++ * @sband: the sband to search for the iftype on
++ * @vif: the vif to get the iftype from
++ *
++ * Return: pointer to the struct ieee80211_sta_eht_cap, or %NULL if none found
++ */
++static inline const struct ieee80211_sta_eht_cap *
++ieee80211_get_eht_iftype_cap_vif(const struct ieee80211_supported_band *sband,
++				 struct ieee80211_vif *vif)
++{
++	return ieee80211_get_eht_iftype_cap(sband, ieee80211_vif_type_p2p(vif));
++}
++
+ /**
+  * ieee80211_update_mu_groups - set the VHT MU-MIMO group data
+  *
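
The three wrappers added above just fold the common ieee80211_vif_type_p2p() lookup into the call. A hedged usage sketch, assuming kernel context (demo_vif_has_he is illustrative, not from the patch):

static bool demo_vif_has_he(const struct ieee80211_supported_band *sband,
			    struct ieee80211_vif *vif)
{
	const struct ieee80211_sta_he_cap *he_cap;

	/* replaces open-coded ieee80211_get_he_iftype_cap(sband,
	 * ieee80211_vif_type_p2p(vif)) at every call site
	 */
	he_cap = ieee80211_get_he_iftype_cap_vif(sband, vif);
	return he_cap && he_cap->has_he;
}
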
+diff --git a/include/net/regulatory.h b/include/net/regulatory.h
+index 896191f420d50..b2cb4a9eb04dc 100644
+--- a/include/net/regulatory.h
++++ b/include/net/regulatory.h
+@@ -140,17 +140,6 @@ struct regulatory_request {
+  *      otherwise initiating radiation is not allowed. This will enable the
+  *      relaxations enabled under the CFG80211_REG_RELAX_NO_IR configuration
+  *      option
+- * @REGULATORY_IGNORE_STALE_KICKOFF: the regulatory core will _not_ make sure
+- *	all interfaces on this wiphy reside on allowed channels. If this flag
+- *	is not set, upon a regdomain change, the interfaces are given a grace
+- *	period (currently 60 seconds) to disconnect or move to an allowed
+- *	channel. Interfaces on forbidden channels are forcibly disconnected.
+- *	Currently these types of interfaces are supported for enforcement:
+- *	NL80211_IFTYPE_ADHOC, NL80211_IFTYPE_STATION, NL80211_IFTYPE_AP,
+- *	NL80211_IFTYPE_AP_VLAN, NL80211_IFTYPE_MONITOR,
+- *	NL80211_IFTYPE_P2P_CLIENT, NL80211_IFTYPE_P2P_GO,
+- *	NL80211_IFTYPE_P2P_DEVICE. The flag will be set by default if a device
+- *	includes any modes unsupported for enforcement checking.
+  * @REGULATORY_WIPHY_SELF_MANAGED: for devices that employ wiphy-specific
+  *	regdom management. These devices will ignore all regdom changes not
+  *	originating from their own wiphy.
+@@ -177,7 +166,7 @@ enum ieee80211_regulatory_flags {
+ 	REGULATORY_COUNTRY_IE_FOLLOW_POWER	= BIT(3),
+ 	REGULATORY_COUNTRY_IE_IGNORE		= BIT(4),
+ 	REGULATORY_ENABLE_RELAX_NO_IR           = BIT(5),
+-	REGULATORY_IGNORE_STALE_KICKOFF         = BIT(6),
++	/* reuse bit 6 next time */
+ 	REGULATORY_WIPHY_SELF_MANAGED		= BIT(7),
+ };
+ 
+diff --git a/include/net/sock.h b/include/net/sock.h
+index 6f428a7f35675..ad468fe71413a 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2100,6 +2100,7 @@ static inline void sock_graft(struct sock *sk, struct socket *parent)
+ }
+ 
+ kuid_t sock_i_uid(struct sock *sk);
++unsigned long __sock_i_ino(struct sock *sk);
+ unsigned long sock_i_ino(struct sock *sk);
+ 
+ static inline kuid_t sock_net_uid(const struct net *net, const struct sock *sk)
+diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h
+index cb8fbb2418795..22aae505c813b 100644
+--- a/include/soc/mscc/ocelot.h
++++ b/include/soc/mscc/ocelot.h
+@@ -730,6 +730,11 @@ enum macaccess_entry_type {
+ 	ENTRYTYPE_MACv6,
+ };
+ 
++enum ocelot_proto {
++	OCELOT_PROTO_PTP_L2 = BIT(0),
++	OCELOT_PROTO_PTP_L4 = BIT(1),
++};
++
+ #define OCELOT_QUIRK_PCS_PERFORMS_RATE_ADAPTATION	BIT(0)
+ #define OCELOT_QUIRK_QSGMII_PORTS_MUST_BE_UP		BIT(1)
+ 
+@@ -775,6 +780,8 @@ struct ocelot_port {
+ 	unsigned int			ptp_skbs_in_flight;
+ 	struct sk_buff_head		tx_skbs;
+ 
++	unsigned int			trap_proto;
++
+ 	u16				mrp_ring_id;
+ 
+ 	u8				ptp_cmd;
+@@ -868,12 +875,9 @@ struct ocelot {
+ 	u8				mm_supported:1;
+ 	struct ptp_clock		*ptp_clock;
+ 	struct ptp_clock_info		ptp_info;
+-	struct hwtstamp_config		hwtstamp_config;
+ 	unsigned int			ptp_skbs_in_flight;
+ 	/* Protects the 2-step TX timestamp ID logic */
+ 	spinlock_t			ts_id_lock;
+-	/* Protects the PTP interface state */
+-	struct mutex			ptp_lock;
+ 	/* Protects the PTP clock */
+ 	spinlock_t			ptp_clock_lock;
+ 	struct ptp_pin_desc		ptp_pins[OCELOT_PTP_PINS_NUM];
+diff --git a/include/trace/events/net.h b/include/trace/events/net.h
+index da611a7aaf970..f667c76a3b022 100644
+--- a/include/trace/events/net.h
++++ b/include/trace/events/net.h
+@@ -51,7 +51,8 @@ TRACE_EVENT(net_dev_start_xmit,
+ 		__entry->network_offset = skb_network_offset(skb);
+ 		__entry->transport_offset_valid =
+ 			skb_transport_header_was_set(skb);
+-		__entry->transport_offset = skb_transport_offset(skb);
++		__entry->transport_offset = skb_transport_header_was_set(skb) ?
++			skb_transport_offset(skb) : 0;
+ 		__entry->tx_flags = skb_shinfo(skb)->tx_flags;
+ 		__entry->gso_size = skb_shinfo(skb)->gso_size;
+ 		__entry->gso_segs = skb_shinfo(skb)->gso_segs;
+diff --git a/include/trace/events/timer.h b/include/trace/events/timer.h
+index 3e8619c72f774..b4bc2828fa09f 100644
+--- a/include/trace/events/timer.h
++++ b/include/trace/events/timer.h
+@@ -158,7 +158,11 @@ DEFINE_EVENT(timer_class, timer_cancel,
+ 		{ HRTIMER_MODE_ABS_SOFT,	"ABS|SOFT"	},	\
+ 		{ HRTIMER_MODE_REL_SOFT,	"REL|SOFT"	},	\
+ 		{ HRTIMER_MODE_ABS_PINNED_SOFT,	"ABS|PINNED|SOFT" },	\
+-		{ HRTIMER_MODE_REL_PINNED_SOFT,	"REL|PINNED|SOFT" })
++		{ HRTIMER_MODE_REL_PINNED_SOFT,	"REL|PINNED|SOFT" },	\
++		{ HRTIMER_MODE_ABS_HARD,	"ABS|HARD" },		\
++		{ HRTIMER_MODE_REL_HARD,	"REL|HARD" },		\
++		{ HRTIMER_MODE_ABS_PINNED_HARD, "ABS|PINNED|HARD" },	\
++		{ HRTIMER_MODE_REL_PINNED_HARD,	"REL|PINNED|HARD" })
+ 
+ /**
+  * hrtimer_init - called when the hrtimer is initialized
+diff --git a/include/uapi/linux/affs_hardblocks.h b/include/uapi/linux/affs_hardblocks.h
+index 5e2fb8481252a..a5aff2eb5f708 100644
+--- a/include/uapi/linux/affs_hardblocks.h
++++ b/include/uapi/linux/affs_hardblocks.h
+@@ -7,42 +7,42 @@
+ /* Just the needed definitions for the RDB of an Amiga HD. */
+ 
+ struct RigidDiskBlock {
+-	__u32	rdb_ID;
++	__be32	rdb_ID;
+ 	__be32	rdb_SummedLongs;
+-	__s32	rdb_ChkSum;
+-	__u32	rdb_HostID;
++	__be32	rdb_ChkSum;
++	__be32	rdb_HostID;
+ 	__be32	rdb_BlockBytes;
+-	__u32	rdb_Flags;
+-	__u32	rdb_BadBlockList;
++	__be32	rdb_Flags;
++	__be32	rdb_BadBlockList;
+ 	__be32	rdb_PartitionList;
+-	__u32	rdb_FileSysHeaderList;
+-	__u32	rdb_DriveInit;
+-	__u32	rdb_Reserved1[6];
+-	__u32	rdb_Cylinders;
+-	__u32	rdb_Sectors;
+-	__u32	rdb_Heads;
+-	__u32	rdb_Interleave;
+-	__u32	rdb_Park;
+-	__u32	rdb_Reserved2[3];
+-	__u32	rdb_WritePreComp;
+-	__u32	rdb_ReducedWrite;
+-	__u32	rdb_StepRate;
+-	__u32	rdb_Reserved3[5];
+-	__u32	rdb_RDBBlocksLo;
+-	__u32	rdb_RDBBlocksHi;
+-	__u32	rdb_LoCylinder;
+-	__u32	rdb_HiCylinder;
+-	__u32	rdb_CylBlocks;
+-	__u32	rdb_AutoParkSeconds;
+-	__u32	rdb_HighRDSKBlock;
+-	__u32	rdb_Reserved4;
++	__be32	rdb_FileSysHeaderList;
++	__be32	rdb_DriveInit;
++	__be32	rdb_Reserved1[6];
++	__be32	rdb_Cylinders;
++	__be32	rdb_Sectors;
++	__be32	rdb_Heads;
++	__be32	rdb_Interleave;
++	__be32	rdb_Park;
++	__be32	rdb_Reserved2[3];
++	__be32	rdb_WritePreComp;
++	__be32	rdb_ReducedWrite;
++	__be32	rdb_StepRate;
++	__be32	rdb_Reserved3[5];
++	__be32	rdb_RDBBlocksLo;
++	__be32	rdb_RDBBlocksHi;
++	__be32	rdb_LoCylinder;
++	__be32	rdb_HiCylinder;
++	__be32	rdb_CylBlocks;
++	__be32	rdb_AutoParkSeconds;
++	__be32	rdb_HighRDSKBlock;
++	__be32	rdb_Reserved4;
+ 	char	rdb_DiskVendor[8];
+ 	char	rdb_DiskProduct[16];
+ 	char	rdb_DiskRevision[4];
+ 	char	rdb_ControllerVendor[8];
+ 	char	rdb_ControllerProduct[16];
+ 	char	rdb_ControllerRevision[4];
+-	__u32	rdb_Reserved5[10];
++	__be32	rdb_Reserved5[10];
+ };
+ 
+ #define	IDNAME_RIGIDDISK	0x5244534B	/* "RDSK" */
+@@ -50,16 +50,16 @@ struct RigidDiskBlock {
+ struct PartitionBlock {
+ 	__be32	pb_ID;
+ 	__be32	pb_SummedLongs;
+-	__s32	pb_ChkSum;
+-	__u32	pb_HostID;
++	__be32	pb_ChkSum;
++	__be32	pb_HostID;
+ 	__be32	pb_Next;
+-	__u32	pb_Flags;
+-	__u32	pb_Reserved1[2];
+-	__u32	pb_DevFlags;
++	__be32	pb_Flags;
++	__be32	pb_Reserved1[2];
++	__be32	pb_DevFlags;
+ 	__u8	pb_DriveName[32];
+-	__u32	pb_Reserved2[15];
++	__be32	pb_Reserved2[15];
+ 	__be32	pb_Environment[17];
+-	__u32	pb_EReserved[15];
++	__be32	pb_EReserved[15];
+ };
+ 
+ #define	IDNAME_PARTITION	0x50415254	/* "PART" */
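
Every multi-byte RDB and partition-block field is stored big-endian on disk; annotating them __be32 makes sparse flag any read that skips the byte-order conversion. A standalone userspace sketch of the conversion the annotations now enforce (ntohl() stands in for the kernel's be32_to_cpu()):

#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
	uint32_t on_disk = htonl(512);	/* e.g. rdb_BlockBytes as stored */

	/* the raw value is only meaningful after an explicit
	 * big-endian-to-host conversion
	 */
	printf("raw=0x%x decoded=%u\n", on_disk, ntohl(on_disk));
	return 0;
}
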
+diff --git a/include/uapi/linux/auto_dev-ioctl.h b/include/uapi/linux/auto_dev-ioctl.h
+index 62e625356dc81..08be539605fca 100644
+--- a/include/uapi/linux/auto_dev-ioctl.h
++++ b/include/uapi/linux/auto_dev-ioctl.h
+@@ -109,7 +109,7 @@ struct autofs_dev_ioctl {
+ 		struct args_ismountpoint	ismountpoint;
+ 	};
+ 
+-	char path[0];
++	char path[];
+ };
+ 
+ static inline void init_autofs_dev_ioctl(struct autofs_dev_ioctl *in)
+diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
+index aee75eb9e6863..5d8bd754c69f1 100644
+--- a/include/uapi/linux/videodev2.h
++++ b/include/uapi/linux/videodev2.h
+@@ -1720,7 +1720,7 @@ struct v4l2_input {
+ 	__u8	     name[32];		/*  Label */
+ 	__u32	     type;		/*  Type of input */
+ 	__u32	     audioset;		/*  Associated audios (bitfield) */
+-	__u32        tuner;             /*  enum v4l2_tuner_type */
++	__u32        tuner;             /*  Tuner index */
+ 	v4l2_std_id  std;
+ 	__u32	     status;
+ 	__u32	     capabilities;
+@@ -1807,8 +1807,8 @@ struct v4l2_ext_control {
+ 		__u8 __user *p_u8;
+ 		__u16 __user *p_u16;
+ 		__u32 __user *p_u32;
+-		__u32 __user *p_s32;
+-		__u32 __user *p_s64;
++		__s32 __user *p_s32;
++		__s64 __user *p_s64;
+ 		struct v4l2_area __user *p_area;
+ 		struct v4l2_ctrl_h264_sps __user *p_h264_sps;
+ 		struct v4l2_ctrl_h264_pps *p_h264_pps;
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index df1d04f7a5424..8eecbb3766158 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -225,7 +225,6 @@ struct ufs_dev_cmd {
+ 	struct mutex lock;
+ 	struct completion *complete;
+ 	struct ufs_query query;
+-	struct cq_entry *cqe;
+ };
+ 
+ /**
+diff --git a/init/Makefile b/init/Makefile
+index 26de459006c4e..ec557ada3c12e 100644
+--- a/init/Makefile
++++ b/init/Makefile
+@@ -60,3 +60,4 @@ include/generated/utsversion.h: FORCE
+ $(obj)/version-timestamp.o: include/generated/utsversion.h
+ CFLAGS_version-timestamp.o := -include include/generated/utsversion.h
+ KASAN_SANITIZE_version-timestamp.o := n
++GCOV_PROFILE_version-timestamp.o := n
+diff --git a/init/main.c b/init/main.c
+index af50044deed56..c445c1fb19b95 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -877,7 +877,8 @@ static void __init print_unknown_bootoptions(void)
+ 	memblock_free(unknown_options, len);
+ }
+ 
+-asmlinkage __visible void __init __no_sanitize_address __noreturn start_kernel(void)
++asmlinkage __visible __init __no_sanitize_address __noreturn __no_stack_protector
++void start_kernel(void)
+ {
+ 	char *command_line;
+ 	char *after_dashes;
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 3bca7a79efda4..f1b79959d1c1d 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -2575,6 +2575,8 @@ int io_run_task_work_sig(struct io_ring_ctx *ctx)
+ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 					  struct io_wait_queue *iowq)
+ {
++	int token, ret;
++
+ 	if (unlikely(READ_ONCE(ctx->check_cq)))
+ 		return 1;
+ 	if (unlikely(!llist_empty(&ctx->work_llist)))
+@@ -2585,11 +2587,20 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
+ 		return -EINTR;
+ 	if (unlikely(io_should_wake(iowq)))
+ 		return 0;
++
++	/*
++	 * Use io_schedule_prepare/finish, so cpufreq can take into account
++	 * that the task is waiting for IO - turns out to be important for low
++	 * QD IO.
++	 */
++	token = io_schedule_prepare();
++	ret = 0;
+ 	if (iowq->timeout == KTIME_MAX)
+ 		schedule();
+ 	else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS))
+-		return -ETIME;
+-	return 0;
++		ret = -ETIME;
++	io_schedule_finish(token);
++	return ret;
+ }
+ 
+ /*
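
The io_schedule_prepare()/io_schedule_finish() pair brackets the sleep so the task is accounted as in_iowait, letting cpufreq governors see low-queue-depth IO waits as IO time rather than idle. A hedged sketch of the pattern, assuming kernel context (demo_wait_io is illustrative, not from the patch):

static void demo_wait_io(void)
{
	int token;

	token = io_schedule_prepare();	/* marks current as in_iowait */
	schedule();			/* the actual wait */
	io_schedule_finish(token);	/* restores the previous state */
}
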
+@@ -3050,7 +3061,18 @@ static __cold void io_ring_exit_work(struct work_struct *work)
+ 			/* there is little hope left, don't run it too often */
+ 			interval = HZ * 60;
+ 		}
+-	} while (!wait_for_completion_timeout(&ctx->ref_comp, interval));
++		/*
++		 * This is really an uninterruptible wait, as it has to be
++		 * complete. But it's also run from a kworker, which doesn't
++		 * take signals, so it's fine to make it interruptible. This
++		 * avoids scenarios where we knowingly can wait much longer
++		 * on completions, for example if someone does a SIGSTOP on
++		 * a task that needs to finish task_work to make this loop
++		 * complete. That's a synthetic situation that should not
++		 * cause a stuck task backtrace, and hence a potential panic
++		 * on stuck tasks if that is enabled.
++		 */
++	} while (!wait_for_completion_interruptible_timeout(&ctx->ref_comp, interval));
+ 
+ 	init_completion(&exit.completion);
+ 	init_task_work(&exit.task_work, io_tctx_exit_cb);
+@@ -3074,7 +3096,12 @@ static __cold void io_ring_exit_work(struct work_struct *work)
+ 			continue;
+ 
+ 		mutex_unlock(&ctx->uring_lock);
+-		wait_for_completion(&exit.completion);
++		/*
++		 * See comment above for
++		 * wait_for_completion_interruptible_timeout() on why this
++		 * wait is marked as interruptible.
++		 */
++		wait_for_completion_interruptible(&exit.completion);
+ 		mutex_lock(&ctx->uring_lock);
+ 	}
+ 	mutex_unlock(&ctx->uring_lock);
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 72b32b7cd9cd9..25ca17a8e1964 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -7848,10 +7848,8 @@ static int __register_btf_kfunc_id_set(enum btf_kfunc_hook hook,
+ 			pr_err("missing vmlinux BTF, cannot register kfuncs\n");
+ 			return -ENOENT;
+ 		}
+-		if (kset->owner && IS_ENABLED(CONFIG_DEBUG_INFO_BTF_MODULES)) {
+-			pr_err("missing module BTF, cannot register kfuncs\n");
+-			return -ENOENT;
+-		}
++		if (kset->owner && IS_ENABLED(CONFIG_DEBUG_INFO_BTF_MODULES))
++			pr_warn("missing module BTF, cannot register kfuncs\n");
+ 		return 0;
+ 	}
+ 	if (IS_ERR(btf))
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 517b6a5928cc0..5b2741aa0d9bb 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -1826,6 +1826,12 @@ int __cgroup_bpf_run_filter_setsockopt(struct sock *sk, int *level,
+ 		ret = 1;
+ 	} else if (ctx.optlen > max_optlen || ctx.optlen < -1) {
+ 		/* optlen is out of bounds */
++		if (*optlen > PAGE_SIZE && ctx.optlen >= 0) {
++			pr_info_once("bpf setsockopt: ignoring program buffer with optlen=%d (max_optlen=%d)\n",
++				     ctx.optlen, max_optlen);
++			ret = 0;
++			goto out;
++		}
+ 		ret = -EFAULT;
+ 	} else {
+ 		/* optlen within bounds, run kernel handler */
+@@ -1881,8 +1887,10 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ 		.optname = optname,
+ 		.current_task = current,
+ 	};
++	int orig_optlen;
+ 	int ret;
+ 
++	orig_optlen = max_optlen;
+ 	ctx.optlen = max_optlen;
+ 	max_optlen = sockopt_alloc_buf(&ctx, max_optlen, &buf);
+ 	if (max_optlen < 0)
+@@ -1905,6 +1913,7 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ 			ret = -EFAULT;
+ 			goto out;
+ 		}
++		orig_optlen = ctx.optlen;
+ 
+ 		if (copy_from_user(ctx.optval, optval,
+ 				   min(ctx.optlen, max_optlen)) != 0) {
+@@ -1922,6 +1931,12 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ 		goto out;
+ 
+ 	if (optval && (ctx.optlen > max_optlen || ctx.optlen < 0)) {
++		if (orig_optlen > PAGE_SIZE && ctx.optlen >= 0) {
++			pr_info_once("bpf getsockopt: ignoring program buffer with optlen=%d (max_optlen=%d)\n",
++				     ctx.optlen, max_optlen);
++			ret = retval;
++			goto out;
++		}
+ 		ret = -EFAULT;
+ 		goto out;
+ 	}
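
The new early-outs only apply when the original user buffer already exceeded PAGE_SIZE, the case where the kernel hands the program a truncated copy; a program that keeps optlen within bounds is unaffected. A hedged BPF-side illustration (hypothetical program, not part of the patch):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("cgroup/getsockopt")
int demo_getsockopt(struct bpf_sockopt *ctx)
{
	if (ctx->optlen > 64)
		ctx->optlen = 64;	/* shrinking optlen is always safe */
	return 1;			/* pass the kernel's result through */
}

char _license[] SEC("license") = "GPL";
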
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index 8d368fa353f99..f12565ba136b0 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -1926,8 +1926,12 @@ __bpf_kfunc void *bpf_refcount_acquire_impl(void *p__refcounted_kptr, void *meta
+ 	 * bpf_refcount type so that it is emitted in vmlinux BTF
+ 	 */
+ 	ref = (struct bpf_refcount *)(p__refcounted_kptr + meta->record->refcount_off);
++	if (!refcount_inc_not_zero((refcount_t *)ref))
++		return NULL;
+ 
+-	refcount_inc((refcount_t *)ref);
++	/* Verifier strips KF_RET_NULL if input is owned ref, see is_kfunc_ret_null
++	 * in verifier.c
++	 */
+ 	return (void *)p__refcounted_kptr;
+ }
+ 
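
refcount_inc_not_zero() makes the acquire fallible: once the count has dropped to zero the object is on its way to being freed and must not be resurrected, hence the new NULL return. A standalone userspace analogue (C11 atomics modeling refcount_t; inc_not_zero is illustrative):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static bool inc_not_zero(atomic_int *r)
{
	int old = atomic_load(r);

	/* only increment while the count is still nonzero */
	while (old != 0)
		if (atomic_compare_exchange_weak(r, &old, old + 1))
			return true;
	return false;	/* zero reached: object is being freed */
}

int main(void)
{
	atomic_int ref = 0;	/* last reference already dropped */

	printf("acquired=%d\n", inc_not_zero(&ref));	/* prints 0 */
	return 0;
}
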
+@@ -1943,7 +1947,7 @@ static int __bpf_list_add(struct bpf_list_node *node, struct bpf_list_head *head
+ 		INIT_LIST_HEAD(h);
+ 	if (!list_empty(n)) {
+ 		/* Only called from BPF prog, no need to migrate_disable */
+-		__bpf_obj_drop_impl(n - off, rec);
++		__bpf_obj_drop_impl((void *)n - off, rec);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -2025,7 +2029,7 @@ static int __bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node,
+ 
+ 	if (!RB_EMPTY_NODE(n)) {
+ 		/* Only called from BPF prog, no need to migrate_disable */
+-		__bpf_obj_drop_impl(n - off, rec);
++		__bpf_obj_drop_impl((void *)n - off, rec);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -2325,7 +2329,7 @@ BTF_ID_FLAGS(func, crash_kexec, KF_DESTRUCTIVE)
+ #endif
+ BTF_ID_FLAGS(func, bpf_obj_new_impl, KF_ACQUIRE | KF_RET_NULL)
+ BTF_ID_FLAGS(func, bpf_obj_drop_impl, KF_RELEASE)
+-BTF_ID_FLAGS(func, bpf_refcount_acquire_impl, KF_ACQUIRE)
++BTF_ID_FLAGS(func, bpf_refcount_acquire_impl, KF_ACQUIRE | KF_RET_NULL)
+ BTF_ID_FLAGS(func, bpf_list_push_front_impl)
+ BTF_ID_FLAGS(func, bpf_list_push_back_impl)
+ BTF_ID_FLAGS(func, bpf_list_pop_front, KF_ACQUIRE | KF_RET_NULL)
+diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
+index ac021bc43a66b..78acf28d48732 100644
+--- a/kernel/bpf/trampoline.c
++++ b/kernel/bpf/trampoline.c
+@@ -251,11 +251,8 @@ bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_a
+ 	return tlinks;
+ }
+ 
+-static void __bpf_tramp_image_put_deferred(struct work_struct *work)
++static void bpf_tramp_image_free(struct bpf_tramp_image *im)
+ {
+-	struct bpf_tramp_image *im;
+-
+-	im = container_of(work, struct bpf_tramp_image, work);
+ 	bpf_image_ksym_del(&im->ksym);
+ 	bpf_jit_free_exec(im->image);
+ 	bpf_jit_uncharge_modmem(PAGE_SIZE);
+@@ -263,6 +260,14 @@ static void __bpf_tramp_image_put_deferred(struct work_struct *work)
+ 	kfree_rcu(im, rcu);
+ }
+ 
++static void __bpf_tramp_image_put_deferred(struct work_struct *work)
++{
++	struct bpf_tramp_image *im;
++
++	im = container_of(work, struct bpf_tramp_image, work);
++	bpf_tramp_image_free(im);
++}
++
+ /* callback, fexit step 3 or fentry step 2 */
+ static void __bpf_tramp_image_put_rcu(struct rcu_head *rcu)
+ {
+@@ -344,7 +349,7 @@ static void bpf_tramp_image_put(struct bpf_tramp_image *im)
+ 	call_rcu_tasks_trace(&im->rcu, __bpf_tramp_image_put_rcu_tasks);
+ }
+ 
+-static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, u32 idx)
++static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key)
+ {
+ 	struct bpf_tramp_image *im;
+ 	struct bpf_ksym *ksym;
+@@ -371,7 +376,7 @@ static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, u32 idx)
+ 
+ 	ksym = &im->ksym;
+ 	INIT_LIST_HEAD_RCU(&ksym->lnode);
+-	snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu_%u", key, idx);
++	snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu", key);
+ 	bpf_image_ksym_add(image, ksym);
+ 	return im;
+ 
+@@ -401,11 +406,10 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
+ 		err = unregister_fentry(tr, tr->cur_image->image);
+ 		bpf_tramp_image_put(tr->cur_image);
+ 		tr->cur_image = NULL;
+-		tr->selector = 0;
+ 		goto out;
+ 	}
+ 
+-	im = bpf_tramp_image_alloc(tr->key, tr->selector);
++	im = bpf_tramp_image_alloc(tr->key);
+ 	if (IS_ERR(im)) {
+ 		err = PTR_ERR(im);
+ 		goto out;
+@@ -438,12 +442,11 @@ again:
+ 					  &tr->func.model, tr->flags, tlinks,
+ 					  tr->func.addr);
+ 	if (err < 0)
+-		goto out;
++		goto out_free;
+ 
+ 	set_memory_rox((long)im->image, 1);
+ 
+-	WARN_ON(tr->cur_image && tr->selector == 0);
+-	WARN_ON(!tr->cur_image && tr->selector);
++	WARN_ON(tr->cur_image && total == 0);
+ 	if (tr->cur_image)
+ 		/* progs already running at this address */
+ 		err = modify_fentry(tr, tr->cur_image->image, im->image, lock_direct_mutex);
+@@ -468,18 +471,21 @@ again:
+ 	}
+ #endif
+ 	if (err)
+-		goto out;
++		goto out_free;
+ 
+ 	if (tr->cur_image)
+ 		bpf_tramp_image_put(tr->cur_image);
+ 	tr->cur_image = im;
+-	tr->selector++;
+ out:
+ 	/* If any error happens, restore previous flags */
+ 	if (err)
+ 		tr->flags = orig_flags;
+ 	kfree(tlinks);
+ 	return err;
++
++out_free:
++	bpf_tramp_image_free(im);
++	goto out;
+ }
+ 
+ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index cf5f230360f53..30fabae47a07b 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -273,11 +273,6 @@ struct bpf_call_arg_meta {
+ 	struct btf_field *kptr_field;
+ };
+ 
+-struct btf_and_id {
+-	struct btf *btf;
+-	u32 btf_id;
+-};
+-
+ struct bpf_kfunc_call_arg_meta {
+ 	/* In parameters */
+ 	struct btf *btf;
+@@ -296,10 +291,21 @@ struct bpf_kfunc_call_arg_meta {
+ 		u64 value;
+ 		bool found;
+ 	} arg_constant;
+-	union {
+-		struct btf_and_id arg_obj_drop;
+-		struct btf_and_id arg_refcount_acquire;
+-	};
++
++	/* arg_{btf,btf_id,owning_ref} are used by kfunc-specific handling,
++	 * generally to pass info about user-defined local kptr types to later
++	 * verification logic
++	 *   bpf_obj_drop
++	 *     Record the local kptr type to be drop'd
++	 *   bpf_refcount_acquire (via KF_ARG_PTR_TO_REFCOUNTED_KPTR arg type)
++	 *     Record the local kptr type to be refcount_incr'd and use
++	 *     arg_owning_ref to determine whether refcount_acquire should be
++	 *     fallible
++	 */
++	struct btf *arg_btf;
++	u32 arg_btf_id;
++	bool arg_owning_ref;
++
+ 	struct {
+ 		struct btf_field *field;
+ 	} arg_list_head;
+@@ -604,9 +610,9 @@ static const char *reg_type_str(struct bpf_verifier_env *env,
+ 		 type & PTR_TRUSTED ? "trusted_" : ""
+ 	);
+ 
+-	snprintf(env->type_str_buf, TYPE_STR_BUF_LEN, "%s%s%s",
++	snprintf(env->tmp_str_buf, TMP_STR_BUF_LEN, "%s%s%s",
+ 		 prefix, str[base_type(type)], postfix);
+-	return env->type_str_buf;
++	return env->tmp_str_buf;
+ }
+ 
+ static char slot_type_char[] = {
+@@ -1254,6 +1260,12 @@ static bool is_spilled_reg(const struct bpf_stack_state *stack)
+ 	return stack->slot_type[BPF_REG_SIZE - 1] == STACK_SPILL;
+ }
+ 
++static bool is_spilled_scalar_reg(const struct bpf_stack_state *stack)
++{
++	return stack->slot_type[BPF_REG_SIZE - 1] == STACK_SPILL &&
++	       stack->spilled_ptr.type == SCALAR_VALUE;
++}
++
+ static void scrub_spilled_slot(u8 *stype)
+ {
+ 	if (*stype != STACK_INVALID)
+@@ -3144,12 +3156,167 @@ static const char *disasm_kfunc_name(void *data, const struct bpf_insn *insn)
+ 	return btf_name_by_offset(desc_btf, func->name_off);
+ }
+ 
++static inline void bt_init(struct backtrack_state *bt, u32 frame)
++{
++	bt->frame = frame;
++}
++
++static inline void bt_reset(struct backtrack_state *bt)
++{
++	struct bpf_verifier_env *env = bt->env;
++
++	memset(bt, 0, sizeof(*bt));
++	bt->env = env;
++}
++
++static inline u32 bt_empty(struct backtrack_state *bt)
++{
++	u64 mask = 0;
++	int i;
++
++	for (i = 0; i <= bt->frame; i++)
++		mask |= bt->reg_masks[i] | bt->stack_masks[i];
++
++	return mask == 0;
++}
++
++static inline int bt_subprog_enter(struct backtrack_state *bt)
++{
++	if (bt->frame == MAX_CALL_FRAMES - 1) {
++		verbose(bt->env, "BUG subprog enter from frame %d\n", bt->frame);
++		WARN_ONCE(1, "verifier backtracking bug");
++		return -EFAULT;
++	}
++	bt->frame++;
++	return 0;
++}
++
++static inline int bt_subprog_exit(struct backtrack_state *bt)
++{
++	if (bt->frame == 0) {
++		verbose(bt->env, "BUG subprog exit from frame 0\n");
++		WARN_ONCE(1, "verifier backtracking bug");
++		return -EFAULT;
++	}
++	bt->frame--;
++	return 0;
++}
++
++static inline void bt_set_frame_reg(struct backtrack_state *bt, u32 frame, u32 reg)
++{
++	bt->reg_masks[frame] |= 1 << reg;
++}
++
++static inline void bt_clear_frame_reg(struct backtrack_state *bt, u32 frame, u32 reg)
++{
++	bt->reg_masks[frame] &= ~(1 << reg);
++}
++
++static inline void bt_set_reg(struct backtrack_state *bt, u32 reg)
++{
++	bt_set_frame_reg(bt, bt->frame, reg);
++}
++
++static inline void bt_clear_reg(struct backtrack_state *bt, u32 reg)
++{
++	bt_clear_frame_reg(bt, bt->frame, reg);
++}
++
++static inline void bt_set_frame_slot(struct backtrack_state *bt, u32 frame, u32 slot)
++{
++	bt->stack_masks[frame] |= 1ull << slot;
++}
++
++static inline void bt_clear_frame_slot(struct backtrack_state *bt, u32 frame, u32 slot)
++{
++	bt->stack_masks[frame] &= ~(1ull << slot);
++}
++
++static inline void bt_set_slot(struct backtrack_state *bt, u32 slot)
++{
++	bt_set_frame_slot(bt, bt->frame, slot);
++}
++
++static inline void bt_clear_slot(struct backtrack_state *bt, u32 slot)
++{
++	bt_clear_frame_slot(bt, bt->frame, slot);
++}
++
++static inline u32 bt_frame_reg_mask(struct backtrack_state *bt, u32 frame)
++{
++	return bt->reg_masks[frame];
++}
++
++static inline u32 bt_reg_mask(struct backtrack_state *bt)
++{
++	return bt->reg_masks[bt->frame];
++}
++
++static inline u64 bt_frame_stack_mask(struct backtrack_state *bt, u32 frame)
++{
++	return bt->stack_masks[frame];
++}
++
++static inline u64 bt_stack_mask(struct backtrack_state *bt)
++{
++	return bt->stack_masks[bt->frame];
++}
++
++static inline bool bt_is_reg_set(struct backtrack_state *bt, u32 reg)
++{
++	return bt->reg_masks[bt->frame] & (1 << reg);
++}
++
++static inline bool bt_is_slot_set(struct backtrack_state *bt, u32 slot)
++{
++	return bt->stack_masks[bt->frame] & (1ull << slot);
++}
++
++/* format registers bitmask, e.g., "r0,r2,r4" for 0x15 mask */
++static void fmt_reg_mask(char *buf, ssize_t buf_sz, u32 reg_mask)
++{
++	DECLARE_BITMAP(mask, 64);
++	bool first = true;
++	int i, n;
++
++	buf[0] = '\0';
++
++	bitmap_from_u64(mask, reg_mask);
++	for_each_set_bit(i, mask, 32) {
++		n = snprintf(buf, buf_sz, "%sr%d", first ? "" : ",", i);
++		first = false;
++		buf += n;
++		buf_sz -= n;
++		if (buf_sz < 0)
++			break;
++	}
++}
++/* format stack slots bitmask, e.g., "-8,-24,-40" for 0x15 mask */
++static void fmt_stack_mask(char *buf, ssize_t buf_sz, u64 stack_mask)
++{
++	DECLARE_BITMAP(mask, 64);
++	bool first = true;
++	int i, n;
++
++	buf[0] = '\0';
++
++	bitmap_from_u64(mask, stack_mask);
++	for_each_set_bit(i, mask, 64) {
++		n = snprintf(buf, buf_sz, "%s%d", first ? "" : ",", -(i + 1) * 8);
++		first = false;
++		buf += n;
++		buf_sz -= n;
++		if (buf_sz < 0)
++			break;
++	}
++}
++
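
The two formatters above turn the backtracking masks into readable log strings: bit i of the register mask prints as "ri", and bit i of the stack mask prints as slot offset -(i+1)*8. A standalone sketch mirroring fmt_reg_mask() (same logic, userspace types):

#include <stdio.h>

static void fmt_reg_mask_demo(char *buf, int buf_sz, unsigned int mask)
{
	int first = 1, i, n;

	buf[0] = '\0';
	for (i = 0; i < 32; i++) {
		if (!(mask & (1u << i)))
			continue;
		n = snprintf(buf, buf_sz, "%sr%d", first ? "" : ",", i);
		first = 0;
		buf += n;
		buf_sz -= n;
		if (buf_sz < 0)
			break;
	}
}

int main(void)
{
	char buf[64];

	fmt_reg_mask_demo(buf, sizeof(buf), 0x15);	/* bits 0, 2, 4 */
	printf("%s\n", buf);				/* "r0,r2,r4" */
	return 0;
}
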
+ /* For given verifier state backtrack_insn() is called from the last insn to
+  * the first insn. Its purpose is to compute a bitmask of registers and
+  * stack slots that needs precision in the parent verifier state.
+  */
+ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
+-			  u32 *reg_mask, u64 *stack_mask)
++			  struct backtrack_state *bt)
+ {
+ 	const struct bpf_insn_cbs cbs = {
+ 		.cb_call	= disasm_kfunc_name,
+@@ -3160,20 +3327,24 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
+ 	u8 class = BPF_CLASS(insn->code);
+ 	u8 opcode = BPF_OP(insn->code);
+ 	u8 mode = BPF_MODE(insn->code);
+-	u32 dreg = 1u << insn->dst_reg;
+-	u32 sreg = 1u << insn->src_reg;
++	u32 dreg = insn->dst_reg;
++	u32 sreg = insn->src_reg;
+ 	u32 spi;
+ 
+ 	if (insn->code == 0)
+ 		return 0;
+ 	if (env->log.level & BPF_LOG_LEVEL2) {
+-		verbose(env, "regs=%x stack=%llx before ", *reg_mask, *stack_mask);
++		fmt_reg_mask(env->tmp_str_buf, TMP_STR_BUF_LEN, bt_reg_mask(bt));
++		verbose(env, "mark_precise: frame%d: regs=%s ",
++			bt->frame, env->tmp_str_buf);
++		fmt_stack_mask(env->tmp_str_buf, TMP_STR_BUF_LEN, bt_stack_mask(bt));
++		verbose(env, "stack=%s before ", env->tmp_str_buf);
+ 		verbose(env, "%d: ", idx);
+ 		print_bpf_insn(&cbs, insn, env->allow_ptr_leaks);
+ 	}
+ 
+ 	if (class == BPF_ALU || class == BPF_ALU64) {
+-		if (!(*reg_mask & dreg))
++		if (!bt_is_reg_set(bt, dreg))
+ 			return 0;
+ 		if (opcode == BPF_MOV) {
+ 			if (BPF_SRC(insn->code) == BPF_X) {
+@@ -3181,8 +3352,8 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
+ 				 * dreg needs precision after this insn
+ 				 * sreg needs precision before this insn
+ 				 */
+-				*reg_mask &= ~dreg;
+-				*reg_mask |= sreg;
++				bt_clear_reg(bt, dreg);
++				bt_set_reg(bt, sreg);
+ 			} else {
+ 				/* dreg = K
+ 				 * dreg needs precision after this insn.
+@@ -3190,7 +3361,7 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
+ 				 * as precise=true in this verifier state.
+ 				 * No further markings in parent are necessary
+ 				 */
+-				*reg_mask &= ~dreg;
++				bt_clear_reg(bt, dreg);
+ 			}
+ 		} else {
+ 			if (BPF_SRC(insn->code) == BPF_X) {
+@@ -3198,15 +3369,15 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
+ 				 * both dreg and sreg need precision
+ 				 * before this insn
+ 				 */
+-				*reg_mask |= sreg;
++				bt_set_reg(bt, sreg);
+ 			} /* else dreg += K
+ 			   * dreg still needs precision before this insn
+ 			   */
+ 		}
+ 	} else if (class == BPF_LDX) {
+-		if (!(*reg_mask & dreg))
++		if (!bt_is_reg_set(bt, dreg))
+ 			return 0;
+-		*reg_mask &= ~dreg;
++		bt_clear_reg(bt, dreg);
+ 
+ 		/* scalars can only be spilled into stack w/o losing precision.
+ 		 * Load from any other memory can be zero extended.
+@@ -3227,9 +3398,9 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
+ 			WARN_ONCE(1, "verifier backtracking bug");
+ 			return -EFAULT;
+ 		}
+-		*stack_mask |= 1ull << spi;
++		bt_set_slot(bt, spi);
+ 	} else if (class == BPF_STX || class == BPF_ST) {
+-		if (*reg_mask & dreg)
++		if (bt_is_reg_set(bt, dreg))
+ 			/* stx & st shouldn't be using _scalar_ dst_reg
+ 			 * to access memory. It means backtracking
+ 			 * encountered a case of pointer subtraction.
+@@ -3244,11 +3415,11 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
+ 			WARN_ONCE(1, "verifier backtracking bug");
+ 			return -EFAULT;
+ 		}
+-		if (!(*stack_mask & (1ull << spi)))
++		if (!bt_is_slot_set(bt, spi))
+ 			return 0;
+-		*stack_mask &= ~(1ull << spi);
++		bt_clear_slot(bt, spi);
+ 		if (class == BPF_STX)
+-			*reg_mask |= sreg;
++			bt_set_reg(bt, sreg);
+ 	} else if (class == BPF_JMP || class == BPF_JMP32) {
+ 		if (opcode == BPF_CALL) {
+ 			if (insn->src_reg == BPF_PSEUDO_CALL)
+@@ -3265,19 +3436,19 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
+ 			if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL && insn->imm == 0)
+ 				return -ENOTSUPP;
+ 			/* regular helper call sets R0 */
+-			*reg_mask &= ~1;
+-			if (*reg_mask & 0x3f) {
++			bt_clear_reg(bt, BPF_REG_0);
++			if (bt_reg_mask(bt) & BPF_REGMASK_ARGS) {
+ 				/* if backtracing was looking for registers R1-R5
+ 				 * they should have been found already.
+ 				 */
+-				verbose(env, "BUG regs %x\n", *reg_mask);
++				verbose(env, "BUG regs %x\n", bt_reg_mask(bt));
+ 				WARN_ONCE(1, "verifier backtracking bug");
+ 				return -EFAULT;
+ 			}
+ 		} else if (opcode == BPF_EXIT) {
+ 			return -ENOTSUPP;
+ 		} else if (BPF_SRC(insn->code) == BPF_X) {
+-			if (!(*reg_mask & (dreg | sreg)))
++			if (!bt_is_reg_set(bt, dreg) && !bt_is_reg_set(bt, sreg))
+ 				return 0;
+ 			/* dreg <cond> sreg
+ 			 * Both dreg and sreg need precision before
+@@ -3285,7 +3456,8 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
+ 			 * before it would be equally necessary to
+ 			 * propagate it to dreg.
+ 			 */
+-			*reg_mask |= (sreg | dreg);
++			bt_set_reg(bt, dreg);
++			bt_set_reg(bt, sreg);
+ 			 /* else dreg <cond> K
+ 			  * Only dreg still needs precision before
+ 			  * this insn, so for the K-based conditional
+@@ -3293,9 +3465,9 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
+ 			  */
+ 		}
+ 	} else if (class == BPF_LD) {
+-		if (!(*reg_mask & dreg))
++		if (!bt_is_reg_set(bt, dreg))
+ 			return 0;
+-		*reg_mask &= ~dreg;
++		bt_clear_reg(bt, dreg);
+ 		/* It's ld_imm64 or ld_abs or ld_ind.
+ 		 * For ld_imm64 no further tracking of precision
+ 		 * into parent is necessary
+@@ -3366,6 +3538,11 @@ static void mark_all_scalars_precise(struct bpf_verifier_env *env,
+ 	struct bpf_reg_state *reg;
+ 	int i, j;
+ 
++	if (env->log.level & BPF_LOG_LEVEL2) {
++		verbose(env, "mark_precise: frame%d: falling back to forcing all scalars precise\n",
++			st->curframe);
++	}
++
+ 	/* big hammer: mark all scalars precise in this path.
+ 	 * pop_stack may still get !precise scalars.
+ 	 * We also skip current state and go straight to first parent state,
+@@ -3377,17 +3554,25 @@ static void mark_all_scalars_precise(struct bpf_verifier_env *env,
+ 			func = st->frame[i];
+ 			for (j = 0; j < BPF_REG_FP; j++) {
+ 				reg = &func->regs[j];
+-				if (reg->type != SCALAR_VALUE)
++				if (reg->type != SCALAR_VALUE || reg->precise)
+ 					continue;
+ 				reg->precise = true;
++				if (env->log.level & BPF_LOG_LEVEL2) {
++					verbose(env, "force_precise: frame%d: forcing r%d to be precise\n",
++						i, j);
++				}
+ 			}
+ 			for (j = 0; j < func->allocated_stack / BPF_REG_SIZE; j++) {
+ 				if (!is_spilled_reg(&func->stack[j]))
+ 					continue;
+ 				reg = &func->stack[j].spilled_ptr;
+-				if (reg->type != SCALAR_VALUE)
++				if (reg->type != SCALAR_VALUE || reg->precise)
+ 					continue;
+ 				reg->precise = true;
++				if (env->log.level & BPF_LOG_LEVEL2) {
++					verbose(env, "force_precise: frame%d: forcing fp%d to be precise\n",
++						i, -(j + 1) * 8);
++				}
+ 			}
+ 		}
+ 	}
+@@ -3418,6 +3603,96 @@ static void mark_all_scalars_imprecise(struct bpf_verifier_env *env, struct bpf_
+ 	}
+ }
+ 
++static bool idset_contains(struct bpf_idset *s, u32 id)
++{
++	u32 i;
++
++	for (i = 0; i < s->count; ++i)
++		if (s->ids[i] == id)
++			return true;
++
++	return false;
++}
++
++static int idset_push(struct bpf_idset *s, u32 id)
++{
++	if (WARN_ON_ONCE(s->count >= ARRAY_SIZE(s->ids)))
++		return -EFAULT;
++	s->ids[s->count++] = id;
++	return 0;
++}
++
++static void idset_reset(struct bpf_idset *s)
++{
++	s->count = 0;
++}
++
++/* Collect a set of IDs for all registers currently marked as precise in env->bt.
++ * Mark all registers with these IDs as precise.
++ */
++static int mark_precise_scalar_ids(struct bpf_verifier_env *env, struct bpf_verifier_state *st)
++{
++	struct bpf_idset *precise_ids = &env->idset_scratch;
++	struct backtrack_state *bt = &env->bt;
++	struct bpf_func_state *func;
++	struct bpf_reg_state *reg;
++	DECLARE_BITMAP(mask, 64);
++	int i, fr;
++
++	idset_reset(precise_ids);
++
++	for (fr = bt->frame; fr >= 0; fr--) {
++		func = st->frame[fr];
++
++		bitmap_from_u64(mask, bt_frame_reg_mask(bt, fr));
++		for_each_set_bit(i, mask, 32) {
++			reg = &func->regs[i];
++			if (!reg->id || reg->type != SCALAR_VALUE)
++				continue;
++			if (idset_push(precise_ids, reg->id))
++				return -EFAULT;
++		}
++
++		bitmap_from_u64(mask, bt_frame_stack_mask(bt, fr));
++		for_each_set_bit(i, mask, 64) {
++			if (i >= func->allocated_stack / BPF_REG_SIZE)
++				break;
++			if (!is_spilled_scalar_reg(&func->stack[i]))
++				continue;
++			reg = &func->stack[i].spilled_ptr;
++			if (!reg->id)
++				continue;
++			if (idset_push(precise_ids, reg->id))
++				return -EFAULT;
++		}
++	}
++
++	for (fr = 0; fr <= st->curframe; ++fr) {
++		func = st->frame[fr];
++
++		for (i = BPF_REG_0; i < BPF_REG_10; ++i) {
++			reg = &func->regs[i];
++			if (!reg->id)
++				continue;
++			if (!idset_contains(precise_ids, reg->id))
++				continue;
++			bt_set_frame_reg(bt, fr, i);
++		}
++		for (i = 0; i < func->allocated_stack / BPF_REG_SIZE; ++i) {
++			if (!is_spilled_scalar_reg(&func->stack[i]))
++				continue;
++			reg = &func->stack[i].spilled_ptr;
++			if (!reg->id)
++				continue;
++			if (!idset_contains(precise_ids, reg->id))
++				continue;
++			bt_set_frame_slot(bt, fr, i);
++		}
++	}
++
++	return 0;
++}
++
+ /*
+  * __mark_chain_precision() backtracks BPF program instruction sequence and
+  * chain of verifier states making sure that register *regno* (if regno >= 0)
+@@ -3505,62 +3780,73 @@ static void mark_all_scalars_imprecise(struct bpf_verifier_env *env, struct bpf_
+  * mark_all_scalars_imprecise() to hopefully get more permissive and generic
+  * finalized states which help in short circuiting more future states.
+  */
+-static int __mark_chain_precision(struct bpf_verifier_env *env, int frame, int regno,
+-				  int spi)
++static int __mark_chain_precision(struct bpf_verifier_env *env, int regno)
+ {
++	struct backtrack_state *bt = &env->bt;
+ 	struct bpf_verifier_state *st = env->cur_state;
+ 	int first_idx = st->first_insn_idx;
+ 	int last_idx = env->insn_idx;
+ 	struct bpf_func_state *func;
+ 	struct bpf_reg_state *reg;
+-	u32 reg_mask = regno >= 0 ? 1u << regno : 0;
+-	u64 stack_mask = spi >= 0 ? 1ull << spi : 0;
+ 	bool skip_first = true;
+-	bool new_marks = false;
+-	int i, err;
++	int i, fr, err;
+ 
+ 	if (!env->bpf_capable)
+ 		return 0;
+ 
++	/* set frame number from which we are starting to backtrack */
++	bt_init(bt, env->cur_state->curframe);
++
+ 	/* Do sanity checks against current state of register and/or stack
+ 	 * slot, but don't set precise flag in current state, as precision
+ 	 * tracking in the current state is unnecessary.
+ 	 */
+-	func = st->frame[frame];
++	func = st->frame[bt->frame];
+ 	if (regno >= 0) {
+ 		reg = &func->regs[regno];
+ 		if (reg->type != SCALAR_VALUE) {
+ 			WARN_ONCE(1, "backtracing misuse");
+ 			return -EFAULT;
+ 		}
+-		new_marks = true;
++		bt_set_reg(bt, regno);
+ 	}
+ 
+-	while (spi >= 0) {
+-		if (!is_spilled_reg(&func->stack[spi])) {
+-			stack_mask = 0;
+-			break;
+-		}
+-		reg = &func->stack[spi].spilled_ptr;
+-		if (reg->type != SCALAR_VALUE) {
+-			stack_mask = 0;
+-			break;
+-		}
+-		new_marks = true;
+-		break;
+-	}
+-
+-	if (!new_marks)
+-		return 0;
+-	if (!reg_mask && !stack_mask)
++	if (bt_empty(bt))
+ 		return 0;
+ 
+ 	for (;;) {
+ 		DECLARE_BITMAP(mask, 64);
+ 		u32 history = st->jmp_history_cnt;
+ 
+-		if (env->log.level & BPF_LOG_LEVEL2)
+-			verbose(env, "last_idx %d first_idx %d\n", last_idx, first_idx);
++		if (env->log.level & BPF_LOG_LEVEL2) {
++			verbose(env, "mark_precise: frame%d: last_idx %d first_idx %d\n",
++				bt->frame, last_idx, first_idx);
++		}
++
++		/* If some register with scalar ID is marked as precise,
++		 * make sure that all registers sharing this ID are also precise.
++		 * This is needed to estimate effect of find_equal_scalars().
++		 * Do this at the last instruction of each state,
++		 * bpf_reg_state::id fields are valid for these instructions.
++		 *
++		 * Allows to track precision in situation like below:
++		 *
++		 *     r2 = unknown value
++		 *     ...
++		 *   --- state #0 ---
++		 *     ...
++		 *     r1 = r2                 // r1 and r2 now share the same ID
++		 *     ...
++		 *   --- state #1 {r1.id = A, r2.id = A} ---
++		 *     ...
++		 *     if (r2 > 10) goto exit; // find_equal_scalars() assigns range to r1
++		 *     ...
++		 *   --- state #2 {r1.id = A, r2.id = A} ---
++		 *     r3 = r10
++		 *     r3 += r1                // need to mark both r1 and r2
++		 */
++		if (mark_precise_scalar_ids(env, st))
++			return -EFAULT;
+ 
+ 		if (last_idx < 0) {
+ 			/* we are at the entry into subprog, which
+@@ -3571,12 +3857,13 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int frame, int r
+ 			if (st->curframe == 0 &&
+ 			    st->frame[0]->subprogno > 0 &&
+ 			    st->frame[0]->callsite == BPF_MAIN_FUNC &&
+-			    stack_mask == 0 && (reg_mask & ~0x3e) == 0) {
+-				bitmap_from_u64(mask, reg_mask);
++			    bt_stack_mask(bt) == 0 &&
++			    (bt_reg_mask(bt) & ~BPF_REGMASK_ARGS) == 0) {
++				bitmap_from_u64(mask, bt_reg_mask(bt));
+ 				for_each_set_bit(i, mask, 32) {
+ 					reg = &st->frame[0]->regs[i];
+ 					if (reg->type != SCALAR_VALUE) {
+-						reg_mask &= ~(1u << i);
++						bt_clear_reg(bt, i);
+ 						continue;
+ 					}
+ 					reg->precise = true;
+@@ -3584,8 +3871,8 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int frame, int r
+ 				return 0;
+ 			}
+ 
+-			verbose(env, "BUG backtracing func entry subprog %d reg_mask %x stack_mask %llx\n",
+-				st->frame[0]->subprogno, reg_mask, stack_mask);
++			verbose(env, "BUG backtracking func entry subprog %d reg_mask %x stack_mask %llx\n",
++				st->frame[0]->subprogno, bt_reg_mask(bt), bt_stack_mask(bt));
+ 			WARN_ONCE(1, "verifier backtracking bug");
+ 			return -EFAULT;
+ 		}
+@@ -3595,15 +3882,16 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int frame, int r
+ 				err = 0;
+ 				skip_first = false;
+ 			} else {
+-				err = backtrack_insn(env, i, &reg_mask, &stack_mask);
++				err = backtrack_insn(env, i, bt);
+ 			}
+ 			if (err == -ENOTSUPP) {
+ 				mark_all_scalars_precise(env, st);
++				bt_reset(bt);
+ 				return 0;
+ 			} else if (err) {
+ 				return err;
+ 			}
+-			if (!reg_mask && !stack_mask)
++			if (bt_empty(bt))
+ 				/* Found assignment(s) into tracked register in this state.
+ 				 * Since this state is already marked, just return.
+ 				 * Nothing to be tracked further in the parent state.
+@@ -3628,63 +3916,65 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int frame, int r
+ 		if (!st)
+ 			break;
+ 
+-		new_marks = false;
+-		func = st->frame[frame];
+-		bitmap_from_u64(mask, reg_mask);
+-		for_each_set_bit(i, mask, 32) {
+-			reg = &func->regs[i];
+-			if (reg->type != SCALAR_VALUE) {
+-				reg_mask &= ~(1u << i);
+-				continue;
++		for (fr = bt->frame; fr >= 0; fr--) {
++			func = st->frame[fr];
++			bitmap_from_u64(mask, bt_frame_reg_mask(bt, fr));
++			for_each_set_bit(i, mask, 32) {
++				reg = &func->regs[i];
++				if (reg->type != SCALAR_VALUE) {
++					bt_clear_frame_reg(bt, fr, i);
++					continue;
++				}
++				if (reg->precise)
++					bt_clear_frame_reg(bt, fr, i);
++				else
++					reg->precise = true;
+ 			}
+-			if (!reg->precise)
+-				new_marks = true;
+-			reg->precise = true;
+-		}
+ 
+-		bitmap_from_u64(mask, stack_mask);
+-		for_each_set_bit(i, mask, 64) {
+-			if (i >= func->allocated_stack / BPF_REG_SIZE) {
+-				/* the sequence of instructions:
+-				 * 2: (bf) r3 = r10
+-				 * 3: (7b) *(u64 *)(r3 -8) = r0
+-				 * 4: (79) r4 = *(u64 *)(r10 -8)
+-				 * doesn't contain jmps. It's backtracked
+-				 * as a single block.
+-				 * During backtracking insn 3 is not recognized as
+-				 * stack access, so at the end of backtracking
+-				 * stack slot fp-8 is still marked in stack_mask.
+-				 * However the parent state may not have accessed
+-				 * fp-8 and it's "unallocated" stack space.
+-				 * In such case fallback to conservative.
+-				 */
+-				mark_all_scalars_precise(env, st);
+-				return 0;
+-			}
++			bitmap_from_u64(mask, bt_frame_stack_mask(bt, fr));
++			for_each_set_bit(i, mask, 64) {
++				if (i >= func->allocated_stack / BPF_REG_SIZE) {
++					/* the sequence of instructions:
++					 * 2: (bf) r3 = r10
++					 * 3: (7b) *(u64 *)(r3 -8) = r0
++					 * 4: (79) r4 = *(u64 *)(r10 -8)
++					 * doesn't contain jmps. It's backtracked
++					 * as a single block.
++					 * During backtracking insn 3 is not recognized as
++					 * stack access, so at the end of backtracking
++					 * stack slot fp-8 is still marked in stack_mask.
++					 * However the parent state may not have accessed
++					 * fp-8 and it's "unallocated" stack space.
++					 * In such case fallback to conservative.
++					 */
++					mark_all_scalars_precise(env, st);
++					bt_reset(bt);
++					return 0;
++				}
+ 
+-			if (!is_spilled_reg(&func->stack[i])) {
+-				stack_mask &= ~(1ull << i);
+-				continue;
++				if (!is_spilled_scalar_reg(&func->stack[i])) {
++					bt_clear_frame_slot(bt, fr, i);
++					continue;
++				}
++				reg = &func->stack[i].spilled_ptr;
++				if (reg->precise)
++					bt_clear_frame_slot(bt, fr, i);
++				else
++					reg->precise = true;
+ 			}
+-			reg = &func->stack[i].spilled_ptr;
+-			if (reg->type != SCALAR_VALUE) {
+-				stack_mask &= ~(1ull << i);
+-				continue;
++			if (env->log.level & BPF_LOG_LEVEL2) {
++				fmt_reg_mask(env->tmp_str_buf, TMP_STR_BUF_LEN,
++					     bt_frame_reg_mask(bt, fr));
++				verbose(env, "mark_precise: frame%d: parent state regs=%s ",
++					fr, env->tmp_str_buf);
++				fmt_stack_mask(env->tmp_str_buf, TMP_STR_BUF_LEN,
++					       bt_frame_stack_mask(bt, fr));
++				verbose(env, "stack=%s: ", env->tmp_str_buf);
++				print_verifier_state(env, func, true);
+ 			}
+-			if (!reg->precise)
+-				new_marks = true;
+-			reg->precise = true;
+-		}
+-		if (env->log.level & BPF_LOG_LEVEL2) {
+-			verbose(env, "parent %s regs=%x stack=%llx marks:",
+-				new_marks ? "didn't have" : "already had",
+-				reg_mask, stack_mask);
+-			print_verifier_state(env, func, true);
+ 		}
+ 
+-		if (!reg_mask && !stack_mask)
+-			break;
+-		if (!new_marks)
++		if (bt_empty(bt))
+ 			break;
+ 
+ 		last_idx = st->last_insn_idx;
+@@ -3695,17 +3985,15 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int frame, int r
+ 
+ int mark_chain_precision(struct bpf_verifier_env *env, int regno)
+ {
+-	return __mark_chain_precision(env, env->cur_state->curframe, regno, -1);
+-}
+-
+-static int mark_chain_precision_frame(struct bpf_verifier_env *env, int frame, int regno)
+-{
+-	return __mark_chain_precision(env, frame, regno, -1);
++	return __mark_chain_precision(env, regno);
+ }
+ 
+-static int mark_chain_precision_stack_frame(struct bpf_verifier_env *env, int frame, int spi)
++/* mark_chain_precision_batch() assumes that env->bt is set in the caller to
++ * desired reg and stack masks across all relevant frames
++ */
++static int mark_chain_precision_batch(struct bpf_verifier_env *env)
+ {
+-	return __mark_chain_precision(env, frame, -1, spi);
++	return __mark_chain_precision(env, -1);
+ }
+ 
+ static bool is_spillable_regtype(enum bpf_reg_type type)
+@@ -9327,11 +9615,6 @@ static bool is_kfunc_acquire(struct bpf_kfunc_call_arg_meta *meta)
+ 	return meta->kfunc_flags & KF_ACQUIRE;
+ }
+ 
+-static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
+-{
+-	return meta->kfunc_flags & KF_RET_NULL;
+-}
+-
+ static bool is_kfunc_release(struct bpf_kfunc_call_arg_meta *meta)
+ {
+ 	return meta->kfunc_flags & KF_RELEASE;
+@@ -9639,6 +9922,16 @@ BTF_ID(func, bpf_dynptr_from_xdp)
+ BTF_ID(func, bpf_dynptr_slice)
+ BTF_ID(func, bpf_dynptr_slice_rdwr)
+ 
++static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
++{
++	if (meta->func_id == special_kfunc_list[KF_bpf_refcount_acquire_impl] &&
++	    meta->arg_owning_ref) {
++		return false;
++	}
++
++	return meta->kfunc_flags & KF_RET_NULL;
++}
++
+ static bool is_kfunc_bpf_rcu_read_lock(struct bpf_kfunc_call_arg_meta *meta)
+ {
+ 	return meta->func_id == special_kfunc_list[KF_bpf_rcu_read_lock];
+@@ -10116,6 +10409,8 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
+ 			node_off, btf_name_by_offset(reg->btf, t->name_off));
+ 		return -EINVAL;
+ 	}
++	meta->arg_btf = reg->btf;
++	meta->arg_btf_id = reg->btf_id;
+ 
+ 	if (node_off != field->graph_root.node_offset) {
+ 		verbose(env, "arg#1 offset=%d, but expected %s at offset=%d in struct %s\n",
+@@ -10326,8 +10621,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ 			}
+ 			if (meta->btf == btf_vmlinux &&
+ 			    meta->func_id == special_kfunc_list[KF_bpf_obj_drop_impl]) {
+-				meta->arg_obj_drop.btf = reg->btf;
+-				meta->arg_obj_drop.btf_id = reg->btf_id;
++				meta->arg_btf = reg->btf;
++				meta->arg_btf_id = reg->btf_id;
+ 			}
+ 			break;
+ 		case KF_ARG_PTR_TO_DYNPTR:
+@@ -10497,10 +10792,12 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ 			meta->subprogno = reg->subprogno;
+ 			break;
+ 		case KF_ARG_PTR_TO_REFCOUNTED_KPTR:
+-			if (!type_is_ptr_alloc_obj(reg->type) && !type_is_non_owning_ref(reg->type)) {
++			if (!type_is_ptr_alloc_obj(reg->type)) {
+ 				verbose(env, "arg#%d is neither owning or non-owning ref\n", i);
+ 				return -EINVAL;
+ 			}
++			if (!type_is_non_owning_ref(reg->type))
++				meta->arg_owning_ref = true;
+ 
+ 			rec = reg_btf_record(reg);
+ 			if (!rec) {
+@@ -10516,8 +10813,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ 				verbose(env, "bpf_refcount_acquire calls are disabled for now\n");
+ 				return -EINVAL;
+ 			}
+-			meta->arg_refcount_acquire.btf = reg->btf;
+-			meta->arg_refcount_acquire.btf_id = reg->btf_id;
++			meta->arg_btf = reg->btf;
++			meta->arg_btf_id = reg->btf_id;
+ 			break;
+ 		}
+ 	}
+@@ -10663,6 +10960,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+ 	    meta.func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
+ 		release_ref_obj_id = regs[BPF_REG_2].ref_obj_id;
+ 		insn_aux->insert_off = regs[BPF_REG_2].off;
++		insn_aux->kptr_struct_meta = btf_find_struct_meta(meta.arg_btf, meta.arg_btf_id);
+ 		err = ref_convert_owning_non_owning(env, release_ref_obj_id);
+ 		if (err) {
+ 			verbose(env, "kfunc %s#%d conversion of owning ref to non-owning failed\n",
+@@ -10749,12 +11047,12 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+ 			} else if (meta.func_id == special_kfunc_list[KF_bpf_refcount_acquire_impl]) {
+ 				mark_reg_known_zero(env, regs, BPF_REG_0);
+ 				regs[BPF_REG_0].type = PTR_TO_BTF_ID | MEM_ALLOC;
+-				regs[BPF_REG_0].btf = meta.arg_refcount_acquire.btf;
+-				regs[BPF_REG_0].btf_id = meta.arg_refcount_acquire.btf_id;
++				regs[BPF_REG_0].btf = meta.arg_btf;
++				regs[BPF_REG_0].btf_id = meta.arg_btf_id;
+ 
+ 				insn_aux->kptr_struct_meta =
+-					btf_find_struct_meta(meta.arg_refcount_acquire.btf,
+-							     meta.arg_refcount_acquire.btf_id);
++					btf_find_struct_meta(meta.arg_btf,
++							     meta.arg_btf_id);
+ 			} else if (meta.func_id == special_kfunc_list[KF_bpf_list_pop_front] ||
+ 				   meta.func_id == special_kfunc_list[KF_bpf_list_pop_back]) {
+ 				struct btf_field *field = meta.arg_list_head.field;
+@@ -10884,8 +11182,8 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+ 		if (meta.btf == btf_vmlinux && btf_id_set_contains(&special_kfunc_set, meta.func_id)) {
+ 			if (meta.func_id == special_kfunc_list[KF_bpf_obj_drop_impl]) {
+ 				insn_aux->kptr_struct_meta =
+-					btf_find_struct_meta(meta.arg_obj_drop.btf,
+-							     meta.arg_obj_drop.btf_id);
++					btf_find_struct_meta(meta.arg_btf,
++							     meta.arg_btf_id);
+ 			}
+ 		}
+ 	}
+@@ -12420,12 +12718,14 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
+ 		if (BPF_SRC(insn->code) == BPF_X) {
+ 			struct bpf_reg_state *src_reg = regs + insn->src_reg;
+ 			struct bpf_reg_state *dst_reg = regs + insn->dst_reg;
++			bool need_id = src_reg->type == SCALAR_VALUE && !src_reg->id &&
++				       !tnum_is_const(src_reg->var_off);
+ 
+ 			if (BPF_CLASS(insn->code) == BPF_ALU64) {
+ 				/* case: R1 = R2
+ 				 * copy register state to dest reg
+ 				 */
+-				if (src_reg->type == SCALAR_VALUE && !src_reg->id)
++				if (need_id)
+ 					/* Assign src and dst registers the same ID
+ 					 * that will be used by find_equal_scalars()
+ 					 * to propagate min/max range.
+@@ -12444,7 +12744,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
+ 				} else if (src_reg->type == SCALAR_VALUE) {
+ 					bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
+ 
+-					if (is_src_reg_u32 && !src_reg->id)
++					if (is_src_reg_u32 && need_id)
+ 						src_reg->id = ++env->id_gen;
+ 					copy_register_state(dst_reg, src_reg);
+ 					/* Make sure ID is cleared if src_reg is not in u32 range otherwise
+@@ -14600,8 +14900,9 @@ static bool range_within(struct bpf_reg_state *old,
+  * So we look through our idmap to see if this old id has been seen before.  If
+  * so, we require the new id to match; otherwise, we add the id pair to the map.
+  */
+-static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
++static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
+ {
++	struct bpf_id_pair *map = idmap->map;
+ 	unsigned int i;
+ 
+ 	/* either both IDs should be set or both should be zero */
+@@ -14612,20 +14913,34 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
+ 		return true;
+ 
+ 	for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
+-		if (!idmap[i].old) {
++		if (!map[i].old) {
+ 			/* Reached an empty slot; haven't seen this id before */
+-			idmap[i].old = old_id;
+-			idmap[i].cur = cur_id;
++			map[i].old = old_id;
++			map[i].cur = cur_id;
+ 			return true;
+ 		}
+-		if (idmap[i].old == old_id)
+-			return idmap[i].cur == cur_id;
++		if (map[i].old == old_id)
++			return map[i].cur == cur_id;
++		if (map[i].cur == cur_id)
++			return false;
+ 	}
+ 	/* We ran out of idmap slots, which should be impossible */
+ 	WARN_ON_ONCE(1);
+ 	return false;
+ }
+ 
++/* Similar to check_ids(), but allocate a unique temporary ID
++ * for 'old_id' or 'cur_id' of zero.
++ * This makes pairs like '0 vs unique ID', 'unique ID vs 0' valid.
++ */
++static bool check_scalar_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
++{
++	old_id = old_id ? old_id : ++idmap->tmp_id_gen;
++	cur_id = cur_id ? cur_id : ++idmap->tmp_id_gen;
++
++	return check_ids(old_id, cur_id, idmap);
++}
++
+ static void clean_func_state(struct bpf_verifier_env *env,
+ 			     struct bpf_func_state *st)
+ {
+@@ -14724,7 +15039,7 @@ next:
+ 
+ static bool regs_exact(const struct bpf_reg_state *rold,
+ 		       const struct bpf_reg_state *rcur,
+-		       struct bpf_id_pair *idmap)
++		       struct bpf_idmap *idmap)
+ {
+ 	return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
+ 	       check_ids(rold->id, rcur->id, idmap) &&
+@@ -14733,7 +15048,7 @@ static bool regs_exact(const struct bpf_reg_state *rold,
+ 
+ /* Returns true if (rold safe implies rcur safe) */
+ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
+-		    struct bpf_reg_state *rcur, struct bpf_id_pair *idmap)
++		    struct bpf_reg_state *rcur, struct bpf_idmap *idmap)
+ {
+ 	if (!(rold->live & REG_LIVE_READ))
+ 		/* explored state didn't use this */
+@@ -14770,15 +15085,42 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
+ 
+ 	switch (base_type(rold->type)) {
+ 	case SCALAR_VALUE:
+-		if (regs_exact(rold, rcur, idmap))
+-			return true;
+-		if (env->explore_alu_limits)
+-			return false;
++		if (env->explore_alu_limits) {
++			/* explore_alu_limits disables tnum_in() and range_within()
++			 * logic and requires everything to be strict
++			 */
++			return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
++			       check_scalar_ids(rold->id, rcur->id, idmap);
++		}
+ 		if (!rold->precise)
+ 			return true;
+-		/* new val must satisfy old val knowledge */
++		/* Why check_ids() for scalar registers?
++		 *
++		 * Consider the following BPF code:
++		 *   1: r6 = ... unbound scalar, ID=a ...
++		 *   2: r7 = ... unbound scalar, ID=b ...
++		 *   3: if (r6 > r7) goto +1
++		 *   4: r6 = r7
++		 *   5: if (r6 > X) goto ...
++		 *   6: ... memory operation using r7 ...
++		 *
++		 * First verification path is [1-6]:
++		 * - at (4) same bpf_reg_state::id (b) would be assigned to r6 and r7;
++		 * - at (5) r6 would be marked <= X, find_equal_scalars() would also mark
++		 *   r7 <= X, because r6 and r7 share same id.
++		 * Next verification path is [1-4, 6].
++		 *
++		 * Instruction (6) would be reached in two states:
++		 *   I.  r6{.id=b}, r7{.id=b} via path 1-6;
++		 *   II. r6{.id=a}, r7{.id=b} via path 1-4, 6.
++		 *
++		 * Use check_ids() to distinguish these states.
++		 * ---
++		 * Also verify that new value satisfies old value range knowledge.
++		 */
+ 		return range_within(rold, rcur) &&
+-		       tnum_in(rold->var_off, rcur->var_off);
++		       tnum_in(rold->var_off, rcur->var_off) &&
++		       check_scalar_ids(rold->id, rcur->id, idmap);
+ 	case PTR_TO_MAP_KEY:
+ 	case PTR_TO_MAP_VALUE:
+ 	case PTR_TO_MEM:
+@@ -14824,7 +15166,7 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
+ }
+ 
+ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
+-		      struct bpf_func_state *cur, struct bpf_id_pair *idmap)
++		      struct bpf_func_state *cur, struct bpf_idmap *idmap)
+ {
+ 	int i, spi;
+ 
+@@ -14927,7 +15269,7 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
+ }
+ 
+ static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur,
+-		    struct bpf_id_pair *idmap)
++		    struct bpf_idmap *idmap)
+ {
+ 	int i;
+ 
+@@ -14975,13 +15317,13 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_stat
+ 
+ 	for (i = 0; i < MAX_BPF_REG; i++)
+ 		if (!regsafe(env, &old->regs[i], &cur->regs[i],
+-			     env->idmap_scratch))
++			     &env->idmap_scratch))
+ 			return false;
+ 
+-	if (!stacksafe(env, old, cur, env->idmap_scratch))
++	if (!stacksafe(env, old, cur, &env->idmap_scratch))
+ 		return false;
+ 
+-	if (!refsafe(old, cur, env->idmap_scratch))
++	if (!refsafe(old, cur, &env->idmap_scratch))
+ 		return false;
+ 
+ 	return true;
+@@ -14996,7 +15338,8 @@ static bool states_equal(struct bpf_verifier_env *env,
+ 	if (old->curframe != cur->curframe)
+ 		return false;
+ 
+-	memset(env->idmap_scratch, 0, sizeof(env->idmap_scratch));
++	env->idmap_scratch.tmp_id_gen = env->id_gen;
++	memset(&env->idmap_scratch.map, 0, sizeof(env->idmap_scratch.map));
+ 
+ 	/* Verification state from speculative execution simulation
+ 	 * must never prune a non-speculative execution one.
+@@ -15014,7 +15357,7 @@ static bool states_equal(struct bpf_verifier_env *env,
+ 		return false;
+ 
+ 	if (old->active_lock.id &&
+-	    !check_ids(old->active_lock.id, cur->active_lock.id, env->idmap_scratch))
++	    !check_ids(old->active_lock.id, cur->active_lock.id, &env->idmap_scratch))
+ 		return false;
+ 
+ 	if (old->active_rcu_lock != cur->active_rcu_lock)
+@@ -15121,20 +15464,25 @@ static int propagate_precision(struct bpf_verifier_env *env,
+ 	struct bpf_reg_state *state_reg;
+ 	struct bpf_func_state *state;
+ 	int i, err = 0, fr;
++	bool first;
+ 
+ 	for (fr = old->curframe; fr >= 0; fr--) {
+ 		state = old->frame[fr];
+ 		state_reg = state->regs;
++		first = true;
+ 		for (i = 0; i < BPF_REG_FP; i++, state_reg++) {
+ 			if (state_reg->type != SCALAR_VALUE ||
+ 			    !state_reg->precise ||
+ 			    !(state_reg->live & REG_LIVE_READ))
+ 				continue;
+-			if (env->log.level & BPF_LOG_LEVEL2)
+-				verbose(env, "frame %d: propagating r%d\n", fr, i);
+-			err = mark_chain_precision_frame(env, fr, i);
+-			if (err < 0)
+-				return err;
++			if (env->log.level & BPF_LOG_LEVEL2) {
++				if (first)
++					verbose(env, "frame %d: propagating r%d", fr, i);
++				else
++					verbose(env, ",r%d", i);
++			}
++			bt_set_frame_reg(&env->bt, fr, i);
++			first = false;
+ 		}
+ 
+ 		for (i = 0; i < state->allocated_stack / BPF_REG_SIZE; i++) {
+@@ -15145,14 +15493,24 @@ static int propagate_precision(struct bpf_verifier_env *env,
+ 			    !state_reg->precise ||
+ 			    !(state_reg->live & REG_LIVE_READ))
+ 				continue;
+-			if (env->log.level & BPF_LOG_LEVEL2)
+-				verbose(env, "frame %d: propagating fp%d\n",
+-					fr, (-i - 1) * BPF_REG_SIZE);
+-			err = mark_chain_precision_stack_frame(env, fr, i);
+-			if (err < 0)
+-				return err;
++			if (env->log.level & BPF_LOG_LEVEL2) {
++				if (first)
++					verbose(env, "frame %d: propagating fp%d",
++						fr, (-i - 1) * BPF_REG_SIZE);
++				else
++					verbose(env, ",fp%d", (-i - 1) * BPF_REG_SIZE);
++			}
++			bt_set_frame_slot(&env->bt, fr, i);
++			first = false;
+ 		}
++		if (!first)
++			verbose(env, "\n");
+ 	}
++
++	err = mark_chain_precision_batch(env);
++	if (err < 0)
++		return err;
++
+ 	return 0;
+ }
+ 
+@@ -18812,6 +19170,8 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
+ 	if (!env)
+ 		return -ENOMEM;
+ 
++	env->bt.env = env;
++
+ 	len = (*prog)->len;
+ 	env->insn_aux_data =
+ 		vzalloc(array_size(sizeof(struct bpf_insn_aux_data), len));
+diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
+index 5a60cc52adc0c..8a7baf4e332e3 100644
+--- a/kernel/kcsan/core.c
++++ b/kernel/kcsan/core.c
+@@ -1270,7 +1270,9 @@ static __always_inline void kcsan_atomic_builtin_memorder(int memorder)
+ DEFINE_TSAN_ATOMIC_OPS(8);
+ DEFINE_TSAN_ATOMIC_OPS(16);
+ DEFINE_TSAN_ATOMIC_OPS(32);
++#ifdef CONFIG_64BIT
+ DEFINE_TSAN_ATOMIC_OPS(64);
++#endif
+ 
+ void __tsan_atomic_thread_fence(int memorder);
+ void __tsan_atomic_thread_fence(int memorder)
+diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
+index 3d578c6fefee3..22acee18195a5 100644
+--- a/kernel/kexec_core.c
++++ b/kernel/kexec_core.c
+@@ -1122,6 +1122,7 @@ int crash_shrink_memory(unsigned long new_size)
+ 	start = crashk_res.start;
+ 	end = crashk_res.end;
+ 	old_size = (end == 0) ? 0 : end - start + 1;
++	new_size = roundup(new_size, KEXEC_CRASH_MEM_ALIGN);
+ 	if (new_size >= old_size) {
+ 		ret = (new_size == old_size) ? 0 : -EINVAL;
+ 		goto unlock;
+@@ -1133,9 +1134,7 @@ int crash_shrink_memory(unsigned long new_size)
+ 		goto unlock;
+ 	}
+ 
+-	start = roundup(start, KEXEC_CRASH_MEM_ALIGN);
+-	end = roundup(start + new_size, KEXEC_CRASH_MEM_ALIGN);
+-
++	end = start + new_size;
+ 	crash_free_reserved_phys_range(end, crashk_res.end);
+ 
+ 	if ((start == end) && (crashk_res.parent != NULL))
+diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
+index 4a1b9622598be..98c1544cf572b 100644
+--- a/kernel/rcu/rcu.h
++++ b/kernel/rcu/rcu.h
+@@ -642,4 +642,10 @@ void show_rcu_tasks_trace_gp_kthread(void);
+ static inline void show_rcu_tasks_trace_gp_kthread(void) {}
+ #endif
+ 
++#ifdef CONFIG_TINY_RCU
++static inline bool rcu_cpu_beenfullyonline(int cpu) { return true; }
++#else
++bool rcu_cpu_beenfullyonline(int cpu);
++#endif
++
+ #endif /* __LINUX_RCU_H */
+diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c
+index e82ec9f9a5d80..d1221731c7cfd 100644
+--- a/kernel/rcu/rcuscale.c
++++ b/kernel/rcu/rcuscale.c
+@@ -522,89 +522,6 @@ rcu_scale_print_module_parms(struct rcu_scale_ops *cur_ops, const char *tag)
+ 		 scale_type, tag, nrealreaders, nrealwriters, verbose, shutdown);
+ }
+ 
+-static void
+-rcu_scale_cleanup(void)
+-{
+-	int i;
+-	int j;
+-	int ngps = 0;
+-	u64 *wdp;
+-	u64 *wdpp;
+-
+-	/*
+-	 * Would like warning at start, but everything is expedited
+-	 * during the mid-boot phase, so have to wait till the end.
+-	 */
+-	if (rcu_gp_is_expedited() && !rcu_gp_is_normal() && !gp_exp)
+-		SCALEOUT_ERRSTRING("All grace periods expedited, no normal ones to measure!");
+-	if (rcu_gp_is_normal() && gp_exp)
+-		SCALEOUT_ERRSTRING("All grace periods normal, no expedited ones to measure!");
+-	if (gp_exp && gp_async)
+-		SCALEOUT_ERRSTRING("No expedited async GPs, so went with async!");
+-
+-	if (torture_cleanup_begin())
+-		return;
+-	if (!cur_ops) {
+-		torture_cleanup_end();
+-		return;
+-	}
+-
+-	if (reader_tasks) {
+-		for (i = 0; i < nrealreaders; i++)
+-			torture_stop_kthread(rcu_scale_reader,
+-					     reader_tasks[i]);
+-		kfree(reader_tasks);
+-	}
+-
+-	if (writer_tasks) {
+-		for (i = 0; i < nrealwriters; i++) {
+-			torture_stop_kthread(rcu_scale_writer,
+-					     writer_tasks[i]);
+-			if (!writer_n_durations)
+-				continue;
+-			j = writer_n_durations[i];
+-			pr_alert("%s%s writer %d gps: %d\n",
+-				 scale_type, SCALE_FLAG, i, j);
+-			ngps += j;
+-		}
+-		pr_alert("%s%s start: %llu end: %llu duration: %llu gps: %d batches: %ld\n",
+-			 scale_type, SCALE_FLAG,
+-			 t_rcu_scale_writer_started, t_rcu_scale_writer_finished,
+-			 t_rcu_scale_writer_finished -
+-			 t_rcu_scale_writer_started,
+-			 ngps,
+-			 rcuscale_seq_diff(b_rcu_gp_test_finished,
+-					   b_rcu_gp_test_started));
+-		for (i = 0; i < nrealwriters; i++) {
+-			if (!writer_durations)
+-				break;
+-			if (!writer_n_durations)
+-				continue;
+-			wdpp = writer_durations[i];
+-			if (!wdpp)
+-				continue;
+-			for (j = 0; j < writer_n_durations[i]; j++) {
+-				wdp = &wdpp[j];
+-				pr_alert("%s%s %4d writer-duration: %5d %llu\n",
+-					scale_type, SCALE_FLAG,
+-					i, j, *wdp);
+-				if (j % 100 == 0)
+-					schedule_timeout_uninterruptible(1);
+-			}
+-			kfree(writer_durations[i]);
+-		}
+-		kfree(writer_tasks);
+-		kfree(writer_durations);
+-		kfree(writer_n_durations);
+-	}
+-
+-	/* Do torture-type-specific cleanup operations.  */
+-	if (cur_ops->cleanup != NULL)
+-		cur_ops->cleanup();
+-
+-	torture_cleanup_end();
+-}
+-
+ /*
+  * Return the number if non-negative.  If -1, the number of CPUs.
+  * If less than -1, that much less than the number of CPUs, but
+@@ -624,20 +541,6 @@ static int compute_real(int n)
+ 	return nr;
+ }
+ 
+-/*
+- * RCU scalability shutdown kthread.  Just waits to be awakened, then shuts
+- * down system.
+- */
+-static int
+-rcu_scale_shutdown(void *arg)
+-{
+-	wait_event_idle(shutdown_wq, atomic_read(&n_rcu_scale_writer_finished) >= nrealwriters);
+-	smp_mb(); /* Wake before output. */
+-	rcu_scale_cleanup();
+-	kernel_power_off();
+-	return -EINVAL;
+-}
+-
+ /*
+  * kfree_rcu() scalability tests: Start a kfree_rcu() loop on all CPUs for number
+  * of iterations and measure total time and number of GP for all iterations to complete.
+@@ -874,6 +777,108 @@ unwind:
+ 	return firsterr;
+ }
+ 
++static void
++rcu_scale_cleanup(void)
++{
++	int i;
++	int j;
++	int ngps = 0;
++	u64 *wdp;
++	u64 *wdpp;
++
++	/*
++	 * Would like warning at start, but everything is expedited
++	 * during the mid-boot phase, so have to wait till the end.
++	 */
++	if (rcu_gp_is_expedited() && !rcu_gp_is_normal() && !gp_exp)
++		SCALEOUT_ERRSTRING("All grace periods expedited, no normal ones to measure!");
++	if (rcu_gp_is_normal() && gp_exp)
++		SCALEOUT_ERRSTRING("All grace periods normal, no expedited ones to measure!");
++	if (gp_exp && gp_async)
++		SCALEOUT_ERRSTRING("No expedited async GPs, so went with async!");
++
++	if (kfree_rcu_test) {
++		kfree_scale_cleanup();
++		return;
++	}
++
++	if (torture_cleanup_begin())
++		return;
++	if (!cur_ops) {
++		torture_cleanup_end();
++		return;
++	}
++
++	if (reader_tasks) {
++		for (i = 0; i < nrealreaders; i++)
++			torture_stop_kthread(rcu_scale_reader,
++					     reader_tasks[i]);
++		kfree(reader_tasks);
++	}
++
++	if (writer_tasks) {
++		for (i = 0; i < nrealwriters; i++) {
++			torture_stop_kthread(rcu_scale_writer,
++					     writer_tasks[i]);
++			if (!writer_n_durations)
++				continue;
++			j = writer_n_durations[i];
++			pr_alert("%s%s writer %d gps: %d\n",
++				 scale_type, SCALE_FLAG, i, j);
++			ngps += j;
++		}
++		pr_alert("%s%s start: %llu end: %llu duration: %llu gps: %d batches: %ld\n",
++			 scale_type, SCALE_FLAG,
++			 t_rcu_scale_writer_started, t_rcu_scale_writer_finished,
++			 t_rcu_scale_writer_finished -
++			 t_rcu_scale_writer_started,
++			 ngps,
++			 rcuscale_seq_diff(b_rcu_gp_test_finished,
++					   b_rcu_gp_test_started));
++		for (i = 0; i < nrealwriters; i++) {
++			if (!writer_durations)
++				break;
++			if (!writer_n_durations)
++				continue;
++			wdpp = writer_durations[i];
++			if (!wdpp)
++				continue;
++			for (j = 0; j < writer_n_durations[i]; j++) {
++				wdp = &wdpp[j];
++				pr_alert("%s%s %4d writer-duration: %5d %llu\n",
++					scale_type, SCALE_FLAG,
++					i, j, *wdp);
++				if (j % 100 == 0)
++					schedule_timeout_uninterruptible(1);
++			}
++			kfree(writer_durations[i]);
++		}
++		kfree(writer_tasks);
++		kfree(writer_durations);
++		kfree(writer_n_durations);
++	}
++
++	/* Do torture-type-specific cleanup operations.  */
++	if (cur_ops->cleanup != NULL)
++		cur_ops->cleanup();
++
++	torture_cleanup_end();
++}
++
++/*
++ * RCU scalability shutdown kthread.  Just waits to be awakened, then shuts
++ * down the system.
++ */
++static int
++rcu_scale_shutdown(void *arg)
++{
++	wait_event_idle(shutdown_wq, atomic_read(&n_rcu_scale_writer_finished) >= nrealwriters);
++	smp_mb(); /* Wake before output. */
++	rcu_scale_cleanup();
++	kernel_power_off();
++	return -EINVAL;
++}
++
+ static int __init
+ rcu_scale_init(void)
+ {
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index 5f4fc8184dd0b..8f08c087142b0 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -463,6 +463,7 @@ static void rcu_tasks_invoke_cbs(struct rcu_tasks *rtp, struct rcu_tasks_percpu
+ {
+ 	int cpu;
+ 	int cpunext;
++	int cpuwq;
+ 	unsigned long flags;
+ 	int len;
+ 	struct rcu_head *rhp;
+@@ -473,11 +474,13 @@ static void rcu_tasks_invoke_cbs(struct rcu_tasks *rtp, struct rcu_tasks_percpu
+ 	cpunext = cpu * 2 + 1;
+ 	if (cpunext < smp_load_acquire(&rtp->percpu_dequeue_lim)) {
+ 		rtpcp_next = per_cpu_ptr(rtp->rtpcpu, cpunext);
+-		queue_work_on(cpunext, system_wq, &rtpcp_next->rtp_work);
++		cpuwq = rcu_cpu_beenfullyonline(cpunext) ? cpunext : WORK_CPU_UNBOUND;
++		queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work);
+ 		cpunext++;
+ 		if (cpunext < smp_load_acquire(&rtp->percpu_dequeue_lim)) {
+ 			rtpcp_next = per_cpu_ptr(rtp->rtpcpu, cpunext);
+-			queue_work_on(cpunext, system_wq, &rtpcp_next->rtp_work);
++			cpuwq = rcu_cpu_beenfullyonline(cpunext) ? cpunext : WORK_CPU_UNBOUND;
++			queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work);
+ 		}
+ 	}
+ 
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index f52ff72410416..ce51f85f0d5e4 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -4283,7 +4283,6 @@ int rcutree_prepare_cpu(unsigned int cpu)
+ 	 */
+ 	rnp = rdp->mynode;
+ 	raw_spin_lock_rcu_node(rnp);		/* irqs already disabled. */
+-	rdp->beenonline = true;	 /* We have now been online. */
+ 	rdp->gp_seq = READ_ONCE(rnp->gp_seq);
+ 	rdp->gp_seq_needed = rdp->gp_seq;
+ 	rdp->cpu_no_qs.b.norm = true;
+@@ -4310,6 +4309,16 @@ static void rcutree_affinity_setting(unsigned int cpu, int outgoing)
+ 	rcu_boost_kthread_setaffinity(rdp->mynode, outgoing);
+ }
+ 
++/*
++ * Has the specified (known valid) CPU ever been fully online?
++ */
++bool rcu_cpu_beenfullyonline(int cpu)
++{
++	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
++
++	return smp_load_acquire(&rdp->beenonline);
++}
++
+ /*
+  * Near the end of the CPU-online process.  Pretty much all services
+  * enabled, and the CPU is now very much alive.
+@@ -4368,15 +4377,16 @@ int rcutree_offline_cpu(unsigned int cpu)
+  * Note that this function is special in that it is invoked directly
+  * from the incoming CPU rather than from the cpuhp_step mechanism.
+  * This is because this function must be invoked at a precise location.
++ * This incoming CPU must not have enabled interrupts yet.
+  */
+ void rcu_cpu_starting(unsigned int cpu)
+ {
+-	unsigned long flags;
+ 	unsigned long mask;
+ 	struct rcu_data *rdp;
+ 	struct rcu_node *rnp;
+ 	bool newcpu;
+ 
++	lockdep_assert_irqs_disabled();
+ 	rdp = per_cpu_ptr(&rcu_data, cpu);
+ 	if (rdp->cpu_started)
+ 		return;
+@@ -4384,7 +4394,6 @@ void rcu_cpu_starting(unsigned int cpu)
+ 
+ 	rnp = rdp->mynode;
+ 	mask = rdp->grpmask;
+-	local_irq_save(flags);
+ 	arch_spin_lock(&rcu_state.ofl_lock);
+ 	rcu_dynticks_eqs_online();
+ 	raw_spin_lock(&rcu_state.barrier_lock);
+@@ -4403,17 +4412,17 @@ void rcu_cpu_starting(unsigned int cpu)
+ 	/* An incoming CPU should never be blocking a grace period. */
+ 	if (WARN_ON_ONCE(rnp->qsmask & mask)) { /* RCU waiting on incoming CPU? */
+ 		/* rcu_report_qs_rnp() *really* wants some flags to restore */
+-		unsigned long flags2;
++		unsigned long flags;
+ 
+-		local_irq_save(flags2);
++		local_irq_save(flags);
+ 		rcu_disable_urgency_upon_qs(rdp);
+ 		/* Report QS -after- changing ->qsmaskinitnext! */
+-		rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags2);
++		rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags);
+ 	} else {
+ 		raw_spin_unlock_rcu_node(rnp);
+ 	}
+ 	arch_spin_unlock(&rcu_state.ofl_lock);
+-	local_irq_restore(flags);
++	smp_store_release(&rdp->beenonline, true);
+ 	smp_mb(); /* Ensure RCU read-side usage follows above initialization. */
+ }
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 373ff5f558844..4da5f35417626 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -5576,6 +5576,14 @@ static void __cfsb_csd_unthrottle(void *arg)
+ 
+ 	rq_lock(rq, &rf);
+ 
++	/*
++	 * Iterating over the list can trigger several calls to
++	 * update_rq_clock() in unthrottle_cfs_rq().
++	 * Do it once and skip the potential next ones.
++	 */
++	update_rq_clock(rq);
++	rq_clock_start_loop_update(rq);
++
+ 	/*
+ 	 * Since we hold rq lock we're safe from concurrent manipulation of
+ 	 * the CSD list. However, this RCU critical section annotates the
+@@ -5595,6 +5603,7 @@ static void __cfsb_csd_unthrottle(void *arg)
+ 
+ 	rcu_read_unlock();
+ 
++	rq_clock_stop_loop_update(rq);
+ 	rq_unlock(rq, &rf);
+ }
+ 
+@@ -6115,6 +6124,13 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
+ 
+ 	lockdep_assert_rq_held(rq);
+ 
++	/*
++	 * The rq clock has already been updated in
++	 * set_rq_offline(), so we should skip updating
++	 * the rq clock again in unthrottle_cfs_rq().
++	 */
++	rq_clock_start_loop_update(rq);
++
+ 	rcu_read_lock();
+ 	list_for_each_entry_rcu(tg, &task_groups, list) {
+ 		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
+@@ -6137,6 +6153,8 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
+ 			unthrottle_cfs_rq(cfs_rq);
+ 	}
+ 	rcu_read_unlock();
++
++	rq_clock_stop_loop_update(rq);
+ }
+ 
+ #else /* CONFIG_CFS_BANDWIDTH */
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index ec7b3e0a2b207..81ac605b9cd5c 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1546,6 +1546,28 @@ static inline void rq_clock_cancel_skipupdate(struct rq *rq)
+ 	rq->clock_update_flags &= ~RQCF_REQ_SKIP;
+ }
+ 
++/*
++ * During cpu offlining and rq wide unthrottling, we can trigger
++ * an update_rq_clock() for several cfs and rt runqueues (typically
++ * when using list_for_each_entry_*).
++ * rq_clock_start_loop_update() can be called after updating the clock
++ * once and before iterating over the list to prevent multiple updates.
++ * After the iterative traversal, we need to call rq_clock_stop_loop_update()
++ * to clear RQCF_ACT_SKIP of rq->clock_update_flags.
++ */
++static inline void rq_clock_start_loop_update(struct rq *rq)
++{
++	lockdep_assert_rq_held(rq);
++	SCHED_WARN_ON(rq->clock_update_flags & RQCF_ACT_SKIP);
++	rq->clock_update_flags |= RQCF_ACT_SKIP;
++}
++
++static inline void rq_clock_stop_loop_update(struct rq *rq)
++{
++	lockdep_assert_rq_held(rq);
++	rq->clock_update_flags &= ~RQCF_ACT_SKIP;
++}
++
+ struct rq_flags {
+ 	unsigned long flags;
+ 	struct pin_cookie cookie;
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index 808a247205a9a..ed3c4a9543982 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -1037,27 +1037,52 @@ retry_delete:
+ }
+ 
+ /*
+- * return timer owned by the process, used by exit_itimers
++ * Delete a timer if it is armed, remove it from the hash and schedule it
++ * for RCU freeing.
+  */
+ static void itimer_delete(struct k_itimer *timer)
+ {
+-retry_delete:
+-	spin_lock_irq(&timer->it_lock);
++	unsigned long flags;
++
++	/*
++	 * irqsave is required to make timer_wait_running() work.
++	 */
++	spin_lock_irqsave(&timer->it_lock, flags);
+ 
++retry_delete:
++	/*
++	 * Even if the timer is no longer accessible from other tasks,
++	 * it still might be armed and queued in the underlying timer
++	 * mechanism. Worse, that timer mechanism might run the expiry
++	 * function concurrently.
++	 */
+ 	if (timer_delete_hook(timer) == TIMER_RETRY) {
+-		spin_unlock_irq(&timer->it_lock);
++		/*
++		 * The timer expiry runs concurrently; prevent livelocks
++		 * and pointless spinning on RT.
++		 *
++		 * timer_wait_running() drops timer::it_lock, which opens
++		 * the possibility for another task to delete the timer.
++		 *
++		 * That's not possible here because this is invoked from
++		 * do_exit() only for the last thread of the thread group.
++		 * So no other task can access and delete that timer.
++		 */
++		if (WARN_ON_ONCE(timer_wait_running(timer, &flags) != timer))
++			return;
++
+ 		goto retry_delete;
+ 	}
+ 	list_del(&timer->list);
+ 
+-	spin_unlock_irq(&timer->it_lock);
++	spin_unlock_irqrestore(&timer->it_lock, flags);
+ 	release_posix_timer(timer, IT_ID_SET);
+ }
+ 
+ /*
+- * This is called by do_exit or de_thread, only when nobody else can
+- * modify the signal->posix_timers list. Yet we need sighand->siglock
+- * to prevent the race with /proc/pid/timers.
++ * Invoked from do_exit() when the last thread of a thread group exits.
++ * At that point no other task can access the timers of the dying
++ * task anymore.
+  */
+ void exit_itimers(struct task_struct *tsk)
+ {
+@@ -1067,10 +1092,12 @@ void exit_itimers(struct task_struct *tsk)
+ 	if (list_empty(&tsk->signal->posix_timers))
+ 		return;
+ 
++	/* Protect against concurrent read via /proc/$PID/timers */
+ 	spin_lock_irq(&tsk->sighand->siglock);
+ 	list_replace_init(&tsk->signal->posix_timers, &timers);
+ 	spin_unlock_irq(&tsk->sighand->siglock);
+ 
++	/* The timers are no longer accessible via tsk::signal */
+ 	while (!list_empty(&timers)) {
+ 		tmr = list_first_entry(&timers, struct k_itimer, list);
+ 		itimer_delete(tmr);
+diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
+index 42c0be3080bde..4df14db4da490 100644
+--- a/kernel/time/tick-sched.c
++++ b/kernel/time/tick-sched.c
+@@ -1041,7 +1041,7 @@ static bool report_idle_softirq(void)
+ 			return false;
+ 	}
+ 
+-	if (ratelimit < 10)
++	if (ratelimit >= 10)
+ 		return false;
+ 
+ 	/* On RT, softirqs handling may be waiting on some lock */
+diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c
+index e91cb4c2833f1..d0b6b390ee423 100644
+--- a/kernel/watch_queue.c
++++ b/kernel/watch_queue.c
+@@ -42,7 +42,7 @@ MODULE_AUTHOR("Red Hat, Inc.");
+ static inline bool lock_wqueue(struct watch_queue *wqueue)
+ {
+ 	spin_lock_bh(&wqueue->lock);
+-	if (unlikely(wqueue->defunct)) {
++	if (unlikely(!wqueue->pipe)) {
+ 		spin_unlock_bh(&wqueue->lock);
+ 		return false;
+ 	}
+@@ -104,9 +104,6 @@ static bool post_one_notification(struct watch_queue *wqueue,
+ 	unsigned int head, tail, mask, note, offset, len;
+ 	bool done = false;
+ 
+-	if (!pipe)
+-		return false;
+-
+ 	spin_lock_irq(&pipe->rd_wait.lock);
+ 
+ 	mask = pipe->ring_size - 1;
+@@ -603,8 +600,11 @@ void watch_queue_clear(struct watch_queue *wqueue)
+ 	rcu_read_lock();
+ 	spin_lock_bh(&wqueue->lock);
+ 
+-	/* Prevent new notifications from being stored. */
+-	wqueue->defunct = true;
++	/*
++	 * This pipe can be freed by callers like free_pipe_info().
++	 * Removing this reference also prevents new notifications.
++	 */
++	wqueue->pipe = NULL;
+ 
+ 	while (!hlist_empty(&wqueue->watches)) {
+ 		watch = hlist_entry(wqueue->watches.first, struct watch, queue_node);
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 8e61f21e7e33e..6b1754e8b6e96 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -30,19 +30,17 @@
+ static DEFINE_MUTEX(watchdog_mutex);
+ 
+ #if defined(CONFIG_HARDLOCKUP_DETECTOR) || defined(CONFIG_HAVE_NMI_WATCHDOG)
+-# define WATCHDOG_DEFAULT	(SOFT_WATCHDOG_ENABLED | NMI_WATCHDOG_ENABLED)
+-# define NMI_WATCHDOG_DEFAULT	1
++# define WATCHDOG_HARDLOCKUP_DEFAULT	1
+ #else
+-# define WATCHDOG_DEFAULT	(SOFT_WATCHDOG_ENABLED)
+-# define NMI_WATCHDOG_DEFAULT	0
++# define WATCHDOG_HARDLOCKUP_DEFAULT	0
+ #endif
+ 
+ unsigned long __read_mostly watchdog_enabled;
+ int __read_mostly watchdog_user_enabled = 1;
+-int __read_mostly nmi_watchdog_user_enabled = NMI_WATCHDOG_DEFAULT;
+-int __read_mostly soft_watchdog_user_enabled = 1;
++static int __read_mostly watchdog_hardlockup_user_enabled = WATCHDOG_HARDLOCKUP_DEFAULT;
++static int __read_mostly watchdog_softlockup_user_enabled = 1;
+ int __read_mostly watchdog_thresh = 10;
+-static int __read_mostly nmi_watchdog_available;
++static int __read_mostly watchdog_hardlockup_available;
+ 
+ struct cpumask watchdog_cpumask __read_mostly;
+ unsigned long *watchdog_cpumask_bits = cpumask_bits(&watchdog_cpumask);
+@@ -68,7 +66,7 @@ unsigned int __read_mostly hardlockup_panic =
+  */
+ void __init hardlockup_detector_disable(void)
+ {
+-	nmi_watchdog_user_enabled = 0;
++	watchdog_hardlockup_user_enabled = 0;
+ }
+ 
+ static int __init hardlockup_panic_setup(char *str)
+@@ -78,54 +76,131 @@ static int __init hardlockup_panic_setup(char *str)
+ 	else if (!strncmp(str, "nopanic", 7))
+ 		hardlockup_panic = 0;
+ 	else if (!strncmp(str, "0", 1))
+-		nmi_watchdog_user_enabled = 0;
++		watchdog_hardlockup_user_enabled = 0;
+ 	else if (!strncmp(str, "1", 1))
+-		nmi_watchdog_user_enabled = 1;
++		watchdog_hardlockup_user_enabled = 1;
+ 	return 1;
+ }
+ __setup("nmi_watchdog=", hardlockup_panic_setup);
+ 
+ #endif /* CONFIG_HARDLOCKUP_DETECTOR */
+ 
++#if defined(CONFIG_HARDLOCKUP_DETECTOR_PERF)
++
++static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts);
++static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts_saved);
++static DEFINE_PER_CPU(bool, hard_watchdog_warn);
++static unsigned long hardlockup_allcpu_dumped;
++
++static bool is_hardlockup(void)
++{
++	unsigned long hrint = __this_cpu_read(hrtimer_interrupts);
++
++	if (__this_cpu_read(hrtimer_interrupts_saved) == hrint)
++		return true;
++
++	__this_cpu_write(hrtimer_interrupts_saved, hrint);
++	return false;
++}
++
++static void watchdog_hardlockup_kick(void)
++{
++	__this_cpu_inc(hrtimer_interrupts);
++}
++
++void watchdog_hardlockup_check(struct pt_regs *regs)
++{
++	/* Check for a hardlockup.
++	 * This is done by making sure our timer interrupt
++	 * is incrementing.  The timer interrupt should have
++	 * fired multiple times before we overflowed.  If it hasn't,
++	 * then this is a good indication the CPU is stuck.
++	 */
++	if (is_hardlockup()) {
++		int this_cpu = smp_processor_id();
++
++		/* only print hardlockups once */
++		if (__this_cpu_read(hard_watchdog_warn) == true)
++			return;
++
++		pr_emerg("Watchdog detected hard LOCKUP on cpu %d\n",
++			 this_cpu);
++		print_modules();
++		print_irqtrace_events(current);
++		if (regs)
++			show_regs(regs);
++		else
++			dump_stack();
++
++		/*
++		 * Perform all-CPU dump only once to avoid multiple hardlockups
++		 * generating interleaving traces
++		 */
++		if (sysctl_hardlockup_all_cpu_backtrace &&
++				!test_and_set_bit(0, &hardlockup_allcpu_dumped))
++			trigger_allbutself_cpu_backtrace();
++
++		if (hardlockup_panic)
++			nmi_panic(regs, "Hard LOCKUP");
++
++		__this_cpu_write(hard_watchdog_warn, true);
++		return;
++	}
++
++	__this_cpu_write(hard_watchdog_warn, false);
++	return;
++}
++
++#else /* CONFIG_HARDLOCKUP_DETECTOR_PERF */
++
++static inline void watchdog_hardlockup_kick(void) { }
++
++#endif /* !CONFIG_HARDLOCKUP_DETECTOR_PERF */
++
+ /*
+  * These functions can be overridden if an architecture implements its
+  * own hardlockup detector.
+  *
+- * watchdog_nmi_enable/disable can be implemented to start and stop when
++ * watchdog_hardlockup_enable/disable can be implemented to start and stop when
+  * softlockup watchdog start and stop. The arch must select the
+  * SOFTLOCKUP_DETECTOR Kconfig.
+  */
+-int __weak watchdog_nmi_enable(unsigned int cpu)
++void __weak watchdog_hardlockup_enable(unsigned int cpu)
+ {
+ 	hardlockup_detector_perf_enable();
+-	return 0;
+ }
+ 
+-void __weak watchdog_nmi_disable(unsigned int cpu)
++void __weak watchdog_hardlockup_disable(unsigned int cpu)
+ {
+ 	hardlockup_detector_perf_disable();
+ }
+ 
+-/* Return 0, if a NMI watchdog is available. Error code otherwise */
+-int __weak __init watchdog_nmi_probe(void)
++/*
++ * Watchdog-detector specific API.
++ *
++ * Return 0 when the hardlockup watchdog is available, a negative value otherwise.
++ * Note that the negative value means that a delayed probe might
++ * succeed later.
++ */
++int __weak __init watchdog_hardlockup_probe(void)
+ {
+ 	return hardlockup_detector_perf_init();
+ }
+ 
+ /**
+- * watchdog_nmi_stop - Stop the watchdog for reconfiguration
++ * watchdog_hardlockup_stop - Stop the watchdog for reconfiguration
+  *
+  * The reconfiguration steps are:
+- * watchdog_nmi_stop();
++ * watchdog_hardlockup_stop();
+  * update_variables();
+- * watchdog_nmi_start();
++ * watchdog_hardlockup_start();
+  */
+-void __weak watchdog_nmi_stop(void) { }
++void __weak watchdog_hardlockup_stop(void) { }
+ 
+ /**
+- * watchdog_nmi_start - Start the watchdog after reconfiguration
++ * watchdog_hardlockup_start - Start the watchdog after reconfiguration
+  *
+- * Counterpart to watchdog_nmi_stop().
++ * Counterpart to watchdog_hardlockup_stop().
+  *
+  * The following variables have been updated in update_variables() and
+  * contain the currently valid configuration:
+@@ -133,23 +208,23 @@ void __weak watchdog_nmi_stop(void) { }
+  * - watchdog_thresh
+  * - watchdog_cpumask
+  */
+-void __weak watchdog_nmi_start(void) { }
++void __weak watchdog_hardlockup_start(void) { }
+ 
+ /**
+  * lockup_detector_update_enable - Update the sysctl enable bit
+  *
+- * Caller needs to make sure that the NMI/perf watchdogs are off, so this
+- * can't race with watchdog_nmi_disable().
++ * Caller needs to make sure that the hard watchdogs are off, so this
++ * can't race with watchdog_hardlockup_disable().
+  */
+ static void lockup_detector_update_enable(void)
+ {
+ 	watchdog_enabled = 0;
+ 	if (!watchdog_user_enabled)
+ 		return;
+-	if (nmi_watchdog_available && nmi_watchdog_user_enabled)
+-		watchdog_enabled |= NMI_WATCHDOG_ENABLED;
+-	if (soft_watchdog_user_enabled)
+-		watchdog_enabled |= SOFT_WATCHDOG_ENABLED;
++	if (watchdog_hardlockup_available && watchdog_hardlockup_user_enabled)
++		watchdog_enabled |= WATCHDOG_HARDLOCKUP_ENABLED;
++	if (watchdog_softlockup_user_enabled)
++		watchdog_enabled |= WATCHDOG_SOFTOCKUP_ENABLED;
+ }
+ 
+ #ifdef CONFIG_SOFTLOCKUP_DETECTOR
+@@ -179,8 +254,6 @@ static DEFINE_PER_CPU(unsigned long, watchdog_touch_ts);
+ static DEFINE_PER_CPU(unsigned long, watchdog_report_ts);
+ static DEFINE_PER_CPU(struct hrtimer, watchdog_hrtimer);
+ static DEFINE_PER_CPU(bool, softlockup_touch_sync);
+-static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts);
+-static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts_saved);
+ static unsigned long soft_lockup_nmi_warn;
+ 
+ static int __init nowatchdog_setup(char *str)
+@@ -192,7 +265,7 @@ __setup("nowatchdog", nowatchdog_setup);
+ 
+ static int __init nosoftlockup_setup(char *str)
+ {
+-	soft_watchdog_user_enabled = 0;
++	watchdog_softlockup_user_enabled = 0;
+ 	return 1;
+ }
+ __setup("nosoftlockup", nosoftlockup_setup);
+@@ -306,7 +379,7 @@ static int is_softlockup(unsigned long touch_ts,
+ 			 unsigned long period_ts,
+ 			 unsigned long now)
+ {
+-	if ((watchdog_enabled & SOFT_WATCHDOG_ENABLED) && watchdog_thresh){
++	if ((watchdog_enabled & WATCHDOG_SOFTOCKUP_ENABLED) && watchdog_thresh) {
+ 		/* Warn about unreasonable delays. */
+ 		if (time_after(now, period_ts + get_softlockup_thresh()))
+ 			return now - touch_ts;
+@@ -315,22 +388,6 @@ static int is_softlockup(unsigned long touch_ts,
+ }
+ 
+ /* watchdog detector functions */
+-bool is_hardlockup(void)
+-{
+-	unsigned long hrint = __this_cpu_read(hrtimer_interrupts);
+-
+-	if (__this_cpu_read(hrtimer_interrupts_saved) == hrint)
+-		return true;
+-
+-	__this_cpu_write(hrtimer_interrupts_saved, hrint);
+-	return false;
+-}
+-
+-static void watchdog_interrupt_count(void)
+-{
+-	__this_cpu_inc(hrtimer_interrupts);
+-}
+-
+ static DEFINE_PER_CPU(struct completion, softlockup_completion);
+ static DEFINE_PER_CPU(struct cpu_stop_work, softlockup_stop_work);
+ 
+@@ -361,8 +418,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
+ 	if (!watchdog_enabled)
+ 		return HRTIMER_NORESTART;
+ 
+-	/* kick the hardlockup detector */
+-	watchdog_interrupt_count();
++	watchdog_hardlockup_kick();
+ 
+ 	/* kick the softlockup detector */
+ 	if (completion_done(this_cpu_ptr(&softlockup_completion))) {
+@@ -458,7 +514,7 @@ static void watchdog_enable(unsigned int cpu)
+ 	complete(done);
+ 
+ 	/*
+-	 * Start the timer first to prevent the NMI watchdog triggering
++	 * Start the timer first to prevent the hardlockup watchdog triggering
+ 	 * before the timer has a chance to fire.
+ 	 */
+ 	hrtimer_init(hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
+@@ -468,9 +524,9 @@ static void watchdog_enable(unsigned int cpu)
+ 
+ 	/* Initialize timestamp */
+ 	update_touch_ts();
+-	/* Enable the perf event */
+-	if (watchdog_enabled & NMI_WATCHDOG_ENABLED)
+-		watchdog_nmi_enable(cpu);
++	/* Enable the hardlockup detector */
++	if (watchdog_enabled & WATCHDOG_HARDLOCKUP_ENABLED)
++		watchdog_hardlockup_enable(cpu);
+ }
+ 
+ static void watchdog_disable(unsigned int cpu)
+@@ -480,11 +536,11 @@ static void watchdog_disable(unsigned int cpu)
+ 	WARN_ON_ONCE(cpu != smp_processor_id());
+ 
+ 	/*
+-	 * Disable the perf event first. That prevents that a large delay
+-	 * between disabling the timer and disabling the perf event causes
+-	 * the perf NMI to detect a false positive.
++	 * Disable the hardlockup detector first. That prevents that a large
++	 * delay between disabling the timer and disabling the hardlockup
++	 * detector causes a false positive.
+ 	 */
+-	watchdog_nmi_disable(cpu);
++	watchdog_hardlockup_disable(cpu);
+ 	hrtimer_cancel(hrtimer);
+ 	wait_for_completion(this_cpu_ptr(&softlockup_completion));
+ }
+@@ -540,7 +596,7 @@ int lockup_detector_offline_cpu(unsigned int cpu)
+ static void __lockup_detector_reconfigure(void)
+ {
+ 	cpus_read_lock();
+-	watchdog_nmi_stop();
++	watchdog_hardlockup_stop();
+ 
+ 	softlockup_stop_all();
+ 	set_sample_period();
+@@ -548,7 +604,7 @@ static void __lockup_detector_reconfigure(void)
+ 	if (watchdog_enabled && watchdog_thresh)
+ 		softlockup_start_all();
+ 
+-	watchdog_nmi_start();
++	watchdog_hardlockup_start();
+ 	cpus_read_unlock();
+ 	/*
+ 	 * Must be called outside the cpus locked section to prevent
+@@ -589,9 +645,9 @@ static __init void lockup_detector_setup(void)
+ static void __lockup_detector_reconfigure(void)
+ {
+ 	cpus_read_lock();
+-	watchdog_nmi_stop();
++	watchdog_hardlockup_stop();
+ 	lockup_detector_update_enable();
+-	watchdog_nmi_start();
++	watchdog_hardlockup_start();
+ 	cpus_read_unlock();
+ }
+ void lockup_detector_reconfigure(void)
+@@ -646,14 +702,14 @@ static void proc_watchdog_update(void)
+ /*
+  * common function for watchdog, nmi_watchdog and soft_watchdog parameter
+  *
+- * caller             | table->data points to      | 'which'
+- * -------------------|----------------------------|--------------------------
+- * proc_watchdog      | watchdog_user_enabled      | NMI_WATCHDOG_ENABLED |
+- *                    |                            | SOFT_WATCHDOG_ENABLED
+- * -------------------|----------------------------|--------------------------
+- * proc_nmi_watchdog  | nmi_watchdog_user_enabled  | NMI_WATCHDOG_ENABLED
+- * -------------------|----------------------------|--------------------------
+- * proc_soft_watchdog | soft_watchdog_user_enabled | SOFT_WATCHDOG_ENABLED
++ * caller             | table->data points to            | 'which'
++ * -------------------|----------------------------------|-------------------------------
++ * proc_watchdog      | watchdog_user_enabled            | WATCHDOG_HARDLOCKUP_ENABLED |
++ *                    |                                  | WATCHDOG_SOFTOCKUP_ENABLED
++ * -------------------|----------------------------------|-------------------------------
++ * proc_nmi_watchdog  | watchdog_hardlockup_user_enabled | WATCHDOG_HARDLOCKUP_ENABLED
++ * -------------------|----------------------------------|-------------------------------
++ * proc_soft_watchdog | watchdog_softlockup_user_enabled | WATCHDOG_SOFTOCKUP_ENABLED
+  */
+ static int proc_watchdog_common(int which, struct ctl_table *table, int write,
+ 				void *buffer, size_t *lenp, loff_t *ppos)
+@@ -685,7 +741,8 @@ static int proc_watchdog_common(int which, struct ctl_table *table, int write,
+ int proc_watchdog(struct ctl_table *table, int write,
+ 		  void *buffer, size_t *lenp, loff_t *ppos)
+ {
+-	return proc_watchdog_common(NMI_WATCHDOG_ENABLED|SOFT_WATCHDOG_ENABLED,
++	return proc_watchdog_common(WATCHDOG_HARDLOCKUP_ENABLED |
++				    WATCHDOG_SOFTOCKUP_ENABLED,
+ 				    table, write, buffer, lenp, ppos);
+ }
+ 
+@@ -695,9 +752,9 @@ int proc_watchdog(struct ctl_table *table, int write,
+ int proc_nmi_watchdog(struct ctl_table *table, int write,
+ 		      void *buffer, size_t *lenp, loff_t *ppos)
+ {
+-	if (!nmi_watchdog_available && write)
++	if (!watchdog_hardlockup_available && write)
+ 		return -ENOTSUPP;
+-	return proc_watchdog_common(NMI_WATCHDOG_ENABLED,
++	return proc_watchdog_common(WATCHDOG_HARDLOCKUP_ENABLED,
+ 				    table, write, buffer, lenp, ppos);
+ }
+ 
+@@ -707,7 +764,7 @@ int proc_nmi_watchdog(struct ctl_table *table, int write,
+ int proc_soft_watchdog(struct ctl_table *table, int write,
+ 			void *buffer, size_t *lenp, loff_t *ppos)
+ {
+-	return proc_watchdog_common(SOFT_WATCHDOG_ENABLED,
++	return proc_watchdog_common(WATCHDOG_SOFTOCKUP_ENABLED,
+ 				    table, write, buffer, lenp, ppos);
+ }
+ 
+@@ -773,15 +830,6 @@ static struct ctl_table watchdog_sysctls[] = {
+ 		.extra1		= SYSCTL_ZERO,
+ 		.extra2		= (void *)&sixty,
+ 	},
+-	{
+-		.procname       = "nmi_watchdog",
+-		.data		= &nmi_watchdog_user_enabled,
+-		.maxlen		= sizeof(int),
+-		.mode		= NMI_WATCHDOG_SYSCTL_PERM,
+-		.proc_handler   = proc_nmi_watchdog,
+-		.extra1		= SYSCTL_ZERO,
+-		.extra2		= SYSCTL_ONE,
+-	},
+ 	{
+ 		.procname	= "watchdog_cpumask",
+ 		.data		= &watchdog_cpumask_bits,
+@@ -792,7 +840,7 @@ static struct ctl_table watchdog_sysctls[] = {
+ #ifdef CONFIG_SOFTLOCKUP_DETECTOR
+ 	{
+ 		.procname       = "soft_watchdog",
+-		.data		= &soft_watchdog_user_enabled,
++		.data		= &watchdog_softlockup_user_enabled,
+ 		.maxlen		= sizeof(int),
+ 		.mode		= 0644,
+ 		.proc_handler   = proc_soft_watchdog,
+@@ -845,14 +893,90 @@ static struct ctl_table watchdog_sysctls[] = {
+ 	{}
+ };
+ 
++static struct ctl_table watchdog_hardlockup_sysctl[] = {
++	{
++		.procname       = "nmi_watchdog",
++		.data		= &watchdog_hardlockup_user_enabled,
++		.maxlen		= sizeof(int),
++		.mode		= 0444,
++		.proc_handler   = proc_nmi_watchdog,
++		.extra1		= SYSCTL_ZERO,
++		.extra2		= SYSCTL_ONE,
++	},
++	{}
++};
++
+ static void __init watchdog_sysctl_init(void)
+ {
+ 	register_sysctl_init("kernel", watchdog_sysctls);
++
++	if (watchdog_hardlockup_available)
++		watchdog_hardlockup_sysctl[0].mode = 0644;
++	register_sysctl_init("kernel", watchdog_hardlockup_sysctl);
+ }
++
+ #else
+ #define watchdog_sysctl_init() do { } while (0)
+ #endif /* CONFIG_SYSCTL */
+ 
++static void __init lockup_detector_delay_init(struct work_struct *work);
++static bool allow_lockup_detector_init_retry __initdata;
++
++static struct work_struct detector_work __initdata =
++		__WORK_INITIALIZER(detector_work, lockup_detector_delay_init);
++
++static void __init lockup_detector_delay_init(struct work_struct *work)
++{
++	int ret;
++
++	ret = watchdog_hardlockup_probe();
++	if (ret) {
++		pr_info("Delayed init of the lockup detector failed: %d\n", ret);
++		pr_info("Hard watchdog permanently disabled\n");
++		return;
++	}
++
++	allow_lockup_detector_init_retry = false;
++
++	watchdog_hardlockup_available = true;
++	lockup_detector_setup();
++}
++
++/*
++ * lockup_detector_retry_init - retry initializing the lockup detector if possible.
++ *
++ * Retry the hardlockup detector init. This is useful when the detector
++ * requires some functionality that has to be initialized later on a
++ * particular platform.
++ */
++void __init lockup_detector_retry_init(void)
++{
++	/* Must be called before late init calls */
++	if (!allow_lockup_detector_init_retry)
++		return;
++
++	schedule_work(&detector_work);
++}
++
++/*
++ * Ensure that the optional delayed hardlockup init has completed before
++ * the init code and memory are freed.
++ */
++static int __init lockup_detector_check(void)
++{
++	/* Prevent any later retry. */
++	allow_lockup_detector_init_retry = false;
++
++	/* Make sure no work is pending. */
++	flush_work(&detector_work);
++
++	watchdog_sysctl_init();
++
++	return 0;
++
++}
++late_initcall_sync(lockup_detector_check);
++
+ void __init lockup_detector_init(void)
+ {
+ 	if (tick_nohz_full_enabled())
+@@ -861,8 +985,10 @@ void __init lockup_detector_init(void)
+ 	cpumask_copy(&watchdog_cpumask,
+ 		     housekeeping_cpumask(HK_TYPE_TIMER));
+ 
+-	if (!watchdog_nmi_probe())
+-		nmi_watchdog_available = true;
++	if (!watchdog_hardlockup_probe())
++		watchdog_hardlockup_available = true;
++	else
++		allow_lockup_detector_init_retry = true;
++
+ 	lockup_detector_setup();
+-	watchdog_sysctl_init();
+ }
+diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
+index 247bf0b1582ca..6e66e0938bbc1 100644
+--- a/kernel/watchdog_hld.c
++++ b/kernel/watchdog_hld.c
+@@ -20,13 +20,11 @@
+ #include <asm/irq_regs.h>
+ #include <linux/perf_event.h>
+ 
+-static DEFINE_PER_CPU(bool, hard_watchdog_warn);
+ static DEFINE_PER_CPU(bool, watchdog_nmi_touch);
+ static DEFINE_PER_CPU(struct perf_event *, watchdog_ev);
+ static DEFINE_PER_CPU(struct perf_event *, dead_event);
+ static struct cpumask dead_events_mask;
+ 
+-static unsigned long hardlockup_allcpu_dumped;
+ static atomic_t watchdog_cpus = ATOMIC_INIT(0);
+ 
+ notrace void arch_touch_nmi_watchdog(void)
+@@ -114,53 +112,15 @@ static void watchdog_overflow_callback(struct perf_event *event,
+ 	/* Ensure the watchdog never gets throttled */
+ 	event->hw.interrupts = 0;
+ 
+-	if (__this_cpu_read(watchdog_nmi_touch) == true) {
+-		__this_cpu_write(watchdog_nmi_touch, false);
+-		return;
+-	}
+-
+ 	if (!watchdog_check_timestamp())
+ 		return;
+ 
+-	/* check for a hardlockup
+-	 * This is done by making sure our timer interrupt
+-	 * is incrementing.  The timer interrupt should have
+-	 * fired multiple times before we overflow'd.  If it hasn't
+-	 * then this is a good indication the cpu is stuck
+-	 */
+-	if (is_hardlockup()) {
+-		int this_cpu = smp_processor_id();
+-
+-		/* only print hardlockups once */
+-		if (__this_cpu_read(hard_watchdog_warn) == true)
+-			return;
+-
+-		pr_emerg("Watchdog detected hard LOCKUP on cpu %d\n",
+-			 this_cpu);
+-		print_modules();
+-		print_irqtrace_events(current);
+-		if (regs)
+-			show_regs(regs);
+-		else
+-			dump_stack();
+-
+-		/*
+-		 * Perform all-CPU dump only once to avoid multiple hardlockups
+-		 * generating interleaving traces
+-		 */
+-		if (sysctl_hardlockup_all_cpu_backtrace &&
+-				!test_and_set_bit(0, &hardlockup_allcpu_dumped))
+-			trigger_allbutself_cpu_backtrace();
+-
+-		if (hardlockup_panic)
+-			nmi_panic(regs, "Hard LOCKUP");
+-
+-		__this_cpu_write(hard_watchdog_warn, true);
++	if (__this_cpu_read(watchdog_nmi_touch) == true) {
++		__this_cpu_write(watchdog_nmi_touch, false);
+ 		return;
+ 	}
+ 
+-	__this_cpu_write(hard_watchdog_warn, false);
+-	return;
++	watchdog_hardlockup_check(regs);
+ }
+ 
+ static int hardlockup_detector_event_create(void)
+@@ -268,7 +228,7 @@ void __init hardlockup_detector_perf_restart(void)
+ 
+ 	lockdep_assert_cpus_held();
+ 
+-	if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
++	if (!(watchdog_enabled & WATCHDOG_HARDLOCKUP_ENABLED))
+ 		return;
+ 
+ 	for_each_online_cpu(cpu) {
+diff --git a/lib/bitmap.c b/lib/bitmap.c
+index 1c81413c51f86..ddb31015e38ae 100644
+--- a/lib/bitmap.c
++++ b/lib/bitmap.c
+@@ -1495,7 +1495,7 @@ void bitmap_to_arr32(u32 *buf, const unsigned long *bitmap, unsigned int nbits)
+ EXPORT_SYMBOL(bitmap_to_arr32);
+ #endif
+ 
+-#if (BITS_PER_LONG == 32) && defined(__BIG_ENDIAN)
++#if BITS_PER_LONG == 32
+ /**
+  * bitmap_from_arr64 - copy the contents of u64 array of bits to bitmap
+  *	@bitmap: array of unsigned longs, the destination bitmap
+diff --git a/lib/dhry_1.c b/lib/dhry_1.c
+index 83247106824cc..08edbbb19f573 100644
+--- a/lib/dhry_1.c
++++ b/lib/dhry_1.c
+@@ -139,8 +139,15 @@ int dhry(int n)
+ 
+ 	/* Initializations */
+ 
+-	Next_Ptr_Glob = (Rec_Pointer)kzalloc(sizeof(Rec_Type), GFP_KERNEL);
+-	Ptr_Glob = (Rec_Pointer)kzalloc(sizeof(Rec_Type), GFP_KERNEL);
++	Next_Ptr_Glob = (Rec_Pointer)kzalloc(sizeof(Rec_Type), GFP_ATOMIC);
++	if (!Next_Ptr_Glob)
++		return -ENOMEM;
++
++	Ptr_Glob = (Rec_Pointer)kzalloc(sizeof(Rec_Type), GFP_ATOMIC);
++	if (!Ptr_Glob) {
++		kfree(Next_Ptr_Glob);
++		return -ENOMEM;
++	}
+ 
+ 	Ptr_Glob->Ptr_Comp = Next_Ptr_Glob;
+ 	Ptr_Glob->Discr = Ident_1;
+diff --git a/lib/test_firmware.c b/lib/test_firmware.c
+index 1d7d480b8eeb3..add4699fc6cd4 100644
+--- a/lib/test_firmware.c
++++ b/lib/test_firmware.c
+@@ -214,7 +214,7 @@ static int __kstrncpy(char **dst, const char *name, size_t count, gfp_t gfp)
+ {
+ 	*dst = kstrndup(name, count, gfp);
+ 	if (!*dst)
+-		return -ENOSPC;
++		return -ENOMEM;
+ 	return count;
+ }
+ 
+@@ -671,7 +671,7 @@ static ssize_t trigger_request_store(struct device *dev,
+ 
+ 	name = kstrndup(buf, count, GFP_KERNEL);
+ 	if (!name)
+-		return -ENOSPC;
++		return -ENOMEM;
+ 
+ 	pr_info("loading '%s'\n", name);
+ 
+@@ -719,7 +719,7 @@ static ssize_t trigger_request_platform_store(struct device *dev,
+ 
+ 	name = kstrndup(buf, count, GFP_KERNEL);
+ 	if (!name)
+-		return -ENOSPC;
++		return -ENOMEM;
+ 
+ 	pr_info("inserting test platform fw '%s'\n", name);
+ 	efi_embedded_fw.name = name;
+@@ -772,7 +772,7 @@ static ssize_t trigger_async_request_store(struct device *dev,
+ 
+ 	name = kstrndup(buf, count, GFP_KERNEL);
+ 	if (!name)
+-		return -ENOSPC;
++		return -ENOMEM;
+ 
+ 	pr_info("loading '%s'\n", name);
+ 
+@@ -817,7 +817,7 @@ static ssize_t trigger_custom_fallback_store(struct device *dev,
+ 
+ 	name = kstrndup(buf, count, GFP_KERNEL);
+ 	if (!name)
+-		return -ENOSPC;
++		return -ENOMEM;
+ 
+ 	pr_info("loading '%s' using custom fallback mechanism\n", name);
+ 
+@@ -868,7 +868,7 @@ static int test_fw_run_batch_request(void *data)
+ 
+ 		test_buf = kzalloc(TEST_FIRMWARE_BUF_SIZE, GFP_KERNEL);
+ 		if (!test_buf)
+-			return -ENOSPC;
++			return -ENOMEM;
+ 
+ 		if (test_fw_config->partial)
+ 			req->rc = request_partial_firmware_into_buf
+diff --git a/lib/ts_bm.c b/lib/ts_bm.c
+index 1f2234221dd11..c8ecbf74ef295 100644
+--- a/lib/ts_bm.c
++++ b/lib/ts_bm.c
+@@ -60,10 +60,12 @@ static unsigned int bm_find(struct ts_config *conf, struct ts_state *state)
+ 	struct ts_bm *bm = ts_config_priv(conf);
+ 	unsigned int i, text_len, consumed = state->offset;
+ 	const u8 *text;
+-	int shift = bm->patlen - 1, bs;
++	int bs;
+ 	const u8 icase = conf->flags & TS_IGNORECASE;
+ 
+ 	for (;;) {
++		int shift = bm->patlen - 1;
++
+ 		text_len = conf->get_next_block(consumed, &text, conf, state);
+ 
+ 		if (unlikely(text_len == 0))
+diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
+index cc63cf9536369..acc264b979034 100644
+--- a/mm/damon/ops-common.c
++++ b/mm/damon/ops-common.c
+@@ -37,7 +37,7 @@ struct folio *damon_get_folio(unsigned long pfn)
+ 	return folio;
+ }
+ 
+-void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr)
++void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr)
+ {
+ 	bool referenced = false;
+ 	struct folio *folio = damon_get_folio(pte_pfn(*pte));
+@@ -45,13 +45,11 @@ void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr)
+ 	if (!folio)
+ 		return;
+ 
+-	if (pte_young(*pte)) {
++	if (ptep_test_and_clear_young(vma, addr, pte))
+ 		referenced = true;
+-		*pte = pte_mkold(*pte);
+-	}
+ 
+ #ifdef CONFIG_MMU_NOTIFIER
+-	if (mmu_notifier_clear_young(mm, addr, addr + PAGE_SIZE))
++	if (mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE))
+ 		referenced = true;
+ #endif /* CONFIG_MMU_NOTIFIER */
+ 
+@@ -62,7 +60,7 @@ void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr)
+ 	folio_put(folio);
+ }
+ 
+-void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr)
++void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr)
+ {
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ 	bool referenced = false;
+@@ -71,13 +69,11 @@ void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr)
+ 	if (!folio)
+ 		return;
+ 
+-	if (pmd_young(*pmd)) {
++	if (pmdp_test_and_clear_young(vma, addr, pmd))
+ 		referenced = true;
+-		*pmd = pmd_mkold(*pmd);
+-	}
+ 
+ #ifdef CONFIG_MMU_NOTIFIER
+-	if (mmu_notifier_clear_young(mm, addr, addr + HPAGE_PMD_SIZE))
++	if (mmu_notifier_clear_young(vma->vm_mm, addr, addr + HPAGE_PMD_SIZE))
+ 		referenced = true;
+ #endif /* CONFIG_MMU_NOTIFIER */
+ 
+diff --git a/mm/damon/ops-common.h b/mm/damon/ops-common.h
+index 14f4bc69f29be..18d837d11bcee 100644
+--- a/mm/damon/ops-common.h
++++ b/mm/damon/ops-common.h
+@@ -9,8 +9,8 @@
+ 
+ struct folio *damon_get_folio(unsigned long pfn);
+ 
+-void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr);
+-void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr);
++void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr);
++void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr);
+ 
+ int damon_cold_score(struct damon_ctx *c, struct damon_region *r,
+ 			struct damos *s);
+diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
+index 467b99166b437..5b3a3463d0782 100644
+--- a/mm/damon/paddr.c
++++ b/mm/damon/paddr.c
+@@ -24,9 +24,9 @@ static bool __damon_pa_mkold(struct folio *folio, struct vm_area_struct *vma,
+ 	while (page_vma_mapped_walk(&pvmw)) {
+ 		addr = pvmw.address;
+ 		if (pvmw.pte)
+-			damon_ptep_mkold(pvmw.pte, vma->vm_mm, addr);
++			damon_ptep_mkold(pvmw.pte, vma, addr);
+ 		else
+-			damon_pmdp_mkold(pvmw.pmd, vma->vm_mm, addr);
++			damon_pmdp_mkold(pvmw.pmd, vma, addr);
+ 	}
+ 	return true;
+ }
+diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
+index 1fec16d7263e5..37994fb6120cb 100644
+--- a/mm/damon/vaddr.c
++++ b/mm/damon/vaddr.c
+@@ -311,7 +311,7 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
+ 		}
+ 
+ 		if (pmd_trans_huge(*pmd)) {
+-			damon_pmdp_mkold(pmd, walk->mm, addr);
++			damon_pmdp_mkold(pmd, walk->vma, addr);
+ 			spin_unlock(ptl);
+ 			return 0;
+ 		}
+@@ -323,7 +323,7 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
+ 	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+ 	if (!pte_present(*pte))
+ 		goto out;
+-	damon_ptep_mkold(pte, walk->mm, addr);
++	damon_ptep_mkold(pte, walk->vma, addr);
+ out:
+ 	pte_unmap_unlock(pte, ptl);
+ 	return 0;
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 83dda76d1fc36..8abce63b259c9 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -2906,7 +2906,7 @@ ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
+ 	do {
+ 		cond_resched();
+ 
+-		if (*ppos >= i_size_read(file_inode(in)))
++		if (*ppos >= i_size_read(in->f_mapping->host))
+ 			break;
+ 
+ 		iocb.ki_pos = *ppos;
+@@ -2922,7 +2922,7 @@ ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
+ 		 * part of the page is not copied back to userspace (unless
+ 		 * another truncate extends the file - this is desired though).
+ 		 */
+-		isize = i_size_read(file_inode(in));
++		isize = i_size_read(in->f_mapping->host);
+ 		if (unlikely(*ppos >= isize))
+ 			break;
+ 		end_offset = min_t(loff_t, isize, *ppos + len);
+diff --git a/mm/page-writeback.c b/mm/page-writeback.c
+index db79439990073..6faa09f1783b3 100644
+--- a/mm/page-writeback.c
++++ b/mm/page-writeback.c
+@@ -2434,6 +2434,7 @@ int write_cache_pages(struct address_space *mapping,
+ 
+ 		for (i = 0; i < nr_folios; i++) {
+ 			struct folio *folio = fbatch.folios[i];
++			unsigned long nr;
+ 
+ 			done_index = folio->index;
+ 
+@@ -2471,6 +2472,7 @@ continue_unlock:
+ 
+ 			trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
+ 			error = writepage(folio, wbc, data);
++			nr = folio_nr_pages(folio);
+ 			if (unlikely(error)) {
+ 				/*
+ 				 * Handle errors according to the type of
+@@ -2489,8 +2491,7 @@ continue_unlock:
+ 					error = 0;
+ 				} else if (wbc->sync_mode != WB_SYNC_ALL) {
+ 					ret = error;
+-					done_index = folio->index +
+-						folio_nr_pages(folio);
++					done_index = folio->index + nr;
+ 					done = 1;
+ 					break;
+ 				}
+@@ -2504,7 +2505,8 @@ continue_unlock:
+ 			 * keep going until we have written all the pages
+ 			 * we tagged for writeback prior to entering this loop.
+ 			 */
+-			if (--wbc->nr_to_write <= 0 &&
++			wbc->nr_to_write -= nr;
++			if (wbc->nr_to_write <= 0 &&
+ 			    wbc->sync_mode == WB_SYNC_NONE) {
+ 				done = 1;
+ 				break;
+diff --git a/mm/shmem.c b/mm/shmem.c
+index e40a08c5c6d78..74abb97ea557b 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -4196,7 +4196,7 @@ static struct file_system_type shmem_fs_type = {
+ 	.name		= "tmpfs",
+ 	.init_fs_context = ramfs_init_fs_context,
+ 	.parameters	= ramfs_fs_parameters,
+-	.kill_sb	= kill_litter_super,
++	.kill_sb	= ramfs_kill_sb,
+ 	.fs_flags	= FS_USERNS_MOUNT,
+ };
+ 
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 5bf98d0a22c9a..6114a1fc6c688 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -4728,10 +4728,11 @@ static void lru_gen_rotate_memcg(struct lruvec *lruvec, int op)
+ {
+ 	int seg;
+ 	int old, new;
++	unsigned long flags;
+ 	int bin = get_random_u32_below(MEMCG_NR_BINS);
+ 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
+ 
+-	spin_lock(&pgdat->memcg_lru.lock);
++	spin_lock_irqsave(&pgdat->memcg_lru.lock, flags);
+ 
+ 	VM_WARN_ON_ONCE(hlist_nulls_unhashed(&lruvec->lrugen.list));
+ 
+@@ -4766,7 +4767,7 @@ static void lru_gen_rotate_memcg(struct lruvec *lruvec, int op)
+ 	if (!pgdat->memcg_lru.nr_memcgs[old] && old == get_memcg_gen(pgdat->memcg_lru.seq))
+ 		WRITE_ONCE(pgdat->memcg_lru.seq, pgdat->memcg_lru.seq + 1);
+ 
+-	spin_unlock(&pgdat->memcg_lru.lock);
++	spin_unlock_irqrestore(&pgdat->memcg_lru.lock, flags);
+ }
+ 
+ void lru_gen_online_memcg(struct mem_cgroup *memcg)
+@@ -4779,7 +4780,7 @@ void lru_gen_online_memcg(struct mem_cgroup *memcg)
+ 		struct pglist_data *pgdat = NODE_DATA(nid);
+ 		struct lruvec *lruvec = get_lruvec(memcg, nid);
+ 
+-		spin_lock(&pgdat->memcg_lru.lock);
++		spin_lock_irq(&pgdat->memcg_lru.lock);
+ 
+ 		VM_WARN_ON_ONCE(!hlist_nulls_unhashed(&lruvec->lrugen.list));
+ 
+@@ -4790,7 +4791,7 @@ void lru_gen_online_memcg(struct mem_cgroup *memcg)
+ 
+ 		lruvec->lrugen.gen = gen;
+ 
+-		spin_unlock(&pgdat->memcg_lru.lock);
++		spin_unlock_irq(&pgdat->memcg_lru.lock);
+ 	}
+ }
+ 
+@@ -4814,7 +4815,7 @@ void lru_gen_release_memcg(struct mem_cgroup *memcg)
+ 		struct pglist_data *pgdat = NODE_DATA(nid);
+ 		struct lruvec *lruvec = get_lruvec(memcg, nid);
+ 
+-		spin_lock(&pgdat->memcg_lru.lock);
++		spin_lock_irq(&pgdat->memcg_lru.lock);
+ 
+ 		VM_WARN_ON_ONCE(hlist_nulls_unhashed(&lruvec->lrugen.list));
+ 
+@@ -4826,7 +4827,7 @@ void lru_gen_release_memcg(struct mem_cgroup *memcg)
+ 		if (!pgdat->memcg_lru.nr_memcgs[gen] && gen == get_memcg_gen(pgdat->memcg_lru.seq))
+ 			WRITE_ONCE(pgdat->memcg_lru.seq, pgdat->memcg_lru.seq + 1);
+ 
+-		spin_unlock(&pgdat->memcg_lru.lock);
++		spin_unlock_irq(&pgdat->memcg_lru.lock);
+ 	}
+ }
+ 
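
The three locking changes in vmscan.c follow the usual spinlock/IRQ convention: lru_gen_rotate_memcg() can be reached with interrupts in an unknown state and must save and restore flags, while the online/release paths run in process context with interrupts enabled and can use the cheaper _irq variants. As a rule-of-thumb sketch (pick exactly one per critical section):

	spin_lock(&l);                  /* lock never taken from IRQ context   */
	spin_lock_irq(&l);              /* IRQs known to be enabled on entry   */
	spin_lock_irqsave(&l, flags);   /* caller's IRQ state unknown          */
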
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 1ef952bda97d8..2275e0d9f8419 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -775,6 +775,11 @@ static void le_conn_timeout(struct work_struct *work)
+ 	hci_abort_conn(conn, HCI_ERROR_REMOTE_USER_TERM);
+ }
+ 
++struct iso_cig_params {
++	struct hci_cp_le_set_cig_params cp;
++	struct hci_cis_params cis[0x1f];
++};
++
+ struct iso_list_data {
+ 	union {
+ 		u8  cig;
+@@ -786,10 +791,7 @@ struct iso_list_data {
+ 		u16 sync_handle;
+ 	};
+ 	int count;
+-	struct {
+-		struct hci_cp_le_set_cig_params cp;
+-		struct hci_cis_params cis[0x11];
+-	} pdu;
++	struct iso_cig_params pdu;
+ };
+ 
+ static void bis_list(struct hci_conn *conn, void *data)
+@@ -1764,10 +1766,33 @@ static int hci_le_create_big(struct hci_conn *conn, struct bt_iso_qos *qos)
+ 	return hci_send_cmd(hdev, HCI_OP_LE_CREATE_BIG, sizeof(cp), &cp);
+ }
+ 
++static void set_cig_params_complete(struct hci_dev *hdev, void *data, int err)
++{
++	struct iso_cig_params *pdu = data;
++
++	bt_dev_dbg(hdev, "");
++
++	if (err)
++		bt_dev_err(hdev, "Unable to set CIG parameters: %d", err);
++
++	kfree(pdu);
++}
++
++static int set_cig_params_sync(struct hci_dev *hdev, void *data)
++{
++	struct iso_cig_params *pdu = data;
++	u32 plen;
++
++	plen = sizeof(pdu->cp) + pdu->cp.num_cis * sizeof(pdu->cis[0]);
++	return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_CIG_PARAMS, plen, pdu,
++				     HCI_CMD_TIMEOUT);
++}
++
+ static bool hci_le_set_cig_params(struct hci_conn *conn, struct bt_iso_qos *qos)
+ {
+ 	struct hci_dev *hdev = conn->hdev;
+ 	struct iso_list_data data;
++	struct iso_cig_params *pdu;
+ 
+ 	memset(&data, 0, sizeof(data));
+ 
+@@ -1837,12 +1862,18 @@ static bool hci_le_set_cig_params(struct hci_conn *conn, struct bt_iso_qos *qos)
+ 	if (qos->ucast.cis == BT_ISO_QOS_CIS_UNSET || !data.pdu.cp.num_cis)
+ 		return false;
+ 
+-	if (hci_send_cmd(hdev, HCI_OP_LE_SET_CIG_PARAMS,
+-			 sizeof(data.pdu.cp) +
+-			 (data.pdu.cp.num_cis * sizeof(*data.pdu.cis)),
+-			 &data.pdu) < 0)
++	pdu = kzalloc(sizeof(*pdu), GFP_KERNEL);
++	if (!pdu)
+ 		return false;
+ 
++	memcpy(pdu, &data.pdu, sizeof(*pdu));
++
++	if (hci_cmd_sync_queue(hdev, set_cig_params_sync, pdu,
++			       set_cig_params_complete) < 0) {
++		kfree(pdu);
++		return false;
++	}
++
+ 	return true;
+ }
+ 
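
The pattern introduced here is worth calling out: because set_cig_params_sync() runs later on the hci_sync workqueue, the PDU can no longer live on the caller's stack; it is copied to the heap, ownership passes to the queue on success, and the completion callback frees it. The kzalloc()+memcpy() pair is equivalent to the kmemdup() idiom, sketched with the patch's names:

	pdu = kmemdup(&data.pdu, sizeof(*pdu), GFP_KERNEL);
	if (!pdu)
		return false;

	if (hci_cmd_sync_queue(hdev, set_cig_params_sync, pdu,
			       set_cig_params_complete) < 0) {
		kfree(pdu);     /* never queued, so the callback will not run */
		return false;
	}
	/* on success, set_cig_params_complete() frees pdu */
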
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 09ba6d8987ee1..21e26d3b286cc 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6316,23 +6316,18 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
+ 		return;
+ 	}
+ 
+-	/* When receiving non-connectable or scannable undirected
+-	 * advertising reports, this means that the remote device is
+-	 * not connectable and then clearly indicate this in the
+-	 * device found event.
+-	 *
+-	 * When receiving a scan response, then there is no way to
++	/* When receiving a scan response, then there is no way to
+ 	 * know if the remote device is connectable or not. However
+ 	 * since scan responses are merged with a previously seen
+ 	 * advertising report, the flags field from that report
+ 	 * will be used.
+ 	 *
+-	 * In the really unlikely case that a controller get confused
+-	 * and just sends a scan response event, then it is marked as
+-	 * not connectable as well.
++	 * In the unlikely case that a controller just sends a scan
++	 * response event that doesn't match the pending report, then
++	 * it is marked as a standalone SCAN_RSP.
+ 	 */
+ 	if (type == LE_ADV_SCAN_RSP)
+-		flags = MGMT_DEV_FOUND_NOT_CONNECTABLE;
++		flags = MGMT_DEV_FOUND_SCAN_RSP;
+ 
+ 	/* If there's nothing pending either store the data from this
+ 	 * event or send an immediate device found event if the data
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 804cde43b4e02..b5b1b610df335 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -4626,23 +4626,17 @@ static int hci_dev_setup_sync(struct hci_dev *hdev)
+ 	invalid_bdaddr = test_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
+ 
+ 	if (!ret) {
+-		if (test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks)) {
+-			if (!bacmp(&hdev->public_addr, BDADDR_ANY))
+-				hci_dev_get_bd_addr_from_property(hdev);
+-
+-			if (bacmp(&hdev->public_addr, BDADDR_ANY) &&
+-			    hdev->set_bdaddr) {
+-				ret = hdev->set_bdaddr(hdev,
+-						       &hdev->public_addr);
+-
+-				/* If setting of the BD_ADDR from the device
+-				 * property succeeds, then treat the address
+-				 * as valid even if the invalid BD_ADDR
+-				 * quirk indicates otherwise.
+-				 */
+-				if (!ret)
+-					invalid_bdaddr = false;
+-			}
++		if (test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks) &&
++		    !bacmp(&hdev->public_addr, BDADDR_ANY))
++			hci_dev_get_bd_addr_from_property(hdev);
++
++		if ((invalid_bdaddr ||
++		     test_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks)) &&
++		    bacmp(&hdev->public_addr, BDADDR_ANY) &&
++		    hdev->set_bdaddr) {
++			ret = hdev->set_bdaddr(hdev, &hdev->public_addr);
++			if (!ret)
++				invalid_bdaddr = false;
+ 		}
+ 	}
+ 
+diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c
+index 3f04b40f60568..2450690f98cfa 100644
+--- a/net/bridge/br_if.c
++++ b/net/bridge/br_if.c
+@@ -166,8 +166,9 @@ void br_manage_promisc(struct net_bridge *br)
+ 			 * This lets us disable promiscuous mode and write
+ 			 * this config to hw.
+ 			 */
+-			if (br->auto_cnt == 0 ||
+-			    (br->auto_cnt == 1 && br_auto_port(p)))
++			if ((p->dev->priv_flags & IFF_UNICAST_FLT) &&
++			    (br->auto_cnt == 0 ||
++			     (br->auto_cnt == 1 && br_auto_port(p))))
+ 				br_port_clear_promisc(p);
+ 			else
+ 				br_port_set_promisc(p);
+diff --git a/net/core/filter.c b/net/core/filter.c
+index d9ce04ca22ce8..1c959794a8862 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -6555,12 +6555,11 @@ static struct sock *sk_lookup(struct net *net, struct bpf_sock_tuple *tuple,
+ static struct sock *
+ __bpf_skc_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
+ 		 struct net *caller_net, u32 ifindex, u8 proto, u64 netns_id,
+-		 u64 flags)
++		 u64 flags, int sdif)
+ {
+ 	struct sock *sk = NULL;
+ 	struct net *net;
+ 	u8 family;
+-	int sdif;
+ 
+ 	if (len == sizeof(tuple->ipv4))
+ 		family = AF_INET;
+@@ -6572,10 +6571,12 @@ __bpf_skc_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
+ 	if (unlikely(flags || !((s32)netns_id < 0 || netns_id <= S32_MAX)))
+ 		goto out;
+ 
+-	if (family == AF_INET)
+-		sdif = inet_sdif(skb);
+-	else
+-		sdif = inet6_sdif(skb);
++	if (sdif < 0) {
++		if (family == AF_INET)
++			sdif = inet_sdif(skb);
++		else
++			sdif = inet6_sdif(skb);
++	}
+ 
+ 	if ((s32)netns_id < 0) {
+ 		net = caller_net;
+@@ -6595,10 +6596,11 @@ out:
+ static struct sock *
+ __bpf_sk_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
+ 		struct net *caller_net, u32 ifindex, u8 proto, u64 netns_id,
+-		u64 flags)
++		u64 flags, int sdif)
+ {
+ 	struct sock *sk = __bpf_skc_lookup(skb, tuple, len, caller_net,
+-					   ifindex, proto, netns_id, flags);
++					   ifindex, proto, netns_id, flags,
++					   sdif);
+ 
+ 	if (sk) {
+ 		struct sock *sk2 = sk_to_full_sk(sk);
+@@ -6638,7 +6640,7 @@ bpf_skc_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
+ 	}
+ 
+ 	return __bpf_skc_lookup(skb, tuple, len, caller_net, ifindex, proto,
+-				netns_id, flags);
++				netns_id, flags, -1);
+ }
+ 
+ static struct sock *
+@@ -6727,6 +6729,78 @@ static const struct bpf_func_proto bpf_sk_lookup_udp_proto = {
+ 	.arg5_type	= ARG_ANYTHING,
+ };
+ 
++BPF_CALL_5(bpf_tc_skc_lookup_tcp, struct sk_buff *, skb,
++	   struct bpf_sock_tuple *, tuple, u32, len, u64, netns_id, u64, flags)
++{
++	struct net_device *dev = skb->dev;
++	int ifindex = dev->ifindex, sdif = dev_sdif(dev);
++	struct net *caller_net = dev_net(dev);
++
++	return (unsigned long)__bpf_skc_lookup(skb, tuple, len, caller_net,
++					       ifindex, IPPROTO_TCP, netns_id,
++					       flags, sdif);
++}
++
++static const struct bpf_func_proto bpf_tc_skc_lookup_tcp_proto = {
++	.func		= bpf_tc_skc_lookup_tcp,
++	.gpl_only	= false,
++	.pkt_access	= true,
++	.ret_type	= RET_PTR_TO_SOCK_COMMON_OR_NULL,
++	.arg1_type	= ARG_PTR_TO_CTX,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
++	.arg3_type	= ARG_CONST_SIZE,
++	.arg4_type	= ARG_ANYTHING,
++	.arg5_type	= ARG_ANYTHING,
++};
++
++BPF_CALL_5(bpf_tc_sk_lookup_tcp, struct sk_buff *, skb,
++	   struct bpf_sock_tuple *, tuple, u32, len, u64, netns_id, u64, flags)
++{
++	struct net_device *dev = skb->dev;
++	int ifindex = dev->ifindex, sdif = dev_sdif(dev);
++	struct net *caller_net = dev_net(dev);
++
++	return (unsigned long)__bpf_sk_lookup(skb, tuple, len, caller_net,
++					      ifindex, IPPROTO_TCP, netns_id,
++					      flags, sdif);
++}
++
++static const struct bpf_func_proto bpf_tc_sk_lookup_tcp_proto = {
++	.func		= bpf_tc_sk_lookup_tcp,
++	.gpl_only	= false,
++	.pkt_access	= true,
++	.ret_type	= RET_PTR_TO_SOCKET_OR_NULL,
++	.arg1_type	= ARG_PTR_TO_CTX,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
++	.arg3_type	= ARG_CONST_SIZE,
++	.arg4_type	= ARG_ANYTHING,
++	.arg5_type	= ARG_ANYTHING,
++};
++
++BPF_CALL_5(bpf_tc_sk_lookup_udp, struct sk_buff *, skb,
++	   struct bpf_sock_tuple *, tuple, u32, len, u64, netns_id, u64, flags)
++{
++	struct net_device *dev = skb->dev;
++	int ifindex = dev->ifindex, sdif = dev_sdif(dev);
++	struct net *caller_net = dev_net(dev);
++
++	return (unsigned long)__bpf_sk_lookup(skb, tuple, len, caller_net,
++					      ifindex, IPPROTO_UDP, netns_id,
++					      flags, sdif);
++}
++
++static const struct bpf_func_proto bpf_tc_sk_lookup_udp_proto = {
++	.func		= bpf_tc_sk_lookup_udp,
++	.gpl_only	= false,
++	.pkt_access	= true,
++	.ret_type	= RET_PTR_TO_SOCKET_OR_NULL,
++	.arg1_type	= ARG_PTR_TO_CTX,
++	.arg2_type	= ARG_PTR_TO_MEM | MEM_RDONLY,
++	.arg3_type	= ARG_CONST_SIZE,
++	.arg4_type	= ARG_ANYTHING,
++	.arg5_type	= ARG_ANYTHING,
++};
++
+ BPF_CALL_1(bpf_sk_release, struct sock *, sk)
+ {
+ 	if (sk && sk_is_refcounted(sk))
+@@ -6744,12 +6818,13 @@ static const struct bpf_func_proto bpf_sk_release_proto = {
+ BPF_CALL_5(bpf_xdp_sk_lookup_udp, struct xdp_buff *, ctx,
+ 	   struct bpf_sock_tuple *, tuple, u32, len, u32, netns_id, u64, flags)
+ {
+-	struct net *caller_net = dev_net(ctx->rxq->dev);
+-	int ifindex = ctx->rxq->dev->ifindex;
++	struct net_device *dev = ctx->rxq->dev;
++	int ifindex = dev->ifindex, sdif = dev_sdif(dev);
++	struct net *caller_net = dev_net(dev);
+ 
+ 	return (unsigned long)__bpf_sk_lookup(NULL, tuple, len, caller_net,
+ 					      ifindex, IPPROTO_UDP, netns_id,
+-					      flags);
++					      flags, sdif);
+ }
+ 
+ static const struct bpf_func_proto bpf_xdp_sk_lookup_udp_proto = {
+@@ -6767,12 +6842,13 @@ static const struct bpf_func_proto bpf_xdp_sk_lookup_udp_proto = {
+ BPF_CALL_5(bpf_xdp_skc_lookup_tcp, struct xdp_buff *, ctx,
+ 	   struct bpf_sock_tuple *, tuple, u32, len, u32, netns_id, u64, flags)
+ {
+-	struct net *caller_net = dev_net(ctx->rxq->dev);
+-	int ifindex = ctx->rxq->dev->ifindex;
++	struct net_device *dev = ctx->rxq->dev;
++	int ifindex = dev->ifindex, sdif = dev_sdif(dev);
++	struct net *caller_net = dev_net(dev);
+ 
+ 	return (unsigned long)__bpf_skc_lookup(NULL, tuple, len, caller_net,
+ 					       ifindex, IPPROTO_TCP, netns_id,
+-					       flags);
++					       flags, sdif);
+ }
+ 
+ static const struct bpf_func_proto bpf_xdp_skc_lookup_tcp_proto = {
+@@ -6790,12 +6866,13 @@ static const struct bpf_func_proto bpf_xdp_skc_lookup_tcp_proto = {
+ BPF_CALL_5(bpf_xdp_sk_lookup_tcp, struct xdp_buff *, ctx,
+ 	   struct bpf_sock_tuple *, tuple, u32, len, u32, netns_id, u64, flags)
+ {
+-	struct net *caller_net = dev_net(ctx->rxq->dev);
+-	int ifindex = ctx->rxq->dev->ifindex;
++	struct net_device *dev = ctx->rxq->dev;
++	int ifindex = dev->ifindex, sdif = dev_sdif(dev);
++	struct net *caller_net = dev_net(dev);
+ 
+ 	return (unsigned long)__bpf_sk_lookup(NULL, tuple, len, caller_net,
+ 					      ifindex, IPPROTO_TCP, netns_id,
+-					      flags);
++					      flags, sdif);
+ }
+ 
+ static const struct bpf_func_proto bpf_xdp_sk_lookup_tcp_proto = {
+@@ -6815,7 +6892,8 @@ BPF_CALL_5(bpf_sock_addr_skc_lookup_tcp, struct bpf_sock_addr_kern *, ctx,
+ {
+ 	return (unsigned long)__bpf_skc_lookup(NULL, tuple, len,
+ 					       sock_net(ctx->sk), 0,
+-					       IPPROTO_TCP, netns_id, flags);
++					       IPPROTO_TCP, netns_id, flags,
++					       -1);
+ }
+ 
+ static const struct bpf_func_proto bpf_sock_addr_skc_lookup_tcp_proto = {
+@@ -6834,7 +6912,7 @@ BPF_CALL_5(bpf_sock_addr_sk_lookup_tcp, struct bpf_sock_addr_kern *, ctx,
+ {
+ 	return (unsigned long)__bpf_sk_lookup(NULL, tuple, len,
+ 					      sock_net(ctx->sk), 0, IPPROTO_TCP,
+-					      netns_id, flags);
++					      netns_id, flags, -1);
+ }
+ 
+ static const struct bpf_func_proto bpf_sock_addr_sk_lookup_tcp_proto = {
+@@ -6853,7 +6931,7 @@ BPF_CALL_5(bpf_sock_addr_sk_lookup_udp, struct bpf_sock_addr_kern *, ctx,
+ {
+ 	return (unsigned long)__bpf_sk_lookup(NULL, tuple, len,
+ 					      sock_net(ctx->sk), 0, IPPROTO_UDP,
+-					      netns_id, flags);
++					      netns_id, flags, -1);
+ }
+ 
+ static const struct bpf_func_proto bpf_sock_addr_sk_lookup_udp_proto = {
+@@ -7980,9 +8058,9 @@ tc_cls_act_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+ #endif
+ #ifdef CONFIG_INET
+ 	case BPF_FUNC_sk_lookup_tcp:
+-		return &bpf_sk_lookup_tcp_proto;
++		return &bpf_tc_sk_lookup_tcp_proto;
+ 	case BPF_FUNC_sk_lookup_udp:
+-		return &bpf_sk_lookup_udp_proto;
++		return &bpf_tc_sk_lookup_udp_proto;
+ 	case BPF_FUNC_sk_release:
+ 		return &bpf_sk_release_proto;
+ 	case BPF_FUNC_tcp_sock:
+@@ -7990,7 +8068,7 @@ tc_cls_act_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+ 	case BPF_FUNC_get_listener_sock:
+ 		return &bpf_get_listener_sock_proto;
+ 	case BPF_FUNC_skc_lookup_tcp:
+-		return &bpf_skc_lookup_tcp_proto;
++		return &bpf_tc_skc_lookup_tcp_proto;
+ 	case BPF_FUNC_tcp_check_syncookie:
+ 		return &bpf_tcp_check_syncookie_proto;
+ 	case BPF_FUNC_skb_ecn_set_ce:
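
All of the new bpf_tc_* helpers lean on a dev_sdif() accessor so that socket lookups from TC programs respect VRF (L3 master device) bindings, with sdif == -1 meaning "derive it from the skb as before". The helper itself is added to include/linux/netdevice.h elsewhere in this series; its presumed shape, for reference:

	static inline int dev_sdif(const struct net_device *dev)
	{
	#ifdef CONFIG_NET_L3_MASTER_DEV
		if (netif_is_l3_slave(dev))
			return dev->ifindex;    /* the slave is the sdif */
	#endif
		return 0;                       /* no L3 master device */
	}
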
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 41de3a2f29e15..2fe6a3379aaed 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -961,24 +961,27 @@ static inline int rtnl_vfinfo_size(const struct net_device *dev,
+ 			 nla_total_size(sizeof(struct ifla_vf_rate)) +
+ 			 nla_total_size(sizeof(struct ifla_vf_link_state)) +
+ 			 nla_total_size(sizeof(struct ifla_vf_rss_query_en)) +
+-			 nla_total_size(0) + /* nest IFLA_VF_STATS */
+-			 /* IFLA_VF_STATS_RX_PACKETS */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_TX_PACKETS */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_RX_BYTES */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_TX_BYTES */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_BROADCAST */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_MULTICAST */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_RX_DROPPED */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+-			 /* IFLA_VF_STATS_TX_DROPPED */
+-			 nla_total_size_64bit(sizeof(__u64)) +
+ 			 nla_total_size(sizeof(struct ifla_vf_trust)));
++		if (~ext_filter_mask & RTEXT_FILTER_SKIP_STATS) {
++			size += num_vfs *
++				(nla_total_size(0) + /* nest IFLA_VF_STATS */
++				 /* IFLA_VF_STATS_RX_PACKETS */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_TX_PACKETS */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_RX_BYTES */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_TX_BYTES */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_BROADCAST */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_MULTICAST */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_RX_DROPPED */
++				 nla_total_size_64bit(sizeof(__u64)) +
++				 /* IFLA_VF_STATS_TX_DROPPED */
++				 nla_total_size_64bit(sizeof(__u64)));
++		}
+ 		return size;
+ 	} else
+ 		return 0;
+@@ -1270,7 +1273,8 @@ static noinline_for_stack int rtnl_fill_stats(struct sk_buff *skb,
+ static noinline_for_stack int rtnl_fill_vfinfo(struct sk_buff *skb,
+ 					       struct net_device *dev,
+ 					       int vfs_num,
+-					       struct nlattr *vfinfo)
++					       struct nlattr *vfinfo,
++					       u32 ext_filter_mask)
+ {
+ 	struct ifla_vf_rss_query_en vf_rss_query_en;
+ 	struct nlattr *vf, *vfstats, *vfvlanlist;
+@@ -1376,33 +1380,35 @@ static noinline_for_stack int rtnl_fill_vfinfo(struct sk_buff *skb,
+ 		goto nla_put_vf_failure;
+ 	}
+ 	nla_nest_end(skb, vfvlanlist);
+-	memset(&vf_stats, 0, sizeof(vf_stats));
+-	if (dev->netdev_ops->ndo_get_vf_stats)
+-		dev->netdev_ops->ndo_get_vf_stats(dev, vfs_num,
+-						&vf_stats);
+-	vfstats = nla_nest_start_noflag(skb, IFLA_VF_STATS);
+-	if (!vfstats)
+-		goto nla_put_vf_failure;
+-	if (nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_PACKETS,
+-			      vf_stats.rx_packets, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_PACKETS,
+-			      vf_stats.tx_packets, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_BYTES,
+-			      vf_stats.rx_bytes, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_BYTES,
+-			      vf_stats.tx_bytes, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_BROADCAST,
+-			      vf_stats.broadcast, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_MULTICAST,
+-			      vf_stats.multicast, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_DROPPED,
+-			      vf_stats.rx_dropped, IFLA_VF_STATS_PAD) ||
+-	    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_DROPPED,
+-			      vf_stats.tx_dropped, IFLA_VF_STATS_PAD)) {
+-		nla_nest_cancel(skb, vfstats);
+-		goto nla_put_vf_failure;
++	if (~ext_filter_mask & RTEXT_FILTER_SKIP_STATS) {
++		memset(&vf_stats, 0, sizeof(vf_stats));
++		if (dev->netdev_ops->ndo_get_vf_stats)
++			dev->netdev_ops->ndo_get_vf_stats(dev, vfs_num,
++							  &vf_stats);
++		vfstats = nla_nest_start_noflag(skb, IFLA_VF_STATS);
++		if (!vfstats)
++			goto nla_put_vf_failure;
++		if (nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_PACKETS,
++				      vf_stats.rx_packets, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_PACKETS,
++				      vf_stats.tx_packets, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_BYTES,
++				      vf_stats.rx_bytes, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_BYTES,
++				      vf_stats.tx_bytes, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_BROADCAST,
++				      vf_stats.broadcast, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_MULTICAST,
++				      vf_stats.multicast, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_RX_DROPPED,
++				      vf_stats.rx_dropped, IFLA_VF_STATS_PAD) ||
++		    nla_put_u64_64bit(skb, IFLA_VF_STATS_TX_DROPPED,
++				      vf_stats.tx_dropped, IFLA_VF_STATS_PAD)) {
++			nla_nest_cancel(skb, vfstats);
++			goto nla_put_vf_failure;
++		}
++		nla_nest_end(skb, vfstats);
+ 	}
+-	nla_nest_end(skb, vfstats);
+ 	nla_nest_end(skb, vf);
+ 	return 0;
+ 
+@@ -1435,7 +1441,7 @@ static noinline_for_stack int rtnl_fill_vf(struct sk_buff *skb,
+ 		return -EMSGSIZE;
+ 
+ 	for (i = 0; i < num_vfs; i++) {
+-		if (rtnl_fill_vfinfo(skb, dev, i, vfinfo))
++		if (rtnl_fill_vfinfo(skb, dev, i, vfinfo, ext_filter_mask))
+ 			return -EMSGSIZE;
+ 	}
+ 
+@@ -4090,7 +4096,7 @@ static int nlmsg_populate_fdb_fill(struct sk_buff *skb,
+ 	ndm->ndm_ifindex = dev->ifindex;
+ 	ndm->ndm_state   = ndm_state;
+ 
+-	if (nla_put(skb, NDA_LLADDR, ETH_ALEN, addr))
++	if (nla_put(skb, NDA_LLADDR, dev->addr_len, addr))
+ 		goto nla_put_failure;
+ 	if (vid)
+ 		if (nla_put(skb, NDA_VLAN, sizeof(u16), &vid))
+@@ -4104,10 +4110,10 @@ nla_put_failure:
+ 	return -EMSGSIZE;
+ }
+ 
+-static inline size_t rtnl_fdb_nlmsg_size(void)
++static inline size_t rtnl_fdb_nlmsg_size(const struct net_device *dev)
+ {
+ 	return NLMSG_ALIGN(sizeof(struct ndmsg)) +
+-	       nla_total_size(ETH_ALEN) +	/* NDA_LLADDR */
++	       nla_total_size(dev->addr_len) +	/* NDA_LLADDR */
+ 	       nla_total_size(sizeof(u16)) +	/* NDA_VLAN */
+ 	       0;
+ }
+@@ -4119,7 +4125,7 @@ static void rtnl_fdb_notify(struct net_device *dev, u8 *addr, u16 vid, int type,
+ 	struct sk_buff *skb;
+ 	int err = -ENOBUFS;
+ 
+-	skb = nlmsg_new(rtnl_fdb_nlmsg_size(), GFP_ATOMIC);
++	skb = nlmsg_new(rtnl_fdb_nlmsg_size(dev), GFP_ATOMIC);
+ 	if (!skb)
+ 		goto errout;
+ 
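
Two independent fixes sit in this rtnetlink.c section: per-VF stats (eight u64 counters per VF) are skipped in both the size estimate and the fill when the requester sets RTEXT_FILTER_SKIP_STATS, keeping RTM_GETLINK replies bounded on NICs with thousands of VFs, and FDB notifications size NDA_LLADDR by dev->addr_len instead of assuming ETH_ALEN. A hypothetical userspace fragment opting out of the stats, using iproute2-style libnetlink helpers (names assumed, not verified here):

	__u32 ext_mask = RTEXT_FILTER_VF | RTEXT_FILTER_SKIP_STATS;

	/* req is an RTM_GETLINK dump request being assembled */
	addattr32(&req.nh, sizeof(req), IFLA_EXT_MASK, ext_mask);
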
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 6e5662ca00fe5..4a0edccf86066 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2550,13 +2550,24 @@ kuid_t sock_i_uid(struct sock *sk)
+ }
+ EXPORT_SYMBOL(sock_i_uid);
+ 
+-unsigned long sock_i_ino(struct sock *sk)
++unsigned long __sock_i_ino(struct sock *sk)
+ {
+ 	unsigned long ino;
+ 
+-	read_lock_bh(&sk->sk_callback_lock);
++	read_lock(&sk->sk_callback_lock);
+ 	ino = sk->sk_socket ? SOCK_INODE(sk->sk_socket)->i_ino : 0;
+-	read_unlock_bh(&sk->sk_callback_lock);
++	read_unlock(&sk->sk_callback_lock);
++	return ino;
++}
++EXPORT_SYMBOL(__sock_i_ino);
++
++unsigned long sock_i_ino(struct sock *sk)
++{
++	unsigned long ino;
++
++	local_bh_disable();
++	ino = __sock_i_ino(sk);
++	local_bh_enable();
+ 	return ino;
+ }
+ EXPORT_SYMBOL(sock_i_ino);
+diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
+index 1afed89e03c00..ccbdb98109f80 100644
+--- a/net/dsa/dsa.c
++++ b/net/dsa/dsa.c
+@@ -1106,7 +1106,7 @@ static struct dsa_port *dsa_port_touch(struct dsa_switch *ds, int index)
+ 	mutex_init(&dp->vlans_lock);
+ 	INIT_LIST_HEAD(&dp->fdbs);
+ 	INIT_LIST_HEAD(&dp->mdbs);
+-	INIT_LIST_HEAD(&dp->vlans);
++	INIT_LIST_HEAD(&dp->vlans); /* also initializes &dp->user_vlans */
+ 	INIT_LIST_HEAD(&dp->list);
+ 	list_add_tail(&dp->list, &dst->ports);
+ 
+diff --git a/net/dsa/slave.c b/net/dsa/slave.c
+index 165bb2cb84316..527b1d576460f 100644
+--- a/net/dsa/slave.c
++++ b/net/dsa/slave.c
+@@ -27,6 +27,7 @@
+ #include "master.h"
+ #include "netlink.h"
+ #include "slave.h"
++#include "switch.h"
+ #include "tag.h"
+ 
+ struct dsa_switchdev_event_work {
+@@ -161,8 +162,7 @@ static int dsa_slave_schedule_standalone_work(struct net_device *dev,
+ 	return 0;
+ }
+ 
+-static int dsa_slave_host_vlan_rx_filtering(struct net_device *vdev, int vid,
+-					    void *arg)
++static int dsa_slave_host_vlan_rx_filtering(void *arg, int vid)
+ {
+ 	struct dsa_host_vlan_rx_filtering_ctx *ctx = arg;
+ 
+@@ -170,6 +170,28 @@ static int dsa_slave_host_vlan_rx_filtering(struct net_device *vdev, int vid,
+ 						  ctx->addr, vid);
+ }
+ 
++static int dsa_slave_vlan_for_each(struct net_device *dev,
++				   int (*cb)(void *arg, int vid), void *arg)
++{
++	struct dsa_port *dp = dsa_slave_to_port(dev);
++	struct dsa_vlan *v;
++	int err;
++
++	lockdep_assert_held(&dev->addr_list_lock);
++
++	err = cb(arg, 0);
++	if (err)
++		return err;
++
++	list_for_each_entry(v, &dp->user_vlans, list) {
++		err = cb(arg, v->vid);
++		if (err)
++			return err;
++	}
++
++	return 0;
++}
++
+ static int dsa_slave_sync_uc(struct net_device *dev,
+ 			     const unsigned char *addr)
+ {
+@@ -180,18 +202,14 @@ static int dsa_slave_sync_uc(struct net_device *dev,
+ 		.addr = addr,
+ 		.event = DSA_UC_ADD,
+ 	};
+-	int err;
+ 
+ 	dev_uc_add(master, addr);
+ 
+ 	if (!dsa_switch_supports_uc_filtering(dp->ds))
+ 		return 0;
+ 
+-	err = dsa_slave_schedule_standalone_work(dev, DSA_UC_ADD, addr, 0);
+-	if (err)
+-		return err;
+-
+-	return vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, &ctx);
++	return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering,
++				       &ctx);
+ }
+ 
+ static int dsa_slave_unsync_uc(struct net_device *dev,
+@@ -204,18 +222,14 @@ static int dsa_slave_unsync_uc(struct net_device *dev,
+ 		.addr = addr,
+ 		.event = DSA_UC_DEL,
+ 	};
+-	int err;
+ 
+ 	dev_uc_del(master, addr);
+ 
+ 	if (!dsa_switch_supports_uc_filtering(dp->ds))
+ 		return 0;
+ 
+-	err = dsa_slave_schedule_standalone_work(dev, DSA_UC_DEL, addr, 0);
+-	if (err)
+-		return err;
+-
+-	return vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, &ctx);
++	return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering,
++				       &ctx);
+ }
+ 
+ static int dsa_slave_sync_mc(struct net_device *dev,
+@@ -228,18 +242,14 @@ static int dsa_slave_sync_mc(struct net_device *dev,
+ 		.addr = addr,
+ 		.event = DSA_MC_ADD,
+ 	};
+-	int err;
+ 
+ 	dev_mc_add(master, addr);
+ 
+ 	if (!dsa_switch_supports_mc_filtering(dp->ds))
+ 		return 0;
+ 
+-	err = dsa_slave_schedule_standalone_work(dev, DSA_MC_ADD, addr, 0);
+-	if (err)
+-		return err;
+-
+-	return vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, &ctx);
++	return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering,
++				       &ctx);
+ }
+ 
+ static int dsa_slave_unsync_mc(struct net_device *dev,
+@@ -252,18 +262,14 @@ static int dsa_slave_unsync_mc(struct net_device *dev,
+ 		.addr = addr,
+ 		.event = DSA_MC_DEL,
+ 	};
+-	int err;
+ 
+ 	dev_mc_del(master, addr);
+ 
+ 	if (!dsa_switch_supports_mc_filtering(dp->ds))
+ 		return 0;
+ 
+-	err = dsa_slave_schedule_standalone_work(dev, DSA_MC_DEL, addr, 0);
+-	if (err)
+-		return err;
+-
+-	return vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, &ctx);
++	return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering,
++				       &ctx);
+ }
+ 
+ void dsa_slave_sync_ha(struct net_device *dev)
+@@ -1759,6 +1765,7 @@ static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
+ 	struct netlink_ext_ack extack = {0};
+ 	struct dsa_switch *ds = dp->ds;
+ 	struct netdev_hw_addr *ha;
++	struct dsa_vlan *v;
+ 	int ret;
+ 
+ 	/* User port... */
+@@ -1782,8 +1789,17 @@ static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
+ 	    !dsa_switch_supports_mc_filtering(ds))
+ 		return 0;
+ 
++	v = kzalloc(sizeof(*v), GFP_KERNEL);
++	if (!v) {
++		ret = -ENOMEM;
++		goto rollback;
++	}
++
+ 	netif_addr_lock_bh(dev);
+ 
++	v->vid = vid;
++	list_add_tail(&v->list, &dp->user_vlans);
++
+ 	if (dsa_switch_supports_mc_filtering(ds)) {
+ 		netdev_for_each_synced_mc_addr(ha, dev) {
+ 			dsa_slave_schedule_standalone_work(dev, DSA_MC_ADD,
+@@ -1803,6 +1819,12 @@ static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
+ 	dsa_flush_workqueue();
+ 
+ 	return 0;
++
++rollback:
++	dsa_port_host_vlan_del(dp, &vlan);
++	dsa_port_vlan_del(dp, &vlan);
++
++	return ret;
+ }
+ 
+ static int dsa_slave_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
+@@ -1816,6 +1838,7 @@ static int dsa_slave_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
+ 	};
+ 	struct dsa_switch *ds = dp->ds;
+ 	struct netdev_hw_addr *ha;
++	struct dsa_vlan *v;
+ 	int err;
+ 
+ 	err = dsa_port_vlan_del(dp, &vlan);
+@@ -1832,6 +1855,15 @@ static int dsa_slave_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
+ 
+ 	netif_addr_lock_bh(dev);
+ 
++	v = dsa_vlan_find(&dp->user_vlans, &vlan);
++	if (!v) {
++		netif_addr_unlock_bh(dev);
++		return -ENOENT;
++	}
++
++	list_del(&v->list);
++	kfree(v);
++
+ 	if (dsa_switch_supports_mc_filtering(ds)) {
+ 		netdev_for_each_synced_mc_addr(ha, dev) {
+ 			dsa_slave_schedule_standalone_work(dev, DSA_MC_DEL,
+diff --git a/net/dsa/switch.c b/net/dsa/switch.c
+index 8c9a9f94b756a..1a42f93173345 100644
+--- a/net/dsa/switch.c
++++ b/net/dsa/switch.c
+@@ -673,8 +673,8 @@ static bool dsa_port_host_vlan_match(struct dsa_port *dp,
+ 	return false;
+ }
+ 
+-static struct dsa_vlan *dsa_vlan_find(struct list_head *vlan_list,
+-				      const struct switchdev_obj_port_vlan *vlan)
++struct dsa_vlan *dsa_vlan_find(struct list_head *vlan_list,
++			       const struct switchdev_obj_port_vlan *vlan)
+ {
+ 	struct dsa_vlan *v;
+ 
+diff --git a/net/dsa/switch.h b/net/dsa/switch.h
+index 15e67b95eb6e1..ea034677da153 100644
+--- a/net/dsa/switch.h
++++ b/net/dsa/switch.h
+@@ -111,6 +111,9 @@ struct dsa_notifier_master_state_info {
+ 	bool operational;
+ };
+ 
++struct dsa_vlan *dsa_vlan_find(struct list_head *vlan_list,
++			       const struct switchdev_obj_port_vlan *vlan);
++
+ int dsa_tree_notify(struct dsa_switch_tree *dst, unsigned long e, void *v);
+ int dsa_broadcast(unsigned long e, void *v);
+ 
+diff --git a/net/dsa/tag_sja1105.c b/net/dsa/tag_sja1105.c
+index a5f3b73da417f..ade3eeb2f3e6d 100644
+--- a/net/dsa/tag_sja1105.c
++++ b/net/dsa/tag_sja1105.c
+@@ -58,11 +58,8 @@
+ #define SJA1110_TX_TRAILER_LEN			4
+ #define SJA1110_MAX_PADDING_LEN			15
+ 
+-#define SJA1105_HWTS_RX_EN			0
+-
+ struct sja1105_tagger_private {
+ 	struct sja1105_tagger_data data; /* Must be first */
+-	unsigned long state;
+ 	/* Protects concurrent access to the meta state machine
+ 	 * from taggers running on multiple ports on SMP systems
+ 	 */
+@@ -118,8 +115,8 @@ static void sja1105_meta_unpack(const struct sk_buff *skb,
+ 	 * a unified unpacking command for both device series.
+ 	 */
+ 	packing(buf,     &meta->tstamp,     31, 0, 4, UNPACK, 0);
+-	packing(buf + 4, &meta->dmac_byte_4, 7, 0, 1, UNPACK, 0);
+-	packing(buf + 5, &meta->dmac_byte_3, 7, 0, 1, UNPACK, 0);
++	packing(buf + 4, &meta->dmac_byte_3, 7, 0, 1, UNPACK, 0);
++	packing(buf + 5, &meta->dmac_byte_4, 7, 0, 1, UNPACK, 0);
+ 	packing(buf + 6, &meta->source_port, 7, 0, 1, UNPACK, 0);
+ 	packing(buf + 7, &meta->switch_id,   7, 0, 1, UNPACK, 0);
+ }
+@@ -392,10 +389,6 @@ static struct sk_buff
+ 
+ 		priv = sja1105_tagger_private(ds);
+ 
+-		if (!test_bit(SJA1105_HWTS_RX_EN, &priv->state))
+-			/* Do normal processing. */
+-			return skb;
+-
+ 		spin_lock(&priv->meta_lock);
+ 		/* Was this a link-local frame instead of the meta
+ 		 * that we were expecting?
+@@ -431,12 +424,6 @@ static struct sk_buff
+ 
+ 		priv = sja1105_tagger_private(ds);
+ 
+-		/* Drop the meta frame if we're not in the right state
+-		 * to process it.
+-		 */
+-		if (!test_bit(SJA1105_HWTS_RX_EN, &priv->state))
+-			return NULL;
+-
+ 		spin_lock(&priv->meta_lock);
+ 
+ 		stampable_skb = priv->stampable_skb;
+@@ -472,30 +459,6 @@ static struct sk_buff
+ 	return skb;
+ }
+ 
+-static bool sja1105_rxtstamp_get_state(struct dsa_switch *ds)
+-{
+-	struct sja1105_tagger_private *priv = sja1105_tagger_private(ds);
+-
+-	return test_bit(SJA1105_HWTS_RX_EN, &priv->state);
+-}
+-
+-static void sja1105_rxtstamp_set_state(struct dsa_switch *ds, bool on)
+-{
+-	struct sja1105_tagger_private *priv = sja1105_tagger_private(ds);
+-
+-	if (on)
+-		set_bit(SJA1105_HWTS_RX_EN, &priv->state);
+-	else
+-		clear_bit(SJA1105_HWTS_RX_EN, &priv->state);
+-
+-	/* Initialize the meta state machine to a known state */
+-	if (!priv->stampable_skb)
+-		return;
+-
+-	kfree_skb(priv->stampable_skb);
+-	priv->stampable_skb = NULL;
+-}
+-
+ static bool sja1105_skb_has_tag_8021q(const struct sk_buff *skb)
+ {
+ 	u16 tpid = ntohs(eth_hdr(skb)->h_proto);
+@@ -545,33 +508,53 @@ static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
+ 	is_link_local = sja1105_is_link_local(skb);
+ 	is_meta = sja1105_is_meta_frame(skb);
+ 
+-	if (sja1105_skb_has_tag_8021q(skb)) {
+-		/* Normal traffic path. */
+-		sja1105_vlan_rcv(skb, &source_port, &switch_id, &vbid, &vid);
+-	} else if (is_link_local) {
++	if (is_link_local) {
+ 		/* Management traffic path. Switch embeds the switch ID and
+ 		 * port ID into bytes of the destination MAC, courtesy of
+ 		 * the incl_srcpt options.
+ 		 */
+ 		source_port = hdr->h_dest[3];
+ 		switch_id = hdr->h_dest[4];
+-		/* Clear the DMAC bytes that were mangled by the switch */
+-		hdr->h_dest[3] = 0;
+-		hdr->h_dest[4] = 0;
+ 	} else if (is_meta) {
+ 		sja1105_meta_unpack(skb, &meta);
+ 		source_port = meta.source_port;
+ 		switch_id = meta.switch_id;
+-	} else {
++	}
++
++	/* Normal data plane traffic and link-local frames are tagged with
++	 * a tag_8021q VLAN which we have to strip
++	 */
++	if (sja1105_skb_has_tag_8021q(skb)) {
++		int tmp_source_port = -1, tmp_switch_id = -1;
++
++		sja1105_vlan_rcv(skb, &tmp_source_port, &tmp_switch_id, &vbid,
++				 &vid);
++		/* Preserve the source information from the INCL_SRCPT option,
++		 * if available. This allows us to not overwrite a valid source
++		 * port and switch ID with zeroes when receiving link-local
++		 * frames from a VLAN-unaware bridged port (non-zero vbid) or a
++		 * VLAN-aware bridged port (non-zero vid). Furthermore, the
++		 * tag_8021q source port information is only of trust when the
++		 * vbid is 0 (precise port). Otherwise, tmp_source_port and
++		 * tmp_switch_id will be zeroes.
++		 */
++		if (vbid == 0 && source_port == -1)
++			source_port = tmp_source_port;
++		if (vbid == 0 && switch_id == -1)
++			switch_id = tmp_switch_id;
++	} else if (source_port == -1 && switch_id == -1) {
++		/* Packets with no source information have no chance of
++		 * getting accepted, drop them straight away.
++		 */
+ 		return NULL;
+ 	}
+ 
+-	if (vbid >= 1)
++	if (source_port != -1 && switch_id != -1)
++		skb->dev = dsa_master_find_slave(netdev, switch_id, source_port);
++	else if (vbid >= 1)
+ 		skb->dev = dsa_tag_8021q_find_port_by_vbid(netdev, vbid);
+-	else if (source_port == -1 || switch_id == -1)
+-		skb->dev = dsa_find_designated_bridge_port_by_vid(netdev, vid);
+ 	else
+-		skb->dev = dsa_master_find_slave(netdev, switch_id, source_port);
++		skb->dev = dsa_find_designated_bridge_port_by_vid(netdev, vid);
+ 	if (!skb->dev) {
+ 		netdev_warn(netdev, "Couldn't decode source port\n");
+ 		return NULL;
+@@ -762,7 +745,6 @@ static void sja1105_disconnect(struct dsa_switch *ds)
+ 
+ static int sja1105_connect(struct dsa_switch *ds)
+ {
+-	struct sja1105_tagger_data *tagger_data;
+ 	struct sja1105_tagger_private *priv;
+ 	struct kthread_worker *xmit_worker;
+ 	int err;
+@@ -782,10 +764,6 @@ static int sja1105_connect(struct dsa_switch *ds)
+ 	}
+ 
+ 	priv->xmit_worker = xmit_worker;
+-	/* Export functions for switch driver use */
+-	tagger_data = &priv->data;
+-	tagger_data->rxtstamp_get_state = sja1105_rxtstamp_get_state;
+-	tagger_data->rxtstamp_set_state = sja1105_rxtstamp_set_state;
+ 	ds->tagger_data = priv;
+ 
+ 	return 0;
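
Net effect of the sja1105_rcv() rework: source port resolution now proceeds by precedence rather than by frame class, with exact INCL_SRCPT/meta information winning over the imprecise tag_8021q bridge id, which in turn wins over the VLAN-designated fallback. The final decision, restated from the hunk above:

	if (source_port != -1 && switch_id != -1)       /* exact: INCL_SRCPT or meta */
		skb->dev = dsa_master_find_slave(netdev, switch_id, source_port);
	else if (vbid >= 1)                             /* imprecise bridge id */
		skb->dev = dsa_tag_8021q_find_port_by_vbid(netdev, vbid);
	else                                            /* VLAN-designated port */
		skb->dev = dsa_find_designated_bridge_port_by_vid(netdev, vid);
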
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index bf8b22218dd46..57f1e4883b761 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -3590,8 +3590,11 @@ static int tcp_ack_update_window(struct sock *sk, const struct sk_buff *skb, u32
+ static bool __tcp_oow_rate_limited(struct net *net, int mib_idx,
+ 				   u32 *last_oow_ack_time)
+ {
+-	if (*last_oow_ack_time) {
+-		s32 elapsed = (s32)(tcp_jiffies32 - *last_oow_ack_time);
++	/* Paired with the WRITE_ONCE() in this function. */
++	u32 val = READ_ONCE(*last_oow_ack_time);
++
++	if (val) {
++		s32 elapsed = (s32)(tcp_jiffies32 - val);
+ 
+ 		if (0 <= elapsed &&
+ 		    elapsed < READ_ONCE(net->ipv4.sysctl_tcp_invalid_ratelimit)) {
+@@ -3600,7 +3603,10 @@ static bool __tcp_oow_rate_limited(struct net *net, int mib_idx,
+ 		}
+ 	}
+ 
+-	*last_oow_ack_time = tcp_jiffies32;
++	/* Paired with the prior READ_ONCE() and with itself,
++	 * as we might be lockless.
++	 */
++	WRITE_ONCE(*last_oow_ack_time, tcp_jiffies32);
+ 
+ 	return false;	/* not rate-limited: go ahead, send dupack now! */
+ }
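
The annotations above are the standard idiom for a field touched without a lock: last_oow_ack_time may be read and written concurrently, so the value is sampled once with READ_ONCE() and all later arithmetic uses that copy, while the update is a single WRITE_ONCE(). Schematically (limit stands in for the invalid_ratelimit sysctl):

	u32 val = READ_ONCE(*last_oow_ack_time);        /* one tear-free sample */

	if (val && (s32)(tcp_jiffies32 - val) < limit)
		return true;                            /* still rate-limited */

	WRITE_ONCE(*last_oow_ack_time, tcp_jiffies32);  /* single tear-free store */
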
+diff --git a/net/mac80211/debugfs_netdev.c b/net/mac80211/debugfs_netdev.c
+index b0cef37eb3948..03374eb8b7cb9 100644
+--- a/net/mac80211/debugfs_netdev.c
++++ b/net/mac80211/debugfs_netdev.c
+@@ -717,7 +717,7 @@ static void add_sta_files(struct ieee80211_sub_if_data *sdata)
+ 	DEBUGFS_ADD_MODE(uapsd_queues, 0600);
+ 	DEBUGFS_ADD_MODE(uapsd_max_sp_len, 0600);
+ 	DEBUGFS_ADD_MODE(tdls_wider_bw, 0600);
+-	DEBUGFS_ADD_MODE(valid_links, 0200);
++	DEBUGFS_ADD_MODE(valid_links, 0400);
+ 	DEBUGFS_ADD_MODE(active_links, 0600);
+ }
+ 
+diff --git a/net/mac80211/eht.c b/net/mac80211/eht.c
+index 18bc6b78b2679..ddc7acc68335a 100644
+--- a/net/mac80211/eht.c
++++ b/net/mac80211/eht.c
+@@ -2,7 +2,7 @@
+ /*
+  * EHT handling
+  *
+- * Copyright(c) 2021-2022 Intel Corporation
++ * Copyright(c) 2021-2023 Intel Corporation
+  */
+ 
+ #include "ieee80211_i.h"
+@@ -25,8 +25,7 @@ ieee80211_eht_cap_ie_to_sta_eht_cap(struct ieee80211_sub_if_data *sdata,
+ 	memset(eht_cap, 0, sizeof(*eht_cap));
+ 
+ 	if (!eht_cap_ie_elem ||
+-	    !ieee80211_get_eht_iftype_cap(sband,
+-					 ieee80211_vif_type_p2p(&sdata->vif)))
++	    !ieee80211_get_eht_iftype_cap_vif(sband, &sdata->vif))
+ 		return;
+ 
+ 	mcs_nss_size = ieee80211_eht_mcs_nss_size(he_cap_ie_elem,
+diff --git a/net/mac80211/he.c b/net/mac80211/he.c
+index 0322abae08250..9f5ffdc9db284 100644
+--- a/net/mac80211/he.c
++++ b/net/mac80211/he.c
+@@ -128,8 +128,7 @@ ieee80211_he_cap_ie_to_sta_he_cap(struct ieee80211_sub_if_data *sdata,
+ 		return;
+ 
+ 	own_he_cap_ptr =
+-		ieee80211_get_he_iftype_cap(sband,
+-					    ieee80211_vif_type_p2p(&sdata->vif));
++		ieee80211_get_he_iftype_cap_vif(sband, &sdata->vif);
+ 	if (!own_he_cap_ptr)
+ 		return;
+ 
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 5a4303130ef22..93da8373583be 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -511,16 +511,14 @@ static int ieee80211_config_bw(struct ieee80211_link_data *link,
+ 
+ 	/* don't check HE if we associated as non-HE station */
+ 	if (link->u.mgd.conn_flags & IEEE80211_CONN_DISABLE_HE ||
+-	    !ieee80211_get_he_iftype_cap(sband,
+-					 ieee80211_vif_type_p2p(&sdata->vif))) {
++	    !ieee80211_get_he_iftype_cap_vif(sband, &sdata->vif)) {
+ 		he_oper = NULL;
+ 		eht_oper = NULL;
+ 	}
+ 
+ 	/* don't check EHT if we associated as non-EHT station */
+ 	if (link->u.mgd.conn_flags & IEEE80211_CONN_DISABLE_EHT ||
+-	    !ieee80211_get_eht_iftype_cap(sband,
+-					 ieee80211_vif_type_p2p(&sdata->vif)))
++	    !ieee80211_get_eht_iftype_cap_vif(sband, &sdata->vif))
+ 		eht_oper = NULL;
+ 
+ 	/*
+@@ -776,8 +774,7 @@ static void ieee80211_add_he_ie(struct ieee80211_sub_if_data *sdata,
+ 	const struct ieee80211_sta_he_cap *he_cap;
+ 	u8 he_cap_size;
+ 
+-	he_cap = ieee80211_get_he_iftype_cap(sband,
+-					     ieee80211_vif_type_p2p(&sdata->vif));
++	he_cap = ieee80211_get_he_iftype_cap_vif(sband, &sdata->vif);
+ 	if (WARN_ON(!he_cap))
+ 		return;
+ 
+@@ -806,10 +803,8 @@ static void ieee80211_add_eht_ie(struct ieee80211_sub_if_data *sdata,
+ 	const struct ieee80211_sta_eht_cap *eht_cap;
+ 	u8 eht_cap_size;
+ 
+-	he_cap = ieee80211_get_he_iftype_cap(sband,
+-					     ieee80211_vif_type_p2p(&sdata->vif));
+-	eht_cap = ieee80211_get_eht_iftype_cap(sband,
+-					       ieee80211_vif_type_p2p(&sdata->vif));
++	he_cap = ieee80211_get_he_iftype_cap_vif(sband, &sdata->vif);
++	eht_cap = ieee80211_get_eht_iftype_cap_vif(sband, &sdata->vif);
+ 
+ 	/*
+ 	 * EHT capabilities element is only added if the HE capabilities element
+@@ -3949,8 +3944,7 @@ static bool ieee80211_twt_req_supported(struct ieee80211_sub_if_data *sdata,
+ 					const struct ieee802_11_elems *elems)
+ {
+ 	const struct ieee80211_sta_he_cap *own_he_cap =
+-		ieee80211_get_he_iftype_cap(sband,
+-					    ieee80211_vif_type_p2p(&sdata->vif));
++		ieee80211_get_he_iftype_cap_vif(sband, &sdata->vif);
+ 
+ 	if (elems->ext_capab_len < 10)
+ 		return false;
+@@ -3986,8 +3980,7 @@ static bool ieee80211_twt_bcast_support(struct ieee80211_sub_if_data *sdata,
+ 					struct link_sta_info *link_sta)
+ {
+ 	const struct ieee80211_sta_he_cap *own_he_cap =
+-		ieee80211_get_he_iftype_cap(sband,
+-					    ieee80211_vif_type_p2p(&sdata->vif));
++		ieee80211_get_he_iftype_cap_vif(sband, &sdata->vif);
+ 
+ 	return bss_conf->he_support &&
+ 		(link_sta->pub->he_cap.he_cap_elem.mac_cap_info[2] &
+@@ -4624,8 +4617,7 @@ ieee80211_verify_sta_he_mcs_support(struct ieee80211_sub_if_data *sdata,
+ 				    const struct ieee80211_he_operation *he_op)
+ {
+ 	const struct ieee80211_sta_he_cap *sta_he_cap =
+-		ieee80211_get_he_iftype_cap(sband,
+-					    ieee80211_vif_type_p2p(&sdata->vif));
++		ieee80211_get_he_iftype_cap_vif(sband, &sdata->vif);
+ 	u16 ap_min_req_set;
+ 	int i;
+ 
+@@ -4759,15 +4751,13 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata,
+ 		*conn_flags |= IEEE80211_CONN_DISABLE_EHT;
+ 	}
+ 
+-	if (!ieee80211_get_he_iftype_cap(sband,
+-					 ieee80211_vif_type_p2p(&sdata->vif))) {
++	if (!ieee80211_get_he_iftype_cap_vif(sband, &sdata->vif)) {
+ 		mlme_dbg(sdata, "HE not supported, disabling HE and EHT\n");
+ 		*conn_flags |= IEEE80211_CONN_DISABLE_HE;
+ 		*conn_flags |= IEEE80211_CONN_DISABLE_EHT;
+ 	}
+ 
+-	if (!ieee80211_get_eht_iftype_cap(sband,
+-					  ieee80211_vif_type_p2p(&sdata->vif))) {
++	if (!ieee80211_get_eht_iftype_cap_vif(sband, &sdata->vif)) {
+ 		mlme_dbg(sdata, "EHT not supported, disabling EHT\n");
+ 		*conn_flags |= IEEE80211_CONN_DISABLE_EHT;
+ 	}
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index 1400512e0dde5..a1cd5c234f47e 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -2913,6 +2913,8 @@ int ieee80211_sta_activate_link(struct sta_info *sta, unsigned int link_id)
+ 	if (!test_sta_flag(sta, WLAN_STA_INSERTED))
+ 		goto hash;
+ 
++	ieee80211_recalc_min_chandef(sdata, link_id);
++
+ 	/* Ensure the values are updated for the driver,
+ 	 * redone by sta_remove_link on failure.
+ 	 */
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index 3bd07a0a782f7..4cfd6b9b705cb 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -6,7 +6,7 @@
+  * Copyright 2007	Johannes Berg <johannes@sipsolutions.net>
+  * Copyright 2013-2014  Intel Mobile Communications GmbH
+  * Copyright (C) 2015-2017	Intel Deutschland GmbH
+- * Copyright (C) 2018-2022 Intel Corporation
++ * Copyright (C) 2018-2023 Intel Corporation
+  *
+  * utilities for mac80211
+  */
+@@ -2121,8 +2121,7 @@ static int ieee80211_build_preq_ies_band(struct ieee80211_sub_if_data *sdata,
+ 		*offset = noffset;
+ 	}
+ 
+-	he_cap = ieee80211_get_he_iftype_cap(sband,
+-					     ieee80211_vif_type_p2p(&sdata->vif));
++	he_cap = ieee80211_get_he_iftype_cap_vif(sband, &sdata->vif);
+ 	if (he_cap &&
+ 	    cfg80211_any_usable_channels(local->hw.wiphy, BIT(sband->band),
+ 					 IEEE80211_CHAN_NO_HE)) {
+@@ -2131,8 +2130,7 @@ static int ieee80211_build_preq_ies_band(struct ieee80211_sub_if_data *sdata,
+ 			goto out_err;
+ 	}
+ 
+-	eht_cap = ieee80211_get_eht_iftype_cap(sband,
+-					       ieee80211_vif_type_p2p(&sdata->vif));
++	eht_cap = ieee80211_get_eht_iftype_cap_vif(sband, &sdata->vif);
+ 
+ 	if (eht_cap &&
+ 	    cfg80211_any_usable_channels(local->hw.wiphy, BIT(sband->band),
+@@ -2150,8 +2148,7 @@ static int ieee80211_build_preq_ies_band(struct ieee80211_sub_if_data *sdata,
+ 		struct ieee80211_supported_band *sband6;
+ 
+ 		sband6 = local->hw.wiphy->bands[NL80211_BAND_6GHZ];
+-		he_cap = ieee80211_get_he_iftype_cap(sband6,
+-				ieee80211_vif_type_p2p(&sdata->vif));
++		he_cap = ieee80211_get_he_iftype_cap_vif(sband6, &sdata->vif);
+ 
+ 		if (he_cap) {
+ 			enum nl80211_iftype iftype =
+@@ -3801,10 +3798,8 @@ bool ieee80211_chandef_he_6ghz_oper(struct ieee80211_sub_if_data *sdata,
+ 	}
+ 
+ 	eht_cap = ieee80211_get_eht_iftype_cap(sband, iftype);
+-	if (!eht_cap) {
+-		sdata_info(sdata, "Missing iftype sband data/EHT cap");
++	if (!eht_cap)
+ 		eht_oper = NULL;
+-	}
+ 
+ 	he_6ghz_oper = ieee80211_he_6ghz_oper(he_oper);
+ 
+diff --git a/net/netfilter/ipvs/Kconfig b/net/netfilter/ipvs/Kconfig
+index 271da8447b293..2a3017b9c001b 100644
+--- a/net/netfilter/ipvs/Kconfig
++++ b/net/netfilter/ipvs/Kconfig
+@@ -44,7 +44,8 @@ config	IP_VS_DEBUG
+ 
+ config	IP_VS_TAB_BITS
+ 	int "IPVS connection table size (the Nth power of 2)"
+-	range 8 20
++	range 8 20 if !64BIT
++	range 8 27 if 64BIT
+ 	default 12
+ 	help
+ 	  The IPVS connection hash table uses the chaining scheme to handle
+@@ -54,24 +55,24 @@ config	IP_VS_TAB_BITS
+ 
+ 	  Note the table size must be power of 2. The table size will be the
+ 	  value of 2 to the your input number power. The number to choose is
+-	  from 8 to 20, the default number is 12, which means the table size
+-	  is 4096. Don't input the number too small, otherwise you will lose
+-	  performance on it. You can adapt the table size yourself, according
+-	  to your virtual server application. It is good to set the table size
+-	  not far less than the number of connections per second multiplying
+-	  average lasting time of connection in the table.  For example, your
+-	  virtual server gets 200 connections per second, the connection lasts
+-	  for 200 seconds in average in the connection table, the table size
+-	  should be not far less than 200x200, it is good to set the table
+-	  size 32768 (2**15).
++	  from 8 to 27 for 64BIT(20 otherwise), the default number is 12,
++	  which means the table size is 4096. Don't input the number too
++	  small, otherwise you will lose performance on it. You can adapt the
++	  table size yourself, according to your virtual server application.
++	  It is good to set the table size not far less than the number of
++	  connections per second multiplying average lasting time of
++	  connection in the table.  For example, your virtual server gets 200
++	  connections per second, the connection lasts for 200 seconds in
++	  average in the connection table, the table size should be not far
++	  less than 200x200, it is good to set the table size 32768 (2**15).
+ 
+ 	  Another note that each connection occupies 128 bytes effectively and
+ 	  each hash entry uses 8 bytes, so you can estimate how much memory is
+ 	  needed for your box.
+ 
+ 	  You can overwrite this number setting conn_tab_bits module parameter
+-	  or by appending ip_vs.conn_tab_bits=? to the kernel command line
+-	  if IP VS was compiled built-in.
++	  or by appending ip_vs.conn_tab_bits=? to the kernel command line if
++	  IP VS was compiled built-in.
+ 
+ comment "IPVS transport protocol load balancing support"
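
Applying the help text's own sizing rule to the new 64-bit maximum shows why the larger range is gated on 64BIT: the bucket array alone at conn_tab_bits=27 is 2^27 * 8 bytes = 1 GiB, before the ~128 bytes per tracked connection. As a quick check:

	unsigned int bits = 27;
	unsigned long long buckets    = 1ULL << bits;     /* 134217728 buckets */
	unsigned long long bucket_mem = buckets * 8;      /* 1 GiB of hash heads */
	unsigned long long conn_mem   = 1000000ULL * 128; /* ~128 MB per 1M conns */
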
+ 
+diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c
+index 928e64653837f..f4c55e65abd12 100644
+--- a/net/netfilter/ipvs/ip_vs_conn.c
++++ b/net/netfilter/ipvs/ip_vs_conn.c
+@@ -1485,8 +1485,8 @@ int __init ip_vs_conn_init(void)
+ 	int idx;
+ 
+ 	/* Compute size and mask */
+-	if (ip_vs_conn_tab_bits < 8 || ip_vs_conn_tab_bits > 20) {
+-		pr_info("conn_tab_bits not in [8, 20]. Using default value\n");
++	if (ip_vs_conn_tab_bits < 8 || ip_vs_conn_tab_bits > 27) {
++		pr_info("conn_tab_bits not in [8, 27]. Using default value\n");
+ 		ip_vs_conn_tab_bits = CONFIG_IP_VS_TAB_BITS;
+ 	}
+ 	ip_vs_conn_tab_size = 1 << ip_vs_conn_tab_bits;
+diff --git a/net/netfilter/nf_conntrack_helper.c b/net/netfilter/nf_conntrack_helper.c
+index 0c4db2f2ac43e..f22691f838536 100644
+--- a/net/netfilter/nf_conntrack_helper.c
++++ b/net/netfilter/nf_conntrack_helper.c
+@@ -360,6 +360,9 @@ int nf_conntrack_helper_register(struct nf_conntrack_helper *me)
+ 	BUG_ON(me->expect_class_max >= NF_CT_MAX_EXPECT_CLASSES);
+ 	BUG_ON(strlen(me->name) > NF_CT_HELPER_NAME_LEN - 1);
+ 
++	if (!nf_ct_helper_hash)
++		return -ENOENT;
++
+ 	if (me->expect_policy->max_expected > NF_CT_EXPECT_MAX_CNT)
+ 		return -EINVAL;
+ 
+@@ -515,4 +518,5 @@ int nf_conntrack_helper_init(void)
+ void nf_conntrack_helper_fini(void)
+ {
+ 	kvfree(nf_ct_helper_hash);
++	nf_ct_helper_hash = NULL;
+ }
+diff --git a/net/netfilter/nf_conntrack_proto_dccp.c b/net/netfilter/nf_conntrack_proto_dccp.c
+index c1557d47ccd1e..d4fd626d2b8c3 100644
+--- a/net/netfilter/nf_conntrack_proto_dccp.c
++++ b/net/netfilter/nf_conntrack_proto_dccp.c
+@@ -432,9 +432,19 @@ static bool dccp_error(const struct dccp_hdr *dh,
+ 		       struct sk_buff *skb, unsigned int dataoff,
+ 		       const struct nf_hook_state *state)
+ {
++	static const unsigned long require_seq48 = 1 << DCCP_PKT_REQUEST |
++						   1 << DCCP_PKT_RESPONSE |
++						   1 << DCCP_PKT_CLOSEREQ |
++						   1 << DCCP_PKT_CLOSE |
++						   1 << DCCP_PKT_RESET |
++						   1 << DCCP_PKT_SYNC |
++						   1 << DCCP_PKT_SYNCACK;
+ 	unsigned int dccp_len = skb->len - dataoff;
+ 	unsigned int cscov;
+ 	const char *msg;
++	u8 type;
++
++	BUILD_BUG_ON(DCCP_PKT_INVALID >= BITS_PER_LONG);
+ 
+ 	if (dh->dccph_doff * 4 < sizeof(struct dccp_hdr) ||
+ 	    dh->dccph_doff * 4 > dccp_len) {
+@@ -459,34 +469,70 @@ static bool dccp_error(const struct dccp_hdr *dh,
+ 		goto out_invalid;
+ 	}
+ 
+-	if (dh->dccph_type >= DCCP_PKT_INVALID) {
++	type = dh->dccph_type;
++	if (type >= DCCP_PKT_INVALID) {
+ 		msg = "nf_ct_dccp: reserved packet type ";
+ 		goto out_invalid;
+ 	}
++
++	if (test_bit(type, &require_seq48) && !dh->dccph_x) {
++		msg = "nf_ct_dccp: type lacks 48bit sequence numbers";
++		goto out_invalid;
++	}
++
+ 	return false;
+ out_invalid:
+ 	nf_l4proto_log_invalid(skb, state, IPPROTO_DCCP, "%s", msg);
+ 	return true;
+ }
+ 
++struct nf_conntrack_dccp_buf {
++	struct dccp_hdr dh;	 /* generic header part */
++	struct dccp_hdr_ext ext; /* optional depending dh->dccph_x */
++	union {			 /* depends on header type */
++		struct dccp_hdr_ack_bits ack;
++		struct dccp_hdr_request req;
++		struct dccp_hdr_response response;
++		struct dccp_hdr_reset rst;
++	} u;
++};
++
++static struct dccp_hdr *
++dccp_header_pointer(const struct sk_buff *skb, int offset, const struct dccp_hdr *dh,
++		    struct nf_conntrack_dccp_buf *buf)
++{
++	unsigned int hdrlen = __dccp_hdr_len(dh);
++
++	if (hdrlen > sizeof(*buf))
++		return NULL;
++
++	return skb_header_pointer(skb, offset, hdrlen, buf);
++}
++
+ int nf_conntrack_dccp_packet(struct nf_conn *ct, struct sk_buff *skb,
+ 			     unsigned int dataoff,
+ 			     enum ip_conntrack_info ctinfo,
+ 			     const struct nf_hook_state *state)
+ {
+ 	enum ip_conntrack_dir dir = CTINFO2DIR(ctinfo);
+-	struct dccp_hdr _dh, *dh;
++	struct nf_conntrack_dccp_buf _dh;
+ 	u_int8_t type, old_state, new_state;
+ 	enum ct_dccp_roles role;
+ 	unsigned int *timeouts;
++	struct dccp_hdr *dh;
+ 
+-	dh = skb_header_pointer(skb, dataoff, sizeof(_dh), &_dh);
++	dh = skb_header_pointer(skb, dataoff, sizeof(*dh), &_dh.dh);
+ 	if (!dh)
+ 		return NF_DROP;
+ 
+ 	if (dccp_error(dh, skb, dataoff, state))
+ 		return -NF_ACCEPT;
+ 
++	/* pull again, including possible 48 bit sequences and subtype header */
++	dh = dccp_header_pointer(skb, dataoff, dh, &_dh);
++	if (!dh)
++		return NF_DROP;
++
+ 	type = dh->dccph_type;
+ 	if (!nf_ct_is_confirmed(ct) && !dccp_new(ct, skb, dh, state))
+ 		return -NF_ACCEPT;
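
The two-step parse above is the usual skb_header_pointer() idiom for variable-length headers: pull the fixed generic header first, validate it (dccp_error() now also insists on 48-bit sequence numbers where the packet type requires them), then re-pull __dccp_hdr_len(dh) bytes into a bounce buffer sized for the largest legal header, so the per-type fields can never be read out of bounds. Schematically:

	struct nf_conntrack_dccp_buf buf;
	struct dccp_hdr *dh;

	/* step 1: fixed generic header only */
	dh = skb_header_pointer(skb, dataoff, sizeof(buf.dh), &buf.dh);
	if (!dh)
		return NF_DROP;         /* truncated packet */

	/* step 2: full header; refused if it would overflow the buffer */
	dh = dccp_header_pointer(skb, dataoff, dh, &buf);
	if (!dh)
		return NF_DROP;
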
+diff --git a/net/netfilter/nf_conntrack_sip.c b/net/netfilter/nf_conntrack_sip.c
+index 77f5e82d8e3fe..d0eac27f6ba03 100644
+--- a/net/netfilter/nf_conntrack_sip.c
++++ b/net/netfilter/nf_conntrack_sip.c
+@@ -611,7 +611,7 @@ int ct_sip_parse_numerical_param(const struct nf_conn *ct, const char *dptr,
+ 	start += strlen(name);
+ 	*val = simple_strtoul(start, &end, 0);
+ 	if (start == end)
+-		return 0;
++		return -1;
+ 	if (matchoff && matchlen) {
+ 		*matchoff = start - dptr;
+ 		*matchlen = end - start;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 4c7937fd803f9..79719e8cda799 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2693,7 +2693,7 @@ err_hooks:
+ 
+ static struct nft_chain *nft_chain_lookup_byid(const struct net *net,
+ 					       const struct nft_table *table,
+-					       const struct nlattr *nla)
++					       const struct nlattr *nla, u8 genmask)
+ {
+ 	struct nftables_pernet *nft_net = nft_pernet(net);
+ 	u32 id = ntohl(nla_get_be32(nla));
+@@ -2704,7 +2704,8 @@ static struct nft_chain *nft_chain_lookup_byid(const struct net *net,
+ 
+ 		if (trans->msg_type == NFT_MSG_NEWCHAIN &&
+ 		    chain->table == table &&
+-		    id == nft_trans_chain_id(trans))
++		    id == nft_trans_chain_id(trans) &&
++		    nft_active_genmask(chain, genmask))
+ 			return chain;
+ 	}
+ 	return ERR_PTR(-ENOENT);
+@@ -3808,7 +3809,8 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 			return -EOPNOTSUPP;
+ 
+ 	} else if (nla[NFTA_RULE_CHAIN_ID]) {
+-		chain = nft_chain_lookup_byid(net, table, nla[NFTA_RULE_CHAIN_ID]);
++		chain = nft_chain_lookup_byid(net, table, nla[NFTA_RULE_CHAIN_ID],
++					      genmask);
+ 		if (IS_ERR(chain)) {
+ 			NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN_ID]);
+ 			return PTR_ERR(chain);
+@@ -5343,6 +5345,8 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 		nft_set_trans_unbind(ctx, set);
+ 		if (nft_set_is_anonymous(set))
+ 			nft_deactivate_next(ctx->net, set);
++		else
++			list_del_rcu(&binding->list);
+ 
+ 		set->use--;
+ 		break;
+@@ -6769,7 +6773,9 @@ err_set_full:
+ err_element_clash:
+ 	kfree(trans);
+ err_elem_free:
+-	nft_set_elem_destroy(set, elem.priv, true);
++	nf_tables_set_elem_destroy(ctx, set, elem.priv);
++	if (obj)
++		obj->use--;
+ err_parse_data:
+ 	if (nla[NFTA_SET_ELEM_DATA] != NULL)
+ 		nft_data_release(&elem.data.val, desc.type);
+@@ -10463,7 +10469,8 @@ static int nft_verdict_init(const struct nft_ctx *ctx, struct nft_data *data,
+ 						 genmask);
+ 		} else if (tb[NFTA_VERDICT_CHAIN_ID]) {
+ 			chain = nft_chain_lookup_byid(ctx->net, ctx->table,
+-						      tb[NFTA_VERDICT_CHAIN_ID]);
++						      tb[NFTA_VERDICT_CHAIN_ID],
++						      genmask);
+ 			if (IS_ERR(chain))
+ 				return PTR_ERR(chain);
+ 		} else {
+diff --git a/net/netfilter/nft_byteorder.c b/net/netfilter/nft_byteorder.c
+index b66647a5a1717..8db3481ed5549 100644
+--- a/net/netfilter/nft_byteorder.c
++++ b/net/netfilter/nft_byteorder.c
+@@ -30,11 +30,11 @@ void nft_byteorder_eval(const struct nft_expr *expr,
+ 	const struct nft_byteorder *priv = nft_expr_priv(expr);
+ 	u32 *src = &regs->data[priv->sreg];
+ 	u32 *dst = &regs->data[priv->dreg];
+-	union { u32 u32; u16 u16; } *s, *d;
++	u16 *s16, *d16;
+ 	unsigned int i;
+ 
+-	s = (void *)src;
+-	d = (void *)dst;
++	s16 = (void *)src;
++	d16 = (void *)dst;
+ 
+ 	switch (priv->size) {
+ 	case 8: {
+@@ -62,11 +62,11 @@ void nft_byteorder_eval(const struct nft_expr *expr,
+ 		switch (priv->op) {
+ 		case NFT_BYTEORDER_NTOH:
+ 			for (i = 0; i < priv->len / 4; i++)
+-				d[i].u32 = ntohl((__force __be32)s[i].u32);
++				dst[i] = ntohl((__force __be32)src[i]);
+ 			break;
+ 		case NFT_BYTEORDER_HTON:
+ 			for (i = 0; i < priv->len / 4; i++)
+-				d[i].u32 = (__force __u32)htonl(s[i].u32);
++				dst[i] = (__force __u32)htonl(src[i]);
+ 			break;
+ 		}
+ 		break;
+@@ -74,11 +74,11 @@ void nft_byteorder_eval(const struct nft_expr *expr,
+ 		switch (priv->op) {
+ 		case NFT_BYTEORDER_NTOH:
+ 			for (i = 0; i < priv->len / 2; i++)
+-				d[i].u16 = ntohs((__force __be16)s[i].u16);
++				d16[i] = ntohs((__force __be16)s16[i]);
+ 			break;
+ 		case NFT_BYTEORDER_HTON:
+ 			for (i = 0; i < priv->len / 2; i++)
+-				d[i].u16 = (__force __u16)htons(s[i].u16);
++				d16[i] = (__force __u16)htons(s16[i]);
+ 			break;
+ 		}
+ 		break;
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 3a1e0fd5bf149..5968b6450d828 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1600,6 +1600,7 @@ out:
+ int netlink_set_err(struct sock *ssk, u32 portid, u32 group, int code)
+ {
+ 	struct netlink_set_err_data info;
++	unsigned long flags;
+ 	struct sock *sk;
+ 	int ret = 0;
+ 
+@@ -1609,12 +1610,12 @@ int netlink_set_err(struct sock *ssk, u32 portid, u32 group, int code)
+ 	/* sk->sk_err wants a positive error value */
+ 	info.code = -code;
+ 
+-	read_lock(&nl_table_lock);
++	read_lock_irqsave(&nl_table_lock, flags);
+ 
+ 	sk_for_each_bound(sk, &nl_table[ssk->sk_protocol].mc_list)
+ 		ret += do_one_set_err(sk, &info);
+ 
+-	read_unlock(&nl_table_lock);
++	read_unlock_irqrestore(&nl_table_lock, flags);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(netlink_set_err);
+diff --git a/net/netlink/diag.c b/net/netlink/diag.c
+index c6255eac305c7..e4f21b1067bcc 100644
+--- a/net/netlink/diag.c
++++ b/net/netlink/diag.c
+@@ -94,6 +94,7 @@ static int __netlink_diag_dump(struct sk_buff *skb, struct netlink_callback *cb,
+ 	struct net *net = sock_net(skb->sk);
+ 	struct netlink_diag_req *req;
+ 	struct netlink_sock *nlsk;
++	unsigned long flags;
+ 	struct sock *sk;
+ 	int num = 2;
+ 	int ret = 0;
+@@ -152,7 +153,7 @@ static int __netlink_diag_dump(struct sk_buff *skb, struct netlink_callback *cb,
+ 	num++;
+ 
+ mc_list:
+-	read_lock(&nl_table_lock);
++	read_lock_irqsave(&nl_table_lock, flags);
+ 	sk_for_each_bound(sk, &tbl->mc_list) {
+ 		if (sk_hashed(sk))
+ 			continue;
+@@ -167,13 +168,13 @@ mc_list:
+ 				 NETLINK_CB(cb->skb).portid,
+ 				 cb->nlh->nlmsg_seq,
+ 				 NLM_F_MULTI,
+-				 sock_i_ino(sk)) < 0) {
++				 __sock_i_ino(sk)) < 0) {
+ 			ret = 1;
+ 			break;
+ 		}
+ 		num++;
+ 	}
+-	read_unlock(&nl_table_lock);
++	read_unlock_irqrestore(&nl_table_lock, flags);
+ 
+ done:
+ 	cb->args[0] = num;
+diff --git a/net/nfc/llcp.h b/net/nfc/llcp.h
+index c1d9be636933c..d8345ed57c954 100644
+--- a/net/nfc/llcp.h
++++ b/net/nfc/llcp.h
+@@ -201,7 +201,6 @@ void nfc_llcp_sock_link(struct llcp_sock_list *l, struct sock *s);
+ void nfc_llcp_sock_unlink(struct llcp_sock_list *l, struct sock *s);
+ void nfc_llcp_socket_remote_param_init(struct nfc_llcp_sock *sock);
+ struct nfc_llcp_local *nfc_llcp_find_local(struct nfc_dev *dev);
+-struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local);
+ int nfc_llcp_local_put(struct nfc_llcp_local *local);
+ u8 nfc_llcp_get_sdp_ssap(struct nfc_llcp_local *local,
+ 			 struct nfc_llcp_sock *sock);
+diff --git a/net/nfc/llcp_commands.c b/net/nfc/llcp_commands.c
+index 41e3a20c89355..e2680a3bef799 100644
+--- a/net/nfc/llcp_commands.c
++++ b/net/nfc/llcp_commands.c
+@@ -359,6 +359,7 @@ int nfc_llcp_send_symm(struct nfc_dev *dev)
+ 	struct sk_buff *skb;
+ 	struct nfc_llcp_local *local;
+ 	u16 size = 0;
++	int err;
+ 
+ 	local = nfc_llcp_find_local(dev);
+ 	if (local == NULL)
+@@ -368,8 +369,10 @@ int nfc_llcp_send_symm(struct nfc_dev *dev)
+ 	size += dev->tx_headroom + dev->tx_tailroom + NFC_HEADER_SIZE;
+ 
+ 	skb = alloc_skb(size, GFP_KERNEL);
+-	if (skb == NULL)
+-		return -ENOMEM;
++	if (skb == NULL) {
++		err = -ENOMEM;
++		goto out;
++	}
+ 
+ 	skb_reserve(skb, dev->tx_headroom + NFC_HEADER_SIZE);
+ 
+@@ -379,8 +382,11 @@ int nfc_llcp_send_symm(struct nfc_dev *dev)
+ 
+ 	nfc_llcp_send_to_raw_sock(local, skb, NFC_DIRECTION_TX);
+ 
+-	return nfc_data_exchange(dev, local->target_idx, skb,
++	err = nfc_data_exchange(dev, local->target_idx, skb,
+ 				 nfc_llcp_recv, local);
++out:
++	nfc_llcp_local_put(local);
++	return err;
+ }
+ 
+ int nfc_llcp_send_connect(struct nfc_llcp_sock *sock)
+@@ -390,7 +396,8 @@ int nfc_llcp_send_connect(struct nfc_llcp_sock *sock)
+ 	const u8 *service_name_tlv = NULL;
+ 	const u8 *miux_tlv = NULL;
+ 	const u8 *rw_tlv = NULL;
+-	u8 service_name_tlv_length, miux_tlv_length,  rw_tlv_length, rw;
++	u8 service_name_tlv_length = 0;
++	u8 miux_tlv_length,  rw_tlv_length, rw;
+ 	int err;
+ 	u16 size = 0;
+ 	__be16 miux;
+diff --git a/net/nfc/llcp_core.c b/net/nfc/llcp_core.c
+index a27e1842b2a09..f60e424e06076 100644
+--- a/net/nfc/llcp_core.c
++++ b/net/nfc/llcp_core.c
+@@ -17,6 +17,8 @@
+ static u8 llcp_magic[3] = {0x46, 0x66, 0x6d};
+ 
+ static LIST_HEAD(llcp_devices);
++/* Protects llcp_devices list */
++static DEFINE_SPINLOCK(llcp_devices_lock);
+ 
+ static void nfc_llcp_rx_skb(struct nfc_llcp_local *local, struct sk_buff *skb);
+ 
+@@ -141,7 +143,7 @@ static void nfc_llcp_socket_release(struct nfc_llcp_local *local, bool device,
+ 	write_unlock(&local->raw_sockets.lock);
+ }
+ 
+-struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local)
++static struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local)
+ {
+ 	kref_get(&local->ref);
+ 
+@@ -169,7 +171,6 @@ static void local_release(struct kref *ref)
+ 
+ 	local = container_of(ref, struct nfc_llcp_local, ref);
+ 
+-	list_del(&local->list);
+ 	local_cleanup(local);
+ 	kfree(local);
+ }
+@@ -282,12 +283,33 @@ static void nfc_llcp_sdreq_timer(struct timer_list *t)
+ struct nfc_llcp_local *nfc_llcp_find_local(struct nfc_dev *dev)
+ {
+ 	struct nfc_llcp_local *local;
++	struct nfc_llcp_local *res = NULL;
+ 
++	spin_lock(&llcp_devices_lock);
+ 	list_for_each_entry(local, &llcp_devices, list)
+-		if (local->dev == dev)
++		if (local->dev == dev) {
++			res = nfc_llcp_local_get(local);
++			break;
++		}
++	spin_unlock(&llcp_devices_lock);
++
++	return res;
++}
++
++static struct nfc_llcp_local *nfc_llcp_remove_local(struct nfc_dev *dev)
++{
++	struct nfc_llcp_local *local, *tmp;
++
++	spin_lock(&llcp_devices_lock);
++	list_for_each_entry_safe(local, tmp, &llcp_devices, list)
++		if (local->dev == dev) {
++			list_del(&local->list);
++			spin_unlock(&llcp_devices_lock);
+ 			return local;
++		}
++	spin_unlock(&llcp_devices_lock);
+ 
+-	pr_debug("No device found\n");
++	pr_warn("Shutting down device not found\n");
+ 
+ 	return NULL;
+ }
+@@ -608,12 +630,15 @@ u8 *nfc_llcp_general_bytes(struct nfc_dev *dev, size_t *general_bytes_len)
+ 
+ 	*general_bytes_len = local->gb_len;
+ 
++	nfc_llcp_local_put(local);
++
+ 	return local->gb;
+ }
+ 
+ int nfc_llcp_set_remote_gb(struct nfc_dev *dev, const u8 *gb, u8 gb_len)
+ {
+ 	struct nfc_llcp_local *local;
++	int err;
+ 
+ 	if (gb_len < 3 || gb_len > NFC_MAX_GT_LEN)
+ 		return -EINVAL;
+@@ -630,12 +655,16 @@ int nfc_llcp_set_remote_gb(struct nfc_dev *dev, const u8 *gb, u8 gb_len)
+ 
+ 	if (memcmp(local->remote_gb, llcp_magic, 3)) {
+ 		pr_err("MAC does not support LLCP\n");
+-		return -EINVAL;
++		err = -EINVAL;
++		goto out;
+ 	}
+ 
+-	return nfc_llcp_parse_gb_tlv(local,
++	err = nfc_llcp_parse_gb_tlv(local,
+ 				     &local->remote_gb[3],
+ 				     local->remote_gb_len - 3);
++out:
++	nfc_llcp_local_put(local);
++	return err;
+ }
+ 
+ static u8 nfc_llcp_dsap(const struct sk_buff *pdu)
+@@ -1517,6 +1546,8 @@ int nfc_llcp_data_received(struct nfc_dev *dev, struct sk_buff *skb)
+ 
+ 	__nfc_llcp_recv(local, skb);
+ 
++	nfc_llcp_local_put(local);
++
+ 	return 0;
+ }
+ 
+@@ -1533,6 +1564,8 @@ void nfc_llcp_mac_is_down(struct nfc_dev *dev)
+ 
+ 	/* Close and purge all existing sockets */
+ 	nfc_llcp_socket_release(local, true, 0);
++
++	nfc_llcp_local_put(local);
+ }
+ 
+ void nfc_llcp_mac_is_up(struct nfc_dev *dev, u32 target_idx,
+@@ -1558,6 +1591,8 @@ void nfc_llcp_mac_is_up(struct nfc_dev *dev, u32 target_idx,
+ 		mod_timer(&local->link_timer,
+ 			  jiffies + msecs_to_jiffies(local->remote_lto));
+ 	}
++
++	nfc_llcp_local_put(local);
+ }
+ 
+ int nfc_llcp_register_device(struct nfc_dev *ndev)
+@@ -1608,7 +1643,7 @@ int nfc_llcp_register_device(struct nfc_dev *ndev)
+ 
+ void nfc_llcp_unregister_device(struct nfc_dev *dev)
+ {
+-	struct nfc_llcp_local *local = nfc_llcp_find_local(dev);
++	struct nfc_llcp_local *local = nfc_llcp_remove_local(dev);
+ 
+ 	if (local == NULL) {
+ 		pr_debug("No such device\n");
+diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
+index 77642d18a3b43..645677f84dba2 100644
+--- a/net/nfc/llcp_sock.c
++++ b/net/nfc/llcp_sock.c
+@@ -99,7 +99,7 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
+ 	}
+ 
+ 	llcp_sock->dev = dev;
+-	llcp_sock->local = nfc_llcp_local_get(local);
++	llcp_sock->local = local;
+ 	llcp_sock->nfc_protocol = llcp_addr.nfc_protocol;
+ 	llcp_sock->service_name_len = min_t(unsigned int,
+ 					    llcp_addr.service_name_len,
+@@ -186,7 +186,7 @@ static int llcp_raw_sock_bind(struct socket *sock, struct sockaddr *addr,
+ 	}
+ 
+ 	llcp_sock->dev = dev;
+-	llcp_sock->local = nfc_llcp_local_get(local);
++	llcp_sock->local = local;
+ 	llcp_sock->nfc_protocol = llcp_addr.nfc_protocol;
+ 
+ 	nfc_llcp_sock_link(&local->raw_sockets, sk);
+@@ -696,22 +696,22 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
+ 	if (dev->dep_link_up == false) {
+ 		ret = -ENOLINK;
+ 		device_unlock(&dev->dev);
+-		goto put_dev;
++		goto sock_llcp_put_local;
+ 	}
+ 	device_unlock(&dev->dev);
+ 
+ 	if (local->rf_mode == NFC_RF_INITIATOR &&
+ 	    addr->target_idx != local->target_idx) {
+ 		ret = -ENOLINK;
+-		goto put_dev;
++		goto sock_llcp_put_local;
+ 	}
+ 
+ 	llcp_sock->dev = dev;
+-	llcp_sock->local = nfc_llcp_local_get(local);
++	llcp_sock->local = local;
+ 	llcp_sock->ssap = nfc_llcp_get_local_ssap(local);
+ 	if (llcp_sock->ssap == LLCP_SAP_MAX) {
+ 		ret = -ENOMEM;
+-		goto sock_llcp_put_local;
++		goto sock_llcp_nullify;
+ 	}
+ 
+ 	llcp_sock->reserved_ssap = llcp_sock->ssap;
+@@ -757,11 +757,13 @@ sock_unlink:
+ sock_llcp_release:
+ 	nfc_llcp_put_ssap(local, llcp_sock->ssap);
+ 
+-sock_llcp_put_local:
+-	nfc_llcp_local_put(llcp_sock->local);
++sock_llcp_nullify:
+ 	llcp_sock->local = NULL;
+ 	llcp_sock->dev = NULL;
+ 
++sock_llcp_put_local:
++	nfc_llcp_local_put(local);
++
+ put_dev:
+ 	nfc_put_device(dev);
+ 
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index b9264e730fd93..e9ac6a6f934e7 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -1039,11 +1039,14 @@ static int nfc_genl_llc_get_params(struct sk_buff *skb, struct genl_info *info)
+ 	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+ 	if (!msg) {
+ 		rc = -ENOMEM;
+-		goto exit;
++		goto put_local;
+ 	}
+ 
+ 	rc = nfc_genl_send_params(msg, local, info->snd_portid, info->snd_seq);
+ 
++put_local:
++	nfc_llcp_local_put(local);
++
+ exit:
+ 	device_unlock(&dev->dev);
+ 
+@@ -1105,7 +1108,7 @@ static int nfc_genl_llc_set_params(struct sk_buff *skb, struct genl_info *info)
+ 	if (info->attrs[NFC_ATTR_LLC_PARAM_LTO]) {
+ 		if (dev->dep_link_up) {
+ 			rc = -EINPROGRESS;
+-			goto exit;
++			goto put_local;
+ 		}
+ 
+ 		local->lto = nla_get_u8(info->attrs[NFC_ATTR_LLC_PARAM_LTO]);
+@@ -1117,6 +1120,9 @@ static int nfc_genl_llc_set_params(struct sk_buff *skb, struct genl_info *info)
+ 	if (info->attrs[NFC_ATTR_LLC_PARAM_MIUX])
+ 		local->miux = cpu_to_be16(miux);
+ 
++put_local:
++	nfc_llcp_local_put(local);
++
+ exit:
+ 	device_unlock(&dev->dev);
+ 
+@@ -1172,7 +1178,7 @@ static int nfc_genl_llc_sdreq(struct sk_buff *skb, struct genl_info *info)
+ 
+ 		if (rc != 0) {
+ 			rc = -EINVAL;
+-			goto exit;
++			goto put_local;
+ 		}
+ 
+ 		if (!sdp_attrs[NFC_SDP_ATTR_URI])
+@@ -1191,7 +1197,7 @@ static int nfc_genl_llc_sdreq(struct sk_buff *skb, struct genl_info *info)
+ 		sdreq = nfc_llcp_build_sdreq_tlv(tid, uri, uri_len);
+ 		if (sdreq == NULL) {
+ 			rc = -ENOMEM;
+-			goto exit;
++			goto put_local;
+ 		}
+ 
+ 		tlvs_len += sdreq->tlv_len;
+@@ -1201,10 +1207,14 @@ static int nfc_genl_llc_sdreq(struct sk_buff *skb, struct genl_info *info)
+ 
+ 	if (hlist_empty(&sdreq_list)) {
+ 		rc = -EINVAL;
+-		goto exit;
++		goto put_local;
+ 	}
+ 
+ 	rc = nfc_llcp_send_snl_sdreq(local, &sdreq_list, tlvs_len);
++
++put_local:
++	nfc_llcp_local_put(local);
++
+ exit:
+ 	device_unlock(&dev->dev);
+ 
+diff --git a/net/nfc/nfc.h b/net/nfc/nfc.h
+index de2ec66d7e83a..0b1e6466f4fbf 100644
+--- a/net/nfc/nfc.h
++++ b/net/nfc/nfc.h
+@@ -52,6 +52,7 @@ int nfc_llcp_set_remote_gb(struct nfc_dev *dev, const u8 *gb, u8 gb_len);
+ u8 *nfc_llcp_general_bytes(struct nfc_dev *dev, size_t *general_bytes_len);
+ int nfc_llcp_data_received(struct nfc_dev *dev, struct sk_buff *skb);
+ struct nfc_llcp_local *nfc_llcp_find_local(struct nfc_dev *dev);
++int nfc_llcp_local_put(struct nfc_llcp_local *local);
+ int __init nfc_llcp_init(void);
+ void nfc_llcp_exit(void);
+ void nfc_llcp_free_sdp_tlv(struct nfc_llcp_sdp_tlv *sdp);
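
All of the LLCP hunks above implement one theme: nfc_llcp_find_local() now takes a reference on the object it returns, under the new llcp_devices_lock, so every caller must balance it with nfc_llcp_local_put() on every exit path, while unregistration uses the separate nfc_llcp_remove_local() to unlink without touching the refcount. A standalone sketch of the lookup-takes-a-reference pattern, with a hand-rolled counter standing in for the kernel's kref and illustrative names throughout:

#include <stdio.h>
#include <stdlib.h>

struct local {
	int dev_id;
	int refcount;		/* stands in for the kernel's kref */
	struct local *next;
};

static struct local *devices;	/* list head; a real version needs a lock */

/* Lookup takes a reference while the list is stable, so the object
 * cannot be freed between the lookup and the caller's use of it. */
static struct local *find_local(int dev_id)
{
	struct local *l;

	for (l = devices; l; l = l->next)
		if (l->dev_id == dev_id) {
			l->refcount++;
			return l;
		}
	return NULL;
}

static void local_put(struct local *l)
{
	if (l && --l->refcount == 0)
		free(l);
}

int main(void)
{
	struct local *l = calloc(1, sizeof(*l));

	l->dev_id = 1;
	l->refcount = 1;	/* the list's own reference */
	devices = l;

	struct local *found = find_local(1);
	if (found) {
		printf("found dev %d\n", found->dev_id);
		local_put(found);	/* every path must drop it */
	}
	return 0;
}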
+diff --git a/net/sched/act_ipt.c b/net/sched/act_ipt.c
+index 5d96ffebd40f0..598d6e299152a 100644
+--- a/net/sched/act_ipt.c
++++ b/net/sched/act_ipt.c
+@@ -21,6 +21,7 @@
+ #include <linux/tc_act/tc_ipt.h>
+ #include <net/tc_act/tc_ipt.h>
+ #include <net/tc_wrapper.h>
++#include <net/ip.h>
+ 
+ #include <linux/netfilter_ipv4/ip_tables.h>
+ 
+@@ -48,7 +49,7 @@ static int ipt_init_target(struct net *net, struct xt_entry_target *t,
+ 	par.entryinfo = &e;
+ 	par.target    = target;
+ 	par.targinfo  = t->data;
+-	par.hook_mask = hook;
++	par.hook_mask = 1 << hook;
+ 	par.family    = NFPROTO_IPV4;
+ 
+ 	ret = xt_check_target(&par, t->u.target_size - sizeof(*t), 0, false);
+@@ -85,7 +86,8 @@ static void tcf_ipt_release(struct tc_action *a)
+ 
+ static const struct nla_policy ipt_policy[TCA_IPT_MAX + 1] = {
+ 	[TCA_IPT_TABLE]	= { .type = NLA_STRING, .len = IFNAMSIZ },
+-	[TCA_IPT_HOOK]	= { .type = NLA_U32 },
++	[TCA_IPT_HOOK]	= NLA_POLICY_RANGE(NLA_U32, NF_INET_PRE_ROUTING,
++					   NF_INET_NUMHOOKS),
+ 	[TCA_IPT_INDEX]	= { .type = NLA_U32 },
+ 	[TCA_IPT_TARG]	= { .len = sizeof(struct xt_entry_target) },
+ };
+@@ -158,15 +160,27 @@ static int __tcf_ipt_init(struct net *net, unsigned int id, struct nlattr *nla,
+ 			return -EEXIST;
+ 		}
+ 	}
++
++	err = -EINVAL;
+ 	hook = nla_get_u32(tb[TCA_IPT_HOOK]);
++	switch (hook) {
++	case NF_INET_PRE_ROUTING:
++		break;
++	case NF_INET_POST_ROUTING:
++		break;
++	default:
++		goto err1;
++	}
++
++	if (tb[TCA_IPT_TABLE]) {
++		/* mangle only for now */
++		if (nla_strcmp(tb[TCA_IPT_TABLE], "mangle"))
++			goto err1;
++	}
+ 
+-	err = -ENOMEM;
+-	tname = kmalloc(IFNAMSIZ, GFP_KERNEL);
++	tname = kstrdup("mangle", GFP_KERNEL);
+ 	if (unlikely(!tname))
+ 		goto err1;
+-	if (tb[TCA_IPT_TABLE] == NULL ||
+-	    nla_strscpy(tname, tb[TCA_IPT_TABLE], IFNAMSIZ) >= IFNAMSIZ)
+-		strcpy(tname, "mangle");
+ 
+ 	t = kmemdup(td, td->u.target_size, GFP_KERNEL);
+ 	if (unlikely(!t))
+@@ -217,10 +231,31 @@ static int tcf_xt_init(struct net *net, struct nlattr *nla,
+ 			      a, &act_xt_ops, tp, flags);
+ }
+ 
++static bool tcf_ipt_act_check(struct sk_buff *skb)
++{
++	const struct iphdr *iph;
++	unsigned int nhoff, len;
++
++	if (!pskb_may_pull(skb, sizeof(struct iphdr)))
++		return false;
++
++	nhoff = skb_network_offset(skb);
++	iph = ip_hdr(skb);
++	if (iph->ihl < 5 || iph->version != 4)
++		return false;
++
++	len = skb_ip_totlen(skb);
++	if (skb->len < nhoff + len || len < (iph->ihl * 4u))
++		return false;
++
++	return pskb_may_pull(skb, iph->ihl * 4u);
++}
++
+ TC_INDIRECT_SCOPE int tcf_ipt_act(struct sk_buff *skb,
+ 				  const struct tc_action *a,
+ 				  struct tcf_result *res)
+ {
++	char saved_cb[sizeof_field(struct sk_buff, cb)];
+ 	int ret = 0, result = 0;
+ 	struct tcf_ipt *ipt = to_ipt(a);
+ 	struct xt_action_param par;
+@@ -231,9 +266,24 @@ TC_INDIRECT_SCOPE int tcf_ipt_act(struct sk_buff *skb,
+ 		.pf	= NFPROTO_IPV4,
+ 	};
+ 
++	if (skb_protocol(skb, false) != htons(ETH_P_IP))
++		return TC_ACT_UNSPEC;
++
+ 	if (skb_unclone(skb, GFP_ATOMIC))
+ 		return TC_ACT_UNSPEC;
+ 
++	if (!tcf_ipt_act_check(skb))
++		return TC_ACT_UNSPEC;
++
++	if (state.hook == NF_INET_POST_ROUTING) {
++		if (!skb_dst(skb))
++			return TC_ACT_UNSPEC;
++
++		state.out = skb->dev;
++	}
++
++	memcpy(saved_cb, skb->cb, sizeof(saved_cb));
++
+ 	spin_lock(&ipt->tcf_lock);
+ 
+ 	tcf_lastuse_update(&ipt->tcf_tm);
+@@ -246,6 +296,9 @@ TC_INDIRECT_SCOPE int tcf_ipt_act(struct sk_buff *skb,
+ 	par.state    = &state;
+ 	par.target   = ipt->tcfi_t->u.kernel.target;
+ 	par.targinfo = ipt->tcfi_t->data;
++
++	memset(IPCB(skb), 0, sizeof(struct inet_skb_parm));
++
+ 	ret = par.target->target(skb, &par);
+ 
+ 	switch (ret) {
+@@ -266,6 +319,9 @@ TC_INDIRECT_SCOPE int tcf_ipt_act(struct sk_buff *skb,
+ 		break;
+ 	}
+ 	spin_unlock(&ipt->tcf_lock);
++
++	memcpy(skb->cb, saved_cb, sizeof(skb->cb));
++
+ 	return result;
+ 
+ }
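
The par.hook_mask fix above matters because xt_check_target() treats hook_mask as a bitmask of hook points, not a hook number; passing the raw number only happened to look plausible for the lowest-numbered hooks. A tiny sketch of encoding a hook as a mask bit and testing membership (the enum values match the netfilter hook ordering):

#include <stdio.h>

enum { NF_INET_PRE_ROUTING, NF_INET_LOCAL_IN, NF_INET_FORWARD,
       NF_INET_LOCAL_OUT, NF_INET_POST_ROUTING, NF_INET_NUMHOOKS };

int main(void)
{
	unsigned int hook = NF_INET_POST_ROUTING;
	unsigned int hook_mask = 1u << hook;	/* a mask bit, not the number */

	/* a target valid at PRE/POST routing checks membership like this */
	unsigned int valid = (1u << NF_INET_PRE_ROUTING) |
			     (1u << NF_INET_POST_ROUTING);

	printf("%s\n", (hook_mask & valid) ? "allowed" : "rejected");
	return 0;
}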
+diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
+index c819b812a899c..399d4643a940e 100644
+--- a/net/sched/act_pedit.c
++++ b/net/sched/act_pedit.c
+@@ -29,6 +29,7 @@ static struct tc_action_ops act_pedit_ops;
+ 
+ static const struct nla_policy pedit_policy[TCA_PEDIT_MAX + 1] = {
+ 	[TCA_PEDIT_PARMS]	= { .len = sizeof(struct tc_pedit) },
++	[TCA_PEDIT_PARMS_EX]	= { .len = sizeof(struct tc_pedit) },
+ 	[TCA_PEDIT_KEYS_EX]   = { .type = NLA_NESTED },
+ };
+ 
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index e79be1b3e74da..b93ec2a3454eb 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -773,12 +773,10 @@ static void dist_free(struct disttable *d)
+  * signed 16 bit values.
+  */
+ 
+-static int get_dist_table(struct Qdisc *sch, struct disttable **tbl,
+-			  const struct nlattr *attr)
++static int get_dist_table(struct disttable **tbl, const struct nlattr *attr)
+ {
+ 	size_t n = nla_len(attr)/sizeof(__s16);
+ 	const __s16 *data = nla_data(attr);
+-	spinlock_t *root_lock;
+ 	struct disttable *d;
+ 	int i;
+ 
+@@ -793,13 +791,7 @@ static int get_dist_table(struct Qdisc *sch, struct disttable **tbl,
+ 	for (i = 0; i < n; i++)
+ 		d->table[i] = data[i];
+ 
+-	root_lock = qdisc_root_sleeping_lock(sch);
+-
+-	spin_lock_bh(root_lock);
+-	swap(*tbl, d);
+-	spin_unlock_bh(root_lock);
+-
+-	dist_free(d);
++	*tbl = d;
+ 	return 0;
+ }
+ 
+@@ -956,6 +948,8 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ {
+ 	struct netem_sched_data *q = qdisc_priv(sch);
+ 	struct nlattr *tb[TCA_NETEM_MAX + 1];
++	struct disttable *delay_dist = NULL;
++	struct disttable *slot_dist = NULL;
+ 	struct tc_netem_qopt *qopt;
+ 	struct clgstate old_clg;
+ 	int old_loss_model = CLG_RANDOM;
+@@ -966,6 +960,18 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	if (tb[TCA_NETEM_DELAY_DIST]) {
++		ret = get_dist_table(&delay_dist, tb[TCA_NETEM_DELAY_DIST]);
++		if (ret)
++			goto table_free;
++	}
++
++	if (tb[TCA_NETEM_SLOT_DIST]) {
++		ret = get_dist_table(&slot_dist, tb[TCA_NETEM_SLOT_DIST]);
++		if (ret)
++			goto table_free;
++	}
++
+ 	sch_tree_lock(sch);
+ 	/* backup q->clg and q->loss_model */
+ 	old_clg = q->clg;
+@@ -975,26 +981,17 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 		ret = get_loss_clg(q, tb[TCA_NETEM_LOSS]);
+ 		if (ret) {
+ 			q->loss_model = old_loss_model;
++			q->clg = old_clg;
+ 			goto unlock;
+ 		}
+ 	} else {
+ 		q->loss_model = CLG_RANDOM;
+ 	}
+ 
+-	if (tb[TCA_NETEM_DELAY_DIST]) {
+-		ret = get_dist_table(sch, &q->delay_dist,
+-				     tb[TCA_NETEM_DELAY_DIST]);
+-		if (ret)
+-			goto get_table_failure;
+-	}
+-
+-	if (tb[TCA_NETEM_SLOT_DIST]) {
+-		ret = get_dist_table(sch, &q->slot_dist,
+-				     tb[TCA_NETEM_SLOT_DIST]);
+-		if (ret)
+-			goto get_table_failure;
+-	}
+-
++	if (delay_dist)
++		swap(q->delay_dist, delay_dist);
++	if (slot_dist)
++		swap(q->slot_dist, slot_dist);
+ 	sch->limit = qopt->limit;
+ 
+ 	q->latency = PSCHED_TICKS2NS(qopt->latency);
+@@ -1044,17 +1041,11 @@ static int netem_change(struct Qdisc *sch, struct nlattr *opt,
+ 
+ unlock:
+ 	sch_tree_unlock(sch);
+-	return ret;
+ 
+-get_table_failure:
+-	/* recover clg and loss_model, in case of
+-	 * q->clg and q->loss_model were modified
+-	 * in get_loss_clg()
+-	 */
+-	q->clg = old_clg;
+-	q->loss_model = old_loss_model;
+-
+-	goto unlock;
++table_free:
++	dist_free(delay_dist);
++	dist_free(slot_dist);
++	return ret;
+ }
+ 
+ static int netem_init(struct Qdisc *sch, struct nlattr *opt,
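
The netem rework moves the distribution-table allocation out from under the qdisc tree lock: the tables are allocated up front, only a pointer swap happens inside sch_tree_lock(), and the displaced table is freed afterwards. A userspace sketch of the allocate-then-swap-under-lock pattern, using a pthread mutex in place of the qdisc lock:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;
static int *dist_table;		/* protected by tree_lock */

#define SWAP(a, b) do { int *t = (a); (a) = (b); (b) = t; } while (0)

static int change(size_t n)
{
	/* allocate outside the lock: may sleep or fail without holding it */
	int *fresh = calloc(n, sizeof(*fresh));

	if (!fresh)
		return -1;

	pthread_mutex_lock(&tree_lock);
	SWAP(dist_table, fresh);	/* only a pointer swap under the lock */
	pthread_mutex_unlock(&tree_lock);

	free(fresh);			/* frees the previous table (NULL on first change) */
	return 0;
}

int main(void)
{
	if (change(16) == 0)
		printf("table swapped\n");
	free(dist_table);
	return 0;
}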
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index cda8c2874691d..ee15eff6364ee 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -364,9 +364,9 @@ static void sctp_auto_asconf_init(struct sctp_sock *sp)
+ 	struct net *net = sock_net(&sp->inet.sk);
+ 
+ 	if (net->sctp.default_auto_asconf) {
+-		spin_lock(&net->sctp.addr_wq_lock);
++		spin_lock_bh(&net->sctp.addr_wq_lock);
+ 		list_add_tail(&sp->auto_asconf_list, &net->sctp.auto_asconf_splist);
+-		spin_unlock(&net->sctp.addr_wq_lock);
++		spin_unlock_bh(&net->sctp.addr_wq_lock);
+ 		sp->do_auto_asconf = 1;
+ 	}
+ }
+@@ -8281,6 +8281,22 @@ static int sctp_getsockopt(struct sock *sk, int level, int optname,
+ 	return retval;
+ }
+ 
++static bool sctp_bpf_bypass_getsockopt(int level, int optname)
++{
++	if (level == SOL_SCTP) {
++		switch (optname) {
++		case SCTP_SOCKOPT_PEELOFF:
++		case SCTP_SOCKOPT_PEELOFF_FLAGS:
++		case SCTP_SOCKOPT_CONNECTX3:
++			return true;
++		default:
++			return false;
++		}
++	}
++
++	return false;
++}
++
+ static int sctp_hash(struct sock *sk)
+ {
+ 	/* STUB */
+@@ -9650,6 +9666,7 @@ struct proto sctp_prot = {
+ 	.shutdown    =	sctp_shutdown,
+ 	.setsockopt  =	sctp_setsockopt,
+ 	.getsockopt  =	sctp_getsockopt,
++	.bpf_bypass_getsockopt	= sctp_bpf_bypass_getsockopt,
+ 	.sendmsg     =	sctp_sendmsg,
+ 	.recvmsg     =	sctp_recvmsg,
+ 	.bind        =	sctp_bind,
+@@ -9705,6 +9722,7 @@ struct proto sctpv6_prot = {
+ 	.shutdown	= sctp_shutdown,
+ 	.setsockopt	= sctp_setsockopt,
+ 	.getsockopt	= sctp_getsockopt,
++	.bpf_bypass_getsockopt	= sctp_bpf_bypass_getsockopt,
+ 	.sendmsg	= sctp_sendmsg,
+ 	.recvmsg	= sctp_recvmsg,
+ 	.bind		= sctp_bind,
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index f77cebe2c0713..15f4d0d40bdd6 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -826,12 +826,6 @@ static void svc_tcp_listen_data_ready(struct sock *sk)
+ 
+ 	trace_sk_data_ready(sk);
+ 
+-	if (svsk) {
+-		/* Refer to svc_setup_socket() for details. */
+-		rmb();
+-		svsk->sk_odata(sk);
+-	}
+-
+ 	/*
+ 	 * This callback may be called twice when a new connection
+ 	 * is established as a child socket inherits everything
+@@ -840,13 +834,18 @@ static void svc_tcp_listen_data_ready(struct sock *sk)
+ 	 *    when one of child sockets become ESTABLISHED.
+ 	 * 2) data_ready method of the child socket may be called
+ 	 *    when it receives data before the socket is accepted.
+-	 * In case of 2, we should ignore it silently.
++	 * In case of 2, we should ignore it silently and DO NOT
++	 * dereference svsk.
+ 	 */
+-	if (sk->sk_state == TCP_LISTEN) {
+-		if (svsk) {
+-			set_bit(XPT_CONN, &svsk->sk_xprt.xpt_flags);
+-			svc_xprt_enqueue(&svsk->sk_xprt);
+-		}
++	if (sk->sk_state != TCP_LISTEN)
++		return;
++
++	if (svsk) {
++		/* Refer to svc_setup_socket() for details. */
++		rmb();
++		svsk->sk_odata(sk);
++		set_bit(XPT_CONN, &svsk->sk_xprt.xpt_flags);
++		svc_xprt_enqueue(&svsk->sk_xprt);
+ 	}
+ }
+ 
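
The rmb() retained above pairs with a write barrier in svc_setup_socket() that publishes a fully initialised svsk, and the fix only dereferences svsk once the socket is known to be the TCP_LISTEN parent. A userspace analogue of that publish/consume ordering, using C11 release/acquire atomics with illustrative names (not the sunrpc structures):

#include <stdatomic.h>
#include <stdio.h>

struct svsk { int ready_data; };

static _Atomic(struct svsk *) published;

static void publisher(struct svsk *s)
{
	s->ready_data = 42;			/* initialise first ...  */
	atomic_store_explicit(&published, s,	/* ... then publish      */
			      memory_order_release);
}

static void consumer(void)
{
	struct svsk *s = atomic_load_explicit(&published,
					      memory_order_acquire);

	if (s)					/* may legitimately be NULL */
		printf("%d\n", s->ready_data);
}

int main(void)
{
	static struct svsk s;

	publisher(&s);
	consumer();
	return 0;
}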
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+index a22fe7587fa6f..70207d8a318a4 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+@@ -796,6 +796,12 @@ int svc_rdma_recvfrom(struct svc_rqst *rqstp)
+ 	struct svc_rdma_recv_ctxt *ctxt;
+ 	int ret;
+ 
++	/* Prevent svc_xprt_release() from releasing pages in rq_pages
++	 * when returning 0 or an error.
++	 */
++	rqstp->rq_respages = rqstp->rq_pages;
++	rqstp->rq_next_page = rqstp->rq_respages;
++
+ 	rqstp->rq_xprt_ctxt = NULL;
+ 
+ 	ctxt = NULL;
+@@ -819,12 +825,6 @@ int svc_rdma_recvfrom(struct svc_rqst *rqstp)
+ 				   DMA_FROM_DEVICE);
+ 	svc_rdma_build_arg_xdr(rqstp, ctxt);
+ 
+-	/* Prevent svc_xprt_release from releasing pages in rq_pages
+-	 * if we return 0 or an error.
+-	 */
+-	rqstp->rq_respages = rqstp->rq_pages;
+-	rqstp->rq_next_page = rqstp->rq_respages;
+-
+ 	ret = svc_rdma_xdr_decode_req(&rqstp->rq_arg, ctxt);
+ 	if (ret < 0)
+ 		goto out_err;
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index b3ec9eaec36b3..609b79fe4a748 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -721,22 +721,6 @@ int wiphy_register(struct wiphy *wiphy)
+ 			return -EINVAL;
+ 	}
+ 
+-	/*
+-	 * if a wiphy has unsupported modes for regulatory channel enforcement,
+-	 * opt-out of enforcement checking
+-	 */
+-	if (wiphy->interface_modes & ~(BIT(NL80211_IFTYPE_STATION) |
+-				       BIT(NL80211_IFTYPE_P2P_CLIENT) |
+-				       BIT(NL80211_IFTYPE_AP) |
+-				       BIT(NL80211_IFTYPE_MESH_POINT) |
+-				       BIT(NL80211_IFTYPE_P2P_GO) |
+-				       BIT(NL80211_IFTYPE_ADHOC) |
+-				       BIT(NL80211_IFTYPE_P2P_DEVICE) |
+-				       BIT(NL80211_IFTYPE_NAN) |
+-				       BIT(NL80211_IFTYPE_AP_VLAN) |
+-				       BIT(NL80211_IFTYPE_MONITOR)))
+-		wiphy->regulatory_flags |= REGULATORY_IGNORE_STALE_KICKOFF;
+-
+ 	if (WARN_ON((wiphy->regulatory_flags & REGULATORY_WIPHY_SELF_MANAGED) &&
+ 		    (wiphy->regulatory_flags &
+ 					(REGULATORY_CUSTOM_REG |
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 26f11e4746c05..f9e03850d71bf 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -2352,7 +2352,7 @@ static bool reg_wdev_chan_valid(struct wiphy *wiphy, struct wireless_dev *wdev)
+ 
+ 		if (!wdev->valid_links && link > 0)
+ 			break;
+-		if (!(wdev->valid_links & BIT(link)))
++		if (wdev->valid_links && !(wdev->valid_links & BIT(link)))
+ 			continue;
+ 		switch (iftype) {
+ 		case NL80211_IFTYPE_AP:
+@@ -2391,9 +2391,17 @@ static bool reg_wdev_chan_valid(struct wiphy *wiphy, struct wireless_dev *wdev)
+ 		case NL80211_IFTYPE_P2P_DEVICE:
+ 			/* no enforcement required */
+ 			break;
++		case NL80211_IFTYPE_OCB:
++			if (!wdev->u.ocb.chandef.chan)
++				continue;
++			chandef = wdev->u.ocb.chandef;
++			break;
++		case NL80211_IFTYPE_NAN:
++			/* we have no info, but NAN is also pretty universal */
++			continue;
+ 		default:
+ 			/* others not implemented for now */
+-			WARN_ON(1);
++			WARN_ON_ONCE(1);
+ 			break;
+ 		}
+ 
+@@ -2452,9 +2460,7 @@ static void reg_check_chans_work(struct work_struct *work)
+ 	rtnl_lock();
+ 
+ 	list_for_each_entry(rdev, &cfg80211_rdev_list, list)
+-		if (!(rdev->wiphy.regulatory_flags &
+-		      REGULATORY_IGNORE_STALE_KICKOFF))
+-			reg_leave_invalid_chans(&rdev->wiphy);
++		reg_leave_invalid_chans(&rdev->wiphy);
+ 
+ 	rtnl_unlock();
+ }
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index c501db7bbdb3d..396c63431e1f3 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -259,117 +259,152 @@ bool cfg80211_is_element_inherited(const struct element *elem,
+ }
+ EXPORT_SYMBOL(cfg80211_is_element_inherited);
+ 
+-static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen,
+-				  const u8 *subelement, size_t subie_len,
+-				  u8 *new_ie, gfp_t gfp)
++static size_t cfg80211_copy_elem_with_frags(const struct element *elem,
++					    const u8 *ie, size_t ie_len,
++					    u8 **pos, u8 *buf, size_t buf_len)
+ {
+-	u8 *pos, *tmp;
+-	const u8 *tmp_old, *tmp_new;
+-	const struct element *non_inherit_elem;
+-	u8 *sub_copy;
++	if (WARN_ON((u8 *)elem < ie || elem->data > ie + ie_len ||
++		    elem->data + elem->datalen > ie + ie_len))
++		return 0;
+ 
+-	/* copy subelement as we need to change its content to
+-	 * mark an ie after it is processed.
+-	 */
+-	sub_copy = kmemdup(subelement, subie_len, gfp);
+-	if (!sub_copy)
++	if (elem->datalen + 2 > buf + buf_len - *pos)
+ 		return 0;
+ 
+-	pos = &new_ie[0];
++	memcpy(*pos, elem, elem->datalen + 2);
++	*pos += elem->datalen + 2;
++
++	/* Finish if it is not fragmented  */
++	if (elem->datalen != 255)
++		return *pos - buf;
++
++	ie_len = ie + ie_len - elem->data - elem->datalen;
++	ie = (const u8 *)elem->data + elem->datalen;
++
++	for_each_element(elem, ie, ie_len) {
++		if (elem->id != WLAN_EID_FRAGMENT)
++			break;
++
++		if (elem->datalen + 2 > buf + buf_len - *pos)
++			return 0;
++
++		memcpy(*pos, elem, elem->datalen + 2);
++		*pos += elem->datalen + 2;
+ 
+-	/* set new ssid */
+-	tmp_new = cfg80211_find_ie(WLAN_EID_SSID, sub_copy, subie_len);
+-	if (tmp_new) {
+-		memcpy(pos, tmp_new, tmp_new[1] + 2);
+-		pos += (tmp_new[1] + 2);
++		if (elem->datalen != 255)
++			break;
+ 	}
+ 
+-	/* get non inheritance list if exists */
+-	non_inherit_elem =
+-		cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
+-				       sub_copy, subie_len);
++	return *pos - buf;
++}
+ 
+-	/* go through IEs in ie (skip SSID) and subelement,
+-	 * merge them into new_ie
++static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen,
++				  const u8 *subie, size_t subie_len,
++				  u8 *new_ie, size_t new_ie_len)
++{
++	const struct element *non_inherit_elem, *parent, *sub;
++	u8 *pos = new_ie;
++	u8 id, ext_id;
++	unsigned int match_len;
++
++	non_inherit_elem = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
++						  subie, subie_len);
++
++	/* We copy the elements one by one from the parent to the generated
++	 * elements.
++	 * If they are not inherited (included in subie or in the non
++	 * inheritance element), then we copy all occurrences the first time
++	 * we see this element type.
+ 	 */
+-	tmp_old = cfg80211_find_ie(WLAN_EID_SSID, ie, ielen);
+-	tmp_old = (tmp_old) ? tmp_old + tmp_old[1] + 2 : ie;
+-
+-	while (tmp_old + 2 - ie <= ielen &&
+-	       tmp_old + tmp_old[1] + 2 - ie <= ielen) {
+-		if (tmp_old[0] == 0) {
+-			tmp_old++;
++	for_each_element(parent, ie, ielen) {
++		if (parent->id == WLAN_EID_FRAGMENT)
+ 			continue;
++
++		if (parent->id == WLAN_EID_EXTENSION) {
++			if (parent->datalen < 1)
++				continue;
++
++			id = WLAN_EID_EXTENSION;
++			ext_id = parent->data[0];
++			match_len = 1;
++		} else {
++			id = parent->id;
++			match_len = 0;
+ 		}
+ 
+-		if (tmp_old[0] == WLAN_EID_EXTENSION)
+-			tmp = (u8 *)cfg80211_find_ext_ie(tmp_old[2], sub_copy,
+-							 subie_len);
+-		else
+-			tmp = (u8 *)cfg80211_find_ie(tmp_old[0], sub_copy,
+-						     subie_len);
++		/* Find first occurrence in subie */
++		sub = cfg80211_find_elem_match(id, subie, subie_len,
++					       &ext_id, match_len, 0);
+ 
+-		if (!tmp) {
+-			const struct element *old_elem = (void *)tmp_old;
++		/* Copy from parent if not in subie and inherited */
++		if (!sub &&
++		    cfg80211_is_element_inherited(parent, non_inherit_elem)) {
++			if (!cfg80211_copy_elem_with_frags(parent,
++							   ie, ielen,
++							   &pos, new_ie,
++							   new_ie_len))
++				return 0;
+ 
+-			/* ie in old ie but not in subelement */
+-			if (cfg80211_is_element_inherited(old_elem,
+-							  non_inherit_elem)) {
+-				memcpy(pos, tmp_old, tmp_old[1] + 2);
+-				pos += tmp_old[1] + 2;
+-			}
+-		} else {
+-			/* ie in transmitting ie also in subelement,
+-			 * copy from subelement and flag the ie in subelement
+-			 * as copied (by setting eid field to WLAN_EID_SSID,
+-			 * which is skipped anyway).
+-			 * For vendor ie, compare OUI + type + subType to
+-			 * determine if they are the same ie.
+-			 */
+-			if (tmp_old[0] == WLAN_EID_VENDOR_SPECIFIC) {
+-				if (tmp_old[1] >= 5 && tmp[1] >= 5 &&
+-				    !memcmp(tmp_old + 2, tmp + 2, 5)) {
+-					/* same vendor ie, copy from
+-					 * subelement
+-					 */
+-					memcpy(pos, tmp, tmp[1] + 2);
+-					pos += tmp[1] + 2;
+-					tmp[0] = WLAN_EID_SSID;
+-				} else {
+-					memcpy(pos, tmp_old, tmp_old[1] + 2);
+-					pos += tmp_old[1] + 2;
+-				}
+-			} else {
+-				/* copy ie from subelement into new ie */
+-				memcpy(pos, tmp, tmp[1] + 2);
+-				pos += tmp[1] + 2;
+-				tmp[0] = WLAN_EID_SSID;
+-			}
++			continue;
+ 		}
+ 
+-		if (tmp_old + tmp_old[1] + 2 - ie == ielen)
+-			break;
++		/* Already copied if an earlier element had the same type */
++		if (cfg80211_find_elem_match(id, ie, (u8 *)parent - ie,
++					     &ext_id, match_len, 0))
++			continue;
+ 
+-		tmp_old += tmp_old[1] + 2;
++		/* Not inheriting, copy all similar elements from subie */
++		while (sub) {
++			if (!cfg80211_copy_elem_with_frags(sub,
++							   subie, subie_len,
++							   &pos, new_ie,
++							   new_ie_len))
++				return 0;
++
++			sub = cfg80211_find_elem_match(id,
++						       sub->data + sub->datalen,
++						       subie_len + subie -
++						       (sub->data +
++							sub->datalen),
++						       &ext_id, match_len, 0);
++		}
+ 	}
+ 
+-	/* go through subelement again to check if there is any ie not
+-	 * copied to new ie, skip ssid, capability, bssid-index ie
++	/* The above misses elements that are included in subie but not in the
++	 * parent, so do a pass over subie and append those.
++	 * Skip the non-tx BSSID caps and non-inheritance element.
+ 	 */
+-	tmp_new = sub_copy;
+-	while (tmp_new + 2 - sub_copy <= subie_len &&
+-	       tmp_new + tmp_new[1] + 2 - sub_copy <= subie_len) {
+-		if (!(tmp_new[0] == WLAN_EID_NON_TX_BSSID_CAP ||
+-		      tmp_new[0] == WLAN_EID_SSID)) {
+-			memcpy(pos, tmp_new, tmp_new[1] + 2);
+-			pos += tmp_new[1] + 2;
++	for_each_element(sub, subie, subie_len) {
++		if (sub->id == WLAN_EID_NON_TX_BSSID_CAP)
++			continue;
++
++		if (sub->id == WLAN_EID_FRAGMENT)
++			continue;
++
++		if (sub->id == WLAN_EID_EXTENSION) {
++			if (sub->datalen < 1)
++				continue;
++
++			id = WLAN_EID_EXTENSION;
++			ext_id = sub->data[0];
++			match_len = 1;
++
++			if (ext_id == WLAN_EID_EXT_NON_INHERITANCE)
++				continue;
++		} else {
++			id = sub->id;
++			match_len = 0;
+ 		}
+-		if (tmp_new + tmp_new[1] + 2 - sub_copy == subie_len)
+-			break;
+-		tmp_new += tmp_new[1] + 2;
++
++		/* Processed if one was included in the parent */
++		if (cfg80211_find_elem_match(id, ie, ielen,
++					     &ext_id, match_len, 0))
++			continue;
++
++		if (!cfg80211_copy_elem_with_frags(sub, subie, subie_len,
++						   &pos, new_ie, new_ie_len))
++			return 0;
+ 	}
+ 
+-	kfree(sub_copy);
+ 	return pos - new_ie;
+ }
+ 
+@@ -2212,7 +2247,7 @@ static void cfg80211_parse_mbssid_data(struct wiphy *wiphy,
+ 			new_ie_len = cfg80211_gen_new_ie(ie, ielen,
+ 							 profile,
+ 							 profile_len, new_ie,
+-							 gfp);
++							 IEEE80211_MAX_DATA_LEN);
+ 			if (!new_ie_len)
+ 				continue;
+ 
+@@ -2261,118 +2296,6 @@ cfg80211_inform_bss_data(struct wiphy *wiphy,
+ }
+ EXPORT_SYMBOL(cfg80211_inform_bss_data);
+ 
+-static void
+-cfg80211_parse_mbssid_frame_data(struct wiphy *wiphy,
+-				 struct cfg80211_inform_bss *data,
+-				 struct ieee80211_mgmt *mgmt, size_t len,
+-				 struct cfg80211_non_tx_bss *non_tx_data,
+-				 gfp_t gfp)
+-{
+-	enum cfg80211_bss_frame_type ftype;
+-	const u8 *ie = mgmt->u.probe_resp.variable;
+-	size_t ielen = len - offsetof(struct ieee80211_mgmt,
+-				      u.probe_resp.variable);
+-
+-	ftype = ieee80211_is_beacon(mgmt->frame_control) ?
+-		CFG80211_BSS_FTYPE_BEACON : CFG80211_BSS_FTYPE_PRESP;
+-
+-	cfg80211_parse_mbssid_data(wiphy, data, ftype, mgmt->bssid,
+-				   le64_to_cpu(mgmt->u.probe_resp.timestamp),
+-				   le16_to_cpu(mgmt->u.probe_resp.beacon_int),
+-				   ie, ielen, non_tx_data, gfp);
+-}
+-
+-static void
+-cfg80211_update_notlisted_nontrans(struct wiphy *wiphy,
+-				   struct cfg80211_bss *nontrans_bss,
+-				   struct ieee80211_mgmt *mgmt, size_t len)
+-{
+-	u8 *ie, *new_ie, *pos;
+-	const struct element *nontrans_ssid;
+-	const u8 *trans_ssid, *mbssid;
+-	size_t ielen = len - offsetof(struct ieee80211_mgmt,
+-				      u.probe_resp.variable);
+-	size_t new_ie_len;
+-	struct cfg80211_bss_ies *new_ies;
+-	const struct cfg80211_bss_ies *old;
+-	size_t cpy_len;
+-
+-	lockdep_assert_held(&wiphy_to_rdev(wiphy)->bss_lock);
+-
+-	ie = mgmt->u.probe_resp.variable;
+-
+-	new_ie_len = ielen;
+-	trans_ssid = cfg80211_find_ie(WLAN_EID_SSID, ie, ielen);
+-	if (!trans_ssid)
+-		return;
+-	new_ie_len -= trans_ssid[1];
+-	mbssid = cfg80211_find_ie(WLAN_EID_MULTIPLE_BSSID, ie, ielen);
+-	/*
+-	 * It's not valid to have the MBSSID element before SSID
+-	 * ignore if that happens - the code below assumes it is
+-	 * after (while copying things inbetween).
+-	 */
+-	if (!mbssid || mbssid < trans_ssid)
+-		return;
+-	new_ie_len -= mbssid[1];
+-
+-	nontrans_ssid = ieee80211_bss_get_elem(nontrans_bss, WLAN_EID_SSID);
+-	if (!nontrans_ssid)
+-		return;
+-
+-	new_ie_len += nontrans_ssid->datalen;
+-
+-	/* generate new ie for nontrans BSS
+-	 * 1. replace SSID with nontrans BSS' SSID
+-	 * 2. skip MBSSID IE
+-	 */
+-	new_ie = kzalloc(new_ie_len, GFP_ATOMIC);
+-	if (!new_ie)
+-		return;
+-
+-	new_ies = kzalloc(sizeof(*new_ies) + new_ie_len, GFP_ATOMIC);
+-	if (!new_ies)
+-		goto out_free;
+-
+-	pos = new_ie;
+-
+-	/* copy the nontransmitted SSID */
+-	cpy_len = nontrans_ssid->datalen + 2;
+-	memcpy(pos, nontrans_ssid, cpy_len);
+-	pos += cpy_len;
+-	/* copy the IEs between SSID and MBSSID */
+-	cpy_len = trans_ssid[1] + 2;
+-	memcpy(pos, (trans_ssid + cpy_len), (mbssid - (trans_ssid + cpy_len)));
+-	pos += (mbssid - (trans_ssid + cpy_len));
+-	/* copy the IEs after MBSSID */
+-	cpy_len = mbssid[1] + 2;
+-	memcpy(pos, mbssid + cpy_len, ((ie + ielen) - (mbssid + cpy_len)));
+-
+-	/* update ie */
+-	new_ies->len = new_ie_len;
+-	new_ies->tsf = le64_to_cpu(mgmt->u.probe_resp.timestamp);
+-	new_ies->from_beacon = ieee80211_is_beacon(mgmt->frame_control);
+-	memcpy(new_ies->data, new_ie, new_ie_len);
+-	if (ieee80211_is_probe_resp(mgmt->frame_control)) {
+-		old = rcu_access_pointer(nontrans_bss->proberesp_ies);
+-		rcu_assign_pointer(nontrans_bss->proberesp_ies, new_ies);
+-		rcu_assign_pointer(nontrans_bss->ies, new_ies);
+-		if (old)
+-			kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head);
+-	} else {
+-		old = rcu_access_pointer(nontrans_bss->beacon_ies);
+-		rcu_assign_pointer(nontrans_bss->beacon_ies, new_ies);
+-		cfg80211_update_hidden_bsses(bss_from_pub(nontrans_bss),
+-					     new_ies, old);
+-		rcu_assign_pointer(nontrans_bss->ies, new_ies);
+-		if (old)
+-			kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head);
+-	}
+-
+-out_free:
+-	kfree(new_ie);
+-}
+-
+ /* cfg80211_inform_bss_width_frame helper */
+ static struct cfg80211_bss *
+ cfg80211_inform_single_bss_frame_data(struct wiphy *wiphy,
+@@ -2505,51 +2428,31 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
+ 			       struct ieee80211_mgmt *mgmt, size_t len,
+ 			       gfp_t gfp)
+ {
+-	struct cfg80211_bss *res, *tmp_bss;
++	struct cfg80211_bss *res;
+ 	const u8 *ie = mgmt->u.probe_resp.variable;
+-	const struct cfg80211_bss_ies *ies1, *ies2;
+ 	size_t ielen = len - offsetof(struct ieee80211_mgmt,
+ 				      u.probe_resp.variable);
++	enum cfg80211_bss_frame_type ftype;
+ 	struct cfg80211_non_tx_bss non_tx_data = {};
+ 
+ 	res = cfg80211_inform_single_bss_frame_data(wiphy, data, mgmt,
+ 						    len, gfp);
++	if (!res)
++		return NULL;
+ 
+ 	/* don't do any further MBSSID handling for S1G */
+ 	if (ieee80211_is_s1g_beacon(mgmt->frame_control))
+ 		return res;
+ 
+-	if (!res || !wiphy->support_mbssid ||
+-	    !cfg80211_find_elem(WLAN_EID_MULTIPLE_BSSID, ie, ielen))
+-		return res;
+-	if (wiphy->support_only_he_mbssid &&
+-	    !cfg80211_find_ext_elem(WLAN_EID_EXT_HE_CAPABILITY, ie, ielen))
+-		return res;
+-
++	ftype = ieee80211_is_beacon(mgmt->frame_control) ?
++		CFG80211_BSS_FTYPE_BEACON : CFG80211_BSS_FTYPE_PRESP;
+ 	non_tx_data.tx_bss = res;
+-	/* process each non-transmitting bss */
+-	cfg80211_parse_mbssid_frame_data(wiphy, data, mgmt, len,
+-					 &non_tx_data, gfp);
+-
+-	spin_lock_bh(&wiphy_to_rdev(wiphy)->bss_lock);
+ 
+-	/* check if the res has other nontransmitting bss which is not
+-	 * in MBSSID IE
+-	 */
+-	ies1 = rcu_access_pointer(res->ies);
+-
+-	/* go through nontrans_list, if the timestamp of the BSS is
+-	 * earlier than the timestamp of the transmitting BSS then
+-	 * update it
+-	 */
+-	list_for_each_entry(tmp_bss, &res->nontrans_list,
+-			    nontrans_list) {
+-		ies2 = rcu_access_pointer(tmp_bss->ies);
+-		if (ies2->tsf < ies1->tsf)
+-			cfg80211_update_notlisted_nontrans(wiphy, tmp_bss,
+-							   mgmt, len);
+-	}
+-	spin_unlock_bh(&wiphy_to_rdev(wiphy)->bss_lock);
++	/* process each non-transmitting bss */
++	cfg80211_parse_mbssid_data(wiphy, data, ftype, mgmt->bssid,
++				   le64_to_cpu(mgmt->u.probe_resp.timestamp),
++				   le16_to_cpu(mgmt->u.probe_resp.beacon_int),
++				   ie, ielen, &non_tx_data, gfp);
+ 
+ 	return res;
+ }
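
cfg80211_copy_elem_with_frags() introduced above handles fragmented elements: an element whose datalen is 255 is continued by one or more WLAN_EID_FRAGMENT elements immediately following it. A standalone sketch of walking that TLV layout and totalling a fragmented element's payload; 242 is the 802.11 fragment element ID, the rest of the values are illustrative:

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define EID_FRAGMENT 242	/* WLAN_EID_FRAGMENT */

/* Each element is id(1) len(1) data(len); a fragmented element has
 * len == 255 and is continued by EID_FRAGMENT elements. */
static size_t total_elem_len(const uint8_t *buf, size_t buflen, size_t off)
{
	size_t total = 0;

	while (off + 2 <= buflen) {
		uint8_t len = buf[off + 1];

		if (off + 2 + len > buflen)
			break;
		total += len;
		off += 2 + len;
		if (len != 255)
			break;		/* not (or no longer) fragmented */
		if (off + 2 > buflen || buf[off] != EID_FRAGMENT)
			break;
	}
	return total;
}

int main(void)
{
	uint8_t buf[2 + 255 + 2 + 3];
	size_t i = 0;

	memset(buf, 0, sizeof(buf));
	buf[i++] = 1; buf[i++] = 255;		/* first fragment: full */
	i += 255;
	buf[i++] = EID_FRAGMENT; buf[i++] = 3;	/* continuation: 3 bytes */
	i += 3;

	printf("%zu\n", total_elem_len(buf, sizeof(buf), 0));	/* 258 */
	return 0;
}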
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index 9755ef281040f..60be95eea6caf 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -580,6 +580,8 @@ int ieee80211_strip_8023_mesh_hdr(struct sk_buff *skb)
+ 		hdrlen += ETH_ALEN + 2;
+ 	else if (!pskb_may_pull(skb, hdrlen))
+ 		return -EINVAL;
++	else
++		payload.eth.h_proto = htons(skb->len - hdrlen);
+ 
+ 	mesh_addr = skb->data + sizeof(payload.eth) + ETH_ALEN;
+ 	switch (payload.flags & MESH_FLAGS_AE) {
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index cc1e7f15fa731..32dd55b9ce8a8 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -886,6 +886,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+ 	struct sock *sk = sock->sk;
+ 	struct xdp_sock *xs = xdp_sk(sk);
+ 	struct net_device *dev;
++	int bound_dev_if;
+ 	u32 flags, qid;
+ 	int err = 0;
+ 
+@@ -899,6 +900,10 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+ 		      XDP_USE_NEED_WAKEUP))
+ 		return -EINVAL;
+ 
++	bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);
++	if (bound_dev_if && bound_dev_if != sxdp->sxdp_ifindex)
++		return -EINVAL;
++
+ 	rtnl_lock();
+ 	mutex_lock(&xs->mutex);
+ 	if (xs->state != XSK_READY) {
+diff --git a/samples/bpf/tcp_basertt_kern.c b/samples/bpf/tcp_basertt_kern.c
+index 8dfe09a92feca..822b0742b8154 100644
+--- a/samples/bpf/tcp_basertt_kern.c
++++ b/samples/bpf/tcp_basertt_kern.c
+@@ -47,7 +47,7 @@ int bpf_basertt(struct bpf_sock_ops *skops)
+ 		case BPF_SOCK_OPS_BASE_RTT:
+ 			n = bpf_getsockopt(skops, SOL_TCP, TCP_CONGESTION,
+ 					   cong, sizeof(cong));
+-			if (!n && !__builtin_memcmp(cong, nv, sizeof(nv)+1)) {
++			if (!n && !__builtin_memcmp(cong, nv, sizeof(nv))) {
+ 				/* Set base_rtt to 80us */
+ 				rv = 80;
+ 			} else if (n) {
+diff --git a/samples/bpf/xdp1_kern.c b/samples/bpf/xdp1_kern.c
+index 0a5c704badd00..d91f27cbcfa99 100644
+--- a/samples/bpf/xdp1_kern.c
++++ b/samples/bpf/xdp1_kern.c
+@@ -39,7 +39,7 @@ static int parse_ipv6(void *data, u64 nh_off, void *data_end)
+ 	return ip6h->nexthdr;
+ }
+ 
+-#define XDPBUFSIZE	64
++#define XDPBUFSIZE	60
+ SEC("xdp.frags")
+ int xdp_prog1(struct xdp_md *ctx)
+ {
+diff --git a/samples/bpf/xdp2_kern.c b/samples/bpf/xdp2_kern.c
+index 67804ecf7ce37..8bca674451ed1 100644
+--- a/samples/bpf/xdp2_kern.c
++++ b/samples/bpf/xdp2_kern.c
+@@ -55,7 +55,7 @@ static int parse_ipv6(void *data, u64 nh_off, void *data_end)
+ 	return ip6h->nexthdr;
+ }
+ 
+-#define XDPBUFSIZE	64
++#define XDPBUFSIZE	60
+ SEC("xdp.frags")
+ int xdp_prog1(struct xdp_md *ctx)
+ {
+diff --git a/scripts/Makefile.clang b/scripts/Makefile.clang
+index 9076cc939e874..058a4c0f864ec 100644
+--- a/scripts/Makefile.clang
++++ b/scripts/Makefile.clang
+@@ -34,6 +34,5 @@ CLANG_FLAGS	+= -Werror=unknown-warning-option
+ CLANG_FLAGS	+= -Werror=ignored-optimization-argument
+ CLANG_FLAGS	+= -Werror=option-ignored
+ CLANG_FLAGS	+= -Werror=unused-command-line-argument
+-KBUILD_CFLAGS	+= $(CLANG_FLAGS)
+-KBUILD_AFLAGS	+= $(CLANG_FLAGS)
++KBUILD_CPPFLAGS	+= $(CLANG_FLAGS)
+ export CLANG_FLAGS
+diff --git a/scripts/Makefile.compiler b/scripts/Makefile.compiler
+index 7aa1fbc4aafef..e31f18625fcf5 100644
+--- a/scripts/Makefile.compiler
++++ b/scripts/Makefile.compiler
+@@ -32,13 +32,13 @@ try-run = $(shell set -e;		\
+ # Usage: aflags-y += $(call as-option,-Wa$(comma)-isa=foo,)
+ 
+ as-option = $(call try-run,\
+-	$(CC) -Werror $(KBUILD_AFLAGS) $(1) -c -x assembler-with-cpp /dev/null -o "$$TMP",$(1),$(2))
++	$(CC) -Werror $(KBUILD_CPPFLAGS) $(KBUILD_AFLAGS) $(1) -c -x assembler-with-cpp /dev/null -o "$$TMP",$(1),$(2))
+ 
+ # as-instr
+ # Usage: aflags-y += $(call as-instr,instr,option1,option2)
+ 
+ as-instr = $(call try-run,\
+-	printf "%b\n" "$(1)" | $(CC) -Werror $(KBUILD_AFLAGS) -c -x assembler-with-cpp -o "$$TMP" -,$(2),$(3))
++	printf "%b\n" "$(1)" | $(CC) -Werror $(CLANG_FLAGS) $(KBUILD_AFLAGS) -c -x assembler-with-cpp -o "$$TMP" -,$(2),$(3))
+ 
+ # __cc-option
+ # Usage: MY_CFLAGS += $(call __cc-option,$(CC),$(MY_CFLAGS),-march=winchip-c6,-march=i586)
+diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
+index 4703f652c0098..fc19f67039bda 100644
+--- a/scripts/Makefile.modfinal
++++ b/scripts/Makefile.modfinal
+@@ -23,7 +23,7 @@ modname = $(notdir $(@:.mod.o=))
+ part-of-module = y
+ 
+ quiet_cmd_cc_o_c = CC [M]  $@
+-      cmd_cc_o_c = $(CC) $(filter-out $(CC_FLAGS_CFI), $(c_flags)) -c -o $@ $<
++      cmd_cc_o_c = $(CC) $(filter-out $(CC_FLAGS_CFI) $(CFLAGS_GCOV), $(c_flags)) -c -o $@ $<
+ 
+ %.mod.o: %.mod.c FORCE
+ 	$(call if_changed_dep,cc_o_c)
+diff --git a/scripts/Makefile.vmlinux b/scripts/Makefile.vmlinux
+index 10176dec97eac..3cd6ca15f390d 100644
+--- a/scripts/Makefile.vmlinux
++++ b/scripts/Makefile.vmlinux
+@@ -19,6 +19,7 @@ quiet_cmd_cc_o_c = CC      $@
+ 
+ ifdef CONFIG_MODULES
+ KASAN_SANITIZE_.vmlinux.export.o := n
++GCOV_PROFILE_.vmlinux.export.o := n
+ targets += .vmlinux.export.o
+ vmlinux: .vmlinux.export.o
+ endif
+diff --git a/scripts/mksysmap b/scripts/mksysmap
+index cb3b1fff3eee8..ec33385261022 100755
+--- a/scripts/mksysmap
++++ b/scripts/mksysmap
+@@ -32,7 +32,7 @@ ${NM} -n ${1} | sed >${2} -e "
+ #  (do not forget a space before each pattern)
+ 
+ # local symbols for ARM, MIPS, etc.
+-/ \$/d
++/ \\$/d
+ 
+ # local labels, .LBB, .Ltmpxxx, .L__unnamed_xx, .LASANPC, etc.
+ / \.L/d
+@@ -41,7 +41,7 @@ ${NM} -n ${1} | sed >${2} -e "
+ / __efistub_/d
+ 
+ # arm64 local symbols in non-VHE KVM namespace
+-/ __kvm_nvhe_\$/d
++/ __kvm_nvhe_\\$/d
+ / __kvm_nvhe_\.L/d
+ 
+ # arm64 lld
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index c12150f96b884..d8baa9b9ae6d8 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -1150,6 +1150,10 @@ static Elf_Sym *find_elf_symbol(struct elf_info *elf, Elf64_Sword addr,
+ 	if (relsym->st_name != 0)
+ 		return relsym;
+ 
++	/*
++	 * Strive to find a better symbol name, but the resulting name may not
++	 * match the symbol referenced in the original code.
++	 */
+ 	relsym_secindex = get_secindex(elf, relsym);
+ 	for (sym = elf->symtab_start; sym < elf->symtab_stop; sym++) {
+ 		if (get_secindex(elf, sym) != relsym_secindex)
+@@ -1286,49 +1290,12 @@ static void default_mismatch_handler(const char *modname, struct elf_info *elf,
+ 
+ static int is_executable_section(struct elf_info* elf, unsigned int section_index)
+ {
+-	if (section_index > elf->num_sections)
++	if (section_index >= elf->num_sections)
+ 		fatal("section_index is outside elf->num_sections!\n");
+ 
+ 	return ((elf->sechdrs[section_index].sh_flags & SHF_EXECINSTR) == SHF_EXECINSTR);
+ }
+ 
+-/*
+- * We rely on a gross hack in section_rel[a]() calling find_extable_entry_size()
+- * to know the sizeof(struct exception_table_entry) for the target architecture.
+- */
+-static unsigned int extable_entry_size = 0;
+-static void find_extable_entry_size(const char* const sec, const Elf_Rela* r)
+-{
+-	/*
+-	 * If we're currently checking the second relocation within __ex_table,
+-	 * that relocation offset tells us the offsetof(struct
+-	 * exception_table_entry, fixup) which is equal to sizeof(struct
+-	 * exception_table_entry) divided by two.  We use that to our advantage
+-	 * since there's no portable way to get that size as every architecture
+-	 * seems to go with different sized types.  Not pretty but better than
+-	 * hard-coding the size for every architecture..
+-	 */
+-	if (!extable_entry_size)
+-		extable_entry_size = r->r_offset * 2;
+-}
+-
+-static inline bool is_extable_fault_address(Elf_Rela *r)
+-{
+-	/*
+-	 * extable_entry_size is only discovered after we've handled the
+-	 * _second_ relocation in __ex_table, so only abort when we're not
+-	 * handling the first reloc and extable_entry_size is zero.
+-	 */
+-	if (r->r_offset && extable_entry_size == 0)
+-		fatal("extable_entry size hasn't been discovered!\n");
+-
+-	return ((r->r_offset == 0) ||
+-		(r->r_offset % extable_entry_size == 0));
+-}
+-
+-#define is_second_extable_reloc(Start, Cur, Sec)			\
+-	(((Cur) == (Start) + 1) && (strcmp("__ex_table", (Sec)) == 0))
+-
+ static void report_extable_warnings(const char* modname, struct elf_info* elf,
+ 				    const struct sectioncheck* const mismatch,
+ 				    Elf_Rela* r, Elf_Sym* sym,
+@@ -1384,22 +1351,9 @@ static void extable_mismatch_handler(const char* modname, struct elf_info *elf,
+ 		      "You might get more information about where this is\n"
+ 		      "coming from by using scripts/check_extable.sh %s\n",
+ 		      fromsec, (long)r->r_offset, tosec, modname);
+-	else if (!is_executable_section(elf, get_secindex(elf, sym))) {
+-		if (is_extable_fault_address(r))
+-			fatal("The relocation at %s+0x%lx references\n"
+-			      "section \"%s\" which is not executable, IOW\n"
+-			      "it is not possible for the kernel to fault\n"
+-			      "at that address.  Something is seriously wrong\n"
+-			      "and should be fixed.\n",
+-			      fromsec, (long)r->r_offset, tosec);
+-		else
+-			fatal("The relocation at %s+0x%lx references\n"
+-			      "section \"%s\" which is not executable, IOW\n"
+-			      "the kernel will fault if it ever tries to\n"
+-			      "jump to it.  Something is seriously wrong\n"
+-			      "and should be fixed.\n",
+-			      fromsec, (long)r->r_offset, tosec);
+-	}
++	else if (!is_executable_section(elf, get_secindex(elf, sym)))
++		error("%s+0x%lx references non-executable section '%s'\n",
++		      fromsec, (long)r->r_offset, tosec);
+ }
+ 
+ static void check_section_mismatch(const char *modname, struct elf_info *elf,
+@@ -1457,19 +1411,33 @@ static int addend_386_rel(struct elf_info *elf, Elf_Shdr *sechdr, Elf_Rela *r)
+ #define	R_ARM_THM_JUMP19	51
+ #endif
+ 
++static int32_t sign_extend32(int32_t value, int index)
++{
++	uint8_t shift = 31 - index;
++
++	return (int32_t)(value << shift) >> shift;
++}
++
+ static int addend_arm_rel(struct elf_info *elf, Elf_Shdr *sechdr, Elf_Rela *r)
+ {
+ 	unsigned int r_typ = ELF_R_TYPE(r->r_info);
++	Elf_Sym *sym = elf->symtab_start + ELF_R_SYM(r->r_info);
++	void *loc = reloc_location(elf, sechdr, r);
++	uint32_t inst;
++	int32_t offset;
+ 
+ 	switch (r_typ) {
+ 	case R_ARM_ABS32:
+-		/* From ARM ABI: (S + A) | T */
+-		r->r_addend = (int)(long)
+-			      (elf->symtab_start + ELF_R_SYM(r->r_info));
++		inst = TO_NATIVE(*(uint32_t *)loc);
++		r->r_addend = inst + sym->st_value;
+ 		break;
+ 	case R_ARM_PC24:
+ 	case R_ARM_CALL:
+ 	case R_ARM_JUMP24:
++		inst = TO_NATIVE(*(uint32_t *)loc);
++		offset = sign_extend32((inst & 0x00ffffff) << 2, 25);
++		r->r_addend = offset + sym->st_value + 8;
++		break;
+ 	case R_ARM_THM_CALL:
+ 	case R_ARM_THM_JUMP24:
+ 	case R_ARM_THM_JUMP19:
+@@ -1574,8 +1542,6 @@ static void section_rela(const char *modname, struct elf_info *elf,
+ 		/* Skip special sections */
+ 		if (is_shndx_special(sym->st_shndx))
+ 			continue;
+-		if (is_second_extable_reloc(start, rela, fromsec))
+-			find_extable_entry_size(fromsec, &r);
+ 		check_section_mismatch(modname, elf, &r, sym, fromsec);
+ 	}
+ }
+@@ -1633,8 +1599,6 @@ static void section_rel(const char *modname, struct elf_info *elf,
+ 		/* Skip special sections */
+ 		if (is_shndx_special(sym->st_shndx))
+ 			continue;
+-		if (is_second_extable_reloc(start, rel, fromsec))
+-			find_extable_entry_size(fromsec, &r);
+ 		check_section_mismatch(modname, elf, &r, sym, fromsec);
+ 	}
+ }
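
The new addend_arm_rel() decodes the branch encoding instead of abusing the symbol-table pointer: the low 24 bits of an ARM branch are a word offset, so they are shifted left by 2 and sign-extended from bit 25, then biased by the +8 pipeline offset. A runnable sketch of that decode, with sign_extend32() written the way the kernel helper in include/linux/bitops.h takes a u32:

#include <stdint.h>
#include <stdio.h>

static int32_t sign_extend32(uint32_t value, int index)
{
	uint8_t shift = 31 - index;

	return (int32_t)(value << shift) >> shift;
}

int main(void)
{
	/* R_ARM_JUMP24-style: imm24 = 0xfffffe encodes "b ." */
	uint32_t inst = 0xeafffffe;
	int32_t offset = sign_extend32((inst & 0x00ffffff) << 2, 25);

	/* prints 0: target = addr + 8 + offset is the instruction itself */
	printf("%d\n", offset + 8);
	return 0;
}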
+diff --git a/scripts/package/builddeb b/scripts/package/builddeb
+index 252faaa5561cc..032774eb061e1 100755
+--- a/scripts/package/builddeb
++++ b/scripts/package/builddeb
+@@ -62,18 +62,14 @@ install_linux_image () {
+ 		${MAKE} -f ${srctree}/Makefile INSTALL_DTBS_PATH="${pdir}/usr/lib/linux-image-${KERNELRELEASE}" dtbs_install
+ 	fi
+ 
+-	if is_enabled CONFIG_MODULES; then
+-		${MAKE} -f ${srctree}/Makefile INSTALL_MOD_PATH="${pdir}" modules_install
+-		rm -f "${pdir}/lib/modules/${KERNELRELEASE}/build"
+-		rm -f "${pdir}/lib/modules/${KERNELRELEASE}/source"
+-		if [ "${SRCARCH}" = um ] ; then
+-			mkdir -p "${pdir}/usr/lib/uml/modules"
+-			mv "${pdir}/lib/modules/${KERNELRELEASE}" "${pdir}/usr/lib/uml/modules/${KERNELRELEASE}"
+-		fi
+-	fi
++	${MAKE} -f ${srctree}/Makefile INSTALL_MOD_PATH="${pdir}" modules_install
++	rm -f "${pdir}/lib/modules/${KERNELRELEASE}/build"
++	rm -f "${pdir}/lib/modules/${KERNELRELEASE}/source"
+ 
+ 	# Install the kernel
+ 	if [ "${ARCH}" = um ] ; then
++		mkdir -p "${pdir}/usr/lib/uml/modules"
++		mv "${pdir}/lib/modules/${KERNELRELEASE}" "${pdir}/usr/lib/uml/modules/${KERNELRELEASE}"
+ 		mkdir -p "${pdir}/usr/bin" "${pdir}/usr/share/doc/${pname}"
+ 		cp System.map "${pdir}/usr/lib/uml/modules/${KERNELRELEASE}/System.map"
+ 		cp ${KCONFIG_CONFIG} "${pdir}/usr/share/doc/${pname}/config"
+diff --git a/security/apparmor/policy.c b/security/apparmor/policy.c
+index 51e8184e0fec1..69711ae682e5e 100644
+--- a/security/apparmor/policy.c
++++ b/security/apparmor/policy.c
+@@ -591,7 +591,15 @@ struct aa_profile *aa_alloc_null(struct aa_profile *parent, const char *name,
+ 	profile->label.flags |= FLAG_NULL;
+ 	rules = list_first_entry(&profile->rules, typeof(*rules), list);
+ 	rules->file.dfa = aa_get_dfa(nulldfa);
++	rules->file.perms = kcalloc(2, sizeof(struct aa_perms), GFP_KERNEL);
++	if (!rules->file.perms)
++		goto fail;
++	rules->file.size = 2;
+ 	rules->policy.dfa = aa_get_dfa(nulldfa);
++	rules->policy.perms = kcalloc(2, sizeof(struct aa_perms), GFP_KERNEL);
++	if (!rules->policy.perms)
++		goto fail;
++	rules->policy.size = 2;
+ 
+ 	if (parent) {
+ 		profile->path_flags = parent->path_flags;
+@@ -602,6 +610,11 @@ struct aa_profile *aa_alloc_null(struct aa_profile *parent, const char *name,
+ 	}
+ 
+ 	return profile;
++
++fail:
++	aa_free_profile(profile);
++
++	return NULL;
+ }
+ 
+ /**
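
The apparmor fix above gives null profiles real permission tables: the minimal dfa has two states (the trap state plus the start state), so each policydb gets a matching 2-entry perms table, and any allocation failure funnels to a single path that releases the whole profile. A minimal userspace sketch of that sizing and error-handling shape, with illustrative structures in place of the apparmor types:

#include <stdlib.h>

struct perms { unsigned int allow, deny; };

struct policydb {
	struct perms *perms;
	size_t size;
};

/* Null profiles get a 2-entry table: index 0 is the dfa's trap state,
 * index 1 the (empty) start state. */
static int policydb_init_null(struct policydb *db)
{
	db->perms = calloc(2, sizeof(*db->perms));
	if (!db->perms)
		return -1;
	db->size = 2;
	return 0;
}

int main(void)
{
	struct policydb file = { 0 }, policy = { 0 };

	if (policydb_init_null(&file) || policydb_init_null(&policy)) {
		/* single fail path: release everything allocated so far */
		free(file.perms);
		free(policy.perms);
		return 1;
	}
	free(file.perms);
	free(policy.perms);
	return 0;
}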
+diff --git a/security/apparmor/policy_compat.c b/security/apparmor/policy_compat.c
+index cc89d1e88fb74..0cb02da8a3193 100644
+--- a/security/apparmor/policy_compat.c
++++ b/security/apparmor/policy_compat.c
+@@ -146,7 +146,8 @@ static struct aa_perms compute_fperms_other(struct aa_dfa *dfa,
+  *
+  * Returns: remapped perm table
+  */
+-static struct aa_perms *compute_fperms(struct aa_dfa *dfa)
++static struct aa_perms *compute_fperms(struct aa_dfa *dfa,
++				       u32 *size)
+ {
+ 	aa_state_t state;
+ 	unsigned int state_count;
+@@ -159,6 +160,7 @@ static struct aa_perms *compute_fperms(struct aa_dfa *dfa)
+ 	table = kvcalloc(state_count * 2, sizeof(struct aa_perms), GFP_KERNEL);
+ 	if (!table)
+ 		return NULL;
++	*size = state_count * 2;
+ 
+ 	for (state = 0; state < state_count; state++) {
+ 		table[state * 2] = compute_fperms_user(dfa, state);
+@@ -168,7 +170,8 @@ static struct aa_perms *compute_fperms(struct aa_dfa *dfa)
+ 	return table;
+ }
+ 
+-static struct aa_perms *compute_xmatch_perms(struct aa_dfa *xmatch)
++static struct aa_perms *compute_xmatch_perms(struct aa_dfa *xmatch,
++				      u32 *size)
+ {
+ 	struct aa_perms *perms;
+ 	int state;
+@@ -179,6 +182,9 @@ static struct aa_perms *compute_xmatch_perms(struct aa_dfa *xmatch)
+ 	state_count = xmatch->tables[YYTD_ID_BASE]->td_lolen;
+ 	/* DFAs are restricted from having a state_count of less than 2 */
+ 	perms = kvcalloc(state_count, sizeof(struct aa_perms), GFP_KERNEL);
++	if (!perms)
++		return NULL;
++	*size = state_count;
+ 
+ 	/* zero init so skip the trap state (state == 0) */
+ 	for (state = 1; state < state_count; state++)
+@@ -239,7 +245,8 @@ static struct aa_perms compute_perms_entry(struct aa_dfa *dfa,
+ 	return perms;
+ }
+ 
+-static struct aa_perms *compute_perms(struct aa_dfa *dfa, u32 version)
++static struct aa_perms *compute_perms(struct aa_dfa *dfa, u32 version,
++				      u32 *size)
+ {
+ 	unsigned int state;
+ 	unsigned int state_count;
+@@ -252,6 +259,7 @@ static struct aa_perms *compute_perms(struct aa_dfa *dfa, u32 version)
+ 	table = kvcalloc(state_count, sizeof(struct aa_perms), GFP_KERNEL);
+ 	if (!table)
+ 		return NULL;
++	*size = state_count;
+ 
+ 	/* zero init so skip the trap state (state == 0) */
+ 	for (state = 1; state < state_count; state++)
+@@ -286,7 +294,7 @@ static void remap_dfa_accept(struct aa_dfa *dfa, unsigned int factor)
+ /* TODO: merge different dfa mappings into single map_policy fn */
+ int aa_compat_map_xmatch(struct aa_policydb *policy)
+ {
+-	policy->perms = compute_xmatch_perms(policy->dfa);
++	policy->perms = compute_xmatch_perms(policy->dfa, &policy->size);
+ 	if (!policy->perms)
+ 		return -ENOMEM;
+ 
+@@ -297,7 +305,7 @@ int aa_compat_map_xmatch(struct aa_policydb *policy)
+ 
+ int aa_compat_map_policy(struct aa_policydb *policy, u32 version)
+ {
+-	policy->perms = compute_perms(policy->dfa, version);
++	policy->perms = compute_perms(policy->dfa, version, &policy->size);
+ 	if (!policy->perms)
+ 		return -ENOMEM;
+ 
+@@ -308,7 +316,7 @@ int aa_compat_map_policy(struct aa_policydb *policy, u32 version)
+ 
+ int aa_compat_map_file(struct aa_policydb *policy)
+ {
+-	policy->perms = compute_fperms(policy->dfa);
++	policy->perms = compute_fperms(policy->dfa, &policy->size);
+ 	if (!policy->perms)
+ 		return -ENOMEM;
+ 
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index cf2ceec40b28a..bc9f436d49cca 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -860,10 +860,12 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
+ 		}
+ 		profile->attach.xmatch_len = tmp;
+ 		profile->attach.xmatch.start[AA_CLASS_XMATCH] = DFA_START;
+-		error = aa_compat_map_xmatch(&profile->attach.xmatch);
+-		if (error) {
+-			info = "failed to convert xmatch permission table";
+-			goto fail;
++		if (!profile->attach.xmatch.perms) {
++			error = aa_compat_map_xmatch(&profile->attach.xmatch);
++			if (error) {
++				info = "failed to convert xmatch permission table";
++				goto fail;
++			}
+ 		}
+ 	}
+ 
+@@ -983,31 +985,54 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
+ 				      AA_CLASS_FILE);
+ 		if (!aa_unpack_nameX(e, AA_STRUCTEND, NULL))
+ 			goto fail;
+-		error = aa_compat_map_policy(&rules->policy, e->version);
+-		if (error) {
+-			info = "failed to remap policydb permission table";
+-			goto fail;
++		if (!rules->policy.perms) {
++			error = aa_compat_map_policy(&rules->policy,
++						     e->version);
++			if (error) {
++				info = "failed to remap policydb permission table";
++				goto fail;
++			}
+ 		}
+-	} else
++	} else {
+ 		rules->policy.dfa = aa_get_dfa(nulldfa);
+-
++		rules->policy.perms = kcalloc(2, sizeof(struct aa_perms),
++					      GFP_KERNEL);
++		if (!rules->policy.perms)
++			goto fail;
++		rules->policy.size = 2;
++	}
+ 	/* get file rules */
+ 	error = unpack_pdb(e, &rules->file, false, true, &info);
+ 	if (error) {
+ 		goto fail;
+ 	} else if (rules->file.dfa) {
+-		error = aa_compat_map_file(&rules->file);
+-		if (error) {
+-			info = "failed to remap file permission table";
+-			goto fail;
++		if (!rules->file.perms) {
++			error = aa_compat_map_file(&rules->file);
++			if (error) {
++				info = "failed to remap file permission table";
++				goto fail;
++			}
+ 		}
+ 	} else if (rules->policy.dfa &&
+ 		   rules->policy.start[AA_CLASS_FILE]) {
+ 		rules->file.dfa = aa_get_dfa(rules->policy.dfa);
+ 		rules->file.start[AA_CLASS_FILE] = rules->policy.start[AA_CLASS_FILE];
+-	} else
++		rules->file.perms = kcalloc(rules->policy.size,
++					    sizeof(struct aa_perms),
++					    GFP_KERNEL);
++		if (!rules->file.perms)
++			goto fail;
++		memcpy(rules->file.perms, rules->policy.perms,
++		       rules->policy.size * sizeof(struct aa_perms));
++		rules->file.size = rules->policy.size;
++	} else {
+ 		rules->file.dfa = aa_get_dfa(nulldfa);
+-
++		rules->file.perms = kcalloc(2, sizeof(struct aa_perms),
++					    GFP_KERNEL);
++		if (!rules->file.perms)
++			goto fail;
++		rules->file.size = 2;
++	}
+ 	error = -EPROTO;
+ 	if (aa_unpack_nameX(e, AA_STRUCT, "data")) {
+ 		info = "out of memory";
+@@ -1046,8 +1071,13 @@ static struct aa_profile *unpack_profile(struct aa_ext *e, char **ns_name)
+ 				goto fail;
+ 			}
+ 
+-			rhashtable_insert_fast(profile->data, &data->head,
+-					       profile->data->p);
++			if (rhashtable_insert_fast(profile->data, &data->head,
++						   profile->data->p)) {
++				kfree_sensitive(data->key);
++				kfree_sensitive(data);
++				info = "failed to insert data to table";
++				goto fail;
++			}
+ 		}
+ 
+ 		if (!aa_unpack_nameX(e, AA_STRUCTEND, NULL)) {
+@@ -1134,22 +1164,16 @@ static int verify_header(struct aa_ext *e, int required, const char **ns)
+ 	return 0;
+ }
+ 
+-static bool verify_xindex(int xindex, int table_size)
+-{
+-	int index, xtype;
+-	xtype = xindex & AA_X_TYPE_MASK;
+-	index = xindex & AA_X_INDEX_MASK;
+-	if (xtype == AA_X_TABLE && index >= table_size)
+-		return false;
+-	return true;
+-}
+-
+-/* verify dfa xindexes are in range of transition tables */
+-static bool verify_dfa_xindex(struct aa_dfa *dfa, int table_size)
++/**
++ * verify_dfa_accept_index - verify accept indexes are in range of perms table
++ * @dfa: the dfa to check accept indexes are in range
++ * @table_size: the permission table size the indexes should be within
++ */
++static bool verify_dfa_accept_index(struct aa_dfa *dfa, int table_size)
+ {
+ 	int i;
+ 	for (i = 0; i < dfa->tables[YYTD_ID_ACCEPT]->td_lolen; i++) {
+-		if (!verify_xindex(ACCEPT_TABLE(dfa)[i], table_size))
++		if (ACCEPT_TABLE(dfa)[i] >= table_size)
+ 			return false;
+ 	}
+ 	return true;
+@@ -1186,11 +1210,13 @@ static bool verify_perms(struct aa_policydb *pdb)
+ 		if (!verify_perm(&pdb->perms[i]))
+ 			return false;
+ 		/* verify indexes into str table */
+-		if (pdb->perms[i].xindex >= pdb->trans.size)
++		if ((pdb->perms[i].xindex & AA_X_TYPE_MASK) == AA_X_TABLE &&
++		    (pdb->perms[i].xindex & AA_X_INDEX_MASK) >= pdb->trans.size)
+ 			return false;
+-		if (pdb->perms[i].tag >= pdb->trans.size)
++		if (pdb->perms[i].tag && pdb->perms[i].tag >= pdb->trans.size)
+ 			return false;
+-		if (pdb->perms[i].label >= pdb->trans.size)
++		if (pdb->perms[i].label &&
++		    pdb->perms[i].label >= pdb->trans.size)
+ 			return false;
+ 	}
+ 
+@@ -1212,10 +1238,10 @@ static int verify_profile(struct aa_profile *profile)
+ 	if (!rules)
+ 		return 0;
+ 
+-	if ((rules->file.dfa && !verify_dfa_xindex(rules->file.dfa,
+-						  rules->file.trans.size)) ||
++	if ((rules->file.dfa && !verify_dfa_accept_index(rules->file.dfa,
++							 rules->file.size)) ||
+ 	    (rules->policy.dfa &&
+-	     !verify_dfa_xindex(rules->policy.dfa, rules->policy.trans.size))) {
++	     !verify_dfa_accept_index(rules->policy.dfa, rules->policy.size))) {
+ 		audit_iface(profile, NULL, NULL,
+ 			    "Unpack: Invalid named transition", NULL, -EPROTO);
+ 		return -EPROTO;
+diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c
+index 033804f5a5f20..0dae649f3740c 100644
+--- a/security/integrity/evm/evm_crypto.c
++++ b/security/integrity/evm/evm_crypto.c
+@@ -40,7 +40,7 @@ static const char evm_hmac[] = "hmac(sha1)";
+ /**
+  * evm_set_key() - set EVM HMAC key from the kernel
+  * @key: pointer to a buffer with the key data
+- * @size: length of the key data
++ * @keylen: length of the key data
+  *
+  * This function allows setting the EVM HMAC key from the kernel
+  * without using the "encrypted" key subsystem keys. It can be used
+diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
+index cf24c5255583c..c9b6e2a43478a 100644
+--- a/security/integrity/evm/evm_main.c
++++ b/security/integrity/evm/evm_main.c
+@@ -318,7 +318,6 @@ int evm_protected_xattr_if_enabled(const char *req_xattr_name)
+ /**
+  * evm_read_protected_xattrs - read EVM protected xattr names, lengths, values
+  * @dentry: dentry of the read xattrs
+- * @inode: inode of the read xattrs
+  * @buffer: buffer xattr names, lengths or values are copied to
+  * @buffer_size: size of buffer
+  * @type: n: names, l: lengths, v: values
+@@ -390,6 +389,7 @@ int evm_read_protected_xattrs(struct dentry *dentry, u8 *buffer,
+  * @xattr_name: requested xattr
+  * @xattr_value: requested xattr value
+  * @xattr_value_len: requested xattr value length
++ * @iint: inode integrity metadata
+  *
+  * Calculate the HMAC for the given dentry and verify it against the stored
+  * security.evm xattr. For performance, use the xattr value and length
+@@ -795,7 +795,9 @@ static int evm_attr_change(struct mnt_idmap *idmap,
+ 
+ /**
+  * evm_inode_setattr - prevent updating an invalid EVM extended attribute
++ * @idmap: idmap of the mount
+  * @dentry: pointer to the affected dentry
++ * @attr: iattr structure containing the new file attributes
+  *
+  * Permit update of file attributes when files have a valid EVM signature,
+  * except in the case of them having an immutable portable signature.
+diff --git a/security/integrity/iint.c b/security/integrity/iint.c
+index c73858e8c6d51..a462df827de2d 100644
+--- a/security/integrity/iint.c
++++ b/security/integrity/iint.c
+@@ -43,12 +43,10 @@ static struct integrity_iint_cache *__integrity_iint_find(struct inode *inode)
+ 		else if (inode > iint->inode)
+ 			n = n->rb_right;
+ 		else
+-			break;
++			return iint;
+ 	}
+-	if (!n)
+-		return NULL;
+ 
+-	return iint;
++	return NULL;
+ }
+ 
+ /*
+@@ -113,10 +111,15 @@ struct integrity_iint_cache *integrity_inode_get(struct inode *inode)
+ 		parent = *p;
+ 		test_iint = rb_entry(parent, struct integrity_iint_cache,
+ 				     rb_node);
+-		if (inode < test_iint->inode)
++		if (inode < test_iint->inode) {
+ 			p = &(*p)->rb_left;
+-		else
++		} else if (inode > test_iint->inode) {
+ 			p = &(*p)->rb_right;
++		} else {
++			write_unlock(&integrity_iint_lock);
++			kmem_cache_free(iint_cache, iint);
++			return test_iint;
++		}
+ 	}
+ 
+ 	iint->inode = inode;
+diff --git a/security/integrity/ima/ima_modsig.c b/security/integrity/ima/ima_modsig.c
+index fb25723c65bc4..3e7bee30080f2 100644
+--- a/security/integrity/ima/ima_modsig.c
++++ b/security/integrity/ima/ima_modsig.c
+@@ -89,6 +89,9 @@ int ima_read_modsig(enum ima_hooks func, const void *buf, loff_t buf_len,
+ 
+ /**
+  * ima_collect_modsig - Calculate the file hash without the appended signature.
++ * @modsig: parsed module signature
++ * @buf: data to verify the signature on
++ * @size: data size
+  *
+  * Since the modsig is part of the file contents, the hash used in its signature
+  * isn't the same one ordinarily calculated by IMA. Therefore PKCS7 code
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index 3ca8b7348c2e4..c9b3bd8f1bb9c 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -721,6 +721,7 @@ static int get_subaction(struct ima_rule_entry *rule, enum ima_hooks func)
+  * @secid: LSM secid of the task to be validated
+  * @func: IMA hook identifier
+  * @mask: requested action (MAY_READ | MAY_WRITE | MAY_APPEND | MAY_EXEC)
++ * @flags: IMA actions to consider (e.g. IMA_MEASURE | IMA_APPRAISE)
+  * @pcr: set the pcr to extend
+  * @template_desc: the template that should be used for this rule
+  * @func_data: func specific data, may be NULL
+@@ -1915,7 +1916,7 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ 
+ /**
+  * ima_parse_add_rule - add a rule to ima_policy_rules
+- * @rule - ima measurement policy rule
++ * @rule: ima measurement policy rule
+  *
+  * Avoid locking by allowing just one writer at a time in ima_write_policy()
+  * Returns the length of the rule parsed, an error code on failure
+diff --git a/sound/core/jack.c b/sound/core/jack.c
+index 88493cc31914b..03d155ed362b4 100644
+--- a/sound/core/jack.c
++++ b/sound/core/jack.c
+@@ -654,6 +654,7 @@ void snd_jack_report(struct snd_jack *jack, int status)
+ 	struct snd_jack_kctl *jack_kctl;
+ 	unsigned int mask_bits = 0;
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
++	struct input_dev *idev;
+ 	int i;
+ #endif
+ 
+@@ -670,17 +671,15 @@ void snd_jack_report(struct snd_jack *jack, int status)
+ 					     status & jack_kctl->mask_bits);
+ 
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
+-	mutex_lock(&jack->input_dev_lock);
+-	if (!jack->input_dev) {
+-		mutex_unlock(&jack->input_dev_lock);
++	idev = input_get_device(jack->input_dev);
++	if (!idev)
+ 		return;
+-	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(jack->key); i++) {
+ 		int testbit = ((SND_JACK_BTN_0 >> i) & ~mask_bits);
+ 
+ 		if (jack->type & testbit)
+-			input_report_key(jack->input_dev, jack->key[i],
++			input_report_key(idev, jack->key[i],
+ 					 status & testbit);
+ 	}
+ 
+@@ -688,13 +687,13 @@ void snd_jack_report(struct snd_jack *jack, int status)
+ 		int testbit = ((1 << i) & ~mask_bits);
+ 
+ 		if (jack->type & testbit)
+-			input_report_switch(jack->input_dev,
++			input_report_switch(idev,
+ 					    jack_switch_types[i],
+ 					    status & testbit);
+ 	}
+ 
+-	input_sync(jack->input_dev);
+-	mutex_unlock(&jack->input_dev_lock);
++	input_sync(idev);
++	input_put_device(idev);
+ #endif /* CONFIG_SND_JACK_INPUT_DEV */
+ }
+ EXPORT_SYMBOL(snd_jack_report);
+diff --git a/sound/core/pcm_memory.c b/sound/core/pcm_memory.c
+index 7bde7fb64011e..a0b9514716995 100644
+--- a/sound/core/pcm_memory.c
++++ b/sound/core/pcm_memory.c
+@@ -31,15 +31,41 @@ static unsigned long max_alloc_per_card = 32UL * 1024UL * 1024UL;
+ module_param(max_alloc_per_card, ulong, 0644);
+ MODULE_PARM_DESC(max_alloc_per_card, "Max total allocation bytes per card.");
+ 
++static void __update_allocated_size(struct snd_card *card, ssize_t bytes)
++{
++	card->total_pcm_alloc_bytes += bytes;
++}
++
++static void update_allocated_size(struct snd_card *card, ssize_t bytes)
++{
++	mutex_lock(&card->memory_mutex);
++	__update_allocated_size(card, bytes);
++	mutex_unlock(&card->memory_mutex);
++}
++
++static void decrease_allocated_size(struct snd_card *card, size_t bytes)
++{
++	mutex_lock(&card->memory_mutex);
++	WARN_ON(card->total_pcm_alloc_bytes < bytes);
++	__update_allocated_size(card, -(ssize_t)bytes);
++	mutex_unlock(&card->memory_mutex);
++}
++
+ static int do_alloc_pages(struct snd_card *card, int type, struct device *dev,
+ 			  int str, size_t size, struct snd_dma_buffer *dmab)
+ {
+ 	enum dma_data_direction dir;
+ 	int err;
+ 
++	/* check and reserve the requested size */
++	mutex_lock(&card->memory_mutex);
+ 	if (max_alloc_per_card &&
+-	    card->total_pcm_alloc_bytes + size > max_alloc_per_card)
++	    card->total_pcm_alloc_bytes + size > max_alloc_per_card) {
++		mutex_unlock(&card->memory_mutex);
+ 		return -ENOMEM;
++	}
++	__update_allocated_size(card, size);
++	mutex_unlock(&card->memory_mutex);
+ 
+ 	if (str == SNDRV_PCM_STREAM_PLAYBACK)
+ 		dir = DMA_TO_DEVICE;
+@@ -47,9 +73,14 @@ static int do_alloc_pages(struct snd_card *card, int type, struct device *dev,
+ 		dir = DMA_FROM_DEVICE;
+ 	err = snd_dma_alloc_dir_pages(type, dev, dir, size, dmab);
+ 	if (!err) {
+-		mutex_lock(&card->memory_mutex);
+-		card->total_pcm_alloc_bytes += dmab->bytes;
+-		mutex_unlock(&card->memory_mutex);
++		/* the actual allocation size might be bigger than requested,
++		 * and we need to correct the account
++		 */
++		if (dmab->bytes != size)
++			update_allocated_size(card, dmab->bytes - size);
++	} else {
++		/* take back on allocation failure */
++		decrease_allocated_size(card, size);
+ 	}
+ 	return err;
+ }
+@@ -58,10 +89,7 @@ static void do_free_pages(struct snd_card *card, struct snd_dma_buffer *dmab)
+ {
+ 	if (!dmab->area)
+ 		return;
+-	mutex_lock(&card->memory_mutex);
+-	WARN_ON(card->total_pcm_alloc_bytes < dmab->bytes);
+-	card->total_pcm_alloc_bytes -= dmab->bytes;
+-	mutex_unlock(&card->memory_mutex);
++	decrease_allocated_size(card, dmab->bytes);
+ 	snd_dma_free_pages(dmab);
+ 	dmab->area = NULL;
+ }
+diff --git a/sound/pci/ac97/ac97_codec.c b/sound/pci/ac97/ac97_codec.c
+index 9afc5906d662e..80a65b8ad7b9b 100644
+--- a/sound/pci/ac97/ac97_codec.c
++++ b/sound/pci/ac97/ac97_codec.c
+@@ -2069,8 +2069,8 @@ int snd_ac97_mixer(struct snd_ac97_bus *bus, struct snd_ac97_template *template,
+ 		.dev_disconnect =	snd_ac97_dev_disconnect,
+ 	};
+ 
+-	if (rac97)
+-		*rac97 = NULL;
++	if (!rac97)
++		return -EINVAL;
+ 	if (snd_BUG_ON(!bus || !template))
+ 		return -EINVAL;
+ 	if (snd_BUG_ON(template->num >= 4))
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index dabfdecece264..f1b934a502169 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9490,9 +9490,9 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8b63, "HP Elite Dragonfly 13.5 inch G4", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b65, "HP ProBook 455 15.6 inch G10 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ 	SND_PCI_QUIRK(0x103c, 0x8b66, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+-	SND_PCI_QUIRK(0x103c, 0x8b70, "HP EliteBook 835 G10", ALC287_FIXUP_CS35L41_I2C_2),
+-	SND_PCI_QUIRK(0x103c, 0x8b72, "HP EliteBook 845 G10", ALC287_FIXUP_CS35L41_I2C_2),
+-	SND_PCI_QUIRK(0x103c, 0x8b74, "HP EliteBook 845W G10", ALC287_FIXUP_CS35L41_I2C_2),
++	SND_PCI_QUIRK(0x103c, 0x8b70, "HP EliteBook 835 G10", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8b72, "HP EliteBook 845 G10", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x8b74, "HP EliteBook 845W G10", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b77, "HP ElieBook 865 G10", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x103c, 0x8b7a, "HP", ALC236_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x103c, 0x8b7d, "HP", ALC236_FIXUP_HP_GPIO_LED),
+@@ -9682,6 +9682,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x971d, "Clevo N970T[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xa500, "Clevo NL5[03]RU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xa600, "Clevo NL50NU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0xa650, "Clevo NP[567]0SN[CD]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xa671, "Clevo NP70SN[CDE]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xb018, "Clevo NP50D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1558, 0xb019, "Clevo NH77D[BE]Q", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+diff --git a/sound/soc/amd/acp/acp-pdm.c b/sound/soc/amd/acp/acp-pdm.c
+index 66ec6b6a59723..f8030b79ac17c 100644
+--- a/sound/soc/amd/acp/acp-pdm.c
++++ b/sound/soc/amd/acp/acp-pdm.c
+@@ -176,7 +176,7 @@ static void acp_dmic_dai_shutdown(struct snd_pcm_substream *substream,
+ 
+ 	/* Disable DMIC interrupts */
+ 	ext_int_ctrl = readl(ACP_EXTERNAL_INTR_CNTL(adata, 0));
+-	ext_int_ctrl |= ~PDM_DMA_INTR_MASK;
++	ext_int_ctrl &= ~PDM_DMA_INTR_MASK;
+ 	writel(ext_int_ctrl, ACP_EXTERNAL_INTR_CNTL(adata, 0));
+ }
+ 
+diff --git a/sound/soc/codecs/es8316.c b/sound/soc/codecs/es8316.c
+index a27d809564593..ccecfdf700649 100644
+--- a/sound/soc/codecs/es8316.c
++++ b/sound/soc/codecs/es8316.c
+@@ -52,7 +52,12 @@ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(dac_vol_tlv, -9600, 50, 1);
+ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(adc_vol_tlv, -9600, 50, 1);
+ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(alc_max_gain_tlv, -650, 150, 0);
+ static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(alc_min_gain_tlv, -1200, 150, 0);
+-static const SNDRV_CTL_TLVD_DECLARE_DB_SCALE(alc_target_tlv, -1650, 150, 0);
++
++static const SNDRV_CTL_TLVD_DECLARE_DB_RANGE(alc_target_tlv,
++	0, 10, TLV_DB_SCALE_ITEM(-1650, 150, 0),
++	11, 11, TLV_DB_SCALE_ITEM(-150, 0, 0),
++);
++
+ static const SNDRV_CTL_TLVD_DECLARE_DB_RANGE(hpmixer_gain_tlv,
+ 	0, 4, TLV_DB_SCALE_ITEM(-1200, 150, 0),
+ 	8, 11, TLV_DB_SCALE_ITEM(-450, 150, 0),
+@@ -115,7 +120,7 @@ static const struct snd_kcontrol_new es8316_snd_controls[] = {
+ 		       alc_max_gain_tlv),
+ 	SOC_SINGLE_TLV("ALC Capture Min Volume", ES8316_ADC_ALC2, 0, 28, 0,
+ 		       alc_min_gain_tlv),
+-	SOC_SINGLE_TLV("ALC Capture Target Volume", ES8316_ADC_ALC3, 4, 10, 0,
++	SOC_SINGLE_TLV("ALC Capture Target Volume", ES8316_ADC_ALC3, 4, 11, 0,
+ 		       alc_target_tlv),
+ 	SOC_SINGLE("ALC Capture Hold Time", ES8316_ADC_ALC3, 0, 10, 0),
+ 	SOC_SINGLE("ALC Capture Decay Time", ES8316_ADC_ALC4, 4, 10, 0),
+@@ -364,13 +369,11 @@ static int es8316_set_dai_sysclk(struct snd_soc_dai *codec_dai,
+ 	int count = 0;
+ 
+ 	es8316->sysclk = freq;
++	es8316->sysclk_constraints.list = NULL;
++	es8316->sysclk_constraints.count = 0;
+ 
+-	if (freq == 0) {
+-		es8316->sysclk_constraints.list = NULL;
+-		es8316->sysclk_constraints.count = 0;
+-
++	if (freq == 0)
+ 		return 0;
+-	}
+ 
+ 	ret = clk_set_rate(es8316->mclk, freq);
+ 	if (ret)
+@@ -386,8 +389,10 @@ static int es8316_set_dai_sysclk(struct snd_soc_dai *codec_dai,
+ 			es8316->allowed_rates[count++] = freq / ratio;
+ 	}
+ 
+-	es8316->sysclk_constraints.list = es8316->allowed_rates;
+-	es8316->sysclk_constraints.count = count;
++	if (count) {
++		es8316->sysclk_constraints.list = es8316->allowed_rates;
++		es8316->sysclk_constraints.count = count;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c
+index b2c5aca92c6bf..f9ed8fcc03c48 100644
+--- a/sound/soc/fsl/imx-audmix.c
++++ b/sound/soc/fsl/imx-audmix.c
+@@ -228,6 +228,8 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ 
+ 		dai_name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s%s",
+ 					  fe_name_pref, args.np->full_name + 1);
++		if (!dai_name)
++			return -ENOMEM;
+ 
+ 		dev_info(pdev->dev.parent, "DAI FE name:%s\n", dai_name);
+ 
+@@ -236,6 +238,8 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ 			capture_dai_name =
+ 				devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s %s",
+ 					       dai_name, "CPU-Capture");
++			if (!capture_dai_name)
++				return -ENOMEM;
+ 		}
+ 
+ 		/*
+@@ -269,6 +273,8 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ 				       "AUDMIX-Playback-%d", i);
+ 		be_cp = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ 				       "AUDMIX-Capture-%d", i);
++		if (!be_name || !be_pb || !be_cp)
++			return -ENOMEM;
+ 
+ 		priv->dai[num_dai + i].cpus	= &dlc[2];
+ 		priv->dai[num_dai + i].codecs	= &dlc[3];
+@@ -293,6 +299,9 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ 		priv->dapm_routes[i].source =
+ 			devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s %s",
+ 				       dai_name, "CPU-Playback");
++		if (!priv->dapm_routes[i].source)
++			return -ENOMEM;
++
+ 		priv->dapm_routes[i].sink = be_pb;
+ 		priv->dapm_routes[num_dai + i].source   = be_pb;
+ 		priv->dapm_routes[num_dai + i].sink     = be_cp;
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 144f082c63fda..5fa204897a52b 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -413,7 +413,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ 		.matches = {
+ 			DMI_MATCH(DMI_PRODUCT_FAMILY, "Intel_mtlrvp"),
+ 		},
+-		.driver_data = (void *)(RT711_JD1 | SOF_SDW_TGL_HDMI),
++		.driver_data = (void *)(RT711_JD1),
+ 	},
+ 	{}
+ };
+@@ -902,17 +902,20 @@ static int create_codec_dai_name(struct device *dev,
+ static int set_codec_init_func(struct snd_soc_card *card,
+ 			       const struct snd_soc_acpi_link_adr *link,
+ 			       struct snd_soc_dai_link *dai_links,
+-			       bool playback, int group_id)
++			       bool playback, int group_id, int adr_index)
+ {
+-	int i;
++	int i = adr_index;
+ 
+ 	do {
+ 		/*
+ 		 * Initialize the codec. If codec is part of an aggregated
+ 		 * group (group_id>0), initialize all codecs belonging to
+ 		 * same group.
++		 * The first link should start with link->adr_d[adr_index]
++		 * because that is the device that we want to initialize and
++		 * we should end immediately if it is not aggregated (group_id=0)
+ 		 */
+-		for (i = 0; i < link->num_adr; i++) {
++		for ( ; i < link->num_adr; i++) {
+ 			int codec_index;
+ 
+ 			codec_index = find_codec_info_part(link->adr_d[i].adr);
+@@ -928,9 +931,12 @@ static int set_codec_init_func(struct snd_soc_card *card,
+ 						dai_links,
+ 						&codec_info_list[codec_index],
+ 						playback);
++			if (!group_id)
++				return 0;
+ 		}
++		i = 0;
+ 		link++;
+-	} while (link->mask && group_id);
++	} while (link->mask);
+ 
+ 	return 0;
+ }
+@@ -1180,7 +1186,7 @@ static int create_sdw_dailink(struct snd_soc_card *card,
+ 		dai_links[*link_index].nonatomic = true;
+ 
+ 		ret = set_codec_init_func(card, link, dai_links + (*link_index)++,
+-					  playback, group_id);
++					  playback, group_id, adr_index);
+ 		if (ret < 0) {
+ 			dev_err(dev, "failed to init codec %d", codec_index);
+ 			return ret;
+diff --git a/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c b/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
+index f93c2ec8beb7b..06269f7e37566 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
++++ b/sound/soc/mediatek/mt8173/mt8173-afe-pcm.c
+@@ -1070,6 +1070,10 @@ static int mt8173_afe_pcm_dev_probe(struct platform_device *pdev)
+ 
+ 	afe->dev = &pdev->dev;
+ 
++	irq_id = platform_get_irq(pdev, 0);
++	if (irq_id <= 0)
++		return irq_id < 0 ? irq_id : -ENXIO;
++
+ 	afe->base_addr = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(afe->base_addr))
+ 		return PTR_ERR(afe->base_addr);
+@@ -1156,14 +1160,14 @@ static int mt8173_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	comp_hdmi = devm_kzalloc(&pdev->dev, sizeof(*comp_hdmi), GFP_KERNEL);
+ 	if (!comp_hdmi) {
+ 		ret = -ENOMEM;
+-		goto err_pm_disable;
++		goto err_cleanup_components;
+ 	}
+ 
+ 	ret = snd_soc_component_initialize(comp_hdmi,
+ 					   &mt8173_afe_hdmi_dai_component,
+ 					   &pdev->dev);
+ 	if (ret)
+-		goto err_pm_disable;
++		goto err_cleanup_components;
+ 
+ #ifdef CONFIG_DEBUG_FS
+ 	comp_hdmi->debugfs_prefix = "hdmi";
+@@ -1175,14 +1179,11 @@ static int mt8173_afe_pcm_dev_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err_cleanup_components;
+ 
+-	irq_id = platform_get_irq(pdev, 0);
+-	if (irq_id <= 0)
+-		return irq_id < 0 ? irq_id : -ENXIO;
+ 	ret = devm_request_irq(afe->dev, irq_id, mt8173_afe_irq_handler,
+ 			       0, "Afe_ISR_Handle", (void *)afe);
+ 	if (ret) {
+ 		dev_err(afe->dev, "could not request_irq\n");
+-		goto err_pm_disable;
++		goto err_cleanup_components;
+ 	}
+ 
+ 	dev_info(&pdev->dev, "MT8173 AFE driver initialized.\n");
+diff --git a/tools/bpf/bpftool/feature.c b/tools/bpf/bpftool/feature.c
+index da16e6a27cccd..0675d6a464138 100644
+--- a/tools/bpf/bpftool/feature.c
++++ b/tools/bpf/bpftool/feature.c
+@@ -167,12 +167,12 @@ static int get_vendor_id(int ifindex)
+ 	return strtol(buf, NULL, 0);
+ }
+ 
+-static int read_procfs(const char *path)
++static long read_procfs(const char *path)
+ {
+ 	char *endptr, *line = NULL;
+ 	size_t len = 0;
+ 	FILE *fd;
+-	int res;
++	long res;
+ 
+ 	fd = fopen(path, "r");
+ 	if (!fd)
+@@ -194,7 +194,7 @@ static int read_procfs(const char *path)
+ 
+ static void probe_unprivileged_disabled(void)
+ {
+-	int res;
++	long res;
+ 
+ 	/* No support for C-style ouptut */
+ 
+@@ -216,14 +216,14 @@ static void probe_unprivileged_disabled(void)
+ 			printf("Unable to retrieve required privileges for bpf() syscall\n");
+ 			break;
+ 		default:
+-			printf("bpf() syscall restriction has unknown value %d\n", res);
++			printf("bpf() syscall restriction has unknown value %ld\n", res);
+ 		}
+ 	}
+ }
+ 
+ static void probe_jit_enable(void)
+ {
+-	int res;
++	long res;
+ 
+ 	/* No support for C-style ouptut */
+ 
+@@ -245,7 +245,7 @@ static void probe_jit_enable(void)
+ 			printf("Unable to retrieve JIT-compiler status\n");
+ 			break;
+ 		default:
+-			printf("JIT-compiler status has unknown value %d\n",
++			printf("JIT-compiler status has unknown value %ld\n",
+ 			       res);
+ 		}
+ 	}
+@@ -253,7 +253,7 @@ static void probe_jit_enable(void)
+ 
+ static void probe_jit_harden(void)
+ {
+-	int res;
++	long res;
+ 
+ 	/* No support for C-style ouptut */
+ 
+@@ -275,7 +275,7 @@ static void probe_jit_harden(void)
+ 			printf("Unable to retrieve JIT hardening status\n");
+ 			break;
+ 		default:
+-			printf("JIT hardening status has unknown value %d\n",
++			printf("JIT hardening status has unknown value %ld\n",
+ 			       res);
+ 		}
+ 	}
+@@ -283,7 +283,7 @@ static void probe_jit_harden(void)
+ 
+ static void probe_jit_kallsyms(void)
+ {
+-	int res;
++	long res;
+ 
+ 	/* No support for C-style ouptut */
+ 
+@@ -302,14 +302,14 @@ static void probe_jit_kallsyms(void)
+ 			printf("Unable to retrieve JIT kallsyms export status\n");
+ 			break;
+ 		default:
+-			printf("JIT kallsyms exports status has unknown value %d\n", res);
++			printf("JIT kallsyms exports status has unknown value %ld\n", res);
+ 		}
+ 	}
+ }
+ 
+ static void probe_jit_limit(void)
+ {
+-	int res;
++	long res;
+ 
+ 	/* No support for C-style ouptut */
+ 
+@@ -322,7 +322,7 @@ static void probe_jit_limit(void)
+ 			printf("Unable to retrieve global memory limit for JIT compiler for unprivileged users\n");
+ 			break;
+ 		default:
+-			printf("Global memory limit for JIT compiler for unprivileged users is %d bytes\n", res);
++			printf("Global memory limit for JIT compiler for unprivileged users is %ld bytes\n", res);
+ 		}
+ 	}
+ }
+diff --git a/tools/bpf/resolve_btfids/Makefile b/tools/bpf/resolve_btfids/Makefile
+index ac548a7baa73e..4b8079f294f65 100644
+--- a/tools/bpf/resolve_btfids/Makefile
++++ b/tools/bpf/resolve_btfids/Makefile
+@@ -67,7 +67,7 @@ $(BPFOBJ): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(LIBBPF_OU
+ LIBELF_FLAGS := $(shell $(HOSTPKG_CONFIG) libelf --cflags 2>/dev/null)
+ LIBELF_LIBS  := $(shell $(HOSTPKG_CONFIG) libelf --libs 2>/dev/null || echo -lelf)
+ 
+-HOSTCFLAGS += -g \
++HOSTCFLAGS_resolve_btfids += -g \
+           -I$(srctree)/tools/include \
+           -I$(srctree)/tools/include/uapi \
+           -I$(LIBBPF_INCLUDE) \
+@@ -76,7 +76,7 @@ HOSTCFLAGS += -g \
+ 
+ LIBS = $(LIBELF_LIBS) -lz
+ 
+-export srctree OUTPUT HOSTCFLAGS Q HOSTCC HOSTLD HOSTAR
++export srctree OUTPUT HOSTCFLAGS_resolve_btfids Q HOSTCC HOSTLD HOSTAR
+ include $(srctree)/tools/build/Makefile.include
+ 
+ $(BINARY_IN): fixdep FORCE prepare | $(OUTPUT)
+diff --git a/tools/include/nolibc/stdint.h b/tools/include/nolibc/stdint.h
+index c1ce4f5e06034..661d942862c0b 100644
+--- a/tools/include/nolibc/stdint.h
++++ b/tools/include/nolibc/stdint.h
+@@ -36,8 +36,8 @@ typedef  ssize_t       int_fast16_t;
+ typedef   size_t      uint_fast16_t;
+ typedef  ssize_t       int_fast32_t;
+ typedef   size_t      uint_fast32_t;
+-typedef  ssize_t       int_fast64_t;
+-typedef   size_t      uint_fast64_t;
++typedef  int64_t       int_fast64_t;
++typedef uint64_t      uint_fast64_t;
+ 
+ typedef  int64_t           intmax_t;
+ typedef uint64_t          uintmax_t;
+@@ -84,16 +84,16 @@ typedef uint64_t          uintmax_t;
+ #define  INT_FAST8_MIN   INT8_MIN
+ #define INT_FAST16_MIN   INTPTR_MIN
+ #define INT_FAST32_MIN   INTPTR_MIN
+-#define INT_FAST64_MIN   INTPTR_MIN
++#define INT_FAST64_MIN   INT64_MIN
+ 
+ #define  INT_FAST8_MAX   INT8_MAX
+ #define INT_FAST16_MAX   INTPTR_MAX
+ #define INT_FAST32_MAX   INTPTR_MAX
+-#define INT_FAST64_MAX   INTPTR_MAX
++#define INT_FAST64_MAX   INT64_MAX
+ 
+ #define  UINT_FAST8_MAX  UINT8_MAX
+ #define UINT_FAST16_MAX  SIZE_MAX
+ #define UINT_FAST32_MAX  SIZE_MAX
+-#define UINT_FAST64_MAX  SIZE_MAX
++#define UINT_FAST64_MAX  UINT64_MAX
+ 
+ #endif /* _NOLIBC_STDINT_H */
+diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h
+index 929a3baca8ef3..bbab9ad9dc5a7 100644
+--- a/tools/lib/bpf/bpf_helpers.h
++++ b/tools/lib/bpf/bpf_helpers.h
+@@ -77,16 +77,21 @@
+ /*
+  * Helper macros to manipulate data structures
+  */
+-#ifndef offsetof
+-#define offsetof(TYPE, MEMBER)	((unsigned long)&((TYPE *)0)->MEMBER)
+-#endif
+-#ifndef container_of
++
++/* offsetof() definition that uses __builtin_offset() might not preserve field
++ * offset CO-RE relocation properly, so force-redefine offsetof() using
++ * old-school approach which works with CO-RE correctly
++ */
++#undef offsetof
++#define offsetof(type, member)	((unsigned long)&((type *)0)->member)
++
++/* redefined container_of() to ensure we use the above offsetof() macro */
++#undef container_of
+ #define container_of(ptr, type, member)				\
+ 	({							\
+ 		void *__mptr = (void *)(ptr);			\
+ 		((type *)(__mptr - offsetof(type, member)));	\
+ 	})
+-#endif
+ 
+ /*
+  * Compiler (optimization) barrier.
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 580985ee55458..4d9f30bf7f014 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -2250,9 +2250,25 @@ static int btf_dump_type_data_check_overflow(struct btf_dump *d,
+ 					     const struct btf_type *t,
+ 					     __u32 id,
+ 					     const void *data,
+-					     __u8 bits_offset)
++					     __u8 bits_offset,
++					     __u8 bit_sz)
+ {
+-	__s64 size = btf__resolve_size(d->btf, id);
++	__s64 size;
++
++	if (bit_sz) {
++		/* bits_offset is at most 7. bit_sz is at most 128. */
++		__u8 nr_bytes = (bits_offset + bit_sz + 7) / 8;
++
++		/* When bit_sz is non zero, it is called from
++		 * btf_dump_struct_data() where it only cares about
++		 * negative error value.
++		 * Return nr_bytes in success case to make it
++		 * consistent as the regular integer case below.
++		 */
++		return data + nr_bytes > d->typed_dump->data_end ? -E2BIG : nr_bytes;
++	}
++
++	size = btf__resolve_size(d->btf, id);
+ 
+ 	if (size < 0 || size >= INT_MAX) {
+ 		pr_warn("unexpected size [%zu] for id [%u]\n",
+@@ -2407,7 +2423,7 @@ static int btf_dump_dump_type_data(struct btf_dump *d,
+ {
+ 	int size, err = 0;
+ 
+-	size = btf_dump_type_data_check_overflow(d, t, id, data, bits_offset);
++	size = btf_dump_type_data_check_overflow(d, t, id, data, bits_offset, bit_sz);
+ 	if (size < 0)
+ 		return size;
+ 	err = btf_dump_type_data_check_zero(d, t, id, data, bits_offset, bit_sz);
+diff --git a/tools/perf/arch/x86/util/Build b/tools/perf/arch/x86/util/Build
+index 195ccfdef7aa1..005907cb97d8c 100644
+--- a/tools/perf/arch/x86/util/Build
++++ b/tools/perf/arch/x86/util/Build
+@@ -10,6 +10,7 @@ perf-y += evlist.o
+ perf-y += mem-events.o
+ perf-y += evsel.o
+ perf-y += iostat.o
++perf-y += env.o
+ 
+ perf-$(CONFIG_DWARF) += dwarf-regs.o
+ perf-$(CONFIG_BPF_PROLOGUE) += dwarf-regs.o
+diff --git a/tools/perf/arch/x86/util/env.c b/tools/perf/arch/x86/util/env.c
+new file mode 100644
+index 0000000000000..3e537ffb1353a
+--- /dev/null
++++ b/tools/perf/arch/x86/util/env.c
+@@ -0,0 +1,19 @@
++// SPDX-License-Identifier: GPL-2.0
++#include "linux/string.h"
++#include "util/env.h"
++#include "env.h"
++
++bool x86__is_amd_cpu(void)
++{
++	struct perf_env env = { .total_mem = 0, };
++	static int is_amd; /* 0: Uninitialized, 1: Yes, -1: No */
++
++	if (is_amd)
++		goto ret;
++
++	perf_env__cpuid(&env);
++	is_amd = env.cpuid && strstarts(env.cpuid, "AuthenticAMD") ? 1 : -1;
++	perf_env__exit(&env);
++ret:
++	return is_amd >= 1 ? true : false;
++}
+diff --git a/tools/perf/arch/x86/util/env.h b/tools/perf/arch/x86/util/env.h
+new file mode 100644
+index 0000000000000..d78f080b6b3f8
+--- /dev/null
++++ b/tools/perf/arch/x86/util/env.h
+@@ -0,0 +1,7 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _X86_ENV_H
++#define _X86_ENV_H
++
++bool x86__is_amd_cpu(void);
++
++#endif /* _X86_ENV_H */
+diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
+index ea3972d785d10..d72390cdf391d 100644
+--- a/tools/perf/arch/x86/util/evsel.c
++++ b/tools/perf/arch/x86/util/evsel.c
+@@ -7,6 +7,7 @@
+ #include "linux/string.h"
+ #include "evsel.h"
+ #include "util/debug.h"
++#include "env.h"
+ 
+ #define IBS_FETCH_L3MISSONLY   (1ULL << 59)
+ #define IBS_OP_L3MISSONLY      (1ULL << 16)
+@@ -97,23 +98,10 @@ void arch__post_evsel_config(struct evsel *evsel, struct perf_event_attr *attr)
+ {
+ 	struct perf_pmu *evsel_pmu, *ibs_fetch_pmu, *ibs_op_pmu;
+ 	static int warned_once;
+-	/* 0: Uninitialized, 1: Yes, -1: No */
+-	static int is_amd;
+ 
+-	if (warned_once || is_amd == -1)
++	if (warned_once || !x86__is_amd_cpu())
+ 		return;
+ 
+-	if (!is_amd) {
+-		struct perf_env *env = evsel__env(evsel);
+-
+-		if (!perf_env__cpuid(env) || !env->cpuid ||
+-		    !strstarts(env->cpuid, "AuthenticAMD")) {
+-			is_amd = -1;
+-			return;
+-		}
+-		is_amd = 1;
+-	}
+-
+ 	evsel_pmu = evsel__find_pmu(evsel);
+ 	if (!evsel_pmu)
+ 		return;
+diff --git a/tools/perf/arch/x86/util/mem-events.c b/tools/perf/arch/x86/util/mem-events.c
+index f683ac702247c..efc0fae9ed0a7 100644
+--- a/tools/perf/arch/x86/util/mem-events.c
++++ b/tools/perf/arch/x86/util/mem-events.c
+@@ -4,6 +4,7 @@
+ #include "map_symbol.h"
+ #include "mem-events.h"
+ #include "linux/string.h"
++#include "env.h"
+ 
+ static char mem_loads_name[100];
+ static bool mem_loads_name__init;
+@@ -26,28 +27,12 @@ static struct perf_mem_event perf_mem_events_amd[PERF_MEM_EVENTS__MAX] = {
+ 	E("mem-ldst",	"ibs_op//",	"ibs_op"),
+ };
+ 
+-static int perf_mem_is_amd_cpu(void)
+-{
+-	struct perf_env env = { .total_mem = 0, };
+-
+-	perf_env__cpuid(&env);
+-	if (env.cpuid && strstarts(env.cpuid, "AuthenticAMD"))
+-		return 1;
+-	return -1;
+-}
+-
+ struct perf_mem_event *perf_mem_events__ptr(int i)
+ {
+-	/* 0: Uninitialized, 1: Yes, -1: No */
+-	static int is_amd;
+-
+ 	if (i >= PERF_MEM_EVENTS__MAX)
+ 		return NULL;
+ 
+-	if (!is_amd)
+-		is_amd = perf_mem_is_amd_cpu();
+-
+-	if (is_amd == 1)
++	if (x86__is_amd_cpu())
+ 		return &perf_mem_events_amd[i];
+ 
+ 	return &perf_mem_events_intel[i];
+diff --git a/tools/perf/builtin-bench.c b/tools/perf/builtin-bench.c
+index 58f1cfe1eb34b..db435b791a09b 100644
+--- a/tools/perf/builtin-bench.c
++++ b/tools/perf/builtin-bench.c
+@@ -21,6 +21,7 @@
+ #include "builtin.h"
+ #include "bench/bench.h"
+ 
++#include <locale.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
+@@ -260,6 +261,7 @@ int cmd_bench(int argc, const char **argv)
+ 
+ 	/* Unbuffered output */
+ 	setvbuf(stdout, NULL, _IONBF, 0);
++	setlocale(LC_ALL, "");
+ 
+ 	if (argc < 2) {
+ 		/* No collection specified. */
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index c57be48d65bb0..2ecfca0fccda0 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -2422,6 +2422,9 @@ out_put:
+ 	return ret;
+ }
+ 
++// Used when scr->per_event_dump is not set
++static struct evsel_script es_stdout;
++
+ static int process_attr(struct perf_tool *tool, union perf_event *event,
+ 			struct evlist **pevlist)
+ {
+@@ -2430,7 +2433,6 @@ static int process_attr(struct perf_tool *tool, union perf_event *event,
+ 	struct evsel *evsel, *pos;
+ 	u64 sample_type;
+ 	int err;
+-	static struct evsel_script *es;
+ 
+ 	err = perf_event__process_attr(tool, event, pevlist);
+ 	if (err)
+@@ -2440,14 +2442,13 @@ static int process_attr(struct perf_tool *tool, union perf_event *event,
+ 	evsel = evlist__last(*pevlist);
+ 
+ 	if (!evsel->priv) {
+-		if (scr->per_event_dump) {
++		if (scr->per_event_dump) {
+ 			evsel->priv = evsel_script__new(evsel, scr->session->data);
+-		} else {
+-			es = zalloc(sizeof(*es));
+-			if (!es)
++			if (!evsel->priv)
+ 				return -ENOMEM;
+-			es->fp = stdout;
+-			evsel->priv = es;
++		} else { // Replicate what is done in perf_script__setup_per_event_dump()
++			es_stdout.fp = stdout;
++			evsel->priv = &es_stdout;
+ 		}
+ 	}
+ 
+@@ -2753,7 +2754,6 @@ out_err_fclose:
+ static int perf_script__setup_per_event_dump(struct perf_script *script)
+ {
+ 	struct evsel *evsel;
+-	static struct evsel_script es_stdout;
+ 
+ 	if (script->per_event_dump)
+ 		return perf_script__fopen_per_event_dump(script);
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index b9ad32f21e575..463643cda0d5f 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -723,6 +723,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ 			all_counters_use_bpf = false;
+ 	}
+ 
++	evlist__reset_aggr_stats(evsel_list);
++
+ 	evlist__for_each_cpu(evlist_cpu_itr, evsel_list, affinity) {
+ 		counter = evlist_cpu_itr.evsel;
+ 
+diff --git a/tools/perf/tests/shell/test_task_analyzer.sh b/tools/perf/tests/shell/test_task_analyzer.sh
+index a98e4ab66040e..365b61aea519a 100755
+--- a/tools/perf/tests/shell/test_task_analyzer.sh
++++ b/tools/perf/tests/shell/test_task_analyzer.sh
+@@ -5,6 +5,12 @@
+ tmpdir=$(mktemp -d /tmp/perf-script-task-analyzer-XXXXX)
+ err=0
+ 
++# set PERF_EXEC_PATH to find scripts in the source directory
++perfdir=$(dirname "$0")/../..
++if [ -e "$perfdir/scripts/python/Perf-Trace-Util" ]; then
++  export PERF_EXEC_PATH=$perfdir
++fi
++
+ cleanup() {
+   rm -f perf.data
+   rm -f perf.data.old
+@@ -31,7 +37,7 @@ report() {
+ 
+ check_exec_0() {
+ 	if [ $? != 0 ]; then
+-		report 1 "invokation of ${$1} command failed"
++		report 1 "invocation of $1 command failed"
+ 	fi
+ }
+ 
+@@ -44,9 +50,20 @@ find_str_or_fail() {
+ 	fi
+ }
+ 
++# check if perf is compiled with libtraceevent support
++skip_no_probe_record_support() {
++	perf record -e "sched:sched_switch" -a -- sleep 1 2>&1 | grep "libtraceevent is necessary for tracepoint support" && return 2
++	return 0
++}
++
+ prepare_perf_data() {
+ 	# 1s should be sufficient to catch at least some switches
+ 	perf record -e sched:sched_switch -a -- sleep 1 > /dev/null 2>&1
++	# check if perf data file got created in above step.
++	if [ ! -e "perf.data" ]; then
++		printf "FAIL: perf record failed to create \"perf.data\" \n"
++		return 1
++	fi
+ }
+ 
+ # check standard inkvokation with no arguments
+@@ -134,6 +151,13 @@ test_csvsummary_extended() {
+ 	find_str_or_fail "Out-Out;" csvsummary ${FUNCNAME[0]}
+ }
+ 
++skip_no_probe_record_support
++err=$?
++if [ $err -ne 0 ]; then
++	echo "WARN: Skipping tests. No libtraceevent support"
++	cleanup
++	exit $err
++fi
+ prepare_perf_data
+ test_basic
+ test_ns_rename
+diff --git a/tools/perf/util/bpf_skel/lock_contention.bpf.c b/tools/perf/util/bpf_skel/lock_contention.bpf.c
+index 1d48226ae75d4..8d3cfbb3cc65b 100644
+--- a/tools/perf/util/bpf_skel/lock_contention.bpf.c
++++ b/tools/perf/util/bpf_skel/lock_contention.bpf.c
+@@ -416,8 +416,6 @@ int contention_end(u64 *ctx)
+ 	return 0;
+ }
+ 
+-struct rq {};
+-
+ extern struct rq runqueues __ksym;
+ 
+ struct rq___old {
+diff --git a/tools/perf/util/bpf_skel/vmlinux.h b/tools/perf/util/bpf_skel/vmlinux.h
+index c7ed51b0c1ef9..ab84a6e1da5ee 100644
+--- a/tools/perf/util/bpf_skel/vmlinux.h
++++ b/tools/perf/util/bpf_skel/vmlinux.h
+@@ -171,4 +171,14 @@ struct bpf_perf_event_data_kern {
+ 	struct perf_sample_data *data;
+ 	struct perf_event	*event;
+ } __attribute__((preserve_access_index));
++
++/*
++ * If 'struct rq' isn't defined for lock_contention.bpf.c, for the sake of
++ * rq___old and rq___new, then the type for the 'runqueues' variable ends up
++ * being a forward declaration (BTF_KIND_FWD) while the kernel has it defined
++ * (BTF_KIND_STRUCT). The definition appears in vmlinux.h rather than
++ * lock_contention.bpf.c for consistency with a generated vmlinux.h.
++ */
++struct rq {};
++
+ #endif // __VMLINUX_H
+diff --git a/tools/perf/util/dwarf-aux.c b/tools/perf/util/dwarf-aux.c
+index b074144097710..3bff678745635 100644
+--- a/tools/perf/util/dwarf-aux.c
++++ b/tools/perf/util/dwarf-aux.c
+@@ -1103,7 +1103,7 @@ int die_get_varname(Dwarf_Die *vr_die, struct strbuf *buf)
+ 	ret = die_get_typename(vr_die, buf);
+ 	if (ret < 0) {
+ 		pr_debug("Failed to get type, make it unknown.\n");
+-		ret = strbuf_add(buf, " (unknown_type)", 14);
++		ret = strbuf_add(buf, "(unknown_type)", 14);
+ 	}
+ 
+ 	return ret < 0 ? ret : strbuf_addf(buf, "\t%s", dwarf_diename(vr_die));
+diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
+index 0f54f28a69c25..5a488803d368f 100644
+--- a/tools/perf/util/evsel.h
++++ b/tools/perf/util/evsel.h
+@@ -460,16 +460,24 @@ static inline int evsel__group_idx(struct evsel *evsel)
+ }
+ 
+ /* Iterates group WITHOUT the leader. */
+-#define for_each_group_member(_evsel, _leader) 					\
+-for ((_evsel) = list_entry((_leader)->core.node.next, struct evsel, core.node); \
+-     (_evsel) && (_evsel)->core.leader == (&_leader->core);					\
+-     (_evsel) = list_entry((_evsel)->core.node.next, struct evsel, core.node))
++#define for_each_group_member_head(_evsel, _leader, _head)				\
++for ((_evsel) = list_entry((_leader)->core.node.next, struct evsel, core.node);		\
++	(_evsel) && &(_evsel)->core.node != (_head) &&					\
++	(_evsel)->core.leader == &(_leader)->core;					\
++	(_evsel) = list_entry((_evsel)->core.node.next, struct evsel, core.node))
++
++#define for_each_group_member(_evsel, _leader)				\
++	for_each_group_member_head(_evsel, _leader, &(_leader)->evlist->core.entries)
+ 
+ /* Iterates group WITH the leader. */
+-#define for_each_group_evsel(_evsel, _leader) 					\
+-for ((_evsel) = _leader; 							\
+-     (_evsel) && (_evsel)->core.leader == (&_leader->core);					\
+-     (_evsel) = list_entry((_evsel)->core.node.next, struct evsel, core.node))
++#define for_each_group_evsel_head(_evsel, _leader, _head)				\
++for ((_evsel) = _leader;								\
++	(_evsel) && &(_evsel)->core.node != (_head) &&					\
++	(_evsel)->core.leader == &(_leader)->core;					\
++	(_evsel) = list_entry((_evsel)->core.node.next, struct evsel, core.node))
++
++#define for_each_group_evsel(_evsel, _leader)				\
++	for_each_group_evsel_head(_evsel, _leader, &(_leader)->evlist->core.entries)
+ 
+ static inline bool evsel__has_branch_callstack(const struct evsel *evsel)
+ {
+diff --git a/tools/perf/util/evsel_fprintf.c b/tools/perf/util/evsel_fprintf.c
+index cc80ec554c0a9..036a2171dc1c5 100644
+--- a/tools/perf/util/evsel_fprintf.c
++++ b/tools/perf/util/evsel_fprintf.c
+@@ -2,6 +2,7 @@
+ #include <inttypes.h>
+ #include <stdio.h>
+ #include <stdbool.h>
++#include "util/evlist.h"
+ #include "evsel.h"
+ #include "util/evsel_fprintf.h"
+ #include "util/event.h"
+diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c
+index 5e9c657dd3f7a..b659b149e5b41 100644
+--- a/tools/perf/util/metricgroup.c
++++ b/tools/perf/util/metricgroup.c
+@@ -1146,7 +1146,7 @@ static int metricgroup__add_metric_callback(const struct pmu_metric *pm,
+ 
+ 	if (pm->metric_expr && match_pm_metric(pm, data->metric_name)) {
+ 		bool metric_no_group = data->metric_no_group ||
+-			match_metric(data->metric_name, pm->metricgroup_no_group);
++			match_metric(pm->metricgroup_no_group, data->metric_name);
+ 
+ 		data->has_match = true;
+ 		ret = add_metric(data->list, pm, data->modifier, metric_no_group,
+diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
+index 34b48027b3def..403cd36087726 100644
+--- a/tools/testing/cxl/test/mem.c
++++ b/tools/testing/cxl/test/mem.c
+@@ -52,11 +52,11 @@ static struct cxl_cel_entry mock_cel[] = {
+ 	},
+ 	{
+ 		.opcode = cpu_to_le16(CXL_MBOX_OP_INJECT_POISON),
+-		.effect = cpu_to_le16(0),
++		.effect = cpu_to_le16(EFFECT(2)),
+ 	},
+ 	{
+ 		.opcode = cpu_to_le16(CXL_MBOX_OP_CLEAR_POISON),
+-		.effect = cpu_to_le16(0),
++		.effect = cpu_to_le16(EFFECT(2)),
+ 	},
+ };
+ 
+diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
+index f01f941061296..7f648802caf6a 100644
+--- a/tools/testing/kunit/kunit_kernel.py
++++ b/tools/testing/kunit/kunit_kernel.py
+@@ -92,7 +92,7 @@ class LinuxSourceTreeOperations:
+ 		if stderr:  # likely only due to build warnings
+ 			print(stderr.decode())
+ 
+-	def start(self, params: List[str], build_dir: str) -> subprocess.Popen[str]:
++	def start(self, params: List[str], build_dir: str) -> subprocess.Popen:
+ 		raise RuntimeError('not implemented!')
+ 
+ 
+@@ -113,7 +113,7 @@ class LinuxSourceTreeOperationsQemu(LinuxSourceTreeOperations):
+ 		kconfig.merge_in_entries(base_kunitconfig)
+ 		return kconfig
+ 
+-	def start(self, params: List[str], build_dir: str) -> subprocess.Popen[str]:
++	def start(self, params: List[str], build_dir: str) -> subprocess.Popen:
+ 		kernel_path = os.path.join(build_dir, self._kernel_path)
+ 		qemu_command = ['qemu-system-' + self._qemu_arch,
+ 				'-nodefaults',
+@@ -142,7 +142,7 @@ class LinuxSourceTreeOperationsUml(LinuxSourceTreeOperations):
+ 		kconfig.merge_in_entries(base_kunitconfig)
+ 		return kconfig
+ 
+-	def start(self, params: List[str], build_dir: str) -> subprocess.Popen[str]:
++	def start(self, params: List[str], build_dir: str) -> subprocess.Popen:
+ 		"""Runs the Linux UML binary. Must be named 'linux'."""
+ 		linux_bin = os.path.join(build_dir, 'linux')
+ 		params.extend(['mem=1G', 'console=tty', 'kunit_shutdown=halt'])
+diff --git a/tools/testing/kunit/mypy.ini b/tools/testing/kunit/mypy.ini
+new file mode 100644
+index 0000000000000..ddd288309efaa
+--- /dev/null
++++ b/tools/testing/kunit/mypy.ini
+@@ -0,0 +1,6 @@
++[mypy]
++strict = True
++
++# E.g. we can't write subprocess.Popen[str] until Python 3.9+.
++# But kunit.py tries to support Python 3.7+, so let's disable it.
++disable_error_code = type-arg
+diff --git a/tools/testing/kunit/run_checks.py b/tools/testing/kunit/run_checks.py
+index 8208c3b3135ef..c6d494ea33739 100755
+--- a/tools/testing/kunit/run_checks.py
++++ b/tools/testing/kunit/run_checks.py
+@@ -23,7 +23,7 @@ commands: Dict[str, Sequence[str]] = {
+ 	'kunit_tool_test.py': ['./kunit_tool_test.py'],
+ 	'kunit smoke test': ['./kunit.py', 'run', '--kunitconfig=lib/kunit', '--build_dir=kunit_run_checks'],
+ 	'pytype': ['/bin/sh', '-c', 'pytype *.py'],
+-	'mypy': ['mypy', '--strict', '--exclude', '_test.py$', '--exclude', 'qemu_configs/', '.'],
++	'mypy': ['mypy', '--config-file', 'mypy.ini', '--exclude', '_test.py$', '--exclude', 'qemu_configs/', '.'],
+ }
+ 
+ # The user might not have mypy or pytype installed, skip them if so.
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 28d2c77262bed..538df8fb8c42b 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -88,8 +88,7 @@ TEST_GEN_PROGS_EXTENDED = test_sock_addr test_skb_cgroup_id_user \
+ 	xskxceiver xdp_redirect_multi xdp_synproxy veristat xdp_hw_metadata \
+ 	xdp_features
+ 
+-TEST_CUSTOM_PROGS = $(OUTPUT)/urandom_read $(OUTPUT)/sign-file
+-TEST_GEN_FILES += liburandom_read.so
++TEST_GEN_FILES += liburandom_read.so urandom_read sign-file
+ 
+ # Emit succinct information message describing current building step
+ # $1 - generic step name (e.g., CC, LINK, etc);
+diff --git a/tools/testing/selftests/bpf/prog_tests/check_mtu.c b/tools/testing/selftests/bpf/prog_tests/check_mtu.c
+index 5338d2ea04603..2a9a30650350e 100644
+--- a/tools/testing/selftests/bpf/prog_tests/check_mtu.c
++++ b/tools/testing/selftests/bpf/prog_tests/check_mtu.c
+@@ -183,7 +183,7 @@ cleanup:
+ 
+ void serial_test_check_mtu(void)
+ {
+-	__u32 mtu_lo;
++	int mtu_lo;
+ 
+ 	if (test__start_subtest("bpf_check_mtu XDP-attach"))
+ 		test_check_mtu_xdp_attach();
+diff --git a/tools/testing/selftests/bpf/progs/refcounted_kptr.c b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
+index 1d348a225140d..a3da610b1e6b0 100644
+--- a/tools/testing/selftests/bpf/progs/refcounted_kptr.c
++++ b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
+@@ -375,6 +375,8 @@ long rbtree_refcounted_node_ref_escapes(void *ctx)
+ 	bpf_rbtree_add(&aroot, &n->node, less_a);
+ 	m = bpf_refcount_acquire(n);
+ 	bpf_spin_unlock(&alock);
++	if (!m)
++		return 2;
+ 
+ 	m->key = 2;
+ 	bpf_obj_drop(m);
+diff --git a/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c b/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c
+index efcb308f80adf..0b09e5c915b15 100644
+--- a/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c
++++ b/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c
+@@ -29,7 +29,7 @@ static bool less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
+ }
+ 
+ SEC("?tc")
+-__failure __msg("Unreleased reference id=3 alloc_insn=21")
++__failure __msg("Unreleased reference id=4 alloc_insn=21")
+ long rbtree_refcounted_node_ref_escapes(void *ctx)
+ {
+ 	struct node_acquire *n, *m;
+@@ -43,6 +43,8 @@ long rbtree_refcounted_node_ref_escapes(void *ctx)
+ 	/* m becomes an owning ref but is never drop'd or added to a tree */
+ 	m = bpf_refcount_acquire(n);
+ 	bpf_spin_unlock(&glock);
++	if (!m)
++		return 2;
+ 
+ 	m->key = 2;
+ 	return 0;
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index e4657c5bc3f12..4683ff84044d6 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -1227,45 +1227,46 @@ static bool cmp_str_seq(const char *log, const char *exp)
+ 	return true;
+ }
+ 
+-static int get_xlated_program(int fd_prog, struct bpf_insn **buf, int *cnt)
++static struct bpf_insn *get_xlated_program(int fd_prog, int *cnt)
+ {
++	__u32 buf_element_size = sizeof(struct bpf_insn);
+ 	struct bpf_prog_info info = {};
+ 	__u32 info_len = sizeof(info);
+ 	__u32 xlated_prog_len;
+-	__u32 buf_element_size = sizeof(struct bpf_insn);
++	struct bpf_insn *buf;
+ 
+ 	if (bpf_prog_get_info_by_fd(fd_prog, &info, &info_len)) {
+ 		perror("bpf_prog_get_info_by_fd failed");
+-		return -1;
++		return NULL;
+ 	}
+ 
+ 	xlated_prog_len = info.xlated_prog_len;
+ 	if (xlated_prog_len % buf_element_size) {
+ 		printf("Program length %d is not multiple of %d\n",
+ 		       xlated_prog_len, buf_element_size);
+-		return -1;
++		return NULL;
+ 	}
+ 
+ 	*cnt = xlated_prog_len / buf_element_size;
+-	*buf = calloc(*cnt, buf_element_size);
++	buf = calloc(*cnt, buf_element_size);
+ 	if (!buf) {
+ 		perror("can't allocate xlated program buffer");
+-		return -ENOMEM;
++		return NULL;
+ 	}
+ 
+ 	bzero(&info, sizeof(info));
+ 	info.xlated_prog_len = xlated_prog_len;
+-	info.xlated_prog_insns = (__u64)(unsigned long)*buf;
++	info.xlated_prog_insns = (__u64)(unsigned long)buf;
+ 	if (bpf_prog_get_info_by_fd(fd_prog, &info, &info_len)) {
+ 		perror("second bpf_prog_get_info_by_fd failed");
+ 		goto out_free_buf;
+ 	}
+ 
+-	return 0;
++	return buf;
+ 
+ out_free_buf:
+-	free(*buf);
+-	return -1;
++	free(buf);
++	return NULL;
+ }
+ 
+ static bool is_null_insn(struct bpf_insn *insn)
+@@ -1398,7 +1399,8 @@ static bool check_xlated_program(struct bpf_test *test, int fd_prog)
+ 	if (!check_expected && !check_unexpected)
+ 		goto out;
+ 
+-	if (get_xlated_program(fd_prog, &buf, &cnt)) {
++	buf = get_xlated_program(fd_prog, &cnt);
++	if (!buf) {
+ 		printf("FAIL: can't get xlated program\n");
+ 		result = false;
+ 		goto out;
+diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c
+index 6c03a7d805f9d..ac5eeea094683 100644
+--- a/tools/testing/selftests/bpf/verifier/precise.c
++++ b/tools/testing/selftests/bpf/verifier/precise.c
+@@ -38,25 +38,24 @@
+ 	.fixup_map_array_48b = { 1 },
+ 	.result = VERBOSE_ACCEPT,
+ 	.errstr =
+-	"26: (85) call bpf_probe_read_kernel#113\
+-	last_idx 26 first_idx 20\
+-	regs=4 stack=0 before 25\
+-	regs=4 stack=0 before 24\
+-	regs=4 stack=0 before 23\
+-	regs=4 stack=0 before 22\
+-	regs=4 stack=0 before 20\
+-	parent didn't have regs=4 stack=0 marks\
+-	last_idx 19 first_idx 10\
+-	regs=4 stack=0 before 19\
+-	regs=200 stack=0 before 18\
+-	regs=300 stack=0 before 17\
+-	regs=201 stack=0 before 15\
+-	regs=201 stack=0 before 14\
+-	regs=200 stack=0 before 13\
+-	regs=200 stack=0 before 12\
+-	regs=200 stack=0 before 11\
+-	regs=200 stack=0 before 10\
+-	parent already had regs=0 stack=0 marks",
++	"mark_precise: frame0: last_idx 26 first_idx 20\
++	mark_precise: frame0: regs=r2 stack= before 25\
++	mark_precise: frame0: regs=r2 stack= before 24\
++	mark_precise: frame0: regs=r2 stack= before 23\
++	mark_precise: frame0: regs=r2 stack= before 22\
++	mark_precise: frame0: regs=r2 stack= before 20\
++	mark_precise: frame0: parent state regs=r2 stack=:\
++	mark_precise: frame0: last_idx 19 first_idx 10\
++	mark_precise: frame0: regs=r2,r9 stack= before 19\
++	mark_precise: frame0: regs=r9 stack= before 18\
++	mark_precise: frame0: regs=r8,r9 stack= before 17\
++	mark_precise: frame0: regs=r0,r9 stack= before 15\
++	mark_precise: frame0: regs=r0,r9 stack= before 14\
++	mark_precise: frame0: regs=r9 stack= before 13\
++	mark_precise: frame0: regs=r9 stack= before 12\
++	mark_precise: frame0: regs=r9 stack= before 11\
++	mark_precise: frame0: regs=r9 stack= before 10\
++	mark_precise: frame0: parent state regs= stack=:",
+ },
+ {
+ 	"precise: test 2",
+@@ -100,20 +99,20 @@
+ 	.flags = BPF_F_TEST_STATE_FREQ,
+ 	.errstr =
+ 	"26: (85) call bpf_probe_read_kernel#113\
+-	last_idx 26 first_idx 22\
+-	regs=4 stack=0 before 25\
+-	regs=4 stack=0 before 24\
+-	regs=4 stack=0 before 23\
+-	regs=4 stack=0 before 22\
+-	parent didn't have regs=4 stack=0 marks\
+-	last_idx 20 first_idx 20\
+-	regs=4 stack=0 before 20\
+-	parent didn't have regs=4 stack=0 marks\
+-	last_idx 19 first_idx 17\
+-	regs=4 stack=0 before 19\
+-	regs=200 stack=0 before 18\
+-	regs=300 stack=0 before 17\
+-	parent already had regs=0 stack=0 marks",
++	mark_precise: frame0: last_idx 26 first_idx 22\
++	mark_precise: frame0: regs=r2 stack= before 25\
++	mark_precise: frame0: regs=r2 stack= before 24\
++	mark_precise: frame0: regs=r2 stack= before 23\
++	mark_precise: frame0: regs=r2 stack= before 22\
++	mark_precise: frame0: parent state regs=r2 stack=:\
++	mark_precise: frame0: last_idx 20 first_idx 20\
++	mark_precise: frame0: regs=r2,r9 stack= before 20\
++	mark_precise: frame0: parent state regs=r2,r9 stack=:\
++	mark_precise: frame0: last_idx 19 first_idx 17\
++	mark_precise: frame0: regs=r2,r9 stack= before 19\
++	mark_precise: frame0: regs=r9 stack= before 18\
++	mark_precise: frame0: regs=r8,r9 stack= before 17\
++	mark_precise: frame0: parent state regs= stack=:",
+ },
+ {
+ 	"precise: cross frame pruning",
+@@ -153,15 +152,15 @@
+ 	},
+ 	.prog_type = BPF_PROG_TYPE_XDP,
+ 	.flags = BPF_F_TEST_STATE_FREQ,
+-	.errstr = "5: (2d) if r4 > r0 goto pc+0\
+-	last_idx 5 first_idx 5\
+-	parent didn't have regs=10 stack=0 marks\
+-	last_idx 4 first_idx 2\
+-	regs=10 stack=0 before 4\
+-	regs=10 stack=0 before 3\
+-	regs=0 stack=1 before 2\
+-	last_idx 5 first_idx 5\
+-	parent didn't have regs=1 stack=0 marks",
++	.errstr = "mark_precise: frame0: last_idx 5 first_idx 5\
++	mark_precise: frame0: parent state regs=r4 stack=:\
++	mark_precise: frame0: last_idx 4 first_idx 2\
++	mark_precise: frame0: regs=r4 stack= before 4\
++	mark_precise: frame0: regs=r4 stack= before 3\
++	mark_precise: frame0: regs= stack=-8 before 2\
++	mark_precise: frame0: falling back to forcing all scalars precise\
++	mark_precise: frame0: last_idx 5 first_idx 5\
++	mark_precise: frame0: parent state regs=r0 stack=:",
+ 	.result = VERBOSE_ACCEPT,
+ 	.retval = -1,
+ },
+@@ -179,16 +178,19 @@
+ 	},
+ 	.prog_type = BPF_PROG_TYPE_XDP,
+ 	.flags = BPF_F_TEST_STATE_FREQ,
+-	.errstr = "last_idx 6 first_idx 6\
+-	parent didn't have regs=10 stack=0 marks\
+-	last_idx 5 first_idx 3\
+-	regs=10 stack=0 before 5\
+-	regs=10 stack=0 before 4\
+-	regs=0 stack=1 before 3\
+-	last_idx 6 first_idx 6\
+-	parent didn't have regs=1 stack=0 marks\
+-	last_idx 5 first_idx 3\
+-	regs=1 stack=0 before 5",
++	.errstr = "mark_precise: frame0: last_idx 6 first_idx 6\
++	mark_precise: frame0: parent state regs=r4 stack=:\
++	mark_precise: frame0: last_idx 5 first_idx 3\
++	mark_precise: frame0: regs=r4 stack= before 5\
++	mark_precise: frame0: regs=r4 stack= before 4\
++	mark_precise: frame0: regs= stack=-8 before 3\
++	mark_precise: frame0: falling back to forcing all scalars precise\
++	force_precise: frame0: forcing r0 to be precise\
++	force_precise: frame0: forcing r0 to be precise\
++	mark_precise: frame0: last_idx 6 first_idx 6\
++	mark_precise: frame0: parent state regs=r0 stack=:\
++	mark_precise: frame0: last_idx 5 first_idx 3\
++	mark_precise: frame0: regs=r0 stack= before 5",
+ 	.result = VERBOSE_ACCEPT,
+ 	.retval = -1,
+ },
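(The expectation churn in the hunks above tracks a BPF verifier log format change: the precision mask used to be printed as a raw hex bitmask, e.g. "regs=200 stack=0", and is now rendered as register names and stack offsets, e.g. "regs=r9 stack=". A standalone sketch of that rendering — an illustration, not code from the patch; print_regs is a name invented here:

    #include <stdio.h>

    /* Expand a precision bitmask the way the new verifier log does:
     * bit N set means register rN is being marked precise. */
    static void print_regs(unsigned int mask)
    {
            const char *sep = "";
            int i;

            printf("regs=");
            for (i = 0; i < 10; i++) {
                    if (mask & (1u << i)) {
                            printf("%sr%d", sep, i);
                            sep = ",";
                    }
            }
            printf("\n");
    }

    int main(void)
    {
            print_regs(0x004); /* old log "regs=4"   -> new "regs=r2"    */
            print_regs(0x200); /* old log "regs=200" -> new "regs=r9"    */
            print_regs(0x300); /* old log "regs=300" -> new "regs=r8,r9" */
            return 0;
    }

The stack mask gets the same treatment: old "stack=1" meant bit 0, the slot at fp-8, which the new log writes as "stack=-8".)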
+diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
+index f4f7c0aef702b..a2a90f4bfe9fe 100644
+--- a/tools/testing/selftests/cgroup/test_memcontrol.c
++++ b/tools/testing/selftests/cgroup/test_memcontrol.c
+@@ -292,6 +292,7 @@ static int test_memcg_protection(const char *root, bool min)
+ 	char *children[4] = {NULL};
+ 	const char *attribute = min ? "memory.min" : "memory.low";
+ 	long c[4];
++	long current;
+ 	int i, attempts;
+ 	int fd;
+ 
+@@ -400,7 +401,8 @@ static int test_memcg_protection(const char *root, bool min)
+ 		goto cleanup;
+ 	}
+ 
+-	if (!values_close(cg_read_long(parent[1], "memory.current"), MB(50), 3))
++	current = min ? MB(50) : MB(30);
++	if (!values_close(cg_read_long(parent[1], "memory.current"), current, 3))
+ 		goto cleanup;
+ 
+ 	if (!reclaim_until(children[0], MB(10)))
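(The memcontrol fix above stops assuming memory.min and memory.low behave identically: under memory.low the sibling's memory.current may be reclaimed down toward 30MB, while memory.min pins it near 50MB. A hedged sketch of the tolerance check, assuming values_close() compares within a percentage of the sum as the cgroup selftest helpers do; the sample readings are invented:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MB(x) ((x) << 20)

    /* Assumed shape of the selftest helper: true when a and b differ
     * by no more than err percent of their sum. */
    static bool values_close(long a, long b, int err)
    {
            return labs(a - b) <= (a + b) / 100 * err;
    }

    int main(void)
    {
            long current_low = MB(29); /* hypothetical reading, memory.low */
            long current_min = MB(50); /* hypothetical reading, memory.min */

            printf("low ok: %d\n", values_close(current_low, MB(30), 3));
            printf("min ok: %d\n", values_close(current_min, MB(50), 3));
            return 0;
    }
)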
+diff --git a/tools/testing/selftests/ftrace/ftracetest b/tools/testing/selftests/ftrace/ftracetest
+index 2506621e75dfb..cb5f18c06593d 100755
+--- a/tools/testing/selftests/ftrace/ftracetest
++++ b/tools/testing/selftests/ftrace/ftracetest
+@@ -301,7 +301,7 @@ ktaptest() { # result comment
+     comment="# $comment"
+   fi
+ 
+-  echo $CASENO $result $INSTANCE$CASENAME $comment
++  echo $result $CASENO $INSTANCE$CASENAME $comment
+ }
+ 
+ eval_result() { # sigval
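(The ftracetest change is a one-word swap, but it is what makes the output parse as KTAP: a result line leads with the result, then the test number, then the name. A small sketch of the corrected ordering — the test name here is invented:

    #include <stdio.h>

    int main(void)
    {
            const char *result = "ok";
            int caseno = 1;
            const char *casename = "Basic trace file check";

            /* KTAP wants "<result> <number> <description>"; the old
             * code printed the number first, which parsers reject. */
            printf("%s %d %s\n", result, caseno, casename);
            return 0;
    }
)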
+diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
+index 383ac6fc037d0..ba286d680fd9a 100755
+--- a/tools/testing/selftests/net/rtnetlink.sh
++++ b/tools/testing/selftests/net/rtnetlink.sh
+@@ -860,6 +860,7 @@ EOF
+ 	fi
+ 
+ 	# clean up any leftovers
++	echo 0 > /sys/bus/netdevsim/del_device
+ 	$probed && rmmod netdevsim
+ 
+ 	if [ $ret -ne 0 ]; then
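(The rtnetlink cleanup works because netdevsim devices are created and destroyed through sysfs bus attributes; deleting the leftover device (id 0, the id the test created) before rmmod keeps the module from being busy. The same write, sketched in C rather than shell, with error handling kept minimal:

    #include <stdio.h>

    int main(void)
    {
            /* Equivalent of: echo 0 > /sys/bus/netdevsim/del_device */
            FILE *f = fopen("/sys/bus/netdevsim/del_device", "w");

            if (!f)
                    return 1; /* module not loaded: nothing to clean up */
            fprintf(f, "0\n");
            return fclose(f) != 0;
    }
)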
+diff --git a/tools/testing/selftests/nolibc/nolibc-test.c b/tools/testing/selftests/nolibc/nolibc-test.c
+index 21bacc928bf7b..d37d036876ea9 100644
+--- a/tools/testing/selftests/nolibc/nolibc-test.c
++++ b/tools/testing/selftests/nolibc/nolibc-test.c
+@@ -639,9 +639,9 @@ int run_stdlib(int min, int max)
+ 		CASE_TEST(limit_int_fast32_min);    EXPECT_EQ(1, INT_FAST32_MIN,   (int_fast32_t)    INTPTR_MIN); break;
+ 		CASE_TEST(limit_int_fast32_max);    EXPECT_EQ(1, INT_FAST32_MAX,   (int_fast32_t)    INTPTR_MAX); break;
+ 		CASE_TEST(limit_uint_fast32_max);   EXPECT_EQ(1, UINT_FAST32_MAX,  (uint_fast32_t)   UINTPTR_MAX); break;
+-		CASE_TEST(limit_int_fast64_min);    EXPECT_EQ(1, INT_FAST64_MIN,   (int_fast64_t)    INTPTR_MIN); break;
+-		CASE_TEST(limit_int_fast64_max);    EXPECT_EQ(1, INT_FAST64_MAX,   (int_fast64_t)    INTPTR_MAX); break;
+-		CASE_TEST(limit_uint_fast64_max);   EXPECT_EQ(1, UINT_FAST64_MAX,  (uint_fast64_t)   UINTPTR_MAX); break;
++		CASE_TEST(limit_int_fast64_min);    EXPECT_EQ(1, INT_FAST64_MIN,   (int_fast64_t)    INT64_MIN); break;
++		CASE_TEST(limit_int_fast64_max);    EXPECT_EQ(1, INT_FAST64_MAX,   (int_fast64_t)    INT64_MAX); break;
++		CASE_TEST(limit_uint_fast64_max);   EXPECT_EQ(1, UINT_FAST64_MAX,  (uint_fast64_t)   UINT64_MAX); break;
+ #if __SIZEOF_LONG__ == 8
+ 		CASE_TEST(limit_intptr_min);        EXPECT_EQ(1, INTPTR_MIN,       (intptr_t)        0x8000000000000000LL); break;
+ 		CASE_TEST(limit_intptr_max);        EXPECT_EQ(1, INTPTR_MAX,       (intptr_t)        0x7fffffffffffffffLL); break;
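(The old nolibc expectations only held on 64-bit targets: int_fast64_t is at least 64 bits everywhere, while intptr_t tracks the pointer width, so on a 32-bit build the INTPTR_* casts truncate. A standalone check, separate from the patch, that makes the distinction visible:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            printf("intptr_t:     %zu bits\n", sizeof(intptr_t) * 8);
            printf("int_fast64_t: %zu bits\n", sizeof(int_fast64_t) * 8);

            /* Always true; the old INTPTR_MIN form only matched when
             * pointers were 64 bits wide. */
            printf("INT_FAST64_MIN == INT64_MIN: %d\n",
                   (int64_t)INT_FAST64_MIN == (int64_t)INT64_MIN);
            return 0;
    }
)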
+diff --git a/tools/testing/selftests/rcutorture/configs/rcu/BUSTED-BOOST.boot b/tools/testing/selftests/rcutorture/configs/rcu/BUSTED-BOOST.boot
+index f57720c52c0f9..84f6bb98ce993 100644
+--- a/tools/testing/selftests/rcutorture/configs/rcu/BUSTED-BOOST.boot
++++ b/tools/testing/selftests/rcutorture/configs/rcu/BUSTED-BOOST.boot
+@@ -5,4 +5,4 @@ rcutree.gp_init_delay=3
+ rcutree.gp_cleanup_delay=3
+ rcutree.kthread_prio=2
+ threadirqs
+-tree.use_softirq=0
++rcutree.use_softirq=0
+diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot b/tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot
+index 64f864f1f361f..8e50bfd4b710d 100644
+--- a/tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot
++++ b/tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot
+@@ -4,4 +4,4 @@ rcutree.gp_init_delay=3
+ rcutree.gp_cleanup_delay=3
+ rcutree.kthread_prio=2
+ threadirqs
+-tree.use_softirq=0
++rcutree.use_softirq=0
+diff --git a/tools/testing/selftests/vDSO/vdso_test_clock_getres.c b/tools/testing/selftests/vDSO/vdso_test_clock_getres.c
+index 15dcee16ff726..38d46a8bf7cba 100644
+--- a/tools/testing/selftests/vDSO/vdso_test_clock_getres.c
++++ b/tools/testing/selftests/vDSO/vdso_test_clock_getres.c
+@@ -84,12 +84,12 @@ static inline int vdso_test_clock(unsigned int clock_id)
+ 
+ int main(int argc, char **argv)
+ {
+-	int ret;
++	int ret = 0;
+ 
+ #if _POSIX_TIMERS > 0
+ 
+ #ifdef CLOCK_REALTIME
+-	ret = vdso_test_clock(CLOCK_REALTIME);
++	ret += vdso_test_clock(CLOCK_REALTIME);
+ #endif
+ 
+ #ifdef CLOCK_BOOTTIME
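(The vDSO fix addresses two problems at once: ret was used uninitialized when no clock macro was defined, and each plain assignment discarded the result of the previous test. Accumulating keeps any failure visible in the final status; a reduced sketch, with fake_test standing in for vdso_test_clock:

    #include <stdio.h>

    static int fake_test(int fails)
    {
            return fails ? 1 : 0; /* nonzero on failure, like the tests */
    }

    int main(void)
    {
            int ret = 0;

            ret += fake_test(1); /* an early failure...                  */
            ret += fake_test(0); /* ...no longer masked by later success */
            printf("final status nonzero: %d\n", ret != 0);
            return ret != 0;
    }
)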
+diff --git a/tools/testing/selftests/wireguard/netns.sh b/tools/testing/selftests/wireguard/netns.sh
+index 69c7796c7ca92..405ff262ca93d 100755
+--- a/tools/testing/selftests/wireguard/netns.sh
++++ b/tools/testing/selftests/wireguard/netns.sh
+@@ -514,10 +514,32 @@ n2 bash -c 'printf 0 > /proc/sys/net/ipv4/conf/all/rp_filter'
+ n1 ping -W 1 -c 1 192.168.241.2
+ [[ $(n2 wg show wg0 endpoints) == "$pub1	10.0.0.3:1" ]]
+ 
+-ip1 link del veth1
+-ip1 link del veth3
+-ip1 link del wg0
+-ip2 link del wg0
++ip1 link del dev veth3
++ip1 link del dev wg0
++ip2 link del dev wg0
++
++# Make sure persistent keepalives are sent when an adapter comes up
++ip1 link add dev wg0 type wireguard
++n1 wg set wg0 private-key <(echo "$key1") peer "$pub2" endpoint 10.0.0.1:1 persistent-keepalive 1
++read _ _ tx_bytes < <(n1 wg show wg0 transfer)
++[[ $tx_bytes -eq 0 ]]
++ip1 link set dev wg0 up
++read _ _ tx_bytes < <(n1 wg show wg0 transfer)
++[[ $tx_bytes -gt 0 ]]
++ip1 link del dev wg0
++# This should also happen even if the private key is set later
++ip1 link add dev wg0 type wireguard
++n1 wg set wg0 peer "$pub2" endpoint 10.0.0.1:1 persistent-keepalive 1
++read _ _ tx_bytes < <(n1 wg show wg0 transfer)
++[[ $tx_bytes -eq 0 ]]
++ip1 link set dev wg0 up
++read _ _ tx_bytes < <(n1 wg show wg0 transfer)
++[[ $tx_bytes -eq 0 ]]
++n1 wg set wg0 private-key <(echo "$key1")
++read _ _ tx_bytes < <(n1 wg show wg0 transfer)
++[[ $tx_bytes -gt 0 ]]
++ip1 link del dev veth1
++ip1 link del dev wg0
+ 
+ # We test that Netlink/IPC is working properly by doing things that usually cause split responses
+ ip0 link add dev wg0 type wireguard
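(The new WireGuard checks rely on `wg show wg0 transfer` printing one line per peer of the form "<peer> <rx-bytes> <tx-bytes>"; the shell's `read _ _ tx_bytes` keeps only the third field. The same extraction, sketched in C on an invented sample line:

    #include <stdio.h>

    int main(void)
    {
            /* hypothetical `wg show wg0 transfer` output line */
            const char *line =
                    "xTIBA5rboUvnH4htodjb6e697QjLERt1NAB4mZqp8Dg= 0 148";
            char peer[64];
            long long rx, tx;

            if (sscanf(line, "%63s %lld %lld", peer, &rx, &tx) == 3)
                    printf("keepalive sent: %s (tx=%lld)\n",
                           tx > 0 ? "yes" : "no", tx);
            return 0;
    }
)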
+diff --git a/tools/tracing/rtla/src/osnoise_top.c b/tools/tracing/rtla/src/osnoise_top.c
+index 562f2e4b18c57..3ece8c09ecd95 100644
+--- a/tools/tracing/rtla/src/osnoise_top.c
++++ b/tools/tracing/rtla/src/osnoise_top.c
+@@ -340,8 +340,14 @@ struct osnoise_top_params *osnoise_top_parse_args(int argc, char **argv)
+ 	if (!params)
+ 		exit(1);
+ 
+-	if (strcmp(argv[0], "hwnoise") == 0)
++	if (strcmp(argv[0], "hwnoise") == 0) {
+ 		params->mode = MODE_HWNOISE;
++		/*
++		 * Reduce CPU usage to 75% to avoid killing the system.
++		 */
++		params->runtime = 750000;
++		params->period = 1000000;
++	}
+ 
+ 	while (1) {
+ 		static struct option long_options[] = {
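(The hwnoise defaults above encode a 75% duty cycle: the tracer samples for 750,000 us out of every 1,000,000 us window, leaving a quarter of each second to the rest of the system. The arithmetic, spelled out:

    #include <stdio.h>

    int main(void)
    {
            long runtime_us = 750000;  /* sampling time per window */
            long period_us = 1000000;  /* window length            */

            printf("duty cycle: %.0f%%\n",
                   100.0 * runtime_us / period_us); /* -> 75% */
            return 0;
    }
)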
+diff --git a/tools/virtio/Makefile b/tools/virtio/Makefile
+index 7b7139d97d742..d128925980e05 100644
+--- a/tools/virtio/Makefile
++++ b/tools/virtio/Makefile
+@@ -4,7 +4,18 @@ test: virtio_test vringh_test
+ virtio_test: virtio_ring.o virtio_test.o
+ vringh_test: vringh_test.o vringh.o virtio_ring.o
+ 
+-CFLAGS += -g -O2 -Werror -Wno-maybe-uninitialized -Wall -I. -I../include/ -I ../../usr/include/ -Wno-pointer-sign -fno-strict-overflow -fno-strict-aliasing -fno-common -MMD -U_FORTIFY_SOURCE -include ../../include/linux/kconfig.h -mfunction-return=thunk -fcf-protection=none -mindirect-branch-register
++try-run = $(shell set -e;		\
++	if ($(1)) >/dev/null 2>&1;	\
++	then echo "$(2)";		\
++	else echo "$(3)";		\
++	fi)
++
++__cc-option = $(call try-run,\
++	$(1) -Werror $(2) -c -x c /dev/null -o /dev/null,$(2),)
++cc-option = $(call __cc-option, $(CC),$(1))
++
++CFLAGS += -g -O2 -Werror -Wno-maybe-uninitialized -Wall -I. -I../include/ -I ../../usr/include/ -Wno-pointer-sign -fno-strict-overflow -fno-strict-aliasing -fno-common -MMD -U_FORTIFY_SOURCE -include ../../include/linux/kconfig.h $(call cc-option,-mfunction-return=thunk) $(call cc-option,-fcf-protection=none) $(call cc-option,-mindirect-branch-register)
++
+ CFLAGS += -pthread
+ LDFLAGS += -pthread
+ vpath %.c ../../drivers/virtio ../../drivers/vhost
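(The Makefile helpers added above follow the kernel's cc-option idiom: compile an empty translation unit with -Werror plus the candidate flag, and emit the flag only if the compiler accepts it, so toolchains that lack -mfunction-return=thunk and friends no longer fail the build. The same probe as a standalone C sketch; cc_accepts is a name invented here:

    #include <stdio.h>
    #include <stdlib.h>

    /* Mirror of the Makefile's try-run: succeed only if cc accepts flag. */
    static int cc_accepts(const char *cc, const char *flag)
    {
            char cmd[256];

            snprintf(cmd, sizeof(cmd),
                     "%s -Werror %s -c -x c /dev/null -o /dev/null"
                     " >/dev/null 2>&1", cc, flag);
            return system(cmd) == 0;
    }

    int main(void)
    {
            const char *flag = "-mfunction-return=thunk";

            puts(cc_accepts("cc", flag) ? flag : "(flag not supported)");
            return 0;
    }
)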



Thread overview: 29+ messages
2023-07-19 17:04 Mike Pagano [this message]
  -- strict thread matches above, loose matches on Subject: below --
2023-09-13 11:04 [gentoo-commits] proj/linux-patches:6.4 commit in: / Mike Pagano
2023-09-06 22:15 Mike Pagano
2023-09-02  9:55 Mike Pagano
2023-08-30 13:48 Mike Pagano
2023-08-30 13:45 Mike Pagano
2023-08-23 15:57 Mike Pagano
2023-08-16 20:20 Mike Pagano
2023-08-16 17:28 Mike Pagano
2023-08-11 11:53 Mike Pagano
2023-08-08 18:39 Mike Pagano
2023-08-03 11:47 Mike Pagano
2023-08-02 10:35 Mike Pagano
2023-07-29 18:37 Mike Pagano
2023-07-27 11:46 Mike Pagano
2023-07-27 11:41 Mike Pagano
2023-07-24 20:26 Mike Pagano
2023-07-23 15:15 Mike Pagano
2023-07-19 17:20 Mike Pagano
2023-07-11 11:45 Mike Pagano
2023-07-05 20:40 Mike Pagano
2023-07-05 20:26 Mike Pagano
2023-07-04 12:57 Mike Pagano
2023-07-04 12:57 Mike Pagano
2023-07-03 16:59 Mike Pagano
2023-07-01 18:48 Mike Pagano
2023-07-01 18:19 Mike Pagano
2023-05-09 12:38 Mike Pagano
2023-05-09 12:36 Mike Pagano
