From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:5.18 commit in: /
Date: Thu,  9 Jun 2022 11:25:12 +0000 (UTC)
Message-ID: <1654773779.565fe84a57a3471aa2c2fbfd0bfd62e7178a6fb0.mpagano@gentoo>

commit:     565fe84a57a3471aa2c2fbfd0bfd62e7178a6fb0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jun  9 11:22:59 2022 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jun  9 11:22:59 2022 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=565fe84a

Linux patch 5.18.3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1002_linux-5.18.3.patch | 37846 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 37850 insertions(+)

diff --git a/0000_README b/0000_README
index 561c7140..5acbe17f 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-5.18.2.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.18.2
 
+Patch:  1002_linux-5.18.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.18.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-5.18.3.patch b/1002_linux-5.18.3.patch
new file mode 100644
index 00000000..002288d0
--- /dev/null
+++ b/1002_linux-5.18.3.patch
@@ -0,0 +1,37846 @@
+diff --git a/Documentation/accounting/psi.rst b/Documentation/accounting/psi.rst
+index 860fe651d6453..5e40b3f437f90 100644
+--- a/Documentation/accounting/psi.rst
++++ b/Documentation/accounting/psi.rst
+@@ -37,11 +37,7 @@ Pressure interface
+ Pressure information for each resource is exported through the
+ respective file in /proc/pressure/ -- cpu, memory, and io.
+ 
+-The format for CPU is as such::
+-
+-	some avg10=0.00 avg60=0.00 avg300=0.00 total=0
+-
+-and for memory and IO::
++The format is as such::
+ 
+ 	some avg10=0.00 avg60=0.00 avg300=0.00 total=0
+ 	full avg10=0.00 avg60=0.00 avg300=0.00 total=0
+@@ -58,6 +54,9 @@ situation from a state where some tasks are stalled but the CPU is
+ still doing productive work. As such, time spent in this subset of the
+ stall state is tracked separately and exported in the "full" averages.
+ 
++CPU full is undefined at the system level, but has been reported
++since 5.13, so it is set to zero for backward compatibility.
++
+ The ratios (in %) are tracked as recent trends over ten, sixty, and
+ three hundred second windows, which gives insight into short term events
+ as well as medium and long term trends. The total absolute stall time
+diff --git a/Documentation/conf.py b/Documentation/conf.py
+index 072ee31a301dc..934727e23e0eb 100644
+--- a/Documentation/conf.py
++++ b/Documentation/conf.py
+@@ -161,7 +161,7 @@ finally:
+ #
+ # This is also used if you do content translation via gettext catalogs.
+ # Usually you set "language" from the command line for these cases.
+-language = None
++language = 'en'
+ 
+ # There are two options for replacing |today|: either, you set today to some
+ # non-false value, then it is used:
+diff --git a/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml b/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml
+index 0cebaaefda032..419c3b2ac5a6f 100644
+--- a/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml
++++ b/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml
+@@ -72,6 +72,7 @@ examples:
+                     dc-gpios = <&gpio 43 GPIO_ACTIVE_HIGH>;
+                     reset-gpios = <&gpio 80 GPIO_ACTIVE_HIGH>;
+                     rotation = <270>;
++                    backlight = <&backlight>;
+             };
+     };
+ 
+diff --git a/Documentation/devicetree/bindings/gpio/gpio-altera.txt b/Documentation/devicetree/bindings/gpio/gpio-altera.txt
+index 146e554b3c676..2a80e272cd666 100644
+--- a/Documentation/devicetree/bindings/gpio/gpio-altera.txt
++++ b/Documentation/devicetree/bindings/gpio/gpio-altera.txt
+@@ -9,8 +9,9 @@ Required properties:
+   - The second cell is reserved and is currently unused.
+ - gpio-controller : Marks the device node as a GPIO controller.
+ - interrupt-controller: Mark the device node as an interrupt controller
+-- #interrupt-cells : Should be 1. The interrupt type is fixed in the hardware.
++- #interrupt-cells : Should be 2. The interrupt type is fixed in the hardware.
+   - The first cell is the GPIO offset number within the GPIO controller.
++  - The second cell is the interrupt trigger type and level flags.
+ - interrupts: Specify the interrupt.
+ - altr,interrupt-type: Specifies the interrupt trigger type the GPIO
+   hardware is synthesized. This field is required if the Altera GPIO controller
+@@ -38,6 +39,6 @@ gpio_altr: gpio@ff200000 {
+ 	altr,interrupt-type = <IRQ_TYPE_EDGE_RISING>;
+ 	#gpio-cells = <2>;
+ 	gpio-controller;
+-	#interrupt-cells = <1>;
++	#interrupt-cells = <2>;
+ 	interrupt-controller;
+ };
+diff --git a/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml b/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml
+index 61dd5af80db67..5d2d989de893c 100644
+--- a/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml
++++ b/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml
+@@ -31,7 +31,7 @@ properties:
+         $ref: "regulator.yaml#"
+ 
+         properties:
+-          regulator-name:
++          regulator-compatible:
+             pattern: "^vbuck[1-4]$"
+ 
+     additionalProperties: false
+diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml b/Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml
+index b32457c2fc0b0..3361218e278f3 100644
+--- a/Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml
++++ b/Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml
+@@ -34,6 +34,7 @@ properties:
+       - qcom,rpm-ipq6018
+       - qcom,rpm-msm8226
+       - qcom,rpm-msm8916
++      - qcom,rpm-msm8936
+       - qcom,rpm-msm8953
+       - qcom,rpm-msm8974
+       - qcom,rpm-msm8976
+diff --git a/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml b/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml
+index 5a60fba14bba0..44d08aa3fd85d 100644
+--- a/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml
++++ b/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml
+@@ -49,6 +49,7 @@ properties:
+     maxItems: 2
+ 
+   interconnect-names:
++    minItems: 1
+     items:
+       - const: qspi-config
+       - const: qspi-memory
+diff --git a/Documentation/driver-api/thermal/intel_dptf.rst b/Documentation/driver-api/thermal/intel_dptf.rst
+index 96668dca753a8..372bdb4d04c6d 100644
+--- a/Documentation/driver-api/thermal/intel_dptf.rst
++++ b/Documentation/driver-api/thermal/intel_dptf.rst
+@@ -4,7 +4,7 @@
+ Intel(R) Dynamic Platform and Thermal Framework Sysfs Interface
+ ===============================================================
+ 
+-:Copyright: |copy| 2022 Intel Corporation
++:Copyright: © 2022 Intel Corporation
+ 
+ :Author: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
+ 
+diff --git a/Documentation/sound/alsa-configuration.rst b/Documentation/sound/alsa-configuration.rst
+index 34888d4fc4a83..21ab5e6f7062f 100644
+--- a/Documentation/sound/alsa-configuration.rst
++++ b/Documentation/sound/alsa-configuration.rst
+@@ -2246,7 +2246,7 @@ implicit_fb
+     Apply the generic implicit feedback sync mode.  When this is set
+     and the playback stream sync mode is ASYNC, the driver tries to
+     tie an adjacent ASYNC capture stream as the implicit feedback
+-    source.
++    source.  This is equivalent with quirk_flags bit 17.
+ use_vmalloc
+     Use vmalloc() for allocations of the PCM buffers (default: yes).
+     For architectures with non-coherent memory like ARM or MIPS, the
+@@ -2288,6 +2288,8 @@ quirk_flags
+         * bit 14: Ignore errors for mixer access
+         * bit 15: Support generic DSD raw U32_BE format
+         * bit 16: Set up the interface at first like UAC1
++        * bit 17: Apply the generic implicit feedback sync mode
++        * bit 18: Don't apply implicit feedback sync mode
+ 
+ This module supports multiple devices, autoprobe and hotplugging.
+ 
+diff --git a/Documentation/userspace-api/landlock.rst b/Documentation/userspace-api/landlock.rst
+index f35552ff19ba8..b68e7a51009f8 100644
+--- a/Documentation/userspace-api/landlock.rst
++++ b/Documentation/userspace-api/landlock.rst
+@@ -267,8 +267,8 @@ restrict such paths with dedicated ruleset flags.
+ Ruleset layers
+ --------------
+ 
+-There is a limit of 64 layers of stacked rulesets.  This can be an issue for a
+-task willing to enforce a new ruleset in complement to its 64 inherited
++There is a limit of 16 layers of stacked rulesets.  This can be an issue for a
++task willing to enforce a new ruleset in complement to its 16 inherited
+ rulesets.  Once this limit is reached, sys_landlock_restrict_self() returns
+ E2BIG.  It is then strongly suggested to carefully build rulesets once in the
+ life of a thread, especially for applications able to launch other applications
+diff --git a/Documentation/userspace-api/media/lirc.h.rst.exceptions b/Documentation/userspace-api/media/lirc.h.rst.exceptions
+index 913d17b498313..1aeb7d7afe13b 100644
+--- a/Documentation/userspace-api/media/lirc.h.rst.exceptions
++++ b/Documentation/userspace-api/media/lirc.h.rst.exceptions
+@@ -30,6 +30,8 @@ ignore define LIRC_CAN_REC
+ 
+ ignore define LIRC_CAN_SEND_MASK
+ ignore define LIRC_CAN_REC_MASK
++ignore define LIRC_CAN_SET_REC_FILTER
++ignore define LIRC_CAN_NOTIFY_DECODE
+ 
+ # Obsolete ioctls
+ 
+diff --git a/Makefile b/Makefile
+index 6b1d606a92f6f..eb3adfec0b222 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 18
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Superb Owl
+ 
+diff --git a/arch/alpha/include/asm/page.h b/arch/alpha/include/asm/page.h
+index 18f48a6f2ff6d..8f3f5eecba28b 100644
+--- a/arch/alpha/include/asm/page.h
++++ b/arch/alpha/include/asm/page.h
+@@ -18,7 +18,7 @@ extern void clear_page(void *page);
+ #define clear_user_page(page, vaddr, pg)	clear_page(page)
+ 
+ #define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+-	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vmaddr)
++	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
+ #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
+ 
+ extern void copy_page(void * _to, void * _from);
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-b.dts b/arch/arm/boot/dts/bcm2835-rpi-b.dts
+index 1b63d6b19750b..25d87212cefd3 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-b.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-b.dts
+@@ -53,18 +53,17 @@
+ 			  "GPIO18",
+ 			  "NC", /* GPIO19 */
+ 			  "NC", /* GPIO20 */
+-			  "GPIO21",
++			  "CAM_GPIO0",
+ 			  "GPIO22",
+ 			  "GPIO23",
+ 			  "GPIO24",
+ 			  "GPIO25",
+ 			  "NC", /* GPIO26 */
+-			  "CAM_GPIO0",
+-			  /* Binary number representing build/revision */
+-			  "CONFIG0",
+-			  "CONFIG1",
+-			  "CONFIG2",
+-			  "CONFIG3",
++			  "GPIO27",
++			  "GPIO28",
++			  "GPIO29",
++			  "GPIO30",
++			  "GPIO31",
+ 			  "NC", /* GPIO32 */
+ 			  "NC", /* GPIO33 */
+ 			  "NC", /* GPIO34 */
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
+index 243236bc1e00b..8b043ab62dc83 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
+@@ -74,16 +74,18 @@
+ 			  "GPIO27",
+ 			  "SDA0",
+ 			  "SCL0",
+-			  "NC", /* GPIO30 */
+-			  "NC", /* GPIO31 */
+-			  "NC", /* GPIO32 */
+-			  "NC", /* GPIO33 */
+-			  "NC", /* GPIO34 */
+-			  "NC", /* GPIO35 */
+-			  "NC", /* GPIO36 */
+-			  "NC", /* GPIO37 */
+-			  "NC", /* GPIO38 */
+-			  "NC", /* GPIO39 */
++			  /* Used by BT module */
++			  "CTS0",
++			  "RTS0",
++			  "TXD0",
++			  "RXD0",
++			  /* Used by Wifi */
++			  "SD1_CLK",
++			  "SD1_CMD",
++			  "SD1_DATA0",
++			  "SD1_DATA1",
++			  "SD1_DATA2",
++			  "SD1_DATA3",
+ 			  "CAM_GPIO1", /* GPIO40 */
+ 			  "WL_ON", /* GPIO41 */
+ 			  "NC", /* GPIO42 */
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts b/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
+index e12938baaf12c..c263f5b48b96b 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
++++ b/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
+@@ -45,7 +45,7 @@
+ 		#gpio-cells = <2>;
+ 		gpio-line-names = "BT_ON",
+ 				  "WL_ON",
+-				  "STATUS_LED_R",
++				  "PWR_LED_R",
+ 				  "LAN_RUN",
+ 				  "",
+ 				  "CAM_GPIO0",
+diff --git a/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts b/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts
+index 588d9411ceb61..3dfce4312dfc4 100644
+--- a/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts
++++ b/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts
+@@ -63,8 +63,8 @@
+ 			  "GPIO43",
+ 			  "GPIO44",
+ 			  "GPIO45",
+-			  "GPIO46",
+-			  "GPIO47",
++			  "SMPS_SCL",
++			  "SMPS_SDA",
+ 			  /* Used by eMMC */
+ 			  "SD_CLK_R",
+ 			  "SD_CMD_R",
+diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi
+index 603c700c706f2..65f8a759f1e31 100644
+--- a/arch/arm/boot/dts/bcm5301x.dtsi
++++ b/arch/arm/boot/dts/bcm5301x.dtsi
+@@ -455,7 +455,7 @@
+ 				reg = <0x180 0x4>;
+ 			};
+ 
+-			pinctrl: pin-controller@1c0 {
++			pinctrl: pinctrl@1c0 {
+ 				compatible = "brcm,bcm4708-pinmux";
+ 				reg = <0x1c0 0x24>;
+ 				reg-names = "cru_gpio_control";
+diff --git a/arch/arm/boot/dts/exynos5250-smdk5250.dts b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+index 21fbbf3d8684d..71293749ac481 100644
+--- a/arch/arm/boot/dts/exynos5250-smdk5250.dts
++++ b/arch/arm/boot/dts/exynos5250-smdk5250.dts
+@@ -129,7 +129,7 @@
+ 	samsung,i2c-max-bus-freq = <20000>;
+ 
+ 	eeprom@50 {
+-		compatible = "samsung,s524ad0xd1";
++		compatible = "samsung,s524ad0xd1", "atmel,24c128";
+ 		reg = <0x50>;
+ 	};
+ 
+@@ -289,7 +289,7 @@
+ 	samsung,i2c-max-bus-freq = <20000>;
+ 
+ 	eeprom@51 {
+-		compatible = "samsung,s524ad0xd1";
++		compatible = "samsung,s524ad0xd1", "atmel,24c128";
+ 		reg = <0x51>;
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts b/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts
+index b4a9523e325b4..864dc5018451f 100644
+--- a/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts
++++ b/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts
+@@ -297,7 +297,11 @@
+ 	phy-mode = "rmii";
+ 	phy-reset-gpios = <&gpio1 18 GPIO_ACTIVE_LOW>;
+ 	phy-handle = <&phy>;
+-	clocks = <&clks IMX6QDL_CLK_ENET>, <&clks IMX6QDL_CLK_ENET>, <&rmii_clk>;
++	clocks = <&clks IMX6QDL_CLK_ENET>,
++		 <&clks IMX6QDL_CLK_ENET>,
++		 <&rmii_clk>,
++		 <&clks IMX6QDL_CLK_ENET_REF>;
++	clock-names = "ipg", "ahb", "ptp", "enet_out";
+ 	status = "okay";
+ 
+ 	mdio {
+diff --git a/arch/arm/boot/dts/imx6qdl-colibri.dtsi b/arch/arm/boot/dts/imx6qdl-colibri.dtsi
+index 4e2a309c93fa8..1e86b38147080 100644
+--- a/arch/arm/boot/dts/imx6qdl-colibri.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-colibri.dtsi
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0+ OR MIT
+ /*
+- * Copyright 2014-2020 Toradex
++ * Copyright 2014-2022 Toradex
+  * Copyright 2012 Freescale Semiconductor, Inc.
+  * Copyright 2011 Linaro Ltd.
+  */
+@@ -132,7 +132,7 @@
+ 	clock-frequency = <100000>;
+ 	pinctrl-names = "default", "gpio";
+ 	pinctrl-0 = <&pinctrl_i2c2>;
+-	pinctrl-0 = <&pinctrl_i2c2_gpio>;
++	pinctrl-1 = <&pinctrl_i2c2_gpio>;
+ 	scl-gpios = <&gpio2 30 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ 	sda-gpios = <&gpio3 16 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>;
+ 	status = "okay";
+@@ -488,7 +488,7 @@
+ 		>;
+ 	};
+ 
+-	pinctrl_i2c2_gpio: i2c2grp {
++	pinctrl_i2c2_gpio: i2c2gpiogrp {
+ 		fsl,pins = <
+ 			MX6QDL_PAD_EIM_EB2__GPIO2_IO30 0x4001b8b1
+ 			MX6QDL_PAD_EIM_D16__GPIO3_IO16 0x4001b8b1
+diff --git a/arch/arm/boot/dts/lan966x.dtsi b/arch/arm/boot/dts/lan966x.dtsi
+index 7d28696480508..5e9cbc8cdcbce 100644
+--- a/arch/arm/boot/dts/lan966x.dtsi
++++ b/arch/arm/boot/dts/lan966x.dtsi
+@@ -114,9 +114,9 @@
+ 			compatible = "atmel,at91sam9g46-aes";
+ 			reg = <0xe004c000 0x100>;
+ 			interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>;
+-			dmas = <&dma0 AT91_XDMAC_DT_PERID(13)>,
+-			       <&dma0 AT91_XDMAC_DT_PERID(12)>;
+-			dma-names = "rx", "tx";
++			dmas = <&dma0 AT91_XDMAC_DT_PERID(12)>,
++			       <&dma0 AT91_XDMAC_DT_PERID(13)>;
++			dma-names = "tx", "rx";
+ 			clocks = <&nic_clk>;
+ 			clock-names = "aes_clk";
+ 		};
+diff --git a/arch/arm/boot/dts/ox820.dtsi b/arch/arm/boot/dts/ox820.dtsi
+index 90846a7655b49..dde4364892bf0 100644
+--- a/arch/arm/boot/dts/ox820.dtsi
++++ b/arch/arm/boot/dts/ox820.dtsi
+@@ -287,7 +287,7 @@
+ 				clocks = <&armclk>;
+ 			};
+ 
+-			gic: gic@1000 {
++			gic: interrupt-controller@1000 {
+ 				compatible = "arm,arm11mp-gic";
+ 				interrupt-controller;
+ 				#interrupt-cells = <3>;
+diff --git a/arch/arm/boot/dts/qcom-sdx65.dtsi b/arch/arm/boot/dts/qcom-sdx65.dtsi
+index 796641d30e06c..0c3f93603adc8 100644
+--- a/arch/arm/boot/dts/qcom-sdx65.dtsi
++++ b/arch/arm/boot/dts/qcom-sdx65.dtsi
+@@ -202,7 +202,7 @@
+ 				<WAKE_TCS    2>,
+ 				<CONTROL_TCS 1>;
+ 
+-			rpmhcc: clock-controller@1 {
++			rpmhcc: clock-controller {
+ 				compatible = "qcom,sdx65-rpmh-clk";
+ 				#clock-cells = <1>;
+ 				clock-names = "xo";
+diff --git a/arch/arm/boot/dts/s5pv210-aries.dtsi b/arch/arm/boot/dts/s5pv210-aries.dtsi
+index 26f2be2d9faa2..962266c24aac4 100644
+--- a/arch/arm/boot/dts/s5pv210-aries.dtsi
++++ b/arch/arm/boot/dts/s5pv210-aries.dtsi
+@@ -564,7 +564,6 @@
+ 			reset-gpios = <&mp05 5 GPIO_ACTIVE_LOW>;
+ 			vdd3-supply = <&ldo7_reg>;
+ 			vci-supply = <&ldo17_reg>;
+-			spi-cs-high;
+ 			spi-max-frequency = <1200000>;
+ 
+ 			pinctrl-names = "default";
+@@ -636,7 +635,7 @@
+ };
+ 
+ &i2s0 {
+-	dmas = <&pdma0 9>, <&pdma0 10>, <&pdma0 11>;
++	dmas = <&pdma0 10>, <&pdma0 9>, <&pdma0 11>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm/boot/dts/s5pv210.dtsi b/arch/arm/boot/dts/s5pv210.dtsi
+index 353ba7b09a0c0..c5265f3ae31d6 100644
+--- a/arch/arm/boot/dts/s5pv210.dtsi
++++ b/arch/arm/boot/dts/s5pv210.dtsi
+@@ -239,8 +239,8 @@
+ 			reg = <0xeee30000 0x1000>;
+ 			interrupt-parent = <&vic2>;
+ 			interrupts = <16>;
+-			dma-names = "rx", "tx", "tx-sec";
+-			dmas = <&pdma1 9>, <&pdma1 10>, <&pdma1 11>;
++			dma-names = "tx", "rx", "tx-sec";
++			dmas = <&pdma1 10>, <&pdma1 9>, <&pdma1 11>;
+ 			clock-names = "iis",
+ 				      "i2s_opclk0",
+ 				      "i2s_opclk1";
+@@ -259,8 +259,8 @@
+ 			reg = <0xe2100000 0x1000>;
+ 			interrupt-parent = <&vic2>;
+ 			interrupts = <17>;
+-			dma-names = "rx", "tx";
+-			dmas = <&pdma1 12>, <&pdma1 13>;
++			dma-names = "tx", "rx";
++			dmas = <&pdma1 13>, <&pdma1 12>;
+ 			clock-names = "iis", "i2s_opclk0";
+ 			clocks = <&clocks CLK_I2S1>, <&clocks SCLK_AUDIO1>;
+ 			pinctrl-names = "default";
+@@ -274,8 +274,8 @@
+ 			reg = <0xe2a00000 0x1000>;
+ 			interrupt-parent = <&vic2>;
+ 			interrupts = <18>;
+-			dma-names = "rx", "tx";
+-			dmas = <&pdma1 14>, <&pdma1 15>;
++			dma-names = "tx", "rx";
++			dmas = <&pdma1 15>, <&pdma1 14>;
+ 			clock-names = "iis", "i2s_opclk0";
+ 			clocks = <&clocks CLK_I2S2>, <&clocks SCLK_AUDIO2>;
+ 			pinctrl-names = "default";
+diff --git a/arch/arm/boot/dts/sama7g5.dtsi b/arch/arm/boot/dts/sama7g5.dtsi
+index f691c8f08d047..b632631296928 100644
+--- a/arch/arm/boot/dts/sama7g5.dtsi
++++ b/arch/arm/boot/dts/sama7g5.dtsi
+@@ -857,7 +857,6 @@
+ 			#interrupt-cells = <3>;
+ 			#address-cells = <0>;
+ 			interrupt-controller;
+-			interrupt-parent;
+ 			reg = <0xe8c11000 0x1000>,
+ 				<0xe8c12000 0x2000>;
+ 		};
+diff --git a/arch/arm/boot/dts/socfpga.dtsi b/arch/arm/boot/dts/socfpga.dtsi
+index 7c1d6423d7f8c..b8c5dd7860cb2 100644
+--- a/arch/arm/boot/dts/socfpga.dtsi
++++ b/arch/arm/boot/dts/socfpga.dtsi
+@@ -46,7 +46,7 @@
+ 		      <0xff113000 0x1000>;
+ 	};
+ 
+-	intc: intc@fffed000 {
++	intc: interrupt-controller@fffed000 {
+ 		compatible = "arm,cortex-a9-gic";
+ 		#interrupt-cells = <3>;
+ 		interrupt-controller;
+diff --git a/arch/arm/boot/dts/socfpga_arria10.dtsi b/arch/arm/boot/dts/socfpga_arria10.dtsi
+index 3ba431dfa8c94..f1e50d2e623a3 100644
+--- a/arch/arm/boot/dts/socfpga_arria10.dtsi
++++ b/arch/arm/boot/dts/socfpga_arria10.dtsi
+@@ -38,7 +38,7 @@
+ 		      <0xff113000 0x1000>;
+ 	};
+ 
+-	intc: intc@ffffd000 {
++	intc: interrupt-controller@ffffd000 {
+ 		compatible = "arm,cortex-a9-gic";
+ 		#interrupt-cells = <3>;
+ 		interrupt-controller;
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+index 61e17f44ce815..76c54b006d871 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi
+@@ -141,6 +141,7 @@
+ 		compatible = "snps,dwmac-mdio";
+ 		reset-gpios = <&gpioz 2 GPIO_ACTIVE_LOW>;
+ 		reset-delay-us = <1000>;
++		reset-post-delay-us = <1000>;
+ 
+ 		phy0: ethernet-phy@7 {
+ 			reg = <7>;
+diff --git a/arch/arm/boot/dts/suniv-f1c100s.dtsi b/arch/arm/boot/dts/suniv-f1c100s.dtsi
+index 6100d3b75f613..def8301014487 100644
+--- a/arch/arm/boot/dts/suniv-f1c100s.dtsi
++++ b/arch/arm/boot/dts/suniv-f1c100s.dtsi
+@@ -104,8 +104,10 @@
+ 
+ 		wdt: watchdog@1c20ca0 {
+ 			compatible = "allwinner,suniv-f1c100s-wdt",
+-				     "allwinner,sun4i-a10-wdt";
++				     "allwinner,sun6i-a31-wdt";
+ 			reg = <0x01c20ca0 0x20>;
++			interrupts = <16>;
++			clocks = <&osc32k>;
+ 		};
+ 
+ 		uart0: serial@1c25000 {
+diff --git a/arch/arm/include/asm/arch_gicv3.h b/arch/arm/include/asm/arch_gicv3.h
+index 413abfb42989e..f82a819eb0dbb 100644
+--- a/arch/arm/include/asm/arch_gicv3.h
++++ b/arch/arm/include/asm/arch_gicv3.h
+@@ -48,6 +48,7 @@ static inline u32 read_ ## a64(void)		\
+ 	return read_sysreg(a32); 		\
+ }						\
+ 
++CPUIF_MAP(ICC_EOIR1, ICC_EOIR1_EL1)
+ CPUIF_MAP(ICC_PMR, ICC_PMR_EL1)
+ CPUIF_MAP(ICC_AP0R0, ICC_AP0R0_EL1)
+ CPUIF_MAP(ICC_AP0R1, ICC_AP0R1_EL1)
+@@ -63,12 +64,6 @@ CPUIF_MAP(ICC_AP1R3, ICC_AP1R3_EL1)
+ 
+ /* Low-level accessors */
+ 
+-static inline void gic_write_eoir(u32 irq)
+-{
+-	write_sysreg(irq, ICC_EOIR1);
+-	isb();
+-}
+-
+ static inline void gic_write_dir(u32 val)
+ {
+ 	write_sysreg(val, ICC_DIR);
+diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
+index 459abc5d18195..ea128e32e8ca8 100644
+--- a/arch/arm/kernel/signal.c
++++ b/arch/arm/kernel/signal.c
+@@ -708,6 +708,7 @@ static_assert(offsetof(siginfo_t, si_upper)	== 0x18);
+ static_assert(offsetof(siginfo_t, si_pkey)	== 0x14);
+ static_assert(offsetof(siginfo_t, si_perf_data)	== 0x10);
+ static_assert(offsetof(siginfo_t, si_perf_type)	== 0x14);
++static_assert(offsetof(siginfo_t, si_perf_flags) == 0x18);
+ static_assert(offsetof(siginfo_t, si_band)	== 0x0c);
+ static_assert(offsetof(siginfo_t, si_fd)	== 0x10);
+ static_assert(offsetof(siginfo_t, si_call_addr)	== 0x0c);
+diff --git a/arch/arm/mach-hisi/platsmp.c b/arch/arm/mach-hisi/platsmp.c
+index a56cc64deeb8f..9ce93e0b6cdc3 100644
+--- a/arch/arm/mach-hisi/platsmp.c
++++ b/arch/arm/mach-hisi/platsmp.c
+@@ -67,14 +67,17 @@ static void __init hi3xxx_smp_prepare_cpus(unsigned int max_cpus)
+ 		}
+ 		ctrl_base = of_iomap(np, 0);
+ 		if (!ctrl_base) {
++			of_node_put(np);
+ 			pr_err("failed to map address\n");
+ 			return;
+ 		}
+ 		if (of_property_read_u32(np, "smp-offset", &offset) < 0) {
++			of_node_put(np);
+ 			pr_err("failed to find smp-offset property\n");
+ 			return;
+ 		}
+ 		ctrl_base += offset;
++		of_node_put(np);
+ 	}
+ }
+ 
+@@ -160,6 +163,7 @@ static int hip01_boot_secondary(unsigned int cpu, struct task_struct *idle)
+ 	if (WARN_ON(!node))
+ 		return -1;
+ 	ctrl_base = of_iomap(node, 0);
++	of_node_put(node);
+ 
+ 	/* set the secondary core boot from DDR */
+ 	remap_reg_value = readl_relaxed(ctrl_base + REG_SC_CTRL);
+diff --git a/arch/arm/mach-mediatek/Kconfig b/arch/arm/mach-mediatek/Kconfig
+index 9e0f592d87d8e..35a3430c7942d 100644
+--- a/arch/arm/mach-mediatek/Kconfig
++++ b/arch/arm/mach-mediatek/Kconfig
+@@ -30,6 +30,7 @@ config MACH_MT7623
+ config MACH_MT7629
+ 	bool "MediaTek MT7629 SoCs support"
+ 	default ARCH_MEDIATEK
++	select HAVE_ARM_ARCH_TIMER
+ 
+ config MACH_MT8127
+ 	bool "MediaTek MT8127 SoCs support"
+diff --git a/arch/arm/mach-omap1/clock.c b/arch/arm/mach-omap1/clock.c
+index 9d4a0ab50a468..d63d5eb8d8fdf 100644
+--- a/arch/arm/mach-omap1/clock.c
++++ b/arch/arm/mach-omap1/clock.c
+@@ -41,7 +41,7 @@ static DEFINE_SPINLOCK(clockfw_lock);
+ unsigned long omap1_uart_recalc(struct clk *clk)
+ {
+ 	unsigned int val = __raw_readl(clk->enable_reg);
+-	return val & clk->enable_bit ? 48000000 : 12000000;
++	return val & 1 << clk->enable_bit ? 48000000 : 12000000;
+ }
+ 
+ unsigned long omap1_sossi_recalc(struct clk *clk)
+diff --git a/arch/arm/mach-pxa/cm-x300.c b/arch/arm/mach-pxa/cm-x300.c
+index 2e35354b61f56..167e871f059ef 100644
+--- a/arch/arm/mach-pxa/cm-x300.c
++++ b/arch/arm/mach-pxa/cm-x300.c
+@@ -354,13 +354,13 @@ static struct platform_device cm_x300_spi_gpio = {
+ static struct gpiod_lookup_table cm_x300_spi_gpiod_table = {
+ 	.dev_id         = "spi_gpio",
+ 	.table          = {
+-		GPIO_LOOKUP("gpio-pxa", GPIO_LCD_SCL,
++		GPIO_LOOKUP("pca9555.1", GPIO_LCD_SCL - GPIO_LCD_BASE,
+ 			    "sck", GPIO_ACTIVE_HIGH),
+-		GPIO_LOOKUP("gpio-pxa", GPIO_LCD_DIN,
++		GPIO_LOOKUP("pca9555.1", GPIO_LCD_DIN - GPIO_LCD_BASE,
+ 			    "mosi", GPIO_ACTIVE_HIGH),
+-		GPIO_LOOKUP("gpio-pxa", GPIO_LCD_DOUT,
++		GPIO_LOOKUP("pca9555.1", GPIO_LCD_DOUT - GPIO_LCD_BASE,
+ 			    "miso", GPIO_ACTIVE_HIGH),
+-		GPIO_LOOKUP("gpio-pxa", GPIO_LCD_CS,
++		GPIO_LOOKUP("pca9555.1", GPIO_LCD_CS - GPIO_LCD_BASE,
+ 			    "cs", GPIO_ACTIVE_HIGH),
+ 		{ },
+ 	},
+diff --git a/arch/arm/mach-pxa/magician.c b/arch/arm/mach-pxa/magician.c
+index 200fd35168e05..fcced6499faee 100644
+--- a/arch/arm/mach-pxa/magician.c
++++ b/arch/arm/mach-pxa/magician.c
+@@ -681,7 +681,7 @@ static struct platform_device bq24022 = {
+ static struct gpiod_lookup_table bq24022_gpiod_table = {
+ 	.dev_id = "gpio-regulator",
+ 	.table = {
+-		GPIO_LOOKUP("gpio-pxa", EGPIO_MAGICIAN_BQ24022_ISET2,
++		GPIO_LOOKUP("htc-egpio-0", EGPIO_MAGICIAN_BQ24022_ISET2 - MAGICIAN_EGPIO_BASE,
+ 			    NULL, GPIO_ACTIVE_HIGH),
+ 		GPIO_LOOKUP("gpio-pxa", GPIO30_MAGICIAN_BQ24022_nCHARGE_EN,
+ 			    "enable", GPIO_ACTIVE_LOW),
+diff --git a/arch/arm/mach-pxa/tosa.c b/arch/arm/mach-pxa/tosa.c
+index 431709725d02b..ded5e343e1984 100644
+--- a/arch/arm/mach-pxa/tosa.c
++++ b/arch/arm/mach-pxa/tosa.c
+@@ -296,9 +296,9 @@ static struct gpiod_lookup_table tosa_mci_gpio_table = {
+ 	.table = {
+ 		GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_nSD_DETECT,
+ 			    "cd", GPIO_ACTIVE_LOW),
+-		GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_SD_WP,
++		GPIO_LOOKUP("sharp-scoop.0", TOSA_GPIO_SD_WP - TOSA_SCOOP_GPIO_BASE,
+ 			    "wp", GPIO_ACTIVE_LOW),
+-		GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_PWR_ON,
++		GPIO_LOOKUP("sharp-scoop.0", TOSA_GPIO_PWR_ON - TOSA_SCOOP_GPIO_BASE,
+ 			    "power", GPIO_ACTIVE_HIGH),
+ 		{ },
+ 	},
+diff --git a/arch/arm/mach-vexpress/dcscb.c b/arch/arm/mach-vexpress/dcscb.c
+index a0554d7d04f7c..e1adc098f89ac 100644
+--- a/arch/arm/mach-vexpress/dcscb.c
++++ b/arch/arm/mach-vexpress/dcscb.c
+@@ -144,6 +144,7 @@ static int __init dcscb_init(void)
+ 	if (!node)
+ 		return -ENODEV;
+ 	dcscb_base = of_iomap(node, 0);
++	of_node_put(node);
+ 	if (!dcscb_base)
+ 		return -EADDRNOTAVAIL;
+ 	cfg = readl_relaxed(dcscb_base + DCS_CFG_R);
+diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms
+index 30b123cde02c5..aaeaf57c82225 100644
+--- a/arch/arm64/Kconfig.platforms
++++ b/arch/arm64/Kconfig.platforms
+@@ -253,6 +253,7 @@ config ARCH_INTEL_SOCFPGA
+ 
+ config ARCH_SYNQUACER
+ 	bool "Socionext SynQuacer SoC Family"
++	select IRQ_FASTEOI_HIERARCHY_HANDLERS
+ 
+ config ARCH_TEGRA
+ 	bool "NVIDIA Tegra SoC Family"
+diff --git a/arch/arm64/boot/dts/arm/juno-r1-scmi.dts b/arch/arm64/boot/dts/arm/juno-r1-scmi.dts
+index 190a0fba4ad68..fd1f0d26d751a 100644
+--- a/arch/arm64/boot/dts/arm/juno-r1-scmi.dts
++++ b/arch/arm64/boot/dts/arm/juno-r1-scmi.dts
+@@ -7,11 +7,11 @@
+ 	};
+ 
+ 	etf@20140000 {
+-		power-domains = <&scmi_devpd 0>;
++		power-domains = <&scmi_devpd 8>;
+ 	};
+ 
+ 	funnel@20150000 {
+-		power-domains = <&scmi_devpd 0>;
++		power-domains = <&scmi_devpd 8>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/arm/juno-r2-scmi.dts b/arch/arm64/boot/dts/arm/juno-r2-scmi.dts
+index dbf13770084f5..35e6d4762c463 100644
+--- a/arch/arm64/boot/dts/arm/juno-r2-scmi.dts
++++ b/arch/arm64/boot/dts/arm/juno-r2-scmi.dts
+@@ -7,11 +7,11 @@
+ 	};
+ 
+ 	etf@20140000 {
+-		power-domains = <&scmi_devpd 0>;
++		power-domains = <&scmi_devpd 8>;
+ 	};
+ 
+ 	funnel@20150000 {
+-		power-domains = <&scmi_devpd 0>;
++		power-domains = <&scmi_devpd 8>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-espressobin-ultra.dts b/arch/arm64/boot/dts/marvell/armada-3720-espressobin-ultra.dts
+index c5eb3604dd5b7..119db6b541b7b 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-espressobin-ultra.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-espressobin-ultra.dts
+@@ -71,10 +71,6 @@
+ 
+ &spi0 {
+ 	flash@0 {
+-		spi-max-frequency = <108000000>;
+-		spi-rx-bus-width = <4>;
+-		spi-tx-bus-width = <4>;
+-
+ 		partitions {
+ 			compatible = "fixed-partitions";
+ 			#address-cells = <1>;
+@@ -112,7 +108,6 @@
+ 
+ &usb3 {
+ 	usb-phy = <&usb3_phy>;
+-	status = "disabled";
+ };
+ 
+ &mdio {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8192.dtsi b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+index 411feb2946134..bcecc74844535 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8192.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+@@ -679,7 +679,7 @@
+ 			assigned-clock-parents = <&clk26m>;
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+-			status = "disable";
++			status = "disabled";
+ 		};
+ 
+ 		audsys: clock-controller@11210000 {
+diff --git a/arch/arm64/boot/dts/nvidia/tegra210.dtsi b/arch/arm64/boot/dts/nvidia/tegra210.dtsi
+index 218a2b32200f8..4f0e51f1a3430 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra210.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra210.dtsi
+@@ -1366,8 +1366,9 @@
+ 			 <&tegra_car TEGRA210_CLK_DFLL_REF>,
+ 			 <&tegra_car TEGRA210_CLK_I2C5>;
+ 		clock-names = "soc", "ref", "i2c";
+-		resets = <&tegra_car TEGRA210_RST_DFLL_DVCO>;
+-		reset-names = "dvco";
++		resets = <&tegra_car TEGRA210_RST_DFLL_DVCO>,
++			 <&tegra_car 155>;
++		reset-names = "dvco", "dfll";
+ 		#clock-cells = <0>;
+ 		clock-output-names = "dfllCPU_out";
+ 		status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+index d80b1cefab100..8d4e0e1934396 100644
+--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+@@ -13,7 +13,7 @@
+ 	clocks {
+ 		sleep_clk: sleep_clk {
+ 			compatible = "fixed-clock";
+-			clock-frequency = <32000>;
++			clock-frequency = <32768>;
+ 			#clock-cells = <0>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index 8c1dc5155b713..b1e595cb4b901 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -183,8 +183,8 @@
+ 			no-map;
+ 		};
+ 
+-		cont_splash_mem: memory@3800000 {
+-			reg = <0 0x03800000 0 0x2400000>;
++		cont_splash_mem: memory@3401000 {
++			reg = <0 0x03401000 0 0x2200000>;
+ 			no-map;
+ 		};
+ 
+@@ -498,7 +498,7 @@
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
+ 			qcom,controlled-remotely;
+-			num-channels = <18>;
++			num-channels = <24>;
+ 			qcom,num-ees = <4>;
+ 		};
+ 
+@@ -634,7 +634,7 @@
+ 			#dma-cells = <1>;
+ 			qcom,ee = <0>;
+ 			qcom,controlled-remotely;
+-			num-channels = <18>;
++			num-channels = <24>;
+ 			qcom,num-ees = <4>;
+ 		};
+ 
+diff --git a/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts b/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
+index 845eb7a6bf92e..0e63f707b9115 100644
+--- a/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
++++ b/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts
+@@ -29,7 +29,7 @@
+ 	};
+ 
+ 	/* Fixed crystal oscillator dedicated to MCP2518FD */
+-	clk40M: can_clock {
++	clk40M: can-clock {
+ 		compatible = "fixed-clock";
+ 		#clock-cells = <0>;
+ 		clock-frequency = <40000000>;
+diff --git a/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi b/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi
+index dc17f2079695f..488caa48cba3a 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280-herobrine.dtsi
+@@ -677,7 +677,6 @@ ap_ec_spi: &spi10 {
+ 		function = "gpio";
+ 		bias-disable;
+ 		drive-strength = <2>;
+-		output-high;
+ 	};
+ 
+ 	fp_to_ap_irq_l: fp-to-ap-irq-l {
+@@ -691,7 +690,6 @@ ap_ec_spi: &spi10 {
+ 		pins = "gpio68";
+ 		function = "gpio";
+ 		bias-disable;
+-		output-low;
+ 	};
+ 
+ 	gsc_ap_int_odl: gsc-ap-int-odl {
+@@ -741,7 +739,7 @@ ap_ec_spi: &spi10 {
+ 		bias-pull-up;
+ 	};
+ 
+-	sar1_irq_odl: sar0-irq-odl {
++	sar1_irq_odl: sar1-irq-odl {
+ 		pins = "gpio140";
+ 		function = "gpio";
+ 		bias-pull-up;
+diff --git a/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi b/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
+index ecbf2b89d8963..5ab3696af3548 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280-idp.dtsi
+@@ -400,10 +400,13 @@
+ 
+ &qup_uart7_cts {
+ 	/*
+-	 * Configure a pull-down on CTS to match the pull of
+-	 * the Bluetooth module.
++	 * Configure a bias-bus-hold on CTS to lower power
++	 * usage when Bluetooth is turned off. Bus hold will
++	 * maintain a low power state regardless of whether
++	 * the Bluetooth module drives the pin in either
++	 * direction or leaves the pin fully unpowered.
+ 	 */
+-	bias-pull-down;
++	bias-bus-hold;
+ };
+ 
+ &qup_uart7_rts {
+@@ -495,10 +498,13 @@
+ 		pins = "gpio28";
+ 		function = "gpio";
+ 		/*
+-		 * Configure a pull-down on CTS to match the pull of
+-		 * the Bluetooth module.
++		 * Configure a bias-bus-hold on CTS to lower power
++		 * usage when Bluetooth is turned off. Bus hold will
++		 * maintain a low power state regardless of whether
++		 * the Bluetooth module drives the pin in either
++		 * direction or leaves the pin fully unpowered.
+ 		 */
+-		bias-pull-down;
++		bias-bus-hold;
+ 	};
+ 
+ 	qup_uart7_sleep_rts: qup-uart7-sleep-rts {
+diff --git a/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi b/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi
+index b833ba1e8f4af..98b5cd70bca52 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280-qcard.dtsi
+@@ -398,8 +398,14 @@ mos_bt_uart: &uart7 {
+ 
+ /* For mos_bt_uart */
+ &qup_uart7_cts {
+-	/* Configure a pull-down on CTS to match the pull of the Bluetooth module. */
+-	bias-pull-down;
++	/*
++	 * Configure a bias-bus-hold on CTS to lower power
++	 * usage when Bluetooth is turned off. Bus hold will
++	 * maintain a low power state regardless of whether
++	 * the Bluetooth module drives the pin in either
++	 * direction or leaves the pin fully unpowered.
++	 */
++	bias-bus-hold;
+ };
+ 
+ /* For mos_bt_uart */
+@@ -490,10 +496,13 @@ mos_bt_uart: &uart7 {
+ 		pins = "gpio28";
+ 		function = "gpio";
+ 		/*
+-		 * Configure a pull-down on CTS to match the pull of
+-		 * the Bluetooth module.
++		 * Configure a bias-bus-hold on CTS to lower power
++		 * usage when Bluetooth is turned off. Bus hold will
++		 * maintain a low power state regardless of whether
++		 * the Bluetooth module drives the pin in either
++		 * direction or leaves the pin fully unpowered.
+ 		 */
+-		bias-pull-down;
++		bias-bus-hold;
+ 	};
+ 
+ 	/* For mos_bt_uart */
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-beryllium.dts b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-beryllium.dts
+index 367389526b418..a97f5e89e1d0f 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-xiaomi-beryllium.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-xiaomi-beryllium.dts
+@@ -218,7 +218,7 @@
+ 	panel@0 {
+ 		compatible = "tianma,fhd-video";
+ 		reg = <0>;
+-		vddi0-supply = <&vreg_l14a_1p8>;
++		vddio-supply = <&vreg_l14a_1p8>;
+ 		vddpos-supply = <&lab>;
+ 		vddneg-supply = <&ibb>;
+ 
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index 934e29b9e153b..e63b7b0458cf8 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -693,6 +693,9 @@
+ 			clock-names = "m-ahb", "s-ahb";
+ 			clocks = <&gcc GCC_QUPV3_WRAP_0_M_AHB_CLK>,
+ 				 <&gcc GCC_QUPV3_WRAP_0_S_AHB_CLK>;
++			iommus = <&apps_smmu 0x5a3 0x0>;
++			interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>;
++			interconnect-names = "qup-core";
+ 			#address-cells = <2>;
+ 			#size-cells = <2>;
+ 			ranges;
+@@ -718,6 +721,9 @@
+ 			clock-names = "m-ahb", "s-ahb";
+ 			clocks = <&gcc GCC_QUPV3_WRAP_1_M_AHB_CLK>,
+ 				 <&gcc GCC_QUPV3_WRAP_1_S_AHB_CLK>;
++			iommus = <&apps_smmu 0x43 0x0>;
++			interconnects = <&clk_virt MASTER_QUP_CORE_1 0 &clk_virt SLAVE_QUP_CORE_1 0>;
++			interconnect-names = "qup-core";
+ 			#address-cells = <2>;
+ 			#size-cells = <2>;
+ 			ranges;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+index 080457a68e3c7..88f26d89eea1a 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+@@ -1534,6 +1534,7 @@
+ 			reg = <0xf780 0x24>;
+ 			clocks = <&sdhci>;
+ 			clock-names = "emmcclk";
++			drive-impedance-ohm = <50>;
+ 			#phy-cells = <0>;
+ 			status = "disabled";
+ 		};
+@@ -1544,7 +1545,6 @@
+ 			clock-names = "refclk";
+ 			#phy-cells = <1>;
+ 			resets = <&cru SRST_PCIEPHY>;
+-			drive-impedance-ohm = <50>;
+ 			reset-names = "phy";
+ 			status = "disabled";
+ 		};
+diff --git a/arch/arm64/boot/dts/ti/k3-am64-mcu.dtsi b/arch/arm64/boot/dts/ti/k3-am64-mcu.dtsi
+index 2bb5c9ff172c9..02d4285acbb8d 100644
+--- a/arch/arm64/boot/dts/ti/k3-am64-mcu.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am64-mcu.dtsi
+@@ -10,7 +10,6 @@
+ 		compatible = "ti,am64-uart", "ti,am654-uart";
+ 		reg = <0x00 0x04a00000 0x00 0x100>;
+ 		interrupts = <GIC_SPI 185 IRQ_TYPE_LEVEL_HIGH>;
+-		clock-frequency = <48000000>;
+ 		current-speed = <115200>;
+ 		power-domains = <&k3_pds 149 TI_SCI_PD_EXCLUSIVE>;
+ 		clocks = <&k3_clks 149 0>;
+@@ -21,7 +20,6 @@
+ 		compatible = "ti,am64-uart", "ti,am654-uart";
+ 		reg = <0x00 0x04a10000 0x00 0x100>;
+ 		interrupts = <GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>;
+-		clock-frequency = <48000000>;
+ 		current-speed = <115200>;
+ 		power-domains = <&k3_pds 160 TI_SCI_PD_EXCLUSIVE>;
+ 		clocks = <&k3_clks 160 0>;
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index 50aa3d75ab4f4..f30af6e1fe401 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -1029,6 +1029,7 @@ CONFIG_SM_GCC_8350=y
+ CONFIG_SM_GCC_8450=y
+ CONFIG_SM_GPUCC_8150=y
+ CONFIG_SM_GPUCC_8250=y
++CONFIG_SM_DISPCC_8250=y
+ CONFIG_QCOM_HFPLL=y
+ CONFIG_CLK_GFM_LPASS_SM8250=m
+ CONFIG_CLK_RCAR_USB2_CLOCK_SEL=y
+diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
+index 8bd5afc7b692e..48d4473e8eee2 100644
+--- a/arch/arm64/include/asm/arch_gicv3.h
++++ b/arch/arm64/include/asm/arch_gicv3.h
+@@ -26,12 +26,6 @@
+  * sets the GP register's most significant bits to 0 with an explicit cast.
+  */
+ 
+-static inline void gic_write_eoir(u32 irq)
+-{
+-	write_sysreg_s(irq, SYS_ICC_EOIR1_EL1);
+-	isb();
+-}
+-
+ static __always_inline void gic_write_dir(u32 irq)
+ {
+ 	write_sysreg_s(irq, SYS_ICC_DIR_EL1);
+diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
+index 73e38d9a540ce..6b1a12c23fe77 100644
+--- a/arch/arm64/include/asm/processor.h
++++ b/arch/arm64/include/asm/processor.h
+@@ -381,12 +381,10 @@ long get_tagged_addr_ctrl(struct task_struct *task);
+  * of header definitions for the use of task_stack_page.
+  */
+ 
+-#define current_top_of_stack()								\
+-({											\
+-	struct stack_info _info;							\
+-	BUG_ON(!on_accessible_stack(current, current_stack_pointer, 1, &_info));	\
+-	_info.high;									\
+-})
++/*
++ * The top of the current task's task stack
++ */
++#define current_top_of_stack()	((unsigned long)current->stack + THREAD_SIZE)
+ #define on_thread_stack()	(on_task_stack(current, current_stack_pointer, 1, NULL))
+ 
+ #endif /* __ASSEMBLY__ */
+diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
+index 4a4122ef6f39b..41b5d9d3672ab 100644
+--- a/arch/arm64/kernel/signal.c
++++ b/arch/arm64/kernel/signal.c
+@@ -1011,6 +1011,7 @@ static_assert(offsetof(siginfo_t, si_upper)	== 0x28);
+ static_assert(offsetof(siginfo_t, si_pkey)	== 0x20);
+ static_assert(offsetof(siginfo_t, si_perf_data)	== 0x18);
+ static_assert(offsetof(siginfo_t, si_perf_type)	== 0x20);
++static_assert(offsetof(siginfo_t, si_perf_flags) == 0x24);
+ static_assert(offsetof(siginfo_t, si_band)	== 0x10);
+ static_assert(offsetof(siginfo_t, si_fd)	== 0x18);
+ static_assert(offsetof(siginfo_t, si_call_addr)	== 0x10);
+diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
+index d984282b979f8..4700f8522d27b 100644
+--- a/arch/arm64/kernel/signal32.c
++++ b/arch/arm64/kernel/signal32.c
+@@ -487,6 +487,7 @@ static_assert(offsetof(compat_siginfo_t, si_upper)	== 0x18);
+ static_assert(offsetof(compat_siginfo_t, si_pkey)	== 0x14);
+ static_assert(offsetof(compat_siginfo_t, si_perf_data)	== 0x10);
+ static_assert(offsetof(compat_siginfo_t, si_perf_type)	== 0x14);
++static_assert(offsetof(compat_siginfo_t, si_perf_flags)	== 0x18);
+ static_assert(offsetof(compat_siginfo_t, si_band)	== 0x0c);
+ static_assert(offsetof(compat_siginfo_t, si_fd)		== 0x10);
+ static_assert(offsetof(compat_siginfo_t, si_call_addr)	== 0x0c);
+diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
+index 12c6864e51e13..df14336c3a29c 100644
+--- a/arch/arm64/kernel/sys_compat.c
++++ b/arch/arm64/kernel/sys_compat.c
+@@ -113,6 +113,6 @@ long compat_arm_syscall(struct pt_regs *regs, int scno)
+ 	addr = instruction_pointer(regs) - (compat_thumb_mode(regs) ? 2 : 4);
+ 
+ 	arm64_notify_die("Oops - bad compat syscall(2)", regs,
+-			 SIGILL, ILL_ILLTRP, addr, scno);
++			 SIGILL, ILL_ILLTRP, addr, 0);
+ 	return 0;
+ }
+diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
+index b5447e53cd73e..0dea80bf6de46 100644
+--- a/arch/arm64/mm/copypage.c
++++ b/arch/arm64/mm/copypage.c
+@@ -16,8 +16,8 @@
+ 
+ void copy_highpage(struct page *to, struct page *from)
+ {
+-	struct page *kto = page_address(to);
+-	struct page *kfrom = page_address(from);
++	void *kto = page_address(to);
++	void *kfrom = page_address(from);
+ 
+ 	copy_page(kto, kfrom);
+ 
+diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
+index 42920f25e73c8..34ba684d5962b 100644
+--- a/arch/csky/kernel/probes/kprobes.c
++++ b/arch/csky/kernel/probes/kprobes.c
+@@ -30,7 +30,7 @@ static int __kprobes patch_text_cb(void *priv)
+ 	struct csky_insn_patch *param = priv;
+ 	unsigned int addr = (unsigned int)param->addr;
+ 
+-	if (atomic_inc_return(&param->cpu_count) == 1) {
++	if (atomic_inc_return(&param->cpu_count) == num_online_cpus()) {
+ 		*(u16 *) addr = cpu_to_le16(param->opcode);
+ 		dcache_wb_range(addr, addr + 2);
+ 		atomic_inc(&param->cpu_count);
+diff --git a/arch/m68k/Kconfig.cpu b/arch/m68k/Kconfig.cpu
+index 16ea9a67723c0..3d5da25c73b5a 100644
+--- a/arch/m68k/Kconfig.cpu
++++ b/arch/m68k/Kconfig.cpu
+@@ -327,7 +327,7 @@ comment "Processor Specific Options"
+ 
+ config M68KFPU_EMU
+ 	bool "Math emulation support"
+-	depends on MMU
++	depends on M68KCLASSIC && FPU
+ 	help
+ 	  At some point in the future, this will cause floating-point math
+ 	  instructions to be emulated by the kernel on machines that lack a
+diff --git a/arch/m68k/include/asm/raw_io.h b/arch/m68k/include/asm/raw_io.h
+index 80eb2396d01eb..3ba40bc1dfaa9 100644
+--- a/arch/m68k/include/asm/raw_io.h
++++ b/arch/m68k/include/asm/raw_io.h
+@@ -80,14 +80,14 @@
+ 	({ u16 __v = le16_to_cpu(*(__force volatile u16 *) (addr)); __v; })
+ 
+ #define rom_out_8(addr, b)	\
+-	({u8 __maybe_unused __w, __v = (b);  u32 _addr = ((u32) (addr)); \
++	(void)({u8 __maybe_unused __w, __v = (b);  u32 _addr = ((u32) (addr)); \
+ 	__w = ((*(__force volatile u8 *)  ((_addr | 0x10000) + (__v<<1)))); })
+ #define rom_out_be16(addr, w)	\
+-	({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \
++	(void)({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \
+ 	__w = ((*(__force volatile u16 *) ((_addr & 0xFFFF0000UL) + ((__v & 0xFF)<<1)))); \
+ 	__w = ((*(__force volatile u16 *) ((_addr | 0x10000) + ((__v >> 8)<<1)))); })
+ #define rom_out_le16(addr, w)	\
+-	({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \
++	(void)({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \
+ 	__w = ((*(__force volatile u16 *) ((_addr & 0xFFFF0000UL) + ((__v >> 8)<<1)))); \
+ 	__w = ((*(__force volatile u16 *) ((_addr | 0x10000) + ((__v & 0xFF)<<1)))); })
+ 
+diff --git a/arch/m68k/kernel/signal.c b/arch/m68k/kernel/signal.c
+index 49533f65958a6..b9f6908a31bc3 100644
+--- a/arch/m68k/kernel/signal.c
++++ b/arch/m68k/kernel/signal.c
+@@ -625,6 +625,7 @@ static inline void siginfo_build_tests(void)
+ 	/* _sigfault._perf */
+ 	BUILD_BUG_ON(offsetof(siginfo_t, si_perf_data) != 0x10);
+ 	BUILD_BUG_ON(offsetof(siginfo_t, si_perf_type) != 0x14);
++	BUILD_BUG_ON(offsetof(siginfo_t, si_perf_flags) != 0x18);
+ 
+ 	/* _sigpoll */
+ 	BUILD_BUG_ON(offsetof(siginfo_t, si_band)   != 0x0c);
+diff --git a/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h b/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h
+index c8385c4e8664a..568fe09332eb4 100644
+--- a/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h
++++ b/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h
+@@ -25,7 +25,6 @@
+ #define cpu_has_4kex			1
+ #define cpu_has_3k_cache		0
+ #define cpu_has_4k_cache		1
+-#define cpu_has_fpu			1
+ #define cpu_has_nofpuex			0
+ #define cpu_has_32fpr			1
+ #define cpu_has_counter			1
+diff --git a/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h b/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h
+index 8ad0c424a9afb..ce4e4c6e09e24 100644
+--- a/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h
++++ b/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h
+@@ -28,7 +28,6 @@
+ #define cpu_has_4kex			1
+ #define cpu_has_3k_cache		0
+ #define cpu_has_4k_cache		1
+-#define cpu_has_fpu			1
+ #define cpu_has_nofpuex			0
+ #define cpu_has_32fpr			1
+ #define cpu_has_counter			1
+diff --git a/arch/mips/include/asm/mach-ralink/spaces.h b/arch/mips/include/asm/mach-ralink/spaces.h
+index f7af11ea2d612..a9f0570d0f044 100644
+--- a/arch/mips/include/asm/mach-ralink/spaces.h
++++ b/arch/mips/include/asm/mach-ralink/spaces.h
+@@ -6,7 +6,9 @@
+ #define PCI_IOSIZE	SZ_64K
+ #define IO_SPACE_LIMIT	(PCI_IOSIZE - 1)
+ 
++#ifdef CONFIG_PCI_DRIVERS_GENERIC
+ #define pci_remap_iospace pci_remap_iospace
++#endif
+ 
+ #include <asm/mach-generic/spaces.h>
+ #endif
+diff --git a/arch/openrisc/include/asm/timex.h b/arch/openrisc/include/asm/timex.h
+index d52b4e536e3f9..5487fa93dd9be 100644
+--- a/arch/openrisc/include/asm/timex.h
++++ b/arch/openrisc/include/asm/timex.h
+@@ -23,6 +23,7 @@ static inline cycles_t get_cycles(void)
+ {
+ 	return mfspr(SPR_TTCR);
+ }
++#define get_cycles get_cycles
+ 
+ /* This isn't really used any more */
+ #define CLOCK_TICK_RATE 1000
+diff --git a/arch/openrisc/kernel/head.S b/arch/openrisc/kernel/head.S
+index 15f1b38dfe03b..871f4c8588595 100644
+--- a/arch/openrisc/kernel/head.S
++++ b/arch/openrisc/kernel/head.S
+@@ -521,6 +521,15 @@ _start:
+ 	l.ori	r3,r0,0x1
+ 	l.mtspr	r0,r3,SPR_SR
+ 
++	/*
++	 * Start the TTCR as early as possible, so that the RNG can make use of
++	 * measurements of boot time from the earliest opportunity. Especially
++	 * important is that the TTCR does not return zero by the time we reach
++	 * rand_initialize().
++	 */
++	l.movhi r3,hi(SPR_TTMR_CR)
++	l.mtspr r0,r3,SPR_TTMR
++
+ 	CLEAR_GPR(r1)
+ 	CLEAR_GPR(r2)
+ 	CLEAR_GPR(r3)
+diff --git a/arch/parisc/include/asm/fb.h b/arch/parisc/include/asm/fb.h
+index c4cd6360f9964..d63a2acb91f2b 100644
+--- a/arch/parisc/include/asm/fb.h
++++ b/arch/parisc/include/asm/fb.h
+@@ -12,9 +12,13 @@ static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
+ 	pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE;
+ }
+ 
++#if defined(CONFIG_STI_CONSOLE) || defined(CONFIG_FB_STI)
++int fb_is_primary_device(struct fb_info *info);
++#else
+ static inline int fb_is_primary_device(struct fb_info *info)
+ {
+ 	return 0;
+ }
++#endif
+ 
+ #endif /* _ASM_FB_H_ */
+diff --git a/arch/parisc/kernel/processor.c b/arch/parisc/kernel/processor.c
+index 26eb568f8b961..dddaaa6e7a825 100644
+--- a/arch/parisc/kernel/processor.c
++++ b/arch/parisc/kernel/processor.c
+@@ -327,8 +327,6 @@ int init_per_cpu(int cpunum)
+ 	set_firmware_width();
+ 	ret = pdc_coproc_cfg(&coproc_cfg);
+ 
+-	store_cpu_topology(cpunum);
+-
+ 	if(ret >= 0 && coproc_cfg.ccr_functional) {
+ 		mtctl(coproc_cfg.ccr_functional, 10);  /* 10 == Coprocessor Control Reg */
+ 
+diff --git a/arch/parisc/kernel/topology.c b/arch/parisc/kernel/topology.c
+index 9696e3cb6a2a6..b9d845e314f8b 100644
+--- a/arch/parisc/kernel/topology.c
++++ b/arch/parisc/kernel/topology.c
+@@ -20,8 +20,6 @@
+ 
+ static DEFINE_PER_CPU(struct cpu, cpu_devices);
+ 
+-static int dualcores_found;
+-
+ /*
+  * store_cpu_topology is called at boot when only one cpu is running
+  * and with the mutex cpu_hotplug.lock locked, when several cpus have booted,
+@@ -60,7 +58,6 @@ void store_cpu_topology(unsigned int cpuid)
+ 			if (p->cpu_loc) {
+ 				cpuid_topo->core_id++;
+ 				cpuid_topo->package_id = cpu_topology[cpu].package_id;
+-				dualcores_found = 1;
+ 				continue;
+ 			}
+ 		}
+@@ -80,22 +77,11 @@ void store_cpu_topology(unsigned int cpuid)
+ 		cpu_topology[cpuid].package_id);
+ }
+ 
+-static struct sched_domain_topology_level parisc_mc_topology[] = {
+-#ifdef CONFIG_SCHED_MC
+-	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
+-#endif
+-
+-	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
+-	{ NULL, },
+-};
+-
+ /*
+  * init_cpu_topology is called at boot when only one cpu is running
+  * which prevent simultaneous write access to cpu_topology array
+  */
+ void __init init_cpu_topology(void)
+ {
+-	/* Set scheduler topology descriptor */
+-	if (dualcores_found)
+-		set_sched_topology(parisc_mc_topology);
++	reset_cpu_topology();
+ }
+diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
+index f2c5c26869f1a..03ae544eb6cc4 100644
+--- a/arch/powerpc/include/asm/page.h
++++ b/arch/powerpc/include/asm/page.h
+@@ -216,6 +216,9 @@ static inline bool pfn_valid(unsigned long pfn)
+ #define __pa(x) ((phys_addr_t)(unsigned long)(x) - VIRT_PHYS_OFFSET)
+ #else
+ #ifdef CONFIG_PPC64
++
++#define VIRTUAL_WARN_ON(x)	WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) && (x))
++
+ /*
+  * gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET
+  * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
+@@ -223,13 +226,13 @@ static inline bool pfn_valid(unsigned long pfn)
+  */
+ #define __va(x)								\
+ ({									\
+-	VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET);		\
++	VIRTUAL_WARN_ON((unsigned long)(x) >= PAGE_OFFSET);		\
+ 	(void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET);	\
+ })
+ 
+ #define __pa(x)								\
+ ({									\
+-	VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET);		\
++	VIRTUAL_WARN_ON((unsigned long)(x) < PAGE_OFFSET);		\
+ 	(unsigned long)(x) & 0x0fffffffffffffffUL;			\
+ })
+ 
+diff --git a/arch/powerpc/include/asm/vas.h b/arch/powerpc/include/asm/vas.h
+index 83afcb6c194b5..c36f71e01c0f0 100644
+--- a/arch/powerpc/include/asm/vas.h
++++ b/arch/powerpc/include/asm/vas.h
+@@ -126,7 +126,7 @@ static inline void vas_user_win_add_mm_context(struct vas_user_win_ref *ref)
+  * Receive window attributes specified by the (in-kernel) owner of window.
+  */
+ struct vas_rx_win_attr {
+-	void *rx_fifo;
++	u64 rx_fifo;
+ 	int rx_fifo_size;
+ 	int wcreds_max;
+ 
+diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
+index 9581906b5ee9c..da18f83ef8834 100644
+--- a/arch/powerpc/kernel/entry_64.S
++++ b/arch/powerpc/kernel/entry_64.S
+@@ -330,22 +330,22 @@ _GLOBAL(enter_rtas)
+ 	clrldi	r4,r4,2			/* convert to realmode address */
+        	mtlr	r4
+ 
+-	li	r0,0
+-	ori	r0,r0,MSR_EE|MSR_SE|MSR_BE|MSR_RI
+-	andc	r0,r6,r0
+-	
+-        li      r9,1
+-        rldicr  r9,r9,MSR_SF_LG,(63-MSR_SF_LG)
+-	ori	r9,r9,MSR_IR|MSR_DR|MSR_FE0|MSR_FE1|MSR_FP|MSR_RI|MSR_LE
+-	andc	r6,r0,r9
+-
+ __enter_rtas:
+-	sync				/* disable interrupts so SRR0/1 */
+-	mtmsrd	r0			/* don't get trashed */
+-
+ 	LOAD_REG_ADDR(r4, rtas)
+ 	ld	r5,RTASENTRY(r4)	/* get the rtas->entry value */
+ 	ld	r4,RTASBASE(r4)		/* get the rtas->base value */
++
++	/*
++	 * RTAS runs in 32-bit big endian real mode, but leave MSR[RI] on as we
++	 * may hit NMI (SRESET or MCE) while in RTAS. RTAS should disable RI in
++	 * its critical regions (as specified in PAPR+ section 7.2.1). MSR[S]
++	 * is not impacted by RFI_TO_KERNEL (only urfid can unset it). So if
++	 * MSR[S] is set, it will remain when entering RTAS.
++	 */
++	LOAD_REG_IMMEDIATE(r6, MSR_ME | MSR_RI)
++
++	li      r0,0
++	mtmsrd  r0,1                    /* disable RI before using SRR0/1 */
+ 	
+ 	mtspr	SPRN_SRR0,r5
+ 	mtspr	SPRN_SRR1,r6
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index 65562c4a0a690..dc2350b288cfe 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -867,7 +867,6 @@ static int fadump_alloc_mem_ranges(struct fadump_mrange_info *mrange_info)
+ 				       sizeof(struct fadump_memory_range));
+ 	return 0;
+ }
+-
+ static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info,
+ 				       u64 base, u64 end)
+ {
+@@ -886,7 +885,12 @@ static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info,
+ 		start = mem_ranges[mrange_info->mem_range_cnt - 1].base;
+ 		size  = mem_ranges[mrange_info->mem_range_cnt - 1].size;
+ 
+-		if ((start + size) == base)
++		/*
++		 * Boot memory area needs separate PT_LOAD segment(s) as it
++		 * is moved to a different location at the time of crash.
++		 * So, fold only if the region is not boot memory area.
++		 */
++		if ((start + size) == base && start >= fw_dump.boot_mem_top)
+ 			is_adjacent = true;
+ 	}
+ 	if (!is_adjacent) {
+diff --git a/arch/powerpc/kernel/idle.c b/arch/powerpc/kernel/idle.c
+index 4ad79eb638c62..77cd4c5a2d631 100644
+--- a/arch/powerpc/kernel/idle.c
++++ b/arch/powerpc/kernel/idle.c
+@@ -37,7 +37,7 @@ static int __init powersave_off(char *arg)
+ {
+ 	ppc_md.power_save = NULL;
+ 	cpuidle_disable = IDLE_POWERSAVE_OFF;
+-	return 0;
++	return 1;
+ }
+ __setup("powersave=off", powersave_off);
+ 
+diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
+index 1f42aabbbab3a..6bc89d9ccf635 100644
+--- a/arch/powerpc/kernel/rtas.c
++++ b/arch/powerpc/kernel/rtas.c
+@@ -49,6 +49,15 @@ void enter_rtas(unsigned long);
+ 
+ static inline void do_enter_rtas(unsigned long args)
+ {
++	unsigned long msr;
++
++	/*
++	 * Make sure MSR[RI] is currently enabled as it will be forced later
++	 * in enter_rtas.
++	 */
++	msr = mfmsr();
++	BUG_ON(!(msr & MSR_RI));
++
+ 	enter_rtas(args);
+ 
+ 	srr_regs_clobbered(); /* rtas uses SRRs, invalidate */
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 6fa518f6501d5..aef0a6b423d80 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -4233,13 +4233,13 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
+ 	start_wait = ktime_get();
+ 
+ 	vc->vcore_state = VCORE_SLEEPING;
+-	trace_kvmppc_vcore_blocked(vc, 0);
++	trace_kvmppc_vcore_blocked(vc->runner, 0);
+ 	spin_unlock(&vc->lock);
+ 	schedule();
+ 	finish_rcuwait(&vc->wait);
+ 	spin_lock(&vc->lock);
+ 	vc->vcore_state = VCORE_INACTIVE;
+-	trace_kvmppc_vcore_blocked(vc, 1);
++	trace_kvmppc_vcore_blocked(vc->runner, 1);
+ 	++vc->runner->stat.halt_successful_wait;
+ 
+ 	cur = ktime_get();
+@@ -4619,9 +4619,9 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
+ 			if (kvmppc_vcpu_check_block(vcpu))
+ 				break;
+ 
+-			trace_kvmppc_vcore_blocked(vc, 0);
++			trace_kvmppc_vcore_blocked(vcpu, 0);
+ 			schedule();
+-			trace_kvmppc_vcore_blocked(vc, 1);
++			trace_kvmppc_vcore_blocked(vcpu, 1);
+ 		}
+ 		finish_rcuwait(wait);
+ 	}
+@@ -5283,6 +5283,10 @@ static int kvmppc_core_init_vm_hv(struct kvm *kvm)
+ 		kvm->arch.host_lpcr = lpcr = mfspr(SPRN_LPCR);
+ 		lpcr &= LPCR_PECE | LPCR_LPES;
+ 	} else {
++		/*
++		 * The L2 LPES mode will be set by the L0 according to whether
++		 * or not it needs to take external interrupts in HV mode.
++		 */
+ 		lpcr = 0;
+ 	}
+ 	lpcr |= (4UL << LPCR_DPFD_SH) | LPCR_HDICE |
+diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
+index c943a051c6e70..265bb30a0af2b 100644
+--- a/arch/powerpc/kvm/book3s_hv_nested.c
++++ b/arch/powerpc/kvm/book3s_hv_nested.c
+@@ -261,8 +261,7 @@ static void load_l2_hv_regs(struct kvm_vcpu *vcpu,
+ 	/*
+ 	 * Don't let L1 change LPCR bits for the L2 except these:
+ 	 */
+-	mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD |
+-		LPCR_LPES | LPCR_MER;
++	mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD | LPCR_MER;
+ 
+ 	/*
+ 	 * Additional filtering is required depending on hardware
+diff --git a/arch/powerpc/kvm/trace_hv.h b/arch/powerpc/kvm/trace_hv.h
+index 38cd0ed0a6178..32e2cb5811cc8 100644
+--- a/arch/powerpc/kvm/trace_hv.h
++++ b/arch/powerpc/kvm/trace_hv.h
+@@ -409,9 +409,9 @@ TRACE_EVENT(kvmppc_run_core,
+ );
+ 
+ TRACE_EVENT(kvmppc_vcore_blocked,
+-	TP_PROTO(struct kvmppc_vcore *vc, int where),
++	TP_PROTO(struct kvm_vcpu *vcpu, int where),
+ 
+-	TP_ARGS(vc, where),
++	TP_ARGS(vcpu, where),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(int,	n_runnable)
+@@ -421,8 +421,8 @@ TRACE_EVENT(kvmppc_vcore_blocked,
+ 	),
+ 
+ 	TP_fast_assign(
+-		__entry->runner_vcpu = vc->runner->vcpu_id;
+-		__entry->n_runnable  = vc->n_runnable;
++		__entry->runner_vcpu = vcpu->vcpu_id;
++		__entry->n_runnable  = vcpu->arch.vcore->n_runnable;
+ 		__entry->where       = where;
+ 		__entry->tgid	     = current->tgid;
+ 	),
+diff --git a/arch/powerpc/mm/nohash/fsl_book3e.c b/arch/powerpc/mm/nohash/fsl_book3e.c
+index dfe715e0f70ac..388f7c7dabd30 100644
+--- a/arch/powerpc/mm/nohash/fsl_book3e.c
++++ b/arch/powerpc/mm/nohash/fsl_book3e.c
+@@ -287,22 +287,19 @@ void __init adjust_total_lowmem(void)
+ 
+ #ifdef CONFIG_STRICT_KERNEL_RWX
+ void mmu_mark_rodata_ro(void)
+-{
+-	/* Everything is done in mmu_mark_initmem_nx() */
+-}
+-#endif
+-
+-void mmu_mark_initmem_nx(void)
+ {
+ 	unsigned long remapped;
+ 
+-	if (!strict_kernel_rwx_enabled())
+-		return;
+-
+ 	remapped = map_mem_in_cams(__max_low_memory, CONFIG_LOWMEM_CAM_NUM, false, false);
+ 
+ 	WARN_ON(__max_low_memory != remapped);
+ }
++#endif
++
++void mmu_mark_initmem_nx(void)
++{
++	/* Everything is done in mmu_mark_rodata_ro() */
++}
+ 
+ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
+ 				phys_addr_t first_memblock_size)
+diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
+index a74d382ecbb77..bb5d64862bc99 100644
+--- a/arch/powerpc/perf/isa207-common.c
++++ b/arch/powerpc/perf/isa207-common.c
+@@ -108,7 +108,7 @@ static void mmcra_sdar_mode(u64 event, unsigned long *mmcra)
+ 		*mmcra |= MMCRA_SDAR_MODE_TLB;
+ }
+ 
+-static u64 p10_thresh_cmp_val(u64 value)
++static int p10_thresh_cmp_val(u64 value)
+ {
+ 	int exp = 0;
+ 	u64 result = value;
+@@ -139,7 +139,7 @@ static u64 p10_thresh_cmp_val(u64 value)
+ 		 * exponent is also zero.
+ 		 */
+ 		if (!(value & 0xC0) && exp)
+-			result = 0;
++			result = -1;
+ 		else
+ 			result = (exp << 8) | value;
+ 	}
+@@ -187,7 +187,7 @@ static bool is_thresh_cmp_valid(u64 event)
+ 	unsigned int cmp, exp;
+ 
+ 	if (cpu_has_feature(CPU_FTR_ARCH_31))
+-		return p10_thresh_cmp_val(event) != 0;
++		return p10_thresh_cmp_val(event) >= 0;
+ 
+ 	/*
+ 	 * Check the mantissa upper two bits are not zero, unless the
+@@ -502,12 +502,14 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp,
+ 			value |= CNST_THRESH_CTL_SEL_VAL(event >> EVENT_THRESH_SHIFT);
+ 			mask  |= p10_CNST_THRESH_CMP_MASK;
+ 			value |= p10_CNST_THRESH_CMP_VAL(p10_thresh_cmp_val(event_config1));
+-		}
++		} else if (event_is_threshold(event))
++			return -1;
+ 	} else if (cpu_has_feature(CPU_FTR_ARCH_300))  {
+ 		if (event_is_threshold(event) && is_thresh_cmp_valid(event)) {
+ 			mask  |= CNST_THRESH_MASK;
+ 			value |= CNST_THRESH_VAL(event >> EVENT_THRESH_SHIFT);
+-		}
++		} else if (event_is_threshold(event))
++			return -1;
+ 	} else {
+ 		/*
+ 		 * Special case for PM_MRK_FAB_RSP_MATCH and PM_MRK_FAB_RSP_MATCH_CYC,
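
The return-type change in p10_thresh_cmp_val() matters because 0 is itself a valid threshold-compare encoding; the function now returns int and uses -1 as the out-of-band "invalid" marker, and is_thresh_cmp_valid() tests >= 0 instead of != 0. A small sketch of the idea, with an assumed range check rather than the real P10 encoding:

#include <stdint.h>
#include <stdio.h>

/* Returns the encoded value (0 is legal), or -1 when invalid. */
static int encode_val(uint64_t value)
{
	if (value > 0x3ff)	/* assumed validity check, not the real one */
		return -1;
	return (int)value;
}

int main(void)
{
	printf("%d %d\n", encode_val(0), encode_val(0x10000));	/* 0 -1 */
	return 0;
}
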
+diff --git a/arch/powerpc/platforms/4xx/cpm.c b/arch/powerpc/platforms/4xx/cpm.c
+index 2571841625a23..1d3bc35ee1a7d 100644
+--- a/arch/powerpc/platforms/4xx/cpm.c
++++ b/arch/powerpc/platforms/4xx/cpm.c
+@@ -327,6 +327,6 @@ late_initcall(cpm_init);
+ static int __init cpm_powersave_off(char *arg)
+ {
+ 	cpm.powersave_off = 1;
+-	return 0;
++	return 1;
+ }
+ __setup("powersave=off", cpm_powersave_off);
+diff --git a/arch/powerpc/platforms/8xx/cpm1.c b/arch/powerpc/platforms/8xx/cpm1.c
+index c58b6f1c40e35..3ef5e9fd3a9b6 100644
+--- a/arch/powerpc/platforms/8xx/cpm1.c
++++ b/arch/powerpc/platforms/8xx/cpm1.c
+@@ -280,6 +280,7 @@ cpm_setbrg(uint brg, uint rate)
+ 		out_be32(bp, (((BRG_UART_CLK_DIV16 / rate) - 1) << 1) |
+ 			      CPM_BRG_EN | CPM_BRG_DIV16);
+ }
++EXPORT_SYMBOL(cpm_setbrg);
+ 
+ struct cpm_ioport16 {
+ 	__be16 dir, par, odr_sor, dat, intr;
+diff --git a/arch/powerpc/platforms/powernv/opal-fadump.c b/arch/powerpc/platforms/powernv/opal-fadump.c
+index c8ad057c72210..9d74d3950a523 100644
+--- a/arch/powerpc/platforms/powernv/opal-fadump.c
++++ b/arch/powerpc/platforms/powernv/opal-fadump.c
+@@ -60,7 +60,7 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node)
+ 	addr = be64_to_cpu(addr);
+ 	pr_debug("Kernel metadata addr: %llx\n", addr);
+ 	opal_fdm_active = (void *)addr;
+-	if (opal_fdm_active->registered_regions == 0)
++	if (be16_to_cpu(opal_fdm_active->registered_regions) == 0)
+ 		return;
+ 
+ 	ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_BOOT_MEM, &addr);
+@@ -95,17 +95,17 @@ static int opal_fadump_unregister(struct fw_dump *fadump_conf);
+ static void opal_fadump_update_config(struct fw_dump *fadump_conf,
+ 				      const struct opal_fadump_mem_struct *fdm)
+ {
+-	pr_debug("Boot memory regions count: %d\n", fdm->region_cnt);
++	pr_debug("Boot memory regions count: %d\n", be16_to_cpu(fdm->region_cnt));
+ 
+ 	/*
+ 	 * The destination address of the first boot memory region is the
+ 	 * destination address of boot memory regions.
+ 	 */
+-	fadump_conf->boot_mem_dest_addr = fdm->rgn[0].dest;
++	fadump_conf->boot_mem_dest_addr = be64_to_cpu(fdm->rgn[0].dest);
+ 	pr_debug("Destination address of boot memory regions: %#016llx\n",
+ 		 fadump_conf->boot_mem_dest_addr);
+ 
+-	fadump_conf->fadumphdr_addr = fdm->fadumphdr_addr;
++	fadump_conf->fadumphdr_addr = be64_to_cpu(fdm->fadumphdr_addr);
+ }
+ 
+ /*
+@@ -126,9 +126,9 @@ static void __init opal_fadump_get_config(struct fw_dump *fadump_conf,
+ 	fadump_conf->boot_memory_size = 0;
+ 
+ 	pr_debug("Boot memory regions:\n");
+-	for (i = 0; i < fdm->region_cnt; i++) {
+-		base = fdm->rgn[i].src;
+-		size = fdm->rgn[i].size;
++	for (i = 0; i < be16_to_cpu(fdm->region_cnt); i++) {
++		base = be64_to_cpu(fdm->rgn[i].src);
++		size = be64_to_cpu(fdm->rgn[i].size);
+ 		pr_debug("\t[%03d] base: 0x%lx, size: 0x%lx\n", i, base, size);
+ 
+ 		fadump_conf->boot_mem_addr[i] = base;
+@@ -143,7 +143,7 @@ static void __init opal_fadump_get_config(struct fw_dump *fadump_conf,
+ 	 * Start address of reserve dump area (permanent reservation) for
+ 	 * re-registering FADump after dump capture.
+ 	 */
+-	fadump_conf->reserve_dump_area_start = fdm->rgn[0].dest;
++	fadump_conf->reserve_dump_area_start = be64_to_cpu(fdm->rgn[0].dest);
+ 
+ 	/*
+ 	 * Rarely, but it can so happen that system crashes before all
+@@ -155,13 +155,14 @@ static void __init opal_fadump_get_config(struct fw_dump *fadump_conf,
+ 	 * Hope the memory that could not be preserved only has pages
+ 	 * that are usually filtered out while saving the vmcore.
+ 	 */
+-	if (fdm->region_cnt > fdm->registered_regions) {
++	if (be16_to_cpu(fdm->region_cnt) > be16_to_cpu(fdm->registered_regions)) {
+ 		pr_warn("Not all memory regions were saved!!!\n");
+ 		pr_warn("  Unsaved memory regions:\n");
+-		i = fdm->registered_regions;
+-		while (i < fdm->region_cnt) {
++		i = be16_to_cpu(fdm->registered_regions);
++		while (i < be16_to_cpu(fdm->region_cnt)) {
+ 			pr_warn("\t[%03d] base: 0x%llx, size: 0x%llx\n",
+-				i, fdm->rgn[i].src, fdm->rgn[i].size);
++				i, be64_to_cpu(fdm->rgn[i].src),
++				be64_to_cpu(fdm->rgn[i].size));
+ 			i++;
+ 		}
+ 
+@@ -170,7 +171,7 @@ static void __init opal_fadump_get_config(struct fw_dump *fadump_conf,
+ 	}
+ 
+ 	fadump_conf->boot_mem_top = (fadump_conf->boot_memory_size + hole_size);
+-	fadump_conf->boot_mem_regs_cnt = fdm->region_cnt;
++	fadump_conf->boot_mem_regs_cnt = be16_to_cpu(fdm->region_cnt);
+ 	opal_fadump_update_config(fadump_conf, fdm);
+ }
+ 
+@@ -178,35 +179,38 @@ static void __init opal_fadump_get_config(struct fw_dump *fadump_conf,
+ static void opal_fadump_init_metadata(struct opal_fadump_mem_struct *fdm)
+ {
+ 	fdm->version = OPAL_FADUMP_VERSION;
+-	fdm->region_cnt = 0;
+-	fdm->registered_regions = 0;
+-	fdm->fadumphdr_addr = 0;
++	fdm->region_cnt = cpu_to_be16(0);
++	fdm->registered_regions = cpu_to_be16(0);
++	fdm->fadumphdr_addr = cpu_to_be64(0);
+ }
+ 
+ static u64 opal_fadump_init_mem_struct(struct fw_dump *fadump_conf)
+ {
+ 	u64 addr = fadump_conf->reserve_dump_area_start;
++	u16 reg_cnt;
+ 	int i;
+ 
+ 	opal_fdm = __va(fadump_conf->kernel_metadata);
+ 	opal_fadump_init_metadata(opal_fdm);
+ 
+ 	/* Boot memory regions */
++	reg_cnt = be16_to_cpu(opal_fdm->region_cnt);
+ 	for (i = 0; i < fadump_conf->boot_mem_regs_cnt; i++) {
+-		opal_fdm->rgn[i].src	= fadump_conf->boot_mem_addr[i];
+-		opal_fdm->rgn[i].dest	= addr;
+-		opal_fdm->rgn[i].size	= fadump_conf->boot_mem_sz[i];
++		opal_fdm->rgn[i].src	= cpu_to_be64(fadump_conf->boot_mem_addr[i]);
++		opal_fdm->rgn[i].dest	= cpu_to_be64(addr);
++		opal_fdm->rgn[i].size	= cpu_to_be64(fadump_conf->boot_mem_sz[i]);
+ 
+-		opal_fdm->region_cnt++;
++		reg_cnt++;
+ 		addr += fadump_conf->boot_mem_sz[i];
+ 	}
++	opal_fdm->region_cnt = cpu_to_be16(reg_cnt);
+ 
+ 	/*
+ 	 * Kernel metadata is passed to f/w and retrieved in capture kernel.
+ 	 * So, use it to save fadump header address instead of calculating it.
+ 	 */
+-	opal_fdm->fadumphdr_addr = (opal_fdm->rgn[0].dest +
+-				    fadump_conf->boot_memory_size);
++	opal_fdm->fadumphdr_addr = cpu_to_be64(be64_to_cpu(opal_fdm->rgn[0].dest) +
++					       fadump_conf->boot_memory_size);
+ 
+ 	opal_fadump_update_config(fadump_conf, opal_fdm);
+ 
+@@ -269,18 +273,21 @@ static u64 opal_fadump_get_bootmem_min(void)
+ static int opal_fadump_register(struct fw_dump *fadump_conf)
+ {
+ 	s64 rc = OPAL_PARAMETER;
++	u16 registered_regs;
+ 	int i, err = -EIO;
+ 
+-	for (i = 0; i < opal_fdm->region_cnt; i++) {
++	registered_regs = be16_to_cpu(opal_fdm->registered_regions);
++	for (i = 0; i < be16_to_cpu(opal_fdm->region_cnt); i++) {
+ 		rc = opal_mpipl_update(OPAL_MPIPL_ADD_RANGE,
+-				       opal_fdm->rgn[i].src,
+-				       opal_fdm->rgn[i].dest,
+-				       opal_fdm->rgn[i].size);
++				       be64_to_cpu(opal_fdm->rgn[i].src),
++				       be64_to_cpu(opal_fdm->rgn[i].dest),
++				       be64_to_cpu(opal_fdm->rgn[i].size));
+ 		if (rc != OPAL_SUCCESS)
+ 			break;
+ 
+-		opal_fdm->registered_regions++;
++		registered_regs++;
+ 	}
++	opal_fdm->registered_regions = cpu_to_be16(registered_regs);
+ 
+ 	switch (rc) {
+ 	case OPAL_SUCCESS:
+@@ -291,7 +298,8 @@ static int opal_fadump_register(struct fw_dump *fadump_conf)
+ 	case OPAL_RESOURCE:
+ 		/* If MAX regions limit in f/w is hit, warn and proceed. */
+ 		pr_warn("%d regions could not be registered for MPIPL as MAX limit is reached!\n",
+-			(opal_fdm->region_cnt - opal_fdm->registered_regions));
++			(be16_to_cpu(opal_fdm->region_cnt) -
++			 be16_to_cpu(opal_fdm->registered_regions)));
+ 		fadump_conf->dump_registered = 1;
+ 		err = 0;
+ 		break;
+@@ -312,7 +320,7 @@ static int opal_fadump_register(struct fw_dump *fadump_conf)
+ 	 * If some regions were registered before OPAL_MPIPL_ADD_RANGE
+ 	 * OPAL call failed, unregister all regions.
+ 	 */
+-	if ((err < 0) && (opal_fdm->registered_regions > 0))
++	if ((err < 0) && (be16_to_cpu(opal_fdm->registered_regions) > 0))
+ 		opal_fadump_unregister(fadump_conf);
+ 
+ 	return err;
+@@ -328,7 +336,7 @@ static int opal_fadump_unregister(struct fw_dump *fadump_conf)
+ 		return -EIO;
+ 	}
+ 
+-	opal_fdm->registered_regions = 0;
++	opal_fdm->registered_regions = cpu_to_be16(0);
+ 	fadump_conf->dump_registered = 0;
+ 	return 0;
+ }
+@@ -563,19 +571,20 @@ static void opal_fadump_region_show(struct fw_dump *fadump_conf,
+ 	else
+ 		fdm_ptr = opal_fdm;
+ 
+-	for (i = 0; i < fdm_ptr->region_cnt; i++) {
++	for (i = 0; i < be16_to_cpu(fdm_ptr->region_cnt); i++) {
+ 		/*
+ 		 * Only regions that are registered for MPIPL
+ 		 * would have dump data.
+ 		 */
+ 		if ((fadump_conf->dump_active) &&
+-		    (i < fdm_ptr->registered_regions))
+-			dumped_bytes = fdm_ptr->rgn[i].size;
++		    (i < be16_to_cpu(fdm_ptr->registered_regions)))
++			dumped_bytes = be64_to_cpu(fdm_ptr->rgn[i].size);
+ 
+ 		seq_printf(m, "DUMP: Src: %#016llx, Dest: %#016llx, ",
+-			   fdm_ptr->rgn[i].src, fdm_ptr->rgn[i].dest);
++			   be64_to_cpu(fdm_ptr->rgn[i].src),
++			   be64_to_cpu(fdm_ptr->rgn[i].dest));
+ 		seq_printf(m, "Size: %#llx, Dumped: %#llx bytes\n",
+-			   fdm_ptr->rgn[i].size, dumped_bytes);
++			   be64_to_cpu(fdm_ptr->rgn[i].size), dumped_bytes);
+ 	}
+ 
+ 	/* Dump is active. Show reserved area start address. */
+@@ -624,6 +633,7 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node)
+ {
+ 	const __be32 *prop;
+ 	unsigned long dn;
++	__be64 be_addr;
+ 	u64 addr = 0;
+ 	int i, len;
+ 	s64 ret;
+@@ -680,13 +690,13 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node)
+ 	if (!prop)
+ 		return;
+ 
+-	ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_KERNEL, &addr);
+-	if ((ret != OPAL_SUCCESS) || !addr) {
++	ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_KERNEL, &be_addr);
++	if ((ret != OPAL_SUCCESS) || !be_addr) {
+ 		pr_err("Failed to get Kernel metadata (%lld)\n", ret);
+ 		return;
+ 	}
+ 
+-	addr = be64_to_cpu(addr);
++	addr = be64_to_cpu(be_addr);
+ 	pr_debug("Kernel metadata addr: %llx\n", addr);
+ 
+ 	opal_fdm_active = __va(addr);
+@@ -697,14 +707,14 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node)
+ 	}
+ 
+ 	/* Kernel regions not registered with f/w for MPIPL */
+-	if (opal_fdm_active->registered_regions == 0) {
++	if (be16_to_cpu(opal_fdm_active->registered_regions) == 0) {
+ 		opal_fdm_active = NULL;
+ 		return;
+ 	}
+ 
+-	ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_CPU, &addr);
+-	if (addr) {
+-		addr = be64_to_cpu(addr);
++	ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_CPU, &be_addr);
++	if (be_addr) {
++		addr = be64_to_cpu(be_addr);
+ 		pr_debug("CPU metadata addr: %llx\n", addr);
+ 		opal_cpu_metadata = __va(addr);
+ 	}
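
The opal-fadump hunks above all apply one discipline: fields shared with big-endian OPAL firmware are stored as explicit __be16/__be64 and converted with be16_to_cpu()/cpu_to_be64() at every access, and the out-parameter that firmware fills (be_addr) is declared __be64 rather than converting a u64 in place. A user-space sketch of the same discipline, using glibc's endian.h helpers in place of the kernel ones:

#include <endian.h>
#include <stdint.h>
#include <stdio.h>

/* Layout shared with a big-endian peer: fields stay big-endian in memory. */
struct fw_shared {
	uint16_t region_cnt;	/* big-endian, as __be16 would be */
	uint64_t hdr_addr;	/* big-endian, as __be64 would be */
} __attribute__((packed));

int main(void)
{
	struct fw_shared s;

	s.region_cnt = htobe16(3);	/* store: cpu -> be */
	s.hdr_addr   = htobe64(0x2000);

	/* load: be -> cpu before any arithmetic or comparison */
	printf("%u regions, header at 0x%llx\n",
	       (unsigned)be16toh(s.region_cnt),
	       (unsigned long long)be64toh(s.hdr_addr));
	return 0;
}
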
+diff --git a/arch/powerpc/platforms/powernv/opal-fadump.h b/arch/powerpc/platforms/powernv/opal-fadump.h
+index f1e9ecf548c5d..3f715efb0aa6e 100644
+--- a/arch/powerpc/platforms/powernv/opal-fadump.h
++++ b/arch/powerpc/platforms/powernv/opal-fadump.h
+@@ -31,14 +31,14 @@
+  * OPAL FADump kernel metadata
+  *
+  * The address of this structure will be registered with f/w for retrieving
+- * and processing during crash dump.
++ * in the capture kernel to process the crash dump.
+  */
+ struct opal_fadump_mem_struct {
+ 	u8	version;
+ 	u8	reserved[3];
+-	u16	region_cnt;		/* number of regions */
+-	u16	registered_regions;	/* Regions registered for MPIPL */
+-	u64	fadumphdr_addr;
++	__be16	region_cnt;		/* number of regions */
++	__be16	registered_regions;	/* Regions registered for MPIPL */
++	__be64	fadumphdr_addr;
+ 	struct opal_mpipl_region	rgn[FADUMP_MAX_MEM_REGS];
+ } __packed;
+ 
+@@ -135,7 +135,7 @@ static inline void opal_fadump_read_regs(char *bufp, unsigned int regs_cnt,
+ 	for (i = 0; i < regs_cnt; i++, bufp += reg_entry_size) {
+ 		reg_entry = (struct hdat_fadump_reg_entry *)bufp;
+ 		val = (cpu_endian ? be64_to_cpu(reg_entry->reg_val) :
+-		       reg_entry->reg_val);
++		       (u64)(reg_entry->reg_val));
+ 		opal_fadump_set_regval_regnum(regs,
+ 					      be32_to_cpu(reg_entry->reg_type),
+ 					      be32_to_cpu(reg_entry->reg_num),
+diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
+index 105d889abd51a..824c3ad7a0faf 100644
+--- a/arch/powerpc/platforms/powernv/setup.c
++++ b/arch/powerpc/platforms/powernv/setup.c
+@@ -96,6 +96,15 @@ static void __init init_fw_feat_flags(struct device_node *np)
+ 
+ 	if (fw_feature_is("disabled", "needs-spec-barrier-for-bound-checks", np))
+ 		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
++
++	if (fw_feature_is("enabled", "no-need-l1d-flush-msr-pr-1-to-0", np))
++		security_ftr_clear(SEC_FTR_L1D_FLUSH_ENTRY);
++
++	if (fw_feature_is("enabled", "no-need-l1d-flush-kernel-on-user-access", np))
++		security_ftr_clear(SEC_FTR_L1D_FLUSH_UACCESS);
++
++	if (fw_feature_is("enabled", "no-need-store-drain-on-priv-state-switch", np))
++		security_ftr_clear(SEC_FTR_STF_BARRIER);
+ }
+ 
+ static void __init pnv_setup_security_mitigations(void)
+diff --git a/arch/powerpc/platforms/powernv/ultravisor.c b/arch/powerpc/platforms/powernv/ultravisor.c
+index e4a00ad06f9d3..67c8c4b2d8b17 100644
+--- a/arch/powerpc/platforms/powernv/ultravisor.c
++++ b/arch/powerpc/platforms/powernv/ultravisor.c
+@@ -55,6 +55,7 @@ static int __init uv_init(void)
+ 		return -ENODEV;
+ 
+ 	uv_memcons = memcons_init(node, "memcons");
++	of_node_put(node);
+ 	if (!uv_memcons)
+ 		return -ENOENT;
+ 
+diff --git a/arch/powerpc/platforms/powernv/vas-fault.c b/arch/powerpc/platforms/powernv/vas-fault.c
+index a7aabc18039eb..c1bfad56447d4 100644
+--- a/arch/powerpc/platforms/powernv/vas-fault.c
++++ b/arch/powerpc/platforms/powernv/vas-fault.c
+@@ -216,7 +216,7 @@ int vas_setup_fault_window(struct vas_instance *vinst)
+ 	vas_init_rx_win_attr(&attr, VAS_COP_TYPE_FAULT);
+ 
+ 	attr.rx_fifo_size = vinst->fault_fifo_size;
+-	attr.rx_fifo = vinst->fault_fifo;
++	attr.rx_fifo = __pa(vinst->fault_fifo);
+ 
+ 	/*
+ 	 * Max creds is based on number of CRBs can fit in the FIFO.
+diff --git a/arch/powerpc/platforms/powernv/vas-window.c b/arch/powerpc/platforms/powernv/vas-window.c
+index 0f8d39fbf2b21..0072682531d80 100644
+--- a/arch/powerpc/platforms/powernv/vas-window.c
++++ b/arch/powerpc/platforms/powernv/vas-window.c
+@@ -404,7 +404,7 @@ static void init_winctx_regs(struct pnv_vas_window *window,
+ 	 *
+ 	 * See also: Design note in function header.
+ 	 */
+-	val = __pa(winctx->rx_fifo);
++	val = winctx->rx_fifo;
+ 	val = SET_FIELD(VAS_PAGE_MIGRATION_SELECT, val, 0);
+ 	write_hvwc_reg(window, VREG(LFIFO_BAR), val);
+ 
+@@ -739,7 +739,7 @@ static void init_winctx_for_rxwin(struct pnv_vas_window *rxwin,
+ 		 */
+ 		winctx->fifo_disable = true;
+ 		winctx->intr_disable = true;
+-		winctx->rx_fifo = NULL;
++		winctx->rx_fifo = 0;
+ 	}
+ 
+ 	winctx->lnotify_lpid = rxattr->lnotify_lpid;
+diff --git a/arch/powerpc/platforms/powernv/vas.h b/arch/powerpc/platforms/powernv/vas.h
+index 8bb08e395de05..08d9d3d5a22b0 100644
+--- a/arch/powerpc/platforms/powernv/vas.h
++++ b/arch/powerpc/platforms/powernv/vas.h
+@@ -376,7 +376,7 @@ struct pnv_vas_window {
+  * is a container for the register fields in the window context.
+  */
+ struct vas_winctx {
+-	void *rx_fifo;
++	u64 rx_fifo;
+ 	int rx_fifo_size;
+ 	int wcreds_max;
+ 	int rsvd_txbuf_count;
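
The rx_fifo changes above fix a double translation: __pa() is now applied once, where the fault FIFO is handed over, and the field is typed u64 so it can neither be dereferenced by accident nor passed through __pa() a second time in init_winctx_regs(). A sketch of the typing choice (illustrative names):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t phys_addr_t;	/* a number, not a dereferenceable pointer */

struct winctx {
	phys_addr_t rx_fifo;	/* was effectively "void *" before the fix */
};

/* The caller translates exactly once; everyone below carries the value. */
static void init_winctx(struct winctx *w, phys_addr_t fifo_pa)
{
	w->rx_fifo = fifo_pa;
}

int main(void)
{
	struct winctx w;

	init_winctx(&w, 0x1f0000);	/* pretend result of __pa(buf) */
	printf("0x%llx\n", (unsigned long long)w.rx_fifo);
	return 0;
}
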
+diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
+index 39962c9055422..181b855b30509 100644
+--- a/arch/powerpc/platforms/pseries/papr_scm.c
++++ b/arch/powerpc/platforms/pseries/papr_scm.c
+@@ -125,8 +125,8 @@ struct papr_scm_priv {
+ 	/* The bits which needs to be overridden */
+ 	u64 health_bitmap_inject_mask;
+ 
+-	 /* array to have event_code and stat_id mappings */
+-	char **nvdimm_events_map;
++	/* array to have event_code and stat_id mappings */
++	u8 *nvdimm_events_map;
+ };
+ 
+ static int papr_scm_pmem_flush(struct nd_region *nd_region,
+@@ -370,7 +370,7 @@ static int papr_scm_pmu_get_value(struct perf_event *event, struct device *dev,
+ 
+ 	stat = &stats->scm_statistic[0];
+ 	memcpy(&stat->stat_id,
+-	       p->nvdimm_events_map[event->attr.config],
++	       &p->nvdimm_events_map[event->attr.config * sizeof(stat->stat_id)],
+ 		sizeof(stat->stat_id));
+ 	stat->stat_val = 0;
+ 
+@@ -462,14 +462,13 @@ static int papr_scm_pmu_check_events(struct papr_scm_priv *p, struct nvdimm_pmu
+ {
+ 	struct papr_scm_perf_stat *stat;
+ 	struct papr_scm_perf_stats *stats;
+-	int index, rc, count;
+ 	u32 available_events;
+-
+-	if (!p->stat_buffer_len)
+-		return -ENOENT;
++	int index, rc = 0;
+ 
+ 	available_events = (p->stat_buffer_len  - sizeof(struct papr_scm_perf_stats))
+ 			/ sizeof(struct papr_scm_perf_stat);
++	if (available_events == 0)
++		return -EOPNOTSUPP;
+ 
+ 	/* Allocate the buffer for phyp where stats are written */
+ 	stats = kzalloc(p->stat_buffer_len, GFP_KERNEL);
+@@ -478,35 +477,30 @@ static int papr_scm_pmu_check_events(struct papr_scm_priv *p, struct nvdimm_pmu
+ 		return rc;
+ 	}
+ 
+-	/* Allocate memory to nvdimm_event_map */
+-	p->nvdimm_events_map = kcalloc(available_events, sizeof(char *), GFP_KERNEL);
+-	if (!p->nvdimm_events_map) {
+-		rc = -ENOMEM;
+-		goto out_stats;
+-	}
+-
+ 	/* Called to get list of events supported */
+ 	rc = drc_pmem_query_stats(p, stats, 0);
+ 	if (rc)
+-		goto out_nvdimm_events_map;
+-
+-	for (index = 0, stat = stats->scm_statistic, count = 0;
+-		     index < available_events; index++, ++stat) {
+-		p->nvdimm_events_map[count] = kmemdup_nul(stat->stat_id, 8, GFP_KERNEL);
+-		if (!p->nvdimm_events_map[count]) {
+-			rc = -ENOMEM;
+-			goto out_nvdimm_events_map;
+-		}
++		goto out;
+ 
+-		count++;
++	/*
++	 * Allocate memory and populate nvdimm_event_map.
++	 * Allocate an extra element for NULL entry
++	 */
++	p->nvdimm_events_map = kcalloc(available_events + 1,
++				       sizeof(stat->stat_id),
++				       GFP_KERNEL);
++	if (!p->nvdimm_events_map) {
++		rc = -ENOMEM;
++		goto out;
+ 	}
+-	p->nvdimm_events_map[count] = NULL;
+-	kfree(stats);
+-	return 0;
+ 
+-out_nvdimm_events_map:
+-	kfree(p->nvdimm_events_map);
+-out_stats:
++	/* Copy all stat_ids to event map */
++	for (index = 0, stat = stats->scm_statistic;
++	     index < available_events; index++, ++stat) {
++		memcpy(&p->nvdimm_events_map[index * sizeof(stat->stat_id)],
++		       &stat->stat_id, sizeof(stat->stat_id));
++	}
++out:
+ 	kfree(stats);
+ 	return rc;
+ }
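
The papr_scm rework above replaces an array of individually duplicated strings with one flat buffer of fixed-size ids, indexed as event * sizeof(stat_id): one allocation, one free, and no partial-failure unwinding loop. A user-space sketch of that layout, with made-up ids and an assumed 8-byte id size:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define ID_SIZE 8	/* assumed fixed id width */

int main(void)
{
	const char ids[3][ID_SIZE] = { "StatOne", "StatTwo", "StatThr" };
	int nevents = 3;
	/* +1 keeps a zeroed terminator slot, as the patch does */
	unsigned char *map = calloc(nevents + 1, ID_SIZE);

	if (!map)
		return 1;
	for (int i = 0; i < nevents; i++)
		memcpy(&map[i * ID_SIZE], ids[i], ID_SIZE);

	printf("%.8s\n", (const char *)&map[1 * ID_SIZE]);	/* StatTwo */
	free(map);	/* single free replaces per-entry cleanup */
	return 0;
}
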
+diff --git a/arch/powerpc/sysdev/dart_iommu.c b/arch/powerpc/sysdev/dart_iommu.c
+index be6b99b1b3523..9a02aed886a0d 100644
+--- a/arch/powerpc/sysdev/dart_iommu.c
++++ b/arch/powerpc/sysdev/dart_iommu.c
+@@ -404,9 +404,10 @@ void __init iommu_init_early_dart(struct pci_controller_ops *controller_ops)
+ 	}
+ 
+ 	/* Initialize the DART HW */
+-	if (dart_init(dn) != 0)
++	if (dart_init(dn) != 0) {
++		of_node_put(dn);
+ 		return;
+-
++	}
+ 	/*
+ 	 * U4 supports a DART bypass, we use it for 64-bit capable devices to
+ 	 * improve performance.  However, that only works for devices connected
+@@ -419,6 +420,7 @@ void __init iommu_init_early_dart(struct pci_controller_ops *controller_ops)
+ 
+ 	/* Setup pci_dma ops */
+ 	set_pci_dma_ops(&dma_iommu_ops);
++	of_node_put(dn);
+ }
+ 
+ #ifdef CONFIG_PM
+diff --git a/arch/powerpc/sysdev/fsl_rio.c b/arch/powerpc/sysdev/fsl_rio.c
+index ff7906b48ca1e..1bfc9afa8a1a1 100644
+--- a/arch/powerpc/sysdev/fsl_rio.c
++++ b/arch/powerpc/sysdev/fsl_rio.c
+@@ -505,8 +505,10 @@ int fsl_rio_setup(struct platform_device *dev)
+ 	if (rc) {
+ 		dev_err(&dev->dev, "Can't get %pOF property 'reg'\n",
+ 				rmu_node);
++		of_node_put(rmu_node);
+ 		goto err_rmu;
+ 	}
++	of_node_put(rmu_node);
+ 	rmu_regs_win = ioremap(rmu_regs.start, resource_size(&rmu_regs));
+ 	if (!rmu_regs_win) {
+ 		dev_err(&dev->dev, "Unable to map rmu register window\n");
+diff --git a/arch/powerpc/sysdev/xics/icp-opal.c b/arch/powerpc/sysdev/xics/icp-opal.c
+index bda4c32582d97..4dae624b9f2f4 100644
+--- a/arch/powerpc/sysdev/xics/icp-opal.c
++++ b/arch/powerpc/sysdev/xics/icp-opal.c
+@@ -196,6 +196,7 @@ int __init icp_opal_init(void)
+ 
+ 	printk("XICS: Using OPAL ICP fallbacks\n");
+ 
++	of_node_put(np);
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
+index 29456c255f9f7..503f544d28e29 100644
+--- a/arch/powerpc/sysdev/xive/spapr.c
++++ b/arch/powerpc/sysdev/xive/spapr.c
+@@ -830,12 +830,12 @@ bool __init xive_spapr_init(void)
+ 	/* Resource 1 is the OS ring TIMA */
+ 	if (of_address_to_resource(np, 1, &r)) {
+ 		pr_err("Failed to get thread mgmnt area resource\n");
+-		return false;
++		goto err_put;
+ 	}
+ 	tima = ioremap(r.start, resource_size(&r));
+ 	if (!tima) {
+ 		pr_err("Failed to map thread mgmnt area\n");
+-		return false;
++		goto err_put;
+ 	}
+ 
+ 	if (!xive_get_max_prio(&max_prio))
+@@ -871,6 +871,7 @@ bool __init xive_spapr_init(void)
+ 	if (!xive_core_init(np, &xive_spapr_ops, tima, TM_QW1_OS, max_prio))
+ 		goto err_mem_free;
+ 
++	of_node_put(np);
+ 	pr_info("Using %dkB queues\n", 1 << (xive_queue_shift - 10));
+ 	return true;
+ 
+@@ -878,6 +879,8 @@ err_mem_free:
+ 	xive_irq_bitmap_remove_all();
+ err_unmap:
+ 	iounmap(tima);
++err_put:
++	of_node_put(np);
+ 	return false;
+ }
+ 
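
Several hunks in this patch (ultravisor.c, dart_iommu.c, fsl_rio.c, icp-opal.c and xive/spapr.c above) plug the same leak: a device-tree node taken with of_find_*() holds a reference that every exit path, success included, must drop with of_node_put(). The usual shape is a goto ladder that releases resources in reverse order of acquisition; a user-space sketch with malloc/free standing in for the node reference and ioremap():

#include <stdlib.h>

static int setup(void)
{
	char *node = malloc(16);	/* of_find_compatible_node() ref */
	char *tima, *queue;

	if (!node)
		return -1;
	tima = malloc(64);		/* stands in for ioremap() */
	if (!tima)
		goto err_put;
	queue = malloc(64);		/* some further allocation */
	if (!queue)
		goto err_unmap;

	free(queue);
	free(tima);	/* in the kernel the mapping stays live; sketch only */
	free(node);	/* kernel drops the node ref on success too */
	return 0;

err_unmap:
	free(tima);	/* mirrors iounmap() */
err_put:
	free(node);	/* mirrors the added of_node_put() */
	return -1;
}

int main(void) { return setup(); }
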
+diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
+index 7d81102cffd48..c6ca1b9cbf712 100644
+--- a/arch/riscv/Makefile
++++ b/arch/riscv/Makefile
+@@ -154,3 +154,7 @@ PHONY += rv64_randconfig
+ rv64_randconfig:
+ 	$(Q)$(MAKE) KCONFIG_ALLCONFIG=$(srctree)/arch/riscv/configs/64-bit.config \
+ 		-f $(srctree)/Makefile randconfig
++
++PHONY += rv32_defconfig
++rv32_defconfig:
++	$(Q)$(MAKE) -f $(srctree)/Makefile defconfig 32-bit.config
+diff --git a/arch/riscv/include/asm/alternative-macros.h b/arch/riscv/include/asm/alternative-macros.h
+index 67406c3763890..0377ce0fcc726 100644
+--- a/arch/riscv/include/asm/alternative-macros.h
++++ b/arch/riscv/include/asm/alternative-macros.h
+@@ -23,9 +23,9 @@
+ 888 :
+ 	\new_c
+ 889 :
+-	.previous
+ 	.org    . - (889b - 888b) + (887b - 886b)
+ 	.org    . - (887b - 886b) + (889b - 888b)
++	.previous
+ 	.endif
+ .endm
+ 
+@@ -60,9 +60,9 @@
+ 	"888 :\n"							\
+ 	new_c "\n"							\
+ 	"889 :\n"							\
+-	".previous\n"							\
+ 	".org	. - (887b - 886b) + (889b - 888b)\n"			\
+ 	".org	. - (889b - 888b) + (887b - 886b)\n"			\
++	".previous\n"							\
+ 	".endif\n"
+ 
+ #define __ALTERNATIVE_CFG(old_c, new_c, vendor_id, errata_id, enable) \
+diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
+index 8c2549b16ac06..618d7c5af1a2d 100644
+--- a/arch/riscv/include/asm/asm.h
++++ b/arch/riscv/include/asm/asm.h
+@@ -67,30 +67,4 @@
+ #error "Unexpected __SIZEOF_SHORT__"
+ #endif
+ 
+-#ifdef __ASSEMBLY__
+-
+-/* Common assembly source macros */
+-
+-#ifdef CONFIG_XIP_KERNEL
+-.macro XIP_FIXUP_OFFSET reg
+-	REG_L t0, _xip_fixup
+-	add \reg, \reg, t0
+-.endm
+-.macro XIP_FIXUP_FLASH_OFFSET reg
+-	la t1, __data_loc
+-	REG_L t1, _xip_phys_offset
+-	sub \reg, \reg, t1
+-	add \reg, \reg, t0
+-.endm
+-_xip_fixup: .dword CONFIG_PHYS_RAM_BASE - CONFIG_XIP_PHYS_ADDR - XIP_OFFSET
+-_xip_phys_offset: .dword CONFIG_XIP_PHYS_ADDR + XIP_OFFSET
+-#else
+-.macro XIP_FIXUP_OFFSET reg
+-.endm
+-.macro XIP_FIXUP_FLASH_OFFSET reg
+-.endm
+-#endif /* CONFIG_XIP_KERNEL */
+-
+-#endif /* __ASSEMBLY__ */
+-
+ #endif /* _ASM_RISCV_ASM_H */
+diff --git a/arch/riscv/include/asm/irq_work.h b/arch/riscv/include/asm/irq_work.h
+index d6c277992f76a..b53891964ae03 100644
+--- a/arch/riscv/include/asm/irq_work.h
++++ b/arch/riscv/include/asm/irq_work.h
+@@ -4,7 +4,7 @@
+ 
+ static inline bool arch_irq_work_has_interrupt(void)
+ {
+-	return true;
++	return IS_ENABLED(CONFIG_SMP);
+ }
+ extern void arch_irq_work_raise(void);
+ #endif /* _ASM_RISCV_IRQ_WORK_H */
+diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
+index 6c316093a1e59..977ee6181dabf 100644
+--- a/arch/riscv/include/asm/unistd.h
++++ b/arch/riscv/include/asm/unistd.h
+@@ -9,7 +9,6 @@
+  */
+ 
+ #define __ARCH_WANT_SYS_CLONE
+-#define __ARCH_WANT_MEMFD_SECRET
+ 
+ #include <uapi/asm/unistd.h>
+ 
+diff --git a/arch/riscv/include/asm/xip_fixup.h b/arch/riscv/include/asm/xip_fixup.h
+new file mode 100644
+index 0000000000000..d4ffc3c37649f
+--- /dev/null
++++ b/arch/riscv/include/asm/xip_fixup.h
+@@ -0,0 +1,31 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * XIP fixup macros, only useful in assembly.
++ */
++#ifndef _ASM_RISCV_XIP_FIXUP_H
++#define _ASM_RISCV_XIP_FIXUP_H
++
++#include <linux/pgtable.h>
++
++#ifdef CONFIG_XIP_KERNEL
++.macro XIP_FIXUP_OFFSET reg
++        REG_L t0, _xip_fixup
++        add \reg, \reg, t0
++.endm
++.macro XIP_FIXUP_FLASH_OFFSET reg
++	la t1, __data_loc
++	REG_L t1, _xip_phys_offset
++	sub \reg, \reg, t1
++	add \reg, \reg, t0
++.endm
++
++_xip_fixup: .dword CONFIG_PHYS_RAM_BASE - CONFIG_XIP_PHYS_ADDR - XIP_OFFSET
++_xip_phys_offset: .dword CONFIG_XIP_PHYS_ADDR + XIP_OFFSET
++#else
++.macro XIP_FIXUP_OFFSET reg
++.endm
++.macro XIP_FIXUP_FLASH_OFFSET reg
++.endm
++#endif /* CONFIG_XIP_KERNEL */
++
++#endif
+diff --git a/arch/riscv/include/uapi/asm/unistd.h b/arch/riscv/include/uapi/asm/unistd.h
+index 8062996c2dfd0..d95fbf5846b0b 100644
+--- a/arch/riscv/include/uapi/asm/unistd.h
++++ b/arch/riscv/include/uapi/asm/unistd.h
+@@ -21,6 +21,7 @@
+ #endif /* __LP64__ */
+ 
+ #define __ARCH_WANT_SYS_CLONE3
++#define __ARCH_WANT_MEMFD_SECRET
+ 
+ #include <asm-generic/unistd.h>
+ 
+diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
+index 893b8bb693912..b865046e4dbbc 100644
+--- a/arch/riscv/kernel/head.S
++++ b/arch/riscv/kernel/head.S
+@@ -14,6 +14,7 @@
+ #include <asm/cpu_ops_sbi.h>
+ #include <asm/hwcap.h>
+ #include <asm/image.h>
++#include <asm/xip_fixup.h>
+ #include "efi-header.S"
+ 
+ __HEAD
+@@ -297,6 +298,7 @@ clear_bss_done:
+ 	REG_S a0, (a2)
+ 
+ 	/* Initialize page tables and relocate to virtual addresses */
++	la tp, init_task
+ 	la sp, init_thread_union + THREAD_SIZE
+ 	XIP_FIXUP_OFFSET sp
+ #ifdef CONFIG_BUILTIN_DTB
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index 834eb652a7b9d..e0a00739bd13b 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -189,7 +189,7 @@ static void __init init_resources(void)
+ 		res = &mem_res[res_idx--];
+ 
+ 		res->name = "Reserved";
+-		res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
++		res->flags = IORESOURCE_MEM | IORESOURCE_EXCLUSIVE;
+ 		res->start = __pfn_to_phys(memblock_region_reserved_base_pfn(region));
+ 		res->end = __pfn_to_phys(memblock_region_reserved_end_pfn(region)) - 1;
+ 
+@@ -214,7 +214,7 @@ static void __init init_resources(void)
+ 
+ 		if (unlikely(memblock_is_nomap(region))) {
+ 			res->name = "Reserved";
+-			res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
++			res->flags = IORESOURCE_MEM | IORESOURCE_EXCLUSIVE;
+ 		} else {
+ 			res->name = "System RAM";
+ 			res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
+diff --git a/arch/riscv/kernel/suspend_entry.S b/arch/riscv/kernel/suspend_entry.S
+index 4b07b809a2b8c..aafcca58c19de 100644
+--- a/arch/riscv/kernel/suspend_entry.S
++++ b/arch/riscv/kernel/suspend_entry.S
+@@ -8,6 +8,7 @@
+ #include <asm/asm.h>
+ #include <asm/asm-offsets.h>
+ #include <asm/csr.h>
++#include <asm/xip_fixup.h>
+ 
+ 	.text
+ 	.altmacro
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 05ed641a1134c..39e2e1d0e94f4 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -677,7 +677,7 @@ static __init pgprot_t pgprot_from_va(uintptr_t va)
+ }
+ #endif /* CONFIG_STRICT_KERNEL_RWX */
+ 
+-#ifdef CONFIG_64BIT
++#if defined(CONFIG_64BIT) && !defined(CONFIG_XIP_KERNEL)
+ static void __init disable_pgtable_l5(void)
+ {
+ 	pgtable_l5_enabled = false;
+diff --git a/arch/s390/include/asm/cio.h b/arch/s390/include/asm/cio.h
+index 1effac6a01520..1c4f585dd39b6 100644
+--- a/arch/s390/include/asm/cio.h
++++ b/arch/s390/include/asm/cio.h
+@@ -369,7 +369,7 @@ void cio_gp_dma_destroy(struct gen_pool *gp_dma, struct device *dma_dev);
+ struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages);
+ 
+ /* Function from drivers/s390/cio/chsc.c */
+-int chsc_sstpc(void *page, unsigned int op, u16 ctrl, u64 *clock_delta);
++int chsc_sstpc(void *page, unsigned int op, u16 ctrl, long *clock_delta);
+ int chsc_sstpi(void *page, void *result, size_t size);
+ int chsc_stzi(void *page, void *result, size_t size);
+ int chsc_sgib(u32 origin);
+diff --git a/arch/s390/include/asm/kexec.h b/arch/s390/include/asm/kexec.h
+index 7f3c9ac34bd8d..63098df81c9f2 100644
+--- a/arch/s390/include/asm/kexec.h
++++ b/arch/s390/include/asm/kexec.h
+@@ -9,6 +9,8 @@
+ #ifndef _S390_KEXEC_H
+ #define _S390_KEXEC_H
+ 
++#include <linux/module.h>
++
+ #include <asm/processor.h>
+ #include <asm/page.h>
+ #include <asm/setup.h>
+@@ -83,4 +85,12 @@ struct kimage_arch {
+ extern const struct kexec_file_ops s390_kexec_image_ops;
+ extern const struct kexec_file_ops s390_kexec_elf_ops;
+ 
++#ifdef CONFIG_KEXEC_FILE
++struct purgatory_info;
++int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
++				     Elf_Shdr *section,
++				     const Elf_Shdr *relsec,
++				     const Elf_Shdr *symtab);
++#define arch_kexec_apply_relocations_add arch_kexec_apply_relocations_add
++#endif
+ #endif /*_S390_KEXEC_H */
+diff --git a/arch/s390/include/asm/preempt.h b/arch/s390/include/asm/preempt.h
+index d9d5350cc3ec3..bf15da0fedbca 100644
+--- a/arch/s390/include/asm/preempt.h
++++ b/arch/s390/include/asm/preempt.h
+@@ -46,10 +46,17 @@ static inline bool test_preempt_need_resched(void)
+ 
+ static inline void __preempt_count_add(int val)
+ {
+-	if (__builtin_constant_p(val) && (val >= -128) && (val <= 127))
+-		__atomic_add_const(val, &S390_lowcore.preempt_count);
+-	else
+-		__atomic_add(val, &S390_lowcore.preempt_count);
++	/*
++	 * With some obscure config options and CONFIG_PROFILE_ALL_BRANCHES
++	 * enabled, gcc 12 fails to handle __builtin_constant_p().
++	 */
++	if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES)) {
++		if (__builtin_constant_p(val) && (val >= -128) && (val <= 127)) {
++			__atomic_add_const(val, &S390_lowcore.preempt_count);
++			return;
++		}
++	}
++	__atomic_add(val, &S390_lowcore.preempt_count);
+ }
+ 
+ static inline void __preempt_count_sub(int val)
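
The workaround above keeps the __builtin_constant_p() fast path in the source but lets the compiler discard it for configurations where gcc 12 mishandles it; IS_ENABLED() folds to a compile-time 0/1, so the dead branch costs nothing. A user-space sketch of the shape (the IS_ENABLED macro here is a stand-in for the kernel's config-folding one, and __builtin_constant_p is a GCC/Clang builtin):

#include <stdio.h>

#define IS_ENABLED(opt) (opt)	/* kernel folds CONFIG_* to 0/1 similarly */
#define CONFIG_PROFILE_ALL_BRANCHES 0

static void count_add(int val)
{
	if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES)) {
		if (__builtin_constant_p(val) && val >= -128 && val <= 127) {
			puts("constant fast path");
			return;
		}
	}
	puts("generic path");
}

int main(void)
{
	count_add(1);	/* may take either path depending on inlining */
	return 0;
}
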
+diff --git a/arch/s390/kernel/perf_event.c b/arch/s390/kernel/perf_event.c
+index ea7729bebaa07..a7f8db73984b0 100644
+--- a/arch/s390/kernel/perf_event.c
++++ b/arch/s390/kernel/perf_event.c
+@@ -30,7 +30,7 @@ static struct kvm_s390_sie_block *sie_block(struct pt_regs *regs)
+ 	if (!stack)
+ 		return NULL;
+ 
+-	return (struct kvm_s390_sie_block *) stack->empty1[0];
++	return (struct kvm_s390_sie_block *)stack->empty1[1];
+ }
+ 
+ static bool is_in_guest(struct pt_regs *regs)
+diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
+index 326cb8f75f58e..f0a1484ee00b0 100644
+--- a/arch/s390/kernel/time.c
++++ b/arch/s390/kernel/time.c
+@@ -364,7 +364,7 @@ static inline int check_sync_clock(void)
+  * Apply clock delta to the global data structures.
+  * This is called once on the CPU that performed the clock sync.
+  */
+-static void clock_sync_global(unsigned long delta)
++static void clock_sync_global(long delta)
+ {
+ 	unsigned long now, adj;
+ 	struct ptff_qto qto;
+@@ -400,7 +400,7 @@ static void clock_sync_global(unsigned long delta)
+  * Apply clock delta to the per-CPU data structures of this CPU.
+  * This is called for each online CPU after the call to clock_sync_global.
+  */
+-static void clock_sync_local(unsigned long delta)
++static void clock_sync_local(long delta)
+ {
+ 	/* Add the delta to the clock comparator. */
+ 	if (S390_lowcore.clock_comparator != clock_comparator_max) {
+@@ -424,7 +424,7 @@ static void __init time_init_wq(void)
+ struct clock_sync_data {
+ 	atomic_t cpus;
+ 	int in_sync;
+-	unsigned long clock_delta;
++	long clock_delta;
+ };
+ 
+ /*
+@@ -544,7 +544,7 @@ static int stpinfo_valid(void)
+ static int stp_sync_clock(void *data)
+ {
+ 	struct clock_sync_data *sync = data;
+-	u64 clock_delta, flags;
++	long clock_delta, flags;
+ 	static int first;
+ 	int rc;
+ 
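
The s390 time hunks switch the clock deltas from u64/unsigned long to long because a sync correction can legitimately be negative: stored unsigned, a small backward step becomes an enormous forward one. A two-line illustration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	long delta = -5;	/* clock needs to step back 5 units */
	uint64_t as_unsigned = (uint64_t)delta;

	printf("signed: %ld, misread as unsigned: %llu\n",
	       delta, (unsigned long long)as_unsigned);
	return 0;
}
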
+diff --git a/arch/sparc/kernel/signal32.c b/arch/sparc/kernel/signal32.c
+index f9fe502b81c65..dad38960d1a8a 100644
+--- a/arch/sparc/kernel/signal32.c
++++ b/arch/sparc/kernel/signal32.c
+@@ -779,5 +779,6 @@ static_assert(offsetof(compat_siginfo_t, si_upper)	== 0x18);
+ static_assert(offsetof(compat_siginfo_t, si_pkey)	== 0x14);
+ static_assert(offsetof(compat_siginfo_t, si_perf_data)	== 0x10);
+ static_assert(offsetof(compat_siginfo_t, si_perf_type)	== 0x14);
++static_assert(offsetof(compat_siginfo_t, si_perf_flags)	== 0x18);
+ static_assert(offsetof(compat_siginfo_t, si_band)	== 0x0c);
+ static_assert(offsetof(compat_siginfo_t, si_fd)		== 0x10);
+diff --git a/arch/sparc/kernel/signal_64.c b/arch/sparc/kernel/signal_64.c
+index 8b9fc76cd3e02..570e43e6fda5c 100644
+--- a/arch/sparc/kernel/signal_64.c
++++ b/arch/sparc/kernel/signal_64.c
+@@ -590,5 +590,6 @@ static_assert(offsetof(siginfo_t, si_upper)	== 0x28);
+ static_assert(offsetof(siginfo_t, si_pkey)	== 0x20);
+ static_assert(offsetof(siginfo_t, si_perf_data)	== 0x18);
+ static_assert(offsetof(siginfo_t, si_perf_type)	== 0x20);
++static_assert(offsetof(siginfo_t, si_perf_flags) == 0x24);
+ static_assert(offsetof(siginfo_t, si_band)	== 0x10);
+ static_assert(offsetof(siginfo_t, si_fd)	== 0x14);
+diff --git a/arch/um/drivers/chan_user.c b/arch/um/drivers/chan_user.c
+index 6040817c036f3..25727ed648b72 100644
+--- a/arch/um/drivers/chan_user.c
++++ b/arch/um/drivers/chan_user.c
+@@ -220,7 +220,7 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out,
+ 		       unsigned long *stack_out)
+ {
+ 	struct winch_data data;
+-	int fds[2], n, err;
++	int fds[2], n, err, pid;
+ 	char c;
+ 
+ 	err = os_pipe(fds, 1, 1);
+@@ -238,8 +238,9 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out,
+ 	 * problem with /dev/net/tun, which if held open by this
+ 	 * thread, prevents the TUN/TAP device from being reused.
+ 	 */
+-	err = run_helper_thread(winch_thread, &data, CLONE_FILES, stack_out);
+-	if (err < 0) {
++	pid = run_helper_thread(winch_thread, &data, CLONE_FILES, stack_out);
++	if (pid < 0) {
++		err = pid;
+ 		printk(UM_KERN_ERR "fork of winch_thread failed - errno = %d\n",
+ 		       -err);
+ 		goto out_close;
+@@ -263,7 +264,7 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out,
+ 		goto out_close;
+ 	}
+ 
+-	return err;
++	return pid;
+ 
+  out_close:
+ 	close(fds[1]);
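
The winch_tramp() fix keeps the helper's result in its own pid variable: run_helper_thread() returns a pid on success or a negative errno on failure, and reusing the generic err variable meant the pid was lost by the time the function returned. A sketch of the rule, with a stubbed helper:

#include <stdio.h>

static int run_helper(void) { return 1234; }	/* stub: returns a pid */

static int tramp(void)
{
	int err, pid;

	pid = run_helper();
	if (pid < 0) {
		err = pid;	/* only negative values are errors */
		goto out;
	}
	/* ... further setup may assign err without touching pid ... */
	return pid;		/* success: hand back the pid, not err */
out:
	return err;
}

int main(void)
{
	printf("%d\n", tramp());
	return 0;
}
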
+diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
+index ba562d68dc048..82ff3785bf69f 100644
+--- a/arch/um/drivers/virtio_uml.c
++++ b/arch/um/drivers/virtio_uml.c
+@@ -63,6 +63,7 @@ struct virtio_uml_device {
+ 
+ 	u8 config_changed_irq:1;
+ 	uint64_t vq_irq_vq_map;
++	int recv_rc;
+ };
+ 
+ struct virtio_uml_vq_info {
+@@ -148,14 +149,6 @@ static int vhost_user_recv(struct virtio_uml_device *vu_dev,
+ 
+ 	rc = vhost_user_recv_header(fd, msg);
+ 
+-	if (rc == -ECONNRESET && vu_dev->registered) {
+-		struct virtio_uml_platform_data *pdata;
+-
+-		pdata = vu_dev->pdata;
+-
+-		virtio_break_device(&vu_dev->vdev);
+-		schedule_work(&pdata->conn_broken_wk);
+-	}
+ 	if (rc)
+ 		return rc;
+ 	size = msg->header.size;
+@@ -164,6 +157,21 @@ static int vhost_user_recv(struct virtio_uml_device *vu_dev,
+ 	return full_read(fd, &msg->payload, size, false);
+ }
+ 
++static void vhost_user_check_reset(struct virtio_uml_device *vu_dev,
++				   int rc)
++{
++	struct virtio_uml_platform_data *pdata = vu_dev->pdata;
++
++	if (rc != -ECONNRESET)
++		return;
++
++	if (!vu_dev->registered)
++		return;
++
++	virtio_break_device(&vu_dev->vdev);
++	schedule_work(&pdata->conn_broken_wk);
++}
++
+ static int vhost_user_recv_resp(struct virtio_uml_device *vu_dev,
+ 				struct vhost_user_msg *msg,
+ 				size_t max_payload_size)
+@@ -171,8 +179,10 @@ static int vhost_user_recv_resp(struct virtio_uml_device *vu_dev,
+ 	int rc = vhost_user_recv(vu_dev, vu_dev->sock, msg,
+ 				 max_payload_size, true);
+ 
+-	if (rc)
++	if (rc) {
++		vhost_user_check_reset(vu_dev, rc);
+ 		return rc;
++	}
+ 
+ 	if (msg->header.flags != (VHOST_USER_FLAG_REPLY | VHOST_USER_VERSION))
+ 		return -EPROTO;
+@@ -369,6 +379,7 @@ static irqreturn_t vu_req_read_message(struct virtio_uml_device *vu_dev,
+ 				 sizeof(msg.msg.payload) +
+ 				 sizeof(msg.extra_payload));
+ 
++	vu_dev->recv_rc = rc;
+ 	if (rc)
+ 		return IRQ_NONE;
+ 
+@@ -412,7 +423,9 @@ static irqreturn_t vu_req_interrupt(int irq, void *data)
+ 	if (!um_irq_timetravel_handler_used())
+ 		ret = vu_req_read_message(vu_dev, NULL);
+ 
+-	if (vu_dev->vq_irq_vq_map) {
++	if (vu_dev->recv_rc) {
++		vhost_user_check_reset(vu_dev, vu_dev->recv_rc);
++	} else if (vu_dev->vq_irq_vq_map) {
+ 		struct virtqueue *vq;
+ 
+ 		virtio_device_for_each_vq((&vu_dev->vdev), vq) {
+diff --git a/arch/um/include/asm/Kbuild b/arch/um/include/asm/Kbuild
+index f1f3f52f1e9cc..b2d834a29f3a9 100644
+--- a/arch/um/include/asm/Kbuild
++++ b/arch/um/include/asm/Kbuild
+@@ -4,6 +4,7 @@ generic-y += bug.h
+ generic-y += compat.h
+ generic-y += current.h
+ generic-y += device.h
++generic-y += dma-mapping.h
+ generic-y += emergency-restart.h
+ generic-y += exec.h
+ generic-y += extable.h
+diff --git a/arch/um/include/asm/thread_info.h b/arch/um/include/asm/thread_info.h
+index 1395cbd7e340d..c7b4b49826a2a 100644
+--- a/arch/um/include/asm/thread_info.h
++++ b/arch/um/include/asm/thread_info.h
+@@ -60,6 +60,7 @@ static inline struct thread_info *current_thread_info(void)
+ #define TIF_RESTORE_SIGMASK	7
+ #define TIF_NOTIFY_RESUME	8
+ #define TIF_SECCOMP		9	/* secure computing */
++#define TIF_SINGLESTEP		10	/* single stepping userspace */
+ 
+ #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
+ #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
+@@ -68,5 +69,6 @@ static inline struct thread_info *current_thread_info(void)
+ #define _TIF_MEMDIE		(1 << TIF_MEMDIE)
+ #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
+ #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
++#define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
+ 
+ #endif
+diff --git a/arch/um/kernel/exec.c b/arch/um/kernel/exec.c
+index c85e40c72779f..58938d75871af 100644
+--- a/arch/um/kernel/exec.c
++++ b/arch/um/kernel/exec.c
+@@ -43,7 +43,7 @@ void start_thread(struct pt_regs *regs, unsigned long eip, unsigned long esp)
+ {
+ 	PT_REGS_IP(regs) = eip;
+ 	PT_REGS_SP(regs) = esp;
+-	current->ptrace &= ~PT_DTRACE;
++	clear_thread_flag(TIF_SINGLESTEP);
+ #ifdef SUBARCH_EXECVE1
+ 	SUBARCH_EXECVE1(regs->regs);
+ #endif
+diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
+index 80504680be084..88c5c78442813 100644
+--- a/arch/um/kernel/process.c
++++ b/arch/um/kernel/process.c
+@@ -335,7 +335,7 @@ int singlestepping(void * t)
+ {
+ 	struct task_struct *task = t ? t : current;
+ 
+-	if (!(task->ptrace & PT_DTRACE))
++	if (!test_thread_flag(TIF_SINGLESTEP))
+ 		return 0;
+ 
+ 	if (task->thread.singlestep_syscall)
+diff --git a/arch/um/kernel/ptrace.c b/arch/um/kernel/ptrace.c
+index bfaf6ab1ac037..5154b27de580f 100644
+--- a/arch/um/kernel/ptrace.c
++++ b/arch/um/kernel/ptrace.c
+@@ -11,7 +11,7 @@
+ 
+ void user_enable_single_step(struct task_struct *child)
+ {
+-	child->ptrace |= PT_DTRACE;
++	set_tsk_thread_flag(child, TIF_SINGLESTEP);
+ 	child->thread.singlestep_syscall = 0;
+ 
+ #ifdef SUBARCH_SET_SINGLESTEPPING
+@@ -21,7 +21,7 @@ void user_enable_single_step(struct task_struct *child)
+ 
+ void user_disable_single_step(struct task_struct *child)
+ {
+-	child->ptrace &= ~PT_DTRACE;
++	clear_tsk_thread_flag(child, TIF_SINGLESTEP);
+ 	child->thread.singlestep_syscall = 0;
+ 
+ #ifdef SUBARCH_SET_SINGLESTEPPING
+@@ -120,7 +120,7 @@ static void send_sigtrap(struct uml_pt_regs *regs, int error_code)
+ }
+ 
+ /*
+- * XXX Check PT_DTRACE vs TIF_SINGLESTEP for singlestepping check and
++ * XXX Check TIF_SINGLESTEP for singlestepping check and
+  * PT_PTRACED vs TIF_SYSCALL_TRACE for syscall tracing check
+  */
+ int syscall_trace_enter(struct pt_regs *regs)
+@@ -144,7 +144,7 @@ void syscall_trace_leave(struct pt_regs *regs)
+ 	audit_syscall_exit(regs);
+ 
+ 	/* Fake a debug trap */
+-	if (ptraced & PT_DTRACE)
++	if (test_thread_flag(TIF_SINGLESTEP))
+ 		send_sigtrap(&regs->regs, 0);
+ 
+ 	if (!test_thread_flag(TIF_SYSCALL_TRACE))
+diff --git a/arch/um/kernel/signal.c b/arch/um/kernel/signal.c
+index 88cd9b5c1b744..ae4658f576ab7 100644
+--- a/arch/um/kernel/signal.c
++++ b/arch/um/kernel/signal.c
+@@ -53,7 +53,7 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
+ 	unsigned long sp;
+ 	int err;
+ 
+-	if ((current->ptrace & PT_DTRACE) && (current->ptrace & PT_PTRACED))
++	if (test_thread_flag(TIF_SINGLESTEP) && (current->ptrace & PT_PTRACED))
+ 		singlestep = 1;
+ 
+ 	/* Did we come from a system call? */
+@@ -128,7 +128,7 @@ void do_signal(struct pt_regs *regs)
+ 	 * on the host.  The tracing thread will check this flag and
+ 	 * PTRACE_SYSCALL if necessary.
+ 	 */
+-	if (current->ptrace & PT_DTRACE)
++	if (test_thread_flag(TIF_SINGLESTEP))
+ 		current->thread.singlestep_syscall =
+ 			is_syscall(PT_REGS_IP(&current->thread.regs));
+ 
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 4bed3abf444d1..b2c65f5733538 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1313,7 +1313,7 @@ config MICROCODE
+ 
+ config MICROCODE_INTEL
+ 	bool "Intel microcode loading support"
+-	depends on MICROCODE
++	depends on CPU_SUP_INTEL && MICROCODE
+ 	default MICROCODE
+ 	help
+ 	  This options enables microcode patch loading support for Intel
+@@ -1325,7 +1325,7 @@ config MICROCODE_INTEL
+ 
+ config MICROCODE_AMD
+ 	bool "AMD microcode loading support"
+-	depends on MICROCODE
++	depends on CPU_SUP_AMD && MICROCODE
+ 	help
+ 	  If you select this option, microcode patch loading support for AMD
+ 	  processors will be enabled.
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 73d958522b6a4..d8376e5fe1afe 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -508,6 +508,7 @@ SYM_CODE_START(\asmsym)
+ 	call	vc_switch_off_ist
+ 	movq	%rax, %rsp		/* Switch to new stack */
+ 
++	ENCODE_FRAME_POINTER
+ 	UNWIND_HINT_REGS
+ 
+ 	/* Update pt_regs */
+diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
+index 235a5794296ac..1000d457c3321 100644
+--- a/arch/x86/entry/vdso/vma.c
++++ b/arch/x86/entry/vdso/vma.c
+@@ -438,7 +438,7 @@ bool arch_syscall_is_vdso_sigreturn(struct pt_regs *regs)
+ static __init int vdso_setup(char *s)
+ {
+ 	vdso64_enabled = simple_strtoul(s, NULL, 0);
+-	return 0;
++	return 1;
+ }
+ __setup("vdso=", vdso_setup);
+ 
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index 9739019d4b67a..2704ec1e42a30 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -304,6 +304,16 @@ static int perf_ibs_init(struct perf_event *event)
+ 	hwc->config_base = perf_ibs->msr;
+ 	hwc->config = config;
+ 
++	/*
++	 * rip recorded by IbsOpRip will not be consistent with rsp and rbp
++	 * recorded as part of interrupt regs. Thus we need to use rip from
++	 * interrupt regs while unwinding call stack. Setting _EARLY flag
++	 * makes sure we unwind call-stack before perf sample rip is set to
++	 * IbsOpRip.
++	 */
++	if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
++		event->attr.sample_type |= __PERF_SAMPLE_CALLCHAIN_EARLY;
++
+ 	return 0;
+ }
+ 
+@@ -687,6 +697,14 @@ fail:
+ 		data.raw = &raw;
+ 	}
+ 
++	/*
++	 * rip recorded by IbsOpRip will not be consistent with rsp and rbp
++	 * recorded as part of interrupt regs. Thus we need to use rip from
++	 * interrupt regs while unwinding call stack.
++	 */
++	if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
++		data.callchain = perf_callchain(event, iregs);
++
+ 	throttle = perf_event_overflow(event, &data, &regs);
+ out:
+ 	if (throttle) {
+@@ -759,9 +777,10 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
+ 	return ret;
+ }
+ 
+-static __init void perf_event_ibs_init(void)
++static __init int perf_event_ibs_init(void)
+ {
+ 	struct attribute **attr = ibs_op_format_attrs;
++	int ret;
+ 
+ 	/*
+ 	 * Some chips fail to reset the fetch count when it is written; instead
+@@ -773,7 +792,9 @@ static __init void perf_event_ibs_init(void)
+ 	if (boot_cpu_data.x86 == 0x19 && boot_cpu_data.x86_model < 0x10)
+ 		perf_ibs_fetch.fetch_ignore_if_zero_rip = 1;
+ 
+-	perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
++	ret = perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch");
++	if (ret)
++		return ret;
+ 
+ 	if (ibs_caps & IBS_CAPS_OPCNT) {
+ 		perf_ibs_op.config_mask |= IBS_OP_CNT_CTL;
+@@ -786,15 +807,35 @@ static __init void perf_event_ibs_init(void)
+ 		perf_ibs_op.cnt_mask    |= IBS_OP_MAX_CNT_EXT_MASK;
+ 	}
+ 
+-	perf_ibs_pmu_init(&perf_ibs_op, "ibs_op");
++	ret = perf_ibs_pmu_init(&perf_ibs_op, "ibs_op");
++	if (ret)
++		goto err_op;
++
++	ret = register_nmi_handler(NMI_LOCAL, perf_ibs_nmi_handler, 0, "perf_ibs");
++	if (ret)
++		goto err_nmi;
+ 
+-	register_nmi_handler(NMI_LOCAL, perf_ibs_nmi_handler, 0, "perf_ibs");
+ 	pr_info("perf: AMD IBS detected (0x%08x)\n", ibs_caps);
++	return 0;
++
++err_nmi:
++	perf_pmu_unregister(&perf_ibs_op.pmu);
++	free_percpu(perf_ibs_op.pcpu);
++	perf_ibs_op.pcpu = NULL;
++err_op:
++	perf_pmu_unregister(&perf_ibs_fetch.pmu);
++	free_percpu(perf_ibs_fetch.pcpu);
++	perf_ibs_fetch.pcpu = NULL;
++
++	return ret;
+ }
+ 
+ #else /* defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_AMD) */
+ 
+-static __init void perf_event_ibs_init(void) { }
++static __init int perf_event_ibs_init(void)
++{
++	return 0;
++}
+ 
+ #endif
+ 
+@@ -1064,9 +1105,7 @@ static __init int amd_ibs_init(void)
+ 			  x86_pmu_amd_ibs_starting_cpu,
+ 			  x86_pmu_amd_ibs_dying_cpu);
+ 
+-	perf_event_ibs_init();
+-
+-	return 0;
++	return perf_event_ibs_init();
+ }
+ 
+ /* Since we need the pci subsystem to init ibs we can't do this earlier: */
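
perf_event_ibs_init() now returns int and unwinds partial setup on failure (unregister the pmu, free the per-cpu data, in reverse order), the same teardown ladder as the of_node_put() hunks earlier. Note also the #else branch: when the feature is configured out, the stub still returns 0 so the caller's "return perf_event_ibs_init();" needs no #ifdef. A compact sketch of that stub idiom (hypothetical names):

#include <stdio.h>

#define HAVE_FEATURE 0	/* stand-in for the CONFIG_* test */

#if HAVE_FEATURE
static int feature_init(void)
{
	/* real initialisation, may fail with a negative errno */
	return -1;
}
#else
static int feature_init(void)
{
	return 0;	/* compiled out: succeed so callers stay unconditional */
}
#endif

int main(void)
{
	return feature_init();
}
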
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index fc7f458eb3de6..c6e7d358fb8d8 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -276,7 +276,7 @@ static struct event_constraint intel_icl_event_constraints[] = {
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0x03, 0x0a, 0xf),
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0x1f, 0x28, 0xf),
+ 	INTEL_EVENT_CONSTRAINT(0x32, 0xf),	/* SW_PREFETCH_ACCESS.* */
+-	INTEL_EVENT_CONSTRAINT_RANGE(0x48, 0x54, 0xf),
++	INTEL_EVENT_CONSTRAINT_RANGE(0x48, 0x56, 0xf),
+ 	INTEL_EVENT_CONSTRAINT_RANGE(0x60, 0x8b, 0xf),
+ 	INTEL_UEVENT_CONSTRAINT(0x04a3, 0xff),  /* CYCLE_ACTIVITY.STALLS_TOTAL */
+ 	INTEL_UEVENT_CONSTRAINT(0x10a3, 0xff),  /* CYCLE_ACTIVITY.CYCLES_MEM_ANY */
+diff --git a/arch/x86/include/asm/acenv.h b/arch/x86/include/asm/acenv.h
+index 9aff97f0de7fd..d937c55e717e6 100644
+--- a/arch/x86/include/asm/acenv.h
++++ b/arch/x86/include/asm/acenv.h
+@@ -13,7 +13,19 @@
+ 
+ /* Asm macros */
+ 
+-#define ACPI_FLUSH_CPU_CACHE()	wbinvd()
++/*
++ * ACPI_FLUSH_CPU_CACHE() flushes caches on entering sleep states.
++ * It is required to prevent data loss.
++ *
++ * While running inside virtual machine, the kernel can bypass cache flushing.
++ * Changing sleep state in a virtual machine doesn't affect the host system
++ * sleep state and cannot lead to data loss.
++ */
++#define ACPI_FLUSH_CPU_CACHE()					\
++do {								\
++	if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR))	\
++		wbinvd();					\
++} while (0)
+ 
+ int __acpi_acquire_global_lock(unsigned int *lock);
+ int __acpi_release_global_lock(unsigned int *lock);
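
ACPI_FLUSH_CPU_CACHE() grows a body, so it is wrapped in do { ... } while (0): that keeps a multi-statement or conditional macro behaving as a single statement, for example under an un-braced if/else. The classic sketch:

#include <stdio.h>

#define FLUSH_IF_BARE_METAL(bare_metal)		\
do {						\
	if (bare_metal)				\
		puts("wbinvd");			\
} while (0)

int main(void)
{
	if (1)
		FLUSH_IF_BARE_METAL(0);	/* expands to a single statement */
	else
		puts("never reached");
	return 0;
}
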
+diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
+index 11b7c06e2828c..6ad8d946cd3eb 100644
+--- a/arch/x86/include/asm/kexec.h
++++ b/arch/x86/include/asm/kexec.h
+@@ -186,6 +186,14 @@ extern int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages,
+ extern void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages);
+ #define arch_kexec_pre_free_pages arch_kexec_pre_free_pages
+ 
++#ifdef CONFIG_KEXEC_FILE
++struct purgatory_info;
++int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
++				     Elf_Shdr *section,
++				     const Elf_Shdr *relsec,
++				     const Elf_Shdr *symtab);
++#define arch_kexec_apply_relocations_add arch_kexec_apply_relocations_add
++#endif
+ #endif
+ 
+ typedef void crash_vmclear_fn(void);
+diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
+index 78ca535124864..b45c4d27fd46e 100644
+--- a/arch/x86/include/asm/set_memory.h
++++ b/arch/x86/include/asm/set_memory.h
+@@ -86,56 +86,4 @@ bool kernel_page_present(struct page *page);
+ 
+ extern int kernel_set_to_readonly;
+ 
+-#ifdef CONFIG_X86_64
+-/*
+- * Prevent speculative access to the page by either unmapping
+- * it (if we do not require access to any part of the page) or
+- * marking it uncacheable (if we want to try to retrieve data
+- * from non-poisoned lines in the page).
+- */
+-static inline int set_mce_nospec(unsigned long pfn, bool unmap)
+-{
+-	unsigned long decoy_addr;
+-	int rc;
+-
+-	/* SGX pages are not in the 1:1 map */
+-	if (arch_is_platform_page(pfn << PAGE_SHIFT))
+-		return 0;
+-	/*
+-	 * We would like to just call:
+-	 *      set_memory_XX((unsigned long)pfn_to_kaddr(pfn), 1);
+-	 * but doing that would radically increase the odds of a
+-	 * speculative access to the poison page because we'd have
+-	 * the virtual address of the kernel 1:1 mapping sitting
+-	 * around in registers.
+-	 * Instead we get tricky.  We create a non-canonical address
+-	 * that looks just like the one we want, but has bit 63 flipped.
+-	 * This relies on set_memory_XX() properly sanitizing any __pa()
+-	 * results with __PHYSICAL_MASK or PTE_PFN_MASK.
+-	 */
+-	decoy_addr = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63));
+-
+-	if (unmap)
+-		rc = set_memory_np(decoy_addr, 1);
+-	else
+-		rc = set_memory_uc(decoy_addr, 1);
+-	if (rc)
+-		pr_warn("Could not invalidate pfn=0x%lx from 1:1 map\n", pfn);
+-	return rc;
+-}
+-#define set_mce_nospec set_mce_nospec
+-
+-/* Restore full speculative operation to the pfn. */
+-static inline int clear_mce_nospec(unsigned long pfn)
+-{
+-	return set_memory_wb((unsigned long) pfn_to_kaddr(pfn), 1);
+-}
+-#define clear_mce_nospec clear_mce_nospec
+-#else
+-/*
+- * Few people would run a 32-bit kernel on a machine that supports
+- * recoverable errors because they have too much memory to boot 32-bit.
+- */
+-#endif
+-
+ #endif /* _ASM_X86_SET_MEMORY_H */
+diff --git a/arch/x86/include/asm/suspend_32.h b/arch/x86/include/asm/suspend_32.h
+index 7b132d0312ebf..a800abb1a9925 100644
+--- a/arch/x86/include/asm/suspend_32.h
++++ b/arch/x86/include/asm/suspend_32.h
+@@ -19,7 +19,6 @@ struct saved_context {
+ 	u16 gs;
+ 	unsigned long cr0, cr2, cr3, cr4;
+ 	u64 misc_enable;
+-	bool misc_enable_saved;
+ 	struct saved_msrs saved_msrs;
+ 	struct desc_ptr gdt_desc;
+ 	struct desc_ptr idt;
+@@ -28,6 +27,7 @@ struct saved_context {
+ 	unsigned long tr;
+ 	unsigned long safety;
+ 	unsigned long return_address;
++	bool misc_enable_saved;
+ } __attribute__((packed));
+ 
+ /* routines for saving/restoring kernel state */
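
This hunk and the suspend_64.h one below move the lone bool to the tail of the packed structure. In a packed struct an odd-sized member shifts everything after it off natural alignment, which both costs unaligned accesses and, as the new comment notes, hides pointers from tools like kmemleak that only scan aligned words. A check with offsetof (on an LP64 target this prints 9 versus 8):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct bad {
	unsigned long cr0;
	bool saved;		/* one odd byte ... */
	unsigned long sp;	/* ... pushes this to offset 9 */
} __attribute__((packed));

struct good {
	unsigned long cr0;
	unsigned long sp;	/* stays at a natural offset 8 */
	bool saved;		/* odd byte moved to the tail */
} __attribute__((packed));

int main(void)
{
	printf("bad.sp at %zu, good.sp at %zu\n",
	       offsetof(struct bad, sp), offsetof(struct good, sp));
	return 0;
}
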
+diff --git a/arch/x86/include/asm/suspend_64.h b/arch/x86/include/asm/suspend_64.h
+index 35bb35d28733e..54df06687d834 100644
+--- a/arch/x86/include/asm/suspend_64.h
++++ b/arch/x86/include/asm/suspend_64.h
+@@ -14,9 +14,13 @@
+  * Image of the saved processor state, used by the low level ACPI suspend to
+  * RAM code and by the low level hibernation code.
+  *
+- * If you modify it, fix arch/x86/kernel/acpi/wakeup_64.S and make sure that
+- * __save/__restore_processor_state(), defined in arch/x86/kernel/suspend_64.c,
+- * still work as required.
++ * If you modify it, check how it is used in arch/x86/kernel/acpi/wakeup_64.S
++ * and make sure that __save/__restore_processor_state(), defined in
++ * arch/x86/power/cpu.c, still work as required.
++ *
++ * Because the structure is packed, make sure to avoid unaligned members. For
++ * optimisation purposes but also because tools like kmemleak only search for
++ * pointers that are aligned.
+  */
+ struct saved_context {
+ 	struct pt_regs regs;
+@@ -36,7 +40,6 @@ struct saved_context {
+ 
+ 	unsigned long cr0, cr2, cr3, cr4;
+ 	u64 misc_enable;
+-	bool misc_enable_saved;
+ 	struct saved_msrs saved_msrs;
+ 	unsigned long efer;
+ 	u16 gdt_pad; /* Unused */
+@@ -48,6 +51,7 @@ struct saved_context {
+ 	unsigned long tr;
+ 	unsigned long safety;
+ 	unsigned long return_address;
++	bool misc_enable_saved;
+ } __attribute__((packed));
+ 
+ #define loaddebug(thread,register) \
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index b70344bf66008..ed7d9cf71f68d 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -170,7 +170,7 @@ static __init int setup_apicpmtimer(char *s)
+ {
+ 	apic_calibrate_pmtmr = 1;
+ 	notsc_setup(NULL);
+-	return 0;
++	return 1;
+ }
+ __setup("apicpmtimer", setup_apicpmtimer);
+ #endif
+diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
+index f5a48e66e4f54..a6e9c2794ef56 100644
+--- a/arch/x86/kernel/apic/x2apic_uv_x.c
++++ b/arch/x86/kernel/apic/x2apic_uv_x.c
+@@ -199,7 +199,13 @@ static void __init uv_tsc_check_sync(void)
+ 	int mmr_shift;
+ 	char *state;
+ 
+-	/* Different returns from different UV BIOS versions */
++	/* UV5 guarantees synced TSCs; do not zero TSC_ADJUST */
++	if (!is_uv(UV2|UV3|UV4)) {
++		mark_tsc_async_resets("UV5+");
++		return;
++	}
++
++	/* UV2,3,4, UV BIOS TSC sync state available */
+ 	mmr = uv_early_read_mmr(UVH_TSC_SYNC_MMR);
+ 	mmr_shift =
+ 		is_uv2_hub() ? UVH_TSC_SYNC_SHIFT_UV2K : UVH_TSC_SYNC_SHIFT;
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index f7a5370a9b3b8..2c87d62f191e2 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -91,7 +91,7 @@ static bool ring3mwait_disabled __read_mostly;
+ static int __init ring3mwait_disable(char *__unused)
+ {
+ 	ring3mwait_disabled = true;
+-	return 0;
++	return 1;
+ }
+ __setup("ring3mwait=disable", ring3mwait_disable);
+ 
+diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
+index 1940d305db1c0..1c87501e0fa3d 100644
+--- a/arch/x86/kernel/cpu/mce/amd.c
++++ b/arch/x86/kernel/cpu/mce/amd.c
+@@ -1294,10 +1294,23 @@ out_free:
+ 	kfree(bank);
+ }
+ 
++static void __threshold_remove_device(struct threshold_bank **bp)
++{
++	unsigned int bank, numbanks = this_cpu_read(mce_num_banks);
++
++	for (bank = 0; bank < numbanks; bank++) {
++		if (!bp[bank])
++			continue;
++
++		threshold_remove_bank(bp[bank]);
++		bp[bank] = NULL;
++	}
++	kfree(bp);
++}
++
+ int mce_threshold_remove_device(unsigned int cpu)
+ {
+ 	struct threshold_bank **bp = this_cpu_read(threshold_banks);
+-	unsigned int bank, numbanks = this_cpu_read(mce_num_banks);
+ 
+ 	if (!bp)
+ 		return 0;
+@@ -1308,13 +1321,7 @@ int mce_threshold_remove_device(unsigned int cpu)
+ 	 */
+ 	this_cpu_write(threshold_banks, NULL);
+ 
+-	for (bank = 0; bank < numbanks; bank++) {
+-		if (bp[bank]) {
+-			threshold_remove_bank(bp[bank]);
+-			bp[bank] = NULL;
+-		}
+-	}
+-	kfree(bp);
++	__threshold_remove_device(bp);
+ 	return 0;
+ }
+ 
+@@ -1351,15 +1358,14 @@ int mce_threshold_create_device(unsigned int cpu)
+ 		if (!(this_cpu_read(bank_map) & (1 << bank)))
+ 			continue;
+ 		err = threshold_create_bank(bp, cpu, bank);
+-		if (err)
+-			goto out_err;
++		if (err) {
++			__threshold_remove_device(bp);
++			return err;
++		}
+ 	}
+ 	this_cpu_write(threshold_banks, bp);
+ 
+ 	if (thresholding_irq_en)
+ 		mce_threshold_vector = amd_threshold_interrupt;
+ 	return 0;
+-out_err:
+-	mce_threshold_remove_device(cpu);
+-	return err;
+ }
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 981496e6bc0e4..fa67bb9d1afed 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -579,7 +579,7 @@ static int uc_decode_notifier(struct notifier_block *nb, unsigned long val,
+ 
+ 	pfn = mce->addr >> PAGE_SHIFT;
+ 	if (!memory_failure(pfn, 0)) {
+-		set_mce_nospec(pfn, whole_page(mce));
++		set_mce_nospec(pfn);
+ 		mce->kflags |= MCE_HANDLED_UC;
+ 	}
+ 
+@@ -1316,7 +1316,7 @@ static void kill_me_maybe(struct callback_head *cb)
+ 
+ 	ret = memory_failure(p->mce_addr >> PAGE_SHIFT, flags);
+ 	if (!ret) {
+-		set_mce_nospec(p->mce_addr >> PAGE_SHIFT, p->mce_whole_page);
++		set_mce_nospec(p->mce_addr >> PAGE_SHIFT);
+ 		sync_core();
+ 		return;
+ 	}
+@@ -1342,7 +1342,7 @@ static void kill_me_never(struct callback_head *cb)
+ 	p->mce_count = 0;
+ 	pr_err("Kernel accessed poison in user space at %llx\n", p->mce_addr);
+ 	if (!memory_failure(p->mce_addr >> PAGE_SHIFT, 0))
+-		set_mce_nospec(p->mce_addr >> PAGE_SHIFT, p->mce_whole_page);
++		set_mce_nospec(p->mce_addr >> PAGE_SHIFT);
+ }
+ 
+ static void queue_task_work(struct mce *m, char *msg, void (*func)(struct callback_head *))
+diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
+index 3c24e6124d955..19876ebfb5044 100644
+--- a/arch/x86/kernel/cpu/sgx/encl.c
++++ b/arch/x86/kernel/cpu/sgx/encl.c
+@@ -152,7 +152,7 @@ static int __sgx_encl_eldu(struct sgx_encl_page *encl_page,
+ 
+ 	page_pcmd_off = sgx_encl_get_backing_page_pcmd_offset(encl, page_index);
+ 
+-	ret = sgx_encl_get_backing(encl, page_index, &b);
++	ret = sgx_encl_lookup_backing(encl, page_index, &b);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -718,7 +718,7 @@ static struct page *sgx_encl_get_backing_page(struct sgx_encl *encl,
+  *   0 on success,
+  *   -errno otherwise.
+  */
+-int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
++static int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
+ 			 struct sgx_backing *backing)
+ {
+ 	pgoff_t page_pcmd_off = sgx_encl_get_backing_page_pcmd_offset(encl, page_index);
+@@ -743,6 +743,107 @@ int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
+ 	return 0;
+ }
+ 
++/*
++ * When called from ksgxd, returns the mem_cgroup of a struct mm stored
++ * in the enclave's mm_list. When not called from ksgxd, just returns
++ * the mem_cgroup of the current task.
++ */
++static struct mem_cgroup *sgx_encl_get_mem_cgroup(struct sgx_encl *encl)
++{
++	struct mem_cgroup *memcg = NULL;
++	struct sgx_encl_mm *encl_mm;
++	int idx;
++
++	/*
++	 * If called from normal task context, return the mem_cgroup
++	 * of the current task's mm. The remainder of the handling is for
++	 * ksgxd.
++	 */
++	if (!current_is_ksgxd())
++		return get_mem_cgroup_from_mm(current->mm);
++
++	/*
++	 * Search the enclave's mm_list to find an mm associated with
++	 * this enclave to charge the allocation to.
++	 */
++	idx = srcu_read_lock(&encl->srcu);
++
++	list_for_each_entry_rcu(encl_mm, &encl->mm_list, list) {
++		if (!mmget_not_zero(encl_mm->mm))
++			continue;
++
++		memcg = get_mem_cgroup_from_mm(encl_mm->mm);
++
++		mmput_async(encl_mm->mm);
++
++		break;
++	}
++
++	srcu_read_unlock(&encl->srcu, idx);
++
++	/*
++	 * In the rare case that there isn't an mm associated with
++	 * the enclave, set memcg to the current active mem_cgroup.
++	 * This will be the root mem_cgroup if there is no active
++	 * mem_cgroup.
++	 */
++	if (!memcg)
++		return get_mem_cgroup_from_mm(NULL);
++
++	return memcg;
++}
++
++/**
++ * sgx_encl_alloc_backing() - allocate a new backing storage page
++ * @encl:	an enclave pointer
++ * @page_index:	enclave page index
++ * @backing:	data for accessing backing storage for the page
++ *
++ * When called from ksgxd, sets the active memcg from one of the
++ * mms in the enclave's mm_list prior to any backing page allocation,
++ * in order to ensure that shmem page allocations are charged to the
++ * enclave.
++ *
++ * Return:
++ *   0 on success,
++ *   -errno otherwise.
++ */
++int sgx_encl_alloc_backing(struct sgx_encl *encl, unsigned long page_index,
++			   struct sgx_backing *backing)
++{
++	struct mem_cgroup *encl_memcg = sgx_encl_get_mem_cgroup(encl);
++	struct mem_cgroup *memcg = set_active_memcg(encl_memcg);
++	int ret;
++
++	ret = sgx_encl_get_backing(encl, page_index, backing);
++
++	set_active_memcg(memcg);
++	mem_cgroup_put(encl_memcg);
++
++	return ret;
++}
++
++/**
++ * sgx_encl_lookup_backing() - retrieve an existing backing storage page
++ * @encl:	an enclave pointer
++ * @page_index:	enclave page index
++ * @backing:	data for accessing backing storage for the page
++ *
++ * Retrieve a backing page for loading data back into an EPC page with ELDU.
++ * It is the caller's responsibility to ensure that it is appropriate to use
++ * sgx_encl_lookup_backing() rather than sgx_encl_alloc_backing(). If lookup is
++ * not used correctly, this will cause an allocation which is not accounted for.
++ *
++ * Return:
++ *   0 on success,
++ *   -errno otherwise.
++ */
++int sgx_encl_lookup_backing(struct sgx_encl *encl, unsigned long page_index,
++			   struct sgx_backing *backing)
++{
++	return sgx_encl_get_backing(encl, page_index, backing);
++}
++
+ /**
+  * sgx_encl_put_backing() - Unpin the backing storage
+  * @backing:	data for accessing backing storage for the page
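
[The sgx_encl_alloc_backing()/sgx_encl_get_mem_cgroup() pair added above
boils down to a save-and-restore charging pattern: pick the memcg that
should pay for the allocation, make it active for the duration, then
restore. A minimal sketch of that pattern, assuming only the standard
memcontrol helpers (the wrapper name and callback are illustrative):

	#include <linux/memcontrol.h>
	#include <linux/sched/mm.h>

	static int alloc_charged_to(struct mm_struct *mm,
				    int (*alloc)(void *arg), void *arg)
	{
		/* Charge @alloc to @mm's cgroup rather than the caller's. */
		struct mem_cgroup *target = get_mem_cgroup_from_mm(mm);
		struct mem_cgroup *old = set_active_memcg(target);
		int ret = alloc(arg);

		set_active_memcg(old);		/* restore the caller's memcg */
		mem_cgroup_put(target);
		return ret;
	}
]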
+diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
+index d44e7372151f0..332ef3568267e 100644
+--- a/arch/x86/kernel/cpu/sgx/encl.h
++++ b/arch/x86/kernel/cpu/sgx/encl.h
+@@ -103,10 +103,13 @@ static inline int sgx_encl_find(struct mm_struct *mm, unsigned long addr,
+ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
+ 		     unsigned long end, unsigned long vm_flags);
+ 
++bool current_is_ksgxd(void);
+ void sgx_encl_release(struct kref *ref);
+ int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm);
+-int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index,
+-			 struct sgx_backing *backing);
++int sgx_encl_lookup_backing(struct sgx_encl *encl, unsigned long page_index,
++			    struct sgx_backing *backing);
++int sgx_encl_alloc_backing(struct sgx_encl *encl, unsigned long page_index,
++			   struct sgx_backing *backing);
+ void sgx_encl_put_backing(struct sgx_backing *backing);
+ int sgx_encl_test_and_clear_young(struct mm_struct *mm,
+ 				  struct sgx_encl_page *page);
+diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
+index ab4ec54bbdd94..a78652d43e61b 100644
+--- a/arch/x86/kernel/cpu/sgx/main.c
++++ b/arch/x86/kernel/cpu/sgx/main.c
+@@ -313,7 +313,7 @@ static void sgx_reclaimer_write(struct sgx_epc_page *epc_page,
+ 	sgx_encl_put_backing(backing);
+ 
+ 	if (!encl->secs_child_cnt && test_bit(SGX_ENCL_INITIALIZED, &encl->flags)) {
+-		ret = sgx_encl_get_backing(encl, PFN_DOWN(encl->size),
++		ret = sgx_encl_alloc_backing(encl, PFN_DOWN(encl->size),
+ 					   &secs_backing);
+ 		if (ret)
+ 			goto out;
+@@ -384,7 +384,7 @@ static void sgx_reclaim_pages(void)
+ 		page_index = PFN_DOWN(encl_page->desc - encl_page->encl->base);
+ 
+ 		mutex_lock(&encl_page->encl->lock);
+-		ret = sgx_encl_get_backing(encl_page->encl, page_index, &backing[i]);
++		ret = sgx_encl_alloc_backing(encl_page->encl, page_index, &backing[i]);
+ 		if (ret) {
+ 			mutex_unlock(&encl_page->encl->lock);
+ 			goto skip;
+@@ -475,6 +475,11 @@ static bool __init sgx_page_reclaimer_init(void)
+ 	return true;
+ }
+ 
++bool current_is_ksgxd(void)
++{
++	return current == ksgxd_tsk;
++}
++
+ static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
+ {
+ 	struct sgx_numa_node *node = &sgx_numa_nodes[nid];
+diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
+index 566bb8e171492..0611fd83858e6 100644
+--- a/arch/x86/kernel/machine_kexec_64.c
++++ b/arch/x86/kernel/machine_kexec_64.c
+@@ -376,9 +376,6 @@ void machine_kexec(struct kimage *image)
+ #ifdef CONFIG_KEXEC_FILE
+ void *arch_kexec_kernel_image_load(struct kimage *image)
+ {
+-	vfree(image->elf_headers);
+-	image->elf_headers = NULL;
+-
+ 	if (!image->fops || !image->fops->load)
+ 		return ERR_PTR(-ENOEXEC);
+ 
+@@ -514,6 +511,15 @@ overflow:
+ 	       (int)ELF64_R_TYPE(rel[i].r_info), value);
+ 	return -ENOEXEC;
+ }
++
++int arch_kimage_file_post_load_cleanup(struct kimage *image)
++{
++	vfree(image->elf_headers);
++	image->elf_headers = NULL;
++	image->elf_headers_sz = 0;
++
++	return kexec_image_post_load_cleanup_default(image);
++}
+ #endif /* CONFIG_KEXEC_FILE */
+ 
+ static int
+diff --git a/arch/x86/kernel/signal_compat.c b/arch/x86/kernel/signal_compat.c
+index b52407c56000e..879ef8c72f5c0 100644
+--- a/arch/x86/kernel/signal_compat.c
++++ b/arch/x86/kernel/signal_compat.c
+@@ -149,8 +149,10 @@ static inline void signal_compat_build_tests(void)
+ 
+ 	BUILD_BUG_ON(offsetof(siginfo_t, si_perf_data) != 0x18);
+ 	BUILD_BUG_ON(offsetof(siginfo_t, si_perf_type) != 0x20);
++	BUILD_BUG_ON(offsetof(siginfo_t, si_perf_flags) != 0x24);
+ 	BUILD_BUG_ON(offsetof(compat_siginfo_t, si_perf_data) != 0x10);
+ 	BUILD_BUG_ON(offsetof(compat_siginfo_t, si_perf_type) != 0x14);
++	BUILD_BUG_ON(offsetof(compat_siginfo_t, si_perf_flags) != 0x18);
+ 
+ 	CHECK_CSI_OFFSET(_sigpoll);
+ 	CHECK_CSI_SIZE  (_sigpoll, 2*sizeof(int));
+diff --git a/arch/x86/kernel/step.c b/arch/x86/kernel/step.c
+index 0f3c307b37b3a..8e2b2552b5eea 100644
+--- a/arch/x86/kernel/step.c
++++ b/arch/x86/kernel/step.c
+@@ -180,8 +180,7 @@ void set_task_blockstep(struct task_struct *task, bool on)
+ 	 *
+ 	 * NOTE: this means that set/clear TIF_BLOCKSTEP is only safe if
+ 	 * task is current or it can't be running, otherwise we can race
+-	 * with __switch_to_xtra(). We rely on ptrace_freeze_traced() but
+-	 * PTRACE_KILL is not safe.
++	 * with __switch_to_xtra(). We rely on ptrace_freeze_traced().
+ 	 */
+ 	local_irq_disable();
+ 	debugctl = get_debugctlmsr();
+diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
+index 660b78827638f..8cc653ffdccd7 100644
+--- a/arch/x86/kernel/sys_x86_64.c
++++ b/arch/x86/kernel/sys_x86_64.c
+@@ -68,9 +68,6 @@ static int __init control_va_addr_alignment(char *str)
+ 	if (*str == 0)
+ 		return 1;
+ 
+-	if (*str == '=')
+-		str++;
+-
+ 	if (!strcmp(str, "32"))
+ 		va_align.flags = ALIGN_VA_32;
+ 	else if (!strcmp(str, "64"))
+@@ -80,11 +77,11 @@ static int __init control_va_addr_alignment(char *str)
+ 	else if (!strcmp(str, "on"))
+ 		va_align.flags = ALIGN_VA_32 | ALIGN_VA_64;
+ 	else
+-		return 0;
++		pr_warn("invalid option value: 'align_va_addr=%s'\n", str);
+ 
+ 	return 1;
+ }
+-__setup("align_va_addr", control_va_addr_alignment);
++__setup("align_va_addr=", control_va_addr_alignment);
+ 
+ SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
+ 		unsigned long, prot, unsigned long, flags,
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 66b0eb0bda94e..6268880c8eed6 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -1548,6 +1548,7 @@ static void cancel_apic_timer(struct kvm_lapic *apic)
+ 	if (apic->lapic_timer.hv_timer_in_use)
+ 		cancel_hv_timer(apic);
+ 	preempt_enable();
++	atomic_set(&apic->lapic_timer.pending, 0);
+ }
+ 
+ static void apic_update_lvtt(struct kvm_lapic *apic)
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 880d0b0c9315b..ee7df31883cd9 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -3695,12 +3695,34 @@ vmcs12_guest_cr4(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
+ }
+ 
+ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu,
+-				      struct vmcs12 *vmcs12)
++				      struct vmcs12 *vmcs12,
++				      u32 vm_exit_reason, u32 exit_intr_info)
+ {
+ 	u32 idt_vectoring;
+ 	unsigned int nr;
+ 
+-	if (vcpu->arch.exception.injected) {
++	/*
++	 * Per the SDM, VM-Exits due to double and triple faults are never
++	 * considered to occur during event delivery, even if the double/triple
++	 * fault is the result of an escalating vectoring issue.
++	 *
++	 * Note, the SDM qualifies the double fault behavior with "The original
++	 * event results in a double-fault exception".  It's unclear why the
++	 * qualification exists since exits due to double fault can occur only
++	 * while vectoring a different exception (injected events are never
++	 * subject to interception), i.e. there's _always_ an original event.
++	 *
++	 * The SDM also uses NMI as a confusing example for the "original event
++	 * causes the VM exit directly" clause.  NMI isn't special in any way,
++	 * the same rule applies to all events that cause an exit directly.
++	 * NMI is an odd choice for the example because NMIs can only occur on
++	 * instruction boundaries, i.e. they _can't_ occur during vectoring.
++	 */
++	if ((u16)vm_exit_reason == EXIT_REASON_TRIPLE_FAULT ||
++	    ((u16)vm_exit_reason == EXIT_REASON_EXCEPTION_NMI &&
++	     is_double_fault(exit_intr_info))) {
++		vmcs12->idt_vectoring_info_field = 0;
++	} else if (vcpu->arch.exception.injected) {
+ 		nr = vcpu->arch.exception.nr;
+ 		idt_vectoring = nr | VECTORING_INFO_VALID_MASK;
+ 
+@@ -3733,6 +3755,8 @@ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu,
+ 			idt_vectoring |= INTR_TYPE_EXT_INTR;
+ 
+ 		vmcs12->idt_vectoring_info_field = idt_vectoring;
++	} else {
++		vmcs12->idt_vectoring_info_field = 0;
+ 	}
+ }
+ 
+@@ -4202,12 +4226,12 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ 	if (to_vmx(vcpu)->exit_reason.enclave_mode)
+ 		vmcs12->vm_exit_reason |= VMX_EXIT_REASONS_SGX_ENCLAVE_MODE;
+ 	vmcs12->exit_qualification = exit_qualification;
+-	vmcs12->vm_exit_intr_info = exit_intr_info;
+-
+-	vmcs12->idt_vectoring_info_field = 0;
+-	vmcs12->vm_exit_instruction_len = vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
+-	vmcs12->vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
+ 
++	/*
++	 * On VM-Exit due to a failed VM-Entry, the VMCS isn't marked launched
++	 * and only EXIT_REASON and EXIT_QUALIFICATION are updated, all other
++	 * exit info fields are unmodified.
++	 */
+ 	if (!(vmcs12->vm_exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY)) {
+ 		vmcs12->launch_state = 1;
+ 
+@@ -4219,7 +4243,12 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ 		 * Transfer the event that L0 or L1 may wanted to inject into
+ 		 * L2 to IDT_VECTORING_INFO_FIELD.
+ 		 */
+-		vmcs12_save_pending_event(vcpu, vmcs12);
++		vmcs12_save_pending_event(vcpu, vmcs12,
++					  vm_exit_reason, exit_intr_info);
++
++		vmcs12->vm_exit_intr_info = exit_intr_info;
++		vmcs12->vm_exit_instruction_len = vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
++		vmcs12->vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
+ 
+ 		/*
+ 		 * According to spec, there's no need to store the guest's
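
[The double/triple-fault special case added to vmcs12_save_pending_event()
condenses to a single predicate: such exits are never treated as occurring
during event delivery, so the IDT-vectoring field must read as zero. A
sketch of that check in isolation (is_double_fault() is the helper added in
the vmcs.h hunk below; the exit-reason constants are the usual VMX ones):

	static bool exit_during_event_delivery(u16 reason, u32 intr_info)
	{
		if (reason == EXIT_REASON_TRIPLE_FAULT)
			return false;
		if (reason == EXIT_REASON_EXCEPTION_NMI &&
		    is_double_fault(intr_info))
			return false;
		/* otherwise, report any injected/vectored event as pending */
		return true;
	}
]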
+diff --git a/arch/x86/kvm/vmx/vmcs.h b/arch/x86/kvm/vmx/vmcs.h
+index e325c290a8162..2b9d7a7e83f77 100644
+--- a/arch/x86/kvm/vmx/vmcs.h
++++ b/arch/x86/kvm/vmx/vmcs.h
+@@ -104,6 +104,11 @@ static inline bool is_breakpoint(u32 intr_info)
+ 	return is_exception_n(intr_info, BP_VECTOR);
+ }
+ 
++static inline bool is_double_fault(u32 intr_info)
++{
++	return is_exception_n(intr_info, DF_VECTOR);
++}
++
+ static inline bool is_page_fault(u32 intr_info)
+ {
+ 	return is_exception_n(intr_info, PF_VECTOR);
+diff --git a/arch/x86/lib/delay.c b/arch/x86/lib/delay.c
+index 65d15df6212d6..0e65d00e2339f 100644
+--- a/arch/x86/lib/delay.c
++++ b/arch/x86/lib/delay.c
+@@ -54,8 +54,8 @@ static void delay_loop(u64 __loops)
+ 		"	jnz 2b		\n"
+ 		"3:	dec %0		\n"
+ 
+-		: /* we don't need output */
+-		:"a" (loops)
++		: "+a" (loops)
++		:
+ 	);
+ }
+ 
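
[The delay_loop() constraint change above matters because the asm body
decrements %0: "+a" declares the operand as input-output, so the compiler
knows RAX is both consumed and clobbered rather than assuming it still
holds the loop count afterwards. The same pattern standalone (x86-64;
assumes loops > 0):

	static inline unsigned long spin_down(unsigned long loops)
	{
		asm volatile(
			"1:	dec	%0	\n"
			"	jnz	1b	\n"
			: "+a" (loops));	/* read and written by the asm */
		return loops;			/* now zero */
	}
]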
+diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
+index 4ba2a3ee4bce1..d5ef64ddd35e9 100644
+--- a/arch/x86/mm/pat/memtype.c
++++ b/arch/x86/mm/pat/memtype.c
+@@ -101,7 +101,7 @@ int pat_debug_enable;
+ static int __init pat_debug_setup(char *str)
+ {
+ 	pat_debug_enable = 1;
+-	return 0;
++	return 1;
+ }
+ __setup("debugpat", pat_debug_setup);
+ 
+diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
+index 0656db33574d3..1abd5438f1269 100644
+--- a/arch/x86/mm/pat/set_memory.c
++++ b/arch/x86/mm/pat/set_memory.c
+@@ -19,6 +19,7 @@
+ #include <linux/vmstat.h>
+ #include <linux/kernel.h>
+ #include <linux/cc_platform.h>
++#include <linux/set_memory.h>
+ 
+ #include <asm/e820/api.h>
+ #include <asm/processor.h>
+@@ -29,7 +30,6 @@
+ #include <asm/pgalloc.h>
+ #include <asm/proto.h>
+ #include <asm/memtype.h>
+-#include <asm/set_memory.h>
+ #include <asm/hyperv-tlfs.h>
+ #include <asm/mshyperv.h>
+ 
+@@ -1805,7 +1805,7 @@ static inline int cpa_clear_pages_array(struct page **pages, int numpages,
+ }
+ 
+ /*
+- * _set_memory_prot is an internal helper for callers that have been passed
++ * __set_memory_prot is an internal helper for callers that have been passed
+  * a pgprot_t value from upper layers and a reservation has already been taken.
+  * If you want to set the pgprot to a specific page protocol, use the
+  * set_memory_xx() functions.
+@@ -1914,6 +1914,51 @@ int set_memory_wb(unsigned long addr, int numpages)
+ }
+ EXPORT_SYMBOL(set_memory_wb);
+ 
++/* Prevent speculative access to a page by marking it not-present */
++#ifdef CONFIG_X86_64
++int set_mce_nospec(unsigned long pfn)
++{
++	unsigned long decoy_addr;
++	int rc;
++
++	/* SGX pages are not in the 1:1 map */
++	if (arch_is_platform_page(pfn << PAGE_SHIFT))
++		return 0;
++	/*
++	 * We would like to just call:
++	 *      set_memory_XX((unsigned long)pfn_to_kaddr(pfn), 1);
++	 * but doing that would radically increase the odds of a
++	 * speculative access to the poison page because we'd have
++	 * the virtual address of the kernel 1:1 mapping sitting
++	 * around in registers.
++	 * Instead we get tricky.  We create a non-canonical address
++	 * that looks just like the one we want, but has bit 63 flipped.
++	 * This relies on set_memory_XX() properly sanitizing any __pa()
++	 * results with __PHYSICAL_MASK or PTE_PFN_MASK.
++	 */
++	decoy_addr = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63));
++
++	rc = set_memory_np(decoy_addr, 1);
++	if (rc)
++		pr_warn("Could not invalidate pfn=0x%lx from 1:1 map\n", pfn);
++	return rc;
++}
++
++static int set_memory_present(unsigned long *addr, int numpages)
++{
++	return change_page_attr_set(addr, numpages, __pgprot(_PAGE_PRESENT), 0);
++}
++
++/* Restore full speculative operation to the pfn. */
++int clear_mce_nospec(unsigned long pfn)
++{
++	unsigned long addr = (unsigned long) pfn_to_kaddr(pfn);
++
++	return set_memory_present(&addr, 1);
++}
++EXPORT_SYMBOL_GPL(clear_mce_nospec);
++#endif /* CONFIG_X86_64 */
++
+ int set_memory_x(unsigned long addr, int numpages)
+ {
+ 	if (!(__supported_pte_mask & _PAGE_NX))
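
[The set_mce_nospec() comment above describes the decoy-address trick; the
arithmetic itself is just the direct-map address of the pfn with bit 63
flipped, yielding a non-canonical alias that set_memory_np() masks back
down via __PHYSICAL_MASK. A userspace illustration (the PAGE_OFFSET value
is an example x86-64 direct-map base, not taken from the patch):

	#include <stdint.h>

	#define PAGE_SHIFT	12
	#define PAGE_OFFSET	0xffff888000000000ULL	/* illustrative */

	static uint64_t decoy_addr(uint64_t pfn)
	{
		/*
		 * Flip bit 63 of the 1:1-map base: a non-canonical alias
		 * that never puts the real poisoned address in a register.
		 */
		return (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ (1ULL << 63));
	}
]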
+diff --git a/arch/x86/pci/irq.c b/arch/x86/pci/irq.c
+index 97b63e35e1528..21c4bc41741fe 100644
+--- a/arch/x86/pci/irq.c
++++ b/arch/x86/pci/irq.c
+@@ -253,6 +253,15 @@ static void write_pc_conf_nybble(u8 base, u8 index, u8 val)
+ 	pc_conf_set(reg, x);
+ }
+ 
++/*
++ * FinALi pirq rules are as follows:
++ *
++ * - bit 0 selects between INTx Routing Table Mapping Registers,
++ *
++ * - bit 3 selects the nibble within the INTx Routing Table Mapping Register,
++ *
++ * - bits 7:4 map to bits 3:0 of the PCI INTx Sensitivity Register.
++ */
+ static int pirq_finali_get(struct pci_dev *router, struct pci_dev *dev,
+ 			   int pirq)
+ {
+@@ -260,11 +269,13 @@ static int pirq_finali_get(struct pci_dev *router, struct pci_dev *dev,
+ 		0, 9, 3, 10, 4, 5, 7, 6, 0, 11, 0, 12, 0, 14, 0, 15
+ 	};
+ 	unsigned long flags;
++	u8 index;
+ 	u8 x;
+ 
++	index = (pirq & 1) << 1 | (pirq & 8) >> 3;
+ 	raw_spin_lock_irqsave(&pc_conf_lock, flags);
+ 	pc_conf_set(PC_CONF_FINALI_LOCK, PC_CONF_FINALI_LOCK_KEY);
+-	x = irqmap[read_pc_conf_nybble(PC_CONF_FINALI_PCI_INTX_RT1, pirq - 1)];
++	x = irqmap[read_pc_conf_nybble(PC_CONF_FINALI_PCI_INTX_RT1, index)];
+ 	pc_conf_set(PC_CONF_FINALI_LOCK, 0);
+ 	raw_spin_unlock_irqrestore(&pc_conf_lock, flags);
+ 	return x;
+@@ -278,13 +289,15 @@ static int pirq_finali_set(struct pci_dev *router, struct pci_dev *dev,
+ 	};
+ 	u8 val = irqmap[irq];
+ 	unsigned long flags;
++	u8 index;
+ 
+ 	if (!val)
+ 		return 0;
+ 
++	index = (pirq & 1) << 1 | (pirq & 8) >> 3;
+ 	raw_spin_lock_irqsave(&pc_conf_lock, flags);
+ 	pc_conf_set(PC_CONF_FINALI_LOCK, PC_CONF_FINALI_LOCK_KEY);
+-	write_pc_conf_nybble(PC_CONF_FINALI_PCI_INTX_RT1, pirq - 1, val);
++	write_pc_conf_nybble(PC_CONF_FINALI_PCI_INTX_RT1, index, val);
+ 	pc_conf_set(PC_CONF_FINALI_LOCK, 0);
+ 	raw_spin_unlock_irqrestore(&pc_conf_lock, flags);
+ 	return 1;
+@@ -293,7 +306,7 @@ static int pirq_finali_set(struct pci_dev *router, struct pci_dev *dev,
+ static int pirq_finali_lvl(struct pci_dev *router, struct pci_dev *dev,
+ 			   int pirq, int irq)
+ {
+-	u8 mask = ~(1u << (pirq - 1));
++	u8 mask = ~((pirq & 0xf0u) >> 4);
+ 	unsigned long flags;
+ 	u8 trig;
+ 
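
[The three FinALi hunks above replace the old "pirq - 1" indexing with bit
extraction that follows the router-cookie layout documented in the new
comment. The decoding on its own (standalone; mirrors the expressions in
the hunks):

	#include <stdint.h>

	/* bit 0: which INTx routing register; bit 3: which nibble in it */
	static uint8_t finali_rt_index(uint8_t pirq)
	{
		return (pirq & 1) << 1 | (pirq & 8) >> 3;
	}

	/* bits 7:4 map onto bits 3:0 of the PCI INTx sensitivity register */
	static uint8_t finali_level_mask(uint8_t pirq)
	{
		return ~((pirq & 0xf0u) >> 4) & 0xffu;
	}
]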
+diff --git a/arch/x86/um/ldt.c b/arch/x86/um/ldt.c
+index 3ee234b6234dd..255a44dd415a9 100644
+--- a/arch/x86/um/ldt.c
++++ b/arch/x86/um/ldt.c
+@@ -23,9 +23,11 @@ static long write_ldt_entry(struct mm_id *mm_idp, int func,
+ {
+ 	long res;
+ 	void *stub_addr;
++
++	BUILD_BUG_ON(sizeof(*desc) % sizeof(long));
++
+ 	res = syscall_stub_data(mm_idp, (unsigned long *)desc,
+-				(sizeof(*desc) + sizeof(long) - 1) &
+-				    ~(sizeof(long) - 1),
++				sizeof(*desc) / sizeof(long),
+ 				addr, &stub_addr);
+ 	if (!res) {
+ 		unsigned long args[] = { func,
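
[The um/ldt.c hunk swaps a runtime round-up for a compile-time guarantee:
once BUILD_BUG_ON() proves the descriptor is a whole number of longs, its
size can be expressed as an exact count of longs. The idiom in isolation
(the struct name is illustrative):

	#include <linux/build_bug.h>

	struct desc_like { unsigned long a, b; };

	static inline size_t desc_len_in_longs(void)
	{
		/* Fails the build if the struct is not whole longs. */
		BUILD_BUG_ON(sizeof(struct desc_like) % sizeof(long));
		return sizeof(struct desc_like) / sizeof(long);
	}
]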
+diff --git a/arch/xtensa/kernel/entry.S b/arch/xtensa/kernel/entry.S
+index 6b6eff658795c..07d683d94e175 100644
+--- a/arch/xtensa/kernel/entry.S
++++ b/arch/xtensa/kernel/entry.S
+@@ -442,7 +442,6 @@ KABI_W	or	a3, a3, a0
+ 	moveqz	a3, a0, a2		# a3 = LOCKLEVEL iff interrupt
+ KABI_W	movi	a2, PS_WOE_MASK
+ KABI_W	or	a3, a3, a2
+-	rsr	a2, exccause
+ #endif
+ 
+ 	/* restore return address (or 0 if return to userspace) */
+@@ -469,19 +468,27 @@ KABI_W	or	a3, a3, a2
+ 
+ 	save_xtregs_opt a1 a3 a4 a5 a6 a7 PT_XTREGS_OPT
+ 	
++#ifdef CONFIG_TRACE_IRQFLAGS
++	rsr		abi_tmp0, ps
++	extui		abi_tmp0, abi_tmp0, PS_INTLEVEL_SHIFT, PS_INTLEVEL_WIDTH
++	beqz		abi_tmp0, 1f
++	abi_call	trace_hardirqs_off
++1:
++#endif
++
+ 	/* Go to second-level dispatcher. Set up parameters to pass to the
+ 	 * exception handler and call the exception handler.
+ 	 */
+ 
+-	rsr	a4, excsave1
+-	addx4	a4, a2, a4
+-	l32i	a4, a4, EXC_TABLE_DEFAULT		# load handler
+-	mov	abi_arg1, a2			# pass EXCCAUSE
++	l32i	abi_arg1, a1, PT_EXCCAUSE	# pass EXCCAUSE
++	rsr	abi_tmp0, excsave1
++	addx4	abi_tmp0, abi_arg1, abi_tmp0
++	l32i	abi_tmp0, abi_tmp0, EXC_TABLE_DEFAULT	# load handler
+ 	mov	abi_arg0, a1			# pass stack frame
+ 
+ 	/* Call the second-level handler */
+ 
+-	abi_callx	a4
++	abi_callx	abi_tmp0
+ 
+ 	/* Jump here for exception exit */
+ 	.global common_exception_return
+diff --git a/arch/xtensa/kernel/ptrace.c b/arch/xtensa/kernel/ptrace.c
+index 323c678a691ff..b952e67cc0ccd 100644
+--- a/arch/xtensa/kernel/ptrace.c
++++ b/arch/xtensa/kernel/ptrace.c
+@@ -225,12 +225,12 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task)
+ 
+ void user_enable_single_step(struct task_struct *child)
+ {
+-	child->ptrace |= PT_SINGLESTEP;
++	set_tsk_thread_flag(child, TIF_SINGLESTEP);
+ }
+ 
+ void user_disable_single_step(struct task_struct *child)
+ {
+-	child->ptrace &= ~PT_SINGLESTEP;
++	clear_tsk_thread_flag(child, TIF_SINGLESTEP);
+ }
+ 
+ /*
+diff --git a/arch/xtensa/kernel/signal.c b/arch/xtensa/kernel/signal.c
+index 6f68649e86ba5..ac50ec46c8f14 100644
+--- a/arch/xtensa/kernel/signal.c
++++ b/arch/xtensa/kernel/signal.c
+@@ -473,7 +473,7 @@ static void do_signal(struct pt_regs *regs)
+ 		/* Set up the stack frame */
+ 		ret = setup_frame(&ksig, sigmask_to_save(), regs);
+ 		signal_setup_done(ret, &ksig, 0);
+-		if (current->ptrace & PT_SINGLESTEP)
++		if (test_thread_flag(TIF_SINGLESTEP))
+ 			task_pt_regs(current)->icountlevel = 1;
+ 
+ 		return;
+@@ -499,7 +499,7 @@ static void do_signal(struct pt_regs *regs)
+ 	/* If there's no signal to deliver, we just restore the saved mask.  */
+ 	restore_saved_sigmask();
+ 
+-	if (current->ptrace & PT_SINGLESTEP)
++	if (test_thread_flag(TIF_SINGLESTEP))
+ 		task_pt_regs(current)->icountlevel = 1;
+ 	return;
+ }
+diff --git a/arch/xtensa/kernel/traps.c b/arch/xtensa/kernel/traps.c
+index 9345007d474d3..5f86208c67c87 100644
+--- a/arch/xtensa/kernel/traps.c
++++ b/arch/xtensa/kernel/traps.c
+@@ -242,12 +242,8 @@ DEFINE_PER_CPU(unsigned long, nmi_count);
+ 
+ void do_nmi(struct pt_regs *regs)
+ {
+-	struct pt_regs *old_regs;
++	struct pt_regs *old_regs = set_irq_regs(regs);
+ 
+-	if ((regs->ps & PS_INTLEVEL_MASK) < LOCKLEVEL)
+-		trace_hardirqs_off();
+-
+-	old_regs = set_irq_regs(regs);
+ 	nmi_enter();
+ 	++*this_cpu_ptr(&nmi_count);
+ 	check_valid_nmi();
+@@ -269,12 +265,9 @@ void do_interrupt(struct pt_regs *regs)
+ 		XCHAL_INTLEVEL6_MASK,
+ 		XCHAL_INTLEVEL7_MASK,
+ 	};
+-	struct pt_regs *old_regs;
++	struct pt_regs *old_regs = set_irq_regs(regs);
+ 	unsigned unhandled = ~0u;
+ 
+-	trace_hardirqs_off();
+-
+-	old_regs = set_irq_regs(regs);
+ 	irq_enter();
+ 
+ 	for (;;) {
+diff --git a/arch/xtensa/platforms/iss/simdisk.c b/arch/xtensa/platforms/iss/simdisk.c
+index 0f0e0724397f4..4255b92fa3eb0 100644
+--- a/arch/xtensa/platforms/iss/simdisk.c
++++ b/arch/xtensa/platforms/iss/simdisk.c
+@@ -211,12 +211,18 @@ static ssize_t proc_read_simdisk(struct file *file, char __user *buf,
+ 	struct simdisk *dev = pde_data(file_inode(file));
+ 	const char *s = dev->filename;
+ 	if (s) {
+-		ssize_t n = simple_read_from_buffer(buf, size, ppos,
+-							s, strlen(s));
+-		if (n < 0)
+-			return n;
+-		buf += n;
+-		size -= n;
++		ssize_t len = strlen(s);
++		char *temp = kmalloc(len + 2, GFP_KERNEL);
++
++		if (!temp)
++			return -ENOMEM;
++
++		len = scnprintf(temp, len + 2, "%s\n", s);
++		len = simple_read_from_buffer(buf, size, ppos,
++					      temp, len);
++
++		kfree(temp);
++		return len;
+ 	}
+ 	return simple_read_from_buffer(buf, size, ppos, "\n", 1);
+ }
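
[The simdisk read fix above switches to a bounce buffer so the filename and
the trailing newline come back from one "%s\n" blob, keeping the *ppos
arithmetic consistent across partial reads. The same shape as a generic
helper (a sketch; the name is illustrative):

	static ssize_t read_string_with_newline(char __user *buf, size_t size,
						loff_t *ppos, const char *s)
	{
		size_t len = strlen(s);
		char *tmp = kmalloc(len + 2, GFP_KERNEL);	/* s + '\n' + NUL */
		ssize_t ret;

		if (!tmp)
			return -ENOMEM;
		len = scnprintf(tmp, len + 2, "%s\n", s);
		ret = simple_read_from_buffer(buf, size, ppos, tmp, len);
		kfree(tmp);
		return ret;
	}
]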
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index 420eda2589c0e..09574af835662 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -557,6 +557,7 @@ static void bfq_pd_init(struct blkg_policy_data *pd)
+ 				   */
+ 	bfqg->bfqd = bfqd;
+ 	bfqg->active_entities = 0;
++	bfqg->online = true;
+ 	bfqg->rq_pos_tree = RB_ROOT;
+ }
+ 
+@@ -585,28 +586,11 @@ static void bfq_group_set_parent(struct bfq_group *bfqg,
+ 	entity->sched_data = &parent->sched_data;
+ }
+ 
+-static struct bfq_group *bfq_lookup_bfqg(struct bfq_data *bfqd,
+-					 struct blkcg *blkcg)
++static void bfq_link_bfqg(struct bfq_data *bfqd, struct bfq_group *bfqg)
+ {
+-	struct blkcg_gq *blkg;
+-
+-	blkg = blkg_lookup(blkcg, bfqd->queue);
+-	if (likely(blkg))
+-		return blkg_to_bfqg(blkg);
+-	return NULL;
+-}
+-
+-struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
+-				     struct blkcg *blkcg)
+-{
+-	struct bfq_group *bfqg, *parent;
++	struct bfq_group *parent;
+ 	struct bfq_entity *entity;
+ 
+-	bfqg = bfq_lookup_bfqg(bfqd, blkcg);
+-
+-	if (unlikely(!bfqg))
+-		return NULL;
+-
+ 	/*
+ 	 * Update chain of bfq_groups as we might be handling a leaf group
+ 	 * which, along with some of its relatives, has not been hooked yet
+@@ -623,8 +607,24 @@ struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
+ 			bfq_group_set_parent(curr_bfqg, parent);
+ 		}
+ 	}
++}
+ 
+-	return bfqg;
++struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio)
++{
++	struct blkcg_gq *blkg = bio->bi_blkg;
++	struct bfq_group *bfqg;
++
++	while (blkg) {
++		bfqg = blkg_to_bfqg(blkg);
++		if (bfqg->online) {
++			bio_associate_blkg_from_css(bio, &blkg->blkcg->css);
++			return bfqg;
++		}
++		blkg = blkg->parent;
++	}
++	bio_associate_blkg_from_css(bio,
++				&bfqg_to_blkg(bfqd->root_group)->blkcg->css);
++	return bfqd->root_group;
+ }
+ 
+ /**
+@@ -714,25 +714,15 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+  * Move bic to blkcg, assuming that bfqd->lock is held; which makes
+  * sure that the reference to cgroup is valid across the call (see
+  * comments in bfq_bic_update_cgroup on this issue)
+- *
+- * NOTE: an alternative approach might have been to store the current
+- * cgroup in bfqq and getting a reference to it, reducing the lookup
+- * time here, at the price of slightly more complex code.
+  */
+-static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+-						struct bfq_io_cq *bic,
+-						struct blkcg *blkcg)
++static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
++				     struct bfq_io_cq *bic,
++				     struct bfq_group *bfqg)
+ {
+ 	struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0);
+ 	struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1);
+-	struct bfq_group *bfqg;
+ 	struct bfq_entity *entity;
+ 
+-	bfqg = bfq_find_set_group(bfqd, blkcg);
+-
+-	if (unlikely(!bfqg))
+-		bfqg = bfqd->root_group;
+-
+ 	if (async_bfqq) {
+ 		entity = &async_bfqq->entity;
+ 
+@@ -743,9 +733,39 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ 	}
+ 
+ 	if (sync_bfqq) {
+-		entity = &sync_bfqq->entity;
+-		if (entity->sched_data != &bfqg->sched_data)
+-			bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
++		if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) {
++			/* We are the only user of this bfqq, just move it */
++			if (sync_bfqq->entity.sched_data != &bfqg->sched_data)
++				bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
++		} else {
++			struct bfq_queue *bfqq;
++
++			/*
++			 * The queue was merged to a different queue. Check
++			 * that the merge chain still belongs to the same
++			 * cgroup.
++			 */
++			for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq)
++				if (bfqq->entity.sched_data !=
++				    &bfqg->sched_data)
++					break;
++			if (bfqq) {
++				/*
++				 * Some queue changed cgroup so the merge is
++				 * not valid anymore. We cannot easily just
++				 * cancel the merge (by clearing new_bfqq) as
++				 * there may be other processes using this
++				 * queue and holding refs to all queues below
++				 * sync_bfqq->new_bfqq. Similarly if the merge
++				 * already happened, we need to detach from
++				 * bfqq now so that we cannot merge bio to a
++				 * request from the old cgroup.
++				 */
++				bfq_put_cooperator(sync_bfqq);
++				bfq_release_process_ref(bfqd, sync_bfqq);
++				bic_set_bfqq(bic, NULL, 1);
++			}
++		}
+ 	}
+ 
+ 	return bfqg;
+@@ -754,20 +774,24 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
+ {
+ 	struct bfq_data *bfqd = bic_to_bfqd(bic);
+-	struct bfq_group *bfqg = NULL;
++	struct bfq_group *bfqg = bfq_bio_bfqg(bfqd, bio);
+ 	uint64_t serial_nr;
+ 
+-	rcu_read_lock();
+-	serial_nr = __bio_blkcg(bio)->css.serial_nr;
++	serial_nr = bfqg_to_blkg(bfqg)->blkcg->css.serial_nr;
+ 
+ 	/*
+ 	 * Check whether blkcg has changed.  The condition may trigger
+ 	 * spuriously on a newly created cic but there's no harm.
+ 	 */
+ 	if (unlikely(!bfqd) || likely(bic->blkcg_serial_nr == serial_nr))
+-		goto out;
++		return;
+ 
+-	bfqg = __bfq_bic_change_cgroup(bfqd, bic, __bio_blkcg(bio));
++	/*
++	 * New cgroup for this process. Make sure it is linked to bfq internal
++	 * cgroup hierarchy.
++	 */
++	bfq_link_bfqg(bfqd, bfqg);
++	__bfq_bic_change_cgroup(bfqd, bic, bfqg);
+ 	/*
+ 	 * Update blkg_path for bfq_log_* functions. We cache this
+ 	 * path, and update it here, for the following
+@@ -820,8 +844,6 @@ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
+ 	 */
+ 	blkg_path(bfqg_to_blkg(bfqg), bfqg->blkg_path, sizeof(bfqg->blkg_path));
+ 	bic->blkcg_serial_nr = serial_nr;
+-out:
+-	rcu_read_unlock();
+ }
+ 
+ /**
+@@ -949,6 +971,7 @@ static void bfq_pd_offline(struct blkg_policy_data *pd)
+ 
+ put_async_queues:
+ 	bfq_put_async_queues(bfqd, bfqg);
++	bfqg->online = false;
+ 
+ 	spin_unlock_irqrestore(&bfqd->lock, flags);
+ 	/*
+@@ -1438,7 +1461,7 @@ void bfq_end_wr_async(struct bfq_data *bfqd)
+ 	bfq_end_wr_async_queues(bfqd, bfqd->root_group);
+ }
+ 
+-struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd, struct blkcg *blkcg)
++struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio)
+ {
+ 	return bfqd->root_group;
+ }
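
[bfq_bio_bfqg() above encodes a simple fallback rule: walk up the blkg
hierarchy from the bio's group until one whose bfq_group is still online is
found, and fall back to the root group otherwise. Reduced to its core
(mirrors the hunk; omits the bio re-association):

	static struct bfq_group *first_online_bfqg(struct blkcg_gq *blkg,
						   struct bfq_group *root)
	{
		while (blkg) {
			struct bfq_group *bfqg = blkg_to_bfqg(blkg);

			if (bfqg->online)
				return bfqg;
			blkg = blkg->parent;	/* try the parent cgroup */
		}
		return root;
	}
]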
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 1f62dbdc521ff..bf5acd8f43229 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2133,9 +2133,7 @@ static void bfq_check_waker(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 	if (!bfqd->last_completed_rq_bfqq ||
+ 	    bfqd->last_completed_rq_bfqq == bfqq ||
+ 	    bfq_bfqq_has_short_ttime(bfqq) ||
+-	    bfqq->dispatched > 0 ||
+-	    now_ns - bfqd->last_completion >= 4 * NSEC_PER_MSEC ||
+-	    bfqd->last_completed_rq_bfqq == bfqq->waker_bfqq)
++	    now_ns - bfqd->last_completion >= 4 * NSEC_PER_MSEC)
+ 		return;
+ 
+ 	/*
+@@ -2210,7 +2208,7 @@ static void bfq_add_request(struct request *rq)
+ 	bfqq->queued[rq_is_sync(rq)]++;
+ 	bfqd->queued++;
+ 
+-	if (RB_EMPTY_ROOT(&bfqq->sort_list) && bfq_bfqq_sync(bfqq)) {
++	if (bfq_bfqq_sync(bfqq) && RQ_BIC(rq)->requests <= 1) {
+ 		bfq_check_waker(bfqd, bfqq, now_ns);
+ 
+ 		/*
+@@ -2463,10 +2461,17 @@ static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
+ 
+ 	spin_lock_irq(&bfqd->lock);
+ 
+-	if (bic)
++	if (bic) {
++		/*
++		 * Make sure cgroup info is uptodate for current process before
++		 * considering the merge.
++		 */
++		bfq_bic_update_cgroup(bic, bio);
++
+ 		bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf));
+-	else
++	} else {
+ 		bfqd->bio_bfqq = NULL;
++	}
+ 	bfqd->bio_bic = bic;
+ 
+ 	ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free);
+@@ -2496,8 +2501,6 @@ static int bfq_request_merge(struct request_queue *q, struct request **req,
+ 	return ELEVATOR_NO_MERGE;
+ }
+ 
+-static struct bfq_queue *bfq_init_rq(struct request *rq);
+-
+ static void bfq_request_merged(struct request_queue *q, struct request *req,
+ 			       enum elv_merge type)
+ {
+@@ -2506,7 +2509,7 @@ static void bfq_request_merged(struct request_queue *q, struct request *req,
+ 	    blk_rq_pos(req) <
+ 	    blk_rq_pos(container_of(rb_prev(&req->rb_node),
+ 				    struct request, rb_node))) {
+-		struct bfq_queue *bfqq = bfq_init_rq(req);
++		struct bfq_queue *bfqq = RQ_BFQQ(req);
+ 		struct bfq_data *bfqd;
+ 		struct request *prev, *next_rq;
+ 
+@@ -2558,8 +2561,8 @@ static void bfq_request_merged(struct request_queue *q, struct request *req,
+ static void bfq_requests_merged(struct request_queue *q, struct request *rq,
+ 				struct request *next)
+ {
+-	struct bfq_queue *bfqq = bfq_init_rq(rq),
+-		*next_bfqq = bfq_init_rq(next);
++	struct bfq_queue *bfqq = RQ_BFQQ(rq),
++		*next_bfqq = RQ_BFQQ(next);
+ 
+ 	if (!bfqq)
+ 		goto remove;
+@@ -2764,6 +2767,14 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+ 	if (process_refs == 0 || new_process_refs == 0)
+ 		return NULL;
+ 
++	/*
++	 * Make sure merged queues belong to the same parent. Parents could
++	 * have changed since the time we decided the two queues are suitable
++	 * for merging.
++	 */
++	if (new_bfqq->entity.parent != bfqq->entity.parent)
++		return NULL;
++
+ 	bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
+ 		new_bfqq->pid);
+ 
+@@ -2901,9 +2912,12 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 				struct bfq_queue *new_bfqq =
+ 					bfq_setup_merge(bfqq, stable_merge_bfqq);
+ 
+-				bic->stably_merged = true;
+-				if (new_bfqq && new_bfqq->bic)
+-					new_bfqq->bic->stably_merged = true;
++				if (new_bfqq) {
++					bic->stably_merged = true;
++					if (new_bfqq->bic)
++						new_bfqq->bic->stably_merged =
++									true;
++				}
+ 				return new_bfqq;
+ 			} else
+ 				return NULL;
+@@ -5310,7 +5324,7 @@ static void bfq_put_stable_ref(struct bfq_queue *bfqq)
+ 	bfq_put_queue(bfqq);
+ }
+ 
+-static void bfq_put_cooperator(struct bfq_queue *bfqq)
++void bfq_put_cooperator(struct bfq_queue *bfqq)
+ {
+ 	struct bfq_queue *__bfqq, *next;
+ 
+@@ -5716,14 +5730,7 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
+ 	struct bfq_queue *bfqq;
+ 	struct bfq_group *bfqg;
+ 
+-	rcu_read_lock();
+-
+-	bfqg = bfq_find_set_group(bfqd, __bio_blkcg(bio));
+-	if (!bfqg) {
+-		bfqq = &bfqd->oom_bfqq;
+-		goto out;
+-	}
+-
++	bfqg = bfq_bio_bfqg(bfqd, bio);
+ 	if (!is_sync) {
+ 		async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
+ 						  ioprio);
+@@ -5769,8 +5776,6 @@ out:
+ 
+ 	if (bfqq != &bfqd->oom_bfqq && is_sync && !respawn)
+ 		bfqq = bfq_do_or_sched_stable_merge(bfqd, bfqq, bic);
+-
+-	rcu_read_unlock();
+ 	return bfqq;
+ }
+ 
+@@ -6117,6 +6122,8 @@ static inline void bfq_update_insert_stats(struct request_queue *q,
+ 					   unsigned int cmd_flags) {}
+ #endif /* CONFIG_BFQ_CGROUP_DEBUG */
+ 
++static struct bfq_queue *bfq_init_rq(struct request *rq);
++
+ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
+ 			       bool at_head)
+ {
+@@ -6132,18 +6139,15 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
+ 		bfqg_stats_update_legacy_io(q, rq);
+ #endif
+ 	spin_lock_irq(&bfqd->lock);
++	bfqq = bfq_init_rq(rq);
+ 	if (blk_mq_sched_try_insert_merge(q, rq, &free)) {
+ 		spin_unlock_irq(&bfqd->lock);
+ 		blk_mq_free_requests(&free);
+ 		return;
+ 	}
+ 
+-	spin_unlock_irq(&bfqd->lock);
+-
+ 	trace_block_rq_insert(rq);
+ 
+-	spin_lock_irq(&bfqd->lock);
+-	bfqq = bfq_init_rq(rq);
+ 	if (!bfqq || at_head) {
+ 		if (at_head)
+ 			list_add(&rq->queuelist, &bfqd->dispatch);
+@@ -6563,6 +6567,7 @@ static void bfq_finish_requeue_request(struct request *rq)
+ 		bfq_completed_request(bfqq, bfqd);
+ 	}
+ 	bfq_finish_requeue_request_body(bfqq);
++	RQ_BIC(rq)->requests--;
+ 	spin_unlock_irqrestore(&bfqd->lock, flags);
+ 
+ 	/*
+@@ -6796,6 +6801,7 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
+ 
+ 	bfqq_request_allocated(bfqq);
+ 	bfqq->ref++;
++	bic->requests++;
+ 	bfq_log_bfqq(bfqd, bfqq, "get_request %p: bfqq %p, %d",
+ 		     rq, bfqq, bfqq->ref);
+ 
+diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
+index 3b83e3d1c2e58..9af8599ab90ff 100644
+--- a/block/bfq-iosched.h
++++ b/block/bfq-iosched.h
+@@ -468,6 +468,7 @@ struct bfq_io_cq {
+ 	struct bfq_queue *stable_merge_bfqq;
+ 
+ 	bool stably_merged;	/* non splittable if true */
++	unsigned int requests;	/* Number of requests this process has in flight */
+ };
+ 
+ /**
+@@ -928,6 +929,8 @@ struct bfq_group {
+ 
+ 	/* reference counter (see comments in bfq_bic_update_cgroup) */
+ 	int ref;
++	/* Is bfq_group still online? */
++	bool online;
+ 
+ 	struct bfq_entity entity;
+ 	struct bfq_sched_data sched_data;
+@@ -979,6 +982,7 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
+ void bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 		     bool compensate, enum bfqq_expiration reason);
+ void bfq_put_queue(struct bfq_queue *bfqq);
++void bfq_put_cooperator(struct bfq_queue *bfqq);
+ void bfq_end_wr_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
+ void bfq_release_process_ref(struct bfq_data *bfqd, struct bfq_queue *bfqq);
+ void bfq_schedule_dispatch(struct bfq_data *bfqd);
+@@ -1006,8 +1010,7 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ void bfq_init_entity(struct bfq_entity *entity, struct bfq_group *bfqg);
+ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio);
+ void bfq_end_wr_async(struct bfq_data *bfqd);
+-struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
+-				     struct blkcg *blkcg);
++struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio);
+ struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg);
+ struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
+ struct bfq_group *bfq_create_group_hierarchy(struct bfq_data *bfqd, int node);
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 8dfe62786cd5f..1c52752abb084 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -905,7 +905,6 @@ static void blkcg_print_one_stat(struct blkcg_gq *blkg, struct seq_file *s)
+ {
+ 	struct blkg_iostat_set *bis = &blkg->iostat;
+ 	u64 rbytes, wbytes, rios, wios, dbytes, dios;
+-	bool has_stats = false;
+ 	const char *dname;
+ 	unsigned seq;
+ 	int i;
+@@ -931,14 +930,12 @@ static void blkcg_print_one_stat(struct blkcg_gq *blkg, struct seq_file *s)
+ 	} while (u64_stats_fetch_retry(&bis->sync, seq));
+ 
+ 	if (rbytes || wbytes || rios || wios) {
+-		has_stats = true;
+ 		seq_printf(s, "rbytes=%llu wbytes=%llu rios=%llu wios=%llu dbytes=%llu dios=%llu",
+ 			rbytes, wbytes, rios, wios,
+ 			dbytes, dios);
+ 	}
+ 
+ 	if (blkcg_debug_stats && atomic_read(&blkg->use_delay)) {
+-		has_stats = true;
+ 		seq_printf(s, " use_delay=%d delay_nsec=%llu",
+ 			atomic_read(&blkg->use_delay),
+ 			atomic64_read(&blkg->delay_nsec));
+@@ -950,12 +947,10 @@ static void blkcg_print_one_stat(struct blkcg_gq *blkg, struct seq_file *s)
+ 		if (!blkg->pd[i] || !pol->pd_stat_fn)
+ 			continue;
+ 
+-		if (pol->pd_stat_fn(blkg->pd[i], s))
+-			has_stats = true;
++		pol->pd_stat_fn(blkg->pd[i], s);
+ 	}
+ 
+-	if (has_stats)
+-		seq_printf(s, "\n");
++	seq_puts(s, "\n");
+ }
+ 
+ static int blkcg_print_stat(struct seq_file *sf, void *v)
+@@ -1906,12 +1901,8 @@ EXPORT_SYMBOL_GPL(bio_associate_blkg);
+  */
+ void bio_clone_blkg_association(struct bio *dst, struct bio *src)
+ {
+-	if (src->bi_blkg) {
+-		if (dst->bi_blkg)
+-			blkg_put(dst->bi_blkg);
+-		blkg_get(src->bi_blkg);
+-		dst->bi_blkg = src->bi_blkg;
+-	}
++	if (src->bi_blkg)
++		bio_associate_blkg_from_css(dst, &bio_blkcg(src)->css);
+ }
+ EXPORT_SYMBOL_GPL(bio_clone_blkg_association);
+ 
+diff --git a/block/blk-cgroup.h b/block/blk-cgroup.h
+index 47e1e38390c96..b56ba16fb6c57 100644
+--- a/block/blk-cgroup.h
++++ b/block/blk-cgroup.h
+@@ -63,7 +63,7 @@ typedef void (blkcg_pol_online_pd_fn)(struct blkg_policy_data *pd);
+ typedef void (blkcg_pol_offline_pd_fn)(struct blkg_policy_data *pd);
+ typedef void (blkcg_pol_free_pd_fn)(struct blkg_policy_data *pd);
+ typedef void (blkcg_pol_reset_pd_stats_fn)(struct blkg_policy_data *pd);
+-typedef bool (blkcg_pol_stat_pd_fn)(struct blkg_policy_data *pd,
++typedef void (blkcg_pol_stat_pd_fn)(struct blkg_policy_data *pd,
+ 				struct seq_file *s);
+ 
+ struct blkcg_policy {
+diff --git a/block/blk-ia-ranges.c b/block/blk-ia-ranges.c
+index 18c68d8b9138e..56ed48d2954e6 100644
+--- a/block/blk-ia-ranges.c
++++ b/block/blk-ia-ranges.c
+@@ -54,13 +54,8 @@ static ssize_t blk_ia_range_sysfs_show(struct kobject *kobj,
+ 		container_of(attr, struct blk_ia_range_sysfs_entry, attr);
+ 	struct blk_independent_access_range *iar =
+ 		container_of(kobj, struct blk_independent_access_range, kobj);
+-	ssize_t ret;
+ 
+-	mutex_lock(&iar->queue->sysfs_lock);
+-	ret = entry->show(iar, buf);
+-	mutex_unlock(&iar->queue->sysfs_lock);
+-
+-	return ret;
++	return entry->show(iar, buf);
+ }
+ 
+ static const struct sysfs_ops blk_ia_range_sysfs_ops = {
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 9bd670999d0af..16705fbd06991 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -3005,13 +3005,13 @@ static void ioc_pd_free(struct blkg_policy_data *pd)
+ 	kfree(iocg);
+ }
+ 
+-static bool ioc_pd_stat(struct blkg_policy_data *pd, struct seq_file *s)
++static void ioc_pd_stat(struct blkg_policy_data *pd, struct seq_file *s)
+ {
+ 	struct ioc_gq *iocg = pd_to_iocg(pd);
+ 	struct ioc *ioc = iocg->ioc;
+ 
+ 	if (!ioc->enabled)
+-		return false;
++		return;
+ 
+ 	if (iocg->level == 0) {
+ 		unsigned vp10k = DIV64_U64_ROUND_CLOSEST(
+@@ -3027,7 +3027,6 @@ static bool ioc_pd_stat(struct blkg_policy_data *pd, struct seq_file *s)
+ 			iocg->last_stat.wait_us,
+ 			iocg->last_stat.indebt_us,
+ 			iocg->last_stat.indelay_us);
+-	return true;
+ }
+ 
+ static u64 ioc_weight_prfill(struct seq_file *sf, struct blkg_policy_data *pd,
+diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
+index 2f33932e72e36..9568bf8dfe82b 100644
+--- a/block/blk-iolatency.c
++++ b/block/blk-iolatency.c
+@@ -87,7 +87,17 @@ struct iolatency_grp;
+ struct blk_iolatency {
+ 	struct rq_qos rqos;
+ 	struct timer_list timer;
+-	atomic_t enabled;
++
++	/*
++	 * ->enabled is the master enable switch gating the throttling logic and
++	 * inflight tracking. The number of cgroups which have iolat enabled is
++	 * tracked in ->enable_cnt, and ->enable is flipped on/off accordingly
++	 * from ->enable_work with the request_queue frozen. For details, See
++	 * blkiolatency_enable_work_fn().
++	 */
++	bool enabled;
++	atomic_t enable_cnt;
++	struct work_struct enable_work;
+ };
+ 
+ static inline struct blk_iolatency *BLKIOLATENCY(struct rq_qos *rqos)
+@@ -95,11 +105,6 @@ static inline struct blk_iolatency *BLKIOLATENCY(struct rq_qos *rqos)
+ 	return container_of(rqos, struct blk_iolatency, rqos);
+ }
+ 
+-static inline bool blk_iolatency_enabled(struct blk_iolatency *blkiolat)
+-{
+-	return atomic_read(&blkiolat->enabled) > 0;
+-}
+-
+ struct child_latency_info {
+ 	spinlock_t lock;
+ 
+@@ -464,7 +469,7 @@ static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio)
+ 	struct blkcg_gq *blkg = bio->bi_blkg;
+ 	bool issue_as_root = bio_issue_as_root_blkg(bio);
+ 
+-	if (!blk_iolatency_enabled(blkiolat))
++	if (!blkiolat->enabled)
+ 		return;
+ 
+ 	while (blkg && blkg->parent) {
+@@ -594,7 +599,6 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
+ 	u64 window_start;
+ 	u64 now;
+ 	bool issue_as_root = bio_issue_as_root_blkg(bio);
+-	bool enabled = false;
+ 	int inflight = 0;
+ 
+ 	blkg = bio->bi_blkg;
+@@ -605,8 +609,7 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
+ 	if (!iolat)
+ 		return;
+ 
+-	enabled = blk_iolatency_enabled(iolat->blkiolat);
+-	if (!enabled)
++	if (!iolat->blkiolat->enabled)
+ 		return;
+ 
+ 	now = ktime_to_ns(ktime_get());
+@@ -645,6 +648,7 @@ static void blkcg_iolatency_exit(struct rq_qos *rqos)
+ 	struct blk_iolatency *blkiolat = BLKIOLATENCY(rqos);
+ 
+ 	del_timer_sync(&blkiolat->timer);
++	flush_work(&blkiolat->enable_work);
+ 	blkcg_deactivate_policy(rqos->q, &blkcg_policy_iolatency);
+ 	kfree(blkiolat);
+ }
+@@ -716,6 +720,44 @@ next:
+ 	rcu_read_unlock();
+ }
+ 
++/**
++ * blkiolatency_enable_work_fn - Enable or disable iolatency on the device
++ * @work: enable_work of the blk_iolatency of interest
++ *
++ * iolatency needs to keep track of the number of in-flight IOs per cgroup. This
++ * is relatively expensive as it involves walking up the hierarchy twice for
++ * every IO. Thus, if iolatency is not enabled in any cgroup for the device, we
++ * want to disable the in-flight tracking.
++ *
++ * We have to make sure that the counting is balanced - we don't want to leak
++ * the in-flight counts by disabling accounting in the completion path while IOs
++ * are in flight. This is achieved by ensuring that no IO is in flight by
++ * freezing the queue while flipping ->enabled. As this requires a sleepable
++ * context, ->enabled flipping is punted to this work function.
++ */
++static void blkiolatency_enable_work_fn(struct work_struct *work)
++{
++	struct blk_iolatency *blkiolat = container_of(work, struct blk_iolatency,
++						      enable_work);
++	bool enabled;
++
++	/*
++	 * There can only be one instance of this function running for @blkiolat
++	 * and it's guaranteed to be executed at least once after the latest
++	 * ->enabled_cnt modification. Acting on the latest ->enable_cnt is
++	 * sufficient.
++	 *
++	 * Also, we know @blkiolat is safe to access as ->enable_work is flushed
++	 * in blkcg_iolatency_exit().
++	 */
++	enabled = atomic_read(&blkiolat->enable_cnt);
++	if (enabled != blkiolat->enabled) {
++		blk_mq_freeze_queue(blkiolat->rqos.q);
++		blkiolat->enabled = enabled;
++		blk_mq_unfreeze_queue(blkiolat->rqos.q);
++	}
++}
++
+ int blk_iolatency_init(struct request_queue *q)
+ {
+ 	struct blk_iolatency *blkiolat;
+@@ -741,17 +783,15 @@ int blk_iolatency_init(struct request_queue *q)
+ 	}
+ 
+ 	timer_setup(&blkiolat->timer, blkiolatency_timer_fn, 0);
++	INIT_WORK(&blkiolat->enable_work, blkiolatency_enable_work_fn);
+ 
+ 	return 0;
+ }
+ 
+-/*
+- * return 1 for enabling iolatency, return -1 for disabling iolatency, otherwise
+- * return 0.
+- */
+-static int iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val)
++static void iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val)
+ {
+ 	struct iolatency_grp *iolat = blkg_to_lat(blkg);
++	struct blk_iolatency *blkiolat = iolat->blkiolat;
+ 	u64 oldval = iolat->min_lat_nsec;
+ 
+ 	iolat->min_lat_nsec = val;
+@@ -759,13 +799,15 @@ static int iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val)
+ 	iolat->cur_win_nsec = min_t(u64, iolat->cur_win_nsec,
+ 				    BLKIOLATENCY_MAX_WIN_SIZE);
+ 
+-	if (!oldval && val)
+-		return 1;
++	if (!oldval && val) {
++		if (atomic_inc_return(&blkiolat->enable_cnt) == 1)
++			schedule_work(&blkiolat->enable_work);
++	}
+ 	if (oldval && !val) {
+ 		blkcg_clear_delay(blkg);
+-		return -1;
++		if (atomic_dec_return(&blkiolat->enable_cnt) == 0)
++			schedule_work(&blkiolat->enable_work);
+ 	}
+-	return 0;
+ }
+ 
+ static void iolatency_clear_scaling(struct blkcg_gq *blkg)
+@@ -797,7 +839,6 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf,
+ 	u64 lat_val = 0;
+ 	u64 oldval;
+ 	int ret;
+-	int enable = 0;
+ 
+ 	ret = blkg_conf_prep(blkcg, &blkcg_policy_iolatency, buf, &ctx);
+ 	if (ret)
+@@ -832,41 +873,12 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf,
+ 	blkg = ctx.blkg;
+ 	oldval = iolat->min_lat_nsec;
+ 
+-	enable = iolatency_set_min_lat_nsec(blkg, lat_val);
+-	if (enable) {
+-		if (!blk_get_queue(blkg->q)) {
+-			ret = -ENODEV;
+-			goto out;
+-		}
+-
+-		blkg_get(blkg);
+-	}
+-
+-	if (oldval != iolat->min_lat_nsec) {
++	iolatency_set_min_lat_nsec(blkg, lat_val);
++	if (oldval != iolat->min_lat_nsec)
+ 		iolatency_clear_scaling(blkg);
+-	}
+-
+ 	ret = 0;
+ out:
+ 	blkg_conf_finish(&ctx);
+-	if (ret == 0 && enable) {
+-		struct iolatency_grp *tmp = blkg_to_lat(blkg);
+-		struct blk_iolatency *blkiolat = tmp->blkiolat;
+-
+-		blk_mq_freeze_queue(blkg->q);
+-
+-		if (enable == 1)
+-			atomic_inc(&blkiolat->enabled);
+-		else if (enable == -1)
+-			atomic_dec(&blkiolat->enabled);
+-		else
+-			WARN_ON_ONCE(1);
+-
+-		blk_mq_unfreeze_queue(blkg->q);
+-
+-		blkg_put(blkg);
+-		blk_put_queue(blkg->q);
+-	}
+ 	return ret ?: nbytes;
+ }
+ 
+@@ -891,7 +903,7 @@ static int iolatency_print_limit(struct seq_file *sf, void *v)
+ 	return 0;
+ }
+ 
+-static bool iolatency_ssd_stat(struct iolatency_grp *iolat, struct seq_file *s)
++static void iolatency_ssd_stat(struct iolatency_grp *iolat, struct seq_file *s)
+ {
+ 	struct latency_stat stat;
+ 	int cpu;
+@@ -914,17 +926,16 @@ static bool iolatency_ssd_stat(struct iolatency_grp *iolat, struct seq_file *s)
+ 			(unsigned long long)stat.ps.missed,
+ 			(unsigned long long)stat.ps.total,
+ 			iolat->rq_depth.max_depth);
+-	return true;
+ }
+ 
+-static bool iolatency_pd_stat(struct blkg_policy_data *pd, struct seq_file *s)
++static void iolatency_pd_stat(struct blkg_policy_data *pd, struct seq_file *s)
+ {
+ 	struct iolatency_grp *iolat = pd_to_lat(pd);
+ 	unsigned long long avg_lat;
+ 	unsigned long long cur_win;
+ 
+ 	if (!blkcg_debug_stats)
+-		return false;
++		return;
+ 
+ 	if (iolat->ssd)
+ 		return iolatency_ssd_stat(iolat, s);
+@@ -937,7 +948,6 @@ static bool iolatency_pd_stat(struct blkg_policy_data *pd, struct seq_file *s)
+ 	else
+ 		seq_printf(s, " depth=%u avg_lat=%llu win=%llu",
+ 			iolat->rq_depth.max_depth, avg_lat, cur_win);
+-	return true;
+ }
+ 
+ static struct blkg_policy_data *iolatency_pd_alloc(gfp_t gfp,
+@@ -1007,14 +1017,8 @@ static void iolatency_pd_offline(struct blkg_policy_data *pd)
+ {
+ 	struct iolatency_grp *iolat = pd_to_lat(pd);
+ 	struct blkcg_gq *blkg = lat_to_blkg(iolat);
+-	struct blk_iolatency *blkiolat = iolat->blkiolat;
+-	int ret;
+ 
+-	ret = iolatency_set_min_lat_nsec(blkg, 0);
+-	if (ret == 1)
+-		atomic_inc(&blkiolat->enabled);
+-	if (ret == -1)
+-		atomic_dec(&blkiolat->enabled);
++	iolatency_set_min_lat_nsec(blkg, 0);
+ 	iolatency_clear_scaling(blkg);
+ }
+ 
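
[The iolatency rework above replaces the old enable-counter juggling with
two pieces: an atomic user count whose 0<->1 transitions schedule
enable_work, and a worker that flips ->enabled only while the queue is
frozen, so no IO is in flight when accounting turns on or off. The
transition side, condensed (field names as in the new struct blk_iolatency):

	static void iolat_user_inc(struct blk_iolatency *bl)
	{
		if (atomic_inc_return(&bl->enable_cnt) == 1)
			schedule_work(&bl->enable_work);	/* 0 -> 1 */
	}

	static void iolat_user_dec(struct blk_iolatency *bl)
	{
		if (atomic_dec_return(&bl->enable_cnt) == 0)
			schedule_work(&bl->enable_work);	/* 1 -> 0 */
	}
]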
+diff --git a/block/blk-throttle.c b/block/blk-throttle.c
+index 469c483719bea..5c5f2741a95fa 100644
+--- a/block/blk-throttle.c
++++ b/block/blk-throttle.c
+@@ -2189,13 +2189,14 @@ again:
+ 	}
+ 
+ out_unlock:
+-	spin_unlock_irq(&q->queue_lock);
+ 	bio_set_flag(bio, BIO_THROTTLED);
+ 
+ #ifdef CONFIG_BLK_DEV_THROTTLING_LOW
+ 	if (throttled || !td->track_bio_latency)
+ 		bio->bi_issue.value |= BIO_ISSUE_THROTL_SKIP_LATENCY;
+ #endif
++	spin_unlock_irq(&q->queue_lock);
++
+ 	rcu_read_unlock();
+ 	return throttled;
+ }
+diff --git a/crypto/cryptd.c b/crypto/cryptd.c
+index a1bea0f4baa88..668095eca0faf 100644
+--- a/crypto/cryptd.c
++++ b/crypto/cryptd.c
+@@ -39,6 +39,10 @@ struct cryptd_cpu_queue {
+ };
+ 
+ struct cryptd_queue {
++	/*
++	 * Protected by disabling BH to allow enqueueing from softinterrupt and
++	 * dequeuing from kworker (cryptd_queue_worker()).
++	 */
+ 	struct cryptd_cpu_queue __percpu *cpu_queue;
+ };
+ 
+@@ -125,28 +129,28 @@ static void cryptd_fini_queue(struct cryptd_queue *queue)
+ static int cryptd_enqueue_request(struct cryptd_queue *queue,
+ 				  struct crypto_async_request *request)
+ {
+-	int cpu, err;
++	int err;
+ 	struct cryptd_cpu_queue *cpu_queue;
+ 	refcount_t *refcnt;
+ 
+-	cpu = get_cpu();
++	local_bh_disable();
+ 	cpu_queue = this_cpu_ptr(queue->cpu_queue);
+ 	err = crypto_enqueue_request(&cpu_queue->queue, request);
+ 
+ 	refcnt = crypto_tfm_ctx(request->tfm);
+ 
+ 	if (err == -ENOSPC)
+-		goto out_put_cpu;
++		goto out;
+ 
+-	queue_work_on(cpu, cryptd_wq, &cpu_queue->work);
++	queue_work_on(smp_processor_id(), cryptd_wq, &cpu_queue->work);
+ 
+ 	if (!refcount_read(refcnt))
+-		goto out_put_cpu;
++		goto out;
+ 
+ 	refcount_inc(refcnt);
+ 
+-out_put_cpu:
+-	put_cpu();
++out:
++	local_bh_enable();
+ 
+ 	return err;
+ }
+@@ -162,15 +166,10 @@ static void cryptd_queue_worker(struct work_struct *work)
+ 	cpu_queue = container_of(work, struct cryptd_cpu_queue, work);
+ 	/*
+ 	 * Only handle one request at a time to avoid hogging crypto workqueue.
+-	 * preempt_disable/enable is used to prevent being preempted by
+-	 * cryptd_enqueue_request(). local_bh_disable/enable is used to prevent
+-	 * cryptd_enqueue_request() being accessed from software interrupts.
+ 	 */
+ 	local_bh_disable();
+-	preempt_disable();
+ 	backlog = crypto_get_backlog(&cpu_queue->queue);
+ 	req = crypto_dequeue_request(&cpu_queue->queue);
+-	preempt_enable();
+ 	local_bh_enable();
+ 
+ 	if (!req)
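The cryptd hunk above replaces the get_cpu()/put_cpu() pair with local_bh_disable()/local_bh_enable(): the per-CPU queue is shared between a softirq-context producer and the kworker consumer, so it is BH exclusion, not preemption disabling, that serializes the two, and the BH-off region also keeps the task on the CPU whose queue it touched. A minimal sketch of the pattern, with example_queue, push_item and example_wq as hypothetical names:

static int example_enqueue(struct example_queue *queue, struct item *item)
{
	struct example_cpu_queue *cpu_queue;
	int err;

	local_bh_disable();	/* excludes the softirq-side user of the queue */
	cpu_queue = this_cpu_ptr(queue->cpu_queue);
	err = push_item(&cpu_queue->list, item);	/* hypothetical helper */
	queue_work_on(smp_processor_id(), example_wq, &cpu_queue->work);
	local_bh_enable();

	return err;
}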
+diff --git a/drivers/acpi/arm64/agdi.c b/drivers/acpi/arm64/agdi.c
+index 4df337d545b73..cf31abd0ed1bb 100644
+--- a/drivers/acpi/arm64/agdi.c
++++ b/drivers/acpi/arm64/agdi.c
+@@ -9,6 +9,7 @@
+ #define pr_fmt(fmt) "ACPI: AGDI: " fmt
+ 
+ #include <linux/acpi.h>
++#include <linux/acpi_agdi.h>
+ #include <linux/arm_sdei.h>
+ #include <linux/io.h>
+ #include <linux/kernel.h>
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index bc1454789a065..34576ab0e2e1d 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -100,6 +100,16 @@ static DEFINE_PER_CPU(struct cpc_desc *, cpc_desc_ptr);
+ 				(cpc)->cpc_entry.reg.space_id ==	\
+ 				ACPI_ADR_SPACE_PLATFORM_COMM)
+ 
++/* Check if a CPC register is in SystemMemory */
++#define CPC_IN_SYSTEM_MEMORY(cpc) ((cpc)->type == ACPI_TYPE_BUFFER &&	\
++				(cpc)->cpc_entry.reg.space_id ==	\
++				ACPI_ADR_SPACE_SYSTEM_MEMORY)
++
++/* Check if a CPC register is in SystemIo */
++#define CPC_IN_SYSTEM_IO(cpc) ((cpc)->type == ACPI_TYPE_BUFFER &&	\
++				(cpc)->cpc_entry.reg.space_id ==	\
++				ACPI_ADR_SPACE_SYSTEM_IO)
++
+ /* Evaluates to True if reg is a NULL register descriptor */
+ #define IS_NULL_REG(reg) ((reg)->space_id ==  ACPI_ADR_SPACE_SYSTEM_MEMORY && \
+ 				(reg)->address == 0 &&			\
+@@ -1447,6 +1457,9 @@ EXPORT_SYMBOL_GPL(cppc_set_perf);
+  * transition latency for performance change requests. The closest we have
+  * is the timing information from the PCCT tables which provides the info
+  * on the number and frequency of PCC commands the platform can handle.
++ *
++ * If desired_reg is in the SystemMemory or SystemIo ACPI address space,
++ * then assume there is no latency.
+  */
+ unsigned int cppc_get_transition_latency(int cpu_num)
+ {
+@@ -1472,7 +1485,9 @@ unsigned int cppc_get_transition_latency(int cpu_num)
+ 		return CPUFREQ_ETERNAL;
+ 
+ 	desired_reg = &cpc_desc->cpc_regs[DESIRED_PERF];
+-	if (!CPC_IN_PCC(desired_reg))
++	if (CPC_IN_SYSTEM_MEMORY(desired_reg) || CPC_IN_SYSTEM_IO(desired_reg))
++		return 0;
++	else if (!CPC_IN_PCC(desired_reg))
+ 		return CPUFREQ_ETERNAL;
+ 
+ 	if (pcc_ss_id < 0)
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index 12bbfe8336095..2b5d53c2a0a4e 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -433,6 +433,16 @@ void acpi_init_properties(struct acpi_device *adev)
+ 		acpi_extract_apple_properties(adev);
+ }
+ 
++static void acpi_free_device_properties(struct list_head *list)
++{
++	struct acpi_device_properties *props, *tmp;
++
++	list_for_each_entry_safe(props, tmp, list, list) {
++		list_del(&props->list);
++		kfree(props);
++	}
++}
++
+ static void acpi_destroy_nondev_subnodes(struct list_head *list)
+ {
+ 	struct acpi_data_node *dn, *next;
+@@ -445,22 +455,18 @@ static void acpi_destroy_nondev_subnodes(struct list_head *list)
+ 		wait_for_completion(&dn->kobj_done);
+ 		list_del(&dn->sibling);
+ 		ACPI_FREE((void *)dn->data.pointer);
++		acpi_free_device_properties(&dn->data.properties);
+ 		kfree(dn);
+ 	}
+ }
+ 
+ void acpi_free_properties(struct acpi_device *adev)
+ {
+-	struct acpi_device_properties *props, *tmp;
+-
+ 	acpi_destroy_nondev_subnodes(&adev->data.subnodes);
+ 	ACPI_FREE((void *)adev->data.pointer);
+ 	adev->data.of_compatible = NULL;
+ 	adev->data.pointer = NULL;
+-	list_for_each_entry_safe(props, tmp, &adev->data.properties, list) {
+-		list_del(&props->list);
+-		kfree(props);
+-	}
++	acpi_free_device_properties(&adev->data.properties);
+ }
+ 
+ /**
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index c992e57b2c790..3147702710afe 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -373,6 +373,18 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
+ 		DMI_MATCH(DMI_PRODUCT_NAME, "20GGA00L00"),
+ 		},
+ 	},
++	/*
++	 * ASUS B1400CEAE hangs on resume from suspend (see
++	 * https://bugzilla.kernel.org/show_bug.cgi?id=215742).
++	 */
++	{
++	.callback = init_default_s3,
++	.ident = "ASUS B1400CEAE",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++		DMI_MATCH(DMI_PRODUCT_NAME, "ASUS EXPERTBOOK B1400CEAE"),
++		},
++	},
+ 	{},
+ };
+ 
+diff --git a/drivers/base/memory.c b/drivers/base/memory.c
+index 7222ff9b5e05c..084d67fd55cc8 100644
+--- a/drivers/base/memory.c
++++ b/drivers/base/memory.c
+@@ -636,10 +636,9 @@ static int __add_memory_block(struct memory_block *memory)
+ 	}
+ 	ret = xa_err(xa_store(&memory_blocks, memory->dev.id, memory,
+ 			      GFP_KERNEL));
+-	if (ret) {
+-		put_device(&memory->dev);
++	if (ret)
+ 		device_unregister(&memory->dev);
+-	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/base/node.c b/drivers/base/node.c
+index ec8bb24a5a227..0ac6376ef7a10 100644
+--- a/drivers/base/node.c
++++ b/drivers/base/node.c
+@@ -682,6 +682,7 @@ static int register_node(struct node *node, int num)
+  */
+ void unregister_node(struct node *node)
+ {
++	compaction_unregister_node(node);
+ 	hugetlb_unregister_node(node);		/* no-op, if memoryless node */
+ 	node_remove_accesses(node);
+ 	node_remove_caches(node);
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index 1ee878d126fdf..f0e4b0ea93e8c 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -1997,6 +1997,7 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
+ 	genpd->device_count = 0;
+ 	genpd->max_off_time_ns = -1;
+ 	genpd->max_off_time_changed = true;
++	genpd->next_wakeup = KTIME_MAX;
+ 	genpd->provider = NULL;
+ 	genpd->has_provider = false;
+ 	genpd->accounting_time = ktime_get();
+diff --git a/drivers/base/property.c b/drivers/base/property.c
+index c0e94cce9c294..2f6843c9612bd 100644
+--- a/drivers/base/property.c
++++ b/drivers/base/property.c
+@@ -47,12 +47,14 @@ bool fwnode_property_present(const struct fwnode_handle *fwnode,
+ {
+ 	bool ret;
+ 
++	if (IS_ERR_OR_NULL(fwnode))
++		return false;
++
+ 	ret = fwnode_call_bool_op(fwnode, property_present, propname);
+-	if (ret == false && !IS_ERR_OR_NULL(fwnode) &&
+-	    !IS_ERR_OR_NULL(fwnode->secondary))
+-		ret = fwnode_call_bool_op(fwnode->secondary, property_present,
+-					 propname);
+-	return ret;
++	if (ret)
++		return ret;
++
++	return fwnode_call_bool_op(fwnode->secondary, property_present, propname);
+ }
+ EXPORT_SYMBOL_GPL(fwnode_property_present);
+ 
+@@ -232,15 +234,16 @@ static int fwnode_property_read_int_array(const struct fwnode_handle *fwnode,
+ {
+ 	int ret;
+ 
++	if (IS_ERR_OR_NULL(fwnode))
++		return -EINVAL;
++
+ 	ret = fwnode_call_int_op(fwnode, property_read_int_array, propname,
+ 				 elem_size, val, nval);
+-	if (ret == -EINVAL && !IS_ERR_OR_NULL(fwnode) &&
+-	    !IS_ERR_OR_NULL(fwnode->secondary))
+-		ret = fwnode_call_int_op(
+-			fwnode->secondary, property_read_int_array, propname,
+-			elem_size, val, nval);
++	if (ret != -EINVAL)
++		return ret;
+ 
+-	return ret;
++	return fwnode_call_int_op(fwnode->secondary, property_read_int_array, propname,
++				  elem_size, val, nval);
+ }
+ 
+ /**
+@@ -371,14 +374,16 @@ int fwnode_property_read_string_array(const struct fwnode_handle *fwnode,
+ {
+ 	int ret;
+ 
++	if (IS_ERR_OR_NULL(fwnode))
++		return -EINVAL;
++
+ 	ret = fwnode_call_int_op(fwnode, property_read_string_array, propname,
+ 				 val, nval);
+-	if (ret == -EINVAL && !IS_ERR_OR_NULL(fwnode) &&
+-	    !IS_ERR_OR_NULL(fwnode->secondary))
+-		ret = fwnode_call_int_op(fwnode->secondary,
+-					 property_read_string_array, propname,
+-					 val, nval);
+-	return ret;
++	if (ret != -EINVAL)
++		return ret;
++
++	return fwnode_call_int_op(fwnode->secondary, property_read_string_array, propname,
++				  val, nval);
+ }
+ EXPORT_SYMBOL_GPL(fwnode_property_read_string_array);
+ 
+@@ -480,15 +485,19 @@ int fwnode_property_get_reference_args(const struct fwnode_handle *fwnode,
+ {
+ 	int ret;
+ 
++	if (IS_ERR_OR_NULL(fwnode))
++		return -ENOENT;
++
+ 	ret = fwnode_call_int_op(fwnode, get_reference_args, prop, nargs_prop,
+ 				 nargs, index, args);
++	if (ret == 0)
++		return ret;
+ 
+-	if (ret < 0 && !IS_ERR_OR_NULL(fwnode) &&
+-	    !IS_ERR_OR_NULL(fwnode->secondary))
+-		ret = fwnode_call_int_op(fwnode->secondary, get_reference_args,
+-					 prop, nargs_prop, nargs, index, args);
++	if (IS_ERR_OR_NULL(fwnode->secondary))
++		return ret;
+ 
+-	return ret;
++	return fwnode_call_int_op(fwnode->secondary, get_reference_args, prop, nargs_prop,
++				  nargs, index, args);
+ }
+ EXPORT_SYMBOL_GPL(fwnode_property_get_reference_args);
+ 
+@@ -635,12 +644,13 @@ EXPORT_SYMBOL_GPL(fwnode_count_parents);
+ struct fwnode_handle *fwnode_get_nth_parent(struct fwnode_handle *fwnode,
+ 					    unsigned int depth)
+ {
+-	unsigned int i;
+-
+ 	fwnode_handle_get(fwnode);
+ 
+-	for (i = 0; i < depth && fwnode; i++)
++	do {
++		if (depth-- == 0)
++			break;
+ 		fwnode = fwnode_get_next_parent(fwnode);
++	} while (fwnode);
+ 
+ 	return fwnode;
+ }
+@@ -659,17 +669,17 @@ EXPORT_SYMBOL_GPL(fwnode_get_nth_parent);
+ bool fwnode_is_ancestor_of(struct fwnode_handle *test_ancestor,
+ 				  struct fwnode_handle *test_child)
+ {
+-	if (!test_ancestor)
++	if (IS_ERR_OR_NULL(test_ancestor))
+ 		return false;
+ 
+ 	fwnode_handle_get(test_child);
+-	while (test_child) {
++	do {
+ 		if (test_child == test_ancestor) {
+ 			fwnode_handle_put(test_child);
+ 			return true;
+ 		}
+ 		test_child = fwnode_get_next_parent(test_child);
+-	}
++	} while (test_child);
+ 	return false;
+ }
+ 
+@@ -698,7 +708,7 @@ fwnode_get_next_available_child_node(const struct fwnode_handle *fwnode,
+ {
+ 	struct fwnode_handle *next_child = child;
+ 
+-	if (!fwnode)
++	if (IS_ERR_OR_NULL(fwnode))
+ 		return NULL;
+ 
+ 	do {
+@@ -722,16 +732,16 @@ struct fwnode_handle *device_get_next_child_node(struct device *dev,
+ 	const struct fwnode_handle *fwnode = dev_fwnode(dev);
+ 	struct fwnode_handle *next;
+ 
++	if (IS_ERR_OR_NULL(fwnode))
++		return NULL;
++
+ 	/* Try to find a child in primary fwnode */
+ 	next = fwnode_get_next_child_node(fwnode, child);
+ 	if (next)
+ 		return next;
+ 
+ 	/* When no more children in primary, continue with secondary */
+-	if (fwnode && !IS_ERR_OR_NULL(fwnode->secondary))
+-		next = fwnode_get_next_child_node(fwnode->secondary, child);
+-
+-	return next;
++	return fwnode_get_next_child_node(fwnode->secondary, child);
+ }
+ EXPORT_SYMBOL_GPL(device_get_next_child_node);
+ 
+@@ -798,6 +808,9 @@ EXPORT_SYMBOL_GPL(fwnode_handle_put);
+  */
+ bool fwnode_device_is_available(const struct fwnode_handle *fwnode)
+ {
++	if (IS_ERR_OR_NULL(fwnode))
++		return false;
++
+ 	if (!fwnode_has_op(fwnode, device_is_available))
+ 		return true;
+ 
+@@ -988,14 +1001,14 @@ fwnode_graph_get_next_endpoint(const struct fwnode_handle *fwnode,
+ 		parent = fwnode_graph_get_port_parent(prev);
+ 	else
+ 		parent = fwnode;
++	if (IS_ERR_OR_NULL(parent))
++		return NULL;
+ 
+ 	ep = fwnode_call_ptr_op(parent, graph_get_next_endpoint, prev);
++	if (ep)
++		return ep;
+ 
+-	if (IS_ERR_OR_NULL(ep) &&
+-	    !IS_ERR_OR_NULL(parent) && !IS_ERR_OR_NULL(parent->secondary))
+-		ep = fwnode_graph_get_next_endpoint(parent->secondary, NULL);
+-
+-	return ep;
++	return fwnode_graph_get_next_endpoint(parent->secondary, NULL);
+ }
+ EXPORT_SYMBOL_GPL(fwnode_graph_get_next_endpoint);
+ 
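The property.c hunks above all settle on one shape: reject a NULL or error-pointer fwnode up front, query the primary node, and fall back to fwnode->secondary only when the primary reports "not found". The fwnode_call_*() helpers in this series tolerate a NULL or error-pointer node, which is why the fallback call needs no extra guard. Reduced to a sketch, with example_op standing in for a real fwnode operation:

static int example_fwnode_read(const struct fwnode_handle *fwnode)
{
	int ret;

	if (IS_ERR_OR_NULL(fwnode))
		return -EINVAL;

	ret = fwnode_call_int_op(fwnode, example_op);
	if (ret != -EINVAL)	/* found it, or a hard error */
		return ret;

	return fwnode_call_int_op(fwnode->secondary, example_op);
}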
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 4b0b25cc916ee..57b23e49ee91b 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -903,31 +903,6 @@ void drbd_gen_and_send_sync_uuid(struct drbd_peer_device *peer_device)
+ 	}
+ }
+ 
+-/* communicated if (agreed_features & DRBD_FF_WSAME) */
+-static void
+-assign_p_sizes_qlim(struct drbd_device *device, struct p_sizes *p,
+-					struct request_queue *q)
+-{
+-	if (q) {
+-		p->qlim->physical_block_size = cpu_to_be32(queue_physical_block_size(q));
+-		p->qlim->logical_block_size = cpu_to_be32(queue_logical_block_size(q));
+-		p->qlim->alignment_offset = cpu_to_be32(queue_alignment_offset(q));
+-		p->qlim->io_min = cpu_to_be32(queue_io_min(q));
+-		p->qlim->io_opt = cpu_to_be32(queue_io_opt(q));
+-		p->qlim->discard_enabled = blk_queue_discard(q);
+-		p->qlim->write_same_capable = 0;
+-	} else {
+-		q = device->rq_queue;
+-		p->qlim->physical_block_size = cpu_to_be32(queue_physical_block_size(q));
+-		p->qlim->logical_block_size = cpu_to_be32(queue_logical_block_size(q));
+-		p->qlim->alignment_offset = 0;
+-		p->qlim->io_min = cpu_to_be32(queue_io_min(q));
+-		p->qlim->io_opt = cpu_to_be32(queue_io_opt(q));
+-		p->qlim->discard_enabled = 0;
+-		p->qlim->write_same_capable = 0;
+-	}
+-}
+-
+ int drbd_send_sizes(struct drbd_peer_device *peer_device, int trigger_reply, enum dds_flags flags)
+ {
+ 	struct drbd_device *device = peer_device->device;
+@@ -949,7 +924,9 @@ int drbd_send_sizes(struct drbd_peer_device *peer_device, int trigger_reply, enu
+ 
+ 	memset(p, 0, packet_size);
+ 	if (get_ldev_if_state(device, D_NEGOTIATING)) {
+-		struct request_queue *q = bdev_get_queue(device->ldev->backing_bdev);
++		struct block_device *bdev = device->ldev->backing_bdev;
++		struct request_queue *q = bdev_get_queue(bdev);
++
+ 		d_size = drbd_get_max_capacity(device->ldev);
+ 		rcu_read_lock();
+ 		u_size = rcu_dereference(device->ldev->disk_conf)->disk_size;
+@@ -957,14 +934,32 @@ int drbd_send_sizes(struct drbd_peer_device *peer_device, int trigger_reply, enu
+ 		q_order_type = drbd_queue_order_type(device);
+ 		max_bio_size = queue_max_hw_sectors(q) << 9;
+ 		max_bio_size = min(max_bio_size, DRBD_MAX_BIO_SIZE);
+-		assign_p_sizes_qlim(device, p, q);
++		p->qlim->physical_block_size =
++			cpu_to_be32(bdev_physical_block_size(bdev));
++		p->qlim->logical_block_size =
++			cpu_to_be32(bdev_logical_block_size(bdev));
++		p->qlim->alignment_offset =
++			cpu_to_be32(bdev_alignment_offset(bdev));
++		p->qlim->io_min = cpu_to_be32(bdev_io_min(bdev));
++		p->qlim->io_opt = cpu_to_be32(bdev_io_opt(bdev));
++		p->qlim->discard_enabled = blk_queue_discard(q);
+ 		put_ldev(device);
+ 	} else {
++		struct request_queue *q = device->rq_queue;
++
++		p->qlim->physical_block_size =
++			cpu_to_be32(queue_physical_block_size(q));
++		p->qlim->logical_block_size =
++			cpu_to_be32(queue_logical_block_size(q));
++		p->qlim->alignment_offset = 0;
++		p->qlim->io_min = cpu_to_be32(queue_io_min(q));
++		p->qlim->io_opt = cpu_to_be32(queue_io_opt(q));
++		p->qlim->discard_enabled = 0;
++
+ 		d_size = 0;
+ 		u_size = 0;
+ 		q_order_type = QUEUE_ORDERED_NONE;
+ 		max_bio_size = DRBD_MAX_BIO_SIZE; /* ... multiple BIOs per peer_request */
+-		assign_p_sizes_qlim(device, p, NULL);
+ 	}
+ 
+ 	if (peer_device->connection->agreed_pro_version <= 94)
+@@ -3586,9 +3581,8 @@ const char *cmdname(enum drbd_packet cmd)
+ 	 * when we want to support more than
+ 	 * one PRO_VERSION */
+ 	static const char *cmdnames[] = {
++
+ 		[P_DATA]	        = "Data",
+-		[P_WSAME]	        = "WriteSame",
+-		[P_TRIM]	        = "Trim",
+ 		[P_DATA_REPLY]	        = "DataReply",
+ 		[P_RS_DATA_REPLY]	= "RSDataReply",
+ 		[P_BARRIER]	        = "Barrier",
+@@ -3599,7 +3593,6 @@ const char *cmdname(enum drbd_packet cmd)
+ 		[P_DATA_REQUEST]	= "DataRequest",
+ 		[P_RS_DATA_REQUEST]     = "RSDataRequest",
+ 		[P_SYNC_PARAM]	        = "SyncParam",
+-		[P_SYNC_PARAM89]	= "SyncParam89",
+ 		[P_PROTOCOL]            = "ReportProtocol",
+ 		[P_UUIDS]	        = "ReportUUIDs",
+ 		[P_SIZES]	        = "ReportSizes",
+@@ -3607,6 +3600,7 @@ const char *cmdname(enum drbd_packet cmd)
+ 		[P_SYNC_UUID]           = "ReportSyncUUID",
+ 		[P_AUTH_CHALLENGE]      = "AuthChallenge",
+ 		[P_AUTH_RESPONSE]	= "AuthResponse",
++		[P_STATE_CHG_REQ]       = "StateChgRequest",
+ 		[P_PING]		= "Ping",
+ 		[P_PING_ACK]	        = "PingAck",
+ 		[P_RECV_ACK]	        = "RecvAck",
+@@ -3617,23 +3611,25 @@ const char *cmdname(enum drbd_packet cmd)
+ 		[P_NEG_DREPLY]	        = "NegDReply",
+ 		[P_NEG_RS_DREPLY]	= "NegRSDReply",
+ 		[P_BARRIER_ACK]	        = "BarrierAck",
+-		[P_STATE_CHG_REQ]       = "StateChgRequest",
+ 		[P_STATE_CHG_REPLY]     = "StateChgReply",
+ 		[P_OV_REQUEST]          = "OVRequest",
+ 		[P_OV_REPLY]            = "OVReply",
+ 		[P_OV_RESULT]           = "OVResult",
+ 		[P_CSUM_RS_REQUEST]     = "CsumRSRequest",
+ 		[P_RS_IS_IN_SYNC]	= "CsumRSIsInSync",
++		[P_SYNC_PARAM89]	= "SyncParam89",
+ 		[P_COMPRESSED_BITMAP]   = "CBitmap",
+ 		[P_DELAY_PROBE]         = "DelayProbe",
+ 		[P_OUT_OF_SYNC]		= "OutOfSync",
+-		[P_RETRY_WRITE]		= "RetryWrite",
+ 		[P_RS_CANCEL]		= "RSCancel",
+ 		[P_CONN_ST_CHG_REQ]	= "conn_st_chg_req",
+ 		[P_CONN_ST_CHG_REPLY]	= "conn_st_chg_reply",
+ 		[P_PROTOCOL_UPDATE]	= "protocol_update",
++		[P_TRIM]	        = "Trim",
+ 		[P_RS_THIN_REQ]         = "rs_thin_req",
+ 		[P_RS_DEALLOCATED]      = "rs_deallocated",
++		[P_WSAME]	        = "WriteSame",
++		[P_ZEROES]		= "Zeroes",
+ 
+ 		/* enum drbd_packet, but not commands - obsoleted flags:
+ 		 *	P_MAY_IGNORE
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index a58595f5ee2c8..ed7bec11948cd 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1768,6 +1768,14 @@ out_unlock:
+ 	mutex_unlock(&lo->lo_mutex);
+ }
+ 
++static void lo_free_disk(struct gendisk *disk)
++{
++	struct loop_device *lo = disk->private_data;
++
++	mutex_destroy(&lo->lo_mutex);
++	kfree(lo);
++}
++
+ static const struct block_device_operations lo_fops = {
+ 	.owner =	THIS_MODULE,
+ 	.open =		lo_open,
+@@ -1776,6 +1784,7 @@ static const struct block_device_operations lo_fops = {
+ #ifdef CONFIG_COMPAT
+ 	.compat_ioctl =	lo_compat_ioctl,
+ #endif
++	.free_disk =	lo_free_disk,
+ };
+ 
+ /*
+@@ -2090,15 +2099,14 @@ static void loop_remove(struct loop_device *lo)
+ {
+ 	/* Make this loop device unreachable from pathname. */
+ 	del_gendisk(lo->lo_disk);
+-	blk_cleanup_disk(lo->lo_disk);
++	blk_cleanup_queue(lo->lo_disk->queue);
+ 	blk_mq_free_tag_set(&lo->tag_set);
+ 
+ 	mutex_lock(&loop_ctl_mutex);
+ 	idr_remove(&loop_index_idr, lo->lo_number);
+ 	mutex_unlock(&loop_ctl_mutex);
+-	/* There is no route which can find this loop device. */
+-	mutex_destroy(&lo->lo_mutex);
+-	kfree(lo);
++
++	put_disk(lo->lo_disk);
+ }
+ 
+ static void loop_probe(dev_t dev)
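The loop change above moves the final kfree() out of loop_remove() and into a ->free_disk() callback, so the loop_device stays allocated until the last gendisk reference is dropped rather than only until del_gendisk() returns. A sketch of that contract, with hypothetical names:

static void example_free_disk(struct gendisk *disk)
{
	struct example_device *dev = disk->private_data;

	/* runs on the final put_disk(), when no opener remains */
	mutex_destroy(&dev->lock);
	kfree(dev);
}

static const struct block_device_operations example_fops = {
	.owner		= THIS_MODULE,
	.free_disk	= example_free_disk,
};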
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index 5a1f98494dddf..2845570413360 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -947,11 +947,15 @@ static int wait_for_reconnect(struct nbd_device *nbd)
+ 	struct nbd_config *config = nbd->config;
+ 	if (!config->dead_conn_timeout)
+ 		return 0;
+-	if (test_bit(NBD_RT_DISCONNECTED, &config->runtime_flags))
++
++	if (!wait_event_timeout(config->conn_wait,
++				test_bit(NBD_RT_DISCONNECTED,
++					 &config->runtime_flags) ||
++				atomic_read(&config->live_connections) > 0,
++				config->dead_conn_timeout))
+ 		return 0;
+-	return wait_event_timeout(config->conn_wait,
+-				  atomic_read(&config->live_connections) > 0,
+-				  config->dead_conn_timeout) > 0;
++
++	return !test_bit(NBD_RT_DISCONNECTED, &config->runtime_flags);
+ }
+ 
+ static int nbd_handle_cmd(struct nbd_cmd *cmd, int index)
+@@ -2082,6 +2086,7 @@ static void nbd_disconnect_and_put(struct nbd_device *nbd)
+ 	mutex_lock(&nbd->config_lock);
+ 	nbd_disconnect(nbd);
+ 	sock_shutdown(nbd);
++	wake_up(&nbd->config->conn_wait);
+ 	/*
+ 	 * Make sure recv thread has finished, we can safely call nbd_clear_que()
+ 	 * to cancel the inflight I/Os.
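The old wait_for_reconnect() tested NBD_RT_DISCONNECTED once and then slept only on live_connections, so a disconnect arriving during the sleep was not noticed until the timeout expired. The fix sleeps on both conditions and distinguishes them after waking; the same shape with illustrative names:

static int example_wait(struct example *ex, unsigned long timeout)
{
	/* wake on either exit or progress; 0 means we timed out */
	if (!wait_event_timeout(ex->wq,
				test_bit(EXAMPLE_DEAD, &ex->flags) ||
				atomic_read(&ex->live) > 0,
				timeout))
		return 0;

	/* woken: succeed only if we are not shutting down */
	return !test_bit(EXAMPLE_DEAD, &ex->flags);
}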
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index a8bcf3f664af1..10bba1e00f2b4 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -867,11 +867,12 @@ static int virtblk_probe(struct virtio_device *vdev)
+ 		blk_queue_io_opt(q, blk_size * opt_io_size);
+ 
+ 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) {
+-		q->limits.discard_granularity = blk_size;
+-
+ 		virtio_cread(vdev, struct virtio_blk_config,
+ 			     discard_sector_alignment, &v);
+-		q->limits.discard_alignment = v ? v << SECTOR_SHIFT : 0;
++		if (v)
++			q->limits.discard_granularity = v << SECTOR_SHIFT;
++		else
++			q->limits.discard_granularity = blk_size;
+ 
+ 		virtio_cread(vdev, struct virtio_blk_config,
+ 			     max_discard_sectors, &v);
+diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
+index f3dc5881fff70..d6700efcfe8cd 100644
+--- a/drivers/bluetooth/btmtksdio.c
++++ b/drivers/bluetooth/btmtksdio.c
+@@ -379,6 +379,7 @@ static int btmtksdio_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+ 	struct btmtksdio_dev *bdev = hci_get_drvdata(hdev);
+ 	struct hci_event_hdr *hdr = (void *)skb->data;
++	u8 evt = hdr->evt;
+ 	int err;
+ 
+ 	/* When someone waits for the WMT event, the skb is being cloned
+@@ -396,7 +397,7 @@ static int btmtksdio_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ 	if (err < 0)
+ 		goto err_free_skb;
+ 
+-	if (hdr->evt == HCI_EV_WMT) {
++	if (evt == HCI_EV_WMT) {
+ 		if (test_and_clear_bit(BTMTKSDIO_TX_WAIT_VND_EVT,
+ 				       &bdev->tx_state)) {
+ 			/* Barrier to sync with other CPUs */
+@@ -863,6 +864,14 @@ static int mt79xx_setup(struct hci_dev *hdev, const char *fwname)
+ 		return err;
+ 	}
+ 
++	err = btmtksdio_fw_pmctrl(bdev);
++	if (err < 0)
++		return err;
++
++	err = btmtksdio_drv_pmctrl(bdev);
++	if (err < 0)
++		return err;
++
+ 	/* Enable Bluetooth protocol */
+ 	wmt_params.op = BTMTK_WMT_FUNC_CTRL;
+ 	wmt_params.flag = 0;
+@@ -961,7 +970,7 @@ static int btmtksdio_get_codec_config_data(struct hci_dev *hdev,
+ 	}
+ 
+ 	*ven_data = kmalloc(sizeof(__u8), GFP_KERNEL);
+-	if (!ven_data) {
++	if (!*ven_data) {
+ 		err = -ENOMEM;
+ 		goto error;
+ 	}
+@@ -1108,14 +1117,6 @@ static int btmtksdio_setup(struct hci_dev *hdev)
+ 		if (err < 0)
+ 			return err;
+ 
+-		err = btmtksdio_fw_pmctrl(bdev);
+-		if (err < 0)
+-			return err;
+-
+-		err = btmtksdio_drv_pmctrl(bdev);
+-		if (err < 0)
+-			return err;
+-
+ 		/* Enable SCO over I2S/PCM */
+ 		err = btmtksdio_sco_setting(hdev);
+ 		if (err < 0) {
+@@ -1188,6 +1189,10 @@ static int btmtksdio_shutdown(struct hci_dev *hdev)
+ 	 */
+ 	pm_runtime_get_sync(bdev->dev);
+ 
++	/* WMT commands do not work until the reset is complete */
++	if (test_bit(BTMTKSDIO_HW_RESET_ACTIVE, &bdev->tx_state))
++		goto ignore_wmt_cmd;
++
+ 	/* Disable the device */
+ 	wmt_params.op = BTMTK_WMT_FUNC_CTRL;
+ 	wmt_params.flag = 0;
+@@ -1201,6 +1206,7 @@ static int btmtksdio_shutdown(struct hci_dev *hdev)
+ 		return err;
+ 	}
+ 
++ignore_wmt_cmd:
+ 	pm_runtime_put_noidle(bdev->dev);
+ 	pm_runtime_disable(bdev->dev);
+ 
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 50df417207afd..e48c3ad069bb4 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -3335,6 +3335,12 @@ static int btusb_setup_qca(struct hci_dev *hdev)
+ 			msleep(QCA_BT_RESET_WAIT_MS);
+ 	}
+ 
++	/* Mark HCI_OP_ENHANCED_SETUP_SYNC_CONN as broken as it doesn't seem to
++	 * work with the likes of HSP/HFP mSBC.
++	 */
++	set_bit(HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN, &hdev->quirks);
++	set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/char/hw_random/cn10k-rng.c b/drivers/char/hw_random/cn10k-rng.c
+index 35001c63648bb..a01e9307737c5 100644
+--- a/drivers/char/hw_random/cn10k-rng.c
++++ b/drivers/char/hw_random/cn10k-rng.c
+@@ -31,26 +31,23 @@ struct cn10k_rng {
+ 
+ #define PLAT_OCTEONTX_RESET_RNG_EBG_HEALTH_STATE     0xc2000b0f
+ 
+-static int reset_rng_health_state(struct cn10k_rng *rng)
++static unsigned long reset_rng_health_state(struct cn10k_rng *rng)
+ {
+ 	struct arm_smccc_res res;
+ 
+ 	/* Send SMC service call to reset EBG health state */
+ 	arm_smccc_smc(PLAT_OCTEONTX_RESET_RNG_EBG_HEALTH_STATE, 0, 0, 0, 0, 0, 0, 0, &res);
+-	if (res.a0 != 0UL)
+-		return -EIO;
+-
+-	return 0;
++	return res.a0;
+ }
+ 
+ static int check_rng_health(struct cn10k_rng *rng)
+ {
+ 	u64 status;
+-	int err;
++	unsigned long err;
+ 
+ 	/* Skip checking health */
+ 	if (!rng->reg_base)
+-		return 0;
++		return -ENODEV;
+ 
+ 	status = readq(rng->reg_base + RNM_PF_EBG_HEALTH);
+ 	if (status & BIT_ULL(20)) {
+@@ -58,7 +55,9 @@ static int check_rng_health(struct cn10k_rng *rng)
+ 		if (err) {
+ 			dev_err(&rng->pdev->dev, "HWRNG: Health test failed (status=%llx)\n",
+ 					status);
+-			dev_err(&rng->pdev->dev, "HWRNG: error during reset\n");
++			dev_err(&rng->pdev->dev, "HWRNG: error during reset (error=%lx)\n",
++					err);
++			return -EIO;
+ 		}
+ 	}
+ 	return 0;
+@@ -90,6 +89,7 @@ static int cn10k_rng_read(struct hwrng *hwrng, void *data,
+ {
+ 	struct cn10k_rng *rng = (struct cn10k_rng *)hwrng->priv;
+ 	unsigned int size;
++	u8 *pos = data;
+ 	int err = 0;
+ 	u64 value;
+ 
+@@ -102,17 +102,20 @@ static int cn10k_rng_read(struct hwrng *hwrng, void *data,
+ 	while (size >= 8) {
+ 		cn10k_read_trng(rng, &value);
+ 
+-		*((u64 *)data) = (u64)value;
++		*((u64 *)pos) = value;
+ 		size -= 8;
+-		data += 8;
++		pos += 8;
+ 	}
+ 
+-	while (size > 0) {
++	if (size > 0) {
+ 		cn10k_read_trng(rng, &value);
+ 
+-		*((u8 *)data) = (u8)value;
+-		size--;
+-		data++;
++		while (size > 0) {
++			*pos = (u8)value;
++			value >>= 8;
++			size--;
++			pos++;
++		}
+ 	}
+ 
+ 	return max - size;
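The tail handling fix above reads the TRNG once for the final partial word and spreads that single 64-bit value over the remaining bytes, where the old loop re-read the device for every byte and kept only the low byte each time. The byte-spreading step in isolation:

static void copy_trng_tail(u8 *pos, u64 value, unsigned int size)
{
	/* size < 8 here: one 64-bit sample covers every leftover byte */
	while (size > 0) {
		*pos++ = (u8)value;
		value >>= 8;
		size--;
	}
}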
+diff --git a/drivers/char/hw_random/omap3-rom-rng.c b/drivers/char/hw_random/omap3-rom-rng.c
+index e0d77fa048fb6..f06e4f95114f9 100644
+--- a/drivers/char/hw_random/omap3-rom-rng.c
++++ b/drivers/char/hw_random/omap3-rom-rng.c
+@@ -92,7 +92,7 @@ static int __maybe_unused omap_rom_rng_runtime_resume(struct device *dev)
+ 
+ 	r = ddata->rom_rng_call(0, 0, RNG_GEN_PRNG_HW_INIT);
+ 	if (r != 0) {
+-		clk_disable(ddata->clk);
++		clk_disable_unprepare(ddata->clk);
+ 		dev_err(dev, "HW init failed: %d\n", r);
+ 
+ 		return -EIO;
+diff --git a/drivers/char/ipmi/ipmi_ipmb.c b/drivers/char/ipmi/ipmi_ipmb.c
+index b81b862532fb0..a8bfe0ade082b 100644
+--- a/drivers/char/ipmi/ipmi_ipmb.c
++++ b/drivers/char/ipmi/ipmi_ipmb.c
+@@ -476,6 +476,7 @@ static int ipmi_ipmb_probe(struct i2c_client *client,
+ 	slave_np = of_parse_phandle(dev->of_node, "slave-dev", 0);
+ 	if (slave_np) {
+ 		slave_adap = of_get_i2c_adapter_by_node(slave_np);
++		of_node_put(slave_np);
+ 		if (!slave_adap) {
+ 			dev_notice(&client->dev,
+ 				   "Could not find slave adapter\n");
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index f1827257ef0e0..2610e809c802b 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -11,8 +11,8 @@
+  * Copyright 2002 MontaVista Software Inc.
+  */
+ 
+-#define pr_fmt(fmt) "%s" fmt, "IPMI message handler: "
+-#define dev_fmt pr_fmt
++#define pr_fmt(fmt) "IPMI message handler: " fmt
++#define dev_fmt(fmt) pr_fmt(fmt)
+ 
+ #include <linux/module.h>
+ #include <linux/errno.h>
+diff --git a/drivers/char/ipmi/ipmi_poweroff.c b/drivers/char/ipmi/ipmi_poweroff.c
+index bc3a18daf97a6..62e71c46ac5f7 100644
+--- a/drivers/char/ipmi/ipmi_poweroff.c
++++ b/drivers/char/ipmi/ipmi_poweroff.c
+@@ -94,9 +94,7 @@ static void dummy_recv_free(struct ipmi_recv_msg *msg)
+ {
+ 	atomic_dec(&dummy_count);
+ }
+-static struct ipmi_smi_msg halt_smi_msg = {
+-	.done = dummy_smi_free
+-};
++static struct ipmi_smi_msg halt_smi_msg = INIT_IPMI_SMI_MSG(dummy_smi_free);
+ static struct ipmi_recv_msg halt_recv_msg = {
+ 	.done = dummy_recv_free
+ };
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index f199cc1948446..64c73ea9c9151 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -814,6 +814,14 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 		break;
+ 
+ 	case SSIF_GETTING_EVENTS:
++		if (!msg) {
++			/* Should never happen, but just in case. */
++			dev_warn(&ssif_info->client->dev,
++				 "No message set while getting events\n");
++			ipmi_ssif_unlock_cond(ssif_info, flags);
++			break;
++		}
++
+ 		if ((result < 0) || (len < 3) || (msg->rsp[2] != 0)) {
+ 			/* Error getting event, probably done. */
+ 			msg->done(msg);
+@@ -838,6 +846,14 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 		break;
+ 
+ 	case SSIF_GETTING_MESSAGES:
++		if (!msg) {
++			/* Should never happen, but just in case. */
++			dev_warn(&ssif_info->client->dev,
++				 "No message set while getting messages\n");
++			ipmi_ssif_unlock_cond(ssif_info, flags);
++			break;
++		}
++
+ 		if ((result < 0) || (len < 3) || (msg->rsp[2] != 0)) {
+ 			/* Error getting event, probably done. */
+ 			msg->done(msg);
+@@ -861,6 +877,13 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 			deliver_recv_msg(ssif_info, msg);
+ 		}
+ 		break;
++
++	default:
++		/* Should never happen, but just in case. */
++		dev_warn(&ssif_info->client->dev,
++			 "Invalid state in message done handling: %d\n",
++			 ssif_info->ssif_state);
++		ipmi_ssif_unlock_cond(ssif_info, flags);
+ 	}
+ 
+ 	flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
+diff --git a/drivers/char/ipmi/ipmi_watchdog.c b/drivers/char/ipmi/ipmi_watchdog.c
+index 0604abdd249a1..4c1e9663ea479 100644
+--- a/drivers/char/ipmi/ipmi_watchdog.c
++++ b/drivers/char/ipmi/ipmi_watchdog.c
+@@ -354,9 +354,7 @@ static void msg_free_recv(struct ipmi_recv_msg *msg)
+ 			complete(&msg_wait);
+ 	}
+ }
+-static struct ipmi_smi_msg smi_msg = {
+-	.done = msg_free_smi
+-};
++static struct ipmi_smi_msg smi_msg = INIT_IPMI_SMI_MSG(msg_free_smi);
+ static struct ipmi_recv_msg recv_msg = {
+ 	.done = msg_free_recv
+ };
+@@ -475,9 +473,8 @@ static void panic_recv_free(struct ipmi_recv_msg *msg)
+ 	atomic_dec(&panic_done_count);
+ }
+ 
+-static struct ipmi_smi_msg panic_halt_heartbeat_smi_msg = {
+-	.done = panic_smi_free
+-};
++static struct ipmi_smi_msg panic_halt_heartbeat_smi_msg =
++	INIT_IPMI_SMI_MSG(panic_smi_free);
+ static struct ipmi_recv_msg panic_halt_heartbeat_recv_msg = {
+ 	.done = panic_recv_free
+ };
+@@ -516,9 +513,8 @@ static void panic_halt_ipmi_heartbeat(void)
+ 		atomic_sub(2, &panic_done_count);
+ }
+ 
+-static struct ipmi_smi_msg panic_halt_smi_msg = {
+-	.done = panic_smi_free
+-};
++static struct ipmi_smi_msg panic_halt_smi_msg =
++	INIT_IPMI_SMI_MSG(panic_smi_free);
+ static struct ipmi_recv_msg panic_halt_recv_msg = {
+ 	.done = panic_recv_free
+ };
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 7a66eec08e373..0cfbfa8d5b50a 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -78,8 +78,7 @@ static enum {
+ 	CRNG_EARLY = 1, /* At least POOL_EARLY_BITS collected */
+ 	CRNG_READY = 2  /* Fully initialized with POOL_READY_BITS collected */
+ } crng_init __read_mostly = CRNG_EMPTY;
+-static DEFINE_STATIC_KEY_FALSE(crng_is_ready);
+-#define crng_ready() (static_branch_likely(&crng_is_ready) || crng_init >= CRNG_READY)
++#define crng_ready() (likely(crng_init >= CRNG_READY))
+ /* Various types of waiters for crng_init->CRNG_READY transition. */
+ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
+ static struct fasync_struct *fasync;
+@@ -109,11 +108,6 @@ bool rng_is_initialized(void)
+ }
+ EXPORT_SYMBOL(rng_is_initialized);
+ 
+-static void __cold crng_set_ready(struct work_struct *work)
+-{
+-	static_branch_enable(&crng_is_ready);
+-}
+-
+ /* Used by wait_for_random_bytes(), and considered an entropy collector, below. */
+ static void try_to_generate_entropy(void);
+ 
+@@ -267,7 +261,7 @@ static void crng_reseed(void)
+ 		++next_gen;
+ 	WRITE_ONCE(base_crng.generation, next_gen);
+ 	WRITE_ONCE(base_crng.birth, jiffies);
+-	if (!static_branch_likely(&crng_is_ready))
++	if (!crng_ready())
+ 		crng_init = CRNG_READY;
+ 	spin_unlock_irqrestore(&base_crng.lock, flags);
+ 	memzero_explicit(key, sizeof(key));
+@@ -710,7 +704,6 @@ static void extract_entropy(void *buf, size_t len)
+ 
+ static void __cold _credit_init_bits(size_t bits)
+ {
+-	static struct execute_work set_ready;
+ 	unsigned int new, orig, add;
+ 	unsigned long flags;
+ 
+@@ -726,7 +719,6 @@ static void __cold _credit_init_bits(size_t bits)
+ 
+ 	if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) {
+ 		crng_reseed(); /* Sets crng_init to CRNG_READY under base_crng.lock. */
+-		execute_in_process_context(crng_set_ready, &set_ready);
+ 		process_random_ready_list();
+ 		wake_up_interruptible(&crng_init_wait);
+ 		kill_fasync(&fasync, SIGIO, POLL_IN);
+diff --git a/drivers/char/tpm/tpm_tis_i2c_cr50.c b/drivers/char/tpm/tpm_tis_i2c_cr50.c
+index f6c0affbb4567..bf608b6af3395 100644
+--- a/drivers/char/tpm/tpm_tis_i2c_cr50.c
++++ b/drivers/char/tpm/tpm_tis_i2c_cr50.c
+@@ -768,8 +768,8 @@ static int tpm_cr50_i2c_remove(struct i2c_client *client)
+ 	struct device *dev = &client->dev;
+ 
+ 	if (!chip) {
+-		dev_err(dev, "Could not get client data at remove\n");
+-		return -ENODEV;
++		dev_crit(dev, "Could not get client data at remove, memory corruption ahead\n");
++		return 0;
+ 	}
+ 
+ 	tpm_chip_unregister(chip);
+diff --git a/drivers/clk/tegra/clk-dfll.c b/drivers/clk/tegra/clk-dfll.c
+index 6144447f86c63..62238dca9a534 100644
+--- a/drivers/clk/tegra/clk-dfll.c
++++ b/drivers/clk/tegra/clk-dfll.c
+@@ -271,6 +271,7 @@ struct tegra_dfll {
+ 	struct clk			*ref_clk;
+ 	struct clk			*i2c_clk;
+ 	struct clk			*dfll_clk;
++	struct reset_control		*dfll_rst;
+ 	struct reset_control		*dvco_rst;
+ 	unsigned long			ref_rate;
+ 	unsigned long			i2c_clk_rate;
+@@ -1464,6 +1465,7 @@ static int dfll_init(struct tegra_dfll *td)
+ 		return -EINVAL;
+ 	}
+ 
++	reset_control_deassert(td->dfll_rst);
+ 	reset_control_deassert(td->dvco_rst);
+ 
+ 	ret = clk_prepare(td->ref_clk);
+@@ -1509,6 +1511,7 @@ di_err1:
+ 	clk_unprepare(td->ref_clk);
+ 
+ 	reset_control_assert(td->dvco_rst);
++	reset_control_assert(td->dfll_rst);
+ 
+ 	return ret;
+ }
+@@ -1530,6 +1533,7 @@ int tegra_dfll_suspend(struct device *dev)
+ 	}
+ 
+ 	reset_control_assert(td->dvco_rst);
++	reset_control_assert(td->dfll_rst);
+ 
+ 	return 0;
+ }
+@@ -1548,6 +1552,7 @@ int tegra_dfll_resume(struct device *dev)
+ {
+ 	struct tegra_dfll *td = dev_get_drvdata(dev);
+ 
++	reset_control_deassert(td->dfll_rst);
+ 	reset_control_deassert(td->dvco_rst);
+ 
+ 	pm_runtime_get_sync(td->dev);
+@@ -1951,6 +1956,12 @@ int tegra_dfll_register(struct platform_device *pdev,
+ 
+ 	td->soc = soc;
+ 
++	td->dfll_rst = devm_reset_control_get_optional(td->dev, "dfll");
++	if (IS_ERR(td->dfll_rst)) {
++		dev_err(td->dev, "couldn't get dfll reset\n");
++		return PTR_ERR(td->dfll_rst);
++	}
++
+ 	td->dvco_rst = devm_reset_control_get(td->dev, "dvco");
+ 	if (IS_ERR(td->dvco_rst)) {
+ 		dev_err(td->dev, "couldn't get dvco reset\n");
+@@ -2087,6 +2098,7 @@ struct tegra_dfll_soc_data *tegra_dfll_unregister(struct platform_device *pdev)
+ 	clk_unprepare(td->i2c_clk);
+ 
+ 	reset_control_assert(td->dvco_rst);
++	reset_control_assert(td->dfll_rst);
+ 
+ 	return td->soc;
+ }
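The DFLL change uses devm_reset_control_get_optional(), which returns NULL rather than an error when the device tree simply omits the "dfll" reset line, and reset_control_assert()/deassert() are no-ops on NULL, so boards without that reset keep working unchanged. The idiom, as a brief sketch:

static int example_probe(struct device *dev)
{
	struct reset_control *rst;

	rst = devm_reset_control_get_optional(dev, "example");
	if (IS_ERR(rst))	/* a real failure, e.g. -EPROBE_DEFER */
		return PTR_ERR(rst);

	reset_control_deassert(rst);	/* no-op when rst is NULL */
	return 0;
}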
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 80f535cc8a757..fbaa8e6c7d232 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -28,6 +28,7 @@
+ #include <linux/suspend.h>
+ #include <linux/syscore_ops.h>
+ #include <linux/tick.h>
++#include <linux/units.h>
+ #include <trace/events/power.h>
+ 
+ static LIST_HEAD(cpufreq_policy_list);
+@@ -1707,6 +1708,16 @@ static unsigned int cpufreq_verify_current_freq(struct cpufreq_policy *policy, b
+ 		return new_freq;
+ 
+ 	if (policy->cur != new_freq) {
++		/*
++		 * For some platforms, the frequency returned by hardware may be
++		 * slightly different from what is provided in the frequency
++		 * table, for example hardware may return 499 MHz instead of 500
++		 * MHz. In such cases it is better to avoid getting into
++		 * unnecessary frequency updates.
++		 */
++		if (abs(policy->cur - new_freq) < HZ_PER_MHZ)
++			return policy->cur;
++
+ 		cpufreq_out_of_sync(policy, new_freq);
+ 		if (update)
+ 			schedule_work(&policy->update);
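The comment above justifies the new early return: a hardware readback that differs from the cached frequency by less than the chosen tolerance is treated as already in sync instead of triggering a policy update. As a freestanding predicate (illustrative):

static bool example_freq_in_sync(unsigned int cur, unsigned int readback,
				 unsigned int tolerance)
{
	/* tolerate small hardware rounding, e.g. 499 vs 500 MHz */
	return (cur > readback ? cur - readback : readback - cur) < tolerance;
}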
+diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
+index 0d42cf8b88d8a..85da677c43d6b 100644
+--- a/drivers/cpufreq/cpufreq_governor.c
++++ b/drivers/cpufreq/cpufreq_governor.c
+@@ -388,6 +388,15 @@ static void free_policy_dbs_info(struct policy_dbs_info *policy_dbs,
+ 	gov->free(policy_dbs);
+ }
+ 
++static void cpufreq_dbs_data_release(struct kobject *kobj)
++{
++	struct dbs_data *dbs_data = to_dbs_data(to_gov_attr_set(kobj));
++	struct dbs_governor *gov = dbs_data->gov;
++
++	gov->exit(dbs_data);
++	kfree(dbs_data);
++}
++
+ int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
+ {
+ 	struct dbs_governor *gov = dbs_governor_of(policy);
+@@ -425,6 +434,7 @@ int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
+ 		goto free_policy_dbs_info;
+ 	}
+ 
++	dbs_data->gov = gov;
+ 	gov_attr_set_init(&dbs_data->attr_set, &policy_dbs->list);
+ 
+ 	ret = gov->init(dbs_data);
+@@ -447,6 +457,7 @@ int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
+ 	policy->governor_data = policy_dbs;
+ 
+ 	gov->kobj_type.sysfs_ops = &governor_sysfs_ops;
++	gov->kobj_type.release = cpufreq_dbs_data_release;
+ 	ret = kobject_init_and_add(&dbs_data->attr_set.kobj, &gov->kobj_type,
+ 				   get_governor_parent_kobj(policy),
+ 				   "%s", gov->gov.name);
+@@ -488,13 +499,8 @@ void cpufreq_dbs_governor_exit(struct cpufreq_policy *policy)
+ 
+ 	policy->governor_data = NULL;
+ 
+-	if (!count) {
+-		if (!have_governor_per_policy())
+-			gov->gdbs_data = NULL;
+-
+-		gov->exit(dbs_data);
+-		kfree(dbs_data);
+-	}
++	if (!count && !have_governor_per_policy())
++		gov->gdbs_data = NULL;
+ 
+ 	free_policy_dbs_info(policy_dbs, gov);
+ 
+diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
+index a5a0bc3cc23ec..168c23fd7fcac 100644
+--- a/drivers/cpufreq/cpufreq_governor.h
++++ b/drivers/cpufreq/cpufreq_governor.h
+@@ -37,6 +37,7 @@ enum {OD_NORMAL_SAMPLE, OD_SUB_SAMPLE};
+ /* Governor demand based switching data (per-policy or global). */
+ struct dbs_data {
+ 	struct gov_attr_set attr_set;
++	struct dbs_governor *gov;
+ 	void *tuners;
+ 	unsigned int ignore_nice_load;
+ 	unsigned int sampling_rate;
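The governor fix relocates gov->exit() and the kfree() into a kobject ->release() callback, so the tunables cannot be freed while a sysfs file still holds a reference; the new dbs_data->gov backpointer exists only so that callback can find the right ->exit(). The general lifetime pattern, with hypothetical names:

static void example_release(struct kobject *kobj)
{
	struct example_data *d = container_of(kobj, struct example_data, kobj);

	d->teardown(d);		/* hypothetical per-object exit hook */
	kfree(d);
}

static struct kobj_type example_ktype = {
	.release = example_release,
};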
+diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c
+index 866163883b48d..bfe240c726e34 100644
+--- a/drivers/cpufreq/mediatek-cpufreq.c
++++ b/drivers/cpufreq/mediatek-cpufreq.c
+@@ -44,6 +44,8 @@ struct mtk_cpu_dvfs_info {
+ 	bool need_voltage_tracking;
+ };
+ 
++static struct platform_device *cpufreq_pdev;
++
+ static LIST_HEAD(dvfs_info_list);
+ 
+ static struct mtk_cpu_dvfs_info *mtk_cpu_dvfs_info_lookup(int cpu)
+@@ -547,7 +549,6 @@ static int __init mtk_cpufreq_driver_init(void)
+ {
+ 	struct device_node *np;
+ 	const struct of_device_id *match;
+-	struct platform_device *pdev;
+ 	int err;
+ 
+ 	np = of_find_node_by_path("/");
+@@ -571,16 +572,23 @@ static int __init mtk_cpufreq_driver_init(void)
+ 	 * and the device registration codes are put here to handle defer
+ 	 * probing.
+ 	 */
+-	pdev = platform_device_register_simple("mtk-cpufreq", -1, NULL, 0);
+-	if (IS_ERR(pdev)) {
++	cpufreq_pdev = platform_device_register_simple("mtk-cpufreq", -1, NULL, 0);
++	if (IS_ERR(cpufreq_pdev)) {
+ 		pr_err("failed to register mtk-cpufreq platform device\n");
+ 		platform_driver_unregister(&mtk_cpufreq_platdrv);
+-		return PTR_ERR(pdev);
++		return PTR_ERR(cpufreq_pdev);
+ 	}
+ 
+ 	return 0;
+ }
+-device_initcall(mtk_cpufreq_driver_init);
++module_init(mtk_cpufreq_driver_init)
++
++static void __exit mtk_cpufreq_driver_exit(void)
++{
++	platform_device_unregister(cpufreq_pdev);
++	platform_driver_unregister(&mtk_cpufreq_platdrv);
++}
++module_exit(mtk_cpufreq_driver_exit)
+ 
+ MODULE_DESCRIPTION("MediaTek CPUFreq driver");
+ MODULE_AUTHOR("Pi-Cheng Chen <pi-cheng.chen@linaro.org>");
+diff --git a/drivers/cpuidle/cpuidle-psci-domain.c b/drivers/cpuidle/cpuidle-psci-domain.c
+index 755bbdfc5b82f..3db4fca1172b4 100644
+--- a/drivers/cpuidle/cpuidle-psci-domain.c
++++ b/drivers/cpuidle/cpuidle-psci-domain.c
+@@ -52,7 +52,7 @@ static int psci_pd_init(struct device_node *np, bool use_osi)
+ 	struct generic_pm_domain *pd;
+ 	struct psci_pd_provider *pd_provider;
+ 	struct dev_power_governor *pd_gov;
+-	int ret = -ENOMEM, state_count = 0;
++	int ret = -ENOMEM;
+ 
+ 	pd = dt_idle_pd_alloc(np, psci_dt_parse_state_node);
+ 	if (!pd)
+@@ -71,7 +71,7 @@ static int psci_pd_init(struct device_node *np, bool use_osi)
+ 		pd->flags |= GENPD_FLAG_ALWAYS_ON;
+ 
+ 	/* Use governor for CPU PM domains if it has some states to manage. */
+-	pd_gov = state_count > 0 ? &pm_domain_cpu_gov : NULL;
++	pd_gov = pd->states ? &pm_domain_cpu_gov : NULL;
+ 
+ 	ret = pm_genpd_init(pd, pd_gov, false);
+ 	if (ret)
+diff --git a/drivers/cpuidle/cpuidle-psci.c b/drivers/cpuidle/cpuidle-psci.c
+index b51b5df084500..540105ca0781f 100644
+--- a/drivers/cpuidle/cpuidle-psci.c
++++ b/drivers/cpuidle/cpuidle-psci.c
+@@ -23,6 +23,7 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/slab.h>
+ #include <linux/string.h>
++#include <linux/syscore_ops.h>
+ 
+ #include <asm/cpuidle.h>
+ 
+@@ -131,6 +132,49 @@ static int psci_idle_cpuhp_down(unsigned int cpu)
+ 	return 0;
+ }
+ 
++static void psci_idle_syscore_switch(bool suspend)
++{
++	bool cleared = false;
++	struct device *dev;
++	int cpu;
++
++	for_each_possible_cpu(cpu) {
++		dev = per_cpu_ptr(&psci_cpuidle_data, cpu)->dev;
++
++		if (dev && suspend) {
++			dev_pm_genpd_suspend(dev);
++		} else if (dev) {
++			dev_pm_genpd_resume(dev);
++
++			/* Account for userspace having offlined a CPU. */
++			if (pm_runtime_status_suspended(dev))
++				pm_runtime_set_active(dev);
++
++			/* Clear domain state to re-start fresh. */
++			if (!cleared) {
++				psci_set_domain_state(0);
++				cleared = true;
++			}
++		}
++	}
++}
++
++static int psci_idle_syscore_suspend(void)
++{
++	psci_idle_syscore_switch(true);
++	return 0;
++}
++
++static void psci_idle_syscore_resume(void)
++{
++	psci_idle_syscore_switch(false);
++}
++
++static struct syscore_ops psci_idle_syscore_ops = {
++	.suspend = psci_idle_syscore_suspend,
++	.resume = psci_idle_syscore_resume,
++};
++
+ static void psci_idle_init_cpuhp(void)
+ {
+ 	int err;
+@@ -138,6 +182,8 @@ static void psci_idle_init_cpuhp(void)
+ 	if (!psci_cpuidle_use_cpuhp)
+ 		return;
+ 
++	register_syscore_ops(&psci_idle_syscore_ops);
++
+ 	err = cpuhp_setup_state_nocalls(CPUHP_AP_CPU_PM_STARTING,
+ 					"cpuidle/psci:online",
+ 					psci_idle_cpuhp_up,
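Syscore ops run late in system suspend, after all devices are suspended and with only one CPU alive, which makes them a fitting place for the genpd suspend/resume walk added above. A skeleton of the registration:

static int example_syscore_suspend(void)
{
	/* quiesce state; a non-zero return aborts the suspend */
	return 0;
}

static void example_syscore_resume(void)
{
	/* restore state; cannot fail */
}

static struct syscore_ops example_syscore_ops = {
	.suspend = example_syscore_suspend,
	.resume  = example_syscore_resume,
};

/* somewhere in init: register_syscore_ops(&example_syscore_ops); */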
+diff --git a/drivers/cpuidle/cpuidle-riscv-sbi.c b/drivers/cpuidle/cpuidle-riscv-sbi.c
+index 5c852e6719924..1151e5e2ba824 100644
+--- a/drivers/cpuidle/cpuidle-riscv-sbi.c
++++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
+@@ -414,7 +414,7 @@ static int sbi_pd_init(struct device_node *np)
+ 	struct generic_pm_domain *pd;
+ 	struct sbi_pd_provider *pd_provider;
+ 	struct dev_power_governor *pd_gov;
+-	int ret = -ENOMEM, state_count = 0;
++	int ret = -ENOMEM;
+ 
+ 	pd = dt_idle_pd_alloc(np, sbi_dt_parse_state_node);
+ 	if (!pd)
+@@ -433,7 +433,7 @@ static int sbi_pd_init(struct device_node *np)
+ 		pd->flags |= GENPD_FLAG_ALWAYS_ON;
+ 
+ 	/* Use governor for CPU PM domains if it has some states to manage. */
+-	pd_gov = state_count > 0 ? &pm_domain_cpu_gov : NULL;
++	pd_gov = pd->states ? &pm_domain_cpu_gov : NULL;
+ 
+ 	ret = pm_genpd_init(pd, pd_gov, false);
+ 	if (ret)
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+index 554e400d41cad..70e2e6e373897 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c
+@@ -93,6 +93,68 @@ static int sun8i_ss_cipher_fallback(struct skcipher_request *areq)
+ 	return err;
+ }
+ 
++static int sun8i_ss_setup_ivs(struct skcipher_request *areq)
++{
++	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
++	struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm);
++	struct sun8i_ss_dev *ss = op->ss;
++	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq);
++	struct scatterlist *sg = areq->src;
++	unsigned int todo, offset;
++	unsigned int len = areq->cryptlen;
++	unsigned int ivsize = crypto_skcipher_ivsize(tfm);
++	struct sun8i_ss_flow *sf = &ss->flows[rctx->flow];
++	int i = 0;
++	u32 a;
++	int err;
++
++	rctx->ivlen = ivsize;
++	if (rctx->op_dir & SS_DECRYPTION) {
++		offset = areq->cryptlen - ivsize;
++		scatterwalk_map_and_copy(sf->biv, areq->src, offset,
++					 ivsize, 0);
++	}
++
++	/* we need to copy all IVs from the source in case DMA is bi-directional */
++	while (sg && len) {
++		if (sg_dma_len(sg) == 0) {
++			sg = sg_next(sg);
++			continue;
++		}
++		if (i == 0)
++			memcpy(sf->iv[0], areq->iv, ivsize);
++		a = dma_map_single(ss->dev, sf->iv[i], ivsize, DMA_TO_DEVICE);
++		if (dma_mapping_error(ss->dev, a)) {
++			memzero_explicit(sf->iv[i], ivsize);
++			dev_err(ss->dev, "Cannot DMA MAP IV\n");
++			err = -EFAULT;
++			goto dma_iv_error;
++		}
++		rctx->p_iv[i] = a;
++		/* we only need to set up the other IVs in the decrypt direction */
++		if (rctx->op_dir & SS_ENCRYPTION)
++			return 0;
++		todo = min(len, sg_dma_len(sg));
++		len -= todo;
++		i++;
++		if (i < MAX_SG) {
++			offset = sg->length - ivsize;
++			scatterwalk_map_and_copy(sf->iv[i], sg, offset, ivsize, 0);
++		}
++		rctx->niv = i;
++		sg = sg_next(sg);
++	}
++
++	return 0;
++dma_iv_error:
++	i--;
++	while (i >= 0) {
++		dma_unmap_single(ss->dev, rctx->p_iv[i], ivsize, DMA_TO_DEVICE);
++		memzero_explicit(sf->iv[i], ivsize);
++	}
++	return err;
++}
++
+ static int sun8i_ss_cipher(struct skcipher_request *areq)
+ {
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq);
+@@ -101,9 +163,9 @@ static int sun8i_ss_cipher(struct skcipher_request *areq)
+ 	struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq);
+ 	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+ 	struct sun8i_ss_alg_template *algt;
++	struct sun8i_ss_flow *sf = &ss->flows[rctx->flow];
+ 	struct scatterlist *sg;
+ 	unsigned int todo, len, offset, ivsize;
+-	void *backup_iv = NULL;
+ 	int nr_sgs = 0;
+ 	int nr_sgd = 0;
+ 	int err = 0;
+@@ -134,30 +196,9 @@ static int sun8i_ss_cipher(struct skcipher_request *areq)
+ 
+ 	ivsize = crypto_skcipher_ivsize(tfm);
+ 	if (areq->iv && crypto_skcipher_ivsize(tfm) > 0) {
+-		rctx->ivlen = ivsize;
+-		rctx->biv = kzalloc(ivsize, GFP_KERNEL | GFP_DMA);
+-		if (!rctx->biv) {
+-			err = -ENOMEM;
++		err = sun8i_ss_setup_ivs(areq);
++		if (err)
+ 			goto theend_key;
+-		}
+-		if (rctx->op_dir & SS_DECRYPTION) {
+-			backup_iv = kzalloc(ivsize, GFP_KERNEL);
+-			if (!backup_iv) {
+-				err = -ENOMEM;
+-				goto theend_key;
+-			}
+-			offset = areq->cryptlen - ivsize;
+-			scatterwalk_map_and_copy(backup_iv, areq->src, offset,
+-						 ivsize, 0);
+-		}
+-		memcpy(rctx->biv, areq->iv, ivsize);
+-		rctx->p_iv = dma_map_single(ss->dev, rctx->biv, rctx->ivlen,
+-					    DMA_TO_DEVICE);
+-		if (dma_mapping_error(ss->dev, rctx->p_iv)) {
+-			dev_err(ss->dev, "Cannot DMA MAP IV\n");
+-			err = -ENOMEM;
+-			goto theend_iv;
+-		}
+ 	}
+ 	if (areq->src == areq->dst) {
+ 		nr_sgs = dma_map_sg(ss->dev, areq->src, sg_nents(areq->src),
+@@ -243,21 +284,19 @@ theend_sgs:
+ 	}
+ 
+ theend_iv:
+-	if (rctx->p_iv)
+-		dma_unmap_single(ss->dev, rctx->p_iv, rctx->ivlen,
+-				 DMA_TO_DEVICE);
+-
+ 	if (areq->iv && ivsize > 0) {
+-		if (rctx->biv) {
+-			offset = areq->cryptlen - ivsize;
+-			if (rctx->op_dir & SS_DECRYPTION) {
+-				memcpy(areq->iv, backup_iv, ivsize);
+-				kfree_sensitive(backup_iv);
+-			} else {
+-				scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
+-							 ivsize, 0);
+-			}
+-			kfree(rctx->biv);
++		for (i = 0; i < rctx->niv; i++) {
++			dma_unmap_single(ss->dev, rctx->p_iv[i], ivsize, DMA_TO_DEVICE);
++			memzero_explicit(sf->iv[i], ivsize);
++		}
++
++		offset = areq->cryptlen - ivsize;
++		if (rctx->op_dir & SS_DECRYPTION) {
++			memcpy(areq->iv, sf->biv, ivsize);
++			memzero_explicit(sf->biv, ivsize);
++		} else {
++			scatterwalk_map_and_copy(areq->iv, areq->dst, offset,
++					ivsize, 0);
+ 		}
+ 	}
+ 
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+index 319fe3279a716..6575305786436 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+@@ -66,6 +66,7 @@ int sun8i_ss_run_task(struct sun8i_ss_dev *ss, struct sun8i_cipher_req_ctx *rctx
+ 		      const char *name)
+ {
+ 	int flow = rctx->flow;
++	unsigned int ivlen = rctx->ivlen;
+ 	u32 v = SS_START;
+ 	int i;
+ 
+@@ -104,15 +105,14 @@ int sun8i_ss_run_task(struct sun8i_ss_dev *ss, struct sun8i_cipher_req_ctx *rctx
+ 		mutex_lock(&ss->mlock);
+ 		writel(rctx->p_key, ss->base + SS_KEY_ADR_REG);
+ 
+-		if (i == 0) {
+-			if (rctx->p_iv)
+-				writel(rctx->p_iv, ss->base + SS_IV_ADR_REG);
+-		} else {
+-			if (rctx->biv) {
+-				if (rctx->op_dir == SS_ENCRYPTION)
+-					writel(rctx->t_dst[i - 1].addr + rctx->t_dst[i - 1].len * 4 - rctx->ivlen, ss->base + SS_IV_ADR_REG);
++		if (ivlen) {
++			if (rctx->op_dir == SS_ENCRYPTION) {
++				if (i == 0)
++					writel(rctx->p_iv[0], ss->base + SS_IV_ADR_REG);
+ 				else
+-					writel(rctx->t_src[i - 1].addr + rctx->t_src[i - 1].len * 4 - rctx->ivlen, ss->base + SS_IV_ADR_REG);
++					writel(rctx->t_dst[i - 1].addr + rctx->t_dst[i - 1].len * 4 - ivlen, ss->base + SS_IV_ADR_REG);
++			} else {
++				writel(rctx->p_iv[i], ss->base + SS_IV_ADR_REG);
+ 			}
+ 		}
+ 
+@@ -464,7 +464,7 @@ static void sun8i_ss_free_flows(struct sun8i_ss_dev *ss, int i)
+  */
+ static int allocate_flows(struct sun8i_ss_dev *ss)
+ {
+-	int i, err;
++	int i, j, err;
+ 
+ 	ss->flows = devm_kcalloc(ss->dev, MAXFLOW, sizeof(struct sun8i_ss_flow),
+ 				 GFP_KERNEL);
+@@ -474,6 +474,18 @@ static int allocate_flows(struct sun8i_ss_dev *ss)
+ 	for (i = 0; i < MAXFLOW; i++) {
+ 		init_completion(&ss->flows[i].complete);
+ 
++		ss->flows[i].biv = devm_kmalloc(ss->dev, AES_BLOCK_SIZE,
++						GFP_KERNEL | GFP_DMA);
++		if (!ss->flows[i].biv)
++			goto error_engine;
++
++		for (j = 0; j < MAX_SG; j++) {
++			ss->flows[i].iv[j] = devm_kmalloc(ss->dev, AES_BLOCK_SIZE,
++							  GFP_KERNEL | GFP_DMA);
++			if (!ss->flows[i].iv[j])
++				goto error_engine;
++		}
++
+ 		ss->flows[i].engine = crypto_engine_alloc_init(ss->dev, true);
+ 		if (!ss->flows[i].engine) {
+ 			dev_err(ss->dev, "Cannot allocate engine\n");
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+index 1a71ed49d2333..ca4f280af35d2 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c
+@@ -380,13 +380,21 @@ int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq)
+ 	}
+ 
+ 	len = areq->nbytes;
+-	for_each_sg(areq->src, sg, nr_sgs, i) {
++	sg = areq->src;
++	i = 0;
++	while (len > 0 && sg) {
++		if (sg_dma_len(sg) == 0) {
++			sg = sg_next(sg);
++			continue;
++		}
+ 		rctx->t_src[i].addr = sg_dma_address(sg);
+ 		todo = min(len, sg_dma_len(sg));
+ 		rctx->t_src[i].len = todo / 4;
+ 		len -= todo;
+ 		rctx->t_dst[i].addr = addr_res;
+ 		rctx->t_dst[i].len = digestsize / 4;
++		sg = sg_next(sg);
++		i++;
+ 	}
+ 	if (len > 0) {
+ 		dev_err(ss->dev, "remaining len %d\n", len);
+diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
+index 28188685b9100..57ada86538550 100644
+--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
++++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h
+@@ -121,11 +121,15 @@ struct sginfo {
+  * @complete:	completion for the current task on this flow
+  * @status:	set to 1 by interrupt if task is done
+  * @stat_req:	number of request done by this flow
++ * @iv:		list of IVs to use for each step
++ * @biv:	buffer which contains the backed-up IV
+  */
+ struct sun8i_ss_flow {
+ 	struct crypto_engine *engine;
+ 	struct completion complete;
+ 	int status;
++	u8 *iv[MAX_SG];
++	u8 *biv;
+ #ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG
+ 	unsigned long stat_req;
+ #endif
+@@ -164,28 +168,28 @@ struct sun8i_ss_dev {
+  * @t_src:		list of mapped SGs with their size
+  * @t_dst:		list of mapped SGs with their size
+  * @p_key:		DMA address of the key
+- * @p_iv:		DMA address of the IV
++ * @p_iv:		DMA address of the IVs
++ * @niv:		number of DMA-mapped IVs
+  * @method:		current algorithm for this request
+  * @op_mode:		op_mode for this request
+  * @op_dir:		direction (encrypt vs decrypt) for this request
+  * @flow:		the flow to use for this request
+- * @ivlen:		size of biv
++ * @ivlen:		size of IVs
+  * @keylen:		keylen for this request
+- * @biv:		buffer which contain the IV
+  * @fallback_req:	request struct for invoking the fallback skcipher TFM
+  */
+ struct sun8i_cipher_req_ctx {
+ 	struct sginfo t_src[MAX_SG];
+ 	struct sginfo t_dst[MAX_SG];
+ 	u32 p_key;
+-	u32 p_iv;
++	u32 p_iv[MAX_SG];
++	int niv;
+ 	u32 method;
+ 	u32 op_mode;
+ 	u32 op_dir;
+ 	int flow;
+ 	unsigned int ivlen;
+ 	unsigned int keylen;
+-	void *biv;
+ 	struct skcipher_request fallback_req;   // keep at the end
+ };
+ 
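Both sun8i-ss fixes above (the cipher IV setup and the hash source walk) iterate the scatterlist by hand and skip entries whose DMA length is zero, instead of trusting for_each_sg() over a pre-computed entry count. The skeleton of that walk, as a sketch:

static int example_walk(struct scatterlist *sg, unsigned int len)
{
	unsigned int todo;

	while (len > 0 && sg) {
		if (sg_dma_len(sg) == 0) {	/* skip empty mapped entries */
			sg = sg_next(sg);
			continue;
		}
		todo = min(len, sg_dma_len(sg));
		/* ... consume todo bytes at sg_dma_address(sg) ... */
		len -= todo;
		sg = sg_next(sg);
	}

	return len ? -EINVAL : 0;	/* leftover bytes mean a short list */
}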
+diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
+index 6ab93dfd478a9..3aefb177715e9 100644
+--- a/drivers/crypto/ccp/sev-dev.c
++++ b/drivers/crypto/ccp/sev-dev.c
+@@ -23,6 +23,7 @@
+ #include <linux/gfp.h>
+ #include <linux/cpufeature.h>
+ #include <linux/fs.h>
++#include <linux/fs_struct.h>
+ 
+ #include <asm/smp.h>
+ 
+@@ -170,6 +171,31 @@ static void *sev_fw_alloc(unsigned long len)
+ 	return page_address(page);
+ }
+ 
++static struct file *open_file_as_root(const char *filename, int flags, umode_t mode)
++{
++	struct file *fp;
++	struct path root;
++	struct cred *cred;
++	const struct cred *old_cred;
++
++	task_lock(&init_task);
++	get_fs_root(init_task.fs, &root);
++	task_unlock(&init_task);
++
++	cred = prepare_creds();
++	if (!cred)
++		return ERR_PTR(-ENOMEM);
++	cred->fsuid = GLOBAL_ROOT_UID;
++	old_cred = override_creds(cred);
++
++	fp = file_open_root(&root, filename, flags, mode);
++	path_put(&root);
++
++	revert_creds(old_cred);
++
++	return fp;
++}
++
+ static int sev_read_init_ex_file(void)
+ {
+ 	struct sev_device *sev = psp_master->sev_data;
+@@ -181,7 +207,7 @@ static int sev_read_init_ex_file(void)
+ 	if (!sev_init_ex_buffer)
+ 		return -EOPNOTSUPP;
+ 
+-	fp = filp_open(init_ex_path, O_RDONLY, 0);
++	fp = open_file_as_root(init_ex_path, O_RDONLY, 0);
+ 	if (IS_ERR(fp)) {
+ 		int ret = PTR_ERR(fp);
+ 
+@@ -217,7 +243,7 @@ static void sev_write_init_ex_file(void)
+ 	if (!sev_init_ex_buffer)
+ 		return;
+ 
+-	fp = filp_open(init_ex_path, O_CREAT | O_WRONLY, 0600);
++	fp = open_file_as_root(init_ex_path, O_CREAT | O_WRONLY, 0600);
+ 	if (IS_ERR(fp)) {
+ 		dev_err(sev->dev,
+ 			"SEV: could not open file for write, error %ld\n",
+diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
+index 11e0278c8631d..6140e49273226 100644
+--- a/drivers/crypto/ccree/cc_buffer_mgr.c
++++ b/drivers/crypto/ccree/cc_buffer_mgr.c
+@@ -356,12 +356,14 @@ void cc_unmap_cipher_request(struct device *dev, void *ctx,
+ 			      req_ctx->mlli_params.mlli_dma_addr);
+ 	}
+ 
+-	dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_BIDIRECTIONAL);
+-	dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src));
+-
+ 	if (src != dst) {
+-		dma_unmap_sg(dev, dst, req_ctx->out_nents, DMA_BIDIRECTIONAL);
++		dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_TO_DEVICE);
++		dma_unmap_sg(dev, dst, req_ctx->out_nents, DMA_FROM_DEVICE);
+ 		dev_dbg(dev, "Unmapped req->dst=%pK\n", sg_virt(dst));
++		dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src));
++	} else {
++		dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_BIDIRECTIONAL);
++		dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src));
+ 	}
+ }
+ 
+@@ -377,6 +379,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
+ 	u32 dummy = 0;
+ 	int rc = 0;
+ 	u32 mapped_nents = 0;
++	int src_direction = (src != dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL);
+ 
+ 	req_ctx->dma_buf_type = CC_DMA_BUF_DLLI;
+ 	mlli_params->curr_pool = NULL;
+@@ -399,7 +402,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
+ 	}
+ 
+ 	/* Map the src SGL */
+-	rc = cc_map_sg(dev, src, nbytes, DMA_BIDIRECTIONAL, &req_ctx->in_nents,
++	rc = cc_map_sg(dev, src, nbytes, src_direction, &req_ctx->in_nents,
+ 		       LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents);
+ 	if (rc)
+ 		goto cipher_exit;
+@@ -416,7 +419,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
+ 		}
+ 	} else {
+ 		/* Map the dst sg */
+-		rc = cc_map_sg(dev, dst, nbytes, DMA_BIDIRECTIONAL,
++		rc = cc_map_sg(dev, dst, nbytes, DMA_FROM_DEVICE,
+ 			       &req_ctx->out_nents, LLI_MAX_NUM_OF_DATA_ENTRIES,
+ 			       &dummy, &mapped_nents);
+ 		if (rc)
+@@ -456,6 +459,7 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ 	unsigned int hw_iv_size = areq_ctx->hw_iv_size;
+ 	struct cc_drvdata *drvdata = dev_get_drvdata(dev);
++	int src_direction = (req->src != req->dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL);
+ 
+ 	if (areq_ctx->mac_buf_dma_addr) {
+ 		dma_unmap_single(dev, areq_ctx->mac_buf_dma_addr,
+@@ -514,13 +518,11 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ 		sg_virt(req->src), areq_ctx->src.nents, areq_ctx->assoc.nents,
+ 		areq_ctx->assoclen, req->cryptlen);
+ 
+-	dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents,
+-		     DMA_BIDIRECTIONAL);
++	dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents, src_direction);
+ 	if (req->src != req->dst) {
+ 		dev_dbg(dev, "Unmapping dst sgl: req->dst=%pK\n",
+ 			sg_virt(req->dst));
+-		dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents,
+-			     DMA_BIDIRECTIONAL);
++		dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents, DMA_FROM_DEVICE);
+ 	}
+ 	if (drvdata->coherent &&
+ 	    areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT &&
+@@ -843,7 +845,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
+ 		else
+ 			size_for_map -= authsize;
+ 
+-		rc = cc_map_sg(dev, req->dst, size_for_map, DMA_BIDIRECTIONAL,
++		rc = cc_map_sg(dev, req->dst, size_for_map, DMA_FROM_DEVICE,
+ 			       &areq_ctx->dst.mapped_nents,
+ 			       LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes,
+ 			       &dst_mapped_nents);
+@@ -1056,7 +1058,8 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
+ 		size_to_map += authsize;
+ 	}
+ 
+-	rc = cc_map_sg(dev, req->src, size_to_map, DMA_BIDIRECTIONAL,
++	rc = cc_map_sg(dev, req->src, size_to_map,
++		       (req->src != req->dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL),
+ 		       &areq_ctx->src.mapped_nents,
+ 		       (LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES +
+ 			LLI_MAX_NUM_OF_DATA_ENTRIES),
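Every hunk in this file applies the same rule at a different call site: an in-place request (src == dst) shares one buffer that the device both reads and writes, while an out-of-place request can map src as device-read-only and dst as device-write-only. A minimal sketch of the selection, with an invented enum standing in for the kernel's DMA direction constants:

/* Invented stand-in for the kernel's dma_data_direction values. */
enum dir { DIR_BIDIRECTIONAL, DIR_TO_DEVICE, DIR_FROM_DEVICE };

/* In-place requests are read and written through the same pages, so
 * they must stay bidirectional; otherwise the source is only read by
 * the device, and the separately mapped dst gets DIR_FROM_DEVICE. */
static enum dir src_mapping_dir(const void *src, const void *dst)
{
	return src != dst ? DIR_TO_DEVICE : DIR_BIDIRECTIONAL;
}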
+diff --git a/drivers/crypto/marvell/cesa/cipher.c b/drivers/crypto/marvell/cesa/cipher.c
+index b739d3b873dcf..c6f2fa753b7c0 100644
+--- a/drivers/crypto/marvell/cesa/cipher.c
++++ b/drivers/crypto/marvell/cesa/cipher.c
+@@ -624,7 +624,6 @@ struct skcipher_alg mv_cesa_ecb_des3_ede_alg = {
+ 	.decrypt = mv_cesa_ecb_des3_ede_decrypt,
+ 	.min_keysize = DES3_EDE_KEY_SIZE,
+ 	.max_keysize = DES3_EDE_KEY_SIZE,
+-	.ivsize = DES3_EDE_BLOCK_SIZE,
+ 	.base = {
+ 		.cra_name = "ecb(des3_ede)",
+ 		.cra_driver_name = "mv-ecb-des3-ede",
+diff --git a/drivers/crypto/nx/nx-common-powernv.c b/drivers/crypto/nx/nx-common-powernv.c
+index 32a036ada5d0a..f418817c0f43e 100644
+--- a/drivers/crypto/nx/nx-common-powernv.c
++++ b/drivers/crypto/nx/nx-common-powernv.c
+@@ -827,7 +827,7 @@ static int __init vas_cfg_coproc_info(struct device_node *dn, int chip_id,
+ 		goto err_out;
+ 
+ 	vas_init_rx_win_attr(&rxattr, coproc->ct);
+-	rxattr.rx_fifo = (void *)rx_fifo;
++	rxattr.rx_fifo = rx_fifo;
+ 	rxattr.rx_fifo_size = fifo_size;
+ 	rxattr.lnotify_lpid = lpid;
+ 	rxattr.lnotify_pid = pid;
+diff --git a/drivers/crypto/qat/qat_common/adf_pfvf_pf_proto.c b/drivers/crypto/qat/qat_common/adf_pfvf_pf_proto.c
+index 588352de1ef0e..d17318d3f63a4 100644
+--- a/drivers/crypto/qat/qat_common/adf_pfvf_pf_proto.c
++++ b/drivers/crypto/qat/qat_common/adf_pfvf_pf_proto.c
+@@ -154,7 +154,7 @@ static struct pfvf_message handle_blkmsg_req(struct adf_accel_vf_info *vf_info,
+ 	if (FIELD_GET(ADF_VF2PF_BLOCK_CRC_REQ_MASK, req.data)) {
+ 		dev_dbg(&GET_DEV(vf_info->accel_dev),
+ 			"BlockMsg of type %d for CRC over %d bytes received from VF%d\n",
+-			blk_type, blk_byte, vf_info->vf_nr);
++			blk_type, blk_byte + 1, vf_info->vf_nr);
+ 
+ 		if (!adf_pf2vf_blkmsg_get_data(vf_info, blk_type, blk_byte,
+ 					       byte_max, &resp_data,
+diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
+index 1e7bed8b011fe..91095ad479dc3 100644
+--- a/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
++++ b/drivers/crypto/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
+@@ -60,17 +60,24 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
+ 
+ 	capabilities = ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC |
+ 		       ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC |
+-		       ICP_ACCEL_CAPABILITIES_AUTHENTICATION;
++		       ICP_ACCEL_CAPABILITIES_AUTHENTICATION |
++		       ICP_ACCEL_CAPABILITIES_CIPHER |
++		       ICP_ACCEL_CAPABILITIES_COMPRESSION;
+ 
+ 	/* Read accelerator capabilities mask */
+ 	pci_read_config_dword(pdev, ADF_DEVICE_LEGFUSE_OFFSET, &legfuses);
+ 
+-	if (legfuses & ICP_ACCEL_MASK_CIPHER_SLICE)
++	/* A set bit in legfuses means the feature is OFF in this SKU */
++	if (legfuses & ICP_ACCEL_MASK_CIPHER_SLICE) {
+ 		capabilities &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC;
++		capabilities &= ~ICP_ACCEL_CAPABILITIES_CIPHER;
++	}
+ 	if (legfuses & ICP_ACCEL_MASK_PKE_SLICE)
+ 		capabilities &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC;
+-	if (legfuses & ICP_ACCEL_MASK_AUTH_SLICE)
++	if (legfuses & ICP_ACCEL_MASK_AUTH_SLICE) {
+ 		capabilities &= ~ICP_ACCEL_CAPABILITIES_AUTHENTICATION;
++		capabilities &= ~ICP_ACCEL_CAPABILITIES_CIPHER;
++	}
+ 	if (legfuses & ICP_ACCEL_MASK_COMPRESS_SLICE)
+ 		capabilities &= ~ICP_ACCEL_CAPABILITIES_COMPRESSION;
+ 
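The fix turns on the inverted polarity of the fuse word: a set bit in legfuses means the slice is fused off in this SKU, so the driver starts from the full capability set and clears bits, dropping the combined CIPHER capability when either contributing slice is missing. A standalone model with made-up mask values:

#include <stdint.h>

#define CAP_SYM		(1u << 0)
#define CAP_AUTH	(1u << 1)
#define CAP_CIPHER	(1u << 2)	/* needs both slices below */

#define FUSE_CIPHER_OFF	(1u << 0)
#define FUSE_AUTH_OFF	(1u << 1)

/* A set fuse bit means the feature is OFF, so start from the full
 * capability set and subtract. */
static uint32_t caps_from_fuses(uint32_t legfuses)
{
	uint32_t caps = CAP_SYM | CAP_AUTH | CAP_CIPHER;

	if (legfuses & FUSE_CIPHER_OFF)
		caps &= ~(CAP_SYM | CAP_CIPHER);
	if (legfuses & FUSE_AUTH_OFF)
		caps &= ~(CAP_AUTH | CAP_CIPHER);
	return caps;
}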
+diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
+index 49a4b1c47299f..44e899f06094c 100644
+--- a/drivers/cxl/mem.c
++++ b/drivers/cxl/mem.c
+@@ -27,12 +27,8 @@
+ static int wait_for_media(struct cxl_memdev *cxlmd)
+ {
+ 	struct cxl_dev_state *cxlds = cxlmd->cxlds;
+-	struct cxl_endpoint_dvsec_info *info = &cxlds->info;
+ 	int rc;
+ 
+-	if (!info->mem_enabled)
+-		return -EBUSY;
+-
+ 	rc = cxlds->wait_media_ready(cxlds);
+ 	if (rc)
+ 		return rc;
+diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
+index 3f2182d668292..bb92853c3b933 100644
+--- a/drivers/cxl/pci.c
++++ b/drivers/cxl/pci.c
+@@ -462,16 +462,24 @@ static int wait_for_media_ready(struct cxl_dev_state *cxlds)
+ 	return 0;
+ }
+ 
+-static int cxl_dvsec_ranges(struct cxl_dev_state *cxlds)
++/*
++ * Return the number of non-zero ranges on success and a negative
++ * error code on failure. The cxl_mem driver depends on ranges == 0 to
++ * init HDM operation.
++ */
++static int __cxl_dvsec_ranges(struct cxl_dev_state *cxlds,
++			      struct cxl_endpoint_dvsec_info *info)
+ {
+-	struct cxl_endpoint_dvsec_info *info = &cxlds->info;
+ 	struct pci_dev *pdev = to_pci_dev(cxlds->dev);
++	int hdm_count, rc, i, ranges = 0;
++	struct device *dev = &pdev->dev;
+ 	int d = cxlds->cxl_dvsec;
+-	int hdm_count, rc, i;
+ 	u16 cap, ctrl;
+ 
+-	if (!d)
++	if (!d) {
++		dev_dbg(dev, "No DVSEC Capability\n");
+ 		return -ENXIO;
++	}
+ 
+ 	rc = pci_read_config_word(pdev, d + CXL_DVSEC_CAP_OFFSET, &cap);
+ 	if (rc)
+@@ -481,8 +489,10 @@ static int cxl_dvsec_ranges(struct cxl_dev_state *cxlds)
+ 	if (rc)
+ 		return rc;
+ 
+-	if (!(cap & CXL_DVSEC_MEM_CAPABLE))
++	if (!(cap & CXL_DVSEC_MEM_CAPABLE)) {
++		dev_dbg(dev, "Not MEM Capable\n");
+ 		return -ENXIO;
++	}
+ 
+ 	/*
+ 	 * It is not allowed by spec for MEM.capable to be set and have 0 legacy
+@@ -495,8 +505,10 @@ static int cxl_dvsec_ranges(struct cxl_dev_state *cxlds)
+ 		return -EINVAL;
+ 
+ 	rc = wait_for_valid(cxlds);
+-	if (rc)
++	if (rc) {
++		dev_dbg(dev, "Failure awaiting MEM_INFO_VALID (%d)\n", rc);
+ 		return rc;
++	}
+ 
+ 	info->mem_enabled = FIELD_GET(CXL_DVSEC_MEM_ENABLE, ctrl);
+ 
+@@ -538,10 +550,17 @@ static int cxl_dvsec_ranges(struct cxl_dev_state *cxlds)
+ 		};
+ 
+ 		if (size)
+-			info->ranges++;
++			ranges++;
+ 	}
+ 
+-	return 0;
++	return ranges;
++}
++
++static void cxl_dvsec_ranges(struct cxl_dev_state *cxlds)
++{
++	struct cxl_endpoint_dvsec_info *info = &cxlds->info;
++
++	info->ranges = __cxl_dvsec_ranges(cxlds, info);
+ }
+ 
+ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+@@ -610,10 +629,7 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 	if (rc)
+ 		return rc;
+ 
+-	rc = cxl_dvsec_ranges(cxlds);
+-	if (rc)
+-		dev_warn(&pdev->dev,
+-			 "Failed to get DVSEC range information (%d)\n", rc);
++	cxl_dvsec_ranges(cxlds);
+ 
+ 	cxlmd = devm_cxl_add_memdev(cxlds);
+ 	if (IS_ERR(cxlmd))
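The helper's contract changes from "0 or negative errno" to "count of non-zero ranges, or negative errno", letting the caller store the result directly and letting cxl_mem distinguish "no ranges" from "probe failed". The convention in plain C, with assumed errno values:

#include <errno.h>

/* Returns the number of active ranges (>= 0) on success, or a
 * negative errno; the caller can store the result unconditionally. */
static int count_ranges(const unsigned long long sizes[], int n, int capable)
{
	int i, ranges = 0;

	if (!capable)
		return -ENXIO;		/* the hardware cannot do it at all */

	for (i = 0; i < n; i++)
		if (sizes[i])
			ranges++;
	return ranges;
}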
+diff --git a/drivers/devfreq/rk3399_dmc.c b/drivers/devfreq/rk3399_dmc.c
+index 293857ebfd75d..538e8dc74f40a 100644
+--- a/drivers/devfreq/rk3399_dmc.c
++++ b/drivers/devfreq/rk3399_dmc.c
+@@ -477,6 +477,8 @@ static int rk3399_dmcfreq_remove(struct platform_device *pdev)
+ {
+ 	struct rk3399_dmcfreq *dmcfreq = dev_get_drvdata(&pdev->dev);
+ 
++	devfreq_event_disable_edev(dmcfreq->edev);
++
+ 	/*
+ 	 * Before remove the opp table we need to unregister the opp notifier.
+ 	 */
+diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
+index b9b2b4a4124ee..033df43db0cec 100644
+--- a/drivers/dma/idxd/cdev.c
++++ b/drivers/dma/idxd/cdev.c
+@@ -369,10 +369,16 @@ int idxd_cdev_register(void)
+ 		rc = alloc_chrdev_region(&ictx[i].devt, 0, MINORMASK,
+ 					 ictx[i].name);
+ 		if (rc)
+-			return rc;
++			goto err_free_chrdev_region;
+ 	}
+ 
+ 	return 0;
++
++err_free_chrdev_region:
++	for (i--; i >= 0; i--)
++		unregister_chrdev_region(ictx[i].devt, MINORMASK);
++
++	return rc;
+ }
+ 
+ void idxd_cdev_remove(void)
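This is a standard partial-initialization fix: if the Nth chrdev region fails to allocate, the N-1 earlier registrations were previously leaked; the new error path walks the counter back over what already succeeded. The general shape, with illustrative function-pointer stand-ins for the register/unregister pair:

/* Register n resources; on failure, release only the ones that were
 * already registered before propagating the error. */
static int register_all(int n, int (*reg)(int), void (*unreg)(int))
{
	int i, rc;

	for (i = 0; i < n; i++) {
		rc = reg(i);
		if (rc)
			goto unwind;
	}
	return 0;

unwind:
	for (i--; i >= 0; i--)	/* i still indexes the slot that failed */
		unreg(i);
	return rc;
}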
+diff --git a/drivers/dma/stm32-mdma.c b/drivers/dma/stm32-mdma.c
+index 6f57ff0e7b37b..f8c8b9d76aadc 100644
+--- a/drivers/dma/stm32-mdma.c
++++ b/drivers/dma/stm32-mdma.c
+@@ -34,7 +34,6 @@
+ #include "virt-dma.h"
+ 
+ #define STM32_MDMA_GISR0		0x0000 /* MDMA Int Status Reg 1 */
+-#define STM32_MDMA_GISR1		0x0004 /* MDMA Int Status Reg 2 */
+ 
+ /* MDMA Channel x interrupt/status register */
+ #define STM32_MDMA_CISR(x)		(0x40 + 0x40 * (x)) /* x = 0..62 */
+@@ -168,7 +167,7 @@
+ 
+ #define STM32_MDMA_MAX_BUF_LEN		128
+ #define STM32_MDMA_MAX_BLOCK_LEN	65536
+-#define STM32_MDMA_MAX_CHANNELS		63
++#define STM32_MDMA_MAX_CHANNELS		32
+ #define STM32_MDMA_MAX_REQUESTS		256
+ #define STM32_MDMA_MAX_BURST		128
+ #define STM32_MDMA_VERY_HIGH_PRIORITY	0x3
+@@ -1317,26 +1316,16 @@ static void stm32_mdma_xfer_end(struct stm32_mdma_chan *chan)
+ static irqreturn_t stm32_mdma_irq_handler(int irq, void *devid)
+ {
+ 	struct stm32_mdma_device *dmadev = devid;
+-	struct stm32_mdma_chan *chan = devid;
++	struct stm32_mdma_chan *chan;
+ 	u32 reg, id, ccr, ien, status;
+ 
+ 	/* Find out which channel generates the interrupt */
+ 	status = readl_relaxed(dmadev->base + STM32_MDMA_GISR0);
+-	if (status) {
+-		id = __ffs(status);
+-	} else {
+-		status = readl_relaxed(dmadev->base + STM32_MDMA_GISR1);
+-		if (!status) {
+-			dev_dbg(mdma2dev(dmadev), "spurious it\n");
+-			return IRQ_NONE;
+-		}
+-		id = __ffs(status);
+-		/*
+-		 * As GISR0 provides status for channel id from 0 to 31,
+-		 * so GISR1 provides status for channel id from 32 to 62
+-		 */
+-		id += 32;
++	if (!status) {
++		dev_dbg(mdma2dev(dmadev), "spurious it\n");
++		return IRQ_NONE;
+ 	}
++	id = __ffs(status);
+ 
+ 	chan = &dmadev->chan[id];
+ 	if (!chan) {
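With the channel count capped at 32, one GISR0 read covers every channel and the lowest set bit identifies the interrupt source. The same lookup as a standalone function, using the GCC/Clang __builtin_ctz() in place of the kernel's __ffs():

/* Map a 32-bit per-channel status word to a channel id, or -1 for a
 * spurious interrupt with no bit set. */
static int irq_to_channel(unsigned int gisr0)
{
	if (!gisr0)
		return -1;		/* the "spurious it" path above */
	return __builtin_ctz(gisr0);	/* index of the lowest set bit */
}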
+diff --git a/drivers/dma/ti/k3-psil-am62.c b/drivers/dma/ti/k3-psil-am62.c
+index d431e20332378..2b6fd6e37c610 100644
+--- a/drivers/dma/ti/k3-psil-am62.c
++++ b/drivers/dma/ti/k3-psil-am62.c
+@@ -70,10 +70,10 @@
+ /* PSI-L source thread IDs, used for RX (DMA_DEV_TO_MEM) */
+ static struct psil_ep am62_src_ep_map[] = {
+ 	/* SAUL */
+-	PSIL_SAUL(0x7500, 20, 35, 8, 35, 0),
+-	PSIL_SAUL(0x7501, 21, 35, 8, 36, 0),
+-	PSIL_SAUL(0x7502, 22, 43, 8, 43, 0),
+-	PSIL_SAUL(0x7503, 23, 43, 8, 44, 0),
++	PSIL_SAUL(0x7504, 20, 35, 8, 35, 0),
++	PSIL_SAUL(0x7505, 21, 35, 8, 36, 0),
++	PSIL_SAUL(0x7506, 22, 43, 8, 43, 0),
++	PSIL_SAUL(0x7507, 23, 43, 8, 44, 0),
+ 	/* PDMA_MAIN0 - SPI0-3 */
+ 	PSIL_PDMA_XY_PKT(0x4302),
+ 	PSIL_PDMA_XY_PKT(0x4303),
+diff --git a/drivers/edac/dmc520_edac.c b/drivers/edac/dmc520_edac.c
+index b8a7d9594afd4..1fa5ca57e9ec1 100644
+--- a/drivers/edac/dmc520_edac.c
++++ b/drivers/edac/dmc520_edac.c
+@@ -489,7 +489,7 @@ static int dmc520_edac_probe(struct platform_device *pdev)
+ 	dev = &pdev->dev;
+ 
+ 	for (idx = 0; idx < NUMBER_OF_IRQS; idx++) {
+-		irq = platform_get_irq_byname(pdev, dmc520_irq_configs[idx].name);
++		irq = platform_get_irq_byname_optional(pdev, dmc520_irq_configs[idx].name);
+ 		irqs[idx] = irq;
+ 		masks[idx] = dmc520_irq_configs[idx].mask;
+ 		if (irq >= 0) {
+diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
+index 14f900047ac0c..44300dbcc643d 100644
+--- a/drivers/firmware/arm_ffa/driver.c
++++ b/drivers/firmware/arm_ffa/driver.c
+@@ -582,7 +582,7 @@ static int ffa_partition_info_get(const char *uuid_str,
+ 		return -ENODEV;
+ 	}
+ 
+-	count = ffa_partition_probe(&uuid_null, &pbuf);
++	count = ffa_partition_probe(&uuid, &pbuf);
+ 	if (count <= 0)
+ 		return -ENOENT;
+ 
+@@ -688,8 +688,6 @@ static void ffa_setup_partitions(void)
+ 			       __func__, tpbuf->id);
+ 			continue;
+ 		}
+-
+-		ffa_dev_set_drvdata(ffa_dev, drv_info);
+ 	}
+ 	kfree(pbuf);
+ }
+diff --git a/drivers/firmware/arm_scmi/base.c b/drivers/firmware/arm_scmi/base.c
+index f5219334fd3a5..3fe172c03c247 100644
+--- a/drivers/firmware/arm_scmi/base.c
++++ b/drivers/firmware/arm_scmi/base.c
+@@ -197,7 +197,7 @@ scmi_base_implementation_list_get(const struct scmi_protocol_handle *ph,
+ 			break;
+ 
+ 		loop_num_ret = le32_to_cpu(*num_ret);
+-		if (tot_num_ret + loop_num_ret > MAX_PROTOCOLS_IMP) {
++		if (loop_num_ret > MAX_PROTOCOLS_IMP - tot_num_ret) {
+ 			dev_err(dev, "No. of Protocol > MAX_PROTOCOLS_IMP");
+ 			break;
+ 		}
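The rewritten condition is the overflow-safe form of the same bound: tot_num_ret + loop_num_ret can wrap if the firmware reports a huge count, slipping past the old check, while comparing against the remaining headroom cannot. In isolation:

#include <stdbool.h>
#include <stdint.h>

#define MAX_ITEMS 16u

/* "have + more > MAX_ITEMS" can wrap around for a huge 'more' and
 * pass; comparing against the remaining headroom cannot. The loop
 * guarantees have <= MAX_ITEMS before each call. */
static bool fits(uint32_t have, uint32_t more)
{
	return more <= MAX_ITEMS - have;
}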
+diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig
+index 2c3dac5ecb36d..243882f5e5f99 100644
+--- a/drivers/firmware/efi/Kconfig
++++ b/drivers/firmware/efi/Kconfig
+@@ -284,3 +284,18 @@ config EFI_CUSTOM_SSDT_OVERLAYS
+ 
+ 	  See Documentation/admin-guide/acpi/ssdt-overlays.rst for more
+ 	  information.
++
++config EFI_DISABLE_RUNTIME
++	bool "Disable EFI runtime services support by default"
++	default y if PREEMPT_RT
++	help
++	  Allow EFI runtime services support to be disabled by default. This
++	  can already be achieved with the efi=noruntime option, but it can be
++	  useful to have this default without any kernel command line parameter.
++
++	  The EFI runtime services are disabled by default when PREEMPT_RT is
++	  enabled, because measurements have shown that some EFI function calls
++	  might take too long to complete, causing large latencies, which is an
++	  issue for Real-Time kernels.
++
++	  This default can be overridden by using the efi=runtime option.
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 5502e176d51be..ff57db8f8d059 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -66,7 +66,7 @@ struct mm_struct efi_mm = {
+ 
+ struct workqueue_struct *efi_rts_wq;
+ 
+-static bool disable_runtime = IS_ENABLED(CONFIG_PREEMPT_RT);
++static bool disable_runtime = IS_ENABLED(CONFIG_EFI_DISABLE_RUNTIME);
+ static int __init setup_noefi(char *arg)
+ {
+ 	disable_runtime = true;
+diff --git a/drivers/gpio/gpio-rockchip.c b/drivers/gpio/gpio-rockchip.c
+index 099e358d24915..bcf5214e35866 100644
+--- a/drivers/gpio/gpio-rockchip.c
++++ b/drivers/gpio/gpio-rockchip.c
+@@ -19,6 +19,7 @@
+ #include <linux/of_address.h>
+ #include <linux/of_device.h>
+ #include <linux/of_irq.h>
++#include <linux/pinctrl/pinconf-generic.h>
+ #include <linux/regmap.h>
+ 
+ #include "../pinctrl/core.h"
+@@ -706,7 +707,7 @@ static int rockchip_gpio_probe(struct platform_device *pdev)
+ 	struct device_node *pctlnp = of_get_parent(np);
+ 	struct pinctrl_dev *pctldev = NULL;
+ 	struct rockchip_pin_bank *bank = NULL;
+-	struct rockchip_pin_output_deferred *cfg;
++	struct rockchip_pin_deferred *cfg;
+ 	static int gpio;
+ 	int id, ret;
+ 
+@@ -747,15 +748,22 @@ static int rockchip_gpio_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	while (!list_empty(&bank->deferred_output)) {
+-		cfg = list_first_entry(&bank->deferred_output,
+-				       struct rockchip_pin_output_deferred, head);
++	while (!list_empty(&bank->deferred_pins)) {
++		cfg = list_first_entry(&bank->deferred_pins,
++				       struct rockchip_pin_deferred, head);
+ 		list_del(&cfg->head);
+ 
+-		ret = rockchip_gpio_direction_output(&bank->gpio_chip, cfg->pin, cfg->arg);
+-		if (ret)
+-			dev_warn(dev, "setting output pin %u to %u failed\n", cfg->pin, cfg->arg);
+-
++		switch (cfg->param) {
++		case PIN_CONFIG_OUTPUT:
++			ret = rockchip_gpio_direction_output(&bank->gpio_chip, cfg->pin, cfg->arg);
++			if (ret)
++				dev_warn(dev, "setting output pin %u to %u failed\n", cfg->pin,
++					 cfg->arg);
++			break;
++		default:
++			dev_warn(dev, "unknown deferred config param %d\n", cfg->param);
++			break;
++		}
+ 		kfree(cfg);
+ 	}
+ 
+diff --git a/drivers/gpio/gpio-sim.c b/drivers/gpio/gpio-sim.c
+index 41c31b10ae848..98109839102fb 100644
+--- a/drivers/gpio/gpio-sim.c
++++ b/drivers/gpio/gpio-sim.c
+@@ -314,8 +314,8 @@ static int gpio_sim_setup_sysfs(struct gpio_sim_chip *chip)
+ 
+ 	for (i = 0; i < num_lines; i++) {
+ 		attr_group = devm_kzalloc(dev, sizeof(*attr_group), GFP_KERNEL);
+-		attrs = devm_kcalloc(dev, sizeof(*attrs),
+-				     GPIO_SIM_NUM_ATTRS, GFP_KERNEL);
++		attrs = devm_kcalloc(dev, GPIO_SIM_NUM_ATTRS, sizeof(*attrs),
++				     GFP_KERNEL);
+ 		val_attr = devm_kzalloc(dev, sizeof(*val_attr), GFP_KERNEL);
+ 		pull_attr = devm_kzalloc(dev, sizeof(*pull_attr), GFP_KERNEL);
+ 		if (!attr_group || !attrs || !val_attr || !pull_attr)
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 7e5e51d49d09e..6dec81b1f24be 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -931,6 +931,11 @@ static int of_gpiochip_add_pin_range(struct gpio_chip *chip)
+ 	if (!np)
+ 		return 0;
+ 
++	if (!of_property_read_bool(np, "gpio-ranges") &&
++	    chip->of_gpio_ranges_fallback) {
++		return chip->of_gpio_ranges_fallback(chip, np);
++	}
++
+ 	group_names = of_find_property(np, group_names_propname, NULL);
+ 
+ 	for (;; index++) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index d0d0ea565e3df..2019622191b59 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -116,7 +116,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs
+ 	int ret;
+ 
+ 	if (cs->in.num_chunks == 0)
+-		return 0;
++		return -EINVAL;
+ 
+ 	chunk_array = kvmalloc_array(cs->in.num_chunks, sizeof(uint64_t), GFP_KERNEL);
+ 	if (!chunk_array)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 46ef57b07c151..7e6f8475819dc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -1931,6 +1931,7 @@ static const struct pci_device_id pciidlist[] = {
+ 	{0x1002, 0x7421, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
+ 	{0x1002, 0x7422, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
+ 	{0x1002, 0x7423, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
++	{0x1002, 0x7424, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
+ 	{0x1002, 0x743F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
+ 
+ 	{ PCI_DEVICE(0x1002, PCI_ANY_ID),
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+index a66a0881a934b..88b852b3a2cb6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+@@ -25,6 +25,9 @@
+  */
+ 
+ #include <linux/io-64-nonatomic-lo-hi.h>
++#ifdef CONFIG_X86
++#include <asm/hypervisor.h>
++#endif
+ 
+ #include "amdgpu.h"
+ #include "amdgpu_gmc.h"
+@@ -647,12 +650,14 @@ void amdgpu_gmc_get_vbios_allocations(struct amdgpu_device *adev)
+ 	case CHIP_VEGA10:
+ 		adev->mman.keep_stolen_vga_memory = true;
+ 		/*
+-		 * VEGA10 SRIOV VF needs some firmware reserved area.
++		 * VEGA10 SRIOV VF with MS_HYPERV host needs some firmware reserved area.
+ 		 */
+-		if (amdgpu_sriov_vf(adev)) {
+-			adev->mman.stolen_reserved_offset = 0x100000;
+-			adev->mman.stolen_reserved_size = 0x600000;
++#ifdef CONFIG_X86
++		if (amdgpu_sriov_vf(adev) && hypervisor_is_type(X86_HYPER_MS_HYPERV)) {
++			adev->mman.stolen_reserved_offset = 0x500000;
++			adev->mman.stolen_reserved_size = 0x200000;
+ 		}
++#endif
+ 		break;
+ 	case CHIP_RAVEN:
+ 	case CHIP_RENOIR:
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index a6acec1a6155d..21aa556a6befa 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -357,7 +357,39 @@ static int psp_sw_init(void *handle)
+ 		}
+ 	}
+ 
++	ret = amdgpu_bo_create_kernel(adev, PSP_1_MEG, PSP_1_MEG,
++				      amdgpu_sriov_vf(adev) ?
++				      AMDGPU_GEM_DOMAIN_VRAM : AMDGPU_GEM_DOMAIN_GTT,
++				      &psp->fw_pri_bo,
++				      &psp->fw_pri_mc_addr,
++				      &psp->fw_pri_buf);
++	if (ret)
++		return ret;
++
++	ret = amdgpu_bo_create_kernel(adev, PSP_FENCE_BUFFER_SIZE, PAGE_SIZE,
++				      AMDGPU_GEM_DOMAIN_VRAM,
++				      &psp->fence_buf_bo,
++				      &psp->fence_buf_mc_addr,
++				      &psp->fence_buf);
++	if (ret)
++		goto failed1;
++
++	ret = amdgpu_bo_create_kernel(adev, PSP_CMD_BUFFER_SIZE, PAGE_SIZE,
++				      AMDGPU_GEM_DOMAIN_VRAM,
++				      &psp->cmd_buf_bo, &psp->cmd_buf_mc_addr,
++				      (void **)&psp->cmd_buf_mem);
++	if (ret)
++		goto failed2;
++
+ 	return 0;
++
++failed2:
++	amdgpu_bo_free_kernel(&psp->fw_pri_bo,
++			      &psp->fw_pri_mc_addr, &psp->fw_pri_buf);
++failed1:
++	amdgpu_bo_free_kernel(&psp->fence_buf_bo,
++			      &psp->fence_buf_mc_addr, &psp->fence_buf);
++	return ret;
+ }
+ 
+ static int psp_sw_fini(void *handle)
+@@ -391,6 +423,13 @@ static int psp_sw_fini(void *handle)
+ 	kfree(cmd);
+ 	cmd = NULL;
+ 
++	amdgpu_bo_free_kernel(&psp->fw_pri_bo,
++			      &psp->fw_pri_mc_addr, &psp->fw_pri_buf);
++	amdgpu_bo_free_kernel(&psp->fence_buf_bo,
++			      &psp->fence_buf_mc_addr, &psp->fence_buf);
++	amdgpu_bo_free_kernel(&psp->cmd_buf_bo, &psp->cmd_buf_mc_addr,
++			      (void **)&psp->cmd_buf_mem);
++
+ 	return 0;
+ }
+ 
+@@ -2430,51 +2469,18 @@ static int psp_load_fw(struct amdgpu_device *adev)
+ 	struct psp_context *psp = &adev->psp;
+ 
+ 	if (amdgpu_sriov_vf(adev) && amdgpu_in_reset(adev)) {
+-		psp_ring_stop(psp, PSP_RING_TYPE__KM); /* should not destroy ring, only stop */
+-		goto skip_memalloc;
+-	}
+-
+-	if (amdgpu_sriov_vf(adev)) {
+-		ret = amdgpu_bo_create_kernel(adev, PSP_1_MEG, PSP_1_MEG,
+-						AMDGPU_GEM_DOMAIN_VRAM,
+-						&psp->fw_pri_bo,
+-						&psp->fw_pri_mc_addr,
+-						&psp->fw_pri_buf);
++		/* should not destroy ring, only stop */
++		psp_ring_stop(psp, PSP_RING_TYPE__KM);
+ 	} else {
+-		ret = amdgpu_bo_create_kernel(adev, PSP_1_MEG, PSP_1_MEG,
+-						AMDGPU_GEM_DOMAIN_GTT,
+-						&psp->fw_pri_bo,
+-						&psp->fw_pri_mc_addr,
+-						&psp->fw_pri_buf);
+-	}
+-
+-	if (ret)
+-		goto failed;
+-
+-	ret = amdgpu_bo_create_kernel(adev, PSP_FENCE_BUFFER_SIZE, PAGE_SIZE,
+-					AMDGPU_GEM_DOMAIN_VRAM,
+-					&psp->fence_buf_bo,
+-					&psp->fence_buf_mc_addr,
+-					&psp->fence_buf);
+-	if (ret)
+-		goto failed;
+-
+-	ret = amdgpu_bo_create_kernel(adev, PSP_CMD_BUFFER_SIZE, PAGE_SIZE,
+-				      AMDGPU_GEM_DOMAIN_VRAM,
+-				      &psp->cmd_buf_bo, &psp->cmd_buf_mc_addr,
+-				      (void **)&psp->cmd_buf_mem);
+-	if (ret)
+-		goto failed;
++		memset(psp->fence_buf, 0, PSP_FENCE_BUFFER_SIZE);
+ 
+-	memset(psp->fence_buf, 0, PSP_FENCE_BUFFER_SIZE);
+-
+-	ret = psp_ring_init(psp, PSP_RING_TYPE__KM);
+-	if (ret) {
+-		DRM_ERROR("PSP ring init failed!\n");
+-		goto failed;
++		ret = psp_ring_init(psp, PSP_RING_TYPE__KM);
++		if (ret) {
++			DRM_ERROR("PSP ring init failed!\n");
++			goto failed;
++		}
+ 	}
+ 
+-skip_memalloc:
+ 	ret = psp_hw_start(psp);
+ 	if (ret)
+ 		goto failed;
+@@ -2592,13 +2598,6 @@ static int psp_hw_fini(void *handle)
+ 	psp_tmr_terminate(psp);
+ 	psp_ring_destroy(psp, PSP_RING_TYPE__KM);
+ 
+-	amdgpu_bo_free_kernel(&psp->fw_pri_bo,
+-			      &psp->fw_pri_mc_addr, &psp->fw_pri_buf);
+-	amdgpu_bo_free_kernel(&psp->fence_buf_bo,
+-			      &psp->fence_buf_mc_addr, &psp->fence_buf);
+-	amdgpu_bo_free_kernel(&psp->cmd_buf_bo, &psp->cmd_buf_mc_addr,
+-			      (void **)&psp->cmd_buf_mem);
+-
+ 	return 0;
+ }
+ 
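Taken together, these hunks move the three PSP buffers from per-load allocation to a sw_init/sw_fini lifetime, with goto labels unwinding whatever was already allocated when a later allocation fails. A correct skeleton of that unwind pattern — all names here are placeholders:

/* Allocate three buffers at init time; a failure part-way through
 * releases exactly what already succeeded. */
static int sw_init_demo(void **a, void **b, void **c,
			void *(*alloc)(void), void (*release)(void *))
{
	*a = alloc();
	if (!*a)
		return -1;

	*b = alloc();
	if (!*b)
		goto failed1;

	*c = alloc();
	if (!*c)
		goto failed2;

	return 0;		/* all three freed again in sw_fini */

failed2:
	release(*b);
failed1:
	release(*a);
	return -1;
}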
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+index ca33505026189..aebafbc327fb2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+@@ -714,8 +714,7 @@ int amdgpu_ucode_create_bo(struct amdgpu_device *adev)
+ 
+ void amdgpu_ucode_free_bo(struct amdgpu_device *adev)
+ {
+-	if (adev->firmware.load_type != AMDGPU_FW_LOAD_DIRECT)
+-		amdgpu_bo_free_kernel(&adev->firmware.fw_buf,
++	amdgpu_bo_free_kernel(&adev->firmware.fw_buf,
+ 		&adev->firmware.fw_buf_mc,
+ 		&adev->firmware.fw_buf_ptr);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+index 5e3756643da3f..1d55b2bae37e7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+@@ -864,11 +864,11 @@ static u32 amdgpu_virt_rlcg_reg_rw(struct amdgpu_device *adev, u32 offset, u32 v
+ 	uint32_t timeout = 50000;
+ 	uint32_t i, tmp;
+ 	uint32_t ret = 0;
+-	static void *scratch_reg0;
+-	static void *scratch_reg1;
+-	static void *scratch_reg2;
+-	static void *scratch_reg3;
+-	static void *spare_int;
++	void *scratch_reg0;
++	void *scratch_reg1;
++	void *scratch_reg2;
++	void *scratch_reg3;
++	void *spare_int;
+ 
+ 	if (!adev->gfx.rlc.rlcg_reg_access_supported) {
+ 		dev_err(adev->dev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+index d7e8f72323641..ff86c43b63d11 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+@@ -772,8 +772,8 @@ static void sdma_v4_0_ring_set_wptr(struct amdgpu_ring *ring)
+ 
+ 		DRM_DEBUG("Using doorbell -- "
+ 				"wptr_offs == 0x%08x "
+-				"lower_32_bits(ring->wptr) << 2 == 0x%08x "
+-				"upper_32_bits(ring->wptr) << 2 == 0x%08x\n",
++				"lower_32_bits(ring->wptr << 2) == 0x%08x "
++				"upper_32_bits(ring->wptr << 2) == 0x%08x\n",
+ 				ring->wptr_offs,
+ 				lower_32_bits(ring->wptr << 2),
+ 				upper_32_bits(ring->wptr << 2));
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+index a8d49c005f73d..627eb1f147c20 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+@@ -394,8 +394,8 @@ static void sdma_v5_0_ring_set_wptr(struct amdgpu_ring *ring)
+ 	if (ring->use_doorbell) {
+ 		DRM_DEBUG("Using doorbell -- "
+ 				"wptr_offs == 0x%08x "
+-				"lower_32_bits(ring->wptr) << 2 == 0x%08x "
+-				"upper_32_bits(ring->wptr) << 2 == 0x%08x\n",
++				"lower_32_bits(ring->wptr << 2) == 0x%08x "
++				"upper_32_bits(ring->wptr << 2) == 0x%08x\n",
+ 				ring->wptr_offs,
+ 				lower_32_bits(ring->wptr << 2),
+ 				upper_32_bits(ring->wptr << 2));
+@@ -774,9 +774,9 @@ static int sdma_v5_0_gfx_resume(struct amdgpu_device *adev)
+ 
+ 		if (!amdgpu_sriov_vf(adev)) { /* only bare-metal use register write for wptr */
+ 			WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR),
+-			       lower_32_bits(ring->wptr) << 2);
++			       lower_32_bits(ring->wptr << 2));
+ 			WREG32(sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR_HI),
+-			       upper_32_bits(ring->wptr) << 2);
++			       upper_32_bits(ring->wptr << 2));
+ 		}
+ 
+ 		doorbell = RREG32_SOC15_IP(GC, sdma_v5_0_get_reg_offset(adev, i, mmSDMA0_GFX_DOORBELL));
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+index 824eace698842..a5eb82bfeaa8d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+@@ -295,8 +295,8 @@ static void sdma_v5_2_ring_set_wptr(struct amdgpu_ring *ring)
+ 	if (ring->use_doorbell) {
+ 		DRM_DEBUG("Using doorbell -- "
+ 				"wptr_offs == 0x%08x "
+-				"lower_32_bits(ring->wptr) << 2 == 0x%08x "
+-				"upper_32_bits(ring->wptr) << 2 == 0x%08x\n",
++				"lower_32_bits(ring->wptr << 2) == 0x%08x "
++				"upper_32_bits(ring->wptr << 2) == 0x%08x\n",
+ 				ring->wptr_offs,
+ 				lower_32_bits(ring->wptr << 2),
+ 				upper_32_bits(ring->wptr << 2));
+@@ -672,8 +672,8 @@ static int sdma_v5_2_gfx_resume(struct amdgpu_device *adev)
+ 		WREG32_SOC15_IP(GC, sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_MINOR_PTR_UPDATE), 1);
+ 
+ 		if (!amdgpu_sriov_vf(adev)) { /* only bare-metal use register write for wptr */
+-			WREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR), lower_32_bits(ring->wptr) << 2);
+-			WREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR_HI), upper_32_bits(ring->wptr) << 2);
++			WREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR), lower_32_bits(ring->wptr << 2));
++			WREG32(sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_RB_WPTR_HI), upper_32_bits(ring->wptr << 2));
+ 		}
+ 
+ 		doorbell = RREG32_SOC15_IP(GC, sdma_v5_2_get_reg_offset(adev, i, mmSDMA0_GFX_DOORBELL));
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+index 607f65ab39ac8..10cc834a5ac31 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+@@ -944,8 +944,6 @@ err_drm_file:
+ 
+ bool kfd_dev_is_large_bar(struct kfd_dev *dev)
+ {
+-	struct kfd_local_mem_info mem_info;
+-
+ 	if (debug_largebar) {
+ 		pr_debug("Simulate large-bar allocation on non large-bar machine\n");
+ 		return true;
+@@ -954,9 +952,8 @@ bool kfd_dev_is_large_bar(struct kfd_dev *dev)
+ 	if (dev->use_iommu_v2)
+ 		return false;
+ 
+-	amdgpu_amdkfd_get_local_mem_info(dev->adev, &mem_info);
+-	if (mem_info.local_mem_size_private == 0 &&
+-			mem_info.local_mem_size_public > 0)
++	if (dev->local_mem_info.local_mem_size_private == 0 &&
++			dev->local_mem_info.local_mem_size_public > 0)
+ 		return true;
+ 	return false;
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+index 1eaabd2cb41b0..59b349a4c04a3 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+@@ -2152,7 +2152,7 @@ static int kfd_create_vcrat_image_gpu(void *pcrat_image,
+ 	 * report the total FB size (public+private) as a single
+ 	 * private heap.
+ 	 */
+-	amdgpu_amdkfd_get_local_mem_info(kdev->adev, &local_mem_info);
++	local_mem_info = kdev->local_mem_info;
+ 	sub_type_hdr = (typeof(sub_type_hdr))((char *)sub_type_hdr +
+ 			sub_type_hdr->length);
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 62aa6c9d5123d..c96d521447fcf 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -575,6 +575,8 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
+ 	if (kfd_resume(kfd))
+ 		goto kfd_resume_error;
+ 
++	amdgpu_amdkfd_get_local_mem_info(kfd->adev, &kfd->local_mem_info);
++
+ 	if (kfd_topology_add_device(kfd)) {
+ 		dev_err(kfd_device, "Error adding device to topology\n");
+ 		goto kfd_topology_add_device_error;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index 8f58fc491b289..49a29a60b71e4 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -272,6 +272,7 @@ struct kfd_dev {
+ 
+ 	struct kgd2kfd_shared_resources shared_resources;
+ 	struct kfd_vmid_info vm_info;
++	struct kfd_local_mem_info local_mem_info;
+ 
+ 	const struct kfd2kgd_calls *kfd2kgd;
+ 	struct mutex doorbell_mutex;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index 3bdcae239bc0f..9fc24f6823dfc 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -1102,15 +1102,12 @@ static uint32_t kfd_generate_gpu_id(struct kfd_dev *gpu)
+ 	uint32_t buf[7];
+ 	uint64_t local_mem_size;
+ 	int i;
+-	struct kfd_local_mem_info local_mem_info;
+ 
+ 	if (!gpu)
+ 		return 0;
+ 
+-	amdgpu_amdkfd_get_local_mem_info(gpu->adev, &local_mem_info);
+-
+-	local_mem_size = local_mem_info.local_mem_size_private +
+-			local_mem_info.local_mem_size_public;
++	local_mem_size = gpu->local_mem_info.local_mem_size_private +
++			gpu->local_mem_info.local_mem_size_public;
+ 
+ 	buf[0] = gpu->pdev->devfn;
+ 	buf[1] = gpu->pdev->subsystem_vendor |
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+index 63934ecf6be84..d71e625cc476e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
+@@ -1030,6 +1030,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ 			.afmt = true,
+ 		}
+ 	},
++	.disable_z10 = true,
+ 	.optimize_edp_link_rate = true,
+ 	.enable_sw_cntl_psr = true,
+ 	.apply_vendor_specific_lttpr_wa = true,
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+index 72e7b5d40af69..5472f9936febc 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm.c
+@@ -790,7 +790,7 @@ int amdgpu_dpm_force_performance_level(struct amdgpu_device *adev,
+ 					AMD_DPM_FORCED_LEVEL_PROFILE_MIN_MCLK |
+ 					AMD_DPM_FORCED_LEVEL_PROFILE_PEAK;
+ 
+-	if (!pp_funcs->force_performance_level)
++	if (!pp_funcs || !pp_funcs->force_performance_level)
+ 		return 0;
+ 
+ 	if (adev->pm.dpm.thermal_active)
+diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
+index 8b23cc9f098ad..8fd0782a2b206 100644
+--- a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
++++ b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
+@@ -1623,19 +1623,7 @@ static int kv_update_samu_dpm(struct amdgpu_device *adev, bool gate)
+ 
+ static u8 kv_get_acp_boot_level(struct amdgpu_device *adev)
+ {
+-	u8 i;
+-	struct amdgpu_clock_voltage_dependency_table *table =
+-		&adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table;
+-
+-	for (i = 0; i < table->count; i++) {
+-		if (table->entries[i].clk >= 0) /* XXX */
+-			break;
+-	}
+-
+-	if (i >= table->count)
+-		i = table->count - 1;
+-
+-	return i;
++	return 0;
+ }
+ 
+ static void kv_update_acp_boot_level(struct amdgpu_device *adev)
+diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
+index 633dab14f51c2..49c398ec0aaf6 100644
+--- a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
++++ b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
+@@ -7297,17 +7297,15 @@ static int si_parse_power_table(struct amdgpu_device *adev)
+ 	if (!adev->pm.dpm.ps)
+ 		return -ENOMEM;
+ 	power_state_offset = (u8 *)state_array->states;
+-	for (i = 0; i < state_array->ucNumEntries; i++) {
++	for (adev->pm.dpm.num_ps = 0, i = 0; i < state_array->ucNumEntries; i++) {
+ 		u8 *idx;
+ 		power_state = (union pplib_power_state *)power_state_offset;
+ 		non_clock_array_index = power_state->v2.nonClockInfoIndex;
+ 		non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *)
+ 			&non_clock_info_array->nonClockInfo[non_clock_array_index];
+ 		ps = kzalloc(sizeof(struct  si_ps), GFP_KERNEL);
+-		if (ps == NULL) {
+-			kfree(adev->pm.dpm.ps);
++		if (ps == NULL)
+ 			return -ENOMEM;
+-		}
+ 		adev->pm.dpm.ps[i].ps_priv = ps;
+ 		si_parse_pplib_non_clock_info(adev, &adev->pm.dpm.ps[i],
+ 					      non_clock_info,
+@@ -7329,8 +7327,8 @@ static int si_parse_power_table(struct amdgpu_device *adev)
+ 			k++;
+ 		}
+ 		power_state_offset += 2 + power_state->v2.ucNumDPMLevels;
++		adev->pm.dpm.num_ps++;
+ 	}
+-	adev->pm.dpm.num_ps = state_array->ucNumEntries;
+ 
+ 	/* fill in the vce power states */
+ 	for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) {
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index f10a0256413e6..32cff21f261ca 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -576,6 +576,8 @@ static int smu_early_init(void *handle)
+ 	smu->smu_baco.platform_support = false;
+ 	smu->user_dpm_profile.fan_mode = -1;
+ 
++	mutex_init(&smu->message_lock);
++
+ 	adev->powerplay.pp_handle = smu;
+ 	adev->powerplay.pp_funcs = &swsmu_pm_funcs;
+ 
+@@ -975,8 +977,6 @@ static int smu_sw_init(void *handle)
+ 	bitmap_zero(smu->smu_feature.supported, SMU_FEATURE_MAX);
+ 	bitmap_zero(smu->smu_feature.allowed, SMU_FEATURE_MAX);
+ 
+-	mutex_init(&smu->message_lock);
+-
+ 	INIT_WORK(&smu->throttling_logging_work, smu_throttling_logging_work_fn);
+ 	INIT_WORK(&smu->interrupt_work, smu_interrupt_work_fn);
+ 	atomic64_set(&smu->throttle_int_counter, 0);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+index fd6c44ece1688..012e3bd99cc23 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+@@ -1119,6 +1119,39 @@ static int renoir_get_power_profile_mode(struct smu_context *smu,
+ 	return size;
+ }
+ 
++static void renoir_get_ss_power_percent(SmuMetrics_t *metrics,
++					uint32_t *apu_percent, uint32_t *dgpu_percent)
++{
++	uint32_t apu_boost = 0;
++	uint32_t dgpu_boost = 0;
++	uint16_t apu_limit = 0;
++	uint16_t dgpu_limit = 0;
++	uint16_t apu_power = 0;
++	uint16_t dgpu_power = 0;
++
++	apu_power = metrics->ApuPower;
++	apu_limit = metrics->StapmOriginalLimit;
++	if (apu_power > apu_limit && apu_limit != 0)
++		apu_boost =  ((apu_power - apu_limit) * 100) / apu_limit;
++	apu_boost = (apu_boost > 100) ? 100 : apu_boost;
++
++	dgpu_power = metrics->dGpuPower;
++	if (metrics->StapmCurrentLimit > metrics->StapmOriginalLimit)
++		dgpu_limit = metrics->StapmCurrentLimit - metrics->StapmOriginalLimit;
++	if (dgpu_power > dgpu_limit && dgpu_limit != 0)
++		dgpu_boost = ((dgpu_power - dgpu_limit) * 100) / dgpu_limit;
++	dgpu_boost = (dgpu_boost > 100) ? 100 : dgpu_boost;
++
++	if (dgpu_boost >= apu_boost)
++		apu_boost = 0;
++	else
++		dgpu_boost = 0;
++
++	*apu_percent = apu_boost;
++	*dgpu_percent = dgpu_boost;
++}
++
++
+ static int renoir_get_smu_metrics_data(struct smu_context *smu,
+ 				       MetricsMember_t member,
+ 				       uint32_t *value)
+@@ -1127,6 +1160,9 @@ static int renoir_get_smu_metrics_data(struct smu_context *smu,
+ 
+ 	SmuMetrics_t *metrics = (SmuMetrics_t *)smu_table->metrics_table;
+ 	int ret = 0;
++	uint32_t apu_percent = 0;
++	uint32_t dgpu_percent = 0;
++
+ 
+ 	ret = smu_cmn_get_metrics_table(smu,
+ 					NULL,
+@@ -1171,26 +1207,18 @@ static int renoir_get_smu_metrics_data(struct smu_context *smu,
+ 		*value = metrics->Voltage[1];
+ 		break;
+ 	case METRICS_SS_APU_SHARE:
+-		/* return the percentage of APU power with respect to APU's power limit.
+-		 * percentage is reported, this isn't boost value. Smartshift power
+-		 * boost/shift is only when the percentage is more than 100.
++		/* return the percentage of APU power boost
++		 * with respect to APU's power limit.
+ 		 */
+-		if (metrics->StapmOriginalLimit > 0)
+-			*value =  (metrics->ApuPower * 100) / metrics->StapmOriginalLimit;
+-		else
+-			*value = 0;
++		renoir_get_ss_power_percent(metrics, &apu_percent, &dgpu_percent);
++		*value = apu_percent;
+ 		break;
+ 	case METRICS_SS_DGPU_SHARE:
+-		/* return the percentage of dGPU power with respect to dGPU's power limit.
+-		 * percentage is reported, this isn't boost value. Smartshift power
+-		 * boost/shift is only when the percentage is more than 100.
++		/* return the percentage of dGPU power boost
++		 * with respect to dGPU's power limit.
+ 		 */
+-		if ((metrics->dGpuPower > 0) &&
+-		    (metrics->StapmCurrentLimit > metrics->StapmOriginalLimit))
+-			*value = (metrics->dGpuPower * 100) /
+-				  (metrics->StapmCurrentLimit - metrics->StapmOriginalLimit);
+-		else
+-			*value = 0;
++		renoir_get_ss_power_percent(metrics, &apu_percent, &dgpu_percent);
++		*value = dgpu_percent;
+ 		break;
+ 	default:
+ 		*value = UINT_MAX;
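Both SmartShift metrics now come from one helper: power beyond the STAPM limit becomes a boost percentage, clamped to 100, and only the larger of the APU/dGPU boosts is reported while the other is zeroed (the yellow_carp hunks below duplicate the same shape). The core arithmetic on its own — for example, 120 units of power against a limit of 100 reports a 20% boost:

#include <stdint.h>

/* Boost percentage: how far 'power' exceeds 'limit', as a percentage
 * of the limit, clamped to [0, 100]; 0 when at or under the limit. */
static uint32_t boost_percent(uint32_t power, uint32_t limit)
{
	uint32_t boost = 0;

	if (limit != 0 && power > limit)
		boost = ((power - limit) * 100) / limit;
	return boost > 100 ? 100 : boost;
}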
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
+index e2d099409123a..87257b1b028f7 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
+@@ -276,6 +276,42 @@ static int yellow_carp_mode2_reset(struct smu_context *smu)
+ 	return yellow_carp_mode_reset(smu, SMU_RESET_MODE_2);
+ }
+ 
++
++static void yellow_carp_get_ss_power_percent(SmuMetrics_t *metrics,
++					uint32_t *apu_percent, uint32_t *dgpu_percent)
++{
++	uint32_t apu_boost = 0;
++	uint32_t dgpu_boost = 0;
++	uint16_t apu_limit = 0;
++	uint16_t dgpu_limit = 0;
++	uint16_t apu_power = 0;
++	uint16_t dgpu_power = 0;
++
++	/* APU and dGPU power values are reported in milliwatts
++	 * and STAPM power limits are in watts */
++	apu_power = metrics->ApuPower/1000;
++	apu_limit = metrics->StapmOpnLimit;
++	if (apu_power > apu_limit && apu_limit != 0)
++		apu_boost =  ((apu_power - apu_limit) * 100) / apu_limit;
++	apu_boost = (apu_boost > 100) ? 100 : apu_boost;
++
++	dgpu_power = metrics->dGpuPower/1000;
++	if (metrics->StapmCurrentLimit > metrics->StapmOpnLimit)
++		dgpu_limit = metrics->StapmCurrentLimit - metrics->StapmOpnLimit;
++	if (dgpu_power > dgpu_limit && dgpu_limit != 0)
++		dgpu_boost = ((dgpu_power - dgpu_limit) * 100) / dgpu_limit;
++	dgpu_boost = (dgpu_boost > 100) ? 100 : dgpu_boost;
++
++	if (dgpu_boost >= apu_boost)
++		apu_boost = 0;
++	else
++		dgpu_boost = 0;
++
++	*apu_percent = apu_boost;
++	*dgpu_percent = dgpu_boost;
++
++}
++
+ static int yellow_carp_get_smu_metrics_data(struct smu_context *smu,
+ 							MetricsMember_t member,
+ 							uint32_t *value)
+@@ -284,6 +320,8 @@ static int yellow_carp_get_smu_metrics_data(struct smu_context *smu,
+ 
+ 	SmuMetrics_t *metrics = (SmuMetrics_t *)smu_table->metrics_table;
+ 	int ret = 0;
++	uint32_t apu_percent = 0;
++	uint32_t dgpu_percent = 0;
+ 
+ 	ret = smu_cmn_get_metrics_table(smu, NULL, false);
+ 	if (ret)
+@@ -332,26 +370,18 @@ static int yellow_carp_get_smu_metrics_data(struct smu_context *smu,
+ 		*value = metrics->Voltage[1];
+ 		break;
+ 	case METRICS_SS_APU_SHARE:
+-		/* return the percentage of APU power with respect to APU's power limit.
+-		 * percentage is reported, this isn't boost value. Smartshift power
+-		 * boost/shift is only when the percentage is more than 100.
++		/* return the percentage of APU power boost
++		 * with respect to APU's power limit.
+ 		 */
+-		if (metrics->StapmOpnLimit > 0)
+-			*value =  (metrics->ApuPower * 100) / metrics->StapmOpnLimit;
+-		else
+-			*value = 0;
++		yellow_carp_get_ss_power_percent(metrics, &apu_percent, &dgpu_percent);
++		*value = apu_percent;
+ 		break;
+ 	case METRICS_SS_DGPU_SHARE:
+-		/* return the percentage of dGPU power with respect to dGPU's power limit.
+-		 * percentage is reported, this isn't boost value. Smartshift power
+-		 * boost/shift is only when the percentage is more than 100.
++		/* return the percentage of dGPU power boost
++		 * with respect to dGPU's power limit.
+ 		 */
+-		if ((metrics->dGpuPower > 0) &&
+-		    (metrics->StapmCurrentLimit > metrics->StapmOpnLimit))
+-			*value = (metrics->dGpuPower * 100) /
+-				  (metrics->StapmCurrentLimit - metrics->StapmOpnLimit);
+-		else
+-			*value = 0;
++		yellow_carp_get_ss_power_percent(metrics, &apu_percent, &dgpu_percent);
++		*value = dgpu_percent;
+ 		break;
+ 	default:
+ 		*value = UINT_MAX;
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c
+index d63d83800a8a3..517b94c3bcaf9 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c
+@@ -265,6 +265,10 @@ static int komeda_plane_add(struct komeda_kms_dev *kms,
+ 
+ 	formats = komeda_get_layer_fourcc_list(&mdev->fmt_tbl,
+ 					       layer->layer_type, &n_formats);
++	if (!formats) {
++		kfree(kplane);
++		return -ENOMEM;
++	}
+ 
+ 	err = drm_universal_plane_init(&kms->base, plane,
+ 			get_possible_crtcs(kms, c->pipeline),
+@@ -275,8 +279,10 @@ static int komeda_plane_add(struct komeda_kms_dev *kms,
+ 
+ 	komeda_put_fourcc_list(formats);
+ 
+-	if (err)
+-		goto cleanup;
++	if (err) {
++		kfree(kplane);
++		return err;
++	}
+ 
+ 	drm_plane_helper_add(plane, &komeda_plane_helper_funcs);
+ 
+diff --git a/drivers/gpu/drm/arm/malidp_crtc.c b/drivers/gpu/drm/arm/malidp_crtc.c
+index 494075ddbef68..b5928b52e2791 100644
+--- a/drivers/gpu/drm/arm/malidp_crtc.c
++++ b/drivers/gpu/drm/arm/malidp_crtc.c
+@@ -487,7 +487,10 @@ static void malidp_crtc_reset(struct drm_crtc *crtc)
+ 	if (crtc->state)
+ 		malidp_crtc_destroy_state(crtc, crtc->state);
+ 
+-	__drm_atomic_helper_crtc_reset(crtc, &state->base);
++	if (state)
++		__drm_atomic_helper_crtc_reset(crtc, &state->base);
++	else
++		__drm_atomic_helper_crtc_reset(crtc, NULL);
+ }
+ 
+ static int malidp_crtc_enable_vblank(struct drm_crtc *crtc)
+diff --git a/drivers/gpu/drm/bridge/Kconfig b/drivers/gpu/drm/bridge/Kconfig
+index 2145b08f95346..be2fc4791c1da 100644
+--- a/drivers/gpu/drm/bridge/Kconfig
++++ b/drivers/gpu/drm/bridge/Kconfig
+@@ -77,6 +77,8 @@ config DRM_DISPLAY_CONNECTOR
+ config DRM_ITE_IT6505
+         tristate "ITE IT6505 DisplayPort bridge"
+         depends on OF
++        select DRM_DP_AUX_BUS
++	select DRM_DP_HELPER
+         select DRM_KMS_HELPER
+         select DRM_DP_HELPER
+         select EXTCON
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index 005bf18682ff8..668dcefbae17a 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -1313,6 +1313,7 @@ err_unregister_audio:
+ 	adv7511_audio_exit(adv7511);
+ 	drm_bridge_remove(&adv7511->bridge);
+ err_unregister_cec:
++	cec_unregister_adapter(adv7511->cec_adap);
+ 	i2c_unregister_device(adv7511->i2c_cec);
+ 	clk_disable_unprepare(adv7511->cec_clk);
+ err_i2c_unregister_packet:
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+index eb590fb8e8d0d..988669505aa5e 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+@@ -1632,8 +1632,19 @@ static ssize_t analogix_dpaux_transfer(struct drm_dp_aux *aux,
+ 				       struct drm_dp_aux_msg *msg)
+ {
+ 	struct analogix_dp_device *dp = to_dp(aux);
++	int ret;
++
++	pm_runtime_get_sync(dp->dev);
++
++	ret = analogix_dp_detect_hpd(dp);
++	if (ret)
++		goto out;
+ 
+-	return analogix_dp_transfer(dp, msg);
++	ret = analogix_dp_transfer(dp, msg);
++out:
++	pm_runtime_put(dp->dev);
++
++	return ret;
+ }
+ 
+ struct analogix_dp_device *
+@@ -1698,8 +1709,10 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 
+ 	dp->reg_base = devm_ioremap_resource(&pdev->dev, res);
+-	if (IS_ERR(dp->reg_base))
+-		return ERR_CAST(dp->reg_base);
++	if (IS_ERR(dp->reg_base)) {
++		ret = PTR_ERR(dp->reg_base);
++		goto err_disable_clk;
++	}
+ 
+ 	dp->force_hpd = of_property_read_bool(dev->of_node, "force-hpd");
+ 
+@@ -1711,7 +1724,8 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 	if (IS_ERR(dp->hpd_gpiod)) {
+ 		dev_err(dev, "error getting HDP GPIO: %ld\n",
+ 			PTR_ERR(dp->hpd_gpiod));
+-		return ERR_CAST(dp->hpd_gpiod);
++		ret = PTR_ERR(dp->hpd_gpiod);
++		goto err_disable_clk;
+ 	}
+ 
+ 	if (dp->hpd_gpiod) {
+@@ -1731,7 +1745,8 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 
+ 	if (dp->irq == -ENXIO) {
+ 		dev_err(&pdev->dev, "failed to get irq\n");
+-		return ERR_PTR(-ENODEV);
++		ret = -ENODEV;
++		goto err_disable_clk;
+ 	}
+ 
+ 	ret = devm_request_threaded_irq(&pdev->dev, dp->irq,
+@@ -1740,11 +1755,15 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ 					irq_flags, "analogix-dp", dp);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to request irq\n");
+-		return ERR_PTR(ret);
++		goto err_disable_clk;
+ 	}
+ 	disable_irq(dp->irq);
+ 
+ 	return dp;
++
++err_disable_clk:
++	clk_disable_unprepare(dp->clock);
++	return ERR_PTR(ret);
+ }
+ EXPORT_SYMBOL_GPL(analogix_dp_probe);
+ 
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index 31ecf5626f1d9..060849f8ad8b4 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -874,7 +874,10 @@ static int anx7625_hdcp_enable(struct anx7625_data *ctx)
+ 	}
+ 
+ 	/* Read downstream capability */
+-	anx7625_aux_trans(ctx, DP_AUX_NATIVE_READ, 0x68028, 1, &bcap);
++	ret = anx7625_aux_trans(ctx, DP_AUX_NATIVE_READ, 0x68028, 1, &bcap);
++	if (ret < 0)
++		return ret;
++
+ 	if (!(bcap & 0x01)) {
+ 		pr_warn("downstream not support HDCP 1.4, cap(%x).\n", bcap);
+ 		return 0;
+@@ -1475,12 +1478,12 @@ static void anx7625_dp_adjust_swing(struct anx7625_data *ctx)
+ 	for (i = 0; i < ctx->pdata.dp_lane0_swing_reg_cnt; i++)
+ 		anx7625_reg_write(ctx, ctx->i2c.tx_p1_client,
+ 				  DP_TX_LANE0_SWING_REG0 + i,
+-				  ctx->pdata.lane0_reg_data[i] & 0xFF);
++				  ctx->pdata.lane0_reg_data[i]);
+ 
+ 	for (i = 0; i < ctx->pdata.dp_lane1_swing_reg_cnt; i++)
+ 		anx7625_reg_write(ctx, ctx->i2c.tx_p1_client,
+ 				  DP_TX_LANE1_SWING_REG0 + i,
+-				  ctx->pdata.lane1_reg_data[i] & 0xFF);
++				  ctx->pdata.lane1_reg_data[i]);
+ }
+ 
+ static void dp_hpd_change_handler(struct anx7625_data *ctx, bool on)
+@@ -1587,8 +1590,8 @@ static int anx7625_get_swing_setting(struct device *dev,
+ 			num_regs = DP_TX_SWING_REG_CNT;
+ 
+ 		pdata->dp_lane0_swing_reg_cnt = num_regs;
+-		of_property_read_u32_array(dev->of_node, "analogix,lane0-swing",
+-					   pdata->lane0_reg_data, num_regs);
++		of_property_read_u8_array(dev->of_node, "analogix,lane0-swing",
++					  pdata->lane0_reg_data, num_regs);
+ 	}
+ 
+ 	if (of_get_property(dev->of_node,
+@@ -1597,8 +1600,8 @@ static int anx7625_get_swing_setting(struct device *dev,
+ 			num_regs = DP_TX_SWING_REG_CNT;
+ 
+ 		pdata->dp_lane1_swing_reg_cnt = num_regs;
+-		of_property_read_u32_array(dev->of_node, "analogix,lane1-swing",
+-					   pdata->lane1_reg_data, num_regs);
++		of_property_read_u8_array(dev->of_node, "analogix,lane1-swing",
++					  pdata->lane1_reg_data, num_regs);
+ 	}
+ 
+ 	return 0;
+@@ -2654,7 +2657,7 @@ static int anx7625_i2c_probe(struct i2c_client *client,
+ 	if (ret) {
+ 		if (ret != -EPROBE_DEFER)
+ 			DRM_DEV_ERROR(dev, "fail to parse DT : %d\n", ret);
+-		return ret;
++		goto free_wq;
+ 	}
+ 
+ 	if (anx7625_register_i2c_dummy_clients(platform, client) != 0) {
+@@ -2669,7 +2672,7 @@ static int anx7625_i2c_probe(struct i2c_client *client,
+ 	pm_suspend_ignore_children(dev, true);
+ 	ret = devm_add_action_or_reset(dev, anx7625_runtime_disable, dev);
+ 	if (ret)
+-		return ret;
++		goto free_wq;
+ 
+ 	if (!platform->pdata.low_power_mode) {
+ 		anx7625_disable_pd_protocol(platform);
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.h b/drivers/gpu/drm/bridge/analogix/anx7625.h
+index edbbfe410a56e..e257a84db9626 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.h
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.h
+@@ -426,9 +426,9 @@ struct anx7625_platform_data {
+ 	int mipi_lanes;
+ 	int audio_en;
+ 	int dp_lane0_swing_reg_cnt;
+-	int lane0_reg_data[DP_TX_SWING_REG_CNT];
++	u8 lane0_reg_data[DP_TX_SWING_REG_CNT];
+ 	int dp_lane1_swing_reg_cnt;
+-	int lane1_reg_data[DP_TX_SWING_REG_CNT];
++	u8 lane1_reg_data[DP_TX_SWING_REG_CNT];
+ 	u32 low_power_mode;
+ 	struct device_node *mipi_host_node;
+ };
+diff --git a/drivers/gpu/drm/bridge/chipone-icn6211.c b/drivers/gpu/drm/bridge/chipone-icn6211.c
+index d9b7f48b99fbf..c871a90c0b8f4 100644
+--- a/drivers/gpu/drm/bridge/chipone-icn6211.c
++++ b/drivers/gpu/drm/bridge/chipone-icn6211.c
+@@ -15,8 +15,19 @@
+ #include <linux/of_device.h>
+ #include <linux/regulator/consumer.h>
+ 
+-#include <video/mipi_display.h>
+-
++#define VENDOR_ID		0x00
++#define DEVICE_ID_H		0x01
++#define DEVICE_ID_L		0x02
++#define VERSION_ID		0x03
++#define FIRMWARE_VERSION	0x08
++#define CONFIG_FINISH		0x09
++#define PD_CTRL(n)		(0x0a + ((n) & 0x3)) /* 0..3 */
++#define RST_CTRL(n)		(0x0e + ((n) & 0x1)) /* 0..1 */
++#define SYS_CTRL(n)		(0x10 + ((n) & 0x7)) /* 0..4 */
++#define RGB_DRV(n)		(0x18 + ((n) & 0x3)) /* 0..3 */
++#define RGB_DLY(n)		(0x1c + ((n) & 0x1)) /* 0..1 */
++#define RGB_TEST_CTRL		0x1e
++#define ATE_PLL_EN		0x1f
+ #define HACTIVE_LI		0x20
+ #define VACTIVE_LI		0x21
+ #define VACTIVE_HACTIVE_HI	0x22
+@@ -24,9 +35,101 @@
+ #define HSYNC_LI		0x24
+ #define HBP_LI			0x25
+ #define HFP_HSW_HBP_HI		0x26
++#define HFP_HSW_HBP_HI_HFP(n)		(((n) & 0x300) >> 4)
++#define HFP_HSW_HBP_HI_HS(n)		(((n) & 0x300) >> 6)
++#define HFP_HSW_HBP_HI_HBP(n)		(((n) & 0x300) >> 8)
+ #define VFP			0x27
+ #define VSYNC			0x28
+ #define VBP			0x29
++#define BIST_POL		0x2a
++#define BIST_POL_BIST_MODE(n)		(((n) & 0xf) << 4)
++#define BIST_POL_BIST_GEN		BIT(3)
++#define BIST_POL_HSYNC_POL		BIT(2)
++#define BIST_POL_VSYNC_POL		BIT(1)
++#define BIST_POL_DE_POL			BIT(0)
++#define BIST_RED		0x2b
++#define BIST_GREEN		0x2c
++#define BIST_BLUE		0x2d
++#define BIST_CHESS_X		0x2e
++#define BIST_CHESS_Y		0x2f
++#define BIST_CHESS_XY_H		0x30
++#define BIST_FRAME_TIME_L	0x31
++#define BIST_FRAME_TIME_H	0x32
++#define FIFO_MAX_ADDR_LOW	0x33
++#define SYNC_EVENT_DLY		0x34
++#define HSW_MIN			0x35
++#define HFP_MIN			0x36
++#define LOGIC_RST_NUM		0x37
++#define OSC_CTRL(n)		(0x48 + ((n) & 0x7)) /* 0..5 */
++#define BG_CTRL			0x4e
++#define LDO_PLL			0x4f
++#define PLL_CTRL(n)		(0x50 + ((n) & 0xf)) /* 0..15 */
++#define PLL_CTRL_6_EXTERNAL		0x90
++#define PLL_CTRL_6_MIPI_CLK		0x92
++#define PLL_CTRL_6_INTERNAL		0x93
++#define PLL_REM(n)		(0x60 + ((n) & 0x3)) /* 0..2 */
++#define PLL_DIV(n)		(0x63 + ((n) & 0x3)) /* 0..2 */
++#define PLL_FRAC(n)		(0x66 + ((n) & 0x3)) /* 0..2 */
++#define PLL_INT(n)		(0x69 + ((n) & 0x1)) /* 0..1 */
++#define PLL_REF_DIV		0x6b
++#define PLL_REF_DIV_P(n)		((n) & 0xf)
++#define PLL_REF_DIV_Pe			BIT(4)
++#define PLL_REF_DIV_S(n)		(((n) & 0x7) << 5)
++#define PLL_SSC_P(n)		(0x6c + ((n) & 0x3)) /* 0..2 */
++#define PLL_SSC_STEP(n)		(0x6f + ((n) & 0x3)) /* 0..2 */
++#define PLL_SSC_OFFSET(n)	(0x72 + ((n) & 0x3)) /* 0..3 */
++#define GPIO_OEN		0x79
++#define MIPI_CFG_PW		0x7a
++#define MIPI_CFG_PW_CONFIG_DSI		0xc1
++#define MIPI_CFG_PW_CONFIG_I2C		0x3e
++#define GPIO_SEL(n)		(0x7b + ((n) & 0x1)) /* 0..1 */
++#define IRQ_SEL			0x7d
++#define DBG_SEL			0x7e
++#define DBG_SIGNAL		0x7f
++#define MIPI_ERR_VECTOR_L	0x80
++#define MIPI_ERR_VECTOR_H	0x81
++#define MIPI_ERR_VECTOR_EN_L	0x82
++#define MIPI_ERR_VECTOR_EN_H	0x83
++#define MIPI_MAX_SIZE_L		0x84
++#define MIPI_MAX_SIZE_H		0x85
++#define DSI_CTRL		0x86
++#define DSI_CTRL_UNKNOWN		0x28
++#define DSI_CTRL_DSI_LANES(n)		((n) & 0x3)
++#define MIPI_PN_SWAP		0x87
++#define MIPI_PN_SWAP_CLK		BIT(4)
++#define MIPI_PN_SWAP_D(n)		BIT((n) & 0x3)
++#define MIPI_SOT_SYNC_BIT_(n)	(0x88 + ((n) & 0x1)) /* 0..1 */
++#define MIPI_ULPS_CTRL		0x8a
++#define MIPI_CLK_CHK_VAR	0x8e
++#define MIPI_CLK_CHK_INI	0x8f
++#define MIPI_T_TERM_EN		0x90
++#define MIPI_T_HS_SETTLE	0x91
++#define MIPI_T_TA_SURE_PRE	0x92
++#define MIPI_T_LPX_SET		0x94
++#define MIPI_T_CLK_MISS		0x95
++#define MIPI_INIT_TIME_L	0x96
++#define MIPI_INIT_TIME_H	0x97
++#define MIPI_T_CLK_TERM_EN	0x99
++#define MIPI_T_CLK_SETTLE	0x9a
++#define MIPI_TO_HS_RX_L		0x9e
++#define MIPI_TO_HS_RX_H		0x9f
++#define MIPI_PHY_(n)		(0xa0 + ((n) & 0x7)) /* 0..5 */
++#define MIPI_PD_RX		0xb0
++#define MIPI_PD_TERM		0xb1
++#define MIPI_PD_HSRX		0xb2
++#define MIPI_PD_LPTX		0xb3
++#define MIPI_PD_LPRX		0xb4
++#define MIPI_PD_CK_LANE		0xb5
++#define MIPI_FORCE_0		0xb6
++#define MIPI_RST_CTRL		0xb7
++#define MIPI_RST_NUM		0xb8
++#define MIPI_DBG_SET_(n)	(0xc0 + ((n) & 0xf)) /* 0..9 */
++#define MIPI_DBG_SEL		0xe0
++#define MIPI_DBG_DATA		0xe1
++#define MIPI_ATE_TEST_SEL	0xe2
++#define MIPI_ATE_STATUS_(n)	(0xe3 + ((n) & 0x1)) /* 0..1 */
++#define MIPI_ATE_STATUS_1	0xe4
++#define ICN6211_MAX_REGISTER	MIPI_ATE_STATUS_(1)
+ 
+ struct chipone {
+ 	struct device *dev;
+@@ -63,14 +166,15 @@ static void chipone_atomic_enable(struct drm_bridge *bridge,
+ {
+ 	struct chipone *icn = bridge_to_chipone(bridge);
+ 	struct drm_display_mode *mode = &icn->mode;
++	u16 hfp, hbp, hsync;
+ 
+-	ICN6211_DSI(icn, 0x7a, 0xc1);
++	ICN6211_DSI(icn, MIPI_CFG_PW, MIPI_CFG_PW_CONFIG_DSI);
+ 
+ 	ICN6211_DSI(icn, HACTIVE_LI, mode->hdisplay & 0xff);
+ 
+ 	ICN6211_DSI(icn, VACTIVE_LI, mode->vdisplay & 0xff);
+ 
+-	/**
++	/*
+ 	 * lsb nibble: 2nd nibble of hdisplay
+ 	 * msb nibble: 2nd nibble of vdisplay
+ 	 */
+@@ -78,13 +182,18 @@ static void chipone_atomic_enable(struct drm_bridge *bridge,
+ 		    ((mode->hdisplay >> 8) & 0xf) |
+ 		    (((mode->vdisplay >> 8) & 0xf) << 4));
+ 
+-	ICN6211_DSI(icn, HFP_LI, mode->hsync_start - mode->hdisplay);
+-
+-	ICN6211_DSI(icn, HSYNC_LI, mode->hsync_end - mode->hsync_start);
+-
+-	ICN6211_DSI(icn, HBP_LI, mode->htotal - mode->hsync_end);
++	hfp = mode->hsync_start - mode->hdisplay;
++	hsync = mode->hsync_end - mode->hsync_start;
++	hbp = mode->htotal - mode->hsync_end;
+ 
+-	ICN6211_DSI(icn, HFP_HSW_HBP_HI, 0x00);
++	ICN6211_DSI(icn, HFP_LI, hfp & 0xff);
++	ICN6211_DSI(icn, HSYNC_LI, hsync & 0xff);
++	ICN6211_DSI(icn, HBP_LI, hbp & 0xff);
++	/* Top two bits of Horizontal Front porch/Sync/Back porch */
++	ICN6211_DSI(icn, HFP_HSW_HBP_HI,
++		    HFP_HSW_HBP_HI_HFP(hfp) |
++		    HFP_HSW_HBP_HI_HS(hsync) |
++		    HFP_HSW_HBP_HI_HBP(hbp));
+ 
+ 	ICN6211_DSI(icn, VFP, mode->vsync_start - mode->vdisplay);
+ 
+@@ -93,21 +202,21 @@ static void chipone_atomic_enable(struct drm_bridge *bridge,
+ 	ICN6211_DSI(icn, VBP, mode->vtotal - mode->vsync_end);
+ 
+ 	/* dsi specific sequence */
+-	ICN6211_DSI(icn, MIPI_DCS_SET_TEAR_OFF, 0x80);
+-	ICN6211_DSI(icn, MIPI_DCS_SET_ADDRESS_MODE, 0x28);
+-	ICN6211_DSI(icn, 0xb5, 0xa0);
+-	ICN6211_DSI(icn, 0x5c, 0xff);
+-	ICN6211_DSI(icn, MIPI_DCS_SET_COLUMN_ADDRESS, 0x01);
+-	ICN6211_DSI(icn, MIPI_DCS_GET_POWER_SAVE, 0x92);
+-	ICN6211_DSI(icn, 0x6b, 0x71);
+-	ICN6211_DSI(icn, 0x69, 0x2b);
+-	ICN6211_DSI(icn, MIPI_DCS_ENTER_SLEEP_MODE, 0x40);
+-	ICN6211_DSI(icn, MIPI_DCS_EXIT_SLEEP_MODE, 0x98);
++	ICN6211_DSI(icn, SYNC_EVENT_DLY, 0x80);
++	ICN6211_DSI(icn, HFP_MIN, hfp & 0xff);
++	ICN6211_DSI(icn, MIPI_PD_CK_LANE, 0xa0);
++	ICN6211_DSI(icn, PLL_CTRL(12), 0xff);
++	ICN6211_DSI(icn, BIST_POL, BIST_POL_DE_POL);
++	ICN6211_DSI(icn, PLL_CTRL(6), PLL_CTRL_6_MIPI_CLK);
++	ICN6211_DSI(icn, PLL_REF_DIV, 0x71);
++	ICN6211_DSI(icn, PLL_INT(0), 0x2b);
++	ICN6211_DSI(icn, SYS_CTRL(0), 0x40);
++	ICN6211_DSI(icn, SYS_CTRL(1), 0x98);
+ 
+ 	/* icn6211 specific sequence */
+-	ICN6211_DSI(icn, 0xb6, 0x20);
+-	ICN6211_DSI(icn, 0x51, 0x20);
+-	ICN6211_DSI(icn, 0x09, 0x10);
++	ICN6211_DSI(icn, MIPI_FORCE_0, 0x20);
++	ICN6211_DSI(icn, PLL_CTRL(1), 0x20);
++	ICN6211_DSI(icn, CONFIG_FINISH, 0x10);
+ 
+ 	usleep_range(10000, 11000);
+ }
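A note on the HFP_HSW_HBP_HI change above: each horizontal timing value
(front porch, sync width, back porch) is up to 10 bits wide. The low byte
goes into HFP_LI/HSYNC_LI/HBP_LI, and the top two bits of each value share
the HFP_HSW_HBP_HI byte, which the old code hardcoded to 0x00 and thereby
truncated any timing of 256 or more. A minimal standalone sketch of the
packing; the exact bit positions used by the HFP_HSW_HBP_HI_*() helpers
are assumed here for illustration, not taken from this patch:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint16_t hfp = 320, hsync = 8, hbp = 80; /* example timings */

		/* assumed layout: HFP in bits 5:4, HS in 3:2, HBP in 1:0 */
		uint8_t hi = (uint8_t)(((hfp >> 8) & 0x3) << 4 |
				       ((hsync >> 8) & 0x3) << 2 |
				       ((hbp >> 8) & 0x3));

		printf("HFP_LI=0x%02x HFP_HSW_HBP_HI=0x%02x\n",
		       hfp & 0xff, hi); /* HFP_LI=0x40 HFP_HSW_HBP_HI=0x10 */
		return 0;
	}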
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index f2f101220ade9..c546646771724 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -737,8 +737,9 @@ static int it6505_drm_dp_link_probe(struct drm_dp_aux *aux,
+ 	return 0;
+ }
+ 
+-static int it6505_drm_dp_link_power_up(struct drm_dp_aux *aux,
+-				       struct it6505_drm_dp_link *link)
++static int it6505_drm_dp_link_set_power(struct drm_dp_aux *aux,
++					struct it6505_drm_dp_link *link,
++					u8 mode)
+ {
+ 	u8 value;
+ 	int err;
+@@ -752,18 +753,20 @@ static int it6505_drm_dp_link_power_up(struct drm_dp_aux *aux,
+ 		return err;
+ 
+ 	value &= ~DP_SET_POWER_MASK;
+-	value |= DP_SET_POWER_D0;
++	value |= mode;
+ 
+ 	err = drm_dp_dpcd_writeb(aux, DP_SET_POWER, value);
+ 	if (err < 0)
+ 		return err;
+ 
+-	/*
+-	 * According to the DP 1.1 specification, a "Sink Device must exit the
+-	 * power saving state within 1 ms" (Section 2.5.3.1, Table 5-52, "Sink
+-	 * Control Field" (register 0x600).
+-	 */
+-	usleep_range(1000, 2000);
++	if (mode == DP_SET_POWER_D0) {
++		/*
++		 * According to the DP 1.1 specification, a "Sink Device must
++		 * exit the power saving state within 1 ms" (Section 2.5.3.1,
++		 * Table 5-52, "Sink Control Field" (register 0x600).
++		 */
++		usleep_range(1000, 2000);
++	}
+ 
+ 	return 0;
+ }
+@@ -2624,7 +2627,8 @@ static enum drm_connector_status it6505_detect(struct it6505 *it6505)
+ 	if (it6505_get_sink_hpd_status(it6505)) {
+ 		it6505_aux_on(it6505);
+ 		it6505_drm_dp_link_probe(&it6505->aux, &it6505->link);
+-		it6505_drm_dp_link_power_up(&it6505->aux, &it6505->link);
++		it6505_drm_dp_link_set_power(&it6505->aux, &it6505->link,
++					     DP_SET_POWER_D0);
+ 		it6505->auto_train_retry = AUTO_TRAIN_RETRY;
+ 
+ 		if (it6505->dpcd[0] == 0) {
+@@ -2960,8 +2964,11 @@ static void it6505_bridge_atomic_disable(struct drm_bridge *bridge,
+ 
+ 	DRM_DEV_DEBUG_DRIVER(dev, "start");
+ 
+-	if (it6505->powered)
++	if (it6505->powered) {
+ 		it6505_video_disable(it6505);
++		it6505_drm_dp_link_set_power(&it6505->aux, &it6505->link,
++					     DP_SET_POWER_D3);
++	}
+ }
+ 
+ static enum drm_connector_status
+diff --git a/drivers/gpu/drm/bridge/ite-it66121.c b/drivers/gpu/drm/bridge/ite-it66121.c
+index 69288cf894b99..e81c106e2c2bb 100644
+--- a/drivers/gpu/drm/bridge/ite-it66121.c
++++ b/drivers/gpu/drm/bridge/ite-it66121.c
+@@ -227,7 +227,7 @@ static const struct regmap_range_cfg it66121_regmap_banks[] = {
+ 		.selector_mask = 0x1,
+ 		.selector_shift = 0,
+ 		.window_start = 0x00,
+-		.window_len = 0x130,
++		.window_len = 0x100,
+ 	},
+ };
+ 
+diff --git a/drivers/gpu/drm/drm_bridge_connector.c b/drivers/gpu/drm/drm_bridge_connector.c
+index 60923cdfe8e1c..6b3dad03d77d0 100644
+--- a/drivers/gpu/drm/drm_bridge_connector.c
++++ b/drivers/gpu/drm/drm_bridge_connector.c
+@@ -384,8 +384,10 @@ struct drm_connector *drm_bridge_connector_init(struct drm_device *drm,
+ 				    connector_type, ddc);
+ 	drm_connector_helper_add(connector, &drm_bridge_connector_helper_funcs);
+ 
+-	if (bridge_connector->bridge_hpd)
++	if (bridge_connector->bridge_hpd) {
+ 		connector->polled = DRM_CONNECTOR_POLL_HPD;
++		drm_bridge_connector_enable_hpd(connector);
++	}
+ 	else if (bridge_connector->bridge_detect)
+ 		connector->polled = DRM_CONNECTOR_POLL_CONNECT
+ 				  | DRM_CONNECTOR_POLL_DISCONNECT;
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index cc7bd58369dfe..c5b86414873e2 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -2031,9 +2031,6 @@ struct edid *drm_do_get_edid(struct drm_connector *connector,
+ 
+ 		connector_bad_edid(connector, edid, edid[0x7e] + 1);
+ 
+-		edid[EDID_LENGTH-1] += edid[0x7e] - valid_extensions;
+-		edid[0x7e] = valid_extensions;
+-
+ 		new = kmalloc_array(valid_extensions + 1, EDID_LENGTH,
+ 				    GFP_KERNEL);
+ 		if (!new)
+@@ -2050,6 +2047,9 @@ struct edid *drm_do_get_edid(struct drm_connector *connector,
+ 			base += EDID_LENGTH;
+ 		}
+ 
++		new[EDID_LENGTH - 1] += new[0x7e] - valid_extensions;
++		new[0x7e] = valid_extensions;
++
+ 		kfree(edid);
+ 		edid = new;
+ 	}
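Two things about the drm_edid.c hunk above. First, the copy loop still
needs the original extension count to walk every source block, so the
count byte may only be rewritten in the new buffer once copying is done.
Second, the fixup keeps the base-block checksum valid: EDID requires the
128 base-block bytes to sum to 0 (mod 256), and bumping the checksum byte
by exactly the amount the count byte drops leaves that sum unchanged. A
worked example, assuming 2 of 4 extension blocks were invalid:

	u8 valid_extensions = 2;	/* new[0x7e] still holds 4 here */

	new[EDID_LENGTH - 1] += new[0x7e] - valid_extensions; /* checksum += 2 */
	new[0x7e] = valid_extensions;                         /* count    -= 2 */
	/* net change to the byte sum: +2 - 2 = 0, so it still sums to 0 */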
+diff --git a/drivers/gpu/drm/drm_format_helper.c b/drivers/gpu/drm/drm_format_helper.c
+index bc0f49773868a..e085f855a1990 100644
+--- a/drivers/gpu/drm/drm_format_helper.c
++++ b/drivers/gpu/drm/drm_format_helper.c
+@@ -594,35 +594,24 @@ int drm_fb_blit_toio(void __iomem *dst, unsigned int dst_pitch, uint32_t dst_for
+ }
+ EXPORT_SYMBOL(drm_fb_blit_toio);
+ 
+-static void drm_fb_gray8_to_mono_reversed_line(u8 *dst, const u8 *src, unsigned int pixels,
+-					       unsigned int start_offset, unsigned int end_len)
+-{
+-	unsigned int xb, i;
+-
+-	for (xb = 0; xb < pixels; xb++) {
+-		unsigned int start = 0, end = 8;
+-		u8 byte = 0x00;
+-
+-		if (xb == 0 && start_offset)
+-			start = start_offset;
+ 
+-		if (xb == pixels - 1 && end_len)
+-			end = end_len;
+-
+-		for (i = start; i < end; i++) {
+-			unsigned int x = xb * 8 + i;
++static void drm_fb_gray8_to_mono_line(u8 *dst, const u8 *src, unsigned int pixels)
++{
++	while (pixels) {
++		unsigned int i, bits = min(pixels, 8U);
++		u8 byte = 0;
+ 
+-			byte >>= 1;
+-			if (src[x] >> 7)
+-				byte |= BIT(7);
++		for (i = 0; i < bits; i++, pixels--) {
++			if (*src++ >= 128)
++				byte |= BIT(i);
+ 		}
+ 		*dst++ = byte;
+ 	}
+ }
+ 
+ /**
+- * drm_fb_xrgb8888_to_mono_reversed - Convert XRGB8888 to reversed monochrome
+- * @dst: reversed monochrome destination buffer
++ * drm_fb_xrgb8888_to_mono - Convert XRGB8888 to monochrome
++ * @dst: monochrome destination buffer (0=black, 1=white)
+  * @dst_pitch: Number of bytes between two consecutive scanlines within dst
+  * @src: XRGB8888 source buffer
+  * @fb: DRM framebuffer
+@@ -633,17 +622,23 @@ static void drm_fb_gray8_to_mono_reversed_line(u8 *dst, const u8 *src, unsigned
+  * and use this function to convert to the native format.
+  *
+  * This function uses drm_fb_xrgb8888_to_gray8() to convert to grayscale and
+- * then the result is converted from grayscale to reversed monohrome.
++ * then the result is converted from grayscale to monochrome.
++ *
++ * The first pixel (upper left corner of the clip rectangle) will be converted
++ * and copied to the first bit (LSB) in the first byte of the monochrome
++ * destination buffer.
++ * If the caller requires that the first pixel in a byte must be located at an
++ * x-coordinate that is a multiple of 8, then the caller must take care itself
++ * of supplying a suitable clip rectangle.
+  */
+-void drm_fb_xrgb8888_to_mono_reversed(void *dst, unsigned int dst_pitch, const void *vaddr,
+-				      const struct drm_framebuffer *fb, const struct drm_rect *clip)
++void drm_fb_xrgb8888_to_mono(void *dst, unsigned int dst_pitch, const void *vaddr,
++			     const struct drm_framebuffer *fb, const struct drm_rect *clip)
+ {
+ 	unsigned int linepixels = drm_rect_width(clip);
+-	unsigned int lines = clip->y2 - clip->y1;
++	unsigned int lines = drm_rect_height(clip);
+ 	unsigned int cpp = fb->format->cpp[0];
+ 	unsigned int len_src32 = linepixels * cpp;
+ 	struct drm_device *dev = fb->dev;
+-	unsigned int start_offset, end_len;
+ 	unsigned int y;
+ 	u8 *mono = dst, *gray8;
+ 	u32 *src32;
+@@ -652,21 +647,18 @@ void drm_fb_xrgb8888_to_mono_reversed(void *dst, unsigned int dst_pitch, const v
+ 		return;
+ 
+ 	/*
+-	 * The reversed mono destination buffer contains 1 bit per pixel
+-	 * and destination scanlines have to be in multiple of 8 pixels.
++	 * The mono destination buffer contains 1 bit per pixel
+ 	 */
+ 	if (!dst_pitch)
+ 		dst_pitch = DIV_ROUND_UP(linepixels, 8);
+ 
+-	drm_WARN_ONCE(dev, dst_pitch % 8 != 0, "dst_pitch is not a multiple of 8\n");
+-
+ 	/*
+ 	 * The cma memory is write-combined so reads are uncached.
+ 	 * Speed up by fetching one line at a time.
+ 	 *
+-	 * Also, format conversion from XR24 to reversed monochrome
+-	 * are done line-by-line but are converted to 8-bit grayscale
+-	 * as an intermediate step.
++	 * Also, format conversion from XR24 to monochrome are done
++	 * line-by-line but are converted to 8-bit grayscale as an
++	 * intermediate step.
+ 	 *
+ 	 * Allocate a buffer to be used for both copying from the cma
+ 	 * memory and to store the intermediate grayscale line pixels.
+@@ -677,27 +669,15 @@ void drm_fb_xrgb8888_to_mono_reversed(void *dst, unsigned int dst_pitch, const v
+ 
+ 	gray8 = (u8 *)src32 + len_src32;
+ 
+-	/*
+-	 * For damage handling, it is possible that only parts of the source
+-	 * buffer is copied and this could lead to start and end pixels that
+-	 * are not aligned to multiple of 8.
+-	 *
+-	 * Calculate if the start and end pixels are not aligned and set the
+-	 * offsets for the reversed mono line conversion function to adjust.
+-	 */
+-	start_offset = clip->x1 % 8;
+-	end_len = clip->x2 % 8;
+-
+ 	vaddr += clip_offset(clip, fb->pitches[0], cpp);
+ 	for (y = 0; y < lines; y++) {
+ 		src32 = memcpy(src32, vaddr, len_src32);
+ 		drm_fb_xrgb8888_to_gray8_line(gray8, src32, linepixels);
+-		drm_fb_gray8_to_mono_reversed_line(mono, gray8, dst_pitch,
+-						   start_offset, end_len);
++		drm_fb_gray8_to_mono_line(mono, gray8, linepixels);
+ 		vaddr += fb->pitches[0];
+ 		mono += dst_pitch;
+ 	}
+ 
+ 	kfree(src32);
+ }
+-EXPORT_SYMBOL(drm_fb_xrgb8888_to_mono_reversed);
++EXPORT_SYMBOL(drm_fb_xrgb8888_to_mono);
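The rewritten line converter packs eight grayscale pixels per output
byte, least significant bit first, treating values of 128 and above as
white; unused high bits of a trailing partial byte stay zero. A small
standalone sketch of the same packing (illustrative only):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* pixels 0, 3, 7, 8 and 9 are >= 128, i.e. white */
		uint8_t gray[10] = { 255, 0, 0, 200, 0, 0, 0, 130, 255, 255 };
		uint8_t out[2] = { 0, 0 };
		unsigned int i;

		for (i = 0; i < 10; i++)
			if (gray[i] >= 128)
				out[i / 8] |= 1u << (i % 8);	/* LSB first */

		printf("0x%02x 0x%02x\n", out[0], out[1]);	/* 0x89 0x03 */
		return 0;
	}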
+diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
+index bf0daa8d9bbd9..726f2f163c269 100644
+--- a/drivers/gpu/drm/drm_plane.c
++++ b/drivers/gpu/drm/drm_plane.c
+@@ -247,6 +247,13 @@ static int __drm_universal_plane_init(struct drm_device *dev,
+ 	if (WARN_ON(config->num_total_plane >= 32))
+ 		return -EINVAL;
+ 
++	/*
++	 * First driver to need more than 64 formats needs to fix this. Each
++	 * format is encoded as a bit and the current code only supports a u64.
++	 */
++	if (WARN_ON(format_count > 64))
++		return -EINVAL;
++
+ 	WARN_ON(drm_drv_uses_atomic_modeset(dev) &&
+ 		(!funcs->atomic_destroy_state ||
+ 		 !funcs->atomic_duplicate_state));
+@@ -268,13 +275,6 @@ static int __drm_universal_plane_init(struct drm_device *dev,
+ 		return -ENOMEM;
+ 	}
+ 
+-	/*
+-	 * First driver to need more than 64 formats needs to fix this. Each
+-	 * format is encoded as a bit and the current code only supports a u64.
+-	 */
+-	if (WARN_ON(format_count > 64))
+-		return -EINVAL;
+-
+ 	if (format_modifiers) {
+ 		const uint64_t *temp_modifiers = format_modifiers;
+ 
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+index 9fb1a2aadbcb0..aabb997a74eb4 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+@@ -286,6 +286,12 @@ void etnaviv_iommu_unmap_gem(struct etnaviv_iommu_context *context,
+ 
+ 	mutex_lock(&context->lock);
+ 
++	/* Bail if the mapping has been reaped by another thread */
++	if (!mapping->context) {
++		mutex_unlock(&context->lock);
++		return;
++	}
++
+ 	/* If the vram node is on the mm, unmap and remove the node */
+ 	if (mapping->vram_node.mm == &context->mm)
+ 		etnaviv_iommu_remove_mapping(context, mapping);
+diff --git a/drivers/gpu/drm/gma500/psb_intel_display.c b/drivers/gpu/drm/gma500/psb_intel_display.c
+index d5f95212934e2..42d1a733e1249 100644
+--- a/drivers/gpu/drm/gma500/psb_intel_display.c
++++ b/drivers/gpu/drm/gma500/psb_intel_display.c
+@@ -535,14 +535,15 @@ void psb_intel_crtc_init(struct drm_device *dev, int pipe,
+ 
+ struct drm_crtc *psb_intel_get_crtc_from_pipe(struct drm_device *dev, int pipe)
+ {
+-	struct drm_crtc *crtc = NULL;
++	struct drm_crtc *crtc;
+ 
+ 	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
+ 		struct gma_crtc *gma_crtc = to_gma_crtc(crtc);
++
+ 		if (gma_crtc->pipe == pipe)
+-			break;
++			return crtc;
+ 	}
+-	return crtc;
++	return NULL;
+ }
+ 
+ int gma_connector_clones(struct drm_device *dev, int type_mask)
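Background for the gma500 fix above: when list_for_each_entry() runs to
completion, the cursor does not end up NULL; it points at container_of()
of the list head, a bogus struct address. Falling off the loop and
returning the cursor therefore returned garbage whenever no pipe matched.
The safe shape, sketched with hypothetical names:

	struct foo *pos;

	list_for_each_entry(pos, &head, node)
		if (match(pos))
			return pos;	/* found: return from inside the loop */

	return NULL;			/* loop completed: no match */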
+diff --git a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
+index 6b4a27372c825..9b631dd651d90 100644
+--- a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
++++ b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c
+@@ -124,9 +124,25 @@ struct i2c_adapter_lookup {
+ #define  ICL_GPIO_DDPA_CTRLCLK_2	8
+ #define  ICL_GPIO_DDPA_CTRLDATA_2	9
+ 
+-static enum port intel_dsi_seq_port_to_port(u8 port)
++static enum port intel_dsi_seq_port_to_port(struct intel_dsi *intel_dsi,
++					    u8 seq_port)
+ {
+-	return port ? PORT_C : PORT_A;
++	/*
++	 * If single link DSI is being used on any port, the VBT sequence block
++	 * send packet apparently always has 0 for the port. Just use the port
++	 * we have configured, and ignore the sequence block port.
++	 */
++	if (hweight8(intel_dsi->ports) == 1)
++		return ffs(intel_dsi->ports) - 1;
++
++	if (seq_port) {
++		if (intel_dsi->ports & PORT_B)
++			return PORT_B;
++		else if (intel_dsi->ports & PORT_C)
++			return PORT_C;
++	}
++
++	return PORT_A;
+ }
+ 
+ static const u8 *mipi_exec_send_packet(struct intel_dsi *intel_dsi,
+@@ -148,15 +164,10 @@ static const u8 *mipi_exec_send_packet(struct intel_dsi *intel_dsi,
+ 
+ 	seq_port = (flags >> MIPI_PORT_SHIFT) & 3;
+ 
+-	/* For DSI single link on Port A & C, the seq_port value which is
+-	 * parsed from Sequence Block#53 of VBT has been set to 0
+-	 * Now, read/write of packets for the DSI single link on Port A and
+-	 * Port C will based on the DVO port from VBT block 2.
+-	 */
+-	if (intel_dsi->ports == (1 << PORT_C))
+-		port = PORT_C;
+-	else
+-		port = intel_dsi_seq_port_to_port(seq_port);
++	port = intel_dsi_seq_port_to_port(intel_dsi, seq_port);
++
++	if (drm_WARN_ON(&dev_priv->drm, !intel_dsi->dsi_hosts[port]))
++		goto out;
+ 
+ 	dsi_device = intel_dsi->dsi_hosts[port]->device;
+ 	if (!dsi_device) {
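Worked case for the new port lookup above, assuming the usual i915
enumeration (PORT_A == 0, PORT_B == 1, PORT_C == 2): with single link DSI
on port C only, the VBT sequence block reports port 0, but the driver's
own port mask recovers the right answer:

	u8 ports = BIT(PORT_C);		/* 0b100: only port C configured */

	if (hweight8(ports) == 1)	/* exactly one port in the mask */
		port = ffs(ports) - 1;	/* ffs(0b100) == 3, so port == PORT_C */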
+diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
+index 0a9c3fcc09b1e..1577ab6754db1 100644
+--- a/drivers/gpu/drm/i915/i915_perf.c
++++ b/drivers/gpu/drm/i915/i915_perf.c
+@@ -4050,8 +4050,8 @@ addr_err:
+ 	return ERR_PTR(err);
+ }
+ 
+-static ssize_t show_dynamic_id(struct device *dev,
+-			       struct device_attribute *attr,
++static ssize_t show_dynamic_id(struct kobject *kobj,
++			       struct kobj_attribute *attr,
+ 			       char *buf)
+ {
+ 	struct i915_oa_config *oa_config =
+diff --git a/drivers/gpu/drm/i915/i915_perf_types.h b/drivers/gpu/drm/i915/i915_perf_types.h
+index 473a3c0544bb8..05cb9a335a971 100644
+--- a/drivers/gpu/drm/i915/i915_perf_types.h
++++ b/drivers/gpu/drm/i915/i915_perf_types.h
+@@ -55,7 +55,7 @@ struct i915_oa_config {
+ 
+ 	struct attribute_group sysfs_metric;
+ 	struct attribute *attrs[2];
+-	struct device_attribute sysfs_metric_id;
++	struct kobj_attribute sysfs_metric_id;
+ 
+ 	struct kref ref;
+ 	struct rcu_head rcu;
+diff --git a/drivers/gpu/drm/mediatek/mtk_cec.c b/drivers/gpu/drm/mediatek/mtk_cec.c
+index e9cef5c0c8f7e..cdfa648910b23 100644
+--- a/drivers/gpu/drm/mediatek/mtk_cec.c
++++ b/drivers/gpu/drm/mediatek/mtk_cec.c
+@@ -85,7 +85,7 @@ static void mtk_cec_mask(struct mtk_cec *cec, unsigned int offset,
+ 	u32 tmp = readl(cec->regs + offset) & ~mask;
+ 
+ 	tmp |= val & mask;
+-	writel(val, cec->regs + offset);
++	writel(tmp, cec->regs + offset);
+ }
+ 
+ void mtk_cec_set_hpd_event(struct device *dev,
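The one-character mtk_cec fix above repairs a classic read-modify-write
slip: the helper computed the merged value in tmp but then wrote the raw
val, clobbering every bit outside the mask. Worked example, with the
register reading back 0xf0 and an update of bits 3:0 to 0x5:

	u32 tmp = 0xf0 & ~0x0f;		/* 0xf0: bits to preserve */
	tmp |= 0x05 & 0x0f;		/* 0xf5: intended result  */

	writel(0x05, regs + offset);	/* old code: wipes bits 7:4 */
	writel(tmp, regs + offset);	/* fixed: writes 0xf5       */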
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_drv.h b/drivers/gpu/drm/mediatek/mtk_disp_drv.h
+index 86c3068894b11..974462831133b 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_drv.h
++++ b/drivers/gpu/drm/mediatek/mtk_disp_drv.h
+@@ -76,9 +76,11 @@ void mtk_ovl_layer_off(struct device *dev, unsigned int idx,
+ void mtk_ovl_start(struct device *dev);
+ void mtk_ovl_stop(struct device *dev);
+ unsigned int mtk_ovl_supported_rotations(struct device *dev);
+-void mtk_ovl_enable_vblank(struct device *dev,
+-			   void (*vblank_cb)(void *),
+-			   void *vblank_cb_data);
++void mtk_ovl_register_vblank_cb(struct device *dev,
++				void (*vblank_cb)(void *),
++				void *vblank_cb_data);
++void mtk_ovl_unregister_vblank_cb(struct device *dev);
++void mtk_ovl_enable_vblank(struct device *dev);
+ void mtk_ovl_disable_vblank(struct device *dev);
+ 
+ void mtk_rdma_bypass_shadow(struct device *dev);
+@@ -93,9 +95,11 @@ void mtk_rdma_layer_config(struct device *dev, unsigned int idx,
+ 			   struct cmdq_pkt *cmdq_pkt);
+ void mtk_rdma_start(struct device *dev);
+ void mtk_rdma_stop(struct device *dev);
+-void mtk_rdma_enable_vblank(struct device *dev,
+-			    void (*vblank_cb)(void *),
+-			    void *vblank_cb_data);
++void mtk_rdma_register_vblank_cb(struct device *dev,
++				 void (*vblank_cb)(void *),
++				 void *vblank_cb_data);
++void mtk_rdma_unregister_vblank_cb(struct device *dev);
++void mtk_rdma_enable_vblank(struct device *dev);
+ void mtk_rdma_disable_vblank(struct device *dev);
+ 
+ #endif
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+index 17cd9b9322988..70ab22964f3b5 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+@@ -97,14 +97,28 @@ static irqreturn_t mtk_disp_ovl_irq_handler(int irq, void *dev_id)
+ 	return IRQ_HANDLED;
+ }
+ 
+-void mtk_ovl_enable_vblank(struct device *dev,
+-			   void (*vblank_cb)(void *),
+-			   void *vblank_cb_data)
++void mtk_ovl_register_vblank_cb(struct device *dev,
++				void (*vblank_cb)(void *),
++				void *vblank_cb_data)
+ {
+ 	struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
+ 
+ 	ovl->vblank_cb = vblank_cb;
+ 	ovl->vblank_cb_data = vblank_cb_data;
++}
++
++void mtk_ovl_unregister_vblank_cb(struct device *dev)
++{
++	struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
++
++	ovl->vblank_cb = NULL;
++	ovl->vblank_cb_data = NULL;
++}
++
++void mtk_ovl_enable_vblank(struct device *dev)
++{
++	struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
++
+ 	writel(0x0, ovl->regs + DISP_REG_OVL_INTSTA);
+ 	writel_relaxed(OVL_FME_CPL_INT, ovl->regs + DISP_REG_OVL_INTEN);
+ }
+@@ -113,8 +127,6 @@ void mtk_ovl_disable_vblank(struct device *dev)
+ {
+ 	struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
+ 
+-	ovl->vblank_cb = NULL;
+-	ovl->vblank_cb_data = NULL;
+ 	writel_relaxed(0x0, ovl->regs + DISP_REG_OVL_INTEN);
+ }
+ 
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_rdma.c b/drivers/gpu/drm/mediatek/mtk_disp_rdma.c
+index 662e91d9d45f6..1be4caf9ff963 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_rdma.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_rdma.c
+@@ -95,24 +95,32 @@ static void rdma_update_bits(struct device *dev, unsigned int reg,
+ 	writel(tmp, rdma->regs + reg);
+ }
+ 
+-void mtk_rdma_enable_vblank(struct device *dev,
+-			    void (*vblank_cb)(void *),
+-			    void *vblank_cb_data)
++void mtk_rdma_register_vblank_cb(struct device *dev,
++				 void (*vblank_cb)(void *),
++				 void *vblank_cb_data)
+ {
+ 	struct mtk_disp_rdma *rdma = dev_get_drvdata(dev);
+ 
+ 	rdma->vblank_cb = vblank_cb;
+ 	rdma->vblank_cb_data = vblank_cb_data;
+-	rdma_update_bits(dev, DISP_REG_RDMA_INT_ENABLE, RDMA_FRAME_END_INT,
+-			 RDMA_FRAME_END_INT);
+ }
+ 
+-void mtk_rdma_disable_vblank(struct device *dev)
++void mtk_rdma_unregister_vblank_cb(struct device *dev)
+ {
+ 	struct mtk_disp_rdma *rdma = dev_get_drvdata(dev);
+ 
+ 	rdma->vblank_cb = NULL;
+ 	rdma->vblank_cb_data = NULL;
++}
++
++void mtk_rdma_enable_vblank(struct device *dev)
++{
++	rdma_update_bits(dev, DISP_REG_RDMA_INT_ENABLE, RDMA_FRAME_END_INT,
++			 RDMA_FRAME_END_INT);
++}
++
++void mtk_rdma_disable_vblank(struct device *dev)
++{
+ 	rdma_update_bits(dev, DISP_REG_RDMA_INT_ENABLE, RDMA_FRAME_END_INT, 0);
+ }
+ 
+diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c
+index 4554e2de14309..e61cd67b978ff 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dpi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dpi.c
+@@ -819,8 +819,8 @@ static const struct mtk_dpi_conf mt8192_conf = {
+ 	.cal_factor = mt8183_calculate_factor,
+ 	.reg_h_fre_con = 0xe0,
+ 	.max_clock_khz = 150000,
+-	.output_fmts = mt8173_output_fmts,
+-	.num_output_fmts = ARRAY_SIZE(mt8173_output_fmts),
++	.output_fmts = mt8183_output_fmts,
++	.num_output_fmts = ARRAY_SIZE(mt8183_output_fmts),
+ };
+ 
+ static int mtk_dpi_probe(struct platform_device *pdev)
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+index ede435d2c1efa..f24b21eb03cdf 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+@@ -152,6 +152,7 @@ static void mtk_drm_cmdq_pkt_destroy(struct cmdq_pkt *pkt)
+ static void mtk_drm_crtc_destroy(struct drm_crtc *crtc)
+ {
+ 	struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
++	int i;
+ 
+ 	mtk_mutex_put(mtk_crtc->mutex);
+ #if IS_REACHABLE(CONFIG_MTK_CMDQ)
+@@ -162,6 +163,14 @@ static void mtk_drm_crtc_destroy(struct drm_crtc *crtc)
+ 		mtk_crtc->cmdq_client.chan = NULL;
+ 	}
+ #endif
++
++	for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) {
++		struct mtk_ddp_comp *comp;
++
++		comp = mtk_crtc->ddp_comp[i];
++		mtk_ddp_comp_unregister_vblank_cb(comp);
++	}
++
+ 	drm_crtc_cleanup(crtc);
+ }
+ 
+@@ -617,7 +626,7 @@ static int mtk_drm_crtc_enable_vblank(struct drm_crtc *crtc)
+ 	struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc);
+ 	struct mtk_ddp_comp *comp = mtk_crtc->ddp_comp[0];
+ 
+-	mtk_ddp_comp_enable_vblank(comp, mtk_crtc_ddp_irq, &mtk_crtc->base);
++	mtk_ddp_comp_enable_vblank(comp);
+ 
+ 	return 0;
+ }
+@@ -926,6 +935,9 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
+ 			if (comp->funcs->ctm_set)
+ 				has_ctm = true;
+ 		}
++
++		mtk_ddp_comp_register_vblank_cb(comp, mtk_crtc_ddp_irq,
++						&mtk_crtc->base);
+ 	}
+ 
+ 	for (i = 0; i < mtk_crtc->ddp_comp_nr; i++)
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+index 2e99aee13dfe4..5d7504a72b11c 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
+@@ -297,6 +297,8 @@ static const struct mtk_ddp_comp_funcs ddp_ovl = {
+ 	.config = mtk_ovl_config,
+ 	.start = mtk_ovl_start,
+ 	.stop = mtk_ovl_stop,
++	.register_vblank_cb = mtk_ovl_register_vblank_cb,
++	.unregister_vblank_cb = mtk_ovl_unregister_vblank_cb,
+ 	.enable_vblank = mtk_ovl_enable_vblank,
+ 	.disable_vblank = mtk_ovl_disable_vblank,
+ 	.supported_rotations = mtk_ovl_supported_rotations,
+@@ -321,6 +323,8 @@ static const struct mtk_ddp_comp_funcs ddp_rdma = {
+ 	.config = mtk_rdma_config,
+ 	.start = mtk_rdma_start,
+ 	.stop = mtk_rdma_stop,
++	.register_vblank_cb = mtk_rdma_register_vblank_cb,
++	.unregister_vblank_cb = mtk_rdma_unregister_vblank_cb,
+ 	.enable_vblank = mtk_rdma_enable_vblank,
+ 	.disable_vblank = mtk_rdma_disable_vblank,
+ 	.layer_nr = mtk_rdma_layer_nr,
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
+index ad267bb8fc9b5..1cbc6332282dc 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
++++ b/drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h
+@@ -48,9 +48,11 @@ struct mtk_ddp_comp_funcs {
+ 		       unsigned int bpc, struct cmdq_pkt *cmdq_pkt);
+ 	void (*start)(struct device *dev);
+ 	void (*stop)(struct device *dev);
+-	void (*enable_vblank)(struct device *dev,
+-			      void (*vblank_cb)(void *),
+-			      void *vblank_cb_data);
++	void (*register_vblank_cb)(struct device *dev,
++				   void (*vblank_cb)(void *),
++				   void *vblank_cb_data);
++	void (*unregister_vblank_cb)(struct device *dev);
++	void (*enable_vblank)(struct device *dev);
+ 	void (*disable_vblank)(struct device *dev);
+ 	unsigned int (*supported_rotations)(struct device *dev);
+ 	unsigned int (*layer_nr)(struct device *dev);
+@@ -110,12 +112,25 @@ static inline void mtk_ddp_comp_stop(struct mtk_ddp_comp *comp)
+ 		comp->funcs->stop(comp->dev);
+ }
+ 
+-static inline void mtk_ddp_comp_enable_vblank(struct mtk_ddp_comp *comp,
+-					      void (*vblank_cb)(void *),
+-					      void *vblank_cb_data)
++static inline void mtk_ddp_comp_register_vblank_cb(struct mtk_ddp_comp *comp,
++						   void (*vblank_cb)(void *),
++						   void *vblank_cb_data)
++{
++	if (comp->funcs && comp->funcs->register_vblank_cb)
++		comp->funcs->register_vblank_cb(comp->dev, vblank_cb,
++						vblank_cb_data);
++}
++
++static inline void mtk_ddp_comp_unregister_vblank_cb(struct mtk_ddp_comp *comp)
++{
++	if (comp->funcs && comp->funcs->unregister_vblank_cb)
++		comp->funcs->unregister_vblank_cb(comp->dev);
++}
++
++static inline void mtk_ddp_comp_enable_vblank(struct mtk_ddp_comp *comp)
+ {
+ 	if (comp->funcs && comp->funcs->enable_vblank)
+-		comp->funcs->enable_vblank(comp->dev, vblank_cb, vblank_cb_data);
++		comp->funcs->enable_vblank(comp->dev);
+ }
+ 
+ static inline void mtk_ddp_comp_disable_vblank(struct mtk_ddp_comp *comp)
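The vblank rework above separates callback lifetime from interrupt
masking: the function pointers now live for as long as the CRTC, while
enable/disable only touch the hardware mask, so an interrupt firing
around a disable no longer races with the callback being cleared. The
resulting lifecycle, sketched with the names from this patch:

	/* once, at CRTC create and destroy: */
	mtk_ddp_comp_register_vblank_cb(comp, mtk_crtc_ddp_irq, &mtk_crtc->base);
	mtk_ddp_comp_unregister_vblank_cb(comp);

	/* per vblank on/off, now purely an interrupt (un)mask: */
	mtk_ddp_comp_enable_vblank(comp);
	mtk_ddp_comp_disable_vblank(comp);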
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 247c6ff277efd..b0e4e5d689272 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -509,6 +509,8 @@ static const struct of_device_id mtk_ddp_comp_dt_ids[] = {
+ 	  .data = (void *)MTK_DPI },
+ 	{ .compatible = "mediatek,mt8183-dpi",
+ 	  .data = (void *)MTK_DPI },
++	{ .compatible = "mediatek,mt8192-dpi",
++	  .data = (void *)MTK_DPI },
+ 	{ .compatible = "mediatek,mt2701-dsi",
+ 	  .data = (void *)MTK_DSI },
+ 	{ .compatible = "mediatek,mt8173-dsi",
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index 407f50a15faa4..217615e0e8507 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -1662,28 +1662,23 @@ static struct msm_ringbuffer *a5xx_active_ring(struct msm_gpu *gpu)
+ 	return a5xx_gpu->cur_ring;
+ }
+ 
+-static unsigned long a5xx_gpu_busy(struct msm_gpu *gpu)
++static u64 a5xx_gpu_busy(struct msm_gpu *gpu, unsigned long *out_sample_rate)
+ {
+-	u64 busy_cycles, busy_time;
++	u64 busy_cycles;
+ 
+ 	/* Only read the gpu busy if the hardware is already active */
+-	if (pm_runtime_get_if_in_use(&gpu->pdev->dev) == 0)
++	if (pm_runtime_get_if_in_use(&gpu->pdev->dev) == 0) {
++		*out_sample_rate = 1;
+ 		return 0;
++	}
+ 
+ 	busy_cycles = gpu_read64(gpu, REG_A5XX_RBBM_PERFCTR_RBBM_0_LO,
+ 			REG_A5XX_RBBM_PERFCTR_RBBM_0_HI);
+-
+-	busy_time = busy_cycles - gpu->devfreq.busy_cycles;
+-	do_div(busy_time, clk_get_rate(gpu->core_clk) / 1000000);
+-
+-	gpu->devfreq.busy_cycles = busy_cycles;
++	*out_sample_rate = clk_get_rate(gpu->core_clk);
+ 
+ 	pm_runtime_put(&gpu->pdev->dev);
+ 
+-	if (WARN_ON(busy_time > ~0LU))
+-		return ~0LU;
+-
+-	return (unsigned long)busy_time;
++	return busy_cycles;
+ }
+ 
+ static uint32_t a5xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index ccc4fcf7a630f..40fb92becc78a 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -1649,12 +1649,14 @@ static void a6xx_destroy(struct msm_gpu *gpu)
+ 	kfree(a6xx_gpu);
+ }
+ 
+-static unsigned long a6xx_gpu_busy(struct msm_gpu *gpu)
++static u64 a6xx_gpu_busy(struct msm_gpu *gpu, unsigned long *out_sample_rate)
+ {
+ 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+-	u64 busy_cycles, busy_time;
++	u64 busy_cycles;
+ 
++	/* 19.2MHz */
++	*out_sample_rate = 19200000;
+ 
+ 	/* Only read the gpu busy if the hardware is already active */
+ 	if (pm_runtime_get_if_in_use(a6xx_gpu->gmu.dev) == 0)
+@@ -1664,17 +1666,10 @@ static unsigned long a6xx_gpu_busy(struct msm_gpu *gpu)
+ 			REG_A6XX_GMU_CX_GMU_POWER_COUNTER_XOCLK_0_L,
+ 			REG_A6XX_GMU_CX_GMU_POWER_COUNTER_XOCLK_0_H);
+ 
+-	busy_time = (busy_cycles - gpu->devfreq.busy_cycles) * 10;
+-	do_div(busy_time, 192);
+-
+-	gpu->devfreq.busy_cycles = busy_cycles;
+ 
+ 	pm_runtime_put(a6xx_gpu->gmu.dev);
+ 
+-	if (WARN_ON(busy_time > ~0LU))
+-		return ~0LU;
+-
+-	return (unsigned long)busy_time;
++	return busy_cycles;
+ }
+ 
+ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
+@@ -1919,6 +1914,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
+ 	BUG_ON(!node);
+ 
+ 	ret = a6xx_gmu_init(a6xx_gpu, node);
++	of_node_put(node);
+ 	if (ret) {
+ 		a6xx_destroy(&(a6xx_gpu->base.base));
+ 		return ERR_PTR(ret);
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+index 9efc84929be0b..1219f71629a52 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+@@ -272,7 +272,10 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+ 		*value = 0;
+ 		return 0;
+ 	case MSM_PARAM_FAULTS:
+-		*value = gpu->global_faults + ctx->aspace->faults;
++		if (ctx->aspace)
++			*value = gpu->global_faults + ctx->aspace->faults;
++		else
++			*value = gpu->global_faults;
+ 		return 0;
+ 	case MSM_PARAM_SUSPENDS:
+ 		*value = gpu->suspend_count;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index 7763558ef566b..16ba9f9b9a787 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -204,7 +204,8 @@ static int dpu_crtc_get_crc(struct drm_crtc *crtc)
+ 		rc = m->hw_lm->ops.collect_misr(m->hw_lm, &crcs[i]);
+ 
+ 		if (rc) {
+-			DRM_DEBUG_DRIVER("MISR read failed\n");
++			if (rc != -ENODATA)
++				DRM_DEBUG_DRIVER("MISR read failed\n");
+ 			return rc;
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c
+index c61b5b283f08d..cf9aa06ab8bdf 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c
+@@ -599,6 +599,9 @@ void dpu_core_irq_uninstall(struct dpu_kms *dpu_kms)
+ {
+ 	int i;
+ 
++	if (!dpu_kms->hw_intr)
++		return;
++
+ 	pm_runtime_get_sync(&dpu_kms->pdev->dev);
+ 	for (i = 0; i < dpu_kms->hw_intr->total_irqs; i++)
+ 		if (!list_empty(&dpu_kms->hw_intr->irq_cb_tbl[i]))
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+index 116e2b5b1a90f..284f5610dc35b 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+@@ -148,6 +148,7 @@ static void dpu_hw_intf_setup_timing_engine(struct dpu_hw_intf *ctx,
+ 		active_v_end = active_v_start + (p->yres * hsync_period) - 1;
+ 
+ 		display_v_start += p->hsync_pulse_width + p->h_back_porch;
++		display_v_end   -= p->h_front_porch; 
+ 
+ 		active_hctl = (active_h_end << 16) | active_h_start;
+ 		display_hctl = active_hctl;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
+index 86363c0ec8341..462f5082099e6 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
+@@ -138,7 +138,7 @@ static int dpu_hw_lm_collect_misr(struct dpu_hw_mixer *ctx, u32 *misr_value)
+ 	ctrl = DPU_REG_READ(c, LM_MISR_CTRL);
+ 
+ 	if (!(ctrl & LM_MISR_CTRL_ENABLE))
+-		return -EINVAL;
++		return -ENODATA;
+ 
+ 	if (!(ctrl & LM_MISR_CTRL_STATUS))
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+index e29796c4f27be..c95bacd4f458e 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+@@ -793,8 +793,10 @@ static void _dpu_kms_hw_destroy(struct dpu_kms *dpu_kms)
+ 		for (i = 0; i < dpu_kms->catalog->vbif_count; i++) {
+ 			u32 vbif_idx = dpu_kms->catalog->vbif[i].id;
+ 
+-			if ((vbif_idx < VBIF_MAX) && dpu_kms->hw_vbif[vbif_idx])
++			if ((vbif_idx < VBIF_MAX) && dpu_kms->hw_vbif[vbif_idx]) {
+ 				dpu_hw_vbif_destroy(dpu_kms->hw_vbif[vbif_idx]);
++				dpu_kms->hw_vbif[vbif_idx] = NULL;
++			}
+ 		}
+ 	}
+ 
+@@ -1056,7 +1058,9 @@ static int dpu_kms_hw_init(struct msm_kms *kms)
+ 
+ 	dpu_kms_parse_data_bus_icc_path(dpu_kms);
+ 
+-	pm_runtime_get_sync(&dpu_kms->pdev->dev);
++	rc = pm_runtime_resume_and_get(&dpu_kms->pdev->dev);
++	if (rc < 0)
++		goto error;
+ 
+ 	dpu_kms->core_rev = readl_relaxed(dpu_kms->mmio + 0x0);
+ 
+@@ -1240,7 +1244,7 @@ static int dpu_bind(struct device *dev, struct device *master, void *data)
+ 
+ 	priv->kms = &dpu_kms->base;
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static void dpu_unbind(struct device *dev, struct device *master, void *data)
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+index b966cd69f99dd..31447da0af25c 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+@@ -612,9 +612,15 @@ static int mdp5_crtc_setup_pipeline(struct drm_crtc *crtc,
+ 		if (ret)
+ 			return ret;
+ 
+-		mdp5_mixer_release(new_crtc_state->state, old_mixer);
++		ret = mdp5_mixer_release(new_crtc_state->state, old_mixer);
++		if (ret)
++			return ret;
++
+ 		if (old_r_mixer) {
+-			mdp5_mixer_release(new_crtc_state->state, old_r_mixer);
++			ret = mdp5_mixer_release(new_crtc_state->state, old_r_mixer);
++			if (ret)
++				return ret;
++
+ 			if (!need_right_mixer)
+ 				pipeline->r_mixer = NULL;
+ 		}
+@@ -991,8 +997,10 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc,
+ 
+ 	ret = msm_gem_get_and_pin_iova(cursor_bo, kms->aspace,
+ 			&mdp5_crtc->cursor.iova);
+-	if (ret)
++	if (ret) {
++		drm_gem_object_put(cursor_bo);
+ 		return -EINVAL;
++	}
+ 
+ 	pm_runtime_get_sync(&pdev->dev);
+ 
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+index 3b92372e7bdf1..1d4bbde293208 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+@@ -570,9 +570,9 @@ struct msm_kms *mdp5_kms_init(struct drm_device *dev)
+ 	}
+ 
+ 	irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+-	if (irq < 0) {
+-		ret = irq;
+-		DRM_DEV_ERROR(&pdev->dev, "failed to get irq: %d\n", ret);
++	if (!irq) {
++		ret = -EINVAL;
++		DRM_DEV_ERROR(&pdev->dev, "failed to get irq\n");
+ 		goto fail;
+ 	}
+ 
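This is one of three spots in this patch (dp_display.c and hdmi.c below
get the same treatment) fixing a dead error check: irq_of_parse_and_map()
returns an unsigned int and signals failure by returning 0, so a negative
comparison can never fire. The correct shape:

	unsigned int irq = irq_of_parse_and_map(node, 0);

	if (!irq)		/* 0 means "no mapping" */
		return -EINVAL;	/* "irq < 0" here was dead code */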
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c
+index 954db683ae444..2536def2a0005 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c
+@@ -116,21 +116,28 @@ int mdp5_mixer_assign(struct drm_atomic_state *s, struct drm_crtc *crtc,
+ 	return 0;
+ }
+ 
+-void mdp5_mixer_release(struct drm_atomic_state *s, struct mdp5_hw_mixer *mixer)
++int mdp5_mixer_release(struct drm_atomic_state *s, struct mdp5_hw_mixer *mixer)
+ {
+ 	struct mdp5_global_state *global_state = mdp5_get_global_state(s);
+-	struct mdp5_hw_mixer_state *new_state = &global_state->hwmixer;
++	struct mdp5_hw_mixer_state *new_state;
+ 
+ 	if (!mixer)
+-		return;
++		return 0;
++
++	if (IS_ERR(global_state))
++		return PTR_ERR(global_state);
++
++	new_state = &global_state->hwmixer;
+ 
+ 	if (WARN_ON(!new_state->hwmixer_to_crtc[mixer->idx]))
+-		return;
++		return -EINVAL;
+ 
+ 	DBG("%s: release from crtc %s", mixer->name,
+ 	    new_state->hwmixer_to_crtc[mixer->idx]->name);
+ 
+ 	new_state->hwmixer_to_crtc[mixer->idx] = NULL;
++
++	return 0;
+ }
+ 
+ void mdp5_mixer_destroy(struct mdp5_hw_mixer *mixer)
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h
+index 43c9ba43ce185..545ee223b9d74 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h
+@@ -30,7 +30,7 @@ void mdp5_mixer_destroy(struct mdp5_hw_mixer *lm);
+ int mdp5_mixer_assign(struct drm_atomic_state *s, struct drm_crtc *crtc,
+ 		      uint32_t caps, struct mdp5_hw_mixer **mixer,
+ 		      struct mdp5_hw_mixer **r_mixer);
+-void mdp5_mixer_release(struct drm_atomic_state *s,
+-			struct mdp5_hw_mixer *mixer);
++int mdp5_mixer_release(struct drm_atomic_state *s,
++		       struct mdp5_hw_mixer *mixer);
+ 
+ #endif /* __MDP5_LM_H__ */
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c
+index ba6695963aa66..a4f5cb90f3e80 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c
+@@ -119,18 +119,23 @@ int mdp5_pipe_assign(struct drm_atomic_state *s, struct drm_plane *plane,
+ 	return 0;
+ }
+ 
+-void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe)
++int mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe)
+ {
+ 	struct msm_drm_private *priv = s->dev->dev_private;
+ 	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(priv->kms));
+ 	struct mdp5_global_state *state = mdp5_get_global_state(s);
+-	struct mdp5_hw_pipe_state *new_state = &state->hwpipe;
++	struct mdp5_hw_pipe_state *new_state;
+ 
+ 	if (!hwpipe)
+-		return;
++		return 0;
++
++	if (IS_ERR(state))
++		return PTR_ERR(state);
++
++	new_state = &state->hwpipe;
+ 
+ 	if (WARN_ON(!new_state->hwpipe_to_plane[hwpipe->idx]))
+-		return;
++		return -EINVAL;
+ 
+ 	DBG("%s: release from plane %s", hwpipe->name,
+ 		new_state->hwpipe_to_plane[hwpipe->idx]->name);
+@@ -141,6 +146,8 @@ void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe)
+ 	}
+ 
+ 	new_state->hwpipe_to_plane[hwpipe->idx] = NULL;
++
++	return 0;
+ }
+ 
+ void mdp5_pipe_destroy(struct mdp5_hw_pipe *hwpipe)
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h
+index 9b26d0761bd4f..cca67938cab21 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h
+@@ -37,7 +37,7 @@ int mdp5_pipe_assign(struct drm_atomic_state *s, struct drm_plane *plane,
+ 		     uint32_t caps, uint32_t blkcfg,
+ 		     struct mdp5_hw_pipe **hwpipe,
+ 		     struct mdp5_hw_pipe **r_hwpipe);
+-void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe);
++int mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe);
+ 
+ struct mdp5_hw_pipe *mdp5_pipe_init(enum mdp5_pipe pipe,
+ 		uint32_t reg_offset, uint32_t caps);
+diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+index c478d25f7825a..f2d72497467bd 100644
+--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+@@ -314,12 +314,24 @@ static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state,
+ 				mdp5_state->r_hwpipe = NULL;
+ 
+ 
+-			mdp5_pipe_release(state->state, old_hwpipe);
+-			mdp5_pipe_release(state->state, old_right_hwpipe);
++			ret = mdp5_pipe_release(state->state, old_hwpipe);
++			if (ret)
++				return ret;
++
++			ret = mdp5_pipe_release(state->state, old_right_hwpipe);
++			if (ret)
++				return ret;
++
+ 		}
+ 	} else {
+-		mdp5_pipe_release(state->state, mdp5_state->hwpipe);
+-		mdp5_pipe_release(state->state, mdp5_state->r_hwpipe);
++		ret = mdp5_pipe_release(state->state, mdp5_state->hwpipe);
++		if (ret)
++			return ret;
++
++		ret = mdp5_pipe_release(state->state, mdp5_state->r_hwpipe);
++		if (ret)
++			return ret;
++
+ 		mdp5_state->hwpipe = mdp5_state->r_hwpipe = NULL;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index 53568567e05bc..08cc48af03b7d 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -1532,7 +1532,7 @@ static int dp_ctrl_process_phy_test_request(struct dp_ctrl_private *ctrl)
+ 	 * running. Add the global reset just before disabling the
+ 	 * link clocks and core clocks.
+ 	 */
+-	ret = dp_ctrl_off_link_stream(&ctrl->dp_ctrl);
++	ret = dp_ctrl_off(&ctrl->dp_ctrl);
+ 	if (ret) {
+ 		DRM_ERROR("failed to disable DP controller\n");
+ 		return ret;
+@@ -1699,8 +1699,6 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ 		ctrl->link->link_params.rate,
+ 		ctrl->link->link_params.num_lanes, ctrl->dp_ctrl.pixel_rate);
+ 
+-	ctrl->link->phy_params.p_level = 0;
+-	ctrl->link->phy_params.v_level = 0;
+ 
+ 	rc = dp_ctrl_enable_mainlink_clocks(ctrl);
+ 	if (rc)
+@@ -1822,12 +1820,6 @@ int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
+ 		}
+ 	}
+ 
+-	if (!dp_ctrl_channel_eq_ok(ctrl))
+-		dp_ctrl_link_retrain(ctrl);
+-
+-	/* stop txing train pattern to end link training */
+-	dp_ctrl_clear_training_pattern(ctrl);
+-
+ 	ret = dp_ctrl_enable_stream_clocks(ctrl);
+ 	if (ret) {
+ 		DRM_ERROR("Failed to start pixel clocks. ret=%d\n", ret);
+@@ -1839,6 +1831,12 @@ int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)
+ 		return 0;
+ 	}
+ 
++	if (!dp_ctrl_channel_eq_ok(ctrl))
++		dp_ctrl_link_retrain(ctrl);
++
++	/* stop txing train pattern to end link training */
++	dp_ctrl_clear_training_pattern(ctrl);
++
+ 	/*
+ 	 * Set up transfer unit values and set controller state to send
+ 	 * video.
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 178b774a5fbd3..8deb92bddfdec 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -113,6 +113,7 @@ struct dp_display_private {
+ 	u32 hpd_state;
+ 	u32 event_pndx;
+ 	u32 event_gndx;
++	struct task_struct *ev_tsk;
+ 	struct dp_event event_list[DP_EVENT_Q_MAX];
+ 	spinlock_t event_lock;
+ 
+@@ -249,6 +250,8 @@ void dp_display_signal_audio_complete(struct msm_dp *dp_display)
+ 	complete_all(&dp->audio_comp);
+ }
+ 
++static int dp_hpd_event_thread_start(struct dp_display_private *dp_priv);
++
+ static int dp_display_bind(struct device *dev, struct device *master,
+ 			   void *data)
+ {
+@@ -282,9 +285,18 @@ static int dp_display_bind(struct device *dev, struct device *master,
+ 	}
+ 
+ 	rc = dp_register_audio_driver(dev, dp->audio);
+-	if (rc)
++	if (rc) {
+ 		DRM_ERROR("Audio registration Dp failed\n");
++		goto end;
++	}
+ 
++	rc = dp_hpd_event_thread_start(dp);
++	if (rc) {
++		DRM_ERROR("Event thread create failed\n");
++		goto end;
++	}
++
++	return 0;
+ end:
+ 	return rc;
+ }
+@@ -295,6 +307,11 @@ static void dp_display_unbind(struct device *dev, struct device *master,
+ 	struct dp_display_private *dp = dev_get_dp_display_private(dev);
+ 	struct msm_drm_private *priv = dev_get_drvdata(master);
+ 
++	/* disable all HPD interrupts */
++	dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false);
++
++	kthread_stop(dp->ev_tsk);
++
+ 	dp_power_client_deinit(dp->power);
+ 	dp_aux_unregister(dp->aux);
+ 	priv->dp[dp->id] = NULL;
+@@ -1093,12 +1110,17 @@ static int hpd_event_thread(void *data)
+ 	while (1) {
+ 		if (timeout_mode) {
+ 			wait_event_timeout(dp_priv->event_q,
+-				(dp_priv->event_pndx == dp_priv->event_gndx),
+-						EVENT_TIMEOUT);
++				(dp_priv->event_pndx == dp_priv->event_gndx) ||
++					kthread_should_stop(), EVENT_TIMEOUT);
+ 		} else {
+ 			wait_event_interruptible(dp_priv->event_q,
+-				(dp_priv->event_pndx != dp_priv->event_gndx));
++				(dp_priv->event_pndx != dp_priv->event_gndx) ||
++					kthread_should_stop());
+ 		}
++
++		if (kthread_should_stop())
++			break;
++
+ 		spin_lock_irqsave(&dp_priv->event_lock, flag);
+ 		todo = &dp_priv->event_list[dp_priv->event_gndx];
+ 		if (todo->delay) {
+@@ -1168,12 +1190,17 @@ static int hpd_event_thread(void *data)
+ 	return 0;
+ }
+ 
+-static void dp_hpd_event_setup(struct dp_display_private *dp_priv)
++static int dp_hpd_event_thread_start(struct dp_display_private *dp_priv)
+ {
+-	init_waitqueue_head(&dp_priv->event_q);
+-	spin_lock_init(&dp_priv->event_lock);
++	/* set event q to empty */
++	dp_priv->event_gndx = 0;
++	dp_priv->event_pndx = 0;
+ 
+-	kthread_run(hpd_event_thread, dp_priv, "dp_hpd_handler");
++	dp_priv->ev_tsk = kthread_run(hpd_event_thread, dp_priv, "dp_hpd_handler");
++	if (IS_ERR(dp_priv->ev_tsk))
++		return PTR_ERR(dp_priv->ev_tsk);
++
++	return 0;
+ }
+ 
+ static irqreturn_t dp_display_irq_handler(int irq, void *dev_id)
+@@ -1233,10 +1260,9 @@ int dp_display_request_irq(struct msm_dp *dp_display)
+ 	dp = container_of(dp_display, struct dp_display_private, dp_display);
+ 
+ 	dp->irq = irq_of_parse_and_map(dp->pdev->dev.of_node, 0);
+-	if (dp->irq < 0) {
+-		rc = dp->irq;
+-		DRM_ERROR("failed to get irq: %d\n", rc);
+-		return rc;
++	if (!dp->irq) {
++		DRM_ERROR("failed to get irq\n");
++		return -EINVAL;
+ 	}
+ 
+ 	rc = devm_request_irq(&dp->pdev->dev, dp->irq,
+@@ -1303,7 +1329,10 @@ static int dp_display_probe(struct platform_device *pdev)
+ 		return -EPROBE_DEFER;
+ 	}
+ 
++	/* setup event q */
+ 	mutex_init(&dp->event_mutex);
++	init_waitqueue_head(&dp->event_q);
++	spin_lock_init(&dp->event_lock);
+ 
+ 	/* Store DP audio handle inside DP display */
+ 	dp->dp_display.dp_audio = dp->audio;
+@@ -1483,8 +1512,6 @@ void msm_dp_irq_postinstall(struct msm_dp *dp_display)
+ 
+ 	dp = container_of(dp_display, struct dp_display_private, dp_display);
+ 
+-	dp_hpd_event_setup(dp);
+-
+ 	dp_add_event(dp, EV_HPD_INIT_SETUP, 0, 100);
+ }
+ 
+diff --git a/drivers/gpu/drm/msm/dp/dp_drm.c b/drivers/gpu/drm/msm/dp/dp_drm.c
+index 80f59cf990898..262744914f97d 100644
+--- a/drivers/gpu/drm/msm/dp/dp_drm.c
++++ b/drivers/gpu/drm/msm/dp/dp_drm.c
+@@ -230,9 +230,13 @@ struct drm_bridge *msm_dp_bridge_init(struct msm_dp *dp_display, struct drm_devi
+ 	bridge->funcs = &dp_bridge_ops;
+ 	bridge->encoder = encoder;
+ 
++	drm_bridge_add(bridge);
++
+ 	rc = drm_bridge_attach(encoder, bridge, NULL, DRM_BRIDGE_ATTACH_NO_CONNECTOR);
+ 	if (rc) {
+ 		DRM_ERROR("failed to attach bridge, rc=%d\n", rc);
++		drm_bridge_remove(bridge);
++
+ 		return ERR_PTR(rc);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index d51e70fab93db..8925f60fd9ecc 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -1341,10 +1341,10 @@ static int dsi_cmds2buf_tx(struct msm_dsi_host *msm_host,
+ 			dsi_get_bpp(msm_host->format) / 8;
+ 
+ 	len = dsi_cmd_dma_add(msm_host, msg);
+-	if (!len) {
++	if (len < 0) {
+ 		pr_err("%s: failed to add cmd type = 0x%x\n",
+ 			__func__,  msg->type);
+-		return -EINVAL;
++		return len;
+ 	}
+ 
+ 	/* for video mode, do not send cmds more than
+@@ -1363,10 +1363,14 @@ static int dsi_cmds2buf_tx(struct msm_dsi_host *msm_host,
+ 	}
+ 
+ 	ret = dsi_cmd_dma_tx(msm_host, len);
+-	if (ret < len) {
+-		pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, len=%d\n",
+-			__func__, msg->type, (*(u8 *)(msg->tx_buf)), len);
+-		return -ECOMM;
++	if (ret < 0) {
++		pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, len=%d, ret=%d\n",
++			__func__, msg->type, (*(u8 *)(msg->tx_buf)), len, ret);
++		return ret;
++	} else if (ret < len) {
++		pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, ret=%d len=%d\n",
++			__func__, msg->type, (*(u8 *)(msg->tx_buf)), ret, len);
++		return -EIO;
+ 	}
+ 
+ 	return len;
+@@ -2092,9 +2096,12 @@ int msm_dsi_host_cmd_rx(struct mipi_dsi_host *host,
+ 		}
+ 
+ 		ret = dsi_cmds2buf_tx(msm_host, msg);
+-		if (ret < msg->tx_len) {
++		if (ret < 0) {
+ 			pr_err("%s: Read cmd Tx failed, %d\n", __func__, ret);
+ 			return ret;
++		} else if (ret < msg->tx_len) {
++			pr_err("%s: Read cmd Tx failed, too short: %d\n", __func__, ret);
++			return -ECOMM;
+ 		}
+ 
+ 		/*
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_manager.c b/drivers/gpu/drm/msm/dsi/dsi_manager.c
+index 9f6af0f0fe005..84f3b2ebf1b8a 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_manager.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_manager.c
+@@ -34,6 +34,32 @@ static struct msm_dsi_manager msm_dsim_glb;
+ #define IS_SYNC_NEEDED()	(msm_dsim_glb.is_sync_needed)
+ #define IS_MASTER_DSI_LINK(id)	(msm_dsim_glb.master_dsi_link_id == id)
+ 
++#ifdef CONFIG_OF
++static bool dsi_mgr_power_on_early(struct drm_bridge *bridge)
++{
++	struct drm_bridge *next_bridge = drm_bridge_get_next_bridge(bridge);
++
++	/*
++	 * If the next bridge in the chain is the Parade ps8640 bridge chip
++	 * then don't power on early since it seems to violate the expectations
++	 * of the firmware that the bridge chip is running.
++	 *
++	 * NOTE: this is expected to be a temporary special case. It's expected
++	 * that we'll eventually have a framework that allows the next level
++	 * bridge to indicate whether it needs us to power on before it or
++	 * after it. When that framework is in place then we'll use it and
++	 * remove this special case.
++	 */
++	return !(next_bridge && next_bridge->of_node &&
++		 of_device_is_compatible(next_bridge->of_node, "parade,ps8640"));
++}
++#else
++static inline bool dsi_mgr_power_on_early(struct drm_bridge *bridge)
++{
++	return true;
++}
++#endif
++
+ static inline struct msm_dsi *dsi_mgr_get_dsi(int id)
+ {
+ 	return msm_dsim_glb.dsi[id];
+@@ -389,6 +415,9 @@ static void dsi_mgr_bridge_pre_enable(struct drm_bridge *bridge)
+ 	if (is_bonded_dsi && !IS_MASTER_DSI_LINK(id))
+ 		return;
+ 
++	if (!dsi_mgr_power_on_early(bridge))
++		dsi_mgr_bridge_power_on(bridge);
++
+ 	/* Always call panel functions once, because even for dual panels,
+ 	 * there is only one drm_panel instance.
+ 	 */
+@@ -570,7 +599,8 @@ static void dsi_mgr_bridge_mode_set(struct drm_bridge *bridge,
+ 	if (is_bonded_dsi && other_dsi)
+ 		msm_dsi_host_set_display_mode(other_dsi->host, adjusted_mode);
+ 
+-	dsi_mgr_bridge_power_on(bridge);
++	if (dsi_mgr_power_on_early(bridge))
++		dsi_mgr_bridge_power_on(bridge);
+ }
+ 
+ static const struct drm_connector_funcs dsi_mgr_connector_funcs = {
+@@ -665,6 +695,8 @@ struct drm_bridge *msm_dsi_manager_bridge_init(u8 id)
+ 	bridge = &dsi_bridge->base;
+ 	bridge->funcs = &dsi_mgr_bridge_funcs;
+ 
++	drm_bridge_add(bridge);
++
+ 	ret = drm_bridge_attach(encoder, bridge, NULL, 0);
+ 	if (ret)
+ 		goto fail;
+@@ -735,6 +767,7 @@ struct drm_connector *msm_dsi_manager_ext_bridge_init(u8 id)
+ 
+ void msm_dsi_manager_bridge_destroy(struct drm_bridge *bridge)
+ {
++	drm_bridge_remove(bridge);
+ }
+ 
+ int msm_dsi_manager_cmd_xfer(int id, const struct mipi_dsi_msg *msg)
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+index 75557ac99adf1..8199c53567f4e 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
+@@ -1062,7 +1062,7 @@ const struct msm_dsi_phy_cfg dsi_phy_14nm_660_cfgs = {
+ 	},
+ 	.min_pll_rate = VCO_MIN_RATE,
+ 	.max_pll_rate = VCO_MAX_RATE,
+-	.io_start = { 0xc994400, 0xc996000 },
++	.io_start = { 0xc994400, 0xc996400 },
+ 	.num_dsi_phy = 2,
+ };
+ 
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c
+index ec324352e862e..f6229262dcb05 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi.c
+@@ -142,6 +142,10 @@ static struct hdmi *msm_hdmi_init(struct platform_device *pdev)
+ 	/* HDCP needs physical address of hdmi register */
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ 		config->mmio_name);
++	if (!res) {
++		ret = -EINVAL;
++		goto fail;
++	}
+ 	hdmi->mmio_phy_addr = res->start;
+ 
+ 	hdmi->qfprom_mmio = msm_ioremap(pdev, config->qfprom_mmio_name);
+@@ -298,9 +302,9 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi,
+ 	drm_connector_attach_encoder(hdmi->connector, hdmi->encoder);
+ 
+ 	hdmi->irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+-	if (hdmi->irq < 0) {
+-		ret = hdmi->irq;
+-		DRM_DEV_ERROR(dev->dev, "failed to get irq: %d\n", ret);
++	if (!hdmi->irq) {
++		ret = -EINVAL;
++		DRM_DEV_ERROR(dev->dev, "failed to get irq\n");
+ 		goto fail;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c b/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c
+index 10ebe2089cb61..97c24010c4d10 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi_bridge.c
+@@ -15,6 +15,7 @@ void msm_hdmi_bridge_destroy(struct drm_bridge *bridge)
+ 	struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge);
+ 
+ 	msm_hdmi_hpd_disable(hdmi_bridge);
++	drm_bridge_remove(bridge);
+ }
+ 
+ static void msm_hdmi_power_on(struct drm_bridge *bridge)
+@@ -349,6 +350,8 @@ struct drm_bridge *msm_hdmi_bridge_init(struct hdmi *hdmi)
+ 		DRM_BRIDGE_OP_DETECT |
+ 		DRM_BRIDGE_OP_EDID;
+ 
++	drm_bridge_add(bridge);
++
+ 	ret = drm_bridge_attach(hdmi->encoder, bridge, NULL, DRM_BRIDGE_ATTACH_NO_CONNECTOR);
+ 	if (ret)
+ 		goto fail;
+diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
+index affa95eb05fcd..f2c46116df55c 100644
+--- a/drivers/gpu/drm/msm/msm_drv.c
++++ b/drivers/gpu/drm/msm/msm_drv.c
+@@ -11,6 +11,7 @@
+ #include <linux/uaccess.h>
+ #include <uapi/linux/sched/types.h>
+ 
++#include <drm/drm_bridge.h>
+ #include <drm/drm_drv.h>
+ #include <drm/drm_file.h>
+ #include <drm/drm_ioctl.h>
+@@ -112,6 +113,8 @@ static int msm_irq_postinstall(struct drm_device *dev)
+ 
+ static int msm_irq_install(struct drm_device *dev, unsigned int irq)
+ {
++	struct msm_drm_private *priv = dev->dev_private;
++	struct msm_kms *kms = priv->kms;
+ 	int ret;
+ 
+ 	if (irq == IRQ_NOTCONNECTED)
+@@ -123,6 +126,8 @@ static int msm_irq_install(struct drm_device *dev, unsigned int irq)
+ 	if (ret)
+ 		return ret;
+ 
++	kms->irq_requested = true;
++
+ 	ret = msm_irq_postinstall(dev);
+ 	if (ret) {
+ 		free_irq(irq, dev);
+@@ -138,7 +143,8 @@ static void msm_irq_uninstall(struct drm_device *dev)
+ 	struct msm_kms *kms = priv->kms;
+ 
+ 	kms->funcs->irq_uninstall(kms);
+-	free_irq(kms->irq, dev);
++	if (kms->irq_requested)
++		free_irq(kms->irq, dev);
+ }
+ 
+ struct msm_vblank_work {
+@@ -232,6 +238,9 @@ static int msm_drm_uninit(struct device *dev)
+ 
+ 	drm_mode_config_cleanup(ddev);
+ 
++	for (i = 0; i < priv->num_bridges; i++)
++		drm_bridge_remove(priv->bridges[i]);
++
+ 	pm_runtime_get_sync(dev);
+ 	msm_irq_uninstall(ddev);
+ 	pm_runtime_put_sync(dev);
+diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
+index e8f1b7a2ca9c5..94ab705e9b8a4 100644
+--- a/drivers/gpu/drm/msm/msm_gem_prime.c
++++ b/drivers/gpu/drm/msm/msm_gem_prime.c
+@@ -17,7 +17,7 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
+ 	int npages = obj->size >> PAGE_SHIFT;
+ 
+ 	if (WARN_ON(!msm_obj->pages))  /* should have already pinned! */
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	return drm_prime_pages_to_sg(obj->dev, msm_obj->pages, npages);
+ }
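The msm_gem_prime.c change above matters because callers of the
get_sg_table hook test the result with IS_ERR(), and IS_ERR(NULL) is
false, so a NULL return used to be treated as success and dereferenced
later. Sketch of a caller's view (illustrative; the real call sites are
in the prime import/export paths):

	struct sg_table *sgt = obj->funcs->get_sg_table(obj);

	if (IS_ERR(sgt))
		return PTR_ERR(sgt);	/* ERR_PTR(-ENOMEM) is caught; NULL was not */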
+diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
+index faf0c242874e8..58eb3e1662cb9 100644
+--- a/drivers/gpu/drm/msm/msm_gpu.c
++++ b/drivers/gpu/drm/msm/msm_gpu.c
+@@ -371,7 +371,8 @@ static void recover_worker(struct kthread_work *work)
+ 
+ 		/* Increment the fault counts */
+ 		submit->queue->faults++;
+-		submit->aspace->faults++;
++		if (submit->aspace)
++			submit->aspace->faults++;
+ 
+ 		task = get_pid_task(submit->pid, PIDTYPE_PID);
+ 		if (task) {
+diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
+index 02419f2ca2bc5..143c56f5185b8 100644
+--- a/drivers/gpu/drm/msm/msm_gpu.h
++++ b/drivers/gpu/drm/msm/msm_gpu.h
+@@ -9,6 +9,7 @@
+ 
+ #include <linux/adreno-smmu-priv.h>
+ #include <linux/clk.h>
++#include <linux/devfreq.h>
+ #include <linux/interconnect.h>
+ #include <linux/pm_opp.h>
+ #include <linux/regulator/consumer.h>
+@@ -62,7 +63,7 @@ struct msm_gpu_funcs {
+ 	/* for generation specific debugfs: */
+ 	void (*debugfs_init)(struct msm_gpu *gpu, struct drm_minor *minor);
+ #endif
+-	unsigned long (*gpu_busy)(struct msm_gpu *gpu);
++	u64 (*gpu_busy)(struct msm_gpu *gpu, unsigned long *out_sample_rate);
+ 	struct msm_gpu_state *(*gpu_state_get)(struct msm_gpu *gpu);
+ 	int (*gpu_state_put)(struct msm_gpu_state *state);
+ 	unsigned long (*gpu_get_freq)(struct msm_gpu *gpu);
+@@ -106,11 +107,8 @@ struct msm_gpu_devfreq {
+ 	struct dev_pm_qos_request boost_freq;
+ 
+ 	/**
+-	 * busy_cycles:
+-	 *
+-	 * Used by implementation of gpu->gpu_busy() to track the last
+-	 * busy counter value, for calculating elapsed busy cycles since
+-	 * last sampling period.
++	 * busy_cycles: Last busy counter value, for calculating elapsed busy
++	 * cycles since last sampling period.
+ 	 */
+ 	u64 busy_cycles;
+ 
+@@ -120,6 +118,8 @@ struct msm_gpu_devfreq {
+ 	/** idle_time: Time of last transition to idle: */
+ 	ktime_t idle_time;
+ 
++	struct devfreq_dev_status average_status;
++
+ 	/**
+ 	 * idle_work:
+ 	 *
+diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+index 12641616acd30..c7dbaa4b19264 100644
+--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
++++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+@@ -9,6 +9,7 @@
+ 
+ #include <linux/devfreq.h>
+ #include <linux/devfreq_cooling.h>
++#include <linux/math64.h>
+ #include <linux/units.h>
+ 
+ /*
+@@ -49,18 +50,95 @@ static unsigned long get_freq(struct msm_gpu *gpu)
+ 	return clk_get_rate(gpu->core_clk);
+ }
+ 
+-static int msm_devfreq_get_dev_status(struct device *dev,
++static void get_raw_dev_status(struct msm_gpu *gpu,
+ 		struct devfreq_dev_status *status)
+ {
+-	struct msm_gpu *gpu = dev_to_gpu(dev);
++	struct msm_gpu_devfreq *df = &gpu->devfreq;
++	u64 busy_cycles, busy_time;
++	unsigned long sample_rate;
+ 	ktime_t time;
+ 
+ 	status->current_frequency = get_freq(gpu);
+-	status->busy_time = gpu->funcs->gpu_busy(gpu);
+-
++	busy_cycles = gpu->funcs->gpu_busy(gpu, &sample_rate);
+ 	time = ktime_get();
+-	status->total_time = ktime_us_delta(time, gpu->devfreq.time);
+-	gpu->devfreq.time = time;
++
++	busy_time = busy_cycles - df->busy_cycles;
++	status->total_time = ktime_us_delta(time, df->time);
++
++	df->busy_cycles = busy_cycles;
++	df->time = time;
++
++	busy_time *= USEC_PER_SEC;
++	do_div(busy_time, sample_rate);
++	if (WARN_ON(busy_time > ~0LU))
++		busy_time = ~0LU;
++
++	status->busy_time = busy_time;
++}
++
++static void update_average_dev_status(struct msm_gpu *gpu,
++		const struct devfreq_dev_status *raw)
++{
++	struct msm_gpu_devfreq *df = &gpu->devfreq;
++	const u32 polling_ms = df->devfreq->profile->polling_ms;
++	const u32 max_history_ms = polling_ms * 11 / 10;
++	struct devfreq_dev_status *avg = &df->average_status;
++	u64 avg_freq;
++
++	/* simple_ondemand governor interacts poorly with gpu->clamp_to_idle.
++	 * When we enforce the constraint on idle, it calls get_dev_status
++	 * which would normally reset the stats.  When we remove the
++	 * constraint on active, it calls get_dev_status again where busy_time
++	 * would be 0.
++	 *
++	 * To remedy this, we always return the average load over the past
++	 * polling_ms.
++	 */
++
++	/* raw is longer than polling_ms or avg has no history */
++	if (div_u64(raw->total_time, USEC_PER_MSEC) >= polling_ms ||
++	    !avg->total_time) {
++		*avg = *raw;
++		return;
++	}
++
++	/* Truncate the oldest history first.
++	 *
++	 * Because we keep the history with a single devfreq_dev_status,
++	 * rather than a list of devfreq_dev_status, we have to assume freq
++	 * and load are the same over avg->total_time.  We can scale down
++	 * avg->busy_time and avg->total_time by the same factor to drop
++	 * history.
++	 */
++	if (div_u64(avg->total_time + raw->total_time, USEC_PER_MSEC) >=
++			max_history_ms) {
++		const u32 new_total_time = polling_ms * USEC_PER_MSEC -
++			raw->total_time;
++		avg->busy_time = div_u64(
++				mul_u32_u32(avg->busy_time, new_total_time),
++				avg->total_time);
++		avg->total_time = new_total_time;
++	}
++
++	/* compute the average freq over avg->total_time + raw->total_time */
++	avg_freq = mul_u32_u32(avg->current_frequency, avg->total_time);
++	avg_freq += mul_u32_u32(raw->current_frequency, raw->total_time);
++	do_div(avg_freq, avg->total_time + raw->total_time);
++
++	avg->current_frequency = avg_freq;
++	avg->busy_time += raw->busy_time;
++	avg->total_time += raw->total_time;
++}
++
++static int msm_devfreq_get_dev_status(struct device *dev,
++		struct devfreq_dev_status *status)
++{
++	struct msm_gpu *gpu = dev_to_gpu(dev);
++	struct devfreq_dev_status raw;
++
++	get_raw_dev_status(gpu, &raw);
++	update_average_dev_status(gpu, &raw);
++	*status = gpu->devfreq.average_status;
+ 
+ 	return 0;
+ }
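
Two things happen in the msm devfreq hunks above: gpu_busy() now returns raw cycles plus a sample rate, which get_raw_dev_status() converts to microseconds (busy_time = delta_cycles * USEC_PER_SEC / sample_rate), and update_average_dev_status() keeps a single aggregated devfreq_dev_status, scaling busy_time and total_time down by the same factor whenever stored history would push the window past ~1.1 * polling_ms -- which, as the hunk's comment says, assumes the load was uniform over the old window. A standalone sketch of the truncation arithmetic (struct and function names invented; only the math mirrors the hunk):

    #include <stdint.h>
    #include <stdio.h>

    #define USEC_PER_MSEC 1000ULL

    struct win { uint64_t busy_us, total_us; };

    /* fold a raw sample into the running window, truncating the oldest
     * history so the window stays close to polling_ms */
    static void fold(struct win *avg, const struct win *raw, uint64_t polling_ms)
    {
        uint64_t max_hist_ms = polling_ms * 11 / 10;

        if (raw->total_us / USEC_PER_MSEC >= polling_ms || !avg->total_us) {
            *avg = *raw;              /* the raw sample alone covers the window */
            return;
        }

        if ((avg->total_us + raw->total_us) / USEC_PER_MSEC >= max_hist_ms) {
            /* same scale factor for busy and total: assumes uniform load */
            uint64_t new_total = polling_ms * USEC_PER_MSEC - raw->total_us;

            avg->busy_us = avg->busy_us * new_total / avg->total_us;
            avg->total_us = new_total;
        }

        avg->busy_us += raw->busy_us;
        avg->total_us += raw->total_us;
    }

    int main(void)
    {
        struct win avg = { 40000, 45000 };  /* 40 ms busy out of 45 ms */
        struct win raw = { 1000, 10000 };   /* fresh 1 ms busy out of 10 ms */

        fold(&avg, &raw, 50);               /* polling_ms = 50 */
        printf("busy=%llu total=%llu\n",    /* 36555 / 50000: ~73% load */
               (unsigned long long)avg.busy_us,
               (unsigned long long)avg.total_us);
        return 0;
    }
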
+diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h
+index 2a4f0526cb980..401d7e19811f3 100644
+--- a/drivers/gpu/drm/msm/msm_kms.h
++++ b/drivers/gpu/drm/msm/msm_kms.h
+@@ -148,6 +148,7 @@ struct msm_kms {
+ 
+ 	/* irq number to be passed on to msm_irq_install */
+ 	int irq;
++	bool irq_requested;
+ 
+ 	/* mapper-id used to request GEM buffer mapped for scanout: */
+ 	struct msm_gem_address_space *aspace;
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/atom.h b/drivers/gpu/drm/nouveau/dispnv50/atom.h
+index 3d82b3c67decc..93f8f4f645784 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/atom.h
++++ b/drivers/gpu/drm/nouveau/dispnv50/atom.h
+@@ -160,14 +160,14 @@ nv50_head_atom_get(struct drm_atomic_state *state, struct drm_crtc *crtc)
+ static inline struct drm_encoder *
+ nv50_head_atom_get_encoder(struct nv50_head_atom *atom)
+ {
+-	struct drm_encoder *encoder = NULL;
++	struct drm_encoder *encoder;
+ 
+ 	/* We only ever have a single encoder */
+ 	drm_for_each_encoder_mask(encoder, atom->state.crtc->dev,
+ 				  atom->state.encoder_mask)
+-		break;
++		return encoder;
+ 
+-	return encoder;
++	return NULL;
+ }
+ 
+ #define nv50_wndw_atom(p) container_of((p), struct nv50_wndw_atom, state)
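
The nouveau atom.h change above fixes the classic "list iterator is valid after the loop" anti-pattern: when drm_for_each_encoder_mask() matches nothing, the cursor ends up pointing at storage computed from the list head rather than at a real encoder, so break-then-return-the-cursor can hand back a bogus pointer. The same reshaping -- return the match from inside the loop, explicit NULL after it -- reappears in the nvkm clk/base.c, stm ltdc and tilcdc hunks further down. A toy contrast of the two shapes (NULL-terminated list, so the unsafe variant merely happens to work here; with the kernel's circular lists it would not):

    #include <stddef.h>
    #include <stdio.h>

    struct node { int id; struct node *next; };

    /* unsafe shape: relies on what the cursor holds after loop exhaustion */
    static struct node *find_unsafe(struct node *head, int id)
    {
        struct node *it;

        for (it = head; it; it = it->next)
            if (it->id == id)
                break;
        return it;        /* only NULL here because this toy list ends in NULL */
    }

    /* safe shape used by the fixes in this patch */
    static struct node *find_safe(struct node *head, int id)
    {
        struct node *it;

        for (it = head; it; it = it->next)
            if (it->id == id)
                return it;
        return NULL;      /* no match: say so explicitly */
    }

    int main(void)
    {
        struct node b = { 2, NULL }, a = { 1, &b };

        printf("%p %p\n", (void *)find_safe(&a, 2), (void *)find_unsafe(&a, 9));
        return 0;
    }
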
+diff --git a/drivers/gpu/drm/nouveau/dispnv50/crc.c b/drivers/gpu/drm/nouveau/dispnv50/crc.c
+index 29428e770f146..b834e8a9ae775 100644
+--- a/drivers/gpu/drm/nouveau/dispnv50/crc.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/crc.c
+@@ -390,9 +390,18 @@ void nv50_crc_atomic_check_outp(struct nv50_atom *atom)
+ 		struct nv50_head_atom *armh = nv50_head_atom(old_crtc_state);
+ 		struct nv50_head_atom *asyh = nv50_head_atom(new_crtc_state);
+ 		struct nv50_outp_atom *outp_atom;
+-		struct nouveau_encoder *outp =
+-			nv50_real_outp(nv50_head_atom_get_encoder(armh));
+-		struct drm_encoder *encoder = &outp->base.base;
++		struct nouveau_encoder *outp;
++		struct drm_encoder *encoder, *enc;
++
++		enc = nv50_head_atom_get_encoder(armh);
++		if (!enc)
++			continue;
++
++		outp = nv50_real_outp(enc);
++		if (!outp)
++			continue;
++
++		encoder = &outp->base.base;
+ 
+ 		if (!asyh->clr.crc)
+ 			continue;
+@@ -443,8 +452,16 @@ void nv50_crc_atomic_set(struct nv50_head *head,
+ 	struct drm_device *dev = crtc->dev;
+ 	struct nv50_crc *crc = &head->crc;
+ 	const struct nv50_crc_func *func = nv50_disp(dev)->core->func->crc;
+-	struct nouveau_encoder *outp =
+-		nv50_real_outp(nv50_head_atom_get_encoder(asyh));
++	struct nouveau_encoder *outp;
++	struct drm_encoder *encoder;
++
++	encoder = nv50_head_atom_get_encoder(asyh);
++	if (!encoder)
++		return;
++
++	outp = nv50_real_outp(encoder);
++	if (!outp)
++		return;
+ 
+ 	func->set_src(head, outp->or, nv50_crc_source_type(outp, asyh->crc.src),
+ 		      &crc->ctx[crc->ctx_idx]);
+diff --git a/drivers/gpu/drm/nouveau/include/nvkm/core/subdev.h b/drivers/gpu/drm/nouveau/include/nvkm/core/subdev.h
+index 1665738948fb4..96113c8bee8c5 100644
+--- a/drivers/gpu/drm/nouveau/include/nvkm/core/subdev.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/core/subdev.h
+@@ -62,4 +62,6 @@ void nvkm_subdev_intr(struct nvkm_subdev *);
+ #define nvkm_debug(s,f,a...) nvkm_printk((s), DEBUG,   info, f, ##a)
+ #define nvkm_trace(s,f,a...) nvkm_printk((s), TRACE,   info, f, ##a)
+ #define nvkm_spam(s,f,a...)  nvkm_printk((s),  SPAM,    dbg, f, ##a)
++
++#define nvkm_error_ratelimited(s,f,a...) nvkm_printk((s), ERROR, err_ratelimited, f, ##a)
+ #endif
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/gf100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/gf100.c
+index 53a6651ac2258..80b5aaceeaad1 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/gf100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/gf100.c
+@@ -35,13 +35,13 @@ gf100_bus_intr(struct nvkm_bus *bus)
+ 		u32 addr = nvkm_rd32(device, 0x009084);
+ 		u32 data = nvkm_rd32(device, 0x009088);
+ 
+-		nvkm_error(subdev,
+-			   "MMIO %s of %08x FAULT at %06x [ %s%s%s]\n",
+-			   (addr & 0x00000002) ? "write" : "read", data,
+-			   (addr & 0x00fffffc),
+-			   (stat & 0x00000002) ? "!ENGINE " : "",
+-			   (stat & 0x00000004) ? "PRIVRING " : "",
+-			   (stat & 0x00000008) ? "TIMEOUT " : "");
++		nvkm_error_ratelimited(subdev,
++				       "MMIO %s of %08x FAULT at %06x [ %s%s%s]\n",
++				       (addr & 0x00000002) ? "write" : "read", data,
++				       (addr & 0x00fffffc),
++				       (stat & 0x00000002) ? "!ENGINE " : "",
++				       (stat & 0x00000004) ? "PRIVRING " : "",
++				       (stat & 0x00000008) ? "TIMEOUT " : "");
+ 
+ 		nvkm_wr32(device, 0x009084, 0x00000000);
+ 		nvkm_wr32(device, 0x001100, (stat & 0x0000000e));
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv31.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv31.c
+index ad8da523bb22e..c75e463f35013 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv31.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv31.c
+@@ -45,9 +45,9 @@ nv31_bus_intr(struct nvkm_bus *bus)
+ 		u32 addr = nvkm_rd32(device, 0x009084);
+ 		u32 data = nvkm_rd32(device, 0x009088);
+ 
+-		nvkm_error(subdev, "MMIO %s of %08x FAULT at %06x\n",
+-			   (addr & 0x00000002) ? "write" : "read", data,
+-			   (addr & 0x00fffffc));
++		nvkm_error_ratelimited(subdev, "MMIO %s of %08x FAULT at %06x\n",
++				       (addr & 0x00000002) ? "write" : "read", data,
++				       (addr & 0x00fffffc));
+ 
+ 		stat &= ~0x00000008;
+ 		nvkm_wr32(device, 0x001100, 0x00000008);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv50.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv50.c
+index 3a1e45adeedc1..2055d0b100d3f 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv50.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bus/nv50.c
+@@ -60,9 +60,9 @@ nv50_bus_intr(struct nvkm_bus *bus)
+ 		u32 addr = nvkm_rd32(device, 0x009084);
+ 		u32 data = nvkm_rd32(device, 0x009088);
+ 
+-		nvkm_error(subdev, "MMIO %s of %08x FAULT at %06x\n",
+-			   (addr & 0x00000002) ? "write" : "read", data,
+-			   (addr & 0x00fffffc));
++		nvkm_error_ratelimited(subdev, "MMIO %s of %08x FAULT at %06x\n",
++				       (addr & 0x00000002) ? "write" : "read", data,
++				       (addr & 0x00fffffc));
+ 
+ 		stat &= ~0x00000008;
+ 		nvkm_wr32(device, 0x001100, 0x00000008);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
+index 57199be082fd3..c2b5cc5f97eda 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
+@@ -135,10 +135,10 @@ nvkm_cstate_find_best(struct nvkm_clk *clk, struct nvkm_pstate *pstate,
+ 
+ 	list_for_each_entry_from_reverse(cstate, &pstate->list, head) {
+ 		if (nvkm_cstate_valid(clk, cstate, max_volt, clk->temp))
+-			break;
++			return cstate;
+ 	}
+ 
+-	return cstate;
++	return NULL;
+ }
+ 
+ static struct nvkm_cstate *
+@@ -169,6 +169,8 @@ nvkm_cstate_prog(struct nvkm_clk *clk, struct nvkm_pstate *pstate, int cstatei)
+ 	if (!list_empty(&pstate->list)) {
+ 		cstate = nvkm_cstate_get(clk, pstate, cstatei);
+ 		cstate = nvkm_cstate_find_best(clk, pstate, cstate);
++		if (!cstate)
++			return -EINVAL;
+ 	} else {
+ 		cstate = &pstate->base;
+ 	}
+diff --git a/drivers/gpu/drm/omapdrm/omap_overlay.c b/drivers/gpu/drm/omapdrm/omap_overlay.c
+index 10730c9b27523..b0bc9ad2ef73a 100644
+--- a/drivers/gpu/drm/omapdrm/omap_overlay.c
++++ b/drivers/gpu/drm/omapdrm/omap_overlay.c
+@@ -86,7 +86,7 @@ int omap_overlay_assign(struct drm_atomic_state *s, struct drm_plane *plane,
+ 		r_ovl = omap_plane_find_free_overlay(s->dev, overlay_map,
+ 						     caps, fourcc);
+ 		if (!r_ovl) {
+-			overlay_map[r_ovl->idx] = NULL;
++			overlay_map[ovl->idx] = NULL;
+ 			*overlay = NULL;
+ 			return -ENOMEM;
+ 		}
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index a34f4198a5341..6880dc59fa886 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -720,7 +720,7 @@ static const struct drm_display_mode ampire_am_1280800n3tzqw_t00h_mode = {
+ static const struct panel_desc ampire_am_1280800n3tzqw_t00h = {
+ 	.modes = &ampire_am_1280800n3tzqw_t00h_mode,
+ 	.num_modes = 1,
+-	.bpc = 6,
++	.bpc = 8,
+ 	.size = {
+ 		.width = 217,
+ 		.height = 136,
+@@ -2029,6 +2029,7 @@ static const struct panel_desc innolux_g070y2_l01 = {
+ 		.unprepare = 800,
+ 	},
+ 	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
++	.bus_flags = DRM_BUS_FLAG_DE_HIGH,
+ 	.connector_type = DRM_MODE_CONNECTOR_LVDS,
+ };
+ 
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index 3e8d9e2d1b675..d53037531f407 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -2118,10 +2118,10 @@ static int vop_bind(struct device *dev, struct device *master, void *data)
+ 	vop_win_init(vop);
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	vop->len = resource_size(res);
+ 	vop->regs = devm_ioremap_resource(dev, res);
+ 	if (IS_ERR(vop->regs))
+ 		return PTR_ERR(vop->regs);
++	vop->len = resource_size(res);
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ 	if (res) {
+diff --git a/drivers/gpu/drm/selftests/test-drm_buddy.c b/drivers/gpu/drm/selftests/test-drm_buddy.c
+index fa997f89522b6..aca0c491040f2 100644
+--- a/drivers/gpu/drm/selftests/test-drm_buddy.c
++++ b/drivers/gpu/drm/selftests/test-drm_buddy.c
+@@ -488,8 +488,10 @@ static int igt_buddy_alloc_smoke(void *arg)
+ 	}
+ 
+ 	order = drm_random_order(mm.max_order + 1, &prng);
+-	if (!order)
++	if (!order) {
++		err = -ENOMEM;
+ 		goto out_fini;
++	}
+ 
+ 	for (i = 0; i <= mm.max_order; ++i) {
+ 		struct drm_buddy_block *block;
+@@ -902,14 +904,13 @@ err_fini:
+ 
+ static int igt_buddy_alloc_limit(void *arg)
+ {
+-	u64 end, size = U64_MAX, start = 0;
++	u64 size = U64_MAX, start = 0;
+ 	struct drm_buddy_block *block;
+ 	unsigned long flags = 0;
+ 	LIST_HEAD(allocated);
+ 	struct drm_buddy mm;
+ 	int err;
+ 
+-	size = end = round_down(size, 4096);
+ 	err = drm_buddy_init(&mm, size, PAGE_SIZE);
+ 	if (err)
+ 		return err;
+@@ -921,7 +922,8 @@ static int igt_buddy_alloc_limit(void *arg)
+ 		goto out_fini;
+ 	}
+ 
+-	err = drm_buddy_alloc_blocks(&mm, start, end, size,
++	size = mm.chunk_size << mm.max_order;
++	err = drm_buddy_alloc_blocks(&mm, start, size, size,
+ 				     PAGE_SIZE, &allocated, flags);
+ 
+ 	if (unlikely(err))
+diff --git a/drivers/gpu/drm/solomon/Kconfig b/drivers/gpu/drm/solomon/Kconfig
+index 5861c3ab7c452..6230369505c96 100644
+--- a/drivers/gpu/drm/solomon/Kconfig
++++ b/drivers/gpu/drm/solomon/Kconfig
+@@ -1,6 +1,6 @@
+ config DRM_SSD130X
+ 	tristate "DRM support for Solomon SSD130x OLED displays"
+-	depends on DRM
++	depends on DRM && MMU
+ 	select BACKLIGHT_CLASS_DEVICE
+ 	select DRM_GEM_SHMEM_HELPER
+ 	select DRM_KMS_HELPER
+diff --git a/drivers/gpu/drm/solomon/ssd130x.c b/drivers/gpu/drm/solomon/ssd130x.c
+index ce4dc20412e06..38b6c2c14f536 100644
+--- a/drivers/gpu/drm/solomon/ssd130x.c
++++ b/drivers/gpu/drm/solomon/ssd130x.c
+@@ -48,7 +48,7 @@
+ #define SSD130X_CONTRAST			0x81
+ #define SSD130X_SET_LOOKUP_TABLE		0x91
+ #define SSD130X_CHARGE_PUMP			0x8d
+-#define SSD130X_SEG_REMAP_ON			0xa1
++#define SSD130X_SET_SEG_REMAP			0xa0
+ #define SSD130X_DISPLAY_OFF			0xae
+ #define SSD130X_SET_MULTIPLEX_RATIO		0xa8
+ #define SSD130X_DISPLAY_ON			0xaf
+@@ -61,7 +61,9 @@
+ #define SSD130X_SET_COM_PINS_CONFIG		0xda
+ #define SSD130X_SET_VCOMH			0xdb
+ 
+-#define SSD130X_SET_COM_SCAN_DIR_MASK		GENMASK(3, 2)
++#define SSD130X_SET_SEG_REMAP_MASK		GENMASK(0, 0)
++#define SSD130X_SET_SEG_REMAP_SET(val)		FIELD_PREP(SSD130X_SET_SEG_REMAP_MASK, (val))
++#define SSD130X_SET_COM_SCAN_DIR_MASK		GENMASK(3, 3)
+ #define SSD130X_SET_COM_SCAN_DIR_SET(val)	FIELD_PREP(SSD130X_SET_COM_SCAN_DIR_MASK, (val))
+ #define SSD130X_SET_CLOCK_DIV_MASK		GENMASK(3, 0)
+ #define SSD130X_SET_CLOCK_DIV_SET(val)		FIELD_PREP(SSD130X_SET_CLOCK_DIV_MASK, (val))
+@@ -235,7 +237,7 @@ static void ssd130x_power_off(struct ssd130x_device *ssd130x)
+ 
+ static int ssd130x_init(struct ssd130x_device *ssd130x)
+ {
+-	u32 precharge, dclk, com_invdir, compins, chargepump;
++	u32 precharge, dclk, com_invdir, compins, chargepump, seg_remap;
+ 	int ret;
+ 
+ 	/* Set initial contrast */
+@@ -244,11 +246,11 @@ static int ssd130x_init(struct ssd130x_device *ssd130x)
+ 		return ret;
+ 
+ 	/* Set segment re-map */
+-	if (ssd130x->seg_remap) {
+-		ret = ssd130x_write_cmd(ssd130x, 1, SSD130X_SEG_REMAP_ON);
+-		if (ret < 0)
+-			return ret;
+-	}
++	seg_remap = (SSD130X_SET_SEG_REMAP |
++		     SSD130X_SET_SEG_REMAP_SET(ssd130x->seg_remap));
++	ret = ssd130x_write_cmd(ssd130x, 1, seg_remap);
++	if (ret < 0)
++		return ret;
+ 
+ 	/* Set COM direction */
+ 	com_invdir = (SSD130X_SET_COM_SCAN_DIR |
+@@ -353,11 +355,14 @@ static int ssd130x_update_rect(struct ssd130x_device *ssd130x, u8 *buf,
+ 	unsigned int width = drm_rect_width(rect);
+ 	unsigned int height = drm_rect_height(rect);
+ 	unsigned int line_length = DIV_ROUND_UP(width, 8);
+-	unsigned int pages = DIV_ROUND_UP(y % 8 + height, 8);
++	unsigned int pages = DIV_ROUND_UP(height, 8);
++	struct drm_device *drm = &ssd130x->drm;
+ 	u32 array_idx = 0;
+ 	int ret, i, j, k;
+ 	u8 *data_array = NULL;
+ 
++	drm_WARN_ONCE(drm, y % 8 != 0, "y must be aligned to screen page\n");
++
+ 	data_array = kcalloc(width, pages, GFP_KERNEL);
+ 	if (!data_array)
+ 		return -ENOMEM;
+@@ -399,13 +404,13 @@ static int ssd130x_update_rect(struct ssd130x_device *ssd130x, u8 *buf,
+ 	if (ret < 0)
+ 		goto out_free;
+ 
+-	for (i = y / 8; i < y / 8 + pages; i++) {
++	for (i = 0; i < pages; i++) {
+ 		int m = 8;
+ 
+ 		/* Last page may be partial */
+-		if (8 * (i + 1) > ssd130x->height)
++		if (8 * (y / 8 + i + 1) > ssd130x->height)
+ 			m = ssd130x->height % 8;
+-		for (j = x; j < x + width; j++) {
++		for (j = 0; j < width; j++) {
+ 			u8 data = 0;
+ 
+ 			for (k = 0; k < m; k++) {
+@@ -435,7 +440,8 @@ static void ssd130x_clear_screen(struct ssd130x_device *ssd130x)
+ 		.y2 = ssd130x->height,
+ 	};
+ 
+-	buf = kcalloc(ssd130x->width, ssd130x->height, GFP_KERNEL);
++	buf = kcalloc(DIV_ROUND_UP(ssd130x->width, 8), ssd130x->height,
++		      GFP_KERNEL);
+ 	if (!buf)
+ 		return;
+ 
+@@ -449,14 +455,20 @@ static int ssd130x_fb_blit_rect(struct drm_framebuffer *fb, const struct iosys_m
+ {
+ 	struct ssd130x_device *ssd130x = drm_to_ssd130x(fb->dev);
+ 	void *vmap = map->vaddr; /* TODO: Use mapping abstraction properly */
++	unsigned int dst_pitch;
+ 	int ret = 0;
+ 	u8 *buf = NULL;
+ 
+-	buf = kcalloc(fb->width, fb->height, GFP_KERNEL);
++	/* Align y to display page boundaries */
++	rect->y1 = round_down(rect->y1, 8);
++	rect->y2 = min_t(unsigned int, round_up(rect->y2, 8), ssd130x->height);
++
++	dst_pitch = DIV_ROUND_UP(drm_rect_width(rect), 8);
++	buf = kcalloc(dst_pitch, drm_rect_height(rect), GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+-	drm_fb_xrgb8888_to_mono_reversed(buf, 0, vmap, fb, rect);
++	drm_fb_xrgb8888_to_mono(buf, dst_pitch, vmap, fb, rect);
+ 
+ 	ssd130x_update_rect(ssd130x, buf, rect);
+ 
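
The ssd130x blit fixes above all follow from the panel RAM layout: pixels are packed eight rows to a "page", so the dirty rectangle is snapped to page boundaries (y1 rounded down to a multiple of 8, y2 rounded up and clamped to the panel height) and the monochrome buffer needs only DIV_ROUND_UP(width, 8) bytes per row rather than one byte per pixel. Worked numbers for that alignment arithmetic (rectangle values invented):

    #include <stdio.h>

    #define ROUND_DOWN(x, a)   ((x) / (a) * (a))
    #define ROUND_UP(x, a)     (((x) + (a) - 1) / (a) * (a))
    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        unsigned int y1 = 5, y2 = 13, width = 70, height = 64;

        y1 = ROUND_DOWN(y1, 8);                       /* 5  -> 0  */
        y2 = ROUND_UP(y2, 8);                         /* 13 -> 16 */
        if (y2 > height)
            y2 = height;

        printf("rows %u..%u, pages=%u, dst_pitch=%u bytes\n",
               y1, y2,
               DIV_ROUND_UP(y2 - y1, 8),              /* 2 pages     */
               DIV_ROUND_UP(width, 8));               /* 9 bytes/row */
        return 0;
    }
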
+diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
+index 17cc050207f4a..6bd45df8f5a72 100644
+--- a/drivers/gpu/drm/stm/ltdc.c
++++ b/drivers/gpu/drm/stm/ltdc.c
+@@ -869,8 +869,8 @@ static void ltdc_crtc_mode_set_nofb(struct drm_crtc *crtc)
+ 	struct drm_device *ddev = crtc->dev;
+ 	struct drm_connector_list_iter iter;
+ 	struct drm_connector *connector = NULL;
+-	struct drm_encoder *encoder = NULL;
+-	struct drm_bridge *bridge = NULL;
++	struct drm_encoder *encoder = NULL, *en_iter;
++	struct drm_bridge *bridge = NULL, *br_iter;
+ 	struct drm_display_mode *mode = &crtc->state->adjusted_mode;
+ 	u32 hsync, vsync, accum_hbp, accum_vbp, accum_act_w, accum_act_h;
+ 	u32 total_width, total_height;
+@@ -880,15 +880,19 @@ static void ltdc_crtc_mode_set_nofb(struct drm_crtc *crtc)
+ 	int ret;
+ 
+ 	/* get encoder from crtc */
+-	drm_for_each_encoder(encoder, ddev)
+-		if (encoder->crtc == crtc)
++	drm_for_each_encoder(en_iter, ddev)
++		if (en_iter->crtc == crtc) {
++			encoder = en_iter;
+ 			break;
++		}
+ 
+ 	if (encoder) {
+ 		/* get bridge from encoder */
+-		list_for_each_entry(bridge, &encoder->bridge_chain, chain_node)
+-			if (bridge->encoder == encoder)
++		list_for_each_entry(br_iter, &encoder->bridge_chain, chain_node)
++			if (br_iter->encoder == encoder) {
++				bridge = br_iter;
+ 				break;
++			}
+ 
+ 		/* Get the connector from encoder */
+ 		drm_connector_list_iter_begin(ddev, &iter);
+diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
+index 0063403ab5e18..7c7dd84e6db84 100644
+--- a/drivers/gpu/drm/tegra/gem.c
++++ b/drivers/gpu/drm/tegra/gem.c
+@@ -88,6 +88,7 @@ static struct host1x_bo_mapping *tegra_bo_pin(struct device *dev, struct host1x_
+ 		if (IS_ERR(map->sgt)) {
+ 			dma_buf_detach(buf, map->attach);
+ 			err = PTR_ERR(map->sgt);
++			map->sgt = NULL;
+ 			goto free;
+ 		}
+ 
+diff --git a/drivers/gpu/drm/tilcdc/tilcdc_external.c b/drivers/gpu/drm/tilcdc/tilcdc_external.c
+index 7594cf6e186eb..3b86d002ef62e 100644
+--- a/drivers/gpu/drm/tilcdc/tilcdc_external.c
++++ b/drivers/gpu/drm/tilcdc/tilcdc_external.c
+@@ -60,11 +60,13 @@ struct drm_connector *tilcdc_encoder_find_connector(struct drm_device *ddev,
+ int tilcdc_add_component_encoder(struct drm_device *ddev)
+ {
+ 	struct tilcdc_drm_private *priv = ddev->dev_private;
+-	struct drm_encoder *encoder;
++	struct drm_encoder *encoder = NULL, *iter;
+ 
+-	list_for_each_entry(encoder, &ddev->mode_config.encoder_list, head)
+-		if (encoder->possible_crtcs & (1 << priv->crtc->index))
++	list_for_each_entry(iter, &ddev->mode_config.encoder_list, head)
++		if (iter->possible_crtcs & (1 << priv->crtc->index)) {
++			encoder = iter;
+ 			break;
++		}
+ 
+ 	if (!encoder) {
+ 		dev_err(ddev->dev, "%s: No suitable encoder found\n", __func__);
+diff --git a/drivers/gpu/drm/tiny/repaper.c b/drivers/gpu/drm/tiny/repaper.c
+index 37b6bb90e46e1..a096fb8b83e99 100644
+--- a/drivers/gpu/drm/tiny/repaper.c
++++ b/drivers/gpu/drm/tiny/repaper.c
+@@ -540,7 +540,7 @@ static int repaper_fb_dirty(struct drm_framebuffer *fb)
+ 	if (ret)
+ 		goto out_free;
+ 
+-	drm_fb_xrgb8888_to_mono_reversed(buf, 0, cma_obj->vaddr, fb, &clip);
++	drm_fb_xrgb8888_to_mono(buf, 0, cma_obj->vaddr, fb, &clip);
+ 
+ 	drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE);
+ 
+diff --git a/drivers/gpu/drm/v3d/v3d_perfmon.c b/drivers/gpu/drm/v3d/v3d_perfmon.c
+index 0288ef063513e..f6a88abccc7d9 100644
+--- a/drivers/gpu/drm/v3d/v3d_perfmon.c
++++ b/drivers/gpu/drm/v3d/v3d_perfmon.c
+@@ -25,11 +25,12 @@ void v3d_perfmon_start(struct v3d_dev *v3d, struct v3d_perfmon *perfmon)
+ {
+ 	unsigned int i;
+ 	u32 mask;
+-	u8 ncounters = perfmon->ncounters;
++	u8 ncounters;
+ 
+ 	if (WARN_ON_ONCE(!perfmon || v3d->active_perfmon))
+ 		return;
+ 
++	ncounters = perfmon->ncounters;
+ 	mask = GENMASK(ncounters - 1, 0);
+ 
+ 	for (i = 0; i < ncounters; i++) {
+diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
+index 783890e8d43a2..477b3c5ad089c 100644
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -123,7 +123,7 @@ static bool vc4_crtc_get_scanout_position(struct drm_crtc *crtc,
+ 		*vpos /= 2;
+ 
+ 		/* Use hpos to correct for field offset in interlaced mode. */
+-		if (VC4_GET_FIELD(val, SCALER_DISPSTATX_FRAME_COUNT) % 2)
++		if (vc4_hvs_get_fifo_frame_count(dev, vc4_crtc_state->assigned_channel) % 2)
+ 			*hpos += mode->crtc_htotal / 2;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
+index 4329e09d357c8..801da3e8ebdb8 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.h
++++ b/drivers/gpu/drm/vc4/vc4_drv.h
+@@ -935,6 +935,7 @@ void vc4_irq_reset(struct drm_device *dev);
+ extern struct platform_driver vc4_hvs_driver;
+ void vc4_hvs_stop_channel(struct drm_device *dev, unsigned int output);
+ int vc4_hvs_get_fifo_from_output(struct drm_device *dev, unsigned int output);
++u8 vc4_hvs_get_fifo_frame_count(struct drm_device *dev, unsigned int fifo);
+ int vc4_hvs_atomic_check(struct drm_crtc *crtc, struct drm_atomic_state *state);
+ void vc4_hvs_atomic_begin(struct drm_crtc *crtc, struct drm_atomic_state *state);
+ void vc4_hvs_atomic_enable(struct drm_crtc *crtc, struct drm_atomic_state *state);
+diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c
+index 604933e20e6a2..9d88bfb50c9b0 100644
+--- a/drivers/gpu/drm/vc4/vc4_hvs.c
++++ b/drivers/gpu/drm/vc4/vc4_hvs.c
+@@ -197,6 +197,29 @@ static void vc4_hvs_update_gamma_lut(struct drm_crtc *crtc)
+ 	vc4_hvs_lut_load(crtc);
+ }
+ 
++u8 vc4_hvs_get_fifo_frame_count(struct drm_device *dev, unsigned int fifo)
++{
++	struct vc4_dev *vc4 = to_vc4_dev(dev);
++	u8 field = 0;
++
++	switch (fifo) {
++	case 0:
++		field = VC4_GET_FIELD(HVS_READ(SCALER_DISPSTAT1),
++				      SCALER_DISPSTAT1_FRCNT0);
++		break;
++	case 1:
++		field = VC4_GET_FIELD(HVS_READ(SCALER_DISPSTAT1),
++				      SCALER_DISPSTAT1_FRCNT1);
++		break;
++	case 2:
++		field = VC4_GET_FIELD(HVS_READ(SCALER_DISPSTAT2),
++				      SCALER_DISPSTAT2_FRCNT2);
++		break;
++	}
++
++	return field;
++}
++
+ int vc4_hvs_get_fifo_from_output(struct drm_device *dev, unsigned int output)
+ {
+ 	struct vc4_dev *vc4 = to_vc4_dev(dev);
+@@ -582,6 +605,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ 	struct vc4_hvs *hvs = NULL;
+ 	int ret;
+ 	u32 dispctrl;
++	u32 reg;
+ 
+ 	hvs = devm_kzalloc(&pdev->dev, sizeof(*hvs), GFP_KERNEL);
+ 	if (!hvs)
+@@ -653,6 +677,26 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ 
+ 	vc4->hvs = hvs;
+ 
++	reg = HVS_READ(SCALER_DISPECTRL);
++	reg &= ~SCALER_DISPECTRL_DSP2_MUX_MASK;
++	HVS_WRITE(SCALER_DISPECTRL,
++		  reg | VC4_SET_FIELD(0, SCALER_DISPECTRL_DSP2_MUX));
++
++	reg = HVS_READ(SCALER_DISPCTRL);
++	reg &= ~SCALER_DISPCTRL_DSP3_MUX_MASK;
++	HVS_WRITE(SCALER_DISPCTRL,
++		  reg | VC4_SET_FIELD(3, SCALER_DISPCTRL_DSP3_MUX));
++
++	reg = HVS_READ(SCALER_DISPEOLN);
++	reg &= ~SCALER_DISPEOLN_DSP4_MUX_MASK;
++	HVS_WRITE(SCALER_DISPEOLN,
++		  reg | VC4_SET_FIELD(3, SCALER_DISPEOLN_DSP4_MUX));
++
++	reg = HVS_READ(SCALER_DISPDITHER);
++	reg &= ~SCALER_DISPDITHER_DSP5_MUX_MASK;
++	HVS_WRITE(SCALER_DISPDITHER,
++		  reg | VC4_SET_FIELD(3, SCALER_DISPDITHER_DSP5_MUX));
++
+ 	dispctrl = HVS_READ(SCALER_DISPCTRL);
+ 
+ 	dispctrl |= SCALER_DISPCTRL_ENABLE;
+@@ -660,10 +704,6 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ 		    SCALER_DISPCTRL_DISPEIRQ(1) |
+ 		    SCALER_DISPCTRL_DISPEIRQ(2);
+ 
+-	/* Set DSP3 (PV1) to use HVS channel 2, which would otherwise
+-	 * be unused.
+-	 */
+-	dispctrl &= ~SCALER_DISPCTRL_DSP3_MUX_MASK;
+ 	dispctrl &= ~(SCALER_DISPCTRL_DMAEIRQ |
+ 		      SCALER_DISPCTRL_SLVWREIRQ |
+ 		      SCALER_DISPCTRL_SLVRDEIRQ |
+@@ -677,7 +717,6 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ 		      SCALER_DISPCTRL_DSPEISLUR(1) |
+ 		      SCALER_DISPCTRL_DSPEISLUR(2) |
+ 		      SCALER_DISPCTRL_SCLEIRQ);
+-	dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_DSP3_MUX);
+ 
+ 	HVS_WRITE(SCALER_DISPCTRL, dispctrl);
+ 
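
The mux programming added to vc4_hvs_bind() above is the standard read-modify-write of a register field: clear the field through its mask, then OR in the new value shifted into place (VC4_SET_FIELD plays the role FIELD_PREP plays elsewhere in the kernel). A self-contained 32-bit sketch of the idiom -- the field position is made up, the helpers approximate <linux/bits.h> / <linux/bitfield.h>, and __builtin_ctz is a GCC/Clang builtin:

    #include <stdint.h>
    #include <stdio.h>

    #define GENMASK32(h, l)       ((~0u << (l)) & (~0u >> (31 - (h))))
    #define FIELD_PREP32(mask, v) (((v) << __builtin_ctz(mask)) & (mask))

    #define DSP3_MUX_MASK GENMASK32(19, 18)    /* invented field position */

    int main(void)
    {
        uint32_t reg = 0xdeadbeef;             /* pretend register readback */

        reg &= ~DSP3_MUX_MASK;                 /* clear the field ...       */
        reg |= FIELD_PREP32(DSP3_MUX_MASK, 3); /* ... then route mux to 3   */
        printf("0x%08x\n", reg);
        return 0;
    }
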
+diff --git a/drivers/gpu/drm/vc4/vc4_kms.c b/drivers/gpu/drm/vc4/vc4_kms.c
+index 24de29bc1cda4..992d6a2400029 100644
+--- a/drivers/gpu/drm/vc4/vc4_kms.c
++++ b/drivers/gpu/drm/vc4/vc4_kms.c
+@@ -385,9 +385,10 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
+ 	}
+ 
+ 	if (vc4->hvs->hvs5) {
++		unsigned long state_rate = max(old_hvs_state->core_clock_rate,
++					       new_hvs_state->core_clock_rate);
+ 		unsigned long core_rate = max_t(unsigned long,
+-						500000000,
+-						new_hvs_state->core_clock_rate);
++						500000000, state_rate);
+ 
+ 		clk_set_min_rate(hvs->core_clk, core_rate);
+ 	}
+diff --git a/drivers/gpu/drm/vc4/vc4_regs.h b/drivers/gpu/drm/vc4/vc4_regs.h
+index 33410718089ec..bae8c9cd6f7ca 100644
+--- a/drivers/gpu/drm/vc4/vc4_regs.h
++++ b/drivers/gpu/drm/vc4/vc4_regs.h
+@@ -379,8 +379,6 @@
+ # define SCALER_DISPSTATX_MODE_EOF		3
+ # define SCALER_DISPSTATX_FULL			BIT(29)
+ # define SCALER_DISPSTATX_EMPTY			BIT(28)
+-# define SCALER_DISPSTATX_FRAME_COUNT_MASK	VC4_MASK(17, 12)
+-# define SCALER_DISPSTATX_FRAME_COUNT_SHIFT	12
+ # define SCALER_DISPSTATX_LINE_MASK		VC4_MASK(11, 0)
+ # define SCALER_DISPSTATX_LINE_SHIFT		0
+ 
+@@ -403,9 +401,15 @@
+ 						 (x) * (SCALER_DISPBKGND1 - \
+ 							SCALER_DISPBKGND0))
+ #define SCALER_DISPSTAT1                        0x00000058
++# define SCALER_DISPSTAT1_FRCNT0_MASK		VC4_MASK(23, 18)
++# define SCALER_DISPSTAT1_FRCNT0_SHIFT		18
++# define SCALER_DISPSTAT1_FRCNT1_MASK		VC4_MASK(17, 12)
++# define SCALER_DISPSTAT1_FRCNT1_SHIFT		12
++
+ #define SCALER_DISPSTATX(x)			(SCALER_DISPSTAT0 +        \
+ 						 (x) * (SCALER_DISPSTAT1 - \
+ 							SCALER_DISPSTAT0))
++
+ #define SCALER_DISPBASE1                        0x0000005c
+ #define SCALER_DISPBASEX(x)			(SCALER_DISPBASE0 +        \
+ 						 (x) * (SCALER_DISPBASE1 - \
+@@ -415,7 +419,11 @@
+ 						 (x) * (SCALER_DISPCTRL1 - \
+ 							SCALER_DISPCTRL0))
+ #define SCALER_DISPBKGND2                       0x00000064
++
+ #define SCALER_DISPSTAT2                        0x00000068
++# define SCALER_DISPSTAT2_FRCNT2_MASK		VC4_MASK(17, 12)
++# define SCALER_DISPSTAT2_FRCNT2_SHIFT		12
++
+ #define SCALER_DISPBASE2                        0x0000006c
+ #define SCALER_DISPALPHA2                       0x00000070
+ #define SCALER_GAMADDR                          0x00000078
+diff --git a/drivers/gpu/drm/vc4/vc4_txp.c b/drivers/gpu/drm/vc4/vc4_txp.c
+index 9809ca3e29451..82beb8c159f28 100644
+--- a/drivers/gpu/drm/vc4/vc4_txp.c
++++ b/drivers/gpu/drm/vc4/vc4_txp.c
+@@ -298,12 +298,18 @@ static void vc4_txp_connector_atomic_commit(struct drm_connector *conn,
+ 	if (WARN_ON(i == ARRAY_SIZE(drm_fmts)))
+ 		return;
+ 
+-	ctrl = TXP_GO | TXP_VSTART_AT_EOF | TXP_EI |
++	ctrl = TXP_GO | TXP_EI |
+ 	       VC4_SET_FIELD(0xf, TXP_BYTE_ENABLE) |
+ 	       VC4_SET_FIELD(txp_fmts[i], TXP_FORMAT);
+ 
+ 	if (fb->format->has_alpha)
+ 		ctrl |= TXP_ALPHA_ENABLE;
++	else
++		/*
++		 * If TXP_ALPHA_ENABLE isn't set and TXP_ALPHA_INVERT is, the
++		 * hardware will force the output padding to be 0xff.
++		 */
++		ctrl |= TXP_ALPHA_INVERT;
+ 
+ 	gem = drm_fb_cma_get_gem_obj(fb, 0);
+ 	TXP_WRITE(TXP_DST_PTR, gem->paddr + fb->offsets[0]);
+diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c b/drivers/gpu/drm/virtio/virtgpu_display.c
+index 5b00310ac4cd4..f73352e7b8329 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_display.c
++++ b/drivers/gpu/drm/virtio/virtgpu_display.c
+@@ -179,6 +179,8 @@ static int virtio_gpu_conn_get_modes(struct drm_connector *connector)
+ 		DRM_DEBUG("add mode: %dx%d\n", width, height);
+ 		mode = drm_cvt_mode(connector->dev, width, height, 60,
+ 				    false, false, false);
++		if (!mode)
++			return count;
+ 		mode->type |= DRM_MODE_TYPE_PREFERRED;
+ 		drm_mode_probed_add(connector, mode);
+ 		count++;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index 93431e8f66060..9410152f9d6f1 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -914,6 +914,15 @@ static int vmw_kms_new_framebuffer_surface(struct vmw_private *dev_priv,
+ 	 * Sanity checks.
+ 	 */
+ 
++	if (!drm_any_plane_has_format(&dev_priv->drm,
++				      mode_cmd->pixel_format,
++				      mode_cmd->modifier[0])) {
++		drm_dbg(&dev_priv->drm,
++			"unsupported pixel format %p4cc / modifier 0x%llx\n",
++			&mode_cmd->pixel_format, mode_cmd->modifier[0]);
++		return -EINVAL;
++	}
++
+ 	/* Surface must be marked as a scanout. */
+ 	if (unlikely(!surface->metadata.scanout))
+ 		return -EINVAL;
+@@ -1236,20 +1245,13 @@ static int vmw_kms_new_framebuffer_bo(struct vmw_private *dev_priv,
+ 		return -EINVAL;
+ 	}
+ 
+-	/* Limited framebuffer color depth support for screen objects */
+-	if (dev_priv->active_display_unit == vmw_du_screen_object) {
+-		switch (mode_cmd->pixel_format) {
+-		case DRM_FORMAT_XRGB8888:
+-		case DRM_FORMAT_ARGB8888:
+-			break;
+-		case DRM_FORMAT_XRGB1555:
+-		case DRM_FORMAT_RGB565:
+-			break;
+-		default:
+-			DRM_ERROR("Invalid pixel format: %p4cc\n",
+-				  &mode_cmd->pixel_format);
+-			return -EINVAL;
+-		}
++	if (!drm_any_plane_has_format(&dev_priv->drm,
++				      mode_cmd->pixel_format,
++				      mode_cmd->modifier[0])) {
++		drm_dbg(&dev_priv->drm,
++			"unsupported pixel format %p4cc / modifier 0x%llx\n",
++			&mode_cmd->pixel_format, mode_cmd->modifier[0]);
++		return -EINVAL;
+ 	}
+ 
+ 	vfbd = kzalloc(sizeof(*vfbd), GFP_KERNEL);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
+index 4d36e85073806..d9ebd02099a68 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
+@@ -247,7 +247,6 @@ struct vmw_framebuffer_bo {
+ static const uint32_t __maybe_unused vmw_primary_plane_formats[] = {
+ 	DRM_FORMAT_XRGB1555,
+ 	DRM_FORMAT_RGB565,
+-	DRM_FORMAT_RGB888,
+ 	DRM_FORMAT_XRGB8888,
+ 	DRM_FORMAT_ARGB8888,
+ };
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+index 708899ba21029..6542f14986510 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+@@ -859,22 +859,21 @@ void vmw_query_move_notify(struct ttm_buffer_object *bo,
+ 	struct ttm_device *bdev = bo->bdev;
+ 	struct vmw_private *dev_priv;
+ 
+-
+ 	dev_priv = container_of(bdev, struct vmw_private, bdev);
+ 
+ 	mutex_lock(&dev_priv->binding_mutex);
+ 
+-	dx_query_mob = container_of(bo, struct vmw_buffer_object, base);
+-	if (!dx_query_mob || !dx_query_mob->dx_query_ctx) {
+-		mutex_unlock(&dev_priv->binding_mutex);
+-		return;
+-	}
+-
+ 	/* If BO is being moved from MOB to system memory */
+ 	if (new_mem->mem_type == TTM_PL_SYSTEM &&
+ 	    old_mem->mem_type == VMW_PL_MOB) {
+ 		struct vmw_fence_obj *fence;
+ 
++		dx_query_mob = container_of(bo, struct vmw_buffer_object, base);
++		if (!dx_query_mob || !dx_query_mob->dx_query_ctx) {
++			mutex_unlock(&dev_priv->binding_mutex);
++			return;
++		}
++
+ 		(void) vmw_query_readback_all(dx_query_mob);
+ 		mutex_unlock(&dev_priv->binding_mutex);
+ 
+@@ -888,7 +887,6 @@ void vmw_query_move_notify(struct ttm_buffer_object *bo,
+ 		(void) ttm_bo_wait(bo, false, false);
+ 	} else
+ 		mutex_unlock(&dev_priv->binding_mutex);
+-
+ }
+ 
+ /**
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_hid.c b/drivers/hid/amd-sfh-hid/amd_sfh_hid.c
+index 2bf97b6ac9735..e2a9679e32be8 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_hid.c
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_hid.c
+@@ -141,10 +141,10 @@ int amdtp_hid_probe(u32 cur_hid_dev, struct amdtp_cl_data *cli_data)
+ 
+ 	hid->driver_data = hid_data;
+ 	cli_data->hid_sensor_hubs[cur_hid_dev] = hid;
+-	hid->bus = BUS_AMD_AMDTP;
++	hid->bus = BUS_AMD_SFH;
+ 	hid->vendor = AMD_SFH_HID_VENDOR;
+ 	hid->product = AMD_SFH_HID_PRODUCT;
+-	snprintf(hid->name, sizeof(hid->name), "%s %04X:%04X", "hid-amdtp",
++	snprintf(hid->name, sizeof(hid->name), "%s %04X:%04X", "hid-amdsfh",
+ 		 hid->vendor, hid->product);
+ 
+ 	rc = hid_add_device(hid);
+diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_hid.h b/drivers/hid/amd-sfh-hid/amd_sfh_hid.h
+index c60abd38054ca..cb04f47c86483 100644
+--- a/drivers/hid/amd-sfh-hid/amd_sfh_hid.h
++++ b/drivers/hid/amd-sfh-hid/amd_sfh_hid.h
+@@ -12,7 +12,7 @@
+ #define AMDSFH_HID_H
+ 
+ #define MAX_HID_DEVICES		5
+-#define BUS_AMD_AMDTP		0x20
++#define BUS_AMD_SFH		0x20
+ #define AMD_SFH_HID_VENDOR	0x1022
+ #define AMD_SFH_HID_PRODUCT	0x0001
+ 
+diff --git a/drivers/hid/hid-bigbenff.c b/drivers/hid/hid-bigbenff.c
+index 74ad8bf98bfd5..e8c5e3ac9fff1 100644
+--- a/drivers/hid/hid-bigbenff.c
++++ b/drivers/hid/hid-bigbenff.c
+@@ -347,6 +347,12 @@ static int bigben_probe(struct hid_device *hid,
+ 	bigben->report = list_entry(report_list->next,
+ 		struct hid_report, list);
+ 
++	if (list_empty(&hid->inputs)) {
++		hid_err(hid, "no inputs found\n");
++		error = -ENODEV;
++		goto error_hw_stop;
++	}
++
+ 	hidinput = list_first_entry(&hid->inputs, struct hid_input, list);
+ 	set_bit(FF_RUMBLE, hidinput->input->ffbit);
+ 
+diff --git a/drivers/hid/hid-elan.c b/drivers/hid/hid-elan.c
+index 3091355d48df6..8e4a5528e25df 100644
+--- a/drivers/hid/hid-elan.c
++++ b/drivers/hid/hid-elan.c
+@@ -188,7 +188,6 @@ static int elan_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ 	ret = input_mt_init_slots(input, ELAN_MAX_FINGERS, INPUT_MT_POINTER);
+ 	if (ret) {
+ 		hid_err(hdev, "Failed to init elan MT slots: %d\n", ret);
+-		input_free_device(input);
+ 		return ret;
+ 	}
+ 
+@@ -200,7 +199,6 @@ static int elan_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ 		hid_err(hdev, "Failed to register elan input device: %d\n",
+ 			ret);
+ 		input_mt_destroy_slots(input);
+-		input_free_device(input);
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/hid/hid-led.c b/drivers/hid/hid-led.c
+index c2c66ceca1327..7d82f8d426bbc 100644
+--- a/drivers/hid/hid-led.c
++++ b/drivers/hid/hid-led.c
+@@ -366,7 +366,7 @@ static const struct hidled_config hidled_configs[] = {
+ 		.type = DREAM_CHEEKY,
+ 		.name = "Dream Cheeky Webmail Notifier",
+ 		.short_name = "dream_cheeky",
+-		.max_brightness = 31,
++		.max_brightness = 63,
+ 		.num_leds = 1,
+ 		.report_size = 9,
+ 		.report_type = RAW_REQUEST,
+diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
+index dc5c35210c16a..20fc8d50a0398 100644
+--- a/drivers/hv/channel.c
++++ b/drivers/hv/channel.c
+@@ -1245,7 +1245,9 @@ u64 vmbus_next_request_id(struct vmbus_channel *channel, u64 rqst_addr)
+ 
+ 	/*
+ 	 * Cannot return an ID of 0, which is reserved for an unsolicited
+-	 * message from Hyper-V.
++	 * message from Hyper-V; Hyper-V does not acknowledge (respond to)
++	 * VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED requests with ID of
++	 * 0 sent by the guest.
+ 	 */
+ 	return current_id + 1;
+ }
+@@ -1270,7 +1272,7 @@ u64 vmbus_request_addr(struct vmbus_channel *channel, u64 trans_id)
+ 
+ 	/* Hyper-V can send an unsolicited message with ID of 0 */
+ 	if (!trans_id)
+-		return trans_id;
++		return VMBUS_RQST_ERROR;
+ 
+ 	spin_lock_irqsave(&rqstor->req_lock, flags);
+ 
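
The two hv channel hunks above are the two ends of a single convention: vmbus_next_request_id() hands out slot index + 1 as the wire ID so that 0 stays reserved for unsolicited host messages, and vmbus_request_addr() now rejects an incoming ID of 0 with VMBUS_RQST_ERROR instead of echoing it back as though it were a stored address. A toy encode/decode pair for that off-by-one scheme (names invented):

    #include <stdint.h>
    #include <stdio.h>

    #define RQST_ERROR UINT64_MAX     /* stand-in for VMBUS_RQST_ERROR */

    static uint64_t slot_to_wire_id(uint64_t slot)
    {
        return slot + 1;              /* never hand 0 to the host */
    }

    static uint64_t wire_id_to_slot(uint64_t id)
    {
        if (!id)                      /* 0 = unsolicited host message */
            return RQST_ERROR;
        return id - 1;
    }

    int main(void)
    {
        printf("slot 0 -> id %llu -> slot %llu\n",
               (unsigned long long)slot_to_wire_id(0),
               (unsigned long long)wire_id_to_slot(slot_to_wire_id(0)));
        printf("id 0 -> %llu (error)\n",
               (unsigned long long)wire_id_to_slot(0));
        return 0;
    }
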
+diff --git a/drivers/hwmon/peci/dimmtemp.c b/drivers/hwmon/peci/dimmtemp.c
+index c8222354c0056..53e58a9c28ea2 100644
+--- a/drivers/hwmon/peci/dimmtemp.c
++++ b/drivers/hwmon/peci/dimmtemp.c
+@@ -219,7 +219,7 @@ static int check_populated_dimms(struct peci_dimmtemp *priv)
+ 	int chan_rank_max = priv->gen_info->chan_rank_max;
+ 	int dimm_idx_max = priv->gen_info->dimm_idx_max;
+ 	u32 chan_rank_empty = 0;
+-	u64 dimm_mask = 0;
++	u32 dimm_mask = 0;
+ 	int chan_rank, dimm_idx, ret;
+ 	u32 pcs;
+ 
+@@ -278,9 +278,9 @@ static int check_populated_dimms(struct peci_dimmtemp *priv)
+ 		return -EAGAIN;
+ 	}
+ 
+-	dev_dbg(priv->dev, "Scanned populated DIMMs: %#llx\n", dimm_mask);
++	dev_dbg(priv->dev, "Scanned populated DIMMs: %#x\n", dimm_mask);
+ 
+-	bitmap_from_u64(priv->dimm_mask, dimm_mask);
++	bitmap_from_arr32(priv->dimm_mask, &dimm_mask, DIMM_NUMS_MAX);
+ 
+ 	return 0;
+ }
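
The dimmtemp change above narrows the scratch mask to u32 and switches to bitmap_from_arr32(), which is bounded by the destination width it is given. The hazard with bitmap_from_u64() is presumably that it always copies 64 bits' worth of unsigned longs -- two longs on a 32-bit build -- into a bitmap declared for far fewer bits. A small demonstration of that size mismatch (the DIMM_NUMS_MAX value is illustrative):

    #include <stdio.h>

    #define BITS_PER_LONG    (8 * sizeof(unsigned long))
    #define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)
    #define DIMM_NUMS_MAX    24   /* illustrative; the driver's cap is similar */

    int main(void)
    {
        /* what DECLARE_BITMAP(dimm_mask, DIMM_NUMS_MAX) allocates */
        unsigned long dimm_mask[BITS_TO_LONGS(DIMM_NUMS_MAX)];

        printf("dest holds %zu long(s); bitmap_from_u64() writes %zu\n",
               sizeof(dimm_mask) / sizeof(unsigned long),
               (size_t)BITS_TO_LONGS(64));
        return 0;
    }

On a 64-bit build both counts are one; on 32-bit the destination holds one long while bitmap_from_u64() writes two, i.e. past the end of the array.
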
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index d93574d6a1fb6..86429bfa4847c 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -2308,6 +2308,21 @@ static int pmbus_init_common(struct i2c_client *client, struct pmbus_data *data,
+ 	struct device *dev = &client->dev;
+ 	int page, ret;
+ 
++	/*
++	 * Figure out if PEC is enabled before accessing any other register.
++	 * Make sure PEC is disabled, will be enabled later if needed.
++	 */
++	client->flags &= ~I2C_CLIENT_PEC;
++
++	/* Enable PEC if the controller and bus supports it */
++	if (!(data->flags & PMBUS_NO_CAPABILITY)) {
++		ret = i2c_smbus_read_byte_data(client, PMBUS_CAPABILITY);
++		if (ret >= 0 && (ret & PB_CAPABILITY_ERROR_CHECK)) {
++			if (i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_PEC))
++				client->flags |= I2C_CLIENT_PEC;
++		}
++	}
++
+ 	/*
+ 	 * Some PMBus chips don't support PMBUS_STATUS_WORD, so try
+ 	 * to use PMBUS_STATUS_BYTE instead if that is the case.
+@@ -2326,19 +2341,6 @@ static int pmbus_init_common(struct i2c_client *client, struct pmbus_data *data,
+ 		data->has_status_word = true;
+ 	}
+ 
+-	/* Make sure PEC is disabled, will be enabled later if needed */
+-	client->flags &= ~I2C_CLIENT_PEC;
+-
+-	/* Enable PEC if the controller and bus supports it */
+-	if (!(data->flags & PMBUS_NO_CAPABILITY)) {
+-		ret = i2c_smbus_read_byte_data(client, PMBUS_CAPABILITY);
+-		if (ret >= 0 && (ret & PB_CAPABILITY_ERROR_CHECK)) {
+-			if (i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_PEC)) {
+-				client->flags |= I2C_CLIENT_PEC;
+-			}
+-		}
+-	}
+-
+ 	/*
+ 	 * Check if the chip is write protected. If it is, we can not clear
+ 	 * faults, and we should not try it. Also, in that case, writes into
+@@ -2548,11 +2550,78 @@ static int pmbus_regulator_get_error_flags(struct regulator_dev *rdev, unsigned
+ 	return 0;
+ }
+ 
++static int pmbus_regulator_get_voltage(struct regulator_dev *rdev)
++{
++	struct device *dev = rdev_get_dev(rdev);
++	struct i2c_client *client = to_i2c_client(dev->parent);
++	struct pmbus_data *data = i2c_get_clientdata(client);
++	struct pmbus_sensor s = {
++		.page = rdev_get_id(rdev),
++		.class = PSC_VOLTAGE_OUT,
++		.convert = true,
++	};
++
++	s.data = _pmbus_read_word_data(client, s.page, 0xff, PMBUS_READ_VOUT);
++	if (s.data < 0)
++		return s.data;
++
++	return (int)pmbus_reg2data(data, &s) * 1000; /* unit is uV */
++}
++
++static int pmbus_regulator_set_voltage(struct regulator_dev *rdev, int min_uv,
++				       int max_uv, unsigned int *selector)
++{
++	struct device *dev = rdev_get_dev(rdev);
++	struct i2c_client *client = to_i2c_client(dev->parent);
++	struct pmbus_data *data = i2c_get_clientdata(client);
++	struct pmbus_sensor s = {
++		.page = rdev_get_id(rdev),
++		.class = PSC_VOLTAGE_OUT,
++		.convert = true,
++		.data = -1,
++	};
++	int val = DIV_ROUND_CLOSEST(min_uv, 1000); /* convert to mV */
++	int low, high;
++
++	*selector = 0;
++
++	if (pmbus_check_word_register(client, s.page, PMBUS_MFR_VOUT_MIN))
++		s.data = _pmbus_read_word_data(client, s.page, 0xff, PMBUS_MFR_VOUT_MIN);
++	if (s.data < 0) {
++		s.data = _pmbus_read_word_data(client, s.page, 0xff, PMBUS_VOUT_MARGIN_LOW);
++		if (s.data < 0)
++			return s.data;
++	}
++	low = pmbus_reg2data(data, &s);
++
++	s.data = -1;
++	if (pmbus_check_word_register(client, s.page, PMBUS_MFR_VOUT_MAX))
++		s.data = _pmbus_read_word_data(client, s.page, 0xff, PMBUS_MFR_VOUT_MAX);
++	if (s.data < 0) {
++		s.data = _pmbus_read_word_data(client, s.page, 0xff, PMBUS_VOUT_MARGIN_HIGH);
++		if (s.data < 0)
++			return s.data;
++	}
++	high = pmbus_reg2data(data, &s);
++
++	/* Make sure we are within margins */
++	if (low > val)
++		val = low;
++	if (high < val)
++		val = high;
++
++	val = pmbus_data2reg(data, &s, val);
++
++	return _pmbus_write_word_data(client, s.page, PMBUS_VOUT_COMMAND, (u16)val);
++}
++
+ const struct regulator_ops pmbus_regulator_ops = {
+ 	.enable = pmbus_regulator_enable,
+ 	.disable = pmbus_regulator_disable,
+ 	.is_enabled = pmbus_regulator_is_enabled,
+ 	.get_error_flags = pmbus_regulator_get_error_flags,
++	.get_voltage = pmbus_regulator_get_voltage,
++	.set_voltage = pmbus_regulator_set_voltage,
+ };
+ EXPORT_SYMBOL_NS_GPL(pmbus_regulator_ops, PMBUS);
+ 
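
The new pmbus_regulator_set_voltage() above works in the chip's native domain: the microvolt request is rounded to millivolts, clamped between a low bound (PMBUS_MFR_VOUT_MIN where the register exists, else PMBUS_VOUT_MARGIN_LOW) and a high bound (PMBUS_MFR_VOUT_MAX, else PMBUS_VOUT_MARGIN_HIGH), converted with pmbus_data2reg() and written to PMBUS_VOUT_COMMAND; pmbus_regulator_get_voltage() goes the other way and scales the millivolt readback to microvolts. The clamp-and-scale arithmetic in isolation (bounds invented, register encoding omitted):

    #include <stdio.h>

    /* request in uV, bounds in mV, as in the hunk above */
    static int clamp_request_mv(int req_uv, int low_mv, int high_mv)
    {
        int val = (req_uv + 500) / 1000;   /* DIV_ROUND_CLOSEST to mV */

        if (val < low_mv)
            val = low_mv;
        if (val > high_mv)
            val = high_mv;
        return val;
    }

    int main(void)
    {
        /* ask for 0.80 V on a rail margined to 0.85 .. 1.25 V */
        int mv = clamp_request_mv(800000, 850, 1250);

        printf("VOUT_COMMAND gets %d mV; readback reports %d uV\n",
               mv, mv * 1000);             /* get_voltage scales mV -> uV */
        return 0;
    }
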
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index af00dca8d1acd..ee6ce92ab4c31 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -1379,7 +1379,7 @@ static int coresight_fixup_device_conns(struct coresight_device *csdev)
+ 			continue;
+ 		conn->child_dev =
+ 			coresight_find_csdev_by_fwnode(conn->child_fwnode);
+-		if (conn->child_dev) {
++		if (conn->child_dev && conn->child_dev->has_conns_grp) {
+ 			ret = coresight_make_links(csdev, conn,
+ 						   conn->child_dev);
+ 			if (ret)
+@@ -1571,6 +1571,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ 	int nr_refcnts = 1;
+ 	atomic_t *refcnts = NULL;
+ 	struct coresight_device *csdev;
++	bool registered = false;
+ 
+ 	csdev = kzalloc(sizeof(*csdev), GFP_KERNEL);
+ 	if (!csdev) {
+@@ -1591,7 +1592,8 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ 	refcnts = kcalloc(nr_refcnts, sizeof(*refcnts), GFP_KERNEL);
+ 	if (!refcnts) {
+ 		ret = -ENOMEM;
+-		goto err_free_csdev;
++		kfree(csdev);
++		goto err_out;
+ 	}
+ 
+ 	csdev->refcnt = refcnts;
+@@ -1616,6 +1618,13 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ 	csdev->dev.fwnode = fwnode_handle_get(dev_fwnode(desc->dev));
+ 	dev_set_name(&csdev->dev, "%s", desc->name);
+ 
++	/*
++	 * Make sure the device registration and the connection fixup
++	 * are synchronised, so that we don't see uninitialised devices
++	 * on the coresight bus while trying to resolve the connections.
++	 */
++	mutex_lock(&coresight_mutex);
++
+ 	ret = device_register(&csdev->dev);
+ 	if (ret) {
+ 		put_device(&csdev->dev);
+@@ -1623,7 +1632,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ 		 * All resources are free'd explicitly via
+ 		 * coresight_device_release(), triggered from put_device().
+ 		 */
+-		goto err_out;
++		goto out_unlock;
+ 	}
+ 
+ 	if (csdev->type == CORESIGHT_DEV_TYPE_SINK ||
+@@ -1638,11 +1647,11 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ 			 * from put_device(), which is in turn called from
+ 			 * function device_unregister().
+ 			 */
+-			goto err_out;
++			goto out_unlock;
+ 		}
+ 	}
+-
+-	mutex_lock(&coresight_mutex);
++	/* Device is now registered */
++	registered = true;
+ 
+ 	ret = coresight_create_conns_sysfs_group(csdev);
+ 	if (!ret)
+@@ -1652,16 +1661,18 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
+ 	if (!ret && cti_assoc_ops && cti_assoc_ops->add)
+ 		cti_assoc_ops->add(csdev);
+ 
++out_unlock:
+ 	mutex_unlock(&coresight_mutex);
+-	if (ret) {
++	/* Success */
++	if (!ret)
++		return csdev;
++
++	/* Unregister the device if needed */
++	if (registered) {
+ 		coresight_unregister(csdev);
+ 		return ERR_PTR(ret);
+ 	}
+ 
+-	return csdev;
+-
+-err_free_csdev:
+-	kfree(csdev);
+ err_out:
+ 	/* Cleanup the connection information */
+ 	coresight_release_platform_data(NULL, desc->pdata);
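
The coresight rework above does two related things. It widens the coresight_mutex critical section to span device_register() through the connection fixup, so code resolving connections never sees a half-initialised device (hence the new has_conns_grp test in the first hunk). And it adds a `registered` flag because ownership changes at device_register(): before that call a failed setup is unwound with a plain kfree(), after it the refcount owns the object and teardown has to go through coresight_unregister() so coresight_device_release() runs. A compressed userspace sketch of that two-phase unwind (all names invented):

    #include <stdbool.h>
    #include <stdlib.h>

    struct obj { int dummy; };
    static int  obj_register(struct obj *o)   { (void)o; return 0; }
    static void obj_unregister(struct obj *o) { free(o); } /* drops last ref */

    static struct obj *obj_create(bool later_step_fails)
    {
        struct obj *o = calloc(1, sizeof(*o));
        bool registered = false;

        if (!o)
            return NULL;
        if (obj_register(o))
            goto err;              /* never registered: free directly */
        registered = true;

        if (later_step_fails)      /* e.g. sysfs group creation */
            goto err;
        return o;

    err:
        if (registered)
            obj_unregister(o);     /* release through the refcount path */
        else
            free(o);
        return NULL;
    }

    int main(void) { return obj_create(false) ? 0 : 1; }
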
+diff --git a/drivers/i2c/busses/i2c-at91-master.c b/drivers/i2c/busses/i2c-at91-master.c
+index b0eae94909f44..c0c35785a0dc4 100644
+--- a/drivers/i2c/busses/i2c-at91-master.c
++++ b/drivers/i2c/busses/i2c-at91-master.c
+@@ -656,6 +656,7 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num)
+ 	unsigned int_addr_flag = 0;
+ 	struct i2c_msg *m_start = msg;
+ 	bool is_read;
++	u8 *dma_buf = NULL;
+ 
+ 	dev_dbg(&adap->dev, "at91_xfer: processing %d messages:\n", num);
+ 
+@@ -703,7 +704,17 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num)
+ 	dev->msg = m_start;
+ 	dev->recv_len_abort = false;
+ 
++	if (dev->use_dma) {
++		dma_buf = i2c_get_dma_safe_msg_buf(m_start, 1);
++		if (!dma_buf) {
++			ret = -ENOMEM;
++			goto out;
++		}
++		dev->buf = dma_buf;
++	}
++
+ 	ret = at91_do_twi_transfer(dev);
++	i2c_put_dma_safe_msg_buf(dma_buf, m_start, !ret);
+ 
+ 	ret = (ret < 0) ? ret : num;
+ out:
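
The at91 hunk above adopts the I2C core's bounce-buffer helpers: i2c_get_dma_safe_msg_buf() returns msg->buf when it is already safe to DMA to/from, or an allocated copy otherwise, and i2c_put_dma_safe_msg_buf() copies read data back into the message and frees the copy. The shape of the pairing as a kernel-side sketch -- do_transfer stands in for the driver engine (at91_do_twi_transfer() above):

    #include <linux/i2c.h>

    static int xfer_with_bounce(struct i2c_adapter *adap, struct i2c_msg *msg,
                                int (*do_transfer)(struct i2c_adapter *, u8 *))
    {
        u8 *dma_buf;
        int ret;

        dma_buf = i2c_get_dma_safe_msg_buf(msg, 1 /* min length */);
        if (!dma_buf)
            return -ENOMEM;

        ret = do_transfer(adap, dma_buf);

        /* third argument: whether data was actually transferred, so a
         * read's payload is only copied back into msg->buf on success */
        i2c_put_dma_safe_msg_buf(dma_buf, msg, !ret);
        return ret;
    }
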
+diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c
+index 71aad029425d8..c638f2efb97c7 100644
+--- a/drivers/i2c/busses/i2c-npcm7xx.c
++++ b/drivers/i2c/busses/i2c-npcm7xx.c
+@@ -359,14 +359,14 @@ static int npcm_i2c_get_SCL(struct i2c_adapter *_adap)
+ {
+ 	struct npcm_i2c *bus = container_of(_adap, struct npcm_i2c, adap);
+ 
+-	return !!(I2CCTL3_SCL_LVL & ioread32(bus->reg + NPCM_I2CCTL3));
++	return !!(I2CCTL3_SCL_LVL & ioread8(bus->reg + NPCM_I2CCTL3));
+ }
+ 
+ static int npcm_i2c_get_SDA(struct i2c_adapter *_adap)
+ {
+ 	struct npcm_i2c *bus = container_of(_adap, struct npcm_i2c, adap);
+ 
+-	return !!(I2CCTL3_SDA_LVL & ioread32(bus->reg + NPCM_I2CCTL3));
++	return !!(I2CCTL3_SDA_LVL & ioread8(bus->reg + NPCM_I2CCTL3));
+ }
+ 
+ static inline u16 npcm_i2c_get_index(struct npcm_i2c *bus)
+@@ -563,6 +563,15 @@ static inline void npcm_i2c_nack(struct npcm_i2c *bus)
+ 	iowrite8(val, bus->reg + NPCM_I2CCTL1);
+ }
+ 
++static inline void npcm_i2c_clear_master_status(struct npcm_i2c *bus)
++{
++	u8 val;
++
++	/* Clear NEGACK, STASTR and BER bits */
++	val = NPCM_I2CST_BER | NPCM_I2CST_NEGACK | NPCM_I2CST_STASTR;
++	iowrite8(val, bus->reg + NPCM_I2CST);
++}
++
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ static void npcm_i2c_slave_int_enable(struct npcm_i2c *bus, bool enable)
+ {
+@@ -642,8 +651,8 @@ static void npcm_i2c_reset(struct npcm_i2c *bus)
+ 	iowrite8(NPCM_I2CCST_BB, bus->reg + NPCM_I2CCST);
+ 	iowrite8(0xFF, bus->reg + NPCM_I2CST);
+ 
+-	/* Clear EOB bit */
+-	iowrite8(NPCM_I2CCST3_EO_BUSY, bus->reg + NPCM_I2CCST3);
++	/* Clear and disable EOB */
++	npcm_i2c_eob_int(bus, false);
+ 
+ 	/* Clear all fifo bits: */
+ 	iowrite8(NPCM_I2CFIF_CTS_CLR_FIFO, bus->reg + NPCM_I2CFIF_CTS);
+@@ -655,6 +664,9 @@ static void npcm_i2c_reset(struct npcm_i2c *bus)
+ 	}
+ #endif
+ 
++	/* clear status bits for spurious interrupts */
++	npcm_i2c_clear_master_status(bus);
++
+ 	bus->state = I2C_IDLE;
+ }
+ 
+@@ -815,15 +827,6 @@ static void npcm_i2c_read_fifo(struct npcm_i2c *bus, u8 bytes_in_fifo)
+ 	}
+ }
+ 
+-static inline void npcm_i2c_clear_master_status(struct npcm_i2c *bus)
+-{
+-	u8 val;
+-
+-	/* Clear NEGACK, STASTR and BER bits */
+-	val = NPCM_I2CST_BER | NPCM_I2CST_NEGACK | NPCM_I2CST_STASTR;
+-	iowrite8(val, bus->reg + NPCM_I2CST);
+-}
+-
+ static void npcm_i2c_master_abort(struct npcm_i2c *bus)
+ {
+ 	/* Only current master is allowed to issue a stop condition */
+@@ -1231,7 +1234,16 @@ static irqreturn_t npcm_i2c_int_slave_handler(struct npcm_i2c *bus)
+ 		ret = IRQ_HANDLED;
+ 	} /* SDAST */
+ 
+-	return ret;
++	/*
++	 * if irq is not one of the above, make sure EOB is disabled and all
++	 * status bits are cleared.
++	 */
++	if (ret == IRQ_NONE) {
++		npcm_i2c_eob_int(bus, false);
++		npcm_i2c_clear_master_status(bus);
++	}
++
++	return IRQ_HANDLED;
+ }
+ 
+ static int npcm_i2c_reg_slave(struct i2c_client *client)
+@@ -1467,6 +1479,9 @@ static void npcm_i2c_irq_handle_nack(struct npcm_i2c *bus)
+ 		npcm_i2c_eob_int(bus, false);
+ 		npcm_i2c_master_stop(bus);
+ 
++		/* Clear SDA Status bit (by reading dummy byte) */
++		npcm_i2c_rd_byte(bus);
++
+ 		/*
+ 		 * The bus is released from stall only after the SW clears
+ 		 * NEGACK bit. Then a Stop condition is sent.
+@@ -1474,6 +1489,8 @@ static void npcm_i2c_irq_handle_nack(struct npcm_i2c *bus)
+ 		npcm_i2c_clear_master_status(bus);
+ 		readx_poll_timeout_atomic(ioread8, bus->reg + NPCM_I2CCST, val,
+ 					  !(val & NPCM_I2CCST_BUSY), 10, 200);
++		/* verify no status bits are still set after bus is released */
++		npcm_i2c_clear_master_status(bus);
+ 	}
+ 	bus->state = I2C_IDLE;
+ 
+@@ -1672,10 +1689,10 @@ static int npcm_i2c_recovery_tgclk(struct i2c_adapter *_adap)
+ 	int              iter = 27;
+ 
+ 	if ((npcm_i2c_get_SDA(_adap) == 1) && (npcm_i2c_get_SCL(_adap) == 1)) {
+-		dev_dbg(bus->dev, "bus%d recovery skipped, bus not stuck",
+-			bus->num);
++		dev_dbg(bus->dev, "bus%d-0x%x recovery skipped, bus not stuck",
++			bus->num, bus->dest_addr);
+ 		npcm_i2c_reset(bus);
+-		return status;
++		return 0;
+ 	}
+ 
+ 	npcm_i2c_int_enable(bus, false);
+@@ -1909,6 +1926,7 @@ static int npcm_i2c_init_module(struct npcm_i2c *bus, enum i2c_mode mode,
+ 	    bus_freq_hz < I2C_FREQ_MIN_HZ || bus_freq_hz > I2C_FREQ_MAX_HZ)
+ 		return -EINVAL;
+ 
++	npcm_i2c_int_enable(bus, false);
+ 	npcm_i2c_disable(bus);
+ 
+ 	/* Configure FIFO mode : */
+@@ -1937,10 +1955,17 @@ static int npcm_i2c_init_module(struct npcm_i2c *bus, enum i2c_mode mode,
+ 	val = (val | NPCM_I2CCTL1_NMINTE) & ~NPCM_I2CCTL1_RWS;
+ 	iowrite8(val, bus->reg + NPCM_I2CCTL1);
+ 
+-	npcm_i2c_int_enable(bus, true);
+-
+ 	npcm_i2c_reset(bus);
+ 
++	/* check HW is OK: SDA and SCL should be high at this point. */
++	if ((npcm_i2c_get_SDA(&bus->adap) == 0) || (npcm_i2c_get_SCL(&bus->adap) == 0)) {
++		dev_err(bus->dev, "I2C%d init fail: lines are low\n", bus->num);
++		dev_err(bus->dev, "SDA=%d SCL=%d\n", npcm_i2c_get_SDA(&bus->adap),
++			npcm_i2c_get_SCL(&bus->adap));
++		return -ENXIO;
++	}
++
++	npcm_i2c_int_enable(bus, true);
+ 	return 0;
+ }
+ 
+@@ -1988,10 +2013,14 @@ static irqreturn_t npcm_i2c_bus_irq(int irq, void *dev_id)
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ 	if (bus->slave) {
+ 		bus->master_or_slave = I2C_SLAVE;
+-		return npcm_i2c_int_slave_handler(bus);
++		if (npcm_i2c_int_slave_handler(bus))
++			return IRQ_HANDLED;
+ 	}
+ #endif
+-	return IRQ_NONE;
++	/* clear status bits for spurious interrupts */
++	npcm_i2c_clear_master_status(bus);
++
++	return IRQ_HANDLED;
+ }
+ 
+ static bool npcm_i2c_master_start_xmit(struct npcm_i2c *bus,
+@@ -2047,8 +2076,7 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ 	u16 nwrite, nread;
+ 	u8 *write_data, *read_data;
+ 	u8 slave_addr;
+-	int timeout;
+-	int ret = 0;
++	unsigned long timeout;
+ 	bool read_block = false;
+ 	bool read_PEC = false;
+ 	u8 bus_busy;
+@@ -2099,13 +2127,13 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ 	 * 9: bits per transaction (including the ack/nack)
+ 	 */
+ 	timeout_usec = (2 * 9 * USEC_PER_SEC / bus->bus_freq) * (2 + nread + nwrite);
+-	timeout = max(msecs_to_jiffies(35), usecs_to_jiffies(timeout_usec));
++	timeout = max_t(unsigned long, bus->adap.timeout, usecs_to_jiffies(timeout_usec));
+ 	if (nwrite >= 32 * 1024 || nread >= 32 * 1024) {
+ 		dev_err(bus->dev, "i2c%d buffer too big\n", bus->num);
+ 		return -EINVAL;
+ 	}
+ 
+-	time_left = jiffies + msecs_to_jiffies(DEFAULT_STALL_COUNT) + 1;
++	time_left = jiffies + timeout + 1;
+ 	do {
+ 		/*
+ 		 * we must clear slave address immediately when the bus is not
+@@ -2138,12 +2166,12 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ 	bus->read_block_use = read_block;
+ 
+ 	reinit_completion(&bus->cmd_complete);
+-	if (!npcm_i2c_master_start_xmit(bus, slave_addr, nwrite, nread,
+-					write_data, read_data, read_PEC,
+-					read_block))
+-		ret = -EBUSY;
+ 
+-	if (ret != -EBUSY) {
++	npcm_i2c_int_enable(bus, true);
++
++	if (npcm_i2c_master_start_xmit(bus, slave_addr, nwrite, nread,
++				       write_data, read_data, read_PEC,
++				       read_block)) {
+ 		time_left = wait_for_completion_timeout(&bus->cmd_complete,
+ 							timeout);
+ 
+@@ -2157,26 +2185,31 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+ 			}
+ 		}
+ 	}
+-	ret = bus->cmd_err;
+ 
+ 	/* if there was BER, check if need to recover the bus: */
+ 	if (bus->cmd_err == -EAGAIN)
+-		ret = i2c_recover_bus(adap);
++		bus->cmd_err = i2c_recover_bus(adap);
+ 
+ 	/*
+ 	 * After any type of error, check if LAST bit is still set,
+ 	 * due to a HW issue.
+ 	 * It cannot be cleared without resetting the module.
+ 	 */
+-	if (bus->cmd_err &&
+-	    (NPCM_I2CRXF_CTL_LAST_PEC & ioread8(bus->reg + NPCM_I2CRXF_CTL)))
++	else if (bus->cmd_err &&
++		 (NPCM_I2CRXF_CTL_LAST_PEC & ioread8(bus->reg + NPCM_I2CRXF_CTL)))
+ 		npcm_i2c_reset(bus);
+ 
++	/* after any xfer, successful or not, stall and EOB must be disabled */
++	npcm_i2c_stall_after_start(bus, false);
++	npcm_i2c_eob_int(bus, false);
++
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+ 	/* reenable slave if it was enabled */
+ 	if (bus->slave)
+ 		iowrite8((bus->slave->addr & 0x7F) | NPCM_I2CADDR_SAEN,
+ 			 bus->reg + NPCM_I2CADDR1);
++#else
++	npcm_i2c_int_enable(bus, false);
+ #endif
+ 	return bus->cmd_err;
+ }
+@@ -2269,7 +2302,7 @@ static int npcm_i2c_probe_bus(struct platform_device *pdev)
+ 	adap = &bus->adap;
+ 	adap->owner = THIS_MODULE;
+ 	adap->retries = 3;
+-	adap->timeout = HZ;
++	adap->timeout = msecs_to_jiffies(35);
+ 	adap->algo = &npcm_i2c_algo;
+ 	adap->quirks = &npcm_i2c_quirks;
+ 	adap->algo_data = bus;
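
The npcm timeout rework above replaces the fixed stall count with a frequency-derived bound: nine bit-times per byte (eight data bits plus ack/nack), doubled as clock-stretching margin, over 2 + nread + nwrite byte slots to cover address and command overhead -- then floored by the adapter timeout, now set to 35 ms. Plugging in a 100 kHz bus and a 33-byte payload:

    #include <stdio.h>

    #define USEC_PER_SEC 1000000UL

    int main(void)
    {
        unsigned long bus_freq = 100000;      /* 100 kHz */
        unsigned int nwrite = 1, nread = 32;  /* 33 payload bytes */

        /* 2 * 9 bit-times per byte slot, (2 + payload) slots */
        unsigned long timeout_usec =
            (2 * 9 * USEC_PER_SEC / bus_freq) * (2 + nread + nwrite);

        printf("%lu us\n", timeout_usec);     /* 180 us * 35 = 6300 us */
        return 0;
    }

That is 6.3 ms, below the 35 ms adapter floor, which therefore wins; on slower buses or larger transfers the computed bound takes over.
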
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 0db3d75590662..0064c632af5cd 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -1063,8 +1063,10 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ 	pm_runtime_enable(dev);
+ 	pm_runtime_get_sync(dev);
+ 	ret = rcar_i2c_clock_calculate(priv);
+-	if (ret < 0)
+-		goto out_pm_put;
++	if (ret < 0) {
++		pm_runtime_put(dev);
++		goto out_pm_disable;
++	}
+ 
+ 	rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */
+ 
+@@ -1093,19 +1095,19 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+ 
+ 	ret = platform_get_irq(pdev, 0);
+ 	if (ret < 0)
+-		goto out_pm_disable;
++		goto out_pm_put;
+ 	priv->irq = ret;
+ 	ret = devm_request_irq(dev, priv->irq, irqhandler, irqflags, dev_name(dev), priv);
+ 	if (ret < 0) {
+ 		dev_err(dev, "cannot get irq %d\n", priv->irq);
+-		goto out_pm_disable;
++		goto out_pm_put;
+ 	}
+ 
+ 	platform_set_drvdata(pdev, priv);
+ 
+ 	ret = i2c_add_numbered_adapter(adap);
+ 	if (ret < 0)
+-		goto out_pm_disable;
++		goto out_pm_put;
+ 
+ 	if (priv->flags & ID_P_HOST_NOTIFY) {
+ 		priv->host_notify_client = i2c_new_slave_host_notify_device(adap);
+@@ -1122,7 +1124,8 @@ static int rcar_i2c_probe(struct platform_device *pdev)
+  out_del_device:
+ 	i2c_del_adapter(&priv->adap);
+  out_pm_put:
+-	pm_runtime_put(dev);
++	if (priv->flags & ID_P_PM_BLOCKED)
++		pm_runtime_put(dev);
+  out_pm_disable:
+ 	pm_runtime_disable(dev);
+ 	return ret;
+diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
+index 1783a6ea5427b..3ebdd42fec362 100644
+--- a/drivers/infiniband/hw/hfi1/file_ops.c
++++ b/drivers/infiniband/hw/hfi1/file_ops.c
+@@ -265,6 +265,8 @@ static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from)
+ 	unsigned long dim = from->nr_segs;
+ 	int idx;
+ 
++	if (!HFI1_CAP_IS_KSET(SDMA))
++		return -EINVAL;
+ 	idx = srcu_read_lock(&fd->pq_srcu);
+ 	pq = srcu_dereference(fd->pq, &fd->pq_srcu);
+ 	if (!cq || !pq) {
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index 4436ed41547c4..436372b314312 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -489,7 +489,7 @@ void set_link_ipg(struct hfi1_pportdata *ppd)
+ 	u16 shift, mult;
+ 	u64 src;
+ 	u32 current_egress_rate; /* Mbits /sec */
+-	u32 max_pkt_time;
++	u64 max_pkt_time;
+ 	/*
+ 	 * max_pkt_time is the maximum packet egress time in units
+ 	 * of the fabric clock period 1/(805 MHz).
+diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
+index f07d328689d3d..a95b654f52540 100644
+--- a/drivers/infiniband/hw/hfi1/sdma.c
++++ b/drivers/infiniband/hw/hfi1/sdma.c
+@@ -1288,11 +1288,13 @@ void sdma_clean(struct hfi1_devdata *dd, size_t num_engines)
+ 		kvfree(sde->tx_ring);
+ 		sde->tx_ring = NULL;
+ 	}
+-	spin_lock_irq(&dd->sde_map_lock);
+-	sdma_map_free(rcu_access_pointer(dd->sdma_map));
+-	RCU_INIT_POINTER(dd->sdma_map, NULL);
+-	spin_unlock_irq(&dd->sde_map_lock);
+-	synchronize_rcu();
++	if (rcu_access_pointer(dd->sdma_map)) {
++		spin_lock_irq(&dd->sde_map_lock);
++		sdma_map_free(rcu_access_pointer(dd->sdma_map));
++		RCU_INIT_POINTER(dd->sdma_map, NULL);
++		spin_unlock_irq(&dd->sde_map_lock);
++		synchronize_rcu();
++	}
+ 	kfree(dd->per_sdma);
+ 	dd->per_sdma = NULL;
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 3083d6db1d688..2d690bf8ac16e 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -535,6 +535,11 @@ struct hns_roce_cmd_context {
+ 	u16			busy;
+ };
+ 
++enum hns_roce_cmdq_state {
++	HNS_ROCE_CMDQ_STATE_NORMAL,
++	HNS_ROCE_CMDQ_STATE_FATAL_ERR,
++};
++
+ struct hns_roce_cmdq {
+ 	struct dma_pool		*pool;
+ 	struct semaphore	poll_sem;
+@@ -554,6 +559,7 @@ struct hns_roce_cmdq {
+ 	 * close device, switch into poll mode(non event mode)
+ 	 */
+ 	u8			use_events;
++	enum hns_roce_cmdq_state state;
+ };
+ 
+ struct hns_roce_cmd_mailbox {
+@@ -725,7 +731,6 @@ struct hns_roce_caps {
+ 	u32		num_pi_qps;
+ 	u32		reserved_qps;
+ 	int		num_qpc_timer;
+-	int		num_cqc_timer;
+ 	u32		num_srqs;
+ 	u32		max_wqes;
+ 	u32		max_srq_wrs;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 2b0cef17ad452..86f6a4aae1e5a 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -1265,6 +1265,16 @@ static int hns_roce_cmq_csq_done(struct hns_roce_dev *hr_dev)
+ 	return tail == priv->cmq.csq.head;
+ }
+ 
++static void update_cmdq_status(struct hns_roce_dev *hr_dev)
++{
++	struct hns_roce_v2_priv *priv = hr_dev->priv;
++	struct hnae3_handle *handle = priv->handle;
++
++	if (handle->rinfo.reset_state == HNS_ROCE_STATE_RST_INIT ||
++	    handle->rinfo.instance_state == HNS_ROCE_STATE_INIT)
++		hr_dev->cmd.state = HNS_ROCE_CMDQ_STATE_FATAL_ERR;
++}
++
+ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ 			       struct hns_roce_cmq_desc *desc, int num)
+ {
+@@ -1318,6 +1328,8 @@ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ 			 csq->head, tail);
+ 		csq->head = tail;
+ 
++		update_cmdq_status(hr_dev);
++
+ 		ret = -EAGAIN;
+ 	}
+ 
+@@ -1332,6 +1344,9 @@ static int hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ 	bool busy;
+ 	int ret;
+ 
++	if (hr_dev->cmd.state == HNS_ROCE_CMDQ_STATE_FATAL_ERR)
++		return -EIO;
++
+ 	if (!v2_chk_mbox_is_avail(hr_dev, &busy))
+ 		return busy ? -EBUSY : 0;
+ 
+@@ -1528,6 +1543,9 @@ static void hns_roce_function_clear(struct hns_roce_dev *hr_dev)
+ {
+ 	int i;
+ 
++	if (hr_dev->cmd.state == HNS_ROCE_CMDQ_STATE_FATAL_ERR)
++		return;
++
+ 	for (i = hr_dev->func_num - 1; i >= 0; i--) {
+ 		__hns_roce_function_clear(hr_dev, i);
+ 		if (i != 0)
+@@ -1947,7 +1965,7 @@ static void set_default_caps(struct hns_roce_dev *hr_dev)
+ 	caps->num_mtpts		= HNS_ROCE_V2_MAX_MTPT_NUM;
+ 	caps->num_pds		= HNS_ROCE_V2_MAX_PD_NUM;
+ 	caps->num_qpc_timer	= HNS_ROCE_V2_MAX_QPC_TIMER_NUM;
+-	caps->num_cqc_timer	= HNS_ROCE_V2_MAX_CQC_TIMER_NUM;
++	caps->cqc_timer_bt_num	= HNS_ROCE_V2_MAX_CQC_TIMER_BT_NUM;
+ 
+ 	caps->max_qp_init_rdma	= HNS_ROCE_V2_MAX_QP_INIT_RDMA;
+ 	caps->max_qp_dest_rdma	= HNS_ROCE_V2_MAX_QP_DEST_RDMA;
+@@ -2243,7 +2261,6 @@ static int hns_roce_query_pf_caps(struct hns_roce_dev *hr_dev)
+ 	caps->max_rq_sg = roundup_pow_of_two(caps->max_rq_sg);
+ 	caps->max_extend_sg	     = le32_to_cpu(resp_a->max_extend_sg);
+ 	caps->num_qpc_timer	     = le16_to_cpu(resp_a->num_qpc_timer);
+-	caps->num_cqc_timer	     = le16_to_cpu(resp_a->num_cqc_timer);
+ 	caps->max_srq_sges	     = le16_to_cpu(resp_a->max_srq_sges);
+ 	caps->max_srq_sges = roundup_pow_of_two(caps->max_srq_sges);
+ 	caps->num_aeq_vectors	     = resp_a->num_aeq_vectors;
+@@ -3000,6 +3017,9 @@ static int v2_wait_mbox_complete(struct hns_roce_dev *hr_dev, u32 timeout,
+ 	mb_st = (struct hns_roce_mbox_status *)desc.data;
+ 	end = msecs_to_jiffies(timeout) + jiffies;
+ 	while (v2_chk_mbox_is_avail(hr_dev, &busy)) {
++		if (hr_dev->cmd.state == HNS_ROCE_CMDQ_STATE_FATAL_ERR)
++			return -EIO;
++
+ 		status = 0;
+ 		hns_roce_cmq_setup_basic_desc(&desc, HNS_ROCE_OPC_QUERY_MB_ST,
+ 					      true);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index 0d87b627601e9..9cbb230de03bd 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -41,7 +41,7 @@
+ #define HNS_ROCE_V2_MAX_SRQ_WR			0x8000
+ #define HNS_ROCE_V2_MAX_SRQ_SGE			64
+ #define HNS_ROCE_V2_MAX_CQ_NUM			0x100000
+-#define HNS_ROCE_V2_MAX_CQC_TIMER_NUM		0x100
++#define HNS_ROCE_V2_MAX_CQC_TIMER_BT_NUM	0x100
+ #define HNS_ROCE_V2_MAX_SRQ_NUM			0x100000
+ #define HNS_ROCE_V2_MAX_CQE_NUM			0x400000
+ #define HNS_ROCE_V2_MAX_RQ_SGE_NUM		64
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index f73ba619f3756..c8af4ebd7cbd3 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -737,7 +737,7 @@ static int hns_roce_init_hem(struct hns_roce_dev *hr_dev)
+ 		ret = hns_roce_init_hem_table(hr_dev, &hr_dev->cqc_timer_table,
+ 					      HEM_TYPE_CQC_TIMER,
+ 					      hr_dev->caps.cqc_timer_entry_sz,
+-					      hr_dev->caps.num_cqc_timer, 1);
++					      hr_dev->caps.cqc_timer_bt_num, 1);
+ 		if (ret) {
+ 			dev_err(dev,
+ 				"Failed to init CQC timer memory, aborting.\n");
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index 8ef112f883a77..3acab569fbb94 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -2775,7 +2775,7 @@ void rvt_qp_iter(struct rvt_dev_info *rdi,
+ EXPORT_SYMBOL(rvt_qp_iter);
+ 
+ /*
+- * This should be called with s_lock held.
++ * This should be called with s_lock and r_lock held.
+  */
+ void rvt_send_complete(struct rvt_qp *qp, struct rvt_swqe *wqe,
+ 		       enum ib_wc_status status)
+@@ -3134,7 +3134,9 @@ send_comp:
+ 	rvp->n_loop_pkts++;
+ flush_send:
+ 	sqp->s_rnr_retry = sqp->s_rnr_retry_cnt;
++	spin_lock(&sqp->r_lock);
+ 	rvt_send_complete(sqp, wqe, send_status);
++	spin_unlock(&sqp->r_lock);
+ 	if (local_ops) {
+ 		atomic_dec(&sqp->local_ops_pending);
+ 		local_ops = 0;
+@@ -3188,7 +3190,9 @@ serr:
+ 	spin_unlock_irqrestore(&qp->r_lock, flags);
+ serr_no_r_lock:
+ 	spin_lock_irqsave(&sqp->s_lock, flags);
++	spin_lock(&sqp->r_lock);
+ 	rvt_send_complete(sqp, wqe, send_status);
++	spin_unlock(&sqp->r_lock);
+ 	if (sqp->ibqp.qp_type == IB_QPT_RC) {
+ 		int lastwqe;
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
+index 873a9b10307c0..86cc2e18a7fda 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
++++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
+@@ -206,8 +206,10 @@ static struct rxe_mcg *rxe_get_mcg(struct rxe_dev *rxe, union ib_gid *mgid)
+ 
+ 	/* speculative alloc of new mcg */
+ 	mcg = kzalloc(sizeof(*mcg), GFP_KERNEL);
+-	if (!mcg)
+-		return ERR_PTR(-ENOMEM);
++	if (!mcg) {
++		err = -ENOMEM;
++		goto err_dec;
++	}
+ 
+ 	spin_lock_bh(&rxe->mcg_lock);
+ 	/* re-check to see if someone else just added it */
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index ae5fbc79dd5c6..8a1cff80a68e7 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -661,7 +661,7 @@ next_wqe:
+ 	opcode = next_opcode(qp, wqe, wqe->wr.opcode);
+ 	if (unlikely(opcode < 0)) {
+ 		wqe->status = IB_WC_LOC_QP_OP_ERR;
+-		goto exit;
++		goto err;
+ 	}
+ 
+ 	mask = rxe_opcode[opcode].mask;
+diff --git a/drivers/input/keyboard/gpio_keys.c b/drivers/input/keyboard/gpio_keys.c
+index d75a8b179a8ae..a5dc4ab87fa1f 100644
+--- a/drivers/input/keyboard/gpio_keys.c
++++ b/drivers/input/keyboard/gpio_keys.c
+@@ -131,7 +131,7 @@ static void gpio_keys_quiesce_key(void *data)
+ 
+ 	if (!bdata->gpiod)
+ 		hrtimer_cancel(&bdata->release_timer);
+-	if (bdata->debounce_use_hrtimer)
++	else if (bdata->debounce_use_hrtimer)
+ 		hrtimer_cancel(&bdata->debounce_timer);
+ 	else
+ 		cancel_delayed_work_sync(&bdata->work);
+diff --git a/drivers/input/misc/sparcspkr.c b/drivers/input/misc/sparcspkr.c
+index fe43e5557ed72..cdcb7737c46aa 100644
+--- a/drivers/input/misc/sparcspkr.c
++++ b/drivers/input/misc/sparcspkr.c
+@@ -205,6 +205,7 @@ static int bbc_beep_probe(struct platform_device *op)
+ 
+ 	info = &state->u.bbc;
+ 	info->clock_freq = of_getintprop_default(dp, "clock-frequency", 0);
++	of_node_put(dp);
+ 	if (!info->clock_freq)
+ 		goto out_free;
+ 
+diff --git a/drivers/input/touchscreen/stmfts.c b/drivers/input/touchscreen/stmfts.c
+index 72e0b767e1ba4..c175d44c52f37 100644
+--- a/drivers/input/touchscreen/stmfts.c
++++ b/drivers/input/touchscreen/stmfts.c
+@@ -337,13 +337,15 @@ static int stmfts_input_open(struct input_dev *dev)
+ 	struct stmfts_data *sdata = input_get_drvdata(dev);
+ 	int err;
+ 
+-	err = pm_runtime_get_sync(&sdata->client->dev);
+-	if (err < 0)
+-		goto out;
++	err = pm_runtime_resume_and_get(&sdata->client->dev);
++	if (err)
++		return err;
+ 
+ 	err = i2c_smbus_write_byte(sdata->client, STMFTS_MS_MT_SENSE_ON);
+-	if (err)
+-		goto out;
++	if (err) {
++		pm_runtime_put_sync(&sdata->client->dev);
++		return err;
++	}
+ 
+ 	mutex_lock(&sdata->mutex);
+ 	sdata->running = true;
+@@ -366,9 +368,7 @@ static int stmfts_input_open(struct input_dev *dev)
+ 				 "failed to enable touchkey\n");
+ 	}
+ 
+-out:
+-	pm_runtime_put_noidle(&sdata->client->dev);
+-	return err;
++	return 0;
+ }
+ 
+ static void stmfts_input_close(struct input_dev *dev)
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index b4a798c7b347f..d8060503ba51b 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -84,7 +84,7 @@
+ #define ACPI_DEVFLAG_LINT1              0x80
+ #define ACPI_DEVFLAG_ATSDIS             0x10000000
+ 
+-#define LOOP_TIMEOUT	100000
++#define LOOP_TIMEOUT	2000000
+ /*
+  * ACPI table definitions
+  *
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index a1ada7bff44e6..079694f894b85 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -1838,17 +1838,10 @@ void amd_iommu_domain_update(struct protection_domain *domain)
+ 	amd_iommu_domain_flush_complete(domain);
+ }
+ 
+-static void __init amd_iommu_init_dma_ops(void)
+-{
+-	swiotlb = (iommu_default_passthrough() || sme_me_mask) ? 1 : 0;
+-}
+-
+ int __init amd_iommu_init_api(void)
+ {
+ 	int err;
+ 
+-	amd_iommu_init_dma_ops();
+-
+ 	err = bus_set_iommu(&pci_bus_type, &amd_iommu_ops);
+ 	if (err)
+ 		return err;
+diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c
+index e56b137ceabd1..afb3efd565b78 100644
+--- a/drivers/iommu/amd/iommu_v2.c
++++ b/drivers/iommu/amd/iommu_v2.c
+@@ -956,6 +956,7 @@ static void __exit amd_iommu_v2_exit(void)
+ {
+ 	struct device_state *dev_state, *next;
+ 	unsigned long flags;
++	LIST_HEAD(freelist);
+ 
+ 	if (!amd_iommu_v2_supported())
+ 		return;
+@@ -975,11 +976,20 @@ static void __exit amd_iommu_v2_exit(void)
+ 
+ 		put_device_state(dev_state);
+ 		list_del(&dev_state->list);
+-		free_device_state(dev_state);
++		list_add_tail(&dev_state->list, &freelist);
+ 	}
+ 
+ 	spin_unlock_irqrestore(&state_lock, flags);
+ 
++	/*
++	 * Since free_device_state waits on the count to be zero,
++	 * we need to free dev_state outside the spinlock.
++	 */
++	list_for_each_entry_safe(dev_state, next, &freelist, list) {
++		list_del(&dev_state->list);
++		free_device_state(dev_state);
++	}
++
+ 	destroy_workqueue(iommu_wq);
+ }
+ 
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+index c623dae1e1154..1ef7bbb4acf30 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+@@ -6,6 +6,7 @@
+ #include <linux/mm.h>
+ #include <linux/mmu_context.h>
+ #include <linux/mmu_notifier.h>
++#include <linux/sched/mm.h>
+ #include <linux/slab.h>
+ 
+ #include "arm-smmu-v3.h"
+@@ -96,9 +97,14 @@ static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm)
+ 	struct arm_smmu_ctx_desc *cd;
+ 	struct arm_smmu_ctx_desc *ret = NULL;
+ 
++	/* Don't free the mm until we release the ASID */
++	mmgrab(mm);
++
+ 	asid = arm64_mm_context_get(mm);
+-	if (!asid)
+-		return ERR_PTR(-ESRCH);
++	if (!asid) {
++		err = -ESRCH;
++		goto out_drop_mm;
++	}
+ 
+ 	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
+ 	if (!cd) {
+@@ -165,6 +171,8 @@ out_free_cd:
+ 	kfree(cd);
+ out_put_context:
+ 	arm64_mm_context_put(mm);
++out_drop_mm:
++	mmdrop(mm);
+ 	return err < 0 ? ERR_PTR(err) : ret;
+ }
+ 
+@@ -173,6 +181,7 @@ static void arm_smmu_free_shared_cd(struct arm_smmu_ctx_desc *cd)
+ 	if (arm_smmu_free_asid(cd)) {
+ 		/* Unpin ASID */
+ 		arm64_mm_context_put(cd->mm);
++		mmdrop(cd->mm);
+ 		kfree(cd);
+ 	}
+ }
+diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
+index 09f6e1c0f9c07..2932281e93fc4 100644
+--- a/drivers/iommu/dma-iommu.c
++++ b/drivers/iommu/dma-iommu.c
+@@ -776,6 +776,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
+ 	unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
+ 	struct page **pages;
+ 	dma_addr_t iova;
++	ssize_t ret;
+ 
+ 	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
+ 	    iommu_deferred_attach(dev, domain))
+@@ -813,8 +814,8 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
+ 			arch_dma_prep_coherent(sg_page(sg), sg->length);
+ 	}
+ 
+-	if (iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot)
+-			< size)
++	ret = iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot);
++	if (ret < 0 || ret < size)
+ 		goto out_free_sg;
+ 
+ 	sgt->sgl->dma_address = iova;
+@@ -1209,7 +1210,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
+ 	 * implementation - it knows better than we do.
+ 	 */
+ 	ret = iommu_map_sg_atomic(domain, iova, sg, nents, prot);
+-	if (ret < iova_len)
++	if (ret < 0 || ret < iova_len)
+ 		goto out_free_iova;
+ 
+ 	return __finalise_sg(dev, sg, nents, iova);
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 0ea47e17b379e..ba9a63cac47cc 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -5031,7 +5031,7 @@ static void quirk_igfx_skip_te_disable(struct pci_dev *dev)
+ 	ver = (dev->device >> 8) & 0xff;
+ 	if (ver != 0x45 && ver != 0x46 && ver != 0x4c &&
+ 	    ver != 0x4e && ver != 0x8a && ver != 0x98 &&
+-	    ver != 0x9a)
++	    ver != 0x9a && ver != 0xa7)
+ 		return;
+ 
+ 	if (risky_device(dev))
+diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
+index 50f57624610f6..a16e0fe57cd8d 100644
+--- a/drivers/iommu/msm_iommu.c
++++ b/drivers/iommu/msm_iommu.c
+@@ -610,16 +610,19 @@ static void insert_iommu_master(struct device *dev,
+ static int qcom_iommu_of_xlate(struct device *dev,
+ 			       struct of_phandle_args *spec)
+ {
+-	struct msm_iommu_dev *iommu;
++	struct msm_iommu_dev *iommu = NULL, *iter;
+ 	unsigned long flags;
+ 	int ret = 0;
+ 
+ 	spin_lock_irqsave(&msm_iommu_lock, flags);
+-	list_for_each_entry(iommu, &qcom_iommu_devices, dev_node)
+-		if (iommu->dev->of_node == spec->np)
++	list_for_each_entry(iter, &qcom_iommu_devices, dev_node) {
++		if (iter->dev->of_node == spec->np) {
++			iommu = iter;
+ 			break;
++		}
++	}
+ 
+-	if (!iommu || iommu->dev->of_node != spec->np) {
++	if (!iommu) {
+ 		ret = -ENODEV;
+ 		goto fail;
+ 	}
+diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
+index 6fd75a60abd67..1a31f4707222a 100644
+--- a/drivers/iommu/mtk_iommu.c
++++ b/drivers/iommu/mtk_iommu.c
+@@ -446,7 +446,7 @@ static void mtk_iommu_domain_free(struct iommu_domain *domain)
+ static int mtk_iommu_attach_device(struct iommu_domain *domain,
+ 				   struct device *dev)
+ {
+-	struct mtk_iommu_data *data = dev_iommu_priv_get(dev);
++	struct mtk_iommu_data *data = dev_iommu_priv_get(dev), *frstdata;
+ 	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
+ 	struct device *m4udev = data->dev;
+ 	int ret, domid;
+@@ -456,20 +456,24 @@ static int mtk_iommu_attach_device(struct iommu_domain *domain,
+ 		return domid;
+ 
+ 	if (!dom->data) {
+-		if (mtk_iommu_domain_finalise(dom, data, domid))
++		/* Data is in the frstdata in sharing pgtable case. */
++		frstdata = mtk_iommu_get_m4u_data();
++
++		if (mtk_iommu_domain_finalise(dom, frstdata, domid))
+ 			return -ENODEV;
+ 		dom->data = data;
+ 	}
+ 
++	mutex_lock(&data->mutex);
+ 	if (!data->m4u_dom) { /* Initialize the M4U HW */
+ 		ret = pm_runtime_resume_and_get(m4udev);
+ 		if (ret < 0)
+-			return ret;
++			goto err_unlock;
+ 
+ 		ret = mtk_iommu_hw_init(data);
+ 		if (ret) {
+ 			pm_runtime_put(m4udev);
+-			return ret;
++			goto err_unlock;
+ 		}
+ 		data->m4u_dom = dom;
+ 		writel(dom->cfg.arm_v7s_cfg.ttbr & MMU_PT_ADDR_MASK,
+@@ -477,9 +481,14 @@ static int mtk_iommu_attach_device(struct iommu_domain *domain,
+ 
+ 		pm_runtime_put(m4udev);
+ 	}
++	mutex_unlock(&data->mutex);
+ 
+ 	mtk_iommu_config(data, dev, true, domid);
+ 	return 0;
++
++err_unlock:
++	mutex_unlock(&data->mutex);
++	return ret;
+ }
+ 
+ static void mtk_iommu_detach_device(struct iommu_domain *domain,
+@@ -572,6 +581,9 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
+ 	 * All the ports in each a device should be in the same larbs.
+ 	 */
+ 	larbid = MTK_M4U_TO_LARB(fwspec->ids[0]);
++	if (larbid >= MTK_LARB_NR_MAX)
++		return ERR_PTR(-EINVAL);
++
+ 	for (i = 1; i < fwspec->num_ids; i++) {
+ 		larbidx = MTK_M4U_TO_LARB(fwspec->ids[i]);
+ 		if (larbid != larbidx) {
+@@ -581,6 +593,9 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
+ 		}
+ 	}
+ 	larbdev = data->larb_imu[larbid].dev;
++	if (!larbdev)
++		return ERR_PTR(-EINVAL);
++
+ 	link = device_link_add(dev, larbdev,
+ 			       DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
+ 	if (!link)
+@@ -619,6 +634,7 @@ static struct iommu_group *mtk_iommu_device_group(struct device *dev)
+ 	if (domid < 0)
+ 		return ERR_PTR(domid);
+ 
++	mutex_lock(&data->mutex);
+ 	group = data->m4u_group[domid];
+ 	if (!group) {
+ 		group = iommu_group_alloc();
+@@ -627,6 +643,7 @@ static struct iommu_group *mtk_iommu_device_group(struct device *dev)
+ 	} else {
+ 		iommu_group_ref_get(group);
+ 	}
++	mutex_unlock(&data->mutex);
+ 	return group;
+ }
+ 
+@@ -907,6 +924,7 @@ static int mtk_iommu_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	platform_set_drvdata(pdev, data);
++	mutex_init(&data->mutex);
+ 
+ 	ret = iommu_device_sysfs_add(&data->iommu, dev, NULL,
+ 				     "mtk-iommu.%pa", &ioaddr);
+@@ -952,10 +970,8 @@ static int mtk_iommu_remove(struct platform_device *pdev)
+ 	iommu_device_sysfs_remove(&data->iommu);
+ 	iommu_device_unregister(&data->iommu);
+ 
+-	if (iommu_present(&platform_bus_type))
+-		bus_set_iommu(&platform_bus_type, NULL);
++	list_del(&data->list);
+ 
+-	clk_disable_unprepare(data->bclk);
+ 	device_link_remove(data->smicomm_dev, &pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 	devm_free_irq(&pdev->dev, data->irq, data);
+diff --git a/drivers/iommu/mtk_iommu.h b/drivers/iommu/mtk_iommu.h
+index b742432220c5f..5e8da947affc8 100644
+--- a/drivers/iommu/mtk_iommu.h
++++ b/drivers/iommu/mtk_iommu.h
+@@ -80,6 +80,8 @@ struct mtk_iommu_data {
+ 
+ 	struct dma_iommu_mapping	*mapping; /* For mtk_iommu_v1.c */
+ 
++	struct mutex			mutex; /* Protect m4u_group/m4u_dom above */
++
+ 	struct list_head		list;
+ 	struct mtk_smi_larb_iommu	larb_imu[MTK_LARB_NR_MAX];
+ };
+diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
+index ecff800656e6b..74563f689fbd9 100644
+--- a/drivers/iommu/mtk_iommu_v1.c
++++ b/drivers/iommu/mtk_iommu_v1.c
+@@ -80,6 +80,7 @@
+ /* MTK generation one iommu HW only support 4K size mapping */
+ #define MT2701_IOMMU_PAGE_SHIFT			12
+ #define MT2701_IOMMU_PAGE_SIZE			(1UL << MT2701_IOMMU_PAGE_SHIFT)
++#define MT2701_LARB_NR_MAX			3
+ 
+ /*
+  * MTK m4u support 4GB iova address space, and only support 4K page
+@@ -457,6 +458,9 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
+ 
+ 	/* Link the consumer device with the smi-larb device(supplier) */
+ 	larbid = mt2701_m4u_to_larb(fwspec->ids[0]);
++	if (larbid >= MT2701_LARB_NR_MAX)
++		return ERR_PTR(-EINVAL);
++
+ 	for (idx = 1; idx < fwspec->num_ids; idx++) {
+ 		larbidx = mt2701_m4u_to_larb(fwspec->ids[idx]);
+ 		if (larbid != larbidx) {
+@@ -467,6 +471,9 @@ static struct iommu_device *mtk_iommu_probe_device(struct device *dev)
+ 	}
+ 
+ 	larbdev = data->larb_imu[larbid].dev;
++	if (!larbdev)
++		return ERR_PTR(-EINVAL);
++
+ 	link = device_link_add(dev, larbdev,
+ 			       DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
+ 	if (!link)
+diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c
+index 5b8d571c041dc..1120084cba09d 100644
+--- a/drivers/irqchip/irq-armada-370-xp.c
++++ b/drivers/irqchip/irq-armada-370-xp.c
+@@ -308,7 +308,16 @@ static inline int armada_370_xp_msi_init(struct device_node *node,
+ 
+ static void armada_xp_mpic_perf_init(void)
+ {
+-	unsigned long cpuid = cpu_logical_map(smp_processor_id());
++	unsigned long cpuid;
++
++	/*
++	 * This Performance Counter Overflow interrupt is specific to
++	 * Armada 370 and XP. It is not available on Armada 375, 38x and 39x.
++	 */
++	if (!of_machine_is_compatible("marvell,armada-370-xp"))
++		return;
++
++	cpuid = cpu_logical_map(smp_processor_id());
+ 
+ 	/* Enable Performance Counter Overflow interrupts */
+ 	writel(ARMADA_370_XP_INT_CAUSE_PERF(cpuid),
+diff --git a/drivers/irqchip/irq-aspeed-i2c-ic.c b/drivers/irqchip/irq-aspeed-i2c-ic.c
+index a47db16ff9603..9c9fc3e2967ed 100644
+--- a/drivers/irqchip/irq-aspeed-i2c-ic.c
++++ b/drivers/irqchip/irq-aspeed-i2c-ic.c
+@@ -77,8 +77,8 @@ static int __init aspeed_i2c_ic_of_init(struct device_node *node,
+ 	}
+ 
+ 	i2c_ic->parent_irq = irq_of_parse_and_map(node, 0);
+-	if (i2c_ic->parent_irq < 0) {
+-		ret = i2c_ic->parent_irq;
++	if (!i2c_ic->parent_irq) {
++		ret = -EINVAL;
+ 		goto err_iounmap;
+ 	}
+ 
+diff --git a/drivers/irqchip/irq-aspeed-scu-ic.c b/drivers/irqchip/irq-aspeed-scu-ic.c
+index 18b77c3e6db4b..279e92cf0b16b 100644
+--- a/drivers/irqchip/irq-aspeed-scu-ic.c
++++ b/drivers/irqchip/irq-aspeed-scu-ic.c
+@@ -157,8 +157,8 @@ static int aspeed_scu_ic_of_init_common(struct aspeed_scu_ic *scu_ic,
+ 	}
+ 
+ 	irq = irq_of_parse_and_map(node, 0);
+-	if (irq < 0) {
+-		rc = irq;
++	if (!irq) {
++		rc = -EINVAL;
+ 		goto err;
+ 	}
+ 
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index b252d5534547c..1af2b50f36f3e 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -556,7 +556,8 @@ static void gic_irq_nmi_teardown(struct irq_data *d)
+ 
+ static void gic_eoi_irq(struct irq_data *d)
+ {
+-	gic_write_eoir(gic_irq(d));
++	write_gicreg(gic_irq(d), ICC_EOIR1_EL1);
++	isb();
+ }
+ 
+ static void gic_eoimode1_eoi_irq(struct irq_data *d)
+@@ -640,82 +641,101 @@ static void gic_deactivate_unhandled(u32 irqnr)
+ 		if (irqnr < 8192)
+ 			gic_write_dir(irqnr);
+ 	} else {
+-		gic_write_eoir(irqnr);
++		write_gicreg(irqnr, ICC_EOIR1_EL1);
++		isb();
+ 	}
+ }
+ 
+-static inline void gic_handle_nmi(u32 irqnr, struct pt_regs *regs)
++/*
++ * Follow a read of the IAR with any HW maintenance that needs to happen prior
++ * to invoking the relevant IRQ handler. We must do two things:
++ *
++ * (1) Ensure instruction ordering between a read of IAR and subsequent
++ *     instructions in the IRQ handler using an ISB.
++ *
++ *     It is possible for the IAR to report an IRQ which was signalled *after*
++ *     the CPU took an IRQ exception as multiple interrupts can race to be
++ *     recognized by the GIC, earlier interrupts could be withdrawn, and/or
++ *     later interrupts could be prioritized by the GIC.
++ *
++ *     For devices which are tightly coupled to the CPU, such as PMUs, a
++ *     context synchronization event is necessary to ensure that system
++ *     register state is not stale, as these may have been indirectly written
++ *     *after* exception entry.
++ *
++ * (2) Deactivate the interrupt when EOI mode 1 is in use.
++ */
++static inline void gic_complete_ack(u32 irqnr)
+ {
+-	bool irqs_enabled = interrupts_enabled(regs);
+-	int err;
+-
+-	if (irqs_enabled)
+-		nmi_enter();
+-
+ 	if (static_branch_likely(&supports_deactivate_key))
+-		gic_write_eoir(irqnr);
+-	/*
+-	 * Leave the PSR.I bit set to prevent other NMIs to be
+-	 * received while handling this one.
+-	 * PSR.I will be restored when we ERET to the
+-	 * interrupted context.
+-	 */
+-	err = generic_handle_domain_nmi(gic_data.domain, irqnr);
+-	if (err)
+-		gic_deactivate_unhandled(irqnr);
++		write_gicreg(irqnr, ICC_EOIR1_EL1);
+ 
+-	if (irqs_enabled)
+-		nmi_exit();
++	isb();
+ }
+ 
+-static u32 do_read_iar(struct pt_regs *regs)
++static bool gic_rpr_is_nmi_prio(void)
+ {
+-	u32 iar;
++	if (!gic_supports_nmi())
++		return false;
+ 
+-	if (gic_supports_nmi() && unlikely(!interrupts_enabled(regs))) {
+-		u64 pmr;
++	return unlikely(gic_read_rpr() == GICD_INT_RPR_PRI(GICD_INT_NMI_PRI));
++}
+ 
+-		/*
+-		 * We were in a context with IRQs disabled. However, the
+-		 * entry code has set PMR to a value that allows any
+-		 * interrupt to be acknowledged, and not just NMIs. This can
+-		 * lead to surprising effects if the NMI has been retired in
+-		 * the meantime, and that there is an IRQ pending. The IRQ
+-		 * would then be taken in NMI context, something that nobody
+-		 * wants to debug twice.
+-		 *
+-		 * Until we sort this, drop PMR again to a level that will
+-		 * actually only allow NMIs before reading IAR, and then
+-		 * restore it to what it was.
+-		 */
+-		pmr = gic_read_pmr();
+-		gic_pmr_mask_irqs();
+-		isb();
++static bool gic_irqnr_is_special(u32 irqnr)
++{
++	return irqnr >= 1020 && irqnr <= 1023;
++}
+ 
+-		iar = gic_read_iar();
++static void __gic_handle_irq(u32 irqnr, struct pt_regs *regs)
++{
++	if (gic_irqnr_is_special(irqnr))
++		return;
+ 
+-		gic_write_pmr(pmr);
+-	} else {
+-		iar = gic_read_iar();
++	gic_complete_ack(irqnr);
++
++	if (generic_handle_domain_irq(gic_data.domain, irqnr)) {
++		WARN_ONCE(true, "Unexpected interrupt (irqnr %u)\n", irqnr);
++		gic_deactivate_unhandled(irqnr);
+ 	}
++}
++
++static void __gic_handle_nmi(u32 irqnr, struct pt_regs *regs)
++{
++	if (gic_irqnr_is_special(irqnr))
++		return;
++
++	gic_complete_ack(irqnr);
+ 
+-	return iar;
++	if (generic_handle_domain_nmi(gic_data.domain, irqnr)) {
++		WARN_ONCE(true, "Unexpected pseudo-NMI (irqnr %u)\n", irqnr);
++		gic_deactivate_unhandled(irqnr);
++	}
+ }
+ 
+-static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
++/*
++ * An exception has been taken from a context with IRQs enabled, and this could
++ * be an IRQ or an NMI.
++ *
++ * The entry code called us with DAIF.IF set to keep NMIs masked. We must clear
++ * DAIF.IF (and update ICC_PMR_EL1 to mask regular IRQs) prior to returning,
++ * after handling any NMI but before handling any IRQ.
++ *
++ * The entry code has performed IRQ entry, and if an NMI is detected we must
++ * perform NMI entry/exit around invoking the handler.
++ */
++static void __gic_handle_irq_from_irqson(struct pt_regs *regs)
+ {
++	bool is_nmi;
+ 	u32 irqnr;
+ 
+-	irqnr = do_read_iar(regs);
++	irqnr = gic_read_iar();
+ 
+-	/* Check for special IDs first */
+-	if ((irqnr >= 1020 && irqnr <= 1023))
+-		return;
++	is_nmi = gic_rpr_is_nmi_prio();
+ 
+-	if (gic_supports_nmi() &&
+-	    unlikely(gic_read_rpr() == GICD_INT_RPR_PRI(GICD_INT_NMI_PRI))) {
+-		gic_handle_nmi(irqnr, regs);
+-		return;
++	if (is_nmi) {
++		nmi_enter();
++		__gic_handle_nmi(irqnr, regs);
++		nmi_exit();
+ 	}
+ 
+ 	if (gic_prio_masking_enabled()) {
+@@ -723,15 +743,52 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
+ 		gic_arch_enable_irqs();
+ 	}
+ 
+-	if (static_branch_likely(&supports_deactivate_key))
+-		gic_write_eoir(irqnr);
+-	else
+-		isb();
++	if (!is_nmi)
++		__gic_handle_irq(irqnr, regs);
++}
+ 
+-	if (generic_handle_domain_irq(gic_data.domain, irqnr)) {
+-		WARN_ONCE(true, "Unexpected interrupt received!\n");
+-		gic_deactivate_unhandled(irqnr);
+-	}
++/*
++ * An exception has been taken from a context with IRQs disabled, which can only
++ * be an NMI.
++ *
++ * The entry code called us with DAIF.IF set to keep NMIs masked. We must leave
++ * DAIF.IF (and ICC_PMR_EL1) unchanged.
++ *
++ * The entry code has performed NMI entry.
++ */
++static void __gic_handle_irq_from_irqsoff(struct pt_regs *regs)
++{
++	u64 pmr;
++	u32 irqnr;
++
++	/*
++	 * We were in a context with IRQs disabled. However, the
++	 * entry code has set PMR to a value that allows any
++	 * interrupt to be acknowledged, and not just NMIs. This can
++	 * lead to surprising effects if the NMI has been retired in
++	 * the meantime, and that there is an IRQ pending. The IRQ
++	 * would then be taken in NMI context, something that nobody
++	 * wants to debug twice.
++	 *
++	 * Until we sort this, drop PMR again to a level that will
++	 * actually only allow NMIs before reading IAR, and then
++	 * restore it to what it was.
++	 */
++	pmr = gic_read_pmr();
++	gic_pmr_mask_irqs();
++	isb();
++	irqnr = gic_read_iar();
++	gic_write_pmr(pmr);
++
++	__gic_handle_nmi(irqnr, regs);
++}
++
++static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
++{
++	if (unlikely(gic_supports_nmi() && !interrupts_enabled(regs)))
++		__gic_handle_irq_from_irqsoff(regs);
++	else
++		__gic_handle_irq_from_irqson(regs);
+ }
+ 
+ static u32 gic_get_pribits(void)
+diff --git a/drivers/irqchip/irq-sni-exiu.c b/drivers/irqchip/irq-sni-exiu.c
+index abd011fcecf4a..c7db617e1a2f6 100644
+--- a/drivers/irqchip/irq-sni-exiu.c
++++ b/drivers/irqchip/irq-sni-exiu.c
+@@ -37,11 +37,26 @@ struct exiu_irq_data {
+ 	u32		spi_base;
+ };
+ 
+-static void exiu_irq_eoi(struct irq_data *d)
++static void exiu_irq_ack(struct irq_data *d)
+ {
+ 	struct exiu_irq_data *data = irq_data_get_irq_chip_data(d);
+ 
+ 	writel(BIT(d->hwirq), data->base + EIREQCLR);
++}
++
++static void exiu_irq_eoi(struct irq_data *d)
++{
++	struct exiu_irq_data *data = irq_data_get_irq_chip_data(d);
++
++	/*
++	 * Level triggered interrupts are latched and must be cleared during
++	 * EOI or the interrupt will be jammed on. Of course if a level
++	 * triggered interrupt is still asserted then the write will not clear
++	 * the interrupt.
++	 */
++	if (irqd_is_level_type(d))
++		writel(BIT(d->hwirq), data->base + EIREQCLR);
++
+ 	irq_chip_eoi_parent(d);
+ }
+ 
+@@ -91,10 +106,13 @@ static int exiu_irq_set_type(struct irq_data *d, unsigned int type)
+ 	writel_relaxed(val, data->base + EILVL);
+ 
+ 	val = readl_relaxed(data->base + EIEDG);
+-	if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_LEVEL_HIGH)
++	if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_LEVEL_HIGH) {
+ 		val &= ~BIT(d->hwirq);
+-	else
++		irq_set_handler_locked(d, handle_fasteoi_irq);
++	} else {
+ 		val |= BIT(d->hwirq);
++		irq_set_handler_locked(d, handle_fasteoi_ack_irq);
++	}
+ 	writel_relaxed(val, data->base + EIEDG);
+ 
+ 	writel_relaxed(BIT(d->hwirq), data->base + EIREQCLR);
+@@ -104,6 +122,7 @@ static int exiu_irq_set_type(struct irq_data *d, unsigned int type)
+ 
+ static struct irq_chip exiu_irq_chip = {
+ 	.name			= "EXIU",
++	.irq_ack		= exiu_irq_ack,
+ 	.irq_eoi		= exiu_irq_eoi,
+ 	.irq_enable		= exiu_irq_enable,
+ 	.irq_mask		= exiu_irq_mask,
+diff --git a/drivers/irqchip/irq-xtensa-mx.c b/drivers/irqchip/irq-xtensa-mx.c
+index 27933338f7b36..8c581c985aa7d 100644
+--- a/drivers/irqchip/irq-xtensa-mx.c
++++ b/drivers/irqchip/irq-xtensa-mx.c
+@@ -151,14 +151,25 @@ static struct irq_chip xtensa_mx_irq_chip = {
+ 	.irq_set_affinity = xtensa_mx_irq_set_affinity,
+ };
+ 
++static void __init xtensa_mx_init_common(struct irq_domain *root_domain)
++{
++	unsigned int i;
++
++	irq_set_default_host(root_domain);
++	secondary_init_irq();
++
++	/* Initialize default IRQ routing to CPU 0 */
++	for (i = 0; i < XCHAL_NUM_EXTINTERRUPTS; ++i)
++		set_er(1, MIROUT(i));
++}
++
+ int __init xtensa_mx_init_legacy(struct device_node *interrupt_parent)
+ {
+ 	struct irq_domain *root_domain =
+ 		irq_domain_add_legacy(NULL, NR_IRQS - 1, 1, 0,
+ 				&xtensa_mx_irq_domain_ops,
+ 				&xtensa_mx_irq_chip);
+-	irq_set_default_host(root_domain);
+-	secondary_init_irq();
++	xtensa_mx_init_common(root_domain);
+ 	return 0;
+ }
+ 
+@@ -168,8 +179,7 @@ static int __init xtensa_mx_init(struct device_node *np,
+ 	struct irq_domain *root_domain =
+ 		irq_domain_add_linear(np, NR_IRQS, &xtensa_mx_irq_domain_ops,
+ 				&xtensa_mx_irq_chip);
+-	irq_set_default_host(root_domain);
+-	secondary_init_irq();
++	xtensa_mx_init_common(root_domain);
+ 	return 0;
+ }
+ IRQCHIP_DECLARE(xtensa_mx_irq_chip, "cdns,xtensa-mx", xtensa_mx_init);
+diff --git a/drivers/macintosh/Kconfig b/drivers/macintosh/Kconfig
+index 5cdc361da37cb..539a2ed4e13dc 100644
+--- a/drivers/macintosh/Kconfig
++++ b/drivers/macintosh/Kconfig
+@@ -44,6 +44,7 @@ config ADB_IOP
+ config ADB_CUDA
+ 	bool "Support for Cuda/Egret based Macs and PowerMacs"
+ 	depends on (ADB || PPC_PMAC) && !PPC_PMAC64
++	select RTC_LIB
+ 	help
+ 	  This provides support for Cuda/Egret based Macintosh and
+ 	  Power Macintosh systems. This includes most m68k based Macs,
+@@ -57,6 +58,7 @@ config ADB_CUDA
+ config ADB_PMU
+ 	bool "Support for PMU based PowerMacs and PowerBooks"
+ 	depends on PPC_PMAC || MAC
++	select RTC_LIB
+ 	help
+ 	  On PowerBooks, iBooks, and recent iMacs and Power Macintoshes, the
+ 	  PMU is an embedded microprocessor whose primary function is to
+@@ -67,6 +69,10 @@ config ADB_PMU
+ 	  this device; you should do so if your machine is one of those
+ 	  mentioned above.
+ 
++config ADB_PMU_EVENT
++	def_bool y
++	depends on ADB_PMU && INPUT=y
++
+ config ADB_PMU_LED
+ 	bool "Support for the Power/iBook front LED"
+ 	depends on PPC_PMAC && ADB_PMU
+diff --git a/drivers/macintosh/Makefile b/drivers/macintosh/Makefile
+index 49819b1b6f201..712edcb3e0b08 100644
+--- a/drivers/macintosh/Makefile
++++ b/drivers/macintosh/Makefile
+@@ -12,7 +12,8 @@ obj-$(CONFIG_MAC_EMUMOUSEBTN)	+= mac_hid.o
+ obj-$(CONFIG_INPUT_ADBHID)	+= adbhid.o
+ obj-$(CONFIG_ANSLCD)		+= ans-lcd.o
+ 
+-obj-$(CONFIG_ADB_PMU)		+= via-pmu.o via-pmu-event.o
++obj-$(CONFIG_ADB_PMU)		+= via-pmu.o
++obj-$(CONFIG_ADB_PMU_EVENT)	+= via-pmu-event.o
+ obj-$(CONFIG_ADB_PMU_LED)	+= via-pmu-led.o
+ obj-$(CONFIG_PMAC_BACKLIGHT)	+= via-pmu-backlight.o
+ obj-$(CONFIG_ADB_CUDA)		+= via-cuda.o
+diff --git a/drivers/macintosh/via-pmu.c b/drivers/macintosh/via-pmu.c
+index 4b98bc26a94b5..2109129ea1bbf 100644
+--- a/drivers/macintosh/via-pmu.c
++++ b/drivers/macintosh/via-pmu.c
+@@ -1459,7 +1459,7 @@ next:
+ 		pmu_pass_intr(data, len);
+ 		/* len == 6 is probably a bad check. But how do I
+ 		 * know what PMU versions send what events here? */
+-		if (len == 6) {
++		if (IS_ENABLED(CONFIG_ADB_PMU_EVENT) && len == 6) {
+ 			via_pmu_event(PMU_EVT_POWER, !!(data[1]&8));
+ 			via_pmu_event(PMU_EVT_LID, data[1]&1);
+ 		}
+diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
+index 3e7d4b20ab34f..4229b9b5da98f 100644
+--- a/drivers/mailbox/mailbox.c
++++ b/drivers/mailbox/mailbox.c
+@@ -82,11 +82,11 @@ static void msg_submit(struct mbox_chan *chan)
+ exit:
+ 	spin_unlock_irqrestore(&chan->lock, flags);
+ 
+-	/* kick start the timer immediately to avoid delays */
+ 	if (!err && (chan->txdone_method & TXDONE_BY_POLL)) {
+-		/* but only if not already active */
+-		if (!hrtimer_active(&chan->mbox->poll_hrt))
+-			hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL);
++		/* kick start the timer immediately to avoid delays */
++		spin_lock_irqsave(&chan->mbox->poll_hrt_lock, flags);
++		hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL);
++		spin_unlock_irqrestore(&chan->mbox->poll_hrt_lock, flags);
+ 	}
+ }
+ 
+@@ -120,20 +120,26 @@ static enum hrtimer_restart txdone_hrtimer(struct hrtimer *hrtimer)
+ 		container_of(hrtimer, struct mbox_controller, poll_hrt);
+ 	bool txdone, resched = false;
+ 	int i;
++	unsigned long flags;
+ 
+ 	for (i = 0; i < mbox->num_chans; i++) {
+ 		struct mbox_chan *chan = &mbox->chans[i];
+ 
+ 		if (chan->active_req && chan->cl) {
+-			resched = true;
+ 			txdone = chan->mbox->ops->last_tx_done(chan);
+ 			if (txdone)
+ 				tx_tick(chan, 0);
++			else
++				resched = true;
+ 		}
+ 	}
+ 
+ 	if (resched) {
+-		hrtimer_forward_now(hrtimer, ms_to_ktime(mbox->txpoll_period));
++		spin_lock_irqsave(&mbox->poll_hrt_lock, flags);
++		if (!hrtimer_is_queued(hrtimer))
++			hrtimer_forward_now(hrtimer, ms_to_ktime(mbox->txpoll_period));
++		spin_unlock_irqrestore(&mbox->poll_hrt_lock, flags);
++
+ 		return HRTIMER_RESTART;
+ 	}
+ 	return HRTIMER_NORESTART;
+@@ -500,6 +506,7 @@ int mbox_controller_register(struct mbox_controller *mbox)
+ 		hrtimer_init(&mbox->poll_hrt, CLOCK_MONOTONIC,
+ 			     HRTIMER_MODE_REL);
+ 		mbox->poll_hrt.function = txdone_hrtimer;
++		spin_lock_init(&mbox->poll_hrt_lock);
+ 	}
+ 
+ 	for (i = 0; i < mbox->num_chans; i++) {
+diff --git a/drivers/mailbox/pcc.c b/drivers/mailbox/pcc.c
+index ed18936b8ce68..ebfa33a40fceb 100644
+--- a/drivers/mailbox/pcc.c
++++ b/drivers/mailbox/pcc.c
+@@ -654,7 +654,7 @@ static int pcc_mbox_probe(struct platform_device *pdev)
+ 		goto err;
+ 	}
+ 
+-	pcc_mbox_ctrl = devm_kmalloc(dev, sizeof(*pcc_mbox_ctrl), GFP_KERNEL);
++	pcc_mbox_ctrl = devm_kzalloc(dev, sizeof(*pcc_mbox_ctrl), GFP_KERNEL);
+ 	if (!pcc_mbox_ctrl) {
+ 		rc = -ENOMEM;
+ 		goto err;
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index ad9f16689419d..2362bb8ef6d19 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -2006,8 +2006,7 @@ int bch_btree_check(struct cache_set *c)
+ 	int i;
+ 	struct bkey *k = NULL;
+ 	struct btree_iter iter;
+-	struct btree_check_state *check_state;
+-	char name[32];
++	struct btree_check_state check_state;
+ 
+ 	/* check and mark root node keys */
+ 	for_each_key_filter(&c->root->keys, k, &iter, bch_ptr_invalid)
+@@ -2018,63 +2017,58 @@ int bch_btree_check(struct cache_set *c)
+ 	if (c->root->level == 0)
+ 		return 0;
+ 
+-	check_state = kzalloc(sizeof(struct btree_check_state), GFP_KERNEL);
+-	if (!check_state)
+-		return -ENOMEM;
+-
+-	check_state->c = c;
+-	check_state->total_threads = bch_btree_chkthread_nr();
+-	check_state->key_idx = 0;
+-	spin_lock_init(&check_state->idx_lock);
+-	atomic_set(&check_state->started, 0);
+-	atomic_set(&check_state->enough, 0);
+-	init_waitqueue_head(&check_state->wait);
++	check_state.c = c;
++	check_state.total_threads = bch_btree_chkthread_nr();
++	check_state.key_idx = 0;
++	spin_lock_init(&check_state.idx_lock);
++	atomic_set(&check_state.started, 0);
++	atomic_set(&check_state.enough, 0);
++	init_waitqueue_head(&check_state.wait);
+ 
++	rw_lock(0, c->root, c->root->level);
+ 	/*
+ 	 * Run multiple threads to check btree nodes in parallel,
+-	 * if check_state->enough is non-zero, it means current
++	 * if check_state.enough is non-zero, it means current
+ 	 * running check threads are enough, unnecessary to create
+ 	 * more.
+ 	 */
+-	for (i = 0; i < check_state->total_threads; i++) {
+-		/* fetch latest check_state->enough earlier */
++	for (i = 0; i < check_state.total_threads; i++) {
++		/* fetch latest check_state.enough earlier */
+ 		smp_mb__before_atomic();
+-		if (atomic_read(&check_state->enough))
++		if (atomic_read(&check_state.enough))
+ 			break;
+ 
+-		check_state->infos[i].result = 0;
+-		check_state->infos[i].state = check_state;
+-		snprintf(name, sizeof(name), "bch_btrchk[%u]", i);
+-		atomic_inc(&check_state->started);
++		check_state.infos[i].result = 0;
++		check_state.infos[i].state = &check_state;
+ 
+-		check_state->infos[i].thread =
++		check_state.infos[i].thread =
+ 			kthread_run(bch_btree_check_thread,
+-				    &check_state->infos[i],
+-				    name);
+-		if (IS_ERR(check_state->infos[i].thread)) {
++				    &check_state.infos[i],
++				    "bch_btrchk[%d]", i);
++		if (IS_ERR(check_state.infos[i].thread)) {
+ 			pr_err("fails to run thread bch_btrchk[%d]\n", i);
+ 			for (--i; i >= 0; i--)
+-				kthread_stop(check_state->infos[i].thread);
++				kthread_stop(check_state.infos[i].thread);
+ 			ret = -ENOMEM;
+ 			goto out;
+ 		}
++		atomic_inc(&check_state.started);
+ 	}
+ 
+ 	/*
+ 	 * Must wait for all threads to stop.
+ 	 */
+-	wait_event_interruptible(check_state->wait,
+-				 atomic_read(&check_state->started) == 0);
++	wait_event(check_state.wait, atomic_read(&check_state.started) == 0);
+ 
+-	for (i = 0; i < check_state->total_threads; i++) {
+-		if (check_state->infos[i].result) {
+-			ret = check_state->infos[i].result;
++	for (i = 0; i < check_state.total_threads; i++) {
++		if (check_state.infos[i].result) {
++			ret = check_state.infos[i].result;
+ 			goto out;
+ 		}
+ 	}
+ 
+ out:
+-	kfree(check_state);
++	rw_unlock(0, c->root);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/md/bcache/btree.h b/drivers/md/bcache/btree.h
+index 50482107134f1..1b5fdbc0d83eb 100644
+--- a/drivers/md/bcache/btree.h
++++ b/drivers/md/bcache/btree.h
+@@ -226,7 +226,7 @@ struct btree_check_info {
+ 	int				result;
+ };
+ 
+-#define BCH_BTR_CHKTHREAD_MAX	64
++#define BCH_BTR_CHKTHREAD_MAX	12
+ struct btree_check_state {
+ 	struct cache_set		*c;
+ 	int				total_threads;
+diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
+index df5347ea450b5..e5da469a42357 100644
+--- a/drivers/md/bcache/journal.c
++++ b/drivers/md/bcache/journal.c
+@@ -405,6 +405,11 @@ err:
+ 	return ret;
+ }
+ 
++void bch_journal_space_reserve(struct journal *j)
++{
++	j->do_reserve = true;
++}
++
+ /* Journalling */
+ 
+ static void btree_flush_write(struct cache_set *c)
+@@ -621,12 +626,30 @@ static void do_journal_discard(struct cache *ca)
+ 	}
+ }
+ 
++static unsigned int free_journal_buckets(struct cache_set *c)
++{
++	struct journal *j = &c->journal;
++	struct cache *ca = c->cache;
++	struct journal_device *ja = &c->cache->journal;
++	unsigned int n;
++
++	/* In case njournal_buckets is not a power of 2 */
++	if (ja->cur_idx >= ja->discard_idx)
++		n = ca->sb.njournal_buckets +  ja->discard_idx - ja->cur_idx;
++	else
++		n = ja->discard_idx - ja->cur_idx;
++
++	if (n > (1 + j->do_reserve))
++		return n - (1 + j->do_reserve);
++
++	return 0;
++}
++
+ static void journal_reclaim(struct cache_set *c)
+ {
+ 	struct bkey *k = &c->journal.key;
+ 	struct cache *ca = c->cache;
+ 	uint64_t last_seq;
+-	unsigned int next;
+ 	struct journal_device *ja = &ca->journal;
+ 	atomic_t p __maybe_unused;
+ 
+@@ -649,12 +672,10 @@ static void journal_reclaim(struct cache_set *c)
+ 	if (c->journal.blocks_free)
+ 		goto out;
+ 
+-	next = (ja->cur_idx + 1) % ca->sb.njournal_buckets;
+-	/* No space available on this device */
+-	if (next == ja->discard_idx)
++	if (!free_journal_buckets(c))
+ 		goto out;
+ 
+-	ja->cur_idx = next;
++	ja->cur_idx = (ja->cur_idx + 1) % ca->sb.njournal_buckets;
+ 	k->ptr[0] = MAKE_PTR(0,
+ 			     bucket_to_sector(c, ca->sb.d[ja->cur_idx]),
+ 			     ca->sb.nr_this_dev);
+diff --git a/drivers/md/bcache/journal.h b/drivers/md/bcache/journal.h
+index f2ea34d5f431b..cd316b4a1e95f 100644
+--- a/drivers/md/bcache/journal.h
++++ b/drivers/md/bcache/journal.h
+@@ -105,6 +105,7 @@ struct journal {
+ 	spinlock_t		lock;
+ 	spinlock_t		flush_write_lock;
+ 	bool			btree_flushing;
++	bool			do_reserve;
+ 	/* used when waiting because the journal was full */
+ 	struct closure_waitlist	wait;
+ 	struct closure		io;
+@@ -182,5 +183,6 @@ int bch_journal_replay(struct cache_set *c, struct list_head *list);
+ 
+ void bch_journal_free(struct cache_set *c);
+ int bch_journal_alloc(struct cache_set *c);
++void bch_journal_space_reserve(struct journal *j);
+ 
+ #endif /* _BCACHE_JOURNAL_H */
+diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
+index 320fcdfef48ef..02df49d79b4bf 100644
+--- a/drivers/md/bcache/request.c
++++ b/drivers/md/bcache/request.c
+@@ -1105,6 +1105,12 @@ static void detached_dev_do_request(struct bcache_device *d, struct bio *bio,
+ 	 * which would call closure_get(&dc->disk.cl)
+ 	 */
+ 	ddip = kzalloc(sizeof(struct detached_dev_io_private), GFP_NOIO);
++	if (!ddip) {
++		bio->bi_status = BLK_STS_RESOURCE;
++		bio->bi_end_io(bio);
++		return;
++	}
++
+ 	ddip->d = d;
+ 	/* Count on the bcache device */
+ 	ddip->orig_bdev = orig_bdev;
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index bf3de149d3c9f..2bb55278d22d6 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -2128,6 +2128,7 @@ static int run_cache_set(struct cache_set *c)
+ 
+ 	flash_devs_run(c);
+ 
++	bch_journal_space_reserve(&c->journal);
+ 	set_bit(CACHE_SET_RUNNING, &c->flags);
+ 	return 0;
+ err:
+diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
+index 9ee0005874cda..75b71199800dc 100644
+--- a/drivers/md/bcache/writeback.c
++++ b/drivers/md/bcache/writeback.c
+@@ -805,13 +805,11 @@ static int bch_writeback_thread(void *arg)
+ 
+ /* Init */
+ #define INIT_KEYS_EACH_TIME	500000
+-#define INIT_KEYS_SLEEP_MS	100
+ 
+ struct sectors_dirty_init {
+ 	struct btree_op	op;
+ 	unsigned int	inode;
+ 	size_t		count;
+-	struct bkey	start;
+ };
+ 
+ static int sectors_dirty_init_fn(struct btree_op *_op, struct btree *b,
+@@ -827,11 +825,8 @@ static int sectors_dirty_init_fn(struct btree_op *_op, struct btree *b,
+ 					     KEY_START(k), KEY_SIZE(k));
+ 
+ 	op->count++;
+-	if (atomic_read(&b->c->search_inflight) &&
+-	    !(op->count % INIT_KEYS_EACH_TIME)) {
+-		bkey_copy_key(&op->start, k);
+-		return -EAGAIN;
+-	}
++	if (!(op->count % INIT_KEYS_EACH_TIME))
++		cond_resched();
+ 
+ 	return MAP_CONTINUE;
+ }
+@@ -846,24 +841,16 @@ static int bch_root_node_dirty_init(struct cache_set *c,
+ 	bch_btree_op_init(&op.op, -1);
+ 	op.inode = d->id;
+ 	op.count = 0;
+-	op.start = KEY(op.inode, 0, 0);
+-
+-	do {
+-		ret = bcache_btree(map_keys_recurse,
+-				   k,
+-				   c->root,
+-				   &op.op,
+-				   &op.start,
+-				   sectors_dirty_init_fn,
+-				   0);
+-		if (ret == -EAGAIN)
+-			schedule_timeout_interruptible(
+-				msecs_to_jiffies(INIT_KEYS_SLEEP_MS));
+-		else if (ret < 0) {
+-			pr_warn("sectors dirty init failed, ret=%d!\n", ret);
+-			break;
+-		}
+-	} while (ret == -EAGAIN);
++
++	ret = bcache_btree(map_keys_recurse,
++			   k,
++			   c->root,
++			   &op.op,
++			   &KEY(op.inode, 0, 0),
++			   sectors_dirty_init_fn,
++			   0);
++	if (ret < 0)
++		pr_warn("sectors dirty init failed, ret=%d!\n", ret);
+ 
+ 	return ret;
+ }
+@@ -907,7 +894,6 @@ static int bch_dirty_init_thread(void *arg)
+ 				goto out;
+ 			}
+ 			skip_nr--;
+-			cond_resched();
+ 		}
+ 
+ 		if (p) {
+@@ -917,7 +903,6 @@ static int bch_dirty_init_thread(void *arg)
+ 
+ 		p = NULL;
+ 		prev_idx = cur_idx;
+-		cond_resched();
+ 	}
+ 
+ out:
+@@ -948,67 +933,55 @@ void bch_sectors_dirty_init(struct bcache_device *d)
+ 	struct btree_iter iter;
+ 	struct sectors_dirty_init op;
+ 	struct cache_set *c = d->c;
+-	struct bch_dirty_init_state *state;
+-	char name[32];
++	struct bch_dirty_init_state state;
+ 
+ 	/* Just count root keys if no leaf node */
++	rw_lock(0, c->root, c->root->level);
+ 	if (c->root->level == 0) {
+ 		bch_btree_op_init(&op.op, -1);
+ 		op.inode = d->id;
+ 		op.count = 0;
+-		op.start = KEY(op.inode, 0, 0);
+ 
+ 		for_each_key_filter(&c->root->keys,
+ 				    k, &iter, bch_ptr_invalid)
+ 			sectors_dirty_init_fn(&op.op, c->root, k);
+-		return;
+-	}
+ 
+-	state = kzalloc(sizeof(struct bch_dirty_init_state), GFP_KERNEL);
+-	if (!state) {
+-		pr_warn("sectors dirty init failed: cannot allocate memory\n");
++		rw_unlock(0, c->root);
+ 		return;
+ 	}
+ 
+-	state->c = c;
+-	state->d = d;
+-	state->total_threads = bch_btre_dirty_init_thread_nr();
+-	state->key_idx = 0;
+-	spin_lock_init(&state->idx_lock);
+-	atomic_set(&state->started, 0);
+-	atomic_set(&state->enough, 0);
+-	init_waitqueue_head(&state->wait);
+-
+-	for (i = 0; i < state->total_threads; i++) {
+-		/* Fetch latest state->enough earlier */
++	state.c = c;
++	state.d = d;
++	state.total_threads = bch_btre_dirty_init_thread_nr();
++	state.key_idx = 0;
++	spin_lock_init(&state.idx_lock);
++	atomic_set(&state.started, 0);
++	atomic_set(&state.enough, 0);
++	init_waitqueue_head(&state.wait);
++
++	for (i = 0; i < state.total_threads; i++) {
++		/* Fetch latest state.enough earlier */
+ 		smp_mb__before_atomic();
+-		if (atomic_read(&state->enough))
++		if (atomic_read(&state.enough))
+ 			break;
+ 
+-		state->infos[i].state = state;
+-		atomic_inc(&state->started);
+-		snprintf(name, sizeof(name), "bch_dirty_init[%d]", i);
+-
+-		state->infos[i].thread =
+-			kthread_run(bch_dirty_init_thread,
+-				    &state->infos[i],
+-				    name);
+-		if (IS_ERR(state->infos[i].thread)) {
++		state.infos[i].state = &state;
++		state.infos[i].thread =
++			kthread_run(bch_dirty_init_thread, &state.infos[i],
++				    "bch_dirtcnt[%d]", i);
++		if (IS_ERR(state.infos[i].thread)) {
+ 			pr_err("fails to run thread bch_dirty_init[%d]\n", i);
+ 			for (--i; i >= 0; i--)
+-				kthread_stop(state->infos[i].thread);
++				kthread_stop(state.infos[i].thread);
+ 			goto out;
+ 		}
++		atomic_inc(&state.started);
+ 	}
+ 
+-	/*
+-	 * Must wait for all threads to stop.
+-	 */
+-	wait_event_interruptible(state->wait,
+-		 atomic_read(&state->started) == 0);
+-
+ out:
+-	kfree(state);
++	/* Must wait for all threads to stop. */
++	wait_event(state.wait, atomic_read(&state.started) == 0);
++	rw_unlock(0, c->root);
+ }
+ 
+ void bch_cached_dev_writeback_init(struct cached_dev *dc)
+diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
+index 02b2f9df73f69..31df716951f66 100644
+--- a/drivers/md/bcache/writeback.h
++++ b/drivers/md/bcache/writeback.h
+@@ -20,7 +20,7 @@
+ #define BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID 57
+ #define BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH 64
+ 
+-#define BCH_DIRTY_INIT_THRD_MAX	64
++#define BCH_DIRTY_INIT_THRD_MAX	12
+ /*
+  * 14 (16384ths) is chosen here as something that each backing device
+  * should be a reasonable fraction of the share, and not to blow up
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index bfd6026d78099..612460d2bdaf2 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -639,14 +639,6 @@ re_read:
+ 	daemon_sleep = le32_to_cpu(sb->daemon_sleep) * HZ;
+ 	write_behind = le32_to_cpu(sb->write_behind);
+ 	sectors_reserved = le32_to_cpu(sb->sectors_reserved);
+-	/* Setup nodes/clustername only if bitmap version is
+-	 * cluster-compatible
+-	 */
+-	if (sb->version == cpu_to_le32(BITMAP_MAJOR_CLUSTERED)) {
+-		nodes = le32_to_cpu(sb->nodes);
+-		strlcpy(bitmap->mddev->bitmap_info.cluster_name,
+-				sb->cluster_name, 64);
+-	}
+ 
+ 	/* verify that the bitmap-specific fields are valid */
+ 	if (sb->magic != cpu_to_le32(BITMAP_MAGIC))
+@@ -668,6 +660,16 @@ re_read:
+ 		goto out;
+ 	}
+ 
++	/*
++	 * Setup nodes/clustername only if bitmap version is
++	 * cluster-compatible
++	 */
++	if (sb->version == cpu_to_le32(BITMAP_MAJOR_CLUSTERED)) {
++		nodes = le32_to_cpu(sb->nodes);
++		strlcpy(bitmap->mddev->bitmap_info.cluster_name,
++				sb->cluster_name, 64);
++	}
++
+ 	/* keep the array size field of the bitmap superblock up to date */
+ 	sb->sync_size = cpu_to_le64(bitmap->mddev->resync_max_sectors);
+ 
+@@ -700,9 +702,9 @@ re_read:
+ 
+ out:
+ 	kunmap_atomic(sb);
+-	/* Assigning chunksize is required for "re_read" */
+-	bitmap->mddev->bitmap_info.chunksize = chunksize;
+ 	if (err == 0 && nodes && (bitmap->cluster_slot < 0)) {
++		/* Assigning chunksize is required for "re_read" */
++		bitmap->mddev->bitmap_info.chunksize = chunksize;
+ 		err = md_setup_cluster(bitmap->mddev, nodes);
+ 		if (err) {
+ 			pr_warn("%s: Could not setup cluster service (%d)\n",
+@@ -713,18 +715,18 @@ out:
+ 		goto re_read;
+ 	}
+ 
+-
+ out_no_sb:
+-	if (test_bit(BITMAP_STALE, &bitmap->flags))
+-		bitmap->events_cleared = bitmap->mddev->events;
+-	bitmap->mddev->bitmap_info.chunksize = chunksize;
+-	bitmap->mddev->bitmap_info.daemon_sleep = daemon_sleep;
+-	bitmap->mddev->bitmap_info.max_write_behind = write_behind;
+-	bitmap->mddev->bitmap_info.nodes = nodes;
+-	if (bitmap->mddev->bitmap_info.space == 0 ||
+-	    bitmap->mddev->bitmap_info.space > sectors_reserved)
+-		bitmap->mddev->bitmap_info.space = sectors_reserved;
+-	if (err) {
++	if (err == 0) {
++		if (test_bit(BITMAP_STALE, &bitmap->flags))
++			bitmap->events_cleared = bitmap->mddev->events;
++		bitmap->mddev->bitmap_info.chunksize = chunksize;
++		bitmap->mddev->bitmap_info.daemon_sleep = daemon_sleep;
++		bitmap->mddev->bitmap_info.max_write_behind = write_behind;
++		bitmap->mddev->bitmap_info.nodes = nodes;
++		if (bitmap->mddev->bitmap_info.space == 0 ||
++			bitmap->mddev->bitmap_info.space > sectors_reserved)
++			bitmap->mddev->bitmap_info.space = sectors_reserved;
++	} else {
+ 		md_bitmap_print_sb(bitmap);
+ 		if (bitmap->cluster_slot < 0)
+ 			md_cluster_stop(bitmap->mddev);
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 309b3af906ad3..066f792b374e3 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -2627,14 +2627,16 @@ static void sync_sbs(struct mddev *mddev, int nospares)
+ 
+ static bool does_sb_need_changing(struct mddev *mddev)
+ {
+-	struct md_rdev *rdev;
++	struct md_rdev *rdev = NULL, *iter;
+ 	struct mdp_superblock_1 *sb;
+ 	int role;
+ 
+ 	/* Find a good rdev */
+-	rdev_for_each(rdev, mddev)
+-		if ((rdev->raid_disk >= 0) && !test_bit(Faulty, &rdev->flags))
++	rdev_for_each(iter, mddev)
++		if ((iter->raid_disk >= 0) && !test_bit(Faulty, &iter->flags)) {
++			rdev = iter;
+ 			break;
++		}
+ 
+ 	/* No good device found. */
+ 	if (!rdev)
+@@ -5596,8 +5598,6 @@ static void md_free(struct kobject *ko)
+ 
+ 	bioset_exit(&mddev->bio_set);
+ 	bioset_exit(&mddev->sync_set);
+-	if (mddev->level != 1 && mddev->level != 10)
+-		bioset_exit(&mddev->io_acct_set);
+ 	kfree(mddev);
+ }
+ 
+@@ -6284,8 +6284,6 @@ void md_stop(struct mddev *mddev)
+ 	__md_stop(mddev);
+ 	bioset_exit(&mddev->bio_set);
+ 	bioset_exit(&mddev->sync_set);
+-	if (mddev->level != 1 && mddev->level != 10)
+-		bioset_exit(&mddev->io_acct_set);
+ }
+ 
+ EXPORT_SYMBOL_GPL(md_stop);
+@@ -9791,16 +9789,18 @@ static int read_rdev(struct mddev *mddev, struct md_rdev *rdev)
+ 
+ void md_reload_sb(struct mddev *mddev, int nr)
+ {
+-	struct md_rdev *rdev;
++	struct md_rdev *rdev = NULL, *iter;
+ 	int err;
+ 
+ 	/* Find the rdev */
+-	rdev_for_each_rcu(rdev, mddev) {
+-		if (rdev->desc_nr == nr)
++	rdev_for_each_rcu(iter, mddev) {
++		if (iter->desc_nr == nr) {
++			rdev = iter;
+ 			break;
++		}
+ 	}
+ 
+-	if (!rdev || rdev->desc_nr != nr) {
++	if (!rdev) {
+ 		pr_warn("%s: %d Could not find rdev with nr %d\n", __func__, __LINE__, nr);
+ 		return;
+ 	}
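
Both md.c hunks above switch from testing the loop cursor itself to a
separate iter/rdev pair. After a list_for_each_entry()-style walk that
finds nothing, the cursor is not NULL; it points at the structure
embedding the list head, so "if (!rdev)" could never fire, and the old
"rdev->desc_nr != nr" recheck was a workaround for exactly that. A
dedicated result pointer makes "not found" directly testable. Userspace
sketch, names hypothetical:

    #include <stddef.h>
    #include <stdio.h>

    struct node { int id; struct node *next; };

    static struct node *find(struct node *head, int id)
    {
        struct node *found = NULL, *iter;

        for (iter = head; iter; iter = iter->next)
            if (iter->id == id) {
                found = iter;       /* record the hit explicitly */
                break;
            }
        /* With plain pointers iter happens to be NULL here, but the
         * kernel's list macros leave the cursor pointing at the list
         * head's container; 'found' is unambiguous either way. */
        return found;
    }

    int main(void)
    {
        struct node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };

        printf("id 2: %s\n", find(&a, 2) ? "found" : "missing");
        printf("id 9: %s\n", find(&a, 9) ? "found" : "missing");
        return 0;
    }
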
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index b21e101183f44..46c30fb538a46 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -361,7 +361,6 @@ static void free_conf(struct mddev *mddev, struct r0conf *conf)
+ 	kfree(conf->strip_zone);
+ 	kfree(conf->devlist);
+ 	kfree(conf);
+-	mddev->private = NULL;
+ }
+ 
+ static void raid0_free(struct mddev *mddev, void *priv)
+diff --git a/drivers/media/cec/core/cec-adap.c b/drivers/media/cec/core/cec-adap.c
+index 2e12331c12a9d..01766e7447728 100644
+--- a/drivers/media/cec/core/cec-adap.c
++++ b/drivers/media/cec/core/cec-adap.c
+@@ -1278,7 +1278,7 @@ static int cec_config_log_addr(struct cec_adapter *adap,
+ 		 * While trying to poll the physical address was reset
+ 		 * and the adapter was unconfigured, so bail out.
+ 		 */
+-		if (!adap->is_configuring)
++		if (adap->phys_addr == CEC_PHYS_ADDR_INVALID)
+ 			return -EINTR;
+ 
+ 		if (err)
+@@ -1335,7 +1335,6 @@ static void cec_adap_unconfigure(struct cec_adapter *adap)
+ 	    adap->phys_addr != CEC_PHYS_ADDR_INVALID)
+ 		WARN_ON(adap->ops->adap_log_addr(adap, CEC_LOG_ADDR_INVALID));
+ 	adap->log_addrs.log_addr_mask = 0;
+-	adap->is_configuring = false;
+ 	adap->is_configured = false;
+ 	cec_flush(adap);
+ 	wake_up_interruptible(&adap->kthread_waitq);
+@@ -1527,9 +1526,10 @@ unconfigure:
+ 	for (i = 0; i < las->num_log_addrs; i++)
+ 		las->log_addr[i] = CEC_LOG_ADDR_INVALID;
+ 	cec_adap_unconfigure(adap);
++	adap->is_configuring = false;
+ 	adap->kthread_config = NULL;
+-	mutex_unlock(&adap->lock);
+ 	complete(&adap->config_completion);
++	mutex_unlock(&adap->lock);
+ 	return 0;
+ }
+ 
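
The cec-adap.c hunk ends the configuration attempt by clearing
is_configuring and completing config_completion while adap->lock is
still held, so a waiter cannot slip in between the unlock and the
wakeup and tear down state the signaller is still touching. The same
ordering rule applies to a pthread condition variable whenever the
waiter may free shared state right after waking. Sketch, hypothetical
names:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t done = PTHREAD_COND_INITIALIZER;
    static int configuring = 1;

    static void *worker(void *arg)
    {
        pthread_mutex_lock(&lock);
        configuring = 0;             /* state change ...       */
        pthread_cond_signal(&done);  /* ... and wakeup, both   */
        pthread_mutex_unlock(&lock); /* before releasing lock  */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, worker, NULL);
        pthread_mutex_lock(&lock);
        while (configuring)
            pthread_cond_wait(&done, &lock);
        pthread_mutex_unlock(&lock);
        pthread_join(t, NULL);
        puts("configuration finished");
        return 0;
    }
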
+diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
+index fae2baabb7738..2b20aa6c37b1b 100644
+--- a/drivers/media/i2c/Kconfig
++++ b/drivers/media/i2c/Kconfig
+@@ -372,6 +372,7 @@ config VIDEO_OV13B10
+ config VIDEO_OV2640
+ 	tristate "OmniVision OV2640 sensor support"
+ 	depends on VIDEO_DEV && I2C
++	select V4L2_ASYNC
+ 	help
+ 	  This is a Video4Linux2 sensor driver for the OmniVision
+ 	  OV2640 camera.
+diff --git a/drivers/media/i2c/ccs/ccs-core.c b/drivers/media/i2c/ccs/ccs-core.c
+index 03e841b8443f0..7ae469caf9906 100644
+--- a/drivers/media/i2c/ccs/ccs-core.c
++++ b/drivers/media/i2c/ccs/ccs-core.c
+@@ -1602,8 +1602,11 @@ static int ccs_power_on(struct device *dev)
+ 			usleep_range(1000, 2000);
+ 		} while (--retry);
+ 
+-		if (!reset)
+-			return -EIO;
++		if (!reset) {
++			dev_err(dev, "software reset failed\n");
++			rval = -EIO;
++			goto out_cci_addr_fail;
++		}
+ 	}
+ 
+ 	if (sensor->hwcfg.i2c_addr_alt) {
+diff --git a/drivers/media/i2c/dw9714.c b/drivers/media/i2c/dw9714.c
+index cd7008ad8f2f3..8c5797ba57d41 100644
+--- a/drivers/media/i2c/dw9714.c
++++ b/drivers/media/i2c/dw9714.c
+@@ -183,6 +183,7 @@ static int dw9714_probe(struct i2c_client *client)
+ 	return 0;
+ 
+ err_cleanup:
++	regulator_disable(dw9714_dev->vcc);
+ 	v4l2_ctrl_handler_free(&dw9714_dev->ctrls_vcm);
+ 	media_entity_cleanup(&dw9714_dev->sd.entity);
+ 
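
The dw9714 fix adds regulator_disable() to err_cleanup: an error exit
from probe has to undo everything the function has done up to the
failure point, in reverse order of acquisition, or the regulator stays
enabled forever. A compact goto-ladder sketch with stubbed resources;
all names are hypothetical:

    #include <stdio.h>

    static int enable_power(void)    { puts("power on");  return 0; }
    static void disable_power(void)  { puts("power off"); }
    static int register_dev(void)    { puts("register");  return -1; } /* stub fails */
    static void unregister_dev(void) { puts("unregister"); }

    static int probe(void)
    {
        int ret;

        ret = enable_power();
        if (ret)
            return ret;

        ret = register_dev();
        if (ret)
            goto err_power;         /* undo exactly what succeeded */

        return 0;

    err_power:
        disable_power();            /* reverse order of acquisition */
        return ret;
    }

    int main(void) { return probe() ? 1 : 0; }
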
+diff --git a/drivers/media/i2c/dw9768.c b/drivers/media/i2c/dw9768.c
+index 65c6acf3ced9a..c086580efac78 100644
+--- a/drivers/media/i2c/dw9768.c
++++ b/drivers/media/i2c/dw9768.c
+@@ -469,11 +469,6 @@ static int dw9768_probe(struct i2c_client *client)
+ 
+ 	dw9768->sd.entity.function = MEDIA_ENT_F_LENS;
+ 
+-	/*
+-	 * Device is already turned on by i2c-core with ACPI domain PM.
+-	 * Attempt to turn off the device to satisfy the privacy LED concerns.
+-	 */
+-	pm_runtime_set_active(dev);
+ 	pm_runtime_enable(dev);
+ 	if (!pm_runtime_enabled(dev)) {
+ 		ret = dw9768_runtime_resume(dev);
+@@ -488,7 +483,6 @@ static int dw9768_probe(struct i2c_client *client)
+ 		dev_err(dev, "failed to register V4L2 subdev: %d", ret);
+ 		goto err_power_off;
+ 	}
+-	pm_runtime_idle(dev);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/media/i2c/max9286.c b/drivers/media/i2c/max9286.c
+index d2a4915ed9f7b..3684faa72253b 100644
+--- a/drivers/media/i2c/max9286.c
++++ b/drivers/media/i2c/max9286.c
+@@ -1147,22 +1147,18 @@ static int max9286_poc_enable(struct max9286_priv *priv, bool enable)
+ 	return ret;
+ }
+ 
+-static int max9286_init(struct device *dev)
++static int max9286_init(struct max9286_priv *priv)
+ {
+-	struct max9286_priv *priv;
+-	struct i2c_client *client;
++	struct i2c_client *client = priv->client;
+ 	int ret;
+ 
+-	client = to_i2c_client(dev);
+-	priv = i2c_get_clientdata(client);
+-
+ 	ret = max9286_poc_enable(priv, true);
+ 	if (ret)
+ 		return ret;
+ 
+ 	ret = max9286_setup(priv);
+ 	if (ret) {
+-		dev_err(dev, "Unable to setup max9286\n");
++		dev_err(&client->dev, "Unable to setup max9286\n");
+ 		goto err_poc_disable;
+ 	}
+ 
+@@ -1172,13 +1168,13 @@ static int max9286_init(struct device *dev)
+ 	 */
+ 	ret = max9286_v4l2_register(priv);
+ 	if (ret) {
+-		dev_err(dev, "Failed to register with V4L2\n");
++		dev_err(&client->dev, "Failed to register with V4L2\n");
+ 		goto err_poc_disable;
+ 	}
+ 
+ 	ret = max9286_i2c_mux_init(priv);
+ 	if (ret) {
+-		dev_err(dev, "Unable to initialize I2C multiplexer\n");
++		dev_err(&client->dev, "Unable to initialize I2C multiplexer\n");
+ 		goto err_v4l2_register;
+ 	}
+ 
+@@ -1333,7 +1329,6 @@ static int max9286_probe(struct i2c_client *client)
+ 	mutex_init(&priv->mutex);
+ 
+ 	priv->client = client;
+-	i2c_set_clientdata(client, priv);
+ 
+ 	priv->gpiod_pwdn = devm_gpiod_get_optional(&client->dev, "enable",
+ 						   GPIOD_OUT_HIGH);
+@@ -1369,7 +1364,7 @@ static int max9286_probe(struct i2c_client *client)
+ 	if (ret)
+ 		goto err_powerdown;
+ 
+-	ret = max9286_init(&client->dev);
++	ret = max9286_init(priv);
+ 	if (ret < 0)
+ 		goto err_cleanup_dt;
+ 
+@@ -1385,7 +1380,7 @@ err_powerdown:
+ 
+ static int max9286_remove(struct i2c_client *client)
+ {
+-	struct max9286_priv *priv = i2c_get_clientdata(client);
++	struct max9286_priv *priv = sd_to_max9286(i2c_get_clientdata(client));
+ 
+ 	i2c_mux_del_adapters(priv->mux);
+ 
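
The max9286 hunks drop the probe-time i2c_set_clientdata() and instead
recover the driver state with sd_to_max9286() in remove: the remove
hunk now reads the subdev pointer back out of clientdata, so storing
priv in the same slot would have fought with the V4L2 core over it.
The recovery works because the subdev is embedded in the private
structure, which is plain container_of() arithmetic. Userspace sketch,
hypothetical names:

    #include <stddef.h>
    #include <stdio.h>

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct subdev { const char *name; };
    struct priv   { int id; struct subdev sd; };  /* sd embedded in priv */

    static struct priv *sd_to_priv(struct subdev *sd)
    {
        return container_of(sd, struct priv, sd);
    }

    int main(void)
    {
        struct priv p = { .id = 42, .sd = { "cam0" } };
        struct subdev *sd = &p.sd;    /* all the framework hands back */

        printf("recovered id=%d from %s\n", sd_to_priv(sd)->id, sd->name);
        return 0;
    }
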
+diff --git a/drivers/media/i2c/ov5648.c b/drivers/media/i2c/ov5648.c
+index 930ff6897044a..dfcd33e9ee136 100644
+--- a/drivers/media/i2c/ov5648.c
++++ b/drivers/media/i2c/ov5648.c
+@@ -2498,9 +2498,9 @@ static int ov5648_probe(struct i2c_client *client)
+ 
+ 	/* DOVDD: digital I/O */
+ 	sensor->dovdd = devm_regulator_get(dev, "dovdd");
+-	if (IS_ERR(sensor->dvdd)) {
++	if (IS_ERR(sensor->dovdd)) {
+ 		dev_err(dev, "cannot get DOVDD (digital I/O) regulator\n");
+-		ret = PTR_ERR(sensor->dvdd);
++		ret = PTR_ERR(sensor->dovdd);
+ 		goto error_endpoint;
+ 	}
+ 
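
The ov5648 change is a pure copy-paste fix: DOVDD was fetched into
sensor->dovdd while the IS_ERR()/PTR_ERR() pair still inspected
sensor->dvdd from the preceding block, so a missing DOVDD supply went
undetected. Routing repeated get-and-check sequences through a single
helper makes that slip impossible, as in this sketch (all names
hypothetical):

    #include <stdio.h>
    #include <string.h>

    /* Pretend getter: only "dvdd" and "dovdd" exist on this board. */
    static void *get_supply(const char *name)
    {
        return (!strcmp(name, "dvdd") || !strcmp(name, "dovdd"))
            ? (void *)name : NULL;
    }

    /* One helper: the variable fetched is the variable checked. */
    static int get_checked(const char *name, void **out)
    {
        *out = get_supply(name);
        if (!*out) {
            fprintf(stderr, "cannot get %s regulator\n", name);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        void *dvdd, *dovdd;

        if (get_checked("dvdd", &dvdd) || get_checked("dovdd", &dovdd))
            return 1;
        puts("all supplies acquired");
        return 0;
    }
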
+diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
+index 1967464231160..1be2c0e5bdc15 100644
+--- a/drivers/media/i2c/ov7670.c
++++ b/drivers/media/i2c/ov7670.c
+@@ -2017,7 +2017,6 @@ static int ov7670_remove(struct i2c_client *client)
+ 	v4l2_async_unregister_subdev(sd);
+ 	v4l2_ctrl_handler_free(&info->hdl);
+ 	media_entity_cleanup(&info->sd.entity);
+-	ov7670_power_off(sd);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/media/i2c/rdacm20.c b/drivers/media/i2c/rdacm20.c
+index 025a610de8935..9c6f66cab5642 100644
+--- a/drivers/media/i2c/rdacm20.c
++++ b/drivers/media/i2c/rdacm20.c
+@@ -611,7 +611,7 @@ static int rdacm20_probe(struct i2c_client *client)
+ 		goto error_free_ctrls;
+ 
+ 	dev->pad.flags = MEDIA_PAD_FL_SOURCE;
+-	dev->sd.entity.flags |= MEDIA_ENT_F_CAM_SENSOR;
++	dev->sd.entity.function = MEDIA_ENT_F_CAM_SENSOR;
+ 	ret = media_entity_pads_init(&dev->sd.entity, 1, &dev->pad);
+ 	if (ret < 0)
+ 		goto error_free_ctrls;
+diff --git a/drivers/media/i2c/rdacm21.c b/drivers/media/i2c/rdacm21.c
+index 12ec5467ed1ee..ef31cf5f23cac 100644
+--- a/drivers/media/i2c/rdacm21.c
++++ b/drivers/media/i2c/rdacm21.c
+@@ -583,7 +583,7 @@ static int rdacm21_probe(struct i2c_client *client)
+ 		goto error_free_ctrls;
+ 
+ 	dev->pad.flags = MEDIA_PAD_FL_SOURCE;
+-	dev->sd.entity.flags |= MEDIA_ENT_F_CAM_SENSOR;
++	dev->sd.entity.function = MEDIA_ENT_F_CAM_SENSOR;
+ 	ret = media_entity_pads_init(&dev->sd.entity, 1, &dev->pad);
+ 	if (ret < 0)
+ 		goto error_free_ctrls;
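
Both rdacm fixes replace "entity.flags |= MEDIA_ENT_F_CAM_SENSOR" with
"entity.function = MEDIA_ENT_F_CAM_SENSOR": the constant is a function
code, not a flag bit, so OR-ing it into the flags word set stray bits
and left function at its default. Enumeration-style codes are assigned;
bitmasks are OR-ed. Tiny sketch with made-up values:

    #include <stdio.h>

    #define FL_CONNECTOR 0x1          /* flags: independent bits     */
    #define F_CAM_SENSOR 0x20001      /* function: an enum-like code */

    struct entity { unsigned int flags, function; };

    int main(void)
    {
        struct entity e = { 0, 0 };

        e.flags |= FL_CONNECTOR;      /* bitmask: OR is correct      */
        e.function = F_CAM_SENSOR;    /* code: plain assignment      */

        /* e.flags |= F_CAM_SENSOR would scribble bits into the wrong
         * word and leave e.function == 0 (an "unknown" entity). */
        printf("flags=%#x function=%#x\n", e.flags, e.function);
        return 0;
    }
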
+diff --git a/drivers/media/pci/cx23885/cx23885-core.c b/drivers/media/pci/cx23885/cx23885-core.c
+index f8f2ff3b00c37..a07b18f2034e9 100644
+--- a/drivers/media/pci/cx23885/cx23885-core.c
++++ b/drivers/media/pci/cx23885/cx23885-core.c
+@@ -2165,7 +2165,7 @@ static int cx23885_initdev(struct pci_dev *pci_dev,
+ 	err = dma_set_mask(&pci_dev->dev, 0xffffffff);
+ 	if (err) {
+ 		pr_err("%s/0: Oops: no 32bit PCI DMA ???\n", dev->name);
+-		goto fail_ctrl;
++		goto fail_dma_set_mask;
+ 	}
+ 
+ 	err = request_irq(pci_dev->irq, cx23885_irq,
+@@ -2173,7 +2173,7 @@ static int cx23885_initdev(struct pci_dev *pci_dev,
+ 	if (err < 0) {
+ 		pr_err("%s: can't get IRQ %d\n",
+ 		       dev->name, pci_dev->irq);
+-		goto fail_irq;
++		goto fail_dma_set_mask;
+ 	}
+ 
+ 	switch (dev->board) {
+@@ -2195,7 +2195,7 @@ static int cx23885_initdev(struct pci_dev *pci_dev,
+ 
+ 	return 0;
+ 
+-fail_irq:
++fail_dma_set_mask:
+ 	cx23885_dev_unregister(dev);
+ fail_ctrl:
+ 	v4l2_ctrl_handler_free(hdl);
+diff --git a/drivers/media/pci/cx25821/cx25821-core.c b/drivers/media/pci/cx25821/cx25821-core.c
+index 3078a39f0b95d..6627fa9166d30 100644
+--- a/drivers/media/pci/cx25821/cx25821-core.c
++++ b/drivers/media/pci/cx25821/cx25821-core.c
+@@ -1332,11 +1332,11 @@ static void cx25821_finidev(struct pci_dev *pci_dev)
+ 	struct cx25821_dev *dev = get_cx25821(v4l2_dev);
+ 
+ 	cx25821_shutdown(dev);
+-	pci_disable_device(pci_dev);
+ 
+ 	/* unregister stuff */
+ 	if (pci_dev->irq)
+ 		free_irq(pci_dev->irq, dev);
++	pci_disable_device(pci_dev);
+ 
+ 	cx25821_dev_unregister(dev);
+ 	v4l2_device_unregister(v4l2_dev);
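
The cx25821 teardown reorder frees the IRQ before pci_disable_device():
a (possibly shared) interrupt arriving in the old window would have run
the handler against a device that was already disabled. Releasing in
the exact reverse of acquisition closes the window. Trivial sketch,
hypothetical names:

    #include <stdio.h>

    static void enable_device(void)  { puts("device enabled");    }
    static void disable_device(void) { puts("device disabled");   }
    static void arm_irq(void)        { puts("irq handler armed"); }
    static void free_irq_(void)      { puts("irq handler gone");  }

    int main(void)
    {
        /* bring-up */
        enable_device();
        arm_irq();

        /* teardown: reverse order, so no handler can run
         * after the device has been disabled. */
        free_irq_();
        disable_device();
        return 0;
    }
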
+diff --git a/drivers/media/platform/amphion/vdec.c b/drivers/media/platform/amphion/vdec.c
+index 8f8dfd6ce2c68..c0dfede11ab74 100644
+--- a/drivers/media/platform/amphion/vdec.c
++++ b/drivers/media/platform/amphion/vdec.c
+@@ -782,7 +782,7 @@ static void vdec_init_fmt(struct vpu_inst *inst)
+ 	if (vdec->codec_info.progressive)
+ 		inst->cap_format.field = V4L2_FIELD_NONE;
+ 	else
+-		inst->cap_format.field = V4L2_FIELD_SEQ_BT;
++		inst->cap_format.field = V4L2_FIELD_SEQ_TB;
+ 	if (vdec->codec_info.color_primaries == V4L2_COLORSPACE_DEFAULT)
+ 		vdec->codec_info.color_primaries = V4L2_COLORSPACE_REC709;
+ 	if (vdec->codec_info.transfer_chars == V4L2_XFER_FUNC_DEFAULT)
+diff --git a/drivers/media/platform/aspeed/aspeed-video.c b/drivers/media/platform/aspeed/aspeed-video.c
+index b937dbcbe9e0a..20f795ccc11b6 100644
+--- a/drivers/media/platform/aspeed/aspeed-video.c
++++ b/drivers/media/platform/aspeed/aspeed-video.c
+@@ -1993,6 +1993,7 @@ static int aspeed_video_probe(struct platform_device *pdev)
+ 
+ 	rc = aspeed_video_setup_video(video);
+ 	if (rc) {
++		aspeed_video_free_buf(video, &video->jpeg);
+ 		clk_unprepare(video->vclk);
+ 		clk_unprepare(video->eclk);
+ 		return rc;
+@@ -2024,8 +2025,7 @@ static int aspeed_video_remove(struct platform_device *pdev)
+ 
+ 	v4l2_device_unregister(v4l2_dev);
+ 
+-	dma_free_coherent(video->dev, VE_JPEG_HEADER_SIZE, video->jpeg.virt,
+-			  video->jpeg.dma);
++	aspeed_video_free_buf(video, &video->jpeg);
+ 
+ 	of_reserved_mem_device_release(dev);
+ 
+diff --git a/drivers/media/platform/atmel/atmel-sama5d2-isc.c b/drivers/media/platform/atmel/atmel-sama5d2-isc.c
+index c5b9563e36cb1..c2d50b0c0e3d3 100644
+--- a/drivers/media/platform/atmel/atmel-sama5d2-isc.c
++++ b/drivers/media/platform/atmel/atmel-sama5d2-isc.c
+@@ -291,7 +291,7 @@ static void isc_sama5d2_config_rlp(struct isc_device *isc)
+ 	 * Thus, if the YCYC mode is selected, replace it with the
+ 	 * sama5d2-compliant mode which is YYCC .
+ 	 */
+-	if ((rlp_mode & ISC_RLP_CFG_MODE_YCYC) == ISC_RLP_CFG_MODE_YCYC) {
++	if ((rlp_mode & ISC_RLP_CFG_MODE_MASK) == ISC_RLP_CFG_MODE_YCYC) {
+ 		rlp_mode &= ~ISC_RLP_CFG_MODE_MASK;
+ 		rlp_mode |= ISC_RLP_CFG_MODE_YYCC;
+ 	}
+@@ -562,7 +562,7 @@ static int atmel_isc_probe(struct platform_device *pdev)
+ 	ret = clk_prepare_enable(isc->ispck);
+ 	if (ret) {
+ 		dev_err(dev, "failed to enable ispck: %d\n", ret);
+-		goto cleanup_subdev;
++		goto disable_pm;
+ 	}
+ 
+ 	/* ispck should be greater or equal to hclock */
+@@ -580,6 +580,9 @@ static int atmel_isc_probe(struct platform_device *pdev)
+ unprepare_clk:
+ 	clk_disable_unprepare(isc->ispck);
+ 
++disable_pm:
++	pm_runtime_disable(dev);
++
+ cleanup_subdev:
+ 	isc_subdev_cleanup(isc);
+ 
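
The first sama5d2-isc hunk is a bitfield-comparison fix: with a
multi-bit mode field, "(reg & MODE_YCYC) == MODE_YCYC" also matches any
mode whose encoding is a bit-superset of YCYC, so the field has to be
isolated with the full MODE_MASK before comparing. Sketch with
hypothetical encodings chosen to show the false positive:

    #include <stdio.h>

    #define MODE_MASK 0x7
    #define MODE_YCYC 0x3   /* 0b011 */
    #define MODE_YYCC 0x7   /* 0b111: superset of YCYC's bits */

    int main(void)
    {
        unsigned int reg = MODE_YYCC;

        /* Buggy: YYCC also passes, since 0b111 & 0b011 == 0b011. */
        if ((reg & MODE_YCYC) == MODE_YCYC)
            puts("buggy test: matched YCYC (false positive)");

        /* Correct: isolate the whole field, then compare. */
        if ((reg & MODE_MASK) == MODE_YCYC)
            puts("correct test: matched YCYC");
        else
            puts("correct test: not YCYC");
        return 0;
    }
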
+diff --git a/drivers/media/platform/chips-media/coda-common.c b/drivers/media/platform/chips-media/coda-common.c
+index a57822b050706..27d002d9f631f 100644
+--- a/drivers/media/platform/chips-media/coda-common.c
++++ b/drivers/media/platform/chips-media/coda-common.c
+@@ -1324,7 +1324,8 @@ static int coda_enum_frameintervals(struct file *file, void *fh,
+ 				    struct v4l2_frmivalenum *f)
+ {
+ 	struct coda_ctx *ctx = fh_to_ctx(fh);
+-	int i;
++	struct coda_q_data *q_data;
++	const struct coda_codec *codec;
+ 
+ 	if (f->index)
+ 		return -EINVAL;
+@@ -1333,12 +1334,19 @@ static int coda_enum_frameintervals(struct file *file, void *fh,
+ 	if (!ctx->vdoa && f->pixel_format == V4L2_PIX_FMT_YUYV)
+ 		return -EINVAL;
+ 
+-	for (i = 0; i < CODA_MAX_FORMATS; i++) {
+-		if (f->pixel_format == ctx->cvd->src_formats[i] ||
+-		    f->pixel_format == ctx->cvd->dst_formats[i])
+-			break;
++	if (coda_format_normalize_yuv(f->pixel_format) == V4L2_PIX_FMT_YUV420) {
++		q_data = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE);
++		codec = coda_find_codec(ctx->dev, f->pixel_format,
++					q_data->fourcc);
++	} else {
++		codec = coda_find_codec(ctx->dev, V4L2_PIX_FMT_YUV420,
++					f->pixel_format);
+ 	}
+-	if (i == CODA_MAX_FORMATS)
++	if (!codec)
++		return -EINVAL;
++
++	if (f->width < MIN_W || f->width > codec->max_w ||
++	    f->height < MIN_H || f->height > codec->max_h)
+ 		return -EINVAL;
+ 
+ 	f->type = V4L2_FRMIVAL_TYPE_CONTINUOUS;
+@@ -2344,8 +2352,8 @@ static void coda_encode_ctrls(struct coda_ctx *ctx)
+ 		V4L2_CID_MPEG_VIDEO_H264_CHROMA_QP_INDEX_OFFSET, -12, 12, 1, 0);
+ 	v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops,
+ 		V4L2_CID_MPEG_VIDEO_H264_PROFILE,
+-		V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE, 0x0,
+-		V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE);
++		V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE, 0x0,
++		V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE);
+ 	if (ctx->dev->devtype->product == CODA_HX4 ||
+ 	    ctx->dev->devtype->product == CODA_7541) {
+ 		v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops,
+@@ -2359,12 +2367,15 @@ static void coda_encode_ctrls(struct coda_ctx *ctx)
+ 	if (ctx->dev->devtype->product == CODA_960) {
+ 		v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops,
+ 			V4L2_CID_MPEG_VIDEO_H264_LEVEL,
+-			V4L2_MPEG_VIDEO_H264_LEVEL_4_0,
+-			~((1 << V4L2_MPEG_VIDEO_H264_LEVEL_2_0) |
++			V4L2_MPEG_VIDEO_H264_LEVEL_4_2,
++			~((1 << V4L2_MPEG_VIDEO_H264_LEVEL_1_0) |
++			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_2_0) |
+ 			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_3_0) |
+ 			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_3_1) |
+ 			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_3_2) |
+-			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_0)),
++			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_0) |
++			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_1) |
++			  (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_2)),
+ 			V4L2_MPEG_VIDEO_H264_LEVEL_4_0);
+ 	}
+ 	v4l2_ctrl_new_std(&ctx->ctrls, &coda_ctrl_ops,
+@@ -2426,7 +2437,7 @@ static void coda_decode_ctrls(struct coda_ctx *ctx)
+ 	ctx->h264_profile_ctrl = v4l2_ctrl_new_std_menu(&ctx->ctrls,
+ 		&coda_ctrl_ops, V4L2_CID_MPEG_VIDEO_H264_PROFILE,
+ 		V4L2_MPEG_VIDEO_H264_PROFILE_HIGH,
+-		~((1 << V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE) |
++		~((1 << V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE) |
+ 		  (1 << V4L2_MPEG_VIDEO_H264_PROFILE_MAIN) |
+ 		  (1 << V4L2_MPEG_VIDEO_H264_PROFILE_HIGH)),
+ 		V4L2_MPEG_VIDEO_H264_PROFILE_HIGH);
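
The coda_enum_frameintervals rework stops scanning the flat src/dst
format arrays and instead asks coda_find_codec() whether the requested
pairing exists, then bounds-checks width and height against that
codec's limits: validation against the capability record itself rather
than mere list membership. A sketch of the lookup-then-bound-check
shape; all names and numbers are hypothetical:

    #include <stdio.h>

    struct codec { int src, dst, max_w, max_h; };

    static const struct codec codecs[] = {
        { 1 /* YUV420 */, 2 /* H264 */, 1920, 1088 },
        { 1 /* YUV420 */, 3 /* JPEG */, 8192, 8192 },
    };

    static const struct codec *find_codec(int src, int dst)
    {
        for (unsigned int i = 0; i < sizeof(codecs) / sizeof(codecs[0]); i++)
            if (codecs[i].src == src && codecs[i].dst == dst)
                return &codecs[i];
        return NULL;                /* pairing not supported at all */
    }

    static int enum_intervals(int src, int dst, int w, int h)
    {
        const struct codec *c = find_codec(src, dst);

        if (!c)
            return -1;              /* -EINVAL in the driver */
        if (w < 16 || w > c->max_w || h < 16 || h > c->max_h)
            return -1;
        return 0;
    }

    int main(void)
    {
        printf("1920x1080 H264: %s\n",
               enum_intervals(1, 2, 1920, 1080) ? "rejected" : "ok");
        printf("4096x4096 H264: %s\n",
               enum_intervals(1, 2, 4096, 4096) ? "rejected" : "ok");
        return 0;
    }
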
+diff --git a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c
+index 130ecef2e7664..c8ee5e2b4f699 100644
+--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c
++++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec.c
+@@ -47,14 +47,7 @@ static struct mtk_q_data *mtk_vdec_get_q_data(struct mtk_vcodec_ctx *ctx,
+ static int vidioc_try_decoder_cmd(struct file *file, void *priv,
+ 				struct v4l2_decoder_cmd *cmd)
+ {
+-	struct mtk_vcodec_ctx *ctx = fh_to_ctx(priv);
+-
+-	/* Use M2M stateless helper if relevant */
+-	if (ctx->dev->vdec_pdata->uses_stateless_api)
+-		return v4l2_m2m_ioctl_stateless_try_decoder_cmd(file, priv,
+-								cmd);
+-	else
+-		return v4l2_m2m_ioctl_try_decoder_cmd(file, priv, cmd);
++	return v4l2_m2m_ioctl_try_decoder_cmd(file, priv, cmd);
+ }
+ 
+ 
+@@ -69,10 +62,6 @@ static int vidioc_decoder_cmd(struct file *file, void *priv,
+ 	if (ret)
+ 		return ret;
+ 
+-	/* Use M2M stateless helper if relevant */
+-	if (ctx->dev->vdec_pdata->uses_stateless_api)
+-		return v4l2_m2m_ioctl_stateless_decoder_cmd(file, priv, cmd);
+-
+ 	mtk_v4l2_debug(1, "decoder cmd=%u", cmd->cmd);
+ 	dst_vq = v4l2_m2m_get_vq(ctx->m2m_ctx,
+ 				V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
+diff --git a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_drv.c b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_drv.c
+index df7b25e9cbc88..fe7b2f1739b15 100644
+--- a/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_drv.c
++++ b/drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_drv.c
+@@ -400,6 +400,9 @@ static int mtk_vcodec_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	if (dev->vdec_pdata->uses_stateless_api) {
++		v4l2_disable_ioctl(vfd_dec, VIDIOC_DECODER_CMD);
++		v4l2_disable_ioctl(vfd_dec, VIDIOC_TRY_DECODER_CMD);
++
+ 		dev->mdev_dec.dev = &pdev->dev;
+ 		strscpy(dev->mdev_dec.model, MTK_VCODEC_DEC_NAME,
+ 			sizeof(dev->mdev_dec.model));
+@@ -487,7 +490,8 @@ static int mtk_vcodec_dec_remove(struct platform_device *pdev)
+ 		video_unregister_device(dev->vfd_dec);
+ 
+ 	v4l2_device_unregister(&dev->v4l2_dev);
+-	pm_runtime_disable(dev->pm.dev);
++	if (!dev->vdec_pdata->is_subdev_supported)
++		pm_runtime_disable(dev->pm.dev);
+ 	mtk_vcodec_fw_release(dev->fw_handler);
+ 	return 0;
+ }
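
The mtk-vcodec change deletes the per-call stateless/stateful branches
from the decoder-cmd handlers and instead calls v4l2_disable_ioctl()
once at registration, so unsupported entry points never reach the
driver at all. Resolving capability differences when the interface is
built keeps each handler unconditional, as in this function-pointer
sketch (hypothetical names):

    #include <stdio.h>

    struct ops { int (*decoder_cmd)(void); };

    static int decoder_cmd(void) { puts("decoder cmd"); return 0; }

    static void register_device(struct ops *ops, int stateless)
    {
        ops->decoder_cmd = decoder_cmd;
        if (stateless)
            ops->decoder_cmd = NULL;   /* disabled once, up front */
    }

    static int call_decoder_cmd(const struct ops *ops)
    {
        if (!ops->decoder_cmd)
            return -1;                 /* -ENOTTY: op not wired up */
        return ops->decoder_cmd();     /* no mode checks needed here */
    }

    int main(void)
    {
        struct ops stateful, stateless;

        register_device(&stateful, 0);
        register_device(&stateless, 1);
        printf("stateful:  %d\n", call_decoder_cmd(&stateful));
        printf("stateless: %d\n", call_decoder_cmd(&stateless));
        return 0;
    }
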
+diff --git a/drivers/media/platform/nxp/imx-mipi-csis.c b/drivers/media/platform/nxp/imx-mipi-csis.c
+index 0a72734db55e9..e0e345fbb00f8 100644
+--- a/drivers/media/platform/nxp/imx-mipi-csis.c
++++ b/drivers/media/platform/nxp/imx-mipi-csis.c
+@@ -310,7 +310,7 @@ struct mipi_csis_info {
+ 	unsigned int num_clocks;
+ };
+ 
+-struct csi_state {
++struct mipi_csis_device {
+ 	struct device *dev;
+ 	void __iomem *regs;
+ 	struct clk_bulk_data *clks;
+@@ -487,59 +487,60 @@ static const struct csis_pix_format *find_csis_format(u32 code)
+  * Hardware configuration
+  */
+ 
+-static inline u32 mipi_csis_read(struct csi_state *state, u32 reg)
++static inline u32 mipi_csis_read(struct mipi_csis_device *csis, u32 reg)
+ {
+-	return readl(state->regs + reg);
++	return readl(csis->regs + reg);
+ }
+ 
+-static inline void mipi_csis_write(struct csi_state *state, u32 reg, u32 val)
++static inline void mipi_csis_write(struct mipi_csis_device *csis, u32 reg,
++				   u32 val)
+ {
+-	writel(val, state->regs + reg);
++	writel(val, csis->regs + reg);
+ }
+ 
+-static void mipi_csis_enable_interrupts(struct csi_state *state, bool on)
++static void mipi_csis_enable_interrupts(struct mipi_csis_device *csis, bool on)
+ {
+-	mipi_csis_write(state, MIPI_CSIS_INT_MSK, on ? 0xffffffff : 0);
+-	mipi_csis_write(state, MIPI_CSIS_DBG_INTR_MSK, on ? 0xffffffff : 0);
++	mipi_csis_write(csis, MIPI_CSIS_INT_MSK, on ? 0xffffffff : 0);
++	mipi_csis_write(csis, MIPI_CSIS_DBG_INTR_MSK, on ? 0xffffffff : 0);
+ }
+ 
+-static void mipi_csis_sw_reset(struct csi_state *state)
++static void mipi_csis_sw_reset(struct mipi_csis_device *csis)
+ {
+-	u32 val = mipi_csis_read(state, MIPI_CSIS_CMN_CTRL);
++	u32 val = mipi_csis_read(csis, MIPI_CSIS_CMN_CTRL);
+ 
+-	mipi_csis_write(state, MIPI_CSIS_CMN_CTRL,
++	mipi_csis_write(csis, MIPI_CSIS_CMN_CTRL,
+ 			val | MIPI_CSIS_CMN_CTRL_RESET);
+ 	usleep_range(10, 20);
+ }
+ 
+-static void mipi_csis_system_enable(struct csi_state *state, int on)
++static void mipi_csis_system_enable(struct mipi_csis_device *csis, int on)
+ {
+ 	u32 val, mask;
+ 
+-	val = mipi_csis_read(state, MIPI_CSIS_CMN_CTRL);
++	val = mipi_csis_read(csis, MIPI_CSIS_CMN_CTRL);
+ 	if (on)
+ 		val |= MIPI_CSIS_CMN_CTRL_ENABLE;
+ 	else
+ 		val &= ~MIPI_CSIS_CMN_CTRL_ENABLE;
+-	mipi_csis_write(state, MIPI_CSIS_CMN_CTRL, val);
++	mipi_csis_write(csis, MIPI_CSIS_CMN_CTRL, val);
+ 
+-	val = mipi_csis_read(state, MIPI_CSIS_DPHY_CMN_CTRL);
++	val = mipi_csis_read(csis, MIPI_CSIS_DPHY_CMN_CTRL);
+ 	val &= ~MIPI_CSIS_DPHY_CMN_CTRL_ENABLE;
+ 	if (on) {
+-		mask = (1 << (state->bus.num_data_lanes + 1)) - 1;
++		mask = (1 << (csis->bus.num_data_lanes + 1)) - 1;
+ 		val |= (mask & MIPI_CSIS_DPHY_CMN_CTRL_ENABLE);
+ 	}
+-	mipi_csis_write(state, MIPI_CSIS_DPHY_CMN_CTRL, val);
++	mipi_csis_write(csis, MIPI_CSIS_DPHY_CMN_CTRL, val);
+ }
+ 
+-/* Called with the state.lock mutex held */
+-static void __mipi_csis_set_format(struct csi_state *state)
++/* Called with the csis.lock mutex held */
++static void __mipi_csis_set_format(struct mipi_csis_device *csis)
+ {
+-	struct v4l2_mbus_framefmt *mf = &state->format_mbus[CSIS_PAD_SINK];
++	struct v4l2_mbus_framefmt *mf = &csis->format_mbus[CSIS_PAD_SINK];
+ 	u32 val;
+ 
+ 	/* Color format */
+-	val = mipi_csis_read(state, MIPI_CSIS_ISP_CONFIG_CH(0));
++	val = mipi_csis_read(csis, MIPI_CSIS_ISP_CONFIG_CH(0));
+ 	val &= ~(MIPI_CSIS_ISPCFG_ALIGN_32BIT | MIPI_CSIS_ISPCFG_FMT_MASK
+ 		| MIPI_CSIS_ISPCFG_PIXEL_MASK);
+ 
+@@ -556,28 +557,28 @@ static void __mipi_csis_set_format(struct csi_state *state)
+ 	 *
+ 	 * TODO: Verify which other formats require DUAL (or QUAD) modes.
+ 	 */
+-	if (state->csis_fmt->data_type == MIPI_CSI2_DATA_TYPE_YUV422_8)
++	if (csis->csis_fmt->data_type == MIPI_CSI2_DATA_TYPE_YUV422_8)
+ 		val |= MIPI_CSIS_ISPCFG_PIXEL_MODE_DUAL;
+ 
+-	val |= MIPI_CSIS_ISPCFG_FMT(state->csis_fmt->data_type);
+-	mipi_csis_write(state, MIPI_CSIS_ISP_CONFIG_CH(0), val);
++	val |= MIPI_CSIS_ISPCFG_FMT(csis->csis_fmt->data_type);
++	mipi_csis_write(csis, MIPI_CSIS_ISP_CONFIG_CH(0), val);
+ 
+ 	/* Pixel resolution */
+ 	val = mf->width | (mf->height << 16);
+-	mipi_csis_write(state, MIPI_CSIS_ISP_RESOL_CH(0), val);
++	mipi_csis_write(csis, MIPI_CSIS_ISP_RESOL_CH(0), val);
+ }
+ 
+-static int mipi_csis_calculate_params(struct csi_state *state)
++static int mipi_csis_calculate_params(struct mipi_csis_device *csis)
+ {
+ 	s64 link_freq;
+ 	u32 lane_rate;
+ 
+ 	/* Calculate the line rate from the pixel rate. */
+-	link_freq = v4l2_get_link_freq(state->src_sd->ctrl_handler,
+-				       state->csis_fmt->width,
+-				       state->bus.num_data_lanes * 2);
++	link_freq = v4l2_get_link_freq(csis->src_sd->ctrl_handler,
++				       csis->csis_fmt->width,
++				       csis->bus.num_data_lanes * 2);
+ 	if (link_freq < 0) {
+-		dev_err(state->dev, "Unable to obtain link frequency: %d\n",
++		dev_err(csis->dev, "Unable to obtain link frequency: %d\n",
+ 			(int)link_freq);
+ 		return link_freq;
+ 	}
+@@ -585,7 +586,7 @@ static int mipi_csis_calculate_params(struct csi_state *state)
+ 	lane_rate = link_freq * 2;
+ 
+ 	if (lane_rate < 80000000 || lane_rate > 1500000000) {
+-		dev_dbg(state->dev, "Out-of-bound lane rate %u\n", lane_rate);
++		dev_dbg(csis->dev, "Out-of-bound lane rate %u\n", lane_rate);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -595,57 +596,57 @@ static int mipi_csis_calculate_params(struct csi_state *state)
+ 	 * (which is documented as corresponding to CSI-2 v0.87 to v1.00) until
+ 	 * we figure out how to compute it correctly.
+ 	 */
+-	state->hs_settle = (lane_rate - 5000000) / 45000000;
+-	state->clk_settle = 0;
++	csis->hs_settle = (lane_rate - 5000000) / 45000000;
++	csis->clk_settle = 0;
+ 
+-	dev_dbg(state->dev, "lane rate %u, Tclk_settle %u, Ths_settle %u\n",
+-		lane_rate, state->clk_settle, state->hs_settle);
++	dev_dbg(csis->dev, "lane rate %u, Tclk_settle %u, Ths_settle %u\n",
++		lane_rate, csis->clk_settle, csis->hs_settle);
+ 
+-	if (state->debug.hs_settle < 0xff) {
+-		dev_dbg(state->dev, "overriding Ths_settle with %u\n",
+-			state->debug.hs_settle);
+-		state->hs_settle = state->debug.hs_settle;
++	if (csis->debug.hs_settle < 0xff) {
++		dev_dbg(csis->dev, "overriding Ths_settle with %u\n",
++			csis->debug.hs_settle);
++		csis->hs_settle = csis->debug.hs_settle;
+ 	}
+ 
+-	if (state->debug.clk_settle < 4) {
+-		dev_dbg(state->dev, "overriding Tclk_settle with %u\n",
+-			state->debug.clk_settle);
+-		state->clk_settle = state->debug.clk_settle;
++	if (csis->debug.clk_settle < 4) {
++		dev_dbg(csis->dev, "overriding Tclk_settle with %u\n",
++			csis->debug.clk_settle);
++		csis->clk_settle = csis->debug.clk_settle;
+ 	}
+ 
+ 	return 0;
+ }
+ 
+-static void mipi_csis_set_params(struct csi_state *state)
++static void mipi_csis_set_params(struct mipi_csis_device *csis)
+ {
+-	int lanes = state->bus.num_data_lanes;
++	int lanes = csis->bus.num_data_lanes;
+ 	u32 val;
+ 
+-	val = mipi_csis_read(state, MIPI_CSIS_CMN_CTRL);
++	val = mipi_csis_read(csis, MIPI_CSIS_CMN_CTRL);
+ 	val &= ~MIPI_CSIS_CMN_CTRL_LANE_NR_MASK;
+ 	val |= (lanes - 1) << MIPI_CSIS_CMN_CTRL_LANE_NR_OFFSET;
+-	if (state->info->version == MIPI_CSIS_V3_3)
++	if (csis->info->version == MIPI_CSIS_V3_3)
+ 		val |= MIPI_CSIS_CMN_CTRL_INTER_MODE;
+-	mipi_csis_write(state, MIPI_CSIS_CMN_CTRL, val);
++	mipi_csis_write(csis, MIPI_CSIS_CMN_CTRL, val);
+ 
+-	__mipi_csis_set_format(state);
++	__mipi_csis_set_format(csis);
+ 
+-	mipi_csis_write(state, MIPI_CSIS_DPHY_CMN_CTRL,
+-			MIPI_CSIS_DPHY_CMN_CTRL_HSSETTLE(state->hs_settle) |
+-			MIPI_CSIS_DPHY_CMN_CTRL_CLKSETTLE(state->clk_settle));
++	mipi_csis_write(csis, MIPI_CSIS_DPHY_CMN_CTRL,
++			MIPI_CSIS_DPHY_CMN_CTRL_HSSETTLE(csis->hs_settle) |
++			MIPI_CSIS_DPHY_CMN_CTRL_CLKSETTLE(csis->clk_settle));
+ 
+ 	val = (0 << MIPI_CSIS_ISP_SYNC_HSYNC_LINTV_OFFSET)
+ 	    | (0 << MIPI_CSIS_ISP_SYNC_VSYNC_SINTV_OFFSET)
+ 	    | (0 << MIPI_CSIS_ISP_SYNC_VSYNC_EINTV_OFFSET);
+-	mipi_csis_write(state, MIPI_CSIS_ISP_SYNC_CH(0), val);
++	mipi_csis_write(csis, MIPI_CSIS_ISP_SYNC_CH(0), val);
+ 
+-	val = mipi_csis_read(state, MIPI_CSIS_CLK_CTRL);
++	val = mipi_csis_read(csis, MIPI_CSIS_CLK_CTRL);
+ 	val |= MIPI_CSIS_CLK_CTRL_WCLK_SRC;
+ 	val |= MIPI_CSIS_CLK_CTRL_CLKGATE_TRAIL_CH0(15);
+ 	val &= ~MIPI_CSIS_CLK_CTRL_CLKGATE_EN_MSK;
+-	mipi_csis_write(state, MIPI_CSIS_CLK_CTRL, val);
++	mipi_csis_write(csis, MIPI_CSIS_CLK_CTRL, val);
+ 
+-	mipi_csis_write(state, MIPI_CSIS_DPHY_BCTRL_L,
++	mipi_csis_write(csis, MIPI_CSIS_DPHY_BCTRL_L,
+ 			MIPI_CSIS_DPHY_BCTRL_L_BIAS_REF_VOLT_715MV |
+ 			MIPI_CSIS_DPHY_BCTRL_L_BGR_CHOPPER_FREQ_3MHZ |
+ 			MIPI_CSIS_DPHY_BCTRL_L_REG_12P_LVL_CTL_1_2V |
+@@ -653,95 +654,95 @@ static void mipi_csis_set_params(struct csi_state *state)
+ 			MIPI_CSIS_DPHY_BCTRL_L_LP_RX_VREF_LVL_715MV |
+ 			MIPI_CSIS_DPHY_BCTRL_L_LP_CD_HYS_60MV |
+ 			MIPI_CSIS_DPHY_BCTRL_L_B_DPHYCTRL(20000000));
+-	mipi_csis_write(state, MIPI_CSIS_DPHY_BCTRL_H, 0);
++	mipi_csis_write(csis, MIPI_CSIS_DPHY_BCTRL_H, 0);
+ 
+ 	/* Update the shadow register. */
+-	val = mipi_csis_read(state, MIPI_CSIS_CMN_CTRL);
+-	mipi_csis_write(state, MIPI_CSIS_CMN_CTRL,
++	val = mipi_csis_read(csis, MIPI_CSIS_CMN_CTRL);
++	mipi_csis_write(csis, MIPI_CSIS_CMN_CTRL,
+ 			val | MIPI_CSIS_CMN_CTRL_UPDATE_SHADOW |
+ 			MIPI_CSIS_CMN_CTRL_UPDATE_SHADOW_CTRL);
+ }
+ 
+-static int mipi_csis_clk_enable(struct csi_state *state)
++static int mipi_csis_clk_enable(struct mipi_csis_device *csis)
+ {
+-	return clk_bulk_prepare_enable(state->info->num_clocks, state->clks);
++	return clk_bulk_prepare_enable(csis->info->num_clocks, csis->clks);
+ }
+ 
+-static void mipi_csis_clk_disable(struct csi_state *state)
++static void mipi_csis_clk_disable(struct mipi_csis_device *csis)
+ {
+-	clk_bulk_disable_unprepare(state->info->num_clocks, state->clks);
++	clk_bulk_disable_unprepare(csis->info->num_clocks, csis->clks);
+ }
+ 
+-static int mipi_csis_clk_get(struct csi_state *state)
++static int mipi_csis_clk_get(struct mipi_csis_device *csis)
+ {
+ 	unsigned int i;
+ 	int ret;
+ 
+-	state->clks = devm_kcalloc(state->dev, state->info->num_clocks,
+-				   sizeof(*state->clks), GFP_KERNEL);
++	csis->clks = devm_kcalloc(csis->dev, csis->info->num_clocks,
++				  sizeof(*csis->clks), GFP_KERNEL);
+ 
+-	if (!state->clks)
++	if (!csis->clks)
+ 		return -ENOMEM;
+ 
+-	for (i = 0; i < state->info->num_clocks; i++)
+-		state->clks[i].id = mipi_csis_clk_id[i];
++	for (i = 0; i < csis->info->num_clocks; i++)
++		csis->clks[i].id = mipi_csis_clk_id[i];
+ 
+-	ret = devm_clk_bulk_get(state->dev, state->info->num_clocks,
+-				state->clks);
++	ret = devm_clk_bulk_get(csis->dev, csis->info->num_clocks,
++				csis->clks);
+ 	if (ret < 0)
+ 		return ret;
+ 
+ 	/* Set clock rate */
+-	ret = clk_set_rate(state->clks[MIPI_CSIS_CLK_WRAP].clk,
+-			   state->clk_frequency);
++	ret = clk_set_rate(csis->clks[MIPI_CSIS_CLK_WRAP].clk,
++			   csis->clk_frequency);
+ 	if (ret < 0)
+-		dev_err(state->dev, "set rate=%d failed: %d\n",
+-			state->clk_frequency, ret);
++		dev_err(csis->dev, "set rate=%d failed: %d\n",
++			csis->clk_frequency, ret);
+ 
+ 	return ret;
+ }
+ 
+-static void mipi_csis_start_stream(struct csi_state *state)
++static void mipi_csis_start_stream(struct mipi_csis_device *csis)
+ {
+-	mipi_csis_sw_reset(state);
+-	mipi_csis_set_params(state);
+-	mipi_csis_system_enable(state, true);
+-	mipi_csis_enable_interrupts(state, true);
++	mipi_csis_sw_reset(csis);
++	mipi_csis_set_params(csis);
++	mipi_csis_system_enable(csis, true);
++	mipi_csis_enable_interrupts(csis, true);
+ }
+ 
+-static void mipi_csis_stop_stream(struct csi_state *state)
++static void mipi_csis_stop_stream(struct mipi_csis_device *csis)
+ {
+-	mipi_csis_enable_interrupts(state, false);
+-	mipi_csis_system_enable(state, false);
++	mipi_csis_enable_interrupts(csis, false);
++	mipi_csis_system_enable(csis, false);
+ }
+ 
+ static irqreturn_t mipi_csis_irq_handler(int irq, void *dev_id)
+ {
+-	struct csi_state *state = dev_id;
++	struct mipi_csis_device *csis = dev_id;
+ 	unsigned long flags;
+ 	unsigned int i;
+ 	u32 status;
+ 	u32 dbg_status;
+ 
+-	status = mipi_csis_read(state, MIPI_CSIS_INT_SRC);
+-	dbg_status = mipi_csis_read(state, MIPI_CSIS_DBG_INTR_SRC);
++	status = mipi_csis_read(csis, MIPI_CSIS_INT_SRC);
++	dbg_status = mipi_csis_read(csis, MIPI_CSIS_DBG_INTR_SRC);
+ 
+-	spin_lock_irqsave(&state->slock, flags);
++	spin_lock_irqsave(&csis->slock, flags);
+ 
+ 	/* Update the event/error counters */
+-	if ((status & MIPI_CSIS_INT_SRC_ERRORS) || state->debug.enable) {
++	if ((status & MIPI_CSIS_INT_SRC_ERRORS) || csis->debug.enable) {
+ 		for (i = 0; i < MIPI_CSIS_NUM_EVENTS; i++) {
+-			struct mipi_csis_event *event = &state->events[i];
++			struct mipi_csis_event *event = &csis->events[i];
+ 
+ 			if ((!event->debug && (status & event->mask)) ||
+ 			    (event->debug && (dbg_status & event->mask)))
+ 				event->counter++;
+ 		}
+ 	}
+-	spin_unlock_irqrestore(&state->slock, flags);
++	spin_unlock_irqrestore(&csis->slock, flags);
+ 
+-	mipi_csis_write(state, MIPI_CSIS_INT_SRC, status);
+-	mipi_csis_write(state, MIPI_CSIS_DBG_INTR_SRC, dbg_status);
++	mipi_csis_write(csis, MIPI_CSIS_INT_SRC, status);
++	mipi_csis_write(csis, MIPI_CSIS_DBG_INTR_SRC, dbg_status);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -750,47 +751,47 @@ static irqreturn_t mipi_csis_irq_handler(int irq, void *dev_id)
+  * PHY regulator and reset
+  */
+ 
+-static int mipi_csis_phy_enable(struct csi_state *state)
++static int mipi_csis_phy_enable(struct mipi_csis_device *csis)
+ {
+-	if (state->info->version != MIPI_CSIS_V3_3)
++	if (csis->info->version != MIPI_CSIS_V3_3)
+ 		return 0;
+ 
+-	return regulator_enable(state->mipi_phy_regulator);
++	return regulator_enable(csis->mipi_phy_regulator);
+ }
+ 
+-static int mipi_csis_phy_disable(struct csi_state *state)
++static int mipi_csis_phy_disable(struct mipi_csis_device *csis)
+ {
+-	if (state->info->version != MIPI_CSIS_V3_3)
++	if (csis->info->version != MIPI_CSIS_V3_3)
+ 		return 0;
+ 
+-	return regulator_disable(state->mipi_phy_regulator);
++	return regulator_disable(csis->mipi_phy_regulator);
+ }
+ 
+-static void mipi_csis_phy_reset(struct csi_state *state)
++static void mipi_csis_phy_reset(struct mipi_csis_device *csis)
+ {
+-	if (state->info->version != MIPI_CSIS_V3_3)
++	if (csis->info->version != MIPI_CSIS_V3_3)
+ 		return;
+ 
+-	reset_control_assert(state->mrst);
++	reset_control_assert(csis->mrst);
+ 	msleep(20);
+-	reset_control_deassert(state->mrst);
++	reset_control_deassert(csis->mrst);
+ }
+ 
+-static int mipi_csis_phy_init(struct csi_state *state)
++static int mipi_csis_phy_init(struct mipi_csis_device *csis)
+ {
+-	if (state->info->version != MIPI_CSIS_V3_3)
++	if (csis->info->version != MIPI_CSIS_V3_3)
+ 		return 0;
+ 
+ 	/* Get MIPI PHY reset and regulator. */
+-	state->mrst = devm_reset_control_get_exclusive(state->dev, NULL);
+-	if (IS_ERR(state->mrst))
+-		return PTR_ERR(state->mrst);
++	csis->mrst = devm_reset_control_get_exclusive(csis->dev, NULL);
++	if (IS_ERR(csis->mrst))
++		return PTR_ERR(csis->mrst);
+ 
+-	state->mipi_phy_regulator = devm_regulator_get(state->dev, "phy");
+-	if (IS_ERR(state->mipi_phy_regulator))
+-		return PTR_ERR(state->mipi_phy_regulator);
++	csis->mipi_phy_regulator = devm_regulator_get(csis->dev, "phy");
++	if (IS_ERR(csis->mipi_phy_regulator))
++		return PTR_ERR(csis->mipi_phy_regulator);
+ 
+-	return regulator_set_voltage(state->mipi_phy_regulator, 1000000,
++	return regulator_set_voltage(csis->mipi_phy_regulator, 1000000,
+ 				     1000000);
+ }
+ 
+@@ -798,36 +799,36 @@ static int mipi_csis_phy_init(struct csi_state *state)
+  * Debug
+  */
+ 
+-static void mipi_csis_clear_counters(struct csi_state *state)
++static void mipi_csis_clear_counters(struct mipi_csis_device *csis)
+ {
+ 	unsigned long flags;
+ 	unsigned int i;
+ 
+-	spin_lock_irqsave(&state->slock, flags);
++	spin_lock_irqsave(&csis->slock, flags);
+ 	for (i = 0; i < MIPI_CSIS_NUM_EVENTS; i++)
+-		state->events[i].counter = 0;
+-	spin_unlock_irqrestore(&state->slock, flags);
++		csis->events[i].counter = 0;
++	spin_unlock_irqrestore(&csis->slock, flags);
+ }
+ 
+-static void mipi_csis_log_counters(struct csi_state *state, bool non_errors)
++static void mipi_csis_log_counters(struct mipi_csis_device *csis, bool non_errors)
+ {
+ 	unsigned int num_events = non_errors ? MIPI_CSIS_NUM_EVENTS
+ 				: MIPI_CSIS_NUM_EVENTS - 8;
+ 	unsigned long flags;
+ 	unsigned int i;
+ 
+-	spin_lock_irqsave(&state->slock, flags);
++	spin_lock_irqsave(&csis->slock, flags);
+ 
+ 	for (i = 0; i < num_events; ++i) {
+-		if (state->events[i].counter > 0 || state->debug.enable)
+-			dev_info(state->dev, "%s events: %d\n",
+-				 state->events[i].name,
+-				 state->events[i].counter);
++		if (csis->events[i].counter > 0 || csis->debug.enable)
++			dev_info(csis->dev, "%s events: %d\n",
++				 csis->events[i].name,
++				 csis->events[i].counter);
+ 	}
+-	spin_unlock_irqrestore(&state->slock, flags);
++	spin_unlock_irqrestore(&csis->slock, flags);
+ }
+ 
+-static int mipi_csis_dump_regs(struct csi_state *state)
++static int mipi_csis_dump_regs(struct mipi_csis_device *csis)
+ {
+ 	static const struct {
+ 		u32 offset;
+@@ -851,11 +852,11 @@ static int mipi_csis_dump_regs(struct csi_state *state)
+ 	unsigned int i;
+ 	u32 cfg;
+ 
+-	dev_info(state->dev, "--- REGISTERS ---\n");
++	dev_info(csis->dev, "--- REGISTERS ---\n");
+ 
+ 	for (i = 0; i < ARRAY_SIZE(registers); i++) {
+-		cfg = mipi_csis_read(state, registers[i].offset);
+-		dev_info(state->dev, "%14s: 0x%08x\n", registers[i].name, cfg);
++		cfg = mipi_csis_read(csis, registers[i].offset);
++		dev_info(csis->dev, "%14s: 0x%08x\n", registers[i].name, cfg);
+ 	}
+ 
+ 	return 0;
+@@ -863,123 +864,123 @@ static int mipi_csis_dump_regs(struct csi_state *state)
+ 
+ static int mipi_csis_dump_regs_show(struct seq_file *m, void *private)
+ {
+-	struct csi_state *state = m->private;
++	struct mipi_csis_device *csis = m->private;
+ 
+-	return mipi_csis_dump_regs(state);
++	return mipi_csis_dump_regs(csis);
+ }
+ DEFINE_SHOW_ATTRIBUTE(mipi_csis_dump_regs);
+ 
+-static void mipi_csis_debugfs_init(struct csi_state *state)
++static void mipi_csis_debugfs_init(struct mipi_csis_device *csis)
+ {
+-	state->debug.hs_settle = UINT_MAX;
+-	state->debug.clk_settle = UINT_MAX;
++	csis->debug.hs_settle = UINT_MAX;
++	csis->debug.clk_settle = UINT_MAX;
+ 
+-	state->debugfs_root = debugfs_create_dir(dev_name(state->dev), NULL);
++	csis->debugfs_root = debugfs_create_dir(dev_name(csis->dev), NULL);
+ 
+-	debugfs_create_bool("debug_enable", 0600, state->debugfs_root,
+-			    &state->debug.enable);
+-	debugfs_create_file("dump_regs", 0600, state->debugfs_root, state,
++	debugfs_create_bool("debug_enable", 0600, csis->debugfs_root,
++			    &csis->debug.enable);
++	debugfs_create_file("dump_regs", 0600, csis->debugfs_root, csis,
+ 			    &mipi_csis_dump_regs_fops);
+-	debugfs_create_u32("tclk_settle", 0600, state->debugfs_root,
+-			   &state->debug.clk_settle);
+-	debugfs_create_u32("ths_settle", 0600, state->debugfs_root,
+-			   &state->debug.hs_settle);
++	debugfs_create_u32("tclk_settle", 0600, csis->debugfs_root,
++			   &csis->debug.clk_settle);
++	debugfs_create_u32("ths_settle", 0600, csis->debugfs_root,
++			   &csis->debug.hs_settle);
+ }
+ 
+-static void mipi_csis_debugfs_exit(struct csi_state *state)
++static void mipi_csis_debugfs_exit(struct mipi_csis_device *csis)
+ {
+-	debugfs_remove_recursive(state->debugfs_root);
++	debugfs_remove_recursive(csis->debugfs_root);
+ }
+ 
+ /* -----------------------------------------------------------------------------
+  * V4L2 subdev operations
+  */
+ 
+-static struct csi_state *mipi_sd_to_csis_state(struct v4l2_subdev *sdev)
++static struct mipi_csis_device *sd_to_mipi_csis_device(struct v4l2_subdev *sdev)
+ {
+-	return container_of(sdev, struct csi_state, sd);
++	return container_of(sdev, struct mipi_csis_device, sd);
+ }
+ 
+ static int mipi_csis_s_stream(struct v4l2_subdev *sd, int enable)
+ {
+-	struct csi_state *state = mipi_sd_to_csis_state(sd);
++	struct mipi_csis_device *csis = sd_to_mipi_csis_device(sd);
+ 	int ret;
+ 
+ 	if (enable) {
+-		ret = mipi_csis_calculate_params(state);
++		ret = mipi_csis_calculate_params(csis);
+ 		if (ret < 0)
+ 			return ret;
+ 
+-		mipi_csis_clear_counters(state);
++		mipi_csis_clear_counters(csis);
+ 
+-		ret = pm_runtime_resume_and_get(state->dev);
++		ret = pm_runtime_resume_and_get(csis->dev);
+ 		if (ret < 0)
+ 			return ret;
+ 
+-		ret = v4l2_subdev_call(state->src_sd, core, s_power, 1);
++		ret = v4l2_subdev_call(csis->src_sd, core, s_power, 1);
+ 		if (ret < 0 && ret != -ENOIOCTLCMD)
+ 			goto done;
+ 	}
+ 
+-	mutex_lock(&state->lock);
++	mutex_lock(&csis->lock);
+ 
+ 	if (enable) {
+-		if (state->state & ST_SUSPENDED) {
++		if (csis->state & ST_SUSPENDED) {
+ 			ret = -EBUSY;
+ 			goto unlock;
+ 		}
+ 
+-		mipi_csis_start_stream(state);
+-		ret = v4l2_subdev_call(state->src_sd, video, s_stream, 1);
++		mipi_csis_start_stream(csis);
++		ret = v4l2_subdev_call(csis->src_sd, video, s_stream, 1);
+ 		if (ret < 0)
+ 			goto unlock;
+ 
+-		mipi_csis_log_counters(state, true);
++		mipi_csis_log_counters(csis, true);
+ 
+-		state->state |= ST_STREAMING;
++		csis->state |= ST_STREAMING;
+ 	} else {
+-		v4l2_subdev_call(state->src_sd, video, s_stream, 0);
+-		ret = v4l2_subdev_call(state->src_sd, core, s_power, 0);
++		v4l2_subdev_call(csis->src_sd, video, s_stream, 0);
++		ret = v4l2_subdev_call(csis->src_sd, core, s_power, 0);
+ 		if (ret == -ENOIOCTLCMD)
+ 			ret = 0;
+-		mipi_csis_stop_stream(state);
+-		state->state &= ~ST_STREAMING;
+-		if (state->debug.enable)
+-			mipi_csis_log_counters(state, true);
++		mipi_csis_stop_stream(csis);
++		csis->state &= ~ST_STREAMING;
++		if (csis->debug.enable)
++			mipi_csis_log_counters(csis, true);
+ 	}
+ 
+ unlock:
+-	mutex_unlock(&state->lock);
++	mutex_unlock(&csis->lock);
+ 
+ done:
+ 	if (!enable || ret < 0)
+-		pm_runtime_put(state->dev);
++		pm_runtime_put(csis->dev);
+ 
+ 	return ret;
+ }
+ 
+ static struct v4l2_mbus_framefmt *
+-mipi_csis_get_format(struct csi_state *state,
++mipi_csis_get_format(struct mipi_csis_device *csis,
+ 		     struct v4l2_subdev_state *sd_state,
+ 		     enum v4l2_subdev_format_whence which,
+ 		     unsigned int pad)
+ {
+ 	if (which == V4L2_SUBDEV_FORMAT_TRY)
+-		return v4l2_subdev_get_try_format(&state->sd, sd_state, pad);
++		return v4l2_subdev_get_try_format(&csis->sd, sd_state, pad);
+ 
+-	return &state->format_mbus[pad];
++	return &csis->format_mbus[pad];
+ }
+ 
+ static int mipi_csis_init_cfg(struct v4l2_subdev *sd,
+ 			      struct v4l2_subdev_state *sd_state)
+ {
+-	struct csi_state *state = mipi_sd_to_csis_state(sd);
++	struct mipi_csis_device *csis = sd_to_mipi_csis_device(sd);
+ 	struct v4l2_mbus_framefmt *fmt_sink;
+ 	struct v4l2_mbus_framefmt *fmt_source;
+ 	enum v4l2_subdev_format_whence which;
+ 
+ 	which = sd_state ? V4L2_SUBDEV_FORMAT_TRY : V4L2_SUBDEV_FORMAT_ACTIVE;
+-	fmt_sink = mipi_csis_get_format(state, sd_state, which, CSIS_PAD_SINK);
++	fmt_sink = mipi_csis_get_format(csis, sd_state, which, CSIS_PAD_SINK);
+ 
+ 	fmt_sink->code = MEDIA_BUS_FMT_UYVY8_1X16;
+ 	fmt_sink->width = MIPI_CSIS_DEF_PIX_WIDTH;
+@@ -993,15 +994,7 @@ static int mipi_csis_init_cfg(struct v4l2_subdev *sd,
+ 		V4L2_MAP_QUANTIZATION_DEFAULT(false, fmt_sink->colorspace,
+ 					      fmt_sink->ycbcr_enc);
+ 
+-	/*
+-	 * When called from mipi_csis_subdev_init() to initialize the active
+-	 * configuration, cfg is NULL, which indicates there's no source pad
+-	 * configuration to set.
+-	 */
+-	if (!sd_state)
+-		return 0;
+-
+-	fmt_source = mipi_csis_get_format(state, sd_state, which,
++	fmt_source = mipi_csis_get_format(csis, sd_state, which,
+ 					  CSIS_PAD_SOURCE);
+ 	*fmt_source = *fmt_sink;
+ 
+@@ -1012,15 +1005,15 @@ static int mipi_csis_get_fmt(struct v4l2_subdev *sd,
+ 			     struct v4l2_subdev_state *sd_state,
+ 			     struct v4l2_subdev_format *sdformat)
+ {
+-	struct csi_state *state = mipi_sd_to_csis_state(sd);
++	struct mipi_csis_device *csis = sd_to_mipi_csis_device(sd);
+ 	struct v4l2_mbus_framefmt *fmt;
+ 
+-	fmt = mipi_csis_get_format(state, sd_state, sdformat->which,
++	fmt = mipi_csis_get_format(csis, sd_state, sdformat->which,
+ 				   sdformat->pad);
+ 
+-	mutex_lock(&state->lock);
++	mutex_lock(&csis->lock);
+ 	sdformat->format = *fmt;
+-	mutex_unlock(&state->lock);
++	mutex_unlock(&csis->lock);
+ 
+ 	return 0;
+ }
+@@ -1029,7 +1022,7 @@ static int mipi_csis_enum_mbus_code(struct v4l2_subdev *sd,
+ 				    struct v4l2_subdev_state *sd_state,
+ 				    struct v4l2_subdev_mbus_code_enum *code)
+ {
+-	struct csi_state *state = mipi_sd_to_csis_state(sd);
++	struct mipi_csis_device *csis = sd_to_mipi_csis_device(sd);
+ 
+ 	/*
+ 	 * The CSIS can't transcode in any way, the source format is identical
+@@ -1041,7 +1034,7 @@ static int mipi_csis_enum_mbus_code(struct v4l2_subdev *sd,
+ 		if (code->index > 0)
+ 			return -EINVAL;
+ 
+-		fmt = mipi_csis_get_format(state, sd_state, code->which,
++		fmt = mipi_csis_get_format(csis, sd_state, code->which,
+ 					   code->pad);
+ 		code->code = fmt->code;
+ 		return 0;
+@@ -1062,7 +1055,7 @@ static int mipi_csis_set_fmt(struct v4l2_subdev *sd,
+ 			     struct v4l2_subdev_state *sd_state,
+ 			     struct v4l2_subdev_format *sdformat)
+ {
+-	struct csi_state *state = mipi_sd_to_csis_state(sd);
++	struct mipi_csis_device *csis = sd_to_mipi_csis_device(sd);
+ 	struct csis_pix_format const *csis_fmt;
+ 	struct v4l2_mbus_framefmt *fmt;
+ 	unsigned int align;
+@@ -1110,10 +1103,10 @@ static int mipi_csis_set_fmt(struct v4l2_subdev *sd,
+ 			      &sdformat->format.height, 1,
+ 			      CSIS_MAX_PIX_HEIGHT, 0, 0);
+ 
+-	fmt = mipi_csis_get_format(state, sd_state, sdformat->which,
++	fmt = mipi_csis_get_format(csis, sd_state, sdformat->which,
+ 				   sdformat->pad);
+ 
+-	mutex_lock(&state->lock);
++	mutex_lock(&csis->lock);
+ 
+ 	fmt->code = csis_fmt->code;
+ 	fmt->width = sdformat->format.width;
+@@ -1126,7 +1119,7 @@ static int mipi_csis_set_fmt(struct v4l2_subdev *sd,
+ 	sdformat->format = *fmt;
+ 
+ 	/* Propagate the format from sink to source. */
+-	fmt = mipi_csis_get_format(state, sd_state, sdformat->which,
++	fmt = mipi_csis_get_format(csis, sd_state, sdformat->which,
+ 				   CSIS_PAD_SOURCE);
+ 	*fmt = sdformat->format;
+ 
+@@ -1135,22 +1128,22 @@ static int mipi_csis_set_fmt(struct v4l2_subdev *sd,
+ 
+ 	/* Store the CSIS format descriptor for active formats. */
+ 	if (sdformat->which == V4L2_SUBDEV_FORMAT_ACTIVE)
+-		state->csis_fmt = csis_fmt;
++		csis->csis_fmt = csis_fmt;
+ 
+-	mutex_unlock(&state->lock);
++	mutex_unlock(&csis->lock);
+ 
+ 	return 0;
+ }
+ 
+ static int mipi_csis_log_status(struct v4l2_subdev *sd)
+ {
+-	struct csi_state *state = mipi_sd_to_csis_state(sd);
++	struct mipi_csis_device *csis = sd_to_mipi_csis_device(sd);
+ 
+-	mutex_lock(&state->lock);
+-	mipi_csis_log_counters(state, true);
+-	if (state->debug.enable && (state->state & ST_POWERED))
+-		mipi_csis_dump_regs(state);
+-	mutex_unlock(&state->lock);
++	mutex_lock(&csis->lock);
++	mipi_csis_log_counters(csis, true);
++	if (csis->debug.enable && (csis->state & ST_POWERED))
++		mipi_csis_dump_regs(csis);
++	mutex_unlock(&csis->lock);
+ 
+ 	return 0;
+ }
+@@ -1185,10 +1178,10 @@ static int mipi_csis_link_setup(struct media_entity *entity,
+ 				const struct media_pad *remote_pad, u32 flags)
+ {
+ 	struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(entity);
+-	struct csi_state *state = mipi_sd_to_csis_state(sd);
++	struct mipi_csis_device *csis = sd_to_mipi_csis_device(sd);
+ 	struct v4l2_subdev *remote_sd;
+ 
+-	dev_dbg(state->dev, "link setup %s -> %s", remote_pad->entity->name,
++	dev_dbg(csis->dev, "link setup %s -> %s", remote_pad->entity->name,
+ 		local_pad->entity->name);
+ 
+ 	/* We only care about the link to the source. */
+@@ -1198,12 +1191,12 @@ static int mipi_csis_link_setup(struct media_entity *entity,
+ 	remote_sd = media_entity_to_v4l2_subdev(remote_pad->entity);
+ 
+ 	if (flags & MEDIA_LNK_FL_ENABLED) {
+-		if (state->src_sd)
++		if (csis->src_sd)
+ 			return -EBUSY;
+ 
+-		state->src_sd = remote_sd;
++		csis->src_sd = remote_sd;
+ 	} else {
+-		state->src_sd = NULL;
++		csis->src_sd = NULL;
+ 	}
+ 
+ 	return 0;
+@@ -1219,18 +1212,18 @@ static const struct media_entity_operations mipi_csis_entity_ops = {
+  * Async subdev notifier
+  */
+ 
+-static struct csi_state *
++static struct mipi_csis_device *
+ mipi_notifier_to_csis_state(struct v4l2_async_notifier *n)
+ {
+-	return container_of(n, struct csi_state, notifier);
++	return container_of(n, struct mipi_csis_device, notifier);
+ }
+ 
+ static int mipi_csis_notify_bound(struct v4l2_async_notifier *notifier,
+ 				  struct v4l2_subdev *sd,
+ 				  struct v4l2_async_subdev *asd)
+ {
+-	struct csi_state *state = mipi_notifier_to_csis_state(notifier);
+-	struct media_pad *sink = &state->sd.entity.pads[CSIS_PAD_SINK];
++	struct mipi_csis_device *csis = mipi_notifier_to_csis_state(notifier);
++	struct media_pad *sink = &csis->sd.entity.pads[CSIS_PAD_SINK];
+ 
+ 	return v4l2_create_fwnode_links_to_pad(sd, sink, 0);
+ }
+@@ -1239,7 +1232,7 @@ static const struct v4l2_async_notifier_operations mipi_csis_notify_ops = {
+ 	.bound = mipi_csis_notify_bound,
+ };
+ 
+-static int mipi_csis_async_register(struct csi_state *state)
++static int mipi_csis_async_register(struct mipi_csis_device *csis)
+ {
+ 	struct v4l2_fwnode_endpoint vep = {
+ 		.bus_type = V4L2_MBUS_CSI2_DPHY,
+@@ -1249,9 +1242,9 @@ static int mipi_csis_async_register(struct csi_state *state)
+ 	unsigned int i;
+ 	int ret;
+ 
+-	v4l2_async_nf_init(&state->notifier);
++	v4l2_async_nf_init(&csis->notifier);
+ 
+-	ep = fwnode_graph_get_endpoint_by_id(dev_fwnode(state->dev), 0, 0,
++	ep = fwnode_graph_get_endpoint_by_id(dev_fwnode(csis->dev), 0, 0,
+ 					     FWNODE_GRAPH_ENDPOINT_NEXT);
+ 	if (!ep)
+ 		return -ENOTCONN;
+@@ -1262,19 +1255,19 @@ static int mipi_csis_async_register(struct csi_state *state)
+ 
+ 	for (i = 0; i < vep.bus.mipi_csi2.num_data_lanes; ++i) {
+ 		if (vep.bus.mipi_csi2.data_lanes[i] != i + 1) {
+-			dev_err(state->dev,
++			dev_err(csis->dev,
+ 				"data lanes reordering is not supported");
+ 			ret = -EINVAL;
+ 			goto err_parse;
+ 		}
+ 	}
+ 
+-	state->bus = vep.bus.mipi_csi2;
++	csis->bus = vep.bus.mipi_csi2;
+ 
+-	dev_dbg(state->dev, "data lanes: %d\n", state->bus.num_data_lanes);
+-	dev_dbg(state->dev, "flags: 0x%08x\n", state->bus.flags);
++	dev_dbg(csis->dev, "data lanes: %d\n", csis->bus.num_data_lanes);
++	dev_dbg(csis->dev, "flags: 0x%08x\n", csis->bus.flags);
+ 
+-	asd = v4l2_async_nf_add_fwnode_remote(&state->notifier, ep,
++	asd = v4l2_async_nf_add_fwnode_remote(&csis->notifier, ep,
+ 					      struct v4l2_async_subdev);
+ 	if (IS_ERR(asd)) {
+ 		ret = PTR_ERR(asd);
+@@ -1283,13 +1276,13 @@ static int mipi_csis_async_register(struct csi_state *state)
+ 
+ 	fwnode_handle_put(ep);
+ 
+-	state->notifier.ops = &mipi_csis_notify_ops;
++	csis->notifier.ops = &mipi_csis_notify_ops;
+ 
+-	ret = v4l2_async_subdev_nf_register(&state->sd, &state->notifier);
++	ret = v4l2_async_subdev_nf_register(&csis->sd, &csis->notifier);
+ 	if (ret)
+ 		return ret;
+ 
+-	return v4l2_async_register_subdev(&state->sd);
++	return v4l2_async_register_subdev(&csis->sd);
+ 
+ err_parse:
+ 	fwnode_handle_put(ep);
+@@ -1304,23 +1297,23 @@ err_parse:
+ static int mipi_csis_pm_suspend(struct device *dev, bool runtime)
+ {
+ 	struct v4l2_subdev *sd = dev_get_drvdata(dev);
+-	struct csi_state *state = mipi_sd_to_csis_state(sd);
++	struct mipi_csis_device *csis = sd_to_mipi_csis_device(sd);
+ 	int ret = 0;
+ 
+-	mutex_lock(&state->lock);
+-	if (state->state & ST_POWERED) {
+-		mipi_csis_stop_stream(state);
+-		ret = mipi_csis_phy_disable(state);
++	mutex_lock(&csis->lock);
++	if (csis->state & ST_POWERED) {
++		mipi_csis_stop_stream(csis);
++		ret = mipi_csis_phy_disable(csis);
+ 		if (ret)
+ 			goto unlock;
+-		mipi_csis_clk_disable(state);
+-		state->state &= ~ST_POWERED;
++		mipi_csis_clk_disable(csis);
++		csis->state &= ~ST_POWERED;
+ 		if (!runtime)
+-			state->state |= ST_SUSPENDED;
++			csis->state |= ST_SUSPENDED;
+ 	}
+ 
+ unlock:
+-	mutex_unlock(&state->lock);
++	mutex_unlock(&csis->lock);
+ 
+ 	return ret ? -EAGAIN : 0;
+ }
+@@ -1328,28 +1321,28 @@ unlock:
+ static int mipi_csis_pm_resume(struct device *dev, bool runtime)
+ {
+ 	struct v4l2_subdev *sd = dev_get_drvdata(dev);
+-	struct csi_state *state = mipi_sd_to_csis_state(sd);
++	struct mipi_csis_device *csis = sd_to_mipi_csis_device(sd);
+ 	int ret = 0;
+ 
+-	mutex_lock(&state->lock);
+-	if (!runtime && !(state->state & ST_SUSPENDED))
++	mutex_lock(&csis->lock);
++	if (!runtime && !(csis->state & ST_SUSPENDED))
+ 		goto unlock;
+ 
+-	if (!(state->state & ST_POWERED)) {
+-		ret = mipi_csis_phy_enable(state);
++	if (!(csis->state & ST_POWERED)) {
++		ret = mipi_csis_phy_enable(csis);
+ 		if (ret)
+ 			goto unlock;
+ 
+-		state->state |= ST_POWERED;
+-		mipi_csis_clk_enable(state);
++		csis->state |= ST_POWERED;
++		mipi_csis_clk_enable(csis);
+ 	}
+-	if (state->state & ST_STREAMING)
+-		mipi_csis_start_stream(state);
++	if (csis->state & ST_STREAMING)
++		mipi_csis_start_stream(csis);
+ 
+-	state->state &= ~ST_SUSPENDED;
++	csis->state &= ~ST_SUSPENDED;
+ 
+ unlock:
+-	mutex_unlock(&state->lock);
++	mutex_unlock(&csis->lock);
+ 
+ 	return ret ? -EAGAIN : 0;
+ }
+@@ -1384,14 +1377,14 @@ static const struct dev_pm_ops mipi_csis_pm_ops = {
+  * Probe/remove & platform driver
+  */
+ 
+-static int mipi_csis_subdev_init(struct csi_state *state)
++static int mipi_csis_subdev_init(struct mipi_csis_device *csis)
+ {
+-	struct v4l2_subdev *sd = &state->sd;
++	struct v4l2_subdev *sd = &csis->sd;
+ 
+ 	v4l2_subdev_init(sd, &mipi_csis_subdev_ops);
+ 	sd->owner = THIS_MODULE;
+ 	snprintf(sd->name, sizeof(sd->name), "csis-%s",
+-		 dev_name(state->dev));
++		 dev_name(csis->dev));
+ 
+ 	sd->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
+ 	sd->ctrl_handler = NULL;
+@@ -1399,26 +1392,26 @@ static int mipi_csis_subdev_init(struct csi_state *state)
+ 	sd->entity.function = MEDIA_ENT_F_VID_IF_BRIDGE;
+ 	sd->entity.ops = &mipi_csis_entity_ops;
+ 
+-	sd->dev = state->dev;
++	sd->dev = csis->dev;
+ 
+-	state->csis_fmt = &mipi_csis_formats[0];
++	csis->csis_fmt = &mipi_csis_formats[0];
+ 	mipi_csis_init_cfg(sd, NULL);
+ 
+-	state->pads[CSIS_PAD_SINK].flags = MEDIA_PAD_FL_SINK
++	csis->pads[CSIS_PAD_SINK].flags = MEDIA_PAD_FL_SINK
+ 					 | MEDIA_PAD_FL_MUST_CONNECT;
+-	state->pads[CSIS_PAD_SOURCE].flags = MEDIA_PAD_FL_SOURCE
++	csis->pads[CSIS_PAD_SOURCE].flags = MEDIA_PAD_FL_SOURCE
+ 					   | MEDIA_PAD_FL_MUST_CONNECT;
+ 	return media_entity_pads_init(&sd->entity, CSIS_PADS_NUM,
+-				      state->pads);
++				      csis->pads);
+ }
+ 
+-static int mipi_csis_parse_dt(struct csi_state *state)
++static int mipi_csis_parse_dt(struct mipi_csis_device *csis)
+ {
+-	struct device_node *node = state->dev->of_node;
++	struct device_node *node = csis->dev->of_node;
+ 
+ 	if (of_property_read_u32(node, "clock-frequency",
+-				 &state->clk_frequency))
+-		state->clk_frequency = DEFAULT_SCLK_CSIS_FREQ;
++				 &csis->clk_frequency))
++		csis->clk_frequency = DEFAULT_SCLK_CSIS_FREQ;
+ 
+ 	return 0;
+ }
+@@ -1426,78 +1419,78 @@ static int mipi_csis_parse_dt(struct csi_state *state)
+ static int mipi_csis_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
+-	struct csi_state *state;
++	struct mipi_csis_device *csis;
+ 	int irq;
+ 	int ret;
+ 
+-	state = devm_kzalloc(dev, sizeof(*state), GFP_KERNEL);
+-	if (!state)
++	csis = devm_kzalloc(dev, sizeof(*csis), GFP_KERNEL);
++	if (!csis)
+ 		return -ENOMEM;
+ 
+-	mutex_init(&state->lock);
+-	spin_lock_init(&state->slock);
++	mutex_init(&csis->lock);
++	spin_lock_init(&csis->slock);
+ 
+-	state->dev = dev;
+-	state->info = of_device_get_match_data(dev);
++	csis->dev = dev;
++	csis->info = of_device_get_match_data(dev);
+ 
+-	memcpy(state->events, mipi_csis_events, sizeof(state->events));
++	memcpy(csis->events, mipi_csis_events, sizeof(csis->events));
+ 
+ 	/* Parse DT properties. */
+-	ret = mipi_csis_parse_dt(state);
++	ret = mipi_csis_parse_dt(csis);
+ 	if (ret < 0) {
+ 		dev_err(dev, "Failed to parse device tree: %d\n", ret);
+ 		return ret;
+ 	}
+ 
+ 	/* Acquire resources. */
+-	state->regs = devm_platform_ioremap_resource(pdev, 0);
+-	if (IS_ERR(state->regs))
+-		return PTR_ERR(state->regs);
++	csis->regs = devm_platform_ioremap_resource(pdev, 0);
++	if (IS_ERR(csis->regs))
++		return PTR_ERR(csis->regs);
+ 
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0)
+ 		return irq;
+ 
+-	ret = mipi_csis_phy_init(state);
++	ret = mipi_csis_phy_init(csis);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = mipi_csis_clk_get(state);
++	ret = mipi_csis_clk_get(csis);
+ 	if (ret < 0)
+ 		return ret;
+ 
+ 	/* Reset PHY and enable the clocks. */
+-	mipi_csis_phy_reset(state);
++	mipi_csis_phy_reset(csis);
+ 
+-	ret = mipi_csis_clk_enable(state);
++	ret = mipi_csis_clk_enable(csis);
+ 	if (ret < 0) {
+-		dev_err(state->dev, "failed to enable clocks: %d\n", ret);
++		dev_err(csis->dev, "failed to enable clocks: %d\n", ret);
+ 		return ret;
+ 	}
+ 
+ 	/* Now that the hardware is initialized, request the interrupt. */
+ 	ret = devm_request_irq(dev, irq, mipi_csis_irq_handler, 0,
+-			       dev_name(dev), state);
++			       dev_name(dev), csis);
+ 	if (ret) {
+ 		dev_err(dev, "Interrupt request failed\n");
+ 		goto disable_clock;
+ 	}
+ 
+ 	/* Initialize and register the subdev. */
+-	ret = mipi_csis_subdev_init(state);
++	ret = mipi_csis_subdev_init(csis);
+ 	if (ret < 0)
+ 		goto disable_clock;
+ 
+-	platform_set_drvdata(pdev, &state->sd);
++	platform_set_drvdata(pdev, &csis->sd);
+ 
+-	ret = mipi_csis_async_register(state);
++	ret = mipi_csis_async_register(csis);
+ 	if (ret < 0) {
+ 		dev_err(dev, "async register failed: %d\n", ret);
+ 		goto cleanup;
+ 	}
+ 
+ 	/* Initialize debugfs. */
+-	mipi_csis_debugfs_init(state);
++	mipi_csis_debugfs_init(csis);
+ 
+ 	/* Enable runtime PM. */
+ 	pm_runtime_enable(dev);
+@@ -1508,20 +1501,20 @@ static int mipi_csis_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	dev_info(dev, "lanes: %d, freq: %u\n",
+-		 state->bus.num_data_lanes, state->clk_frequency);
++		 csis->bus.num_data_lanes, csis->clk_frequency);
+ 
+ 	return 0;
+ 
+ unregister_all:
+-	mipi_csis_debugfs_exit(state);
++	mipi_csis_debugfs_exit(csis);
+ cleanup:
+-	media_entity_cleanup(&state->sd.entity);
+-	v4l2_async_nf_unregister(&state->notifier);
+-	v4l2_async_nf_cleanup(&state->notifier);
+-	v4l2_async_unregister_subdev(&state->sd);
++	media_entity_cleanup(&csis->sd.entity);
++	v4l2_async_nf_unregister(&csis->notifier);
++	v4l2_async_nf_cleanup(&csis->notifier);
++	v4l2_async_unregister_subdev(&csis->sd);
+ disable_clock:
+-	mipi_csis_clk_disable(state);
+-	mutex_destroy(&state->lock);
++	mipi_csis_clk_disable(csis);
++	mutex_destroy(&csis->lock);
+ 
+ 	return ret;
+ }
+@@ -1529,18 +1522,18 @@ disable_clock:
+ static int mipi_csis_remove(struct platform_device *pdev)
+ {
+ 	struct v4l2_subdev *sd = platform_get_drvdata(pdev);
+-	struct csi_state *state = mipi_sd_to_csis_state(sd);
++	struct mipi_csis_device *csis = sd_to_mipi_csis_device(sd);
+ 
+-	mipi_csis_debugfs_exit(state);
+-	v4l2_async_nf_unregister(&state->notifier);
+-	v4l2_async_nf_cleanup(&state->notifier);
+-	v4l2_async_unregister_subdev(&state->sd);
++	mipi_csis_debugfs_exit(csis);
++	v4l2_async_nf_unregister(&csis->notifier);
++	v4l2_async_nf_cleanup(&csis->notifier);
++	v4l2_async_unregister_subdev(&csis->sd);
+ 
+ 	pm_runtime_disable(&pdev->dev);
+ 	mipi_csis_pm_suspend(&pdev->dev, true);
+-	mipi_csis_clk_disable(state);
+-	media_entity_cleanup(&state->sd.entity);
+-	mutex_destroy(&state->lock);
++	mipi_csis_clk_disable(csis);
++	media_entity_cleanup(&csis->sd.entity);
++	mutex_destroy(&csis->lock);
+ 	pm_runtime_set_suspended(&pdev->dev);
+ 
+ 	return 0;
+diff --git a/drivers/media/platform/qcom/venus/helpers.c b/drivers/media/platform/qcom/venus/helpers.c
+index 0bca95d016507..fa01edd54c038 100644
+--- a/drivers/media/platform/qcom/venus/helpers.c
++++ b/drivers/media/platform/qcom/venus/helpers.c
+@@ -90,12 +90,28 @@ bool venus_helper_check_codec(struct venus_inst *inst, u32 v4l2_pixfmt)
+ }
+ EXPORT_SYMBOL_GPL(venus_helper_check_codec);
+ 
++static void free_dpb_buf(struct venus_inst *inst, struct intbuf *buf)
++{
++	ida_free(&inst->dpb_ids, buf->dpb_out_tag);
++
++	list_del_init(&buf->list);
++	dma_free_attrs(inst->core->dev, buf->size, buf->va, buf->da,
++		       buf->attrs);
++	kfree(buf);
++}
++
+ int venus_helper_queue_dpb_bufs(struct venus_inst *inst)
+ {
+-	struct intbuf *buf;
++	struct intbuf *buf, *next;
++	unsigned int dpb_size = 0;
+ 	int ret = 0;
+ 
+-	list_for_each_entry(buf, &inst->dpbbufs, list) {
++	if (inst->dpb_buftype == HFI_BUFFER_OUTPUT)
++		dpb_size = inst->output_buf_size;
++	else if (inst->dpb_buftype == HFI_BUFFER_OUTPUT2)
++		dpb_size = inst->output2_buf_size;
++
++	list_for_each_entry_safe(buf, next, &inst->dpbbufs, list) {
+ 		struct hfi_frame_data fdata;
+ 
+ 		memset(&fdata, 0, sizeof(fdata));
+@@ -106,6 +122,12 @@ int venus_helper_queue_dpb_bufs(struct venus_inst *inst)
+ 		if (buf->owned_by == FIRMWARE)
+ 			continue;
+ 
++		/* free a buffer from a previous sequence that was released later */
++		if (dpb_size > buf->size) {
++			free_dpb_buf(inst, buf);
++			continue;
++		}
++
+ 		fdata.clnt_data = buf->dpb_out_tag;
+ 
+ 		ret = hfi_session_process_buf(inst, &fdata);
+@@ -127,13 +149,7 @@ int venus_helper_free_dpb_bufs(struct venus_inst *inst)
+ 	list_for_each_entry_safe(buf, n, &inst->dpbbufs, list) {
+ 		if (buf->owned_by == FIRMWARE)
+ 			continue;
+-
+-		ida_free(&inst->dpb_ids, buf->dpb_out_tag);
+-
+-		list_del_init(&buf->list);
+-		dma_free_attrs(inst->core->dev, buf->size, buf->va, buf->da,
+-			       buf->attrs);
+-		kfree(buf);
++		free_dpb_buf(inst, buf);
+ 	}
+ 
+ 	if (list_empty(&inst->dpbbufs))
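
[The venus hunk above is the standard delete-during-traversal idiom: once an
entry may be unlinked and freed inside the loop body, the plain iterator is
unsafe because it would compute the next node from memory that was just
freed. A minimal sketch of the pattern; struct item and prune_undersized()
are illustrative names, not driver symbols:

	#include <linux/list.h>
	#include <linux/slab.h>

	struct item {
		struct list_head list;
		size_t size;
	};

	static void prune_undersized(struct list_head *head, size_t min_size)
	{
		struct item *cur, *next;

		/* _safe keeps a lookahead cursor, so cur may be freed */
		list_for_each_entry_safe(cur, next, head, list) {
			if (cur->size >= min_size)
				continue;
			list_del_init(&cur->list);
			kfree(cur);
		}
	}
]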
+diff --git a/drivers/media/platform/qcom/venus/hfi.c b/drivers/media/platform/qcom/venus/hfi.c
+index 4e2151fb47f08..1968f09ad177a 100644
+--- a/drivers/media/platform/qcom/venus/hfi.c
++++ b/drivers/media/platform/qcom/venus/hfi.c
+@@ -104,6 +104,9 @@ int hfi_core_deinit(struct venus_core *core, bool blocking)
+ 		mutex_lock(&core->lock);
+ 	}
+ 
++	if (!core->ops)
++		goto unlock;
++
+ 	ret = core->ops->core_deinit(core);
+ 
+ 	if (!ret)
+diff --git a/drivers/media/platform/renesas/vsp1/vsp1_rpf.c b/drivers/media/platform/renesas/vsp1/vsp1_rpf.c
+index 85587c1b6a373..75083cb234fe3 100644
+--- a/drivers/media/platform/renesas/vsp1/vsp1_rpf.c
++++ b/drivers/media/platform/renesas/vsp1/vsp1_rpf.c
+@@ -291,11 +291,11 @@ static void rpf_configure_partition(struct vsp1_entity *entity,
+ 		     + crop.left * fmtinfo->bpp[0] / 8;
+ 
+ 	if (format->num_planes > 1) {
++		unsigned int bpl = format->plane_fmt[1].bytesperline;
+ 		unsigned int offset;
+ 
+-		offset = crop.top * format->plane_fmt[1].bytesperline
+-		       + crop.left / fmtinfo->hsub
+-		       * fmtinfo->bpp[1] / 8;
++		offset = crop.top / fmtinfo->vsub * bpl
++		       + crop.left / fmtinfo->hsub * fmtinfo->bpp[1] / 8;
+ 		mem.addr[1] += offset;
+ 		mem.addr[2] += offset;
+ 	}
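
[The vsp1 fix adds the missing vertical-subsampling divide when computing
the chroma-plane crop offset. Worked numbers for an NV12-like two-plane
layout -- hsub = vsub = 2, bpp[1] = 16, bytesperline = 640, crop at
top = 100, left = 64; the values are illustrative, not from the driver:

	old: 100 * 640     + 64 / 2 * 16 / 8 = 64064  (50 chroma rows too far)
	new: 100 / 2 * 640 + 64 / 2 * 16 / 8 = 32064
]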
+diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
+index 3d3d1062e2122..2f8df74ad0fde 100644
+--- a/drivers/media/platform/rockchip/rga/rga.c
++++ b/drivers/media/platform/rockchip/rga/rga.c
+@@ -865,7 +865,7 @@ static int rga_probe(struct platform_device *pdev)
+ 
+ 	ret = pm_runtime_resume_and_get(rga->dev);
+ 	if (ret < 0)
+-		goto rel_vdev;
++		goto rel_m2m;
+ 
+ 	rga->version.major = (rga_read(rga, RGA_VERSION_INFO) >> 24) & 0xFF;
+ 	rga->version.minor = (rga_read(rga, RGA_VERSION_INFO) >> 20) & 0x0F;
+@@ -881,7 +881,7 @@ static int rga_probe(struct platform_device *pdev)
+ 					   DMA_ATTR_WRITE_COMBINE);
+ 	if (!rga->cmdbuf_virt) {
+ 		ret = -ENOMEM;
+-		goto rel_vdev;
++		goto rel_m2m;
+ 	}
+ 
+ 	rga->src_mmu_pages =
+@@ -918,6 +918,8 @@ free_src_pages:
+ free_dma:
+ 	dma_free_attrs(rga->dev, RGA_CMDBUF_SIZE, rga->cmdbuf_virt,
+ 		       rga->cmdbuf_phy, DMA_ATTR_WRITE_COMBINE);
++rel_m2m:
++	v4l2_m2m_release(rga->m2m_dev);
+ rel_vdev:
+ 	video_device_release(vfd);
+ unreg_v4l2_dev:
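
[The two rga changes reroute failures that occur after v4l2_m2m_init() so
the m2m device is actually released. The unwind discipline, reduced to a
sketch -- acquire_*/release_* are placeholders, not driver calls: each goto
target releases exactly what was held when the failure happened, in reverse
order of acquisition.

	int acquire_a(void), acquire_b(void), acquire_c(void);
	void release_a(void), release_b(void);

	int demo_probe(void)
	{
		int ret;

		ret = acquire_a();
		if (ret)
			return ret;	/* nothing held yet */

		ret = acquire_b();
		if (ret)
			goto rel_a;	/* only a is held */

		ret = acquire_c();
		if (ret)
			goto rel_b;	/* a and b are held */

		return 0;

	rel_b:
		release_b();
	rel_a:
		release_a();
		return ret;
	}
]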
+diff --git a/drivers/media/platform/samsung/exynos4-is/fimc-is.c b/drivers/media/platform/samsung/exynos4-is/fimc-is.c
+index e55e411038f48..e3072d69c49fa 100644
+--- a/drivers/media/platform/samsung/exynos4-is/fimc-is.c
++++ b/drivers/media/platform/samsung/exynos4-is/fimc-is.c
+@@ -140,7 +140,7 @@ static int fimc_is_enable_clocks(struct fimc_is *is)
+ 			dev_err(&is->pdev->dev, "clock %s enable failed\n",
+ 				fimc_is_clocks[i]);
+ 			for (--i; i >= 0; i--)
+-				clk_disable(is->clocks[i]);
++				clk_disable_unprepare(is->clocks[i]);
+ 			return ret;
+ 		}
+ 		pr_debug("enabled clock: %s\n", fimc_is_clocks[i]);
+@@ -830,7 +830,7 @@ static int fimc_is_probe(struct platform_device *pdev)
+ 
+ 	ret = pm_runtime_resume_and_get(dev);
+ 	if (ret < 0)
+-		goto err_irq;
++		goto err_pm_disable;
+ 
+ 	vb2_dma_contig_set_max_seg_size(dev, DMA_BIT_MASK(32));
+ 
+@@ -864,6 +864,8 @@ err_pm:
+ 	pm_runtime_put_noidle(dev);
+ 	if (!pm_runtime_enabled(dev))
+ 		fimc_is_runtime_suspend(dev);
++err_pm_disable:
++	pm_runtime_disable(dev);
+ err_irq:
+ 	free_irq(is->irq, is);
+ err_clk:
+diff --git a/drivers/media/platform/samsung/exynos4-is/fimc-isp-video.h b/drivers/media/platform/samsung/exynos4-is/fimc-isp-video.h
+index edcb3a5e3cb90..2dd4ddbc748a1 100644
+--- a/drivers/media/platform/samsung/exynos4-is/fimc-isp-video.h
++++ b/drivers/media/platform/samsung/exynos4-is/fimc-isp-video.h
+@@ -32,7 +32,7 @@ static inline int fimc_isp_video_device_register(struct fimc_isp *isp,
+ 	return 0;
+ }
+ 
+-void fimc_isp_video_device_unregister(struct fimc_isp *isp,
++static inline void fimc_isp_video_device_unregister(struct fimc_isp *isp,
+ 				enum v4l2_buf_type type)
+ {
+ }
+diff --git a/drivers/media/platform/st/sti/delta/delta-v4l2.c b/drivers/media/platform/st/sti/delta/delta-v4l2.c
+index c887a31ebb540..420ad4d8df5d5 100644
+--- a/drivers/media/platform/st/sti/delta/delta-v4l2.c
++++ b/drivers/media/platform/st/sti/delta/delta-v4l2.c
+@@ -1859,7 +1859,7 @@ static int delta_probe(struct platform_device *pdev)
+ 	if (ret) {
+ 		dev_err(delta->dev, "%s failed to initialize firmware ipc channel\n",
+ 			DELTA_PREFIX);
+-		goto err;
++		goto err_pm_disable;
+ 	}
+ 
+ 	/* register all available decoders */
+@@ -1873,7 +1873,7 @@ static int delta_probe(struct platform_device *pdev)
+ 	if (ret) {
+ 		dev_err(delta->dev, "%s failed to register V4L2 device\n",
+ 			DELTA_PREFIX);
+-		goto err;
++		goto err_pm_disable;
+ 	}
+ 
+ 	delta->work_queue = create_workqueue(DELTA_NAME);
+@@ -1898,6 +1898,8 @@ err_work_queue:
+ 	destroy_workqueue(delta->work_queue);
+ err_v4l2:
+ 	v4l2_device_unregister(&delta->v4l2_dev);
++err_pm_disable:
++	pm_runtime_disable(dev);
+ err:
+ 	return ret;
+ }
+diff --git a/drivers/media/radio/Kconfig b/drivers/media/radio/Kconfig
+index cca03bd2cc426..616a38feb641e 100644
+--- a/drivers/media/radio/Kconfig
++++ b/drivers/media/radio/Kconfig
+@@ -4,10 +4,10 @@
+ #
+ 
+ menuconfig RADIO_ADAPTERS
+-	bool "Radio Adapters"
++	tristate "Radio Adapters"
+ 	depends on VIDEO_DEV
+ 	depends on MEDIA_RADIO_SUPPORT
+-	default y
++	default VIDEO_DEV
+ 	help
+ 	  Say Y here to enable selecting AM/FM radio adapters.
+ 
+diff --git a/drivers/media/rc/bpf-lirc.c b/drivers/media/rc/bpf-lirc.c
+index 3eff08d7b8e5c..fe17c7f98e810 100644
+--- a/drivers/media/rc/bpf-lirc.c
++++ b/drivers/media/rc/bpf-lirc.c
+@@ -216,8 +216,12 @@ void lirc_bpf_run(struct rc_dev *rcdev, u32 sample)
+ 
+ 	raw->bpf_sample = sample;
+ 
+-	if (raw->progs)
+-		BPF_PROG_RUN_ARRAY(raw->progs, &raw->bpf_sample, bpf_prog_run);
++	if (raw->progs) {
++		rcu_read_lock();
++		bpf_prog_run_array(rcu_dereference(raw->progs),
++				   &raw->bpf_sample, bpf_prog_run);
++		rcu_read_unlock();
++	}
+ }
+ 
+ /*
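
[The bpf-lirc hunk switches to the explicit RCU form: pin a read-side
critical section and load the prog-array pointer with rcu_dereference()
inside it. The general shape of that contract, with illustrative names
(struct cfg, active_cfg, read_active_value() are not kernel symbols):

	#include <linux/rcupdate.h>

	struct cfg { int value; };
	static struct cfg __rcu *active_cfg;

	static int read_active_value(void)
	{
		struct cfg *c;
		int v = 0;

		rcu_read_lock();
		c = rcu_dereference(active_cfg); /* valid only inside the lock */
		if (c)
			v = c->value;
		rcu_read_unlock();	/* c must not be dereferenced past here */
		return v;
	}

Updaters publish with rcu_assign_pointer() and free old objects with
kfree_rcu() or after synchronize_rcu(), which is what makes the read side
this cheap.]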
+diff --git a/drivers/media/rc/imon.c b/drivers/media/rc/imon.c
+index 54da6f60079ba..ab090663f975f 100644
+--- a/drivers/media/rc/imon.c
++++ b/drivers/media/rc/imon.c
+@@ -153,6 +153,24 @@ struct imon_context {
+ 	const struct imon_usb_dev_descr *dev_descr;
+ 					/* device description with key */
+ 					/* table for front panels */
++	/*
++	 * Fields for deferring free_imon_context().
++	 *
++	 * Since a reference to "struct imon_context" is stored in
++	 * "struct file"->private_data, we need to remember
++	 * how many file descriptors might access this "struct imon_context".
++	 */
++	refcount_t users;
++	/*
++	 * Use a flag for telling display_open()/vfd_write()/lcd_write() that
++	 * imon_disconnect() was already called.
++	 */
++	bool disconnected;
++	/*
++	 * We need to wait for RCU grace period in order to allow
++	 * display_open() to safely check ->disconnected and increment ->users.
++	 */
++	struct rcu_head rcu;
+ };
+ 
+ #define TOUCH_TIMEOUT	(HZ/30)
+@@ -160,18 +178,18 @@ struct imon_context {
+ /* vfd character device file operations */
+ static const struct file_operations vfd_fops = {
+ 	.owner		= THIS_MODULE,
+-	.open		= &display_open,
+-	.write		= &vfd_write,
+-	.release	= &display_close,
++	.open		= display_open,
++	.write		= vfd_write,
++	.release	= display_close,
+ 	.llseek		= noop_llseek,
+ };
+ 
+ /* lcd character device file operations */
+ static const struct file_operations lcd_fops = {
+ 	.owner		= THIS_MODULE,
+-	.open		= &display_open,
+-	.write		= &lcd_write,
+-	.release	= &display_close,
++	.open		= display_open,
++	.write		= lcd_write,
++	.release	= display_close,
+ 	.llseek		= noop_llseek,
+ };
+ 
+@@ -439,9 +457,6 @@ static struct usb_driver imon_driver = {
+ 	.id_table	= imon_usb_id_table,
+ };
+ 
+-/* to prevent races between open() and disconnect(), probing, etc */
+-static DEFINE_MUTEX(driver_lock);
+-
+ /* Module bookkeeping bits */
+ MODULE_AUTHOR(MOD_AUTHOR);
+ MODULE_DESCRIPTION(MOD_DESC);
+@@ -481,9 +496,11 @@ static void free_imon_context(struct imon_context *ictx)
+ 	struct device *dev = ictx->dev;
+ 
+ 	usb_free_urb(ictx->tx_urb);
++	WARN_ON(ictx->dev_present_intf0);
+ 	usb_free_urb(ictx->rx_urb_intf0);
++	WARN_ON(ictx->dev_present_intf1);
+ 	usb_free_urb(ictx->rx_urb_intf1);
+-	kfree(ictx);
++	kfree_rcu(ictx, rcu);
+ 
+ 	dev_dbg(dev, "%s: iMON context freed\n", __func__);
+ }
+@@ -499,9 +516,6 @@ static int display_open(struct inode *inode, struct file *file)
+ 	int subminor;
+ 	int retval = 0;
+ 
+-	/* prevent races with disconnect */
+-	mutex_lock(&driver_lock);
+-
+ 	subminor = iminor(inode);
+ 	interface = usb_find_interface(&imon_driver, subminor);
+ 	if (!interface) {
+@@ -509,13 +523,16 @@ static int display_open(struct inode *inode, struct file *file)
+ 		retval = -ENODEV;
+ 		goto exit;
+ 	}
+-	ictx = usb_get_intfdata(interface);
+ 
+-	if (!ictx) {
++	rcu_read_lock();
++	ictx = usb_get_intfdata(interface);
++	if (!ictx || ictx->disconnected || !refcount_inc_not_zero(&ictx->users)) {
++		rcu_read_unlock();
+ 		pr_err("no context found for minor %d\n", subminor);
+ 		retval = -ENODEV;
+ 		goto exit;
+ 	}
++	rcu_read_unlock();
+ 
+ 	mutex_lock(&ictx->lock);
+ 
+@@ -533,8 +550,10 @@ static int display_open(struct inode *inode, struct file *file)
+ 
+ 	mutex_unlock(&ictx->lock);
+ 
++	if (retval && refcount_dec_and_test(&ictx->users))
++		free_imon_context(ictx);
++
+ exit:
+-	mutex_unlock(&driver_lock);
+ 	return retval;
+ }
+ 
+@@ -544,16 +563,9 @@ exit:
+  */
+ static int display_close(struct inode *inode, struct file *file)
+ {
+-	struct imon_context *ictx = NULL;
++	struct imon_context *ictx = file->private_data;
+ 	int retval = 0;
+ 
+-	ictx = file->private_data;
+-
+-	if (!ictx) {
+-		pr_err("no context for device\n");
+-		return -ENODEV;
+-	}
+-
+ 	mutex_lock(&ictx->lock);
+ 
+ 	if (!ictx->display_supported) {
+@@ -568,6 +580,8 @@ static int display_close(struct inode *inode, struct file *file)
+ 	}
+ 
+ 	mutex_unlock(&ictx->lock);
++	if (refcount_dec_and_test(&ictx->users))
++		free_imon_context(ictx);
+ 	return retval;
+ }
+ 
+@@ -934,15 +948,12 @@ static ssize_t vfd_write(struct file *file, const char __user *buf,
+ 	int offset;
+ 	int seq;
+ 	int retval = 0;
+-	struct imon_context *ictx;
++	struct imon_context *ictx = file->private_data;
+ 	static const unsigned char vfd_packet6[] = {
+ 		0x01, 0x00, 0x00, 0x00, 0x00, 0xFF, 0xFF };
+ 
+-	ictx = file->private_data;
+-	if (!ictx) {
+-		pr_err_ratelimited("no context for device\n");
++	if (ictx->disconnected)
+ 		return -ENODEV;
+-	}
+ 
+ 	mutex_lock(&ictx->lock);
+ 
+@@ -1018,13 +1029,10 @@ static ssize_t lcd_write(struct file *file, const char __user *buf,
+ 			 size_t n_bytes, loff_t *pos)
+ {
+ 	int retval = 0;
+-	struct imon_context *ictx;
++	struct imon_context *ictx = file->private_data;
+ 
+-	ictx = file->private_data;
+-	if (!ictx) {
+-		pr_err_ratelimited("no context for device\n");
++	if (ictx->disconnected)
+ 		return -ENODEV;
+-	}
+ 
+ 	mutex_lock(&ictx->lock);
+ 
+@@ -2404,7 +2412,6 @@ static int imon_probe(struct usb_interface *interface,
+ 	int ifnum, sysfs_err;
+ 	int ret = 0;
+ 	struct imon_context *ictx = NULL;
+-	struct imon_context *first_if_ctx = NULL;
+ 	u16 vendor, product;
+ 
+ 	usbdev     = usb_get_dev(interface_to_usbdev(interface));
+@@ -2416,17 +2423,12 @@ static int imon_probe(struct usb_interface *interface,
+ 	dev_dbg(dev, "%s: found iMON device (%04x:%04x, intf%d)\n",
+ 		__func__, vendor, product, ifnum);
+ 
+-	/* prevent races probing devices w/multiple interfaces */
+-	mutex_lock(&driver_lock);
+-
+ 	first_if = usb_ifnum_to_if(usbdev, 0);
+ 	if (!first_if) {
+ 		ret = -ENODEV;
+ 		goto fail;
+ 	}
+ 
+-	first_if_ctx = usb_get_intfdata(first_if);
+-
+ 	if (ifnum == 0) {
+ 		ictx = imon_init_intf0(interface, id);
+ 		if (!ictx) {
+@@ -2434,9 +2436,11 @@ static int imon_probe(struct usb_interface *interface,
+ 			ret = -ENODEV;
+ 			goto fail;
+ 		}
++		refcount_set(&ictx->users, 1);
+ 
+ 	} else {
+ 		/* this is the secondary interface on the device */
++		struct imon_context *first_if_ctx = usb_get_intfdata(first_if);
+ 
+ 		/* fail early if first intf failed to register */
+ 		if (!first_if_ctx) {
+@@ -2450,14 +2454,13 @@ static int imon_probe(struct usb_interface *interface,
+ 			ret = -ENODEV;
+ 			goto fail;
+ 		}
++		refcount_inc(&ictx->users);
+ 
+ 	}
+ 
+ 	usb_set_intfdata(interface, ictx);
+ 
+ 	if (ifnum == 0) {
+-		mutex_lock(&ictx->lock);
+-
+ 		if (product == 0xffdc && ictx->rf_device) {
+ 			sysfs_err = sysfs_create_group(&interface->dev.kobj,
+ 						       &imon_rf_attr_group);
+@@ -2468,21 +2471,17 @@ static int imon_probe(struct usb_interface *interface,
+ 
+ 		if (ictx->display_supported)
+ 			imon_init_display(ictx, interface);
+-
+-		mutex_unlock(&ictx->lock);
+ 	}
+ 
+ 	dev_info(dev, "iMON device (%04x:%04x, intf%d) on usb<%d:%d> initialized\n",
+ 		 vendor, product, ifnum,
+ 		 usbdev->bus->busnum, usbdev->devnum);
+ 
+-	mutex_unlock(&driver_lock);
+ 	usb_put_dev(usbdev);
+ 
+ 	return 0;
+ 
+ fail:
+-	mutex_unlock(&driver_lock);
+ 	usb_put_dev(usbdev);
+ 	dev_err(dev, "unable to register, err %d\n", ret);
+ 
+@@ -2498,10 +2497,8 @@ static void imon_disconnect(struct usb_interface *interface)
+ 	struct device *dev;
+ 	int ifnum;
+ 
+-	/* prevent races with multi-interface device probing and display_open */
+-	mutex_lock(&driver_lock);
+-
+ 	ictx = usb_get_intfdata(interface);
++	ictx->disconnected = true;
+ 	dev = ictx->dev;
+ 	ifnum = interface->cur_altsetting->desc.bInterfaceNumber;
+ 
+@@ -2542,11 +2539,9 @@ static void imon_disconnect(struct usb_interface *interface)
+ 		}
+ 	}
+ 
+-	if (!ictx->dev_present_intf0 && !ictx->dev_present_intf1)
++	if (refcount_dec_and_test(&ictx->users))
+ 		free_imon_context(ictx);
+ 
+-	mutex_unlock(&driver_lock);
+-
+ 	dev_dbg(dev, "%s: iMON device (intf%d) disconnected\n",
+ 		__func__, ifnum);
+ }
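
[The imon rework replaces a global driver mutex with per-context reference
counting plus an RCU-deferred free. The core open/close shape, reduced with
illustrative names (struct ctx, ctx_get()/ctx_put() are not driver symbols):

	#include <linux/refcount.h>
	#include <linux/slab.h>

	struct ctx {
		refcount_t users;	/* probe sets this to 1 */
		bool disconnected;
	};

	static bool ctx_get(struct ctx *c)	/* open() path */
	{
		/* increment only if someone still holds a reference */
		return !c->disconnected && refcount_inc_not_zero(&c->users);
	}

	static void ctx_put(struct ctx *c)	/* close()/disconnect() path */
	{
		if (refcount_dec_and_test(&c->users))
			kfree(c);	/* imon defers this via kfree_rcu() */
	}

disconnect() flips ->disconnected and then drops its own reference;
whichever of disconnect() or the last close() reaches zero does the free,
so no open file descriptor can ever see freed memory.]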
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+index cd7b118d59290..a9666373af6b9 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+@@ -2569,6 +2569,11 @@ struct pvr2_hdw *pvr2_hdw_create(struct usb_interface *intf,
+ 	} while (0);
+ 	mutex_unlock(&pvr2_unit_mtx);
+ 
++	INIT_WORK(&hdw->workpoll, pvr2_hdw_worker_poll);
++
++	if (hdw->unit_number == -1)
++		goto fail;
++
+ 	cnt1 = 0;
+ 	cnt2 = scnprintf(hdw->name+cnt1,sizeof(hdw->name)-cnt1,"pvrusb2");
+ 	cnt1 += cnt2;
+@@ -2580,8 +2585,6 @@ struct pvr2_hdw *pvr2_hdw_create(struct usb_interface *intf,
+ 	if (cnt1 >= sizeof(hdw->name)) cnt1 = sizeof(hdw->name)-1;
+ 	hdw->name[cnt1] = 0;
+ 
+-	INIT_WORK(&hdw->workpoll,pvr2_hdw_worker_poll);
+-
+ 	pvr2_trace(PVR2_TRACE_INIT,"Driver unit number is %d, name is %s",
+ 		   hdw->unit_number,hdw->name);
+ 
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index 711556d13d03a..177181985345c 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -871,29 +871,31 @@ static int uvc_ioctl_enum_input(struct file *file, void *fh,
+ 	struct uvc_video_chain *chain = handle->chain;
+ 	const struct uvc_entity *selector = chain->selector;
+ 	struct uvc_entity *iterm = NULL;
++	struct uvc_entity *it;
+ 	u32 index = input->index;
+-	int pin = 0;
+ 
+ 	if (selector == NULL ||
+ 	    (chain->dev->quirks & UVC_QUIRK_IGNORE_SELECTOR_UNIT)) {
+ 		if (index != 0)
+ 			return -EINVAL;
+-		list_for_each_entry(iterm, &chain->entities, chain) {
+-			if (UVC_ENTITY_IS_ITERM(iterm))
++		list_for_each_entry(it, &chain->entities, chain) {
++			if (UVC_ENTITY_IS_ITERM(it)) {
++				iterm = it;
+ 				break;
++			}
+ 		}
+-		pin = iterm->id;
+ 	} else if (index < selector->bNrInPins) {
+-		pin = selector->baSourceID[index];
+-		list_for_each_entry(iterm, &chain->entities, chain) {
+-			if (!UVC_ENTITY_IS_ITERM(iterm))
++		list_for_each_entry(it, &chain->entities, chain) {
++			if (!UVC_ENTITY_IS_ITERM(it))
+ 				continue;
+-			if (iterm->id == pin)
++			if (it->id == selector->baSourceID[index]) {
++				iterm = it;
+ 				break;
++			}
+ 		}
+ 	}
+ 
+-	if (iterm == NULL || iterm->id != pin)
++	if (iterm == NULL)
+ 		return -EINVAL;
+ 
+ 	memset(input, 0, sizeof(*input));
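
[The uvc hunk -- and the fastrpc hunk further down -- fix the same bug
class: after list_for_each_entry() completes without a break, the cursor
does not become NULL; it points at the head-embedded sentinel, so testing
or dereferencing it is wrong. A found/not-found lookup needs a second
pointer, sketched here with illustrative names:

	#include <linux/list.h>

	struct node { struct list_head list; int id; };

	static struct node *find_node(struct list_head *head, int id)
	{
		struct node *it, *found = NULL;

		list_for_each_entry(it, head, list) {
			if (it->id == id) {
				found = it;	/* record the hit explicitly */
				break;
			}
		}
		return found;	/* NULL on a miss, never the raw cursor */
	}
]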
+diff --git a/drivers/memory/samsung/exynos5422-dmc.c b/drivers/memory/samsung/exynos5422-dmc.c
+index 9c8318923ed0b..4733e7898ffe5 100644
+--- a/drivers/memory/samsung/exynos5422-dmc.c
++++ b/drivers/memory/samsung/exynos5422-dmc.c
+@@ -1322,7 +1322,6 @@ static int exynos5_dmc_init_clks(struct exynos5_dmc *dmc)
+  */
+ static int exynos5_performance_counters_init(struct exynos5_dmc *dmc)
+ {
+-	int counters_size;
+ 	int ret, i;
+ 
+ 	dmc->num_counters = devfreq_event_get_edev_count(dmc->dev,
+@@ -1332,8 +1331,8 @@ static int exynos5_performance_counters_init(struct exynos5_dmc *dmc)
+ 		return dmc->num_counters;
+ 	}
+ 
+-	counters_size = sizeof(struct devfreq_event_dev) * dmc->num_counters;
+-	dmc->counter = devm_kzalloc(dmc->dev, counters_size, GFP_KERNEL);
++	dmc->counter = devm_kcalloc(dmc->dev, dmc->num_counters,
++				    sizeof(*dmc->counter), GFP_KERNEL);
+ 	if (!dmc->counter)
+ 		return -ENOMEM;
+ 
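
[The exynos5422-dmc change also removes a fragile sizing pattern: the old
code multiplied by sizeof(struct devfreq_event_dev) by hand rather than
sizing from what dmc->counter actually points at. Sizing from the
destination pointer keeps the two in sync automatically; a generic sketch
(struct some_counter and alloc_counters() are illustrative):

	#include <linux/device.h>
	#include <linux/slab.h>
	#include <linux/types.h>

	struct some_counter { u64 value; };

	static int alloc_counters(struct device *dev,
				  struct some_counter **p, size_t n)
	{
		/* kcalloc-style: zero-filled, n * sizeof(**p) is
		 * overflow-checked, and sizeof(**p) tracks the real type */
		*p = devm_kcalloc(dev, n, sizeof(**p), GFP_KERNEL);
		return *p ? 0 : -ENOMEM;
	}
]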
+diff --git a/drivers/mfd/davinci_voicecodec.c b/drivers/mfd/davinci_voicecodec.c
+index e5c8bc998eb4e..965820481f1e1 100644
+--- a/drivers/mfd/davinci_voicecodec.c
++++ b/drivers/mfd/davinci_voicecodec.c
+@@ -46,14 +46,12 @@ static int __init davinci_vc_probe(struct platform_device *pdev)
+ 	}
+ 	clk_enable(davinci_vc->clk);
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-
+-	fifo_base = (dma_addr_t)res->start;
+-	davinci_vc->base = devm_ioremap_resource(&pdev->dev, res);
++	davinci_vc->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ 	if (IS_ERR(davinci_vc->base)) {
+ 		ret = PTR_ERR(davinci_vc->base);
+ 		goto fail;
+ 	}
++	fifo_base = (dma_addr_t)res->start;
+ 
+ 	davinci_vc->regmap = devm_regmap_init_mmio(&pdev->dev,
+ 						   davinci_vc->base,
+diff --git a/drivers/mfd/ipaq-micro.c b/drivers/mfd/ipaq-micro.c
+index e92eeeb67a98a..4cd5ecc722112 100644
+--- a/drivers/mfd/ipaq-micro.c
++++ b/drivers/mfd/ipaq-micro.c
+@@ -403,7 +403,7 @@ static int __init micro_probe(struct platform_device *pdev)
+ 	micro_reset_comm(micro);
+ 
+ 	irq = platform_get_irq(pdev, 0);
+-	if (!irq)
++	if (irq < 0)
+ 		return -EINVAL;
+ 	ret = devm_request_irq(&pdev->dev, irq, micro_serial_isr,
+ 			       IRQF_SHARED, "ipaq-micro",
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index d80ada8cac09f..29cf292c0aba6 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -1747,17 +1747,18 @@ err_invoke:
+ static int fastrpc_req_mem_unmap_impl(struct fastrpc_user *fl, struct fastrpc_mem_unmap *req)
+ {
+ 	struct fastrpc_invoke_args args[1] = { [0] = { 0 } };
+-	struct fastrpc_map *map = NULL, *m;
++	struct fastrpc_map *map = NULL, *iter, *m;
+ 	struct fastrpc_mem_unmap_req_msg req_msg = { 0 };
+ 	int err = 0;
+ 	u32 sc;
+ 	struct device *dev = fl->sctx->dev;
+ 
+ 	spin_lock(&fl->lock);
+-	list_for_each_entry_safe(map, m, &fl->maps, node) {
+-		if ((req->fd < 0 || map->fd == req->fd) && (map->raddr == req->vaddr))
++	list_for_each_entry_safe(iter, m, &fl->maps, node) {
++		if ((req->fd < 0 || iter->fd == req->fd) && (iter->raddr == req->vaddr)) {
++			map = iter;
+ 			break;
+-		map = NULL;
++		}
+ 	}
+ 
+ 	spin_unlock(&fl->lock);
+diff --git a/drivers/misc/ocxl/file.c b/drivers/misc/ocxl/file.c
+index d881f5e40ad9e..6777c419a8da2 100644
+--- a/drivers/misc/ocxl/file.c
++++ b/drivers/misc/ocxl/file.c
+@@ -556,7 +556,9 @@ int ocxl_file_register_afu(struct ocxl_afu *afu)
+ 
+ err_unregister:
+ 	ocxl_sysfs_unregister_afu(info); // safe to call even if register failed
++	free_minor(info);
+ 	device_unregister(&info->dev);
++	return rc;
+ err_put:
+ 	ocxl_afu_put(afu);
+ 	free_minor(info);
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 506dc900f5c7c..5235b03c6cffa 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -609,11 +609,11 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ 
+ 	if (idata->rpmb || (cmd.flags & MMC_RSP_R1B) == MMC_RSP_R1B) {
+ 		/*
+-		 * Ensure RPMB/R1B command has completed by polling CMD13
+-		 * "Send Status".
++		 * Ensure RPMB/R1B command has completed by polling CMD13 "Send Status". Here we
++		 * allow overriding the default timeout value if a custom timeout is specified.
+ 		 */
+-		err = mmc_poll_for_busy(card, MMC_BLK_TIMEOUT_MS, false,
+-					MMC_BUSY_IO);
++		err = mmc_poll_for_busy(card, idata->ic.cmd_timeout_ms ? : MMC_BLK_TIMEOUT_MS,
++					false, MMC_BUSY_IO);
+ 	}
+ 
+ 	return err;
+diff --git a/drivers/mmc/host/jz4740_mmc.c b/drivers/mmc/host/jz4740_mmc.c
+index 7ab1b38a7be50..b1d563b2ed1b0 100644
+--- a/drivers/mmc/host/jz4740_mmc.c
++++ b/drivers/mmc/host/jz4740_mmc.c
+@@ -247,6 +247,26 @@ static int jz4740_mmc_acquire_dma_channels(struct jz4740_mmc_host *host)
+ 		return PTR_ERR(host->dma_rx);
+ 	}
+ 
++	/*
++	 * Limit the maximum segment size in any SG entry according to
++	 * the parameters of the DMA engine device.
++	 */
++	if (host->dma_tx) {
++		struct device *dev = host->dma_tx->device->dev;
++		unsigned int max_seg_size = dma_get_max_seg_size(dev);
++
++		if (max_seg_size < host->mmc->max_seg_size)
++			host->mmc->max_seg_size = max_seg_size;
++	}
++
++	if (host->dma_rx) {
++		struct device *dev = host->dma_rx->device->dev;
++		unsigned int max_seg_size = dma_get_max_seg_size(dev);
++
++		if (max_seg_size < host->mmc->max_seg_size)
++			host->mmc->max_seg_size = max_seg_size;
++	}
++
+ 	return 0;
+ }
+ 
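
[The jz4740 change follows a general dmaengine-client rule: a scatterlist
segment handed to the engine must respect the engine device's maximum
segment size, not only the MMC core's default. The clamp, reduced to a
helper (clamp_seg_size() is an illustrative name):

	#include <linux/dma-mapping.h>
	#include <linux/dmaengine.h>
	#include <linux/minmax.h>
	#include <linux/mmc/host.h>

	static void clamp_seg_size(struct mmc_host *mmc, struct dma_chan *chan)
	{
		/* the provider's struct device carries the segment limit */
		unsigned int eng_max = dma_get_max_seg_size(chan->device->dev);

		mmc->max_seg_size = min(mmc->max_seg_size, eng_max);
	}
]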
+diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
+index e54fe24d47e73..e7ced1496a073 100644
+--- a/drivers/mmc/host/sdhci_am654.c
++++ b/drivers/mmc/host/sdhci_am654.c
+@@ -147,6 +147,9 @@ struct sdhci_am654_data {
+ 	int drv_strength;
+ 	int strb_sel;
+ 	u32 flags;
++	u32 quirks;
++
++#define SDHCI_AM654_QUIRK_FORCE_CDTEST BIT(0)
+ };
+ 
+ struct sdhci_am654_driver_data {
+@@ -369,6 +372,21 @@ static void sdhci_am654_write_b(struct sdhci_host *host, u8 val, int reg)
+ 	}
+ }
+ 
++static void sdhci_am654_reset(struct sdhci_host *host, u8 mask)
++{
++	u8 ctrl;
++	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
++	struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host);
++
++	sdhci_reset(host, mask);
++
++	if (sdhci_am654->quirks & SDHCI_AM654_QUIRK_FORCE_CDTEST) {
++		ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
++		ctrl |= SDHCI_CTRL_CDTEST_INS | SDHCI_CTRL_CDTEST_EN;
++		sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
++	}
++}
++
+ static int sdhci_am654_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ {
+ 	struct sdhci_host *host = mmc_priv(mmc);
+@@ -500,7 +518,7 @@ static struct sdhci_ops sdhci_j721e_4bit_ops = {
+ 	.set_clock = sdhci_j721e_4bit_set_clock,
+ 	.write_b = sdhci_am654_write_b,
+ 	.irq = sdhci_am654_cqhci_irq,
+-	.reset = sdhci_reset,
++	.reset = sdhci_am654_reset,
+ };
+ 
+ static const struct sdhci_pltfm_data sdhci_j721e_4bit_pdata = {
+@@ -719,6 +737,9 @@ static int sdhci_am654_get_of_property(struct platform_device *pdev,
+ 	device_property_read_u32(dev, "ti,clkbuf-sel",
+ 				 &sdhci_am654->clkbuf_sel);
+ 
++	if (device_property_read_bool(dev, "ti,fails-without-test-cd"))
++		sdhci_am654->quirks |= SDHCI_AM654_QUIRK_FORCE_CDTEST;
++
+ 	sdhci_get_of_property(pdev);
+ 
+ 	return 0;
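
[The sdhci_am654 addition is the usual two-point quirk plumbing: probe
translates a DT property into a driver-private flag once, and the .reset
hook re-applies the workaround every time -- its placement in the reset
path suggests the bits do not survive a controller reset. Distilled, with
priv/MY_QUIRK_FORCE_CDTEST as placeholders (the SDHCI_* symbols are real
sdhci.h definitions):

	/* probe: record the board's need once */
	if (device_property_read_bool(dev, "ti,fails-without-test-cd"))
		priv->quirks |= MY_QUIRK_FORCE_CDTEST;

	/* reset hook: re-assert after every controller reset */
	if (priv->quirks & MY_QUIRK_FORCE_CDTEST) {
		u8 ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);

		ctrl |= SDHCI_CTRL_CDTEST_INS | SDHCI_CTRL_CDTEST_EN;
		sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
	}
]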
+diff --git a/drivers/mtd/chips/cfi_cmdset_0002.c b/drivers/mtd/chips/cfi_cmdset_0002.c
+index a761134fd3bea..59334530dd46f 100644
+--- a/drivers/mtd/chips/cfi_cmdset_0002.c
++++ b/drivers/mtd/chips/cfi_cmdset_0002.c
+@@ -59,6 +59,10 @@
+ #define CFI_SR_WBASB		BIT(3)
+ #define CFI_SR_SLSB		BIT(1)
+ 
++enum cfi_quirks {
++	CFI_QUIRK_DQ_TRUE_DATA = BIT(0),
++};
++
+ static int cfi_amdstd_read (struct mtd_info *, loff_t, size_t, size_t *, u_char *);
+ static int cfi_amdstd_write_words(struct mtd_info *, loff_t, size_t, size_t *, const u_char *);
+ #if !FORCE_WORD_WRITE
+@@ -436,6 +440,15 @@ static void fixup_s29ns512p_sectors(struct mtd_info *mtd)
+ 		mtd->name);
+ }
+ 
++static void fixup_quirks(struct mtd_info *mtd)
++{
++	struct map_info *map = mtd->priv;
++	struct cfi_private *cfi = map->fldrv_priv;
++
++	if (cfi->mfr == CFI_MFR_AMD && cfi->id == 0x0c01)
++		cfi->quirks |= CFI_QUIRK_DQ_TRUE_DATA;
++}
++
+ /* Used to fix CFI-Tables of chips without Extended Query Tables */
+ static struct cfi_fixup cfi_nopri_fixup_table[] = {
+ 	{ CFI_MFR_SST, 0x234a, fixup_sst39vf }, /* SST39VF1602 */
+@@ -474,6 +487,7 @@ static struct cfi_fixup cfi_fixup_table[] = {
+ #if !FORCE_WORD_WRITE
+ 	{ CFI_MFR_ANY, CFI_ID_ANY, fixup_use_write_buffers },
+ #endif
++	{ CFI_MFR_ANY, CFI_ID_ANY, fixup_quirks },
+ 	{ 0, 0, NULL }
+ };
+ static struct cfi_fixup jedec_fixup_table[] = {
+@@ -802,21 +816,25 @@ static struct mtd_info *cfi_amdstd_setup(struct mtd_info *mtd)
+ }
+ 
+ /*
+- * Return true if the chip is ready.
++ * Return true if the chip is ready and has the correct value.
+  *
+  * Ready is one of: read mode, query mode, erase-suspend-read mode (in any
+  * non-suspended sector) and is indicated by no toggle bits toggling.
+  *
++ * Errors are indicated by toggling bits, or by bits held with the
++ * wrong value.
++ *
+  * Note that anything more complicated than checking if no bits are toggling
+  * (including checking DQ5 for an error status) is tricky to get working
+  * correctly and is therefore not done	(particularly with interleaved chips
+  * as each chip must be checked independently of the others).
+  */
+ static int __xipram chip_ready(struct map_info *map, struct flchip *chip,
+-			       unsigned long addr)
++			       unsigned long addr, map_word *expected)
+ {
+ 	struct cfi_private *cfi = map->fldrv_priv;
+ 	map_word d, t;
++	int ret;
+ 
+ 	if (cfi_use_status_reg(cfi)) {
+ 		map_word ready = CMD(CFI_SR_DRB);
+@@ -826,57 +844,32 @@ static int __xipram chip_ready(struct map_info *map, struct flchip *chip,
+ 		 */
+ 		cfi_send_gen_cmd(0x70, cfi->addr_unlock1, chip->start, map, cfi,
+ 				 cfi->device_type, NULL);
+-		d = map_read(map, addr);
++		t = map_read(map, addr);
+ 
+-		return map_word_andequal(map, d, ready, ready);
++		return map_word_andequal(map, t, ready, ready);
+ 	}
+ 
+ 	d = map_read(map, addr);
+ 	t = map_read(map, addr);
+ 
+-	return map_word_equal(map, d, t);
++	ret = map_word_equal(map, d, t);
++
++	if (!ret || !expected)
++		return ret;
++
++	return map_word_equal(map, t, *expected);
+ }
+ 
+-/*
+- * Return true if the chip is ready and has the correct value.
+- *
+- * Ready is one of: read mode, query mode, erase-suspend-read mode (in any
+- * non-suspended sector) and it is indicated by no bits toggling.
+- *
+- * Error are indicated by toggling bits or bits held with the wrong value,
+- * or with bits toggling.
+- *
+- * Note that anything more complicated than checking if no bits are toggling
+- * (including checking DQ5 for an error status) is tricky to get working
+- * correctly and is therefore not done	(particularly with interleaved chips
+- * as each chip must be checked independently of the others).
+- *
+- */
+ static int __xipram chip_good(struct map_info *map, struct flchip *chip,
+-			      unsigned long addr, map_word expected)
++			      unsigned long addr, map_word *expected)
+ {
+ 	struct cfi_private *cfi = map->fldrv_priv;
+-	map_word oldd, curd;
+-
+-	if (cfi_use_status_reg(cfi)) {
+-		map_word ready = CMD(CFI_SR_DRB);
+-
+-		/*
+-		 * For chips that support status register, check device
+-		 * ready bit
+-		 */
+-		cfi_send_gen_cmd(0x70, cfi->addr_unlock1, chip->start, map, cfi,
+-				 cfi->device_type, NULL);
+-		curd = map_read(map, addr);
+-
+-		return map_word_andequal(map, curd, ready, ready);
+-	}
++	map_word *datum = expected;
+ 
+-	oldd = map_read(map, addr);
+-	curd = map_read(map, addr);
++	if (cfi->quirks & CFI_QUIRK_DQ_TRUE_DATA)
++		datum = NULL;
+ 
+-	return	map_word_equal(map, oldd, curd) &&
+-		map_word_equal(map, curd, expected);
++	return chip_ready(map, chip, addr, datum);
+ }
+ 
+ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr, int mode)
+@@ -893,7 +886,7 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr
+ 
+ 	case FL_STATUS:
+ 		for (;;) {
+-			if (chip_ready(map, chip, adr))
++			if (chip_ready(map, chip, adr, NULL))
+ 				break;
+ 
+ 			if (time_after(jiffies, timeo)) {
+@@ -932,7 +925,7 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr
+ 		chip->state = FL_ERASE_SUSPENDING;
+ 		chip->erase_suspended = 1;
+ 		for (;;) {
+-			if (chip_ready(map, chip, adr))
++			if (chip_ready(map, chip, adr, NULL))
+ 				break;
+ 
+ 			if (time_after(jiffies, timeo)) {
+@@ -1463,7 +1456,7 @@ static int do_otp_lock(struct map_info *map, struct flchip *chip, loff_t adr,
+ 	/* wait for chip to become ready */
+ 	timeo = jiffies + msecs_to_jiffies(2);
+ 	for (;;) {
+-		if (chip_ready(map, chip, adr))
++		if (chip_ready(map, chip, adr, NULL))
+ 			break;
+ 
+ 		if (time_after(jiffies, timeo)) {
+@@ -1699,7 +1692,7 @@ static int __xipram do_write_oneword_once(struct map_info *map,
+ 		 * "chip_good" to avoid the failure due to scheduling.
+ 		 */
+ 		if (time_after(jiffies, timeo) &&
+-		    !chip_good(map, chip, adr, datum)) {
++		    !chip_good(map, chip, adr, &datum)) {
+ 			xip_enable(map, chip, adr);
+ 			printk(KERN_WARNING "MTD %s(): software timeout\n", __func__);
+ 			xip_disable(map, chip, adr);
+@@ -1707,7 +1700,7 @@ static int __xipram do_write_oneword_once(struct map_info *map,
+ 			break;
+ 		}
+ 
+-		if (chip_good(map, chip, adr, datum)) {
++		if (chip_good(map, chip, adr, &datum)) {
+ 			if (cfi_check_err_status(map, chip, adr))
+ 				ret = -EIO;
+ 			break;
+@@ -1979,14 +1972,14 @@ static int __xipram do_write_buffer_wait(struct map_info *map,
+ 		 * "chip_good" to avoid the failure due to scheduling.
+ 		 */
+ 		if (time_after(jiffies, timeo) &&
+-		    !chip_good(map, chip, adr, datum)) {
++		    !chip_good(map, chip, adr, &datum)) {
+ 			pr_err("MTD %s(): software timeout, address:0x%.8lx.\n",
+ 			       __func__, adr);
+ 			ret = -EIO;
+ 			break;
+ 		}
+ 
+-		if (chip_good(map, chip, adr, datum)) {
++		if (chip_good(map, chip, adr, &datum)) {
+ 			if (cfi_check_err_status(map, chip, adr))
+ 				ret = -EIO;
+ 			break;
+@@ -2195,7 +2188,7 @@ static int cfi_amdstd_panic_wait(struct map_info *map, struct flchip *chip,
+ 	 * If the driver thinks the chip is idle, and no toggle bits
+ 	 * are changing, then the chip is actually idle for sure.
+ 	 */
+-	if (chip->state == FL_READY && chip_ready(map, chip, adr))
++	if (chip->state == FL_READY && chip_ready(map, chip, adr, NULL))
+ 		return 0;
+ 
+ 	/*
+@@ -2212,7 +2205,7 @@ static int cfi_amdstd_panic_wait(struct map_info *map, struct flchip *chip,
+ 
+ 		/* wait for the chip to become ready */
+ 		for (i = 0; i < jiffies_to_usecs(timeo); i++) {
+-			if (chip_ready(map, chip, adr))
++			if (chip_ready(map, chip, adr, NULL))
+ 				return 0;
+ 
+ 			udelay(1);
+@@ -2276,13 +2269,13 @@ retry:
+ 	map_write(map, datum, adr);
+ 
+ 	for (i = 0; i < jiffies_to_usecs(uWriteTimeout); i++) {
+-		if (chip_ready(map, chip, adr))
++		if (chip_ready(map, chip, adr, NULL))
+ 			break;
+ 
+ 		udelay(1);
+ 	}
+ 
+-	if (!chip_good(map, chip, adr, datum) ||
++	if (!chip_ready(map, chip, adr, &datum) ||
+ 	    cfi_check_err_status(map, chip, adr)) {
+ 		/* reset on all failures. */
+ 		map_write(map, CMD(0xF0), chip->start);
+@@ -2424,6 +2417,7 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip)
+ 	DECLARE_WAITQUEUE(wait, current);
+ 	int ret;
+ 	int retry_cnt = 0;
++	map_word datum = map_word_ff(map);
+ 
+ 	adr = cfi->addr_unlock1;
+ 
+@@ -2478,7 +2472,7 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip)
+ 			chip->erase_suspended = 0;
+ 		}
+ 
+-		if (chip_good(map, chip, adr, map_word_ff(map))) {
++		if (chip_ready(map, chip, adr, &datum)) {
+ 			if (cfi_check_err_status(map, chip, adr))
+ 				ret = -EIO;
+ 			break;
+@@ -2523,6 +2517,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip,
+ 	DECLARE_WAITQUEUE(wait, current);
+ 	int ret;
+ 	int retry_cnt = 0;
++	map_word datum = map_word_ff(map);
+ 
+ 	adr += chip->start;
+ 
+@@ -2577,7 +2572,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip,
+ 			chip->erase_suspended = 0;
+ 		}
+ 
+-		if (chip_good(map, chip, adr, map_word_ff(map))) {
++		if (chip_ready(map, chip, adr, &datum)) {
+ 			if (cfi_check_err_status(map, chip, adr))
+ 				ret = -EIO;
+ 			break;
+@@ -2771,7 +2766,7 @@ static int __maybe_unused do_ppb_xxlock(struct map_info *map,
+ 	 */
+ 	timeo = jiffies + msecs_to_jiffies(2000);	/* 2s max (un)locking */
+ 	for (;;) {
+-		if (chip_ready(map, chip, adr))
++		if (chip_ready(map, chip, adr, NULL))
+ 			break;
+ 
+ 		if (time_after(jiffies, timeo)) {
+diff --git a/drivers/mtd/mtdblock.c b/drivers/mtd/mtdblock.c
+index 03e3de3a5d79e..1e94e7d10b8be 100644
+--- a/drivers/mtd/mtdblock.c
++++ b/drivers/mtd/mtdblock.c
+@@ -257,6 +257,10 @@ static int mtdblock_open(struct mtd_blktrans_dev *mbd)
+ 		return 0;
+ 	}
+ 
++	if (mtd_type_is_nand(mbd->mtd))
++		pr_warn("%s: MTD device '%s' is NAND, please consider using UBI block devices instead.\n",
++			mbd->tr->name, mbd->mtd->name);
++
+ 	/* OK, it's not open. Create cache info for it */
+ 	mtdblk->count = 1;
+ 	mutex_init(&mtdblk->cache_mutex);
+@@ -322,10 +326,6 @@ static void mtdblock_add_mtd(struct mtd_blktrans_ops *tr, struct mtd_info *mtd)
+ 	if (!(mtd->flags & MTD_WRITEABLE))
+ 		dev->mbd.readonly = 1;
+ 
+-	if (mtd_type_is_nand(mtd))
+-		pr_warn("%s: MTD device '%s' is NAND, please consider using UBI block devices instead.\n",
+-			tr->name, mtd->name);
+-
+ 	if (add_mtd_blktrans_dev(&dev->mbd))
+ 		kfree(dev);
+ }
+diff --git a/drivers/mtd/nand/raw/cadence-nand-controller.c b/drivers/mtd/nand/raw/cadence-nand-controller.c
+index 7eec60ea90564..0d72672f8b64d 100644
+--- a/drivers/mtd/nand/raw/cadence-nand-controller.c
++++ b/drivers/mtd/nand/raw/cadence-nand-controller.c
+@@ -2983,11 +2983,10 @@ static int cadence_nand_dt_probe(struct platform_device *ofdev)
+ 	if (IS_ERR(cdns_ctrl->reg))
+ 		return PTR_ERR(cdns_ctrl->reg);
+ 
+-	res = platform_get_resource(ofdev, IORESOURCE_MEM, 1);
+-	cdns_ctrl->io.dma = res->start;
+-	cdns_ctrl->io.virt = devm_ioremap_resource(&ofdev->dev, res);
++	cdns_ctrl->io.virt = devm_platform_get_and_ioremap_resource(ofdev, 1, &res);
+ 	if (IS_ERR(cdns_ctrl->io.virt))
+ 		return PTR_ERR(cdns_ctrl->io.virt);
++	cdns_ctrl->io.dma = res->start;
+ 
+ 	dt->clk = devm_clk_get(cdns_ctrl->dev, "nf_clk");
+ 	if (IS_ERR(dt->clk))
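
[This cadence-nand hunk, like the davinci_voicecodec one above and the
intel-nand one below, reorders resource handling so nothing dereferences
the struct resource before the mapping has been validated -- on failure the
resource pointer may be NULL. The safe order, sketched (map_ctrl() is an
illustrative name; a real driver would keep base as well):

	#include <linux/platform_device.h>
	#include <linux/types.h>

	static int map_ctrl(struct platform_device *pdev, dma_addr_t *io_dma)
	{
		struct resource *res;
		void __iomem *base;

		base = devm_platform_get_and_ioremap_resource(pdev, 1, &res);
		if (IS_ERR(base))
			return PTR_ERR(base);	/* res may be NULL here */
		*io_dma = res->start;		/* safe only after the check */
		return 0;
	}
]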
+diff --git a/drivers/mtd/nand/raw/denali_pci.c b/drivers/mtd/nand/raw/denali_pci.c
+index 20c085a30adcb..de7e722d38262 100644
+--- a/drivers/mtd/nand/raw/denali_pci.c
++++ b/drivers/mtd/nand/raw/denali_pci.c
+@@ -74,22 +74,21 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 		return ret;
+ 	}
+ 
+-	denali->reg = ioremap(csr_base, csr_len);
++	denali->reg = devm_ioremap(denali->dev, csr_base, csr_len);
+ 	if (!denali->reg) {
+ 		dev_err(&dev->dev, "Spectra: Unable to remap memory region\n");
+ 		return -ENOMEM;
+ 	}
+ 
+-	denali->host = ioremap(mem_base, mem_len);
++	denali->host = devm_ioremap(denali->dev, mem_base, mem_len);
+ 	if (!denali->host) {
+ 		dev_err(&dev->dev, "Spectra: ioremap failed!");
+-		ret = -ENOMEM;
+-		goto out_unmap_reg;
++		return -ENOMEM;
+ 	}
+ 
+ 	ret = denali_init(denali);
+ 	if (ret)
+-		goto out_unmap_host;
++		return ret;
+ 
+ 	nsels = denali->nbanks;
+ 
+@@ -117,10 +116,6 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 
+ out_remove_denali:
+ 	denali_remove(denali);
+-out_unmap_host:
+-	iounmap(denali->host);
+-out_unmap_reg:
+-	iounmap(denali->reg);
+ 	return ret;
+ }
+ 
+@@ -129,8 +124,6 @@ static void denali_pci_remove(struct pci_dev *dev)
+ 	struct denali_controller *denali = pci_get_drvdata(dev);
+ 
+ 	denali_remove(denali);
+-	iounmap(denali->reg);
+-	iounmap(denali->host);
+ }
+ 
+ static struct pci_driver denali_pci_driver = {
+diff --git a/drivers/mtd/nand/raw/intel-nand-controller.c b/drivers/mtd/nand/raw/intel-nand-controller.c
+index 7c1c80dae826a..e91b879b32bdb 100644
+--- a/drivers/mtd/nand/raw/intel-nand-controller.c
++++ b/drivers/mtd/nand/raw/intel-nand-controller.c
+@@ -619,9 +619,9 @@ static int ebu_nand_probe(struct platform_device *pdev)
+ 	resname = devm_kasprintf(dev, GFP_KERNEL, "nand_cs%d", cs);
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, resname);
+ 	ebu_host->cs[cs].chipaddr = devm_ioremap_resource(dev, res);
+-	ebu_host->cs[cs].nand_pa = res->start;
+ 	if (IS_ERR(ebu_host->cs[cs].chipaddr))
+ 		return PTR_ERR(ebu_host->cs[cs].chipaddr);
++	ebu_host->cs[cs].nand_pa = res->start;
+ 
+ 	ebu_host->clk = devm_clk_get(dev, NULL);
+ 	if (IS_ERR(ebu_host->clk))
+diff --git a/drivers/mtd/nand/spi/gigadevice.c b/drivers/mtd/nand/spi/gigadevice.c
+index 1dd1c58980934..da77ab20296ea 100644
+--- a/drivers/mtd/nand/spi/gigadevice.c
++++ b/drivers/mtd/nand/spi/gigadevice.c
+@@ -39,6 +39,14 @@ static SPINAND_OP_VARIANTS(read_cache_variants_f,
+ 		SPINAND_PAGE_READ_FROM_CACHE_OP_3A(true, 0, 1, NULL, 0),
+ 		SPINAND_PAGE_READ_FROM_CACHE_OP_3A(false, 0, 0, NULL, 0));
+ 
++static SPINAND_OP_VARIANTS(read_cache_variants_1gq5,
++		SPINAND_PAGE_READ_FROM_CACHE_QUADIO_OP(0, 2, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_X4_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_DUALIO_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_X2_OP(0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_OP(true, 0, 1, NULL, 0),
++		SPINAND_PAGE_READ_FROM_CACHE_OP(false, 0, 1, NULL, 0));
++
+ static SPINAND_OP_VARIANTS(write_cache_variants,
+ 		SPINAND_PROG_LOAD_X4(true, 0, NULL, 0),
+ 		SPINAND_PROG_LOAD(true, 0, NULL, 0));
+@@ -339,7 +347,7 @@ static const struct spinand_info gigadevice_spinand_table[] = {
+ 		     SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x51),
+ 		     NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1),
+ 		     NAND_ECCREQ(4, 512),
+-		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
++		     SPINAND_INFO_OP_VARIANTS(&read_cache_variants_1gq5,
+ 					      &write_cache_variants,
+ 					      &update_cache_variants),
+ 		     SPINAND_HAS_QE_BIT,
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index b4f141ad9c9c3..c1630131c7342 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -788,6 +788,15 @@ static int spi_nor_write_16bit_sr_and_check(struct spi_nor *nor, u8 sr1)
+ 	if (ret)
+ 		return ret;
+ 
++	ret = spi_nor_read_sr(nor, sr_cr);
++	if (ret)
++		return ret;
++
++	if (sr1 != sr_cr[0]) {
++		dev_dbg(nor->dev, "SR: Read back test failed\n");
++		return -EIO;
++	}
++
+ 	if (nor->flags & SNOR_F_NO_READ_CR)
+ 		return 0;
+ 
+diff --git a/drivers/net/amt.c b/drivers/net/amt.c
+index 10455c9b9da0e..de4ea518c793f 100644
+--- a/drivers/net/amt.c
++++ b/drivers/net/amt.c
+@@ -943,7 +943,7 @@ static void amt_req_work(struct work_struct *work)
+ 	if (amt->status < AMT_STATUS_RECEIVED_ADVERTISEMENT)
+ 		goto out;
+ 
+-	if (amt->req_cnt++ > AMT_MAX_REQ_COUNT) {
++	if (amt->req_cnt > AMT_MAX_REQ_COUNT) {
+ 		netdev_dbg(amt->dev, "Gateway is not ready");
+ 		amt->qi = AMT_INIT_REQ_TIMEOUT;
+ 		amt->ready4 = false;
+@@ -951,13 +951,15 @@ static void amt_req_work(struct work_struct *work)
+ 		amt->remote_ip = 0;
+ 		__amt_update_gw_status(amt, AMT_STATUS_INIT, false);
+ 		amt->req_cnt = 0;
++		goto out;
+ 	}
+ 	spin_unlock_bh(&amt->lock);
+ 
+ 	amt_send_request(amt, false);
+ 	amt_send_request(amt, true);
+-	amt_update_gw_status(amt, AMT_STATUS_SENT_REQUEST, true);
+ 	spin_lock_bh(&amt->lock);
++	__amt_update_gw_status(amt, AMT_STATUS_SENT_REQUEST, true);
++	amt->req_cnt++;
+ out:
+ 	exp = min_t(u32, (1 * (1 << amt->req_cnt)), AMT_MAX_REQ_TIMEOUT);
+ 	mod_delayed_work(amt_wq, &amt->req_wq, msecs_to_jiffies(exp * 1000));
+@@ -2696,9 +2698,8 @@ static int amt_rcv(struct sock *sk, struct sk_buff *skb)
+ 				err = true;
+ 				goto drop;
+ 			}
+-			if (amt_advertisement_handler(amt, skb))
+-				amt->dev->stats.rx_dropped++;
+-			goto out;
++			err = amt_advertisement_handler(amt, skb);
++			break;
+ 		case AMT_MSG_MULTICAST_DATA:
+ 			if (iph->saddr != amt->remote_ip) {
+ 				netdev_dbg(amt->dev, "Invalid Relay IP\n");
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 38e1525481261..b5c5196e03ee0 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -5591,16 +5591,23 @@ static int bond_ethtool_get_ts_info(struct net_device *bond_dev,
+ 	const struct ethtool_ops *ops;
+ 	struct net_device *real_dev;
+ 	struct phy_device *phydev;
++	int ret = 0;
+ 
++	rcu_read_lock();
+ 	real_dev = bond_option_active_slave_get_rcu(bond);
++	dev_hold(real_dev);
++	rcu_read_unlock();
++
+ 	if (real_dev) {
+ 		ops = real_dev->ethtool_ops;
+ 		phydev = real_dev->phydev;
+ 
+ 		if (phy_has_tsinfo(phydev)) {
+-			return phy_ts_info(phydev, info);
++			ret = phy_ts_info(phydev, info);
++			goto out;
+ 		} else if (ops->get_ts_info) {
+-			return ops->get_ts_info(real_dev, info);
++			ret = ops->get_ts_info(real_dev, info);
++			goto out;
+ 		}
+ 	}
+ 
+@@ -5608,7 +5615,9 @@ static int bond_ethtool_get_ts_info(struct net_device *bond_dev,
+ 				SOF_TIMESTAMPING_SOFTWARE;
+ 	info->phc_index = -1;
+ 
+-	return 0;
++out:
++	dev_put(real_dev);
++	return ret;
+ }
+ 
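
[The bonding fix is the standard way to carry an RCU-found pointer out of
the read-side section: take a long-lived reference while still under the
lock, and drop it when done. A self-contained sketch of the idiom
(pin_rcu_netdev() is an illustrative name):

	#include <linux/netdevice.h>
	#include <linux/rcupdate.h>

	/* slot must be published by the writer with RCU semantics */
	static struct net_device *pin_rcu_netdev(struct net_device __rcu **slot)
	{
		struct net_device *dev;

		rcu_read_lock();
		dev = rcu_dereference(*slot);
		dev_hold(dev);		/* NULL-safe; pins past the unlock */
		rcu_read_unlock();

		return dev;		/* caller finishes with dev_put(dev) */
	}

Without the hold, the active slave could be unregistered and freed between
rcu_read_unlock() and the later ethtool_ops call.]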
+ static const struct ethtool_ops bond_ethtool_ops = {
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
+index 9cb6b5ad8dda0..60e56fa4601d3 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h
+@@ -441,7 +441,7 @@ struct mcp251xfd_hw_tef_obj {
+ /* The tx_obj_raw version is used in spi async, i.e. without
+  * regmap. We have to take care of endianness ourselves.
+  */
+-struct mcp251xfd_hw_tx_obj_raw {
++struct __packed mcp251xfd_hw_tx_obj_raw {
+ 	__le32 id;
+ 	__le32 flags;
+ 	u8 data[sizeof_field(struct canfd_frame, data)];
+diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c
+index e562c5ab1149a..43f0c6a064ba1 100644
+--- a/drivers/net/can/xilinx_can.c
++++ b/drivers/net/can/xilinx_can.c
+@@ -239,7 +239,7 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd = {
+ };
+ 
+ /* AXI CANFD Data Bittiming constants as per AXI CANFD 1.0 specs */
+-static struct can_bittiming_const xcan_data_bittiming_const_canfd = {
++static const struct can_bittiming_const xcan_data_bittiming_const_canfd = {
+ 	.name = DRIVER_NAME,
+ 	.tseg1_min = 1,
+ 	.tseg1_max = 16,
+@@ -265,7 +265,7 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd2 = {
+ };
+ 
+ /* AXI CANFD 2.0 Data Bittiming constants as per AXI CANFD 2.0 spec */
+-static struct can_bittiming_const xcan_data_bittiming_const_canfd2 = {
++static const struct can_bittiming_const xcan_data_bittiming_const_canfd2 = {
+ 	.name = DRIVER_NAME,
+ 	.tseg1_min = 1,
+ 	.tseg1_max = 32,
+diff --git a/drivers/net/dsa/Kconfig b/drivers/net/dsa/Kconfig
+index 37a3dabdce313..6d1fcb08bba1f 100644
+--- a/drivers/net/dsa/Kconfig
++++ b/drivers/net/dsa/Kconfig
+@@ -72,7 +72,6 @@ source "drivers/net/dsa/realtek/Kconfig"
+ 
+ config NET_DSA_SMSC_LAN9303
+ 	tristate
+-	depends on VLAN_8021Q || VLAN_8021Q=n
+ 	select NET_DSA_TAG_LAN9303
+ 	select REGMAP
+ 	help
+@@ -82,6 +81,7 @@ config NET_DSA_SMSC_LAN9303
+ config NET_DSA_SMSC_LAN9303_I2C
+ 	tristate "SMSC/Microchip LAN9303 3-ports 10/100 ethernet switch in I2C managed mode"
+ 	depends on I2C
++	depends on VLAN_8021Q || VLAN_8021Q=n
+ 	select NET_DSA_SMSC_LAN9303
+ 	select REGMAP_I2C
+ 	help
+@@ -91,6 +91,7 @@ config NET_DSA_SMSC_LAN9303_I2C
+ config NET_DSA_SMSC_LAN9303_MDIO
+ 	tristate "SMSC/Microchip LAN9303 3-ports 10/100 ethernet switch in MDIO managed mode"
+ 	select NET_DSA_SMSC_LAN9303
++	depends on VLAN_8021Q || VLAN_8021Q=n
+ 	help
+ 	  Enable access functions if the SMSC/Microchip LAN9303 is configured
+ 	  for MDIO managed mode.
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index fe3cb26f4287e..831ccbecb0c2d 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -2540,13 +2540,7 @@ static void mt7531_sgmii_validate(struct mt7530_priv *priv, int port,
+ 	/* Port5 supports either RGMII or SGMII.
+ 	 * Port6 supports SGMII only.
+ 	 */
+-	switch (port) {
+-	case 5:
+-		if (mt7531_is_rgmii_port(priv, port))
+-			break;
+-		fallthrough;
+-	case 6:
+-		phylink_set(supported, 1000baseX_Full);
++	if (port == 6) {
+ 		phylink_set(supported, 2500baseX_Full);
+ 		phylink_set(supported, 2500baseT_Full);
+ 	}
+@@ -2914,8 +2908,6 @@ static void
+ mt7530_mac_port_validate(struct dsa_switch *ds, int port,
+ 			 unsigned long *supported)
+ {
+-	if (port == 5)
+-		phylink_set(supported, 1000baseX_Full);
+ }
+ 
+ static void mt7531_mac_port_validate(struct dsa_switch *ds, int port,
+@@ -2952,8 +2944,10 @@ mt753x_phylink_validate(struct dsa_switch *ds, int port,
+ 	}
+ 
+ 	/* This switch only supports 1G full-duplex. */
+-	if (state->interface != PHY_INTERFACE_MODE_MII)
++	if (state->interface != PHY_INTERFACE_MODE_MII) {
+ 		phylink_set(mask, 1000baseT_Full);
++		phylink_set(mask, 1000baseX_Full);
++	}
+ 
+ 	priv->info->mac_port_validate(ds, port, mask);
+ 
+diff --git a/drivers/net/dsa/qca8k.c b/drivers/net/dsa/qca8k.c
+index d3ed0a7f80771..22b328bd7cd51 100644
+--- a/drivers/net/dsa/qca8k.c
++++ b/drivers/net/dsa/qca8k.c
+@@ -1287,7 +1287,12 @@ qca8k_internal_mdio_read(struct mii_bus *slave_bus, int phy, int regnum)
+ 	if (ret >= 0)
+ 		return ret;
+ 
+-	return qca8k_mdio_read(priv, phy, regnum);
++	ret = qca8k_mdio_read(priv, phy, regnum);
++
++	if (ret < 0)
++		return 0xffff;
++
++	return ret;
+ }
+ 
+ static int
+diff --git a/drivers/net/ethernet/broadcom/Makefile b/drivers/net/ethernet/broadcom/Makefile
+index 0ddfb5b5d53ca..2e6c5f258a1ff 100644
+--- a/drivers/net/ethernet/broadcom/Makefile
++++ b/drivers/net/ethernet/broadcom/Makefile
+@@ -17,3 +17,8 @@ obj-$(CONFIG_BGMAC_BCMA) += bgmac-bcma.o bgmac-bcma-mdio.o
+ obj-$(CONFIG_BGMAC_PLATFORM) += bgmac-platform.o
+ obj-$(CONFIG_SYSTEMPORT) += bcmsysport.o
+ obj-$(CONFIG_BNXT) += bnxt/
++
++# FIXME: temporarily silence -Warray-bounds on non W=1+ builds
++ifndef KBUILD_EXTRA_WARN
++CFLAGS_tg3.o += -Wno-array-bounds
++endif
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 1d69fe0737a1c..d5149478a3510 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -10363,6 +10363,7 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
+ 	if (BNXT_PF(bp))
+ 		bnxt_vf_reps_open(bp);
+ 	bnxt_ptp_init_rtc(bp, true);
++	bnxt_ptp_cfg_tstamp_filters(bp);
+ 	return 0;
+ 
+ open_err_irq:
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+index 00f2f80c00733..f9c94e5fe7187 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+@@ -295,6 +295,27 @@ static int bnxt_ptp_cfg_event(struct bnxt *bp, u8 event)
+ 	return hwrm_req_send(bp, req);
+ }
+ 
++void bnxt_ptp_cfg_tstamp_filters(struct bnxt *bp)
++{
++	struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
++	struct hwrm_port_mac_cfg_input *req;
++
++	if (!ptp || !ptp->tstamp_filters)
++		return;
++
++	if (hwrm_req_init(bp, req, HWRM_PORT_MAC_CFG))
++		goto out;
++	req->flags = cpu_to_le32(ptp->tstamp_filters);
++	req->enables = cpu_to_le32(PORT_MAC_CFG_REQ_ENABLES_RX_TS_CAPTURE_PTP_MSG_TYPE);
++	req->rx_ts_capture_ptp_msg_type = cpu_to_le16(ptp->rxctl);
++
++	if (!hwrm_req_send(bp, req))
++		return;
++	ptp->tstamp_filters = 0;
++out:
++	netdev_warn(bp->dev, "Failed to configure HW packet timestamp filters\n");
++}
++
+ void bnxt_ptp_reapply_pps(struct bnxt *bp)
+ {
+ 	struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
+@@ -435,27 +456,36 @@ static int bnxt_ptp_enable(struct ptp_clock_info *ptp_info,
+ static int bnxt_hwrm_ptp_cfg(struct bnxt *bp)
+ {
+ 	struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
+-	struct hwrm_port_mac_cfg_input *req;
+ 	u32 flags = 0;
+-	int rc;
++	int rc = 0;
+ 
+-	rc = hwrm_req_init(bp, req, HWRM_PORT_MAC_CFG);
+-	if (rc)
+-		return rc;
++	switch (ptp->rx_filter) {
++	case HWTSTAMP_FILTER_NONE:
++		flags = PORT_MAC_CFG_REQ_FLAGS_PTP_RX_TS_CAPTURE_DISABLE;
++		break;
++	case HWTSTAMP_FILTER_PTP_V2_EVENT:
++	case HWTSTAMP_FILTER_PTP_V2_SYNC:
++	case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
++		flags = PORT_MAC_CFG_REQ_FLAGS_PTP_RX_TS_CAPTURE_ENABLE;
++		break;
++	}
+ 
+-	if (ptp->rx_filter)
+-		flags |= PORT_MAC_CFG_REQ_FLAGS_PTP_RX_TS_CAPTURE_ENABLE;
+-	else
+-		flags |= PORT_MAC_CFG_REQ_FLAGS_PTP_RX_TS_CAPTURE_DISABLE;
+ 	if (ptp->tx_tstamp_en)
+ 		flags |= PORT_MAC_CFG_REQ_FLAGS_PTP_TX_TS_CAPTURE_ENABLE;
+ 	else
+ 		flags |= PORT_MAC_CFG_REQ_FLAGS_PTP_TX_TS_CAPTURE_DISABLE;
+-	req->flags = cpu_to_le32(flags);
+-	req->enables = cpu_to_le32(PORT_MAC_CFG_REQ_ENABLES_RX_TS_CAPTURE_PTP_MSG_TYPE);
+-	req->rx_ts_capture_ptp_msg_type = cpu_to_le16(ptp->rxctl);
+ 
+-	return hwrm_req_send(bp, req);
++	ptp->tstamp_filters = flags;
++
++	if (netif_running(bp->dev)) {
++		rc = bnxt_close_nic(bp, false, false);
++		if (!rc)
++			rc = bnxt_open_nic(bp, false, false);
++		if (!rc && !ptp->tstamp_filters)
++			rc = -EIO;
++	}
++
++	return rc;
+ }
+ 
+ int bnxt_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+index 530b9922608c8..4ce0a14c1e232 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+@@ -113,6 +113,7 @@ struct bnxt_ptp_cfg {
+ 					 BNXT_PTP_MSG_PDELAY_RESP)
+ 	u8			tx_tstamp_en:1;
+ 	int			rx_filter;
++	u32			tstamp_filters;
+ 
+ 	u32			refclk_regs[2];
+ 	u32			refclk_mapped_regs[2];
+@@ -133,6 +134,7 @@ do {						\
+ int bnxt_ptp_parse(struct sk_buff *skb, u16 *seq_id, u16 *hdr_off);
+ void bnxt_ptp_update_current_time(struct bnxt *bp);
+ void bnxt_ptp_pps_event(struct bnxt *bp, u32 data1, u32 data2);
++void bnxt_ptp_cfg_tstamp_filters(struct bnxt *bp);
+ void bnxt_ptp_reapply_pps(struct bnxt *bp);
+ int bnxt_hwtstamp_set(struct net_device *dev, struct ifreq *ifr);
+ int bnxt_hwtstamp_get(struct net_device *dev, struct ifreq *ifr);
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 61284baa0496e..e9e5c3f6027c7 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -36,6 +36,7 @@
+ #include <linux/iopoll.h>
+ #include <linux/phy/phy.h>
+ #include <linux/pm_runtime.h>
++#include <linux/ptp_classify.h>
+ #include <linux/reset.h>
+ #include "macb.h"
+ 
+@@ -1124,6 +1125,36 @@ static void macb_tx_error_task(struct work_struct *work)
+ 	spin_unlock_irqrestore(&bp->lock, flags);
+ }
+ 
++static bool ptp_one_step_sync(struct sk_buff *skb)
++{
++	struct ptp_header *hdr;
++	unsigned int ptp_class;
++	u8 msgtype;
++
++	/* No need to parse packet if PTP TS is not involved */
++	if (likely(!(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)))
++		goto not_oss;
++
++	/* Identify and return whether PTP one step sync is being processed */
++	ptp_class = ptp_classify_raw(skb);
++	if (ptp_class == PTP_CLASS_NONE)
++		goto not_oss;
++
++	hdr = ptp_parse_header(skb, ptp_class);
++	if (!hdr)
++		goto not_oss;
++
++	if (hdr->flag_field[0] & PTP_FLAG_TWOSTEP)
++		goto not_oss;
++
++	msgtype = ptp_get_msgtype(hdr, ptp_class);
++	if (msgtype == PTP_MSGTYPE_SYNC)
++		return true;
++
++not_oss:
++	return false;
++}
++
+ static void macb_tx_interrupt(struct macb_queue *queue)
+ {
+ 	unsigned int tail;
+@@ -1168,8 +1199,8 @@ static void macb_tx_interrupt(struct macb_queue *queue)
+ 
+ 			/* First, update TX stats if needed */
+ 			if (skb) {
+-				if (unlikely(skb_shinfo(skb)->tx_flags &
+-					     SKBTX_HW_TSTAMP) &&
++				if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
++				    !ptp_one_step_sync(skb) &&
+ 				    gem_ptp_do_txstamp(queue, skb, desc) == 0) {
+ 					/* skb now belongs to timestamp buffer
+ 					 * and will be removed later
+@@ -1999,7 +2030,8 @@ static unsigned int macb_tx_map(struct macb *bp,
+ 			ctrl |= MACB_BF(TX_LSO, lso_ctrl);
+ 			ctrl |= MACB_BF(TX_TCP_SEQ_SRC, seq_ctrl);
+ 			if ((bp->dev->features & NETIF_F_HW_CSUM) &&
+-			    skb->ip_summed != CHECKSUM_PARTIAL && !lso_ctrl)
++			    skb->ip_summed != CHECKSUM_PARTIAL && !lso_ctrl &&
++			    !ptp_one_step_sync(skb))
+ 				ctrl |= MACB_BIT(TX_NOCRC);
+ 		} else
+ 			/* Only set MSS/MFS on payload descriptors
+@@ -2097,7 +2129,7 @@ static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev)
+ 
+ 	if (!(ndev->features & NETIF_F_HW_CSUM) ||
+ 	    !((*skb)->ip_summed != CHECKSUM_PARTIAL) ||
+-	    skb_shinfo(*skb)->gso_size)	/* Not available for GSO */
++	    skb_shinfo(*skb)->gso_size || ptp_one_step_sync(*skb))
+ 		return 0;
+ 
+ 	if (padlen <= 0) {
+@@ -4594,7 +4626,7 @@ static int zynqmp_init(struct platform_device *pdev)
+ 
+ 	if (bp->phy_interface == PHY_INTERFACE_MODE_SGMII) {
+ 		/* Ensure PS-GTR PHY device used in SGMII mode is ready */
+-		bp->sgmii_phy = devm_phy_get(&pdev->dev, "sgmii-phy");
++		bp->sgmii_phy = devm_phy_optional_get(&pdev->dev, NULL);
+ 
+ 		if (IS_ERR(bp->sgmii_phy)) {
+ 			ret = PTR_ERR(bp->sgmii_phy);
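/*
 * [Editor's note] The ptp_one_step_sync() helper added in this file decides
 * whether a TX skb carries a PTP one-step SYNC frame, for which the MAC
 * inserts the timestamp into the packet itself; the other macb hunks then
 * exclude such frames from two-step TX timestamping and from the software
 * CRC/padding path, which would otherwise be computed over data the
 * hardware is about to rewrite. A minimal stand-alone sketch of the same
 * header test (the constants mirror the kernel's ptp_classify.h and the
 * IEEE 1588 header layout; they are not taken from this patch):
 */
#include <stdbool.h>
#include <stdint.h>

#define PTP_MSGTYPE_SYNC  0x00
#define PTP_FLAG_TWOSTEP  0x02	/* bit 1 of flagField[0] */

/* hdr points at a validated PTPv2 header */
static bool is_one_step_sync(const uint8_t *hdr)
{
	uint8_t msgtype = hdr[0] & 0x0f;	/* low nibble: messageType */
	uint8_t flags0  = hdr[6];		/* first flagField octet */

	return msgtype == PTP_MSGTYPE_SYNC && !(flags0 & PTP_FLAG_TWOSTEP);
}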
+diff --git a/drivers/net/ethernet/cadence/macb_ptp.c b/drivers/net/ethernet/cadence/macb_ptp.c
+index fb6b27f46b153..9559c16078f95 100644
+--- a/drivers/net/ethernet/cadence/macb_ptp.c
++++ b/drivers/net/ethernet/cadence/macb_ptp.c
+@@ -470,8 +470,10 @@ int gem_set_hwtst(struct net_device *dev, struct ifreq *ifr, int cmd)
+ 	case HWTSTAMP_TX_ONESTEP_SYNC:
+ 		if (gem_ptp_set_one_step_sync(bp, 1) != 0)
+ 			return -ERANGE;
+-		fallthrough;
++		tx_bd_control = TSTAMP_ALL_FRAMES;
++		break;
+ 	case HWTSTAMP_TX_ON:
++		gem_ptp_set_one_step_sync(bp, 0);
+ 		tx_bd_control = TSTAMP_ALL_FRAMES;
+ 		break;
+ 	default:
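/*
 * [Editor's note] Observation on the macb_ptp.c hunk above: the old code
 * fell through from HWTSTAMP_TX_ONESTEP_SYNC into HWTSTAMP_TX_ON, so
 * selecting plain TX_ON after one-step mode left one-step sync enabled in
 * hardware. Both cases still program TSTAMP_ALL_FRAMES, but TX_ON now
 * explicitly calls gem_ptp_set_one_step_sync(bp, 0) to turn the mode off.
 */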
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 4b047255d9280..cd9ec80522e75 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -1097,6 +1097,7 @@ static void dpaa2_eth_free_tx_fd(struct dpaa2_eth_priv *priv,
+ 	u32 fd_len = dpaa2_fd_get_len(fd);
+ 	struct dpaa2_sg_entry *sgt;
+ 	int should_free_skb = 1;
++	void *tso_hdr;
+ 	int i;
+ 
+ 	fd_addr = dpaa2_fd_get_addr(fd);
+@@ -1135,20 +1136,21 @@ static void dpaa2_eth_free_tx_fd(struct dpaa2_eth_priv *priv,
+ 			sgt = (struct dpaa2_sg_entry *)(buffer_start +
+ 							priv->tx_data_offset);
+ 
++			/* Unmap the SGT buffer */
++			dma_unmap_single(dev, fd_addr, swa->tso.sgt_size,
++					 DMA_BIDIRECTIONAL);
++
+ 			/* Unmap and free the header */
++			tso_hdr = dpaa2_iova_to_virt(priv->iommu_domain, dpaa2_sg_get_addr(sgt));
+ 			dma_unmap_single(dev, dpaa2_sg_get_addr(sgt), TSO_HEADER_SIZE,
+ 					 DMA_TO_DEVICE);
+-			kfree(dpaa2_iova_to_virt(priv->iommu_domain, dpaa2_sg_get_addr(sgt)));
++			kfree(tso_hdr);
+ 
+ 			/* Unmap the other SG entries for the data */
+ 			for (i = 1; i < swa->tso.num_sg; i++)
+ 				dma_unmap_single(dev, dpaa2_sg_get_addr(&sgt[i]),
+ 						 dpaa2_sg_get_len(&sgt[i]), DMA_TO_DEVICE);
+ 
+-			/* Unmap the SGT buffer */
+-			dma_unmap_single(dev, fd_addr, swa->sg.sgt_size,
+-					 DMA_BIDIRECTIONAL);
+-
+ 			if (!swa->tso.is_last_fd)
+ 				should_free_skb = 0;
+ 		} else {
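/*
 * [Editor's note] Two fixes in the dpaa2-eth.c hunk above: the TSO
 * scatter-gather table is now unmapped with swa->tso.sgt_size -- the union
 * member actually used at map time, not swa->sg.sgt_size -- and the unmap
 * happens *before* the CPU walks the table's entries, with the header's
 * virtual address resolved into tso_hdr before its entry is unmapped. Per
 * the DMA API, a DMA_BIDIRECTIONAL buffer is only guaranteed coherent for
 * CPU reads after dma_unmap_single() (or an explicit
 * dma_sync_single_for_cpu()), so the rough rule the fix restores is:
 *
 *	dma_unmap_single(dev, addr, len, DMA_BIDIRECTIONAL);	// sync for CPU
 *	parse(virt_addr);					// then read
 */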
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
+index ebc77771f5dac..4aa1f433ed24d 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
+@@ -643,6 +643,7 @@ int hinic_pf_to_mgmt_init(struct hinic_pf_to_mgmt *pf_to_mgmt,
+ 	err = alloc_msg_buf(pf_to_mgmt);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "Failed to allocate msg buffers\n");
++		destroy_workqueue(pf_to_mgmt->workq);
+ 		hinic_health_reporters_destroy(hwdev->devlink_dev);
+ 		return err;
+ 	}
+@@ -650,6 +651,7 @@ int hinic_pf_to_mgmt_init(struct hinic_pf_to_mgmt *pf_to_mgmt,
+ 	err = hinic_api_cmd_init(pf_to_mgmt->cmd_chain, hwif);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "Failed to initialize cmd chains\n");
++		destroy_workqueue(pf_to_mgmt->workq);
+ 		hinic_health_reporters_destroy(hwdev->devlink_dev);
+ 		return err;
+ 	}
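/*
 * [Editor's note] The two hinic_hw_mgmt.c hunks plug a workqueue leak: the
 * error paths that run after the management workqueue has been created now
 * call destroy_workqueue() before returning. When a function acquires
 * several resources, a goto-based unwind keeps each cleanup in exactly one
 * place; a minimal sketch of that idiom (all names hypothetical, not the
 * hinic API):
 */
#include <linux/workqueue.h>

struct example { struct workqueue_struct *wq; };
static int example_step_two(struct example *ex);	/* hypothetical */

static int example_init(struct example *ex)
{
	int err;

	ex->wq = create_workqueue("example");	/* resource 1 */
	if (!ex->wq)
		return -ENOMEM;

	err = example_step_two(ex);		/* later step may fail */
	if (err)
		goto err_destroy_wq;

	return 0;

err_destroy_wq:
	destroy_workqueue(ex->wq);		/* single cleanup site */
	return err;
}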
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
+index f7dc7d825f637..4daf6bf291ecb 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
+@@ -386,7 +386,7 @@ static int alloc_wqes_shadow(struct hinic_wq *wq)
+ 		return -ENOMEM;
+ 
+ 	wq->shadow_idx = devm_kcalloc(&pdev->dev, wq->num_q_pages,
+-				      sizeof(wq->prod_idx), GFP_KERNEL);
++				      sizeof(*wq->shadow_idx), GFP_KERNEL);
+ 	if (!wq->shadow_idx)
+ 		goto err_shadow_idx;
+ 
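/*
 * [Editor's note] The hinic_hw_wq.c fix is the classic kcalloc element-size
 * bug: the array being allocated holds shadow indices, so each element must
 * be sizeof(*wq->shadow_idx); sizeof(wq->prod_idx) merely happened to
 * compile. Sizing the allocation from the pointer it is assigned to keeps
 * it correct even if the element type later changes:
 *
 *	p = devm_kcalloc(dev, n, sizeof(*p), GFP_KERNEL);
 */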
+diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.c b/drivers/net/ethernet/intel/ice/ice_devlink.c
+index a230edb384665..4a9de59121d85 100644
+--- a/drivers/net/ethernet/intel/ice/ice_devlink.c
++++ b/drivers/net/ethernet/intel/ice/ice_devlink.c
+@@ -753,9 +753,12 @@ int ice_devlink_create_vf_port(struct ice_vf *vf)
+ 
+ 	pf = vf->pf;
+ 	dev = ice_pf_to_dev(pf);
+-	vsi = ice_get_vf_vsi(vf);
+ 	devlink_port = &vf->devlink_port;
+ 
++	vsi = ice_get_vf_vsi(vf);
++	if (!vsi)
++		return -EINVAL;
++
+ 	attrs.flavour = DEVLINK_PORT_FLAVOUR_PCI_VF;
+ 	attrs.pci_vf.pf = pf->hw.bus.func;
+ 	attrs.pci_vf.vf = vf->vf_id;
+diff --git a/drivers/net/ethernet/intel/ice/ice_repr.c b/drivers/net/ethernet/intel/ice/ice_repr.c
+index 848f2adea563e..a91b81c3088b5 100644
+--- a/drivers/net/ethernet/intel/ice/ice_repr.c
++++ b/drivers/net/ethernet/intel/ice/ice_repr.c
+@@ -293,8 +293,13 @@ static int ice_repr_add(struct ice_vf *vf)
+ 	struct ice_q_vector *q_vector;
+ 	struct ice_netdev_priv *np;
+ 	struct ice_repr *repr;
++	struct ice_vsi *vsi;
+ 	int err;
+ 
++	vsi = ice_get_vf_vsi(vf);
++	if (!vsi)
++		return -EINVAL;
++
+ 	repr = kzalloc(sizeof(*repr), GFP_KERNEL);
+ 	if (!repr)
+ 		return -ENOMEM;
+@@ -313,7 +318,7 @@ static int ice_repr_add(struct ice_vf *vf)
+ 		goto err_alloc;
+ 	}
+ 
+-	repr->src_vsi = ice_get_vf_vsi(vf);
++	repr->src_vsi = vsi;
+ 	repr->vf = vf;
+ 	vf->repr = repr;
+ 	np = netdev_priv(repr->netdev);
+diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
+index 0c438219f7a39..bb1721f1321db 100644
+--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
++++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
+@@ -46,7 +46,12 @@ static void ice_free_vf_entries(struct ice_pf *pf)
+  */
+ static void ice_vf_vsi_release(struct ice_vf *vf)
+ {
+-	ice_vsi_release(ice_get_vf_vsi(vf));
++	struct ice_vsi *vsi = ice_get_vf_vsi(vf);
++
++	if (WARN_ON(!vsi))
++		return;
++
++	ice_vsi_release(vsi);
+ 	ice_vf_invalidate_vsi(vf);
+ }
+ 
+@@ -104,6 +109,8 @@ static void ice_dis_vf_mappings(struct ice_vf *vf)
+ 
+ 	hw = &pf->hw;
+ 	vsi = ice_get_vf_vsi(vf);
++	if (WARN_ON(!vsi))
++		return;
+ 
+ 	dev = ice_pf_to_dev(pf);
+ 	wr32(hw, VPINT_ALLOC(vf->vf_id), 0);
+@@ -341,6 +348,9 @@ static void ice_ena_vf_q_mappings(struct ice_vf *vf, u16 max_txq, u16 max_rxq)
+ 	struct ice_hw *hw = &vf->pf->hw;
+ 	u32 reg;
+ 
++	if (WARN_ON(!vsi))
++		return;
++
+ 	/* set regardless of mapping mode */
+ 	wr32(hw, VPLAN_TXQ_MAPENA(vf->vf_id), VPLAN_TXQ_MAPENA_TX_ENA_M);
+ 
+@@ -386,6 +396,9 @@ static void ice_ena_vf_mappings(struct ice_vf *vf)
+ {
+ 	struct ice_vsi *vsi = ice_get_vf_vsi(vf);
+ 
++	if (WARN_ON(!vsi))
++		return;
++
+ 	ice_ena_vf_msix_mappings(vf);
+ 	ice_ena_vf_q_mappings(vf, vsi->alloc_txq, vsi->alloc_rxq);
+ }
+@@ -1128,6 +1141,8 @@ static struct ice_vf *ice_get_vf_from_pfq(struct ice_pf *pf, u16 pfq)
+ 		u16 rxq_idx;
+ 
+ 		vsi = ice_get_vf_vsi(vf);
++		if (!vsi)
++			continue;
+ 
+ 		ice_for_each_rxq(vsi, rxq_idx)
+ 			if (vsi->rxq_map[rxq_idx] == pfq) {
+@@ -1521,8 +1536,15 @@ static int ice_calc_all_vfs_min_tx_rate(struct ice_pf *pf)
+ static bool
+ ice_min_tx_rate_oversubscribed(struct ice_vf *vf, int min_tx_rate)
+ {
+-	int link_speed_mbps = ice_get_link_speed_mbps(ice_get_vf_vsi(vf));
+-	int all_vfs_min_tx_rate = ice_calc_all_vfs_min_tx_rate(vf->pf);
++	struct ice_vsi *vsi = ice_get_vf_vsi(vf);
++	int all_vfs_min_tx_rate;
++	int link_speed_mbps;
++
++	if (WARN_ON(!vsi))
++		return false;
++
++	link_speed_mbps = ice_get_link_speed_mbps(vsi);
++	all_vfs_min_tx_rate = ice_calc_all_vfs_min_tx_rate(vf->pf);
+ 
+ 	/* this VF's previous rate is being overwritten */
+ 	all_vfs_min_tx_rate -= vf->min_tx_rate;
+@@ -1566,6 +1588,10 @@ ice_set_vf_bw(struct net_device *netdev, int vf_id, int min_tx_rate,
+ 		goto out_put_vf;
+ 
+ 	vsi = ice_get_vf_vsi(vf);
++	if (!vsi) {
++		ret = -EINVAL;
++		goto out_put_vf;
++	}
+ 
+ 	/* when max_tx_rate is zero that means no max Tx rate limiting, so only
+ 	 * check if max_tx_rate is non-zero
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+index 6578059d94794..aefd66a4db80d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+@@ -220,8 +220,10 @@ static void ice_vf_clear_counters(struct ice_vf *vf)
+ {
+ 	struct ice_vsi *vsi = ice_get_vf_vsi(vf);
+ 
++	if (vsi)
++		vsi->num_vlan = 0;
++
+ 	vf->num_mac = 0;
+-	vsi->num_vlan = 0;
+ 	memset(&vf->mdd_tx_events, 0, sizeof(vf->mdd_tx_events));
+ 	memset(&vf->mdd_rx_events, 0, sizeof(vf->mdd_rx_events));
+ }
+@@ -251,6 +253,9 @@ static int ice_vf_rebuild_vsi(struct ice_vf *vf)
+ 	struct ice_vsi *vsi = ice_get_vf_vsi(vf);
+ 	struct ice_pf *pf = vf->pf;
+ 
++	if (WARN_ON(!vsi))
++		return -EINVAL;
++
+ 	if (ice_vsi_rebuild(vsi, true)) {
+ 		dev_err(ice_pf_to_dev(pf), "failed to rebuild VF %d VSI\n",
+ 			vf->vf_id);
+@@ -514,6 +519,10 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags)
+ 	ice_trigger_vf_reset(vf, flags & ICE_VF_RESET_VFLR, false);
+ 
+ 	vsi = ice_get_vf_vsi(vf);
++	if (WARN_ON(!vsi)) {
++		err = -EIO;
++		goto out_unlock;
++	}
+ 
+ 	ice_dis_vf_qs(vf);
+ 
+@@ -572,6 +581,11 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags)
+ 
+ 	vf->vf_ops->post_vsi_rebuild(vf);
+ 	vsi = ice_get_vf_vsi(vf);
++	if (WARN_ON(!vsi)) {
++		err = -EINVAL;
++		goto out_unlock;
++	}
++
+ 	ice_eswitch_update_repr(vsi);
+ 	ice_eswitch_replay_vf_mac_rule(vf);
+ 
+@@ -610,6 +624,9 @@ void ice_dis_vf_qs(struct ice_vf *vf)
+ {
+ 	struct ice_vsi *vsi = ice_get_vf_vsi(vf);
+ 
++	if (WARN_ON(!vsi))
++		return;
++
+ 	ice_vsi_stop_lan_tx_rings(vsi, ICE_NO_RESET, vf->vf_id);
+ 	ice_vsi_stop_all_rx_rings(vsi);
+ 	ice_set_vf_state_qs_dis(vf);
+@@ -790,6 +807,9 @@ static int ice_vf_rebuild_host_mac_cfg(struct ice_vf *vf)
+ 	u8 broadcast[ETH_ALEN];
+ 	int status;
+ 
++	if (WARN_ON(!vsi))
++		return -EINVAL;
++
+ 	if (ice_is_eswitch_mode_switchdev(vf->pf))
+ 		return 0;
+ 
+@@ -875,6 +895,9 @@ static int ice_vf_rebuild_host_tx_rate_cfg(struct ice_vf *vf)
+ 	struct ice_vsi *vsi = ice_get_vf_vsi(vf);
+ 	int err;
+ 
++	if (WARN_ON(!vsi))
++		return -EINVAL;
++
+ 	if (vf->min_tx_rate) {
+ 		err = ice_set_min_bw_limit(vsi, (u64)vf->min_tx_rate * 1000);
+ 		if (err) {
+@@ -938,6 +961,9 @@ void ice_vf_rebuild_host_cfg(struct ice_vf *vf)
+ 	struct device *dev = ice_pf_to_dev(vf->pf);
+ 	struct ice_vsi *vsi = ice_get_vf_vsi(vf);
+ 
++	if (WARN_ON(!vsi))
++		return;
++
+ 	ice_vf_set_host_trust_cfg(vf);
+ 
+ 	if (ice_vf_rebuild_host_mac_cfg(vf))
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+index 2889e050a4c93..5405a0e752cf7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+@@ -2392,6 +2392,11 @@ static int ice_vc_ena_vlan_stripping(struct ice_vf *vf)
+ 	}
+ 
+ 	vsi = ice_get_vf_vsi(vf);
++	if (!vsi) {
++		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
++		goto error_param;
++	}
++
+ 	if (vsi->inner_vlan_ops.ena_stripping(vsi, ETH_P_8021Q))
+ 		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
+index 8e38ee2faf586..b74ccbd1591ae 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
+@@ -1344,7 +1344,12 @@ static void ice_vf_fdir_dump_info(struct ice_vf *vf)
+ 	pf = vf->pf;
+ 	hw = &pf->hw;
+ 	dev = ice_pf_to_dev(pf);
+-	vf_vsi = pf->vsi[vf->lan_vsi_idx];
++	vf_vsi = ice_get_vf_vsi(vf);
++	if (!vf_vsi) {
++		dev_dbg(dev, "VF %d: invalid VSI pointer\n", vf->vf_id);
++		return;
++	}
++
+ 	vsi_num = ice_get_hw_vsi_num(hw, vf_vsi->idx);
+ 
+ 	fd_size = rd32(hw, VSIQF_FD_SIZE(vsi_num));
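/*
 * [Editor's note] The ice hunks above all harden one assumption:
 * ice_get_vf_vsi() can return NULL while a VF's VSI is being torn down or
 * rebuilt, so every caller now validates the result before dereferencing.
 * WARN_ON() is used where a NULL would indicate a driver bug worth a
 * backtrace; a plain check is used where it is an expected condition (e.g.
 * the per-VF loop in ice_get_vf_from_pfq() simply continues). Shape of the
 * guard (sketch, not a literal excerpt):
 *
 *	struct ice_vsi *vsi = ice_get_vf_vsi(vf);
 *
 *	if (WARN_ON(!vsi))
 *		return -EINVAL;	// or return/continue, per call site
 */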
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+index 057dde6f44174..9401127fb0ecf 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+@@ -178,13 +178,13 @@ static int mlx5_devlink_reload_up(struct devlink *devlink, enum devlink_reload_a
+ 	*actions_performed = BIT(action);
+ 	switch (action) {
+ 	case DEVLINK_RELOAD_ACTION_DRIVER_REINIT:
+-		return mlx5_load_one(dev);
++		return mlx5_load_one(dev, false);
+ 	case DEVLINK_RELOAD_ACTION_FW_ACTIVATE:
+ 		if (limit == DEVLINK_RELOAD_LIMIT_NO_RESET)
+ 			break;
+ 		/* On fw_activate action, also driver is reloaded and reinit performed */
+ 		*actions_performed |= BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT);
+-		return mlx5_load_one(dev);
++		return mlx5_load_one(dev, false);
+ 	default:
+ 		/* Unsupported action should not get to this function */
+ 		WARN_ON(1);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index 8653ac0fd865c..ee34e861d3af6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -1221,6 +1221,7 @@ mlx5e_tx_mpwqe_supported(struct mlx5_core_dev *mdev)
+ 		MLX5_CAP_ETH(mdev, enhanced_multi_pkt_send_wqe);
+ }
+ 
++int mlx5e_get_pf_num_tirs(struct mlx5_core_dev *mdev);
+ int mlx5e_priv_init(struct mlx5e_priv *priv,
+ 		    const struct mlx5e_profile *profile,
+ 		    struct net_device *netdev,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_smfs.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_smfs.c
+index bec9ed0103a93..2b80fe73549d2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_smfs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc/ct_fs_smfs.c
+@@ -101,7 +101,7 @@ mlx5_ct_fs_smfs_matcher_create(struct mlx5_ct_fs *fs, struct mlx5dr_table *tbl,
+ 	spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS_2 | MLX5_MATCH_OUTER_HEADERS;
+ 
+ 	dr_matcher = mlx5_smfs_matcher_create(tbl, priority, spec);
+-	kfree(spec);
++	kvfree(spec);
+ 	if (!dr_matcher)
+ 		return ERR_PTR(-EINVAL);
+ 
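/*
 * [Editor's note] The ct_fs_smfs.c fix pairs the release with the
 * allocator: spec in this function is (in the usual mlx5 pattern, the
 * allocation site is not shown in this hunk) obtained with kvzalloc(),
 * which may hand back vmalloc-backed memory for large sizes, so it must be
 * freed with kvfree(); kfree() on a vmalloc address is invalid. Rule of
 * thumb:
 *
 *	p = kvzalloc(size, GFP_KERNEL);
 *	...
 *	kvfree(p);	// handles both kmalloc- and vmalloc-backed memory
 */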
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index fa229998606c2..72867a8ff48b6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -5251,6 +5251,15 @@ mlx5e_calc_max_nch(struct mlx5_core_dev *mdev, struct net_device *netdev,
+ 	return max_nch;
+ }
+ 
++int mlx5e_get_pf_num_tirs(struct mlx5_core_dev *mdev)
++{
++	/* Indirect TIRS: 2 sets of TTCs (inner + outer steering)
++	 * and 1 set of direct TIRS
++	 */
++	return 2 * MLX5E_NUM_INDIR_TIRS
++		+ mlx5e_profile_max_num_channels(mdev, &mlx5e_nic_profile);
++}
++
+ /* mlx5e generic netdev management API (move to en_common.c) */
+ int mlx5e_priv_init(struct mlx5e_priv *priv,
+ 		    const struct mlx5e_profile *profile,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 6b7e7ea6ded23..a464461f14189 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -604,10 +604,16 @@ bool mlx5e_eswitch_vf_rep(const struct net_device *netdev)
+ 	return netdev->netdev_ops == &mlx5e_netdev_ops_rep;
+ }
+ 
++/* One indirect TIR set for outer. Inner not supported in reps. */
++#define REP_NUM_INDIR_TIRS MLX5E_NUM_INDIR_TIRS
++
+ static int mlx5e_rep_max_nch_limit(struct mlx5_core_dev *mdev)
+ {
+-	return (1 << MLX5_CAP_GEN(mdev, log_max_tir)) /
+-		mlx5_eswitch_get_total_vports(mdev);
++	int max_tir_num = 1 << MLX5_CAP_GEN(mdev, log_max_tir);
++	int num_vports = mlx5_eswitch_get_total_vports(mdev);
++
++	return (max_tir_num - mlx5e_get_pf_num_tirs(mdev)
++		- (num_vports * REP_NUM_INDIR_TIRS)) / num_vports;
+ }
+ 
+ static void mlx5e_build_rep_params(struct net_device *netdev)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 3ad67e6b5586d..89ba72e8d1091 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -2071,16 +2071,16 @@ void mlx5_del_flow_rules(struct mlx5_flow_handle *handle)
+ 	down_write_ref_node(&fte->node, false);
+ 	for (i = handle->num_rules - 1; i >= 0; i--)
+ 		tree_remove_node(&handle->rule[i]->node, true);
+-	if (fte->dests_size) {
+-		if (fte->modify_mask)
+-			modify_fte(fte);
+-		up_write_ref_node(&fte->node, false);
+-	} else if (list_empty(&fte->node.children)) {
++	if (list_empty(&fte->node.children)) {
+ 		del_hw_fte(&fte->node);
+ 		/* Avoid double call to del_hw_fte */
+ 		fte->node.del_hw_func = NULL;
+ 		up_write_ref_node(&fte->node, false);
+ 		tree_put_node(&fte->node, false);
++	} else if (fte->dests_size) {
++		if (fte->modify_mask)
++			modify_fte(fte);
++		up_write_ref_node(&fte->node, false);
+ 	} else {
+ 		up_write_ref_node(&fte->node, false);
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+index 81eb67fb95b04..052af4901c0b9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
+@@ -149,7 +149,7 @@ static void mlx5_fw_reset_complete_reload(struct mlx5_core_dev *dev)
+ 	if (test_bit(MLX5_FW_RESET_FLAGS_PENDING_COMP, &fw_reset->reset_flags)) {
+ 		complete(&fw_reset->done);
+ 	} else {
+-		mlx5_load_one(dev);
++		mlx5_load_one(dev, false);
+ 		devlink_remote_reload_actions_performed(priv_to_devlink(dev), 0,
+ 							BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) |
+ 							BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE));
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.c
+index c1df0d3595d87..d758848d34d0c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.c
+@@ -10,6 +10,7 @@ struct mlx5_timeouts {
+ 
+ static const u32 tout_def_sw_val[MAX_TIMEOUT_TYPES] = {
+ 	[MLX5_TO_FW_PRE_INIT_TIMEOUT_MS] = 120000,
++	[MLX5_TO_FW_PRE_INIT_ON_RECOVERY_TIMEOUT_MS] = 7200000,
+ 	[MLX5_TO_FW_PRE_INIT_WARN_MESSAGE_INTERVAL_MS] = 20000,
+ 	[MLX5_TO_FW_PRE_INIT_WAIT_MS] = 2,
+ 	[MLX5_TO_FW_INIT_MS] = 2000,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.h
+index 1c42ead782fa7..257c03eeab365 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/tout.h
+@@ -7,6 +7,7 @@
+ enum mlx5_timeouts_types {
+ 	/* pre init timeouts (not read from FW) */
+ 	MLX5_TO_FW_PRE_INIT_TIMEOUT_MS,
++	MLX5_TO_FW_PRE_INIT_ON_RECOVERY_TIMEOUT_MS,
+ 	MLX5_TO_FW_PRE_INIT_WARN_MESSAGE_INTERVAL_MS,
+ 	MLX5_TO_FW_PRE_INIT_WAIT_MS,
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index ef196cb764e2a..8b52636999943 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1014,7 +1014,7 @@ static void mlx5_cleanup_once(struct mlx5_core_dev *dev)
+ 	mlx5_devcom_unregister_device(dev->priv.devcom);
+ }
+ 
+-static int mlx5_function_setup(struct mlx5_core_dev *dev, bool boot)
++static int mlx5_function_setup(struct mlx5_core_dev *dev, u64 timeout)
+ {
+ 	int err;
+ 
+@@ -1029,11 +1029,11 @@ static int mlx5_function_setup(struct mlx5_core_dev *dev, bool boot)
+ 
+ 	/* wait for firmware to accept initialization segments configurations
+ 	 */
+-	err = wait_fw_init(dev, mlx5_tout_ms(dev, FW_PRE_INIT_TIMEOUT),
++	err = wait_fw_init(dev, timeout,
+ 			   mlx5_tout_ms(dev, FW_PRE_INIT_WARN_MESSAGE_INTERVAL));
+ 	if (err) {
+ 		mlx5_core_err(dev, "Firmware over %llu MS in pre-initializing state, aborting\n",
+-			      mlx5_tout_ms(dev, FW_PRE_INIT_TIMEOUT));
++			      timeout);
+ 		return err;
+ 	}
+ 
+@@ -1296,7 +1296,7 @@ int mlx5_init_one(struct mlx5_core_dev *dev)
+ 	mutex_lock(&dev->intf_state_mutex);
+ 	dev->state = MLX5_DEVICE_STATE_UP;
+ 
+-	err = mlx5_function_setup(dev, true);
++	err = mlx5_function_setup(dev, mlx5_tout_ms(dev, FW_PRE_INIT_TIMEOUT));
+ 	if (err)
+ 		goto err_function;
+ 
+@@ -1360,9 +1360,10 @@ out:
+ 	mutex_unlock(&dev->intf_state_mutex);
+ }
+ 
+-int mlx5_load_one(struct mlx5_core_dev *dev)
++int mlx5_load_one(struct mlx5_core_dev *dev, bool recovery)
+ {
+ 	int err = 0;
++	u64 timeout;
+ 
+ 	mutex_lock(&dev->intf_state_mutex);
+ 	if (test_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state)) {
+@@ -1372,7 +1373,11 @@ int mlx5_load_one(struct mlx5_core_dev *dev)
+ 	/* remove any previous indication of internal error */
+ 	dev->state = MLX5_DEVICE_STATE_UP;
+ 
+-	err = mlx5_function_setup(dev, false);
++	if (recovery)
++		timeout = mlx5_tout_ms(dev, FW_PRE_INIT_ON_RECOVERY_TIMEOUT);
++	else
++		timeout = mlx5_tout_ms(dev, FW_PRE_INIT_TIMEOUT);
++	err = mlx5_function_setup(dev, timeout);
+ 	if (err)
+ 		goto err_function;
+ 
+@@ -1746,7 +1751,7 @@ static void mlx5_pci_resume(struct pci_dev *pdev)
+ 
+ 	mlx5_pci_trace(dev, "Enter, loading driver..\n");
+ 
+-	err = mlx5_load_one(dev);
++	err = mlx5_load_one(dev, false);
+ 
+ 	mlx5_pci_trace(dev, "Done, err = %d, device %s\n", err,
+ 		       !err ? "recovered" : "Failed");
+@@ -1833,7 +1838,7 @@ static int mlx5_resume(struct pci_dev *pdev)
+ {
+ 	struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
+ 
+-	return mlx5_load_one(dev);
++	return mlx5_load_one(dev, false);
+ }
+ 
+ static const struct pci_device_id mlx5_core_pci_table[] = {
+@@ -1878,7 +1883,7 @@ int mlx5_recover_device(struct mlx5_core_dev *dev)
+ 			return -EIO;
+ 	}
+ 
+-	return mlx5_load_one(dev);
++	return mlx5_load_one(dev, true);
+ }
+ 
+ static struct pci_driver mlx5_core_driver = {
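/*
 * [Editor's note] mlx5_load_one() now takes a 'recovery' flag and chooses
 * the firmware pre-init timeout from it: the normal FW_PRE_INIT_TIMEOUT
 * (120000 ms) for reload, resume and devlink paths, and the new
 * FW_PRE_INIT_ON_RECOVERY_TIMEOUT (7200000 ms, i.e. two hours, added in
 * lib/tout.c above) only from mlx5_recover_device(), where firmware can
 * legitimately stay in the pre-initializing state for a long time. The
 * selection, as in the hunk:
 *
 *	timeout = recovery ? mlx5_tout_ms(dev, FW_PRE_INIT_ON_RECOVERY_TIMEOUT)
 *			   : mlx5_tout_ms(dev, FW_PRE_INIT_TIMEOUT);
 */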
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+index a9b2d6ead542b..9026be1d62232 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+@@ -290,7 +290,7 @@ void mlx5_mdev_uninit(struct mlx5_core_dev *dev);
+ int mlx5_init_one(struct mlx5_core_dev *dev);
+ void mlx5_uninit_one(struct mlx5_core_dev *dev);
+ void mlx5_unload_one(struct mlx5_core_dev *dev);
+-int mlx5_load_one(struct mlx5_core_dev *dev);
++int mlx5_load_one(struct mlx5_core_dev *dev, bool recovery);
+ 
+ int mlx5_vport_get_other_func_cap(struct mlx5_core_dev *dev, u16 function_id, void *out);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
+index 5f92b16913605..aff6d4f35cd2f 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
+@@ -168,8 +168,6 @@ static int mlxsw_sp_dcbnl_ieee_setets(struct net_device *dev,
+ static int mlxsw_sp_dcbnl_app_validate(struct net_device *dev,
+ 				       struct dcb_app *app)
+ {
+-	int prio;
+-
+ 	if (app->priority >= IEEE_8021QAZ_MAX_TCS) {
+ 		netdev_err(dev, "APP entry with priority value %u is invalid\n",
+ 			   app->priority);
+@@ -183,17 +181,6 @@ static int mlxsw_sp_dcbnl_app_validate(struct net_device *dev,
+ 				   app->protocol);
+ 			return -EINVAL;
+ 		}
+-
+-		/* Warn about any DSCP APP entries with the same PID. */
+-		prio = fls(dcb_ieee_getapp_mask(dev, app));
+-		if (prio--) {
+-			if (prio < app->priority)
+-				netdev_warn(dev, "Choosing priority %d for DSCP %d in favor of previously-active value of %d\n",
+-					    app->priority, app->protocol, prio);
+-			else if (prio > app->priority)
+-				netdev_warn(dev, "Ignoring new priority %d for DSCP %d in favor of current value of %d\n",
+-					    app->priority, app->protocol, prio);
+-		}
+ 		break;
+ 
+ 	case IEEE_8021QAZ_APP_SEL_ETHERTYPE:
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
+index 47b061b99160e..ed4d0d3448f31 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c
+@@ -864,7 +864,7 @@ static const struct mlxsw_sp_trap_item mlxsw_sp_trap_items_arr[] = {
+ 		.trap = MLXSW_SP_TRAP_CONTROL(LLDP, LLDP, TRAP),
+ 		.listeners_arr = {
+ 			MLXSW_RXL(mlxsw_sp_rx_ptp_listener, LLDP, TRAP_TO_CPU,
+-				  false, SP_LLDP, DISCARD),
++				  true, SP_LLDP, DISCARD),
+ 		},
+ 	},
+ 	{
+diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
+index f8edb3f1b73ad..186cb28c03bdb 100644
+--- a/drivers/net/ethernet/sfc/ef10.c
++++ b/drivers/net/ethernet/sfc/ef10.c
+@@ -2256,7 +2256,7 @@ int efx_ef10_tx_tso_desc(struct efx_tx_queue *tx_queue, struct sk_buff *skb,
+ 	 * guaranteed to satisfy the second as we only attempt TSO if
+ 	 * inner_network_header <= 208.
+ 	 */
+-	ip_tot_len = -EFX_TSO2_MAX_HDRLEN;
++	ip_tot_len = 0x10000 - EFX_TSO2_MAX_HDRLEN;
+ 	EFX_WARN_ON_ONCE_PARANOID(mss + EFX_TSO2_MAX_HDRLEN +
+ 				  (tcp->doff << 2u) > ip_tot_len);
+ 
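/*
 * [Editor's note] The ef10.c change replaces `ip_tot_len =
 * -EFX_TSO2_MAX_HDRLEN` with the equivalent but explicit `0x10000 -
 * EFX_TSO2_MAX_HDRLEN`: the intended value is the 16-bit total-length
 * ceiling minus the header allowance, and spelling it out no longer relies
 * on unsigned wraparound (which also breaks if the variable is ever wider
 * than 16 bits). Stand-alone demonstration in plain C -- 208 below is only
 * an assumption taken from the nearby "inner_network_header <= 208"
 * comment, not a value shown in this hunk:
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int hdrlen = 208;		/* assumed for illustration */
	uint16_t wrapped = -hdrlen;		/* relies on 16-bit wrap */
	unsigned int explicit = 0x10000 - hdrlen;

	printf("%d %u\n", wrapped, explicit);	/* prints: 65328 65328 */
	return 0;
}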
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
+index 9f1759593b942..2fc51dc5eb0bb 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
+@@ -1084,8 +1084,9 @@ static int stmmac_test_rxp(struct stmmac_priv *priv)
+ 	unsigned char addr[ETH_ALEN] = {0xde, 0xad, 0xbe, 0xef, 0x00, 0x00};
+ 	struct tc_cls_u32_offload cls_u32 = { };
+ 	struct stmmac_packet_attrs attr = { };
+-	struct tc_action **actions, *act;
++	struct tc_action **actions;
+ 	struct tc_u32_sel *sel;
++	struct tcf_gact *gact;
+ 	struct tcf_exts *exts;
+ 	int ret, i, nk = 1;
+ 
+@@ -1110,8 +1111,8 @@ static int stmmac_test_rxp(struct stmmac_priv *priv)
+ 		goto cleanup_exts;
+ 	}
+ 
+-	act = kcalloc(nk, sizeof(*act), GFP_KERNEL);
+-	if (!act) {
++	gact = kcalloc(nk, sizeof(*gact), GFP_KERNEL);
++	if (!gact) {
+ 		ret = -ENOMEM;
+ 		goto cleanup_actions;
+ 	}
+@@ -1126,9 +1127,7 @@ static int stmmac_test_rxp(struct stmmac_priv *priv)
+ 	exts->nr_actions = nk;
+ 	exts->actions = actions;
+ 	for (i = 0; i < nk; i++) {
+-		struct tcf_gact *gact = to_gact(&act[i]);
+-
+-		actions[i] = &act[i];
++		actions[i] = (struct tc_action *)&gact[i];
+ 		gact->tcf_action = TC_ACT_SHOT;
+ 	}
+ 
+@@ -1152,7 +1151,7 @@ static int stmmac_test_rxp(struct stmmac_priv *priv)
+ 	stmmac_tc_setup_cls_u32(priv, priv, &cls_u32);
+ 
+ cleanup_act:
+-	kfree(act);
++	kfree(gact);
+ cleanup_actions:
+ 	kfree(actions);
+ cleanup_exts:
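/*
 * [Editor's note] The stmmac selftest fix allocates an array of struct
 * tcf_gact instead of struct tc_action: the old code cast each element to
 * the larger embedding structure via to_gact(), so writes to gact fields
 * landed past the end of the undersized tc_action array. When code treats
 * entries as a container type, the container is what must be allocated:
 *
 *	gact = kcalloc(nk, sizeof(*gact), GFP_KERNEL);	// container objects
 *	actions[i] = (struct tc_action *)&gact[i];	// base-struct view
 */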
+diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
+index affcf92cd3aa5..fb30bc5d56cb7 100644
+--- a/drivers/net/ethernet/ti/Kconfig
++++ b/drivers/net/ethernet/ti/Kconfig
+@@ -94,6 +94,7 @@ config TI_K3_AM65_CPSW_NUSS
+ 	depends on ARCH_K3 && OF && TI_K3_UDMA_GLUE_LAYER
+ 	select NET_DEVLINK
+ 	select TI_DAVINCI_MDIO
++	select PHYLINK
+ 	imply PHY_TI_GMII_SEL
+ 	depends on TI_K3_AM65_CPTS || !TI_K3_AM65_CPTS
+ 	help
+diff --git a/drivers/net/ethernet/xscale/ptp_ixp46x.c b/drivers/net/ethernet/xscale/ptp_ixp46x.c
+index 1f382777aa5a8..9abbdb71e629f 100644
+--- a/drivers/net/ethernet/xscale/ptp_ixp46x.c
++++ b/drivers/net/ethernet/xscale/ptp_ixp46x.c
+@@ -271,7 +271,7 @@ static int ptp_ixp_probe(struct platform_device *pdev)
+ 	ixp_clock.master_irq = platform_get_irq(pdev, 0);
+ 	ixp_clock.slave_irq = platform_get_irq(pdev, 1);
+ 	if (IS_ERR(ixp_clock.regs) ||
+-	    !ixp_clock.master_irq || !ixp_clock.slave_irq)
++	    ixp_clock.master_irq < 0 || ixp_clock.slave_irq < 0)
+ 		return -ENXIO;
+ 
+ 	ixp_clock.caps = ptp_ixp_caps;
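/*
 * [Editor's note] platform_get_irq() returns a negative errno on failure,
 * not 0, so the old `!ixp_clock.master_irq` test could accept an error
 * value (e.g. -ENXIO or -EPROBE_DEFER) as a usable interrupt number. The
 * canonical probe-time pattern the fix moves toward:
 *
 *	irq = platform_get_irq(pdev, 0);
 *	if (irq < 0)
 *		return irq;	// propagate the errno
 */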
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index fde1c492ca02a..b1dece6b96986 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -2671,7 +2671,10 @@ static int netvsc_suspend(struct hv_device *dev)
+ 
+ 	/* Save the current config info */
+ 	ndev_ctx->saved_netvsc_dev_info = netvsc_devinfo_get(nvdev);
+-
++	if (!ndev_ctx->saved_netvsc_dev_info) {
++		ret = -ENOMEM;
++		goto out;
++	}
+ 	ret = netvsc_detach(net, nvdev);
+ out:
+ 	rtnl_unlock();
+diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
+index 53764f3c0c7e4..b9b4cb82790f8 100644
+--- a/drivers/net/ipa/ipa_endpoint.c
++++ b/drivers/net/ipa/ipa_endpoint.c
+@@ -587,19 +587,23 @@ static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
+ 	struct ipa *ipa = endpoint->ipa;
+ 	u32 val = 0;
+ 
+-	val |= HDR_ENDIANNESS_FMASK;		/* big endian */
+-
+-	/* A QMAP header contains a 6 bit pad field at offset 0.  The RMNet
+-	 * driver assumes this field is meaningful in packets it receives,
+-	 * and assumes the header's payload length includes that padding.
+-	 * The RMNet driver does *not* pad packets it sends, however, so
+-	 * the pad field (although 0) should be ignored.
+-	 */
+-	if (endpoint->data->qmap && !endpoint->toward_ipa) {
+-		val |= HDR_TOTAL_LEN_OR_PAD_VALID_FMASK;
+-		/* HDR_TOTAL_LEN_OR_PAD is 0 (pad, not total_len) */
+-		val |= HDR_PAYLOAD_LEN_INC_PADDING_FMASK;
+-		/* HDR_TOTAL_LEN_OR_PAD_OFFSET is 0 */
++	if (endpoint->data->qmap) {
++		/* We have a header, so we must specify its endianness */
++		val |= HDR_ENDIANNESS_FMASK;	/* big endian */
++
++		/* A QMAP header contains a 6 bit pad field at offset 0.
++		 * The RMNet driver assumes this field is meaningful in
++		 * packets it receives, and assumes the header's payload
++		 * length includes that padding.  The RMNet driver does
++		 * *not* pad packets it sends, however, so the pad field
++		 * (although 0) should be ignored.
++		 */
++		if (!endpoint->toward_ipa) {
++			val |= HDR_TOTAL_LEN_OR_PAD_VALID_FMASK;
++			/* HDR_TOTAL_LEN_OR_PAD is 0 (pad, not total_len) */
++			val |= HDR_PAYLOAD_LEN_INC_PADDING_FMASK;
++			/* HDR_TOTAL_LEN_OR_PAD_OFFSET is 0 */
++		}
+ 	}
+ 
+ 	/* HDR_PAYLOAD_LEN_INC_PADDING is 0 */
+@@ -759,8 +763,6 @@ static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
+ 
+ 			close_eof = rx_data->aggr_close_eof;
+ 			val |= aggr_sw_eof_active_encoded(version, close_eof);
+-
+-			/* AGGR_HARD_BYTE_LIMIT_ENABLE is 0 */
+ 		} else {
+ 			val |= u32_encode_bits(IPA_ENABLE_DEAGGR,
+ 					       AGGR_EN_FMASK);
+@@ -1060,7 +1062,7 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint,
+ 
+ 	ret = gsi_trans_page_add(trans, page, len, offset);
+ 	if (ret)
+-		__free_pages(page, get_order(buffer_size));
++		put_page(page);
+ 	else
+ 		trans->data = page;	/* transaction owns page now */
+ 
+@@ -1383,11 +1385,8 @@ void ipa_endpoint_trans_release(struct ipa_endpoint *endpoint,
+ 	} else {
+ 		struct page *page = trans->data;
+ 
+-		if (page) {
+-			u32 buffer_size = endpoint->data->rx.buffer_size;
+-
+-			__free_pages(page, get_order(buffer_size));
+-		}
++		if (page)
++			put_page(page);
+ 	}
+ }
+ 
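/*
 * [Editor's note] The ipa_endpoint.c hunks release RX buffer pages with
 * put_page() instead of __free_pages(page, get_order(buffer_size)).
 * put_page() drops one reference and frees the (compound) page only when
 * the count reaches zero, whereas __free_pages() with a caller-supplied
 * order must exactly match the allocation and ignores references other
 * code -- e.g. the network stack, once the page has been attached to an
 * skb -- may still hold. Pairing sketch (dev_alloc_pages() as the
 * allocator is an assumption; the allocation site is not shown here):
 *
 *	page = dev_alloc_pages(order);	// returns with one reference held
 *	...
 *	put_page(page);			// drop our reference
 */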
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 832f09ac075e7..817577e713d70 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -99,6 +99,7 @@ struct pcpu_secy_stats {
+  * struct macsec_dev - private data
+  * @secy: SecY config
+  * @real_dev: pointer to underlying netdevice
++ * @dev_tracker: refcount tracker for @real_dev reference
+  * @stats: MACsec device stats
+  * @secys: linked list of SecY's on the underlying device
+  * @gro_cells: pointer to the Generic Receive Offload cell
+@@ -107,6 +108,7 @@ struct pcpu_secy_stats {
+ struct macsec_dev {
+ 	struct macsec_secy secy;
+ 	struct net_device *real_dev;
++	netdevice_tracker dev_tracker;
+ 	struct pcpu_secy_stats __percpu *stats;
+ 	struct list_head secys;
+ 	struct gro_cells gro_cells;
+@@ -3459,6 +3461,9 @@ static int macsec_dev_init(struct net_device *dev)
+ 	if (is_zero_ether_addr(dev->broadcast))
+ 		memcpy(dev->broadcast, real_dev->broadcast, dev->addr_len);
+ 
++	/* Get macsec's reference to real_dev */
++	dev_hold_track(real_dev, &macsec->dev_tracker, GFP_KERNEL);
++
+ 	return 0;
+ }
+ 
+@@ -3704,6 +3709,8 @@ static void macsec_free_netdev(struct net_device *dev)
+ 	free_percpu(macsec->stats);
+ 	free_percpu(macsec->secy.tx_sc.stats);
+ 
++	/* Get rid of the macsec's reference to real_dev */
++	dev_put_track(macsec->real_dev, &macsec->dev_tracker);
+ }
+ 
+ static void macsec_setup(struct net_device *dev)
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index cd9aa353b653f..48c7d715a9e38 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -497,7 +497,7 @@ static int kszphy_config_reset(struct phy_device *phydev)
+ 		}
+ 	}
+ 
+-	if (priv->led_mode >= 0)
++	if (priv->type && priv->led_mode >= 0)
+ 		kszphy_setup_led(phydev, priv->type->led_mode_reg, priv->led_mode);
+ 
+ 	return 0;
+@@ -513,10 +513,10 @@ static int kszphy_config_init(struct phy_device *phydev)
+ 
+ 	type = priv->type;
+ 
+-	if (type->has_broadcast_disable)
++	if (type && type->has_broadcast_disable)
+ 		kszphy_broadcast_disable(phydev);
+ 
+-	if (type->has_nand_tree_disable)
++	if (type && type->has_nand_tree_disable)
+ 		kszphy_nand_tree_disable(phydev);
+ 
+ 	return kszphy_config_reset(phydev);
+@@ -1514,7 +1514,7 @@ static int kszphy_probe(struct phy_device *phydev)
+ 
+ 	priv->type = type;
+ 
+-	if (type->led_mode_reg) {
++	if (type && type->led_mode_reg) {
+ 		ret = of_property_read_u32(np, "micrel,led-mode",
+ 				&priv->led_mode);
+ 		if (ret)
+@@ -1535,7 +1535,8 @@ static int kszphy_probe(struct phy_device *phydev)
+ 		unsigned long rate = clk_get_rate(clk);
+ 		bool rmii_ref_clk_sel_25_mhz;
+ 
+-		priv->rmii_ref_clk_sel = type->has_rmii_ref_clk_sel;
++		if (type)
++			priv->rmii_ref_clk_sel = type->has_rmii_ref_clk_sel;
+ 		rmii_ref_clk_sel_25_mhz = of_property_read_bool(np,
+ 				"micrel,rmii-reference-clock-select-25-mhz");
+ 
+diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
+index 38e47a93fb833..5b5eb630c4b79 100644
+--- a/drivers/net/usb/asix_devices.c
++++ b/drivers/net/usb/asix_devices.c
+@@ -795,11 +795,7 @@ static int ax88772_stop(struct usbnet *dev)
+ {
+ 	struct asix_common_private *priv = dev->driver_priv;
+ 
+-	/* On unplugged USB, we will get MDIO communication errors and the
+-	 * PHY will be set in to PHY_HALTED state.
+-	 */
+-	if (priv->phydev->state != PHY_HALTED)
+-		phy_stop(priv->phydev);
++	phy_stop(priv->phydev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
+index 4ef61f6b85df5..edf0492ad489a 100644
+--- a/drivers/net/usb/smsc95xx.c
++++ b/drivers/net/usb/smsc95xx.c
+@@ -1243,8 +1243,7 @@ static int smsc95xx_start_phy(struct usbnet *dev)
+ 
+ static int smsc95xx_stop(struct usbnet *dev)
+ {
+-	if (dev->net->phydev)
+-		phy_stop(dev->net->phydev);
++	phy_stop(dev->net->phydev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 9a6450f796dcb..36b24ec116504 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -1616,9 +1616,6 @@ void usbnet_disconnect (struct usb_interface *intf)
+ 		   xdev->bus->bus_name, xdev->devpath,
+ 		   dev->driver_info->description);
+ 
+-	if (dev->driver_info->unbind)
+-		dev->driver_info->unbind(dev, intf);
+-
+ 	net = dev->net;
+ 	unregister_netdev (net);
+ 
+@@ -1626,6 +1623,9 @@ void usbnet_disconnect (struct usb_interface *intf)
+ 
+ 	usb_scuttle_anchored_urbs(&dev->deferred);
+ 
++	if (dev->driver_info->unbind)
++		dev->driver_info->unbind(dev, intf);
++
+ 	usb_kill_urb(dev->interrupt);
+ 	usb_free_urb(dev->interrupt);
+ 	kfree(dev->padding_pkt);
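/*
 * [Editor's note] The usbnet_disconnect() hunks move the minidriver
 * ->unbind() call from before unregister_netdev() to after it. The close
 * path that unregister_netdev() runs can still use resources (PHY, private
 * data) that unbind routines free, so teardown now mirrors setup in
 * reverse: users of a resource are unregistered before the resource itself
 * is destroyed. This ordering is also what lets the asix and smsc95xx
 * hunks above call phy_stop() unconditionally from their stop paths.
 */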
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index b11aaee8b8c03..a11b31191d5aa 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -5339,13 +5339,29 @@ err:
+ static void ath10k_stop(struct ieee80211_hw *hw)
+ {
+ 	struct ath10k *ar = hw->priv;
++	u32 opt;
+ 
+ 	ath10k_drain_tx(ar);
+ 
+ 	mutex_lock(&ar->conf_mutex);
+ 	if (ar->state != ATH10K_STATE_OFF) {
+-		if (!ar->hw_rfkill_on)
+-			ath10k_halt(ar);
++		if (!ar->hw_rfkill_on) {
++			/* If the current driver state is RESTARTING but not yet
++			 * fully RESTARTED because of incoming suspend event,
++			 * then ath10k_halt() is already called via
++			 * ath10k_core_restart() and should not be called here.
++			 */
++			if (ar->state != ATH10K_STATE_RESTARTING) {
++				ath10k_halt(ar);
++			} else {
++				/* Suspending here, because when in RESTARTING
++				 * state, ath10k_core_stop() skips
++				 * ath10k_wait_for_suspend().
++				 */
++				opt = WMI_PDEV_SUSPEND_AND_DISABLE_INTR;
++				ath10k_wait_for_suspend(ar, opt);
++			}
++		}
+ 		ar->state = ATH10K_STATE_OFF;
+ 	}
+ 	mutex_unlock(&ar->conf_mutex);
+diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
+index 58ff761393db1..54d738bdee0e1 100644
+--- a/drivers/net/wireless/ath/ath11k/mac.c
++++ b/drivers/net/wireless/ath/ath11k/mac.c
+@@ -5520,8 +5520,8 @@ static void ath11k_mgmt_over_wmi_tx_work(struct work_struct *work)
+ 		}
+ 
+ 		arvif = ath11k_vif_to_arvif(skb_cb->vif);
+-		if (ar->allocated_vdev_map & (1LL << arvif->vdev_id) &&
+-		    arvif->is_started) {
++		mutex_lock(&ar->conf_mutex);
++		if (ar->allocated_vdev_map & (1LL << arvif->vdev_id)) {
+ 			ret = ath11k_mac_mgmt_tx_wmi(ar, arvif, skb);
+ 			if (ret) {
+ 				ath11k_warn(ar->ab, "failed to tx mgmt frame, vdev_id %d :%d\n",
+@@ -5539,6 +5539,7 @@ static void ath11k_mgmt_over_wmi_tx_work(struct work_struct *work)
+ 				    arvif->is_started);
+ 			ath11k_mgmt_over_wmi_tx_drop(ar, skb);
+ 		}
++		mutex_unlock(&ar->conf_mutex);
+ 	}
+ }
+ 
+@@ -7114,6 +7115,7 @@ ath11k_mac_op_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ 	struct ath11k *ar = hw->priv;
+ 	struct ath11k_base *ab = ar->ab;
+ 	struct ath11k_vif *arvif = (void *)vif->drv_priv;
++	struct ath11k_peer *peer;
+ 	int ret;
+ 
+ 	mutex_lock(&ar->conf_mutex);
+@@ -7125,9 +7127,13 @@ ath11k_mac_op_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ 	WARN_ON(!arvif->is_started);
+ 
+ 	if (ab->hw_params.vdev_start_delay &&
+-	    arvif->vdev_type == WMI_VDEV_TYPE_MONITOR &&
+-	    ath11k_peer_find_by_addr(ab, ar->mac_addr))
+-		ath11k_peer_delete(ar, arvif->vdev_id, ar->mac_addr);
++	    arvif->vdev_type == WMI_VDEV_TYPE_MONITOR) {
++		spin_lock_bh(&ab->base_lock);
++		peer = ath11k_peer_find_by_addr(ab, ar->mac_addr);
++		spin_unlock_bh(&ab->base_lock);
++		if (peer)
++			ath11k_peer_delete(ar, arvif->vdev_id, ar->mac_addr);
++	}
+ 
+ 	if (arvif->vdev_type == WMI_VDEV_TYPE_MONITOR) {
+ 		ret = ath11k_mac_monitor_stop(ar);
+diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
+index 903758751c99a..8a3ff12057e89 100644
+--- a/drivers/net/wireless/ath/ath11k/pci.c
++++ b/drivers/net/wireless/ath/ath11k/pci.c
+@@ -191,6 +191,7 @@ void ath11k_pci_write32(struct ath11k_base *ab, u32 offset, u32 value)
+ {
+ 	struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
+ 	u32 window_start;
++	int ret = 0;
+ 
+ 	/* for offset beyond BAR + 4K - 32, may
+ 	 * need to wakeup MHI to access.
+@@ -198,7 +199,7 @@ void ath11k_pci_write32(struct ath11k_base *ab, u32 offset, u32 value)
+ 	if (ab->hw_params.wakeup_mhi &&
+ 	    test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+ 	    offset >= ACCESS_ALWAYS_OFF)
+-		mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
++		ret = mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
+ 
+ 	if (offset < WINDOW_START) {
+ 		iowrite32(value, ab->mem  + offset);
+@@ -222,7 +223,8 @@ void ath11k_pci_write32(struct ath11k_base *ab, u32 offset, u32 value)
+ 
+ 	if (ab->hw_params.wakeup_mhi &&
+ 	    test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+-	    offset >= ACCESS_ALWAYS_OFF)
++	    offset >= ACCESS_ALWAYS_OFF &&
++	    !ret)
+ 		mhi_device_put(ab_pci->mhi_ctrl->mhi_dev);
+ }
+ 
+@@ -230,6 +232,7 @@ u32 ath11k_pci_read32(struct ath11k_base *ab, u32 offset)
+ {
+ 	struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
+ 	u32 val, window_start;
++	int ret = 0;
+ 
+ 	/* for offset beyond BAR + 4K - 32, may
+ 	 * need to wakeup MHI to access.
+@@ -237,7 +240,7 @@ u32 ath11k_pci_read32(struct ath11k_base *ab, u32 offset)
+ 	if (ab->hw_params.wakeup_mhi &&
+ 	    test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+ 	    offset >= ACCESS_ALWAYS_OFF)
+-		mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
++		ret = mhi_device_get_sync(ab_pci->mhi_ctrl->mhi_dev);
+ 
+ 	if (offset < WINDOW_START) {
+ 		val = ioread32(ab->mem + offset);
+@@ -261,7 +264,8 @@ u32 ath11k_pci_read32(struct ath11k_base *ab, u32 offset)
+ 
+ 	if (ab->hw_params.wakeup_mhi &&
+ 	    test_bit(ATH11K_PCI_FLAG_INIT_DONE, &ab_pci->flags) &&
+-	    offset >= ACCESS_ALWAYS_OFF)
++	    offset >= ACCESS_ALWAYS_OFF &&
++	    !ret)
+ 		mhi_device_put(ab_pci->mhi_ctrl->mhi_dev);
+ 
+ 	return val;
+diff --git a/drivers/net/wireless/ath/ath11k/spectral.c b/drivers/net/wireless/ath/ath11k/spectral.c
+index 2b18871d5f7cb..516a7b4cd1805 100644
+--- a/drivers/net/wireless/ath/ath11k/spectral.c
++++ b/drivers/net/wireless/ath/ath11k/spectral.c
+@@ -212,7 +212,10 @@ static int ath11k_spectral_scan_config(struct ath11k *ar,
+ 		return -ENODEV;
+ 
+ 	arvif->spectral_enabled = (mode != ATH11K_SPECTRAL_DISABLED);
++
++	spin_lock_bh(&ar->spectral.lock);
+ 	ar->spectral.mode = mode;
++	spin_unlock_bh(&ar->spectral.lock);
+ 
+ 	ret = ath11k_wmi_vdev_spectral_enable(ar, arvif->vdev_id,
+ 					      ATH11K_WMI_SPECTRAL_TRIGGER_CMD_CLEAR,
+@@ -843,9 +846,6 @@ static inline void ath11k_spectral_ring_free(struct ath11k *ar)
+ {
+ 	struct ath11k_spectral *sp = &ar->spectral;
+ 
+-	if (!sp->enabled)
+-		return;
+-
+ 	ath11k_dbring_srng_cleanup(ar, &sp->rx_ring);
+ 	ath11k_dbring_buf_cleanup(ar, &sp->rx_ring);
+ }
+@@ -897,15 +897,16 @@ void ath11k_spectral_deinit(struct ath11k_base *ab)
+ 		if (!sp->enabled)
+ 			continue;
+ 
+-		ath11k_spectral_debug_unregister(ar);
+-		ath11k_spectral_ring_free(ar);
++		mutex_lock(&ar->conf_mutex);
++		ath11k_spectral_scan_config(ar, ATH11K_SPECTRAL_DISABLED);
++		mutex_unlock(&ar->conf_mutex);
+ 
+ 		spin_lock_bh(&sp->lock);
+-
+-		sp->mode = ATH11K_SPECTRAL_DISABLED;
+ 		sp->enabled = false;
+-
+ 		spin_unlock_bh(&sp->lock);
++
++		ath11k_spectral_debug_unregister(ar);
++		ath11k_spectral_ring_free(ar);
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
+index 2751fe8814df7..0900f75eef202 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.c
++++ b/drivers/net/wireless/ath/ath11k/wmi.c
+@@ -5789,9 +5789,9 @@ static int ath11k_wmi_tlv_rssi_chain_parse(struct ath11k_base *ab,
+ 					   arvif->bssid,
+ 					   NULL);
+ 	if (!sta) {
+-		ath11k_warn(ab, "not found station for bssid %pM\n",
+-			    arvif->bssid);
+-		ret = -EPROTO;
++		ath11k_dbg(ab, ATH11K_DBG_WMI,
++			   "not found station of bssid %pM for rssi chain\n",
++			   arvif->bssid);
+ 		goto exit;
+ 	}
+ 
+@@ -5889,8 +5889,9 @@ static int ath11k_wmi_tlv_fw_stats_data_parse(struct ath11k_base *ab,
+ 					   "wmi stats vdev id %d snr %d\n",
+ 					   src->vdev_id, src->beacon_snr);
+ 			} else {
+-				ath11k_warn(ab, "not found station for bssid %pM\n",
+-					    arvif->bssid);
++				ath11k_dbg(ab, ATH11K_DBG_WMI,
++					   "not found station of bssid %pM for vdev stat\n",
++					   arvif->bssid);
+ 			}
+ 		}
+ 
+diff --git a/drivers/net/wireless/ath/ath11k/wmi.h b/drivers/net/wireless/ath/ath11k/wmi.h
+index 587f423072508..b5b72483477d7 100644
+--- a/drivers/net/wireless/ath/ath11k/wmi.h
++++ b/drivers/net/wireless/ath/ath11k/wmi.h
+@@ -3088,9 +3088,6 @@ enum scan_dwelltime_adaptive_mode {
+ 	SCAN_DWELL_MODE_STATIC = 4
+ };
+ 
+-#define WLAN_SCAN_MAX_NUM_SSID          10
+-#define WLAN_SCAN_MAX_NUM_BSSID         10
+-
+ #define WLAN_SSID_MAX_LEN 32
+ 
+ struct element_info {
+@@ -3105,7 +3102,6 @@ struct wlan_ssid {
+ 
+ #define WMI_IE_BITMAP_SIZE             8
+ 
+-#define WMI_SCAN_MAX_NUM_SSID                0x0A
+ /* prefix used by scan requestor ids on the host */
+ #define WMI_HOST_SCAN_REQUESTOR_ID_PREFIX 0xA000
+ 
+@@ -3113,10 +3109,6 @@ struct wlan_ssid {
+ /* host cycles through the lower 12 bits to generate ids */
+ #define WMI_HOST_SCAN_REQ_ID_PREFIX 0xA000
+ 
+-#define WLAN_SCAN_PARAMS_MAX_SSID    16
+-#define WLAN_SCAN_PARAMS_MAX_BSSID   4
+-#define WLAN_SCAN_PARAMS_MAX_IE_LEN  256
+-
+ /* Values lower than this may be refused by some firmware revisions with a scan
+  * completion with a timedout reason.
+  */
+@@ -3312,8 +3304,8 @@ struct scan_req_params {
+ 	u32 n_probes;
+ 	u32 *chan_list;
+ 	u32 notify_scan_events;
+-	struct wlan_ssid ssid[WLAN_SCAN_MAX_NUM_SSID];
+-	struct wmi_mac_addr bssid_list[WLAN_SCAN_MAX_NUM_BSSID];
++	struct wlan_ssid ssid[WLAN_SCAN_PARAMS_MAX_SSID];
++	struct wmi_mac_addr bssid_list[WLAN_SCAN_PARAMS_MAX_BSSID];
+ 	struct element_info extraie;
+ 	struct element_info htcap;
+ 	struct element_info vhtcap;
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+index b0a4ca3559fd8..abed1effd95ca 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
++++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
+@@ -5615,7 +5615,7 @@ unsigned int ar9003_get_paprd_scale_factor(struct ath_hw *ah,
+ 
+ static u8 ar9003_get_eepmisc(struct ath_hw *ah)
+ {
+-	return ah->eeprom.map4k.baseEepHeader.eepMisc;
++	return ah->eeprom.ar9300_eep.baseEepHeader.opCapFlags.eepMisc;
+ }
+ 
+ const struct eeprom_ops eep_ar9300_ops = {
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_phy.h b/drivers/net/wireless/ath/ath9k/ar9003_phy.h
+index a171dbb29fbb6..ad949eb02f3d2 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_phy.h
++++ b/drivers/net/wireless/ath/ath9k/ar9003_phy.h
+@@ -720,7 +720,7 @@
+ #define AR_CH0_TOP2		(AR_SREV_9300(ah) ? 0x1628c : \
+ 					(AR_SREV_9462(ah) ? 0x16290 : 0x16284))
+ #define AR_CH0_TOP2_XPABIASLVL		(AR_SREV_9561(ah) ? 0x1e00 : 0xf000)
+-#define AR_CH0_TOP2_XPABIASLVL_S	12
++#define AR_CH0_TOP2_XPABIASLVL_S	(AR_SREV_9561(ah) ? 9 : 12)
+ 
+ #define AR_CH0_XTAL		(AR_SREV_9300(ah) ? 0x16294 : \
+ 				 ((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 0x16298 : \
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+index 6a850a0bfa8ad..a23eaca0326d1 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+@@ -1016,6 +1016,14 @@ static bool ath9k_rx_prepare(struct ath9k_htc_priv *priv,
+ 		goto rx_next;
+ 	}
+ 
++	if (rxstatus->rs_keyix >= ATH_KEYMAX &&
++	    rxstatus->rs_keyix != ATH9K_RXKEYIX_INVALID) {
++		ath_dbg(common, ANY,
++			"Invalid keyix, dropping (keyix: %d)\n",
++			rxstatus->rs_keyix);
++		goto rx_next;
++	}
++
+ 	/* Get the RX status information */
+ 
+ 	memset(rx_status, 0, sizeof(struct ieee80211_rx_status));
+diff --git a/drivers/net/wireless/ath/carl9170/tx.c b/drivers/net/wireless/ath/carl9170/tx.c
+index 1b76f4434c069..791f9f120af3a 100644
+--- a/drivers/net/wireless/ath/carl9170/tx.c
++++ b/drivers/net/wireless/ath/carl9170/tx.c
+@@ -1558,6 +1558,9 @@ static struct carl9170_vif_info *carl9170_pick_beaconing_vif(struct ar9170 *ar)
+ 					goto out;
+ 			}
+ 		} while (ar->beacon_enabled && i--);
++
++		/* no entry found in list */
++		return NULL;
+ 	}
+ 
+ out:
+diff --git a/drivers/net/wireless/broadcom/b43/phy_n.c b/drivers/net/wireless/broadcom/b43/phy_n.c
+index cf3ccf4ddfe72..aa5c994656749 100644
+--- a/drivers/net/wireless/broadcom/b43/phy_n.c
++++ b/drivers/net/wireless/broadcom/b43/phy_n.c
+@@ -582,7 +582,7 @@ static void b43_nphy_adjust_lna_gain_table(struct b43_wldev *dev)
+ 	u16 data[4];
+ 	s16 gain[2];
+ 	u16 minmax[2];
+-	static const u16 lna_gain[4] = { -2, 10, 19, 25 };
++	static const s16 lna_gain[4] = { -2, 10, 19, 25 };
+ 
+ 	if (nphy->hang_avoid)
+ 		b43_nphy_stay_in_carrier_search(dev, 1);
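/*
 * [Editor's note] This b43 fix -- and the b43legacy one just below -- turn
 * u16 storage into s16 because the values include negatives: -2 stored in
 * an unsigned 16-bit slot reads back as 65534, which then poisons the
 * signed gain arithmetic it feeds. Quick stand-alone check in plain C:
 */
#include <stdio.h>

int main(void)
{
	unsigned short as_u16 = (unsigned short)-2;	/* old table type */
	short as_s16 = -2;				/* fixed table type */

	printf("%d vs %d\n", as_u16, as_s16);		/* prints: 65534 vs -2 */
	return 0;
}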
+diff --git a/drivers/net/wireless/broadcom/b43legacy/phy.c b/drivers/net/wireless/broadcom/b43legacy/phy.c
+index 05404fbd1e70b..c1395e622759e 100644
+--- a/drivers/net/wireless/broadcom/b43legacy/phy.c
++++ b/drivers/net/wireless/broadcom/b43legacy/phy.c
+@@ -1123,7 +1123,7 @@ void b43legacy_phy_lo_b_measure(struct b43legacy_wldev *dev)
+ 	struct b43legacy_phy *phy = &dev->phy;
+ 	u16 regstack[12] = { 0 };
+ 	u16 mls;
+-	u16 fval;
++	s16 fval;
+ 	int i;
+ 	int j;
+ 
+diff --git a/drivers/net/wireless/intel/ipw2x00/libipw_tx.c b/drivers/net/wireless/intel/ipw2x00/libipw_tx.c
+index 36d1e6b2568db..4aec1fce1ae29 100644
+--- a/drivers/net/wireless/intel/ipw2x00/libipw_tx.c
++++ b/drivers/net/wireless/intel/ipw2x00/libipw_tx.c
+@@ -383,7 +383,7 @@ netdev_tx_t libipw_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 		/* Each fragment may need to have room for encryption
+ 		 * pre/postfix */
+-		if (host_encrypt)
++		if (host_encrypt && crypt && crypt->ops)
+ 			bytes_per_frag -= crypt->ops->extra_mpdu_prefix_len +
+ 			    crypt->ops->extra_mpdu_postfix_len;
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index 33aae639ad37e..e6d64152c81a7 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -937,6 +937,9 @@ int iwl_sar_geo_init(struct iwl_fw_runtime *fwrt,
+ {
+ 	int i, j;
+ 
++	if (!fwrt->geo_enabled)
++		return -ENODATA;
++
+ 	if (!iwl_sar_geo_support(fwrt))
+ 		return -EOPNOTSUPP;
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mei/main.c b/drivers/net/wireless/intel/iwlwifi/mei/main.c
+index b4f45234cfc89..357f14626cf43 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mei/main.c
++++ b/drivers/net/wireless/intel/iwlwifi/mei/main.c
+@@ -493,6 +493,7 @@ void iwl_mei_add_data_to_ring(struct sk_buff *skb, bool cb_tx)
+ 	if (cb_tx) {
+ 		struct iwl_sap_cb_data *cb_hdr = skb_push(skb, sizeof(*cb_hdr));
+ 
++		memset(cb_hdr, 0, sizeof(*cb_hdr));
+ 		cb_hdr->hdr.type = cpu_to_le16(SAP_MSG_CB_DATA_PACKET);
+ 		cb_hdr->hdr.len = cpu_to_le16(skb->len - sizeof(cb_hdr->hdr));
+ 		cb_hdr->hdr.seq_num = cpu_to_le32(atomic_inc_return(&mei->sap_seq_no));
+@@ -1019,6 +1020,8 @@ static void iwl_mei_handle_sap_data(struct mei_cl_device *cldev,
+ 
+ 		/* We need enough room for the WiFi header + SNAP + IV */
+ 		skb = netdev_alloc_skb(netdev, len + QOS_HDR_IV_SNAP_LEN);
++		if (!skb)
++			continue;
+ 
+ 		skb_reserve(skb, QOS_HDR_IV_SNAP_LEN);
+ 		ethhdr = skb_push(skb, sizeof(*ethhdr));
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/power.c b/drivers/net/wireless/intel/iwlwifi/mvm/power.c
+index b2ea2fca5376f..b9bd81242b216 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/power.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/power.c
+@@ -563,6 +563,9 @@ static void iwl_mvm_power_get_vifs_iterator(void *_data, u8 *mac,
+ 	struct iwl_power_vifs *power_iterator = _data;
+ 	bool active = mvmvif->phy_ctxt && mvmvif->phy_ctxt->id < NUM_PHY_CTX;
+ 
++	if (!mvmvif->uploaded)
++		return;
++
+ 	switch (ieee80211_vif_type_p2p(vif)) {
+ 	case NL80211_IFTYPE_P2P_DEVICE:
+ 		break;
+diff --git a/drivers/net/wireless/marvell/mwifiex/11h.c b/drivers/net/wireless/marvell/mwifiex/11h.c
+index d2ee6469e67bb..3fa25cd64cda0 100644
+--- a/drivers/net/wireless/marvell/mwifiex/11h.c
++++ b/drivers/net/wireless/marvell/mwifiex/11h.c
+@@ -303,5 +303,7 @@ void mwifiex_dfs_chan_sw_work_queue(struct work_struct *work)
+ 
+ 	mwifiex_dbg(priv->adapter, MSG,
+ 		    "indicating channel switch completion to kernel\n");
++	mutex_lock(&priv->wdev.mtx);
+ 	cfg80211_ch_switch_notify(priv->netdev, &priv->dfs_chandef);
++	mutex_unlock(&priv->wdev.mtx);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/agg-rx.c b/drivers/net/wireless/mediatek/mt76/agg-rx.c
+index 72622220051bb..6c8b441945791 100644
+--- a/drivers/net/wireless/mediatek/mt76/agg-rx.c
++++ b/drivers/net/wireless/mediatek/mt76/agg-rx.c
+@@ -162,8 +162,9 @@ void mt76_rx_aggr_reorder(struct sk_buff *skb, struct sk_buff_head *frames)
+ 	if (!sta)
+ 		return;
+ 
+-	if (!status->aggr && !(status->flag & RX_FLAG_8023)) {
+-		mt76_rx_aggr_check_ctl(skb, frames);
++	if (!status->aggr) {
++		if (!(status->flag & RX_FLAG_8023))
++			mt76_rx_aggr_check_ctl(skb, frames);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
+index 5b53d008eb664..8a2fedbb1451c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
+@@ -248,6 +248,8 @@ static void mt76_init_stream_cap(struct mt76_phy *phy,
+ 		vht_cap->cap |= IEEE80211_VHT_CAP_TXSTBC;
+ 	else
+ 		vht_cap->cap &= ~IEEE80211_VHT_CAP_TXSTBC;
++	vht_cap->cap |= IEEE80211_VHT_CAP_TX_ANTENNA_PATTERN |
++			IEEE80211_VHT_CAP_RX_ANTENNA_PATTERN;
+ 
+ 	for (i = 0; i < 8; i++) {
+ 		if (i < nstream)
+@@ -323,8 +325,6 @@ mt76_init_sband(struct mt76_phy *phy, struct mt76_sband *msband,
+ 	vht_cap->cap |= IEEE80211_VHT_CAP_RXLDPC |
+ 			IEEE80211_VHT_CAP_RXSTBC_1 |
+ 			IEEE80211_VHT_CAP_SHORT_GI_80 |
+-			IEEE80211_VHT_CAP_RX_ANTENNA_PATTERN |
+-			IEEE80211_VHT_CAP_TX_ANTENNA_PATTERN |
+ 			(3 << IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_SHIFT);
+ 
+ 	return 0;
+@@ -1303,7 +1303,7 @@ mt76_sta_add(struct mt76_dev *dev, struct ieee80211_vif *vif,
+ 			continue;
+ 
+ 		mtxq = (struct mt76_txq *)sta->txq[i]->drv_priv;
+-		mtxq->wcid = wcid;
++		mtxq->wcid = wcid->idx;
+ 	}
+ 
+ 	ewma_signal_init(&wcid->rssi);
+@@ -1381,7 +1381,9 @@ void mt76_sta_pre_rcu_remove(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ 	struct mt76_wcid *wcid = (struct mt76_wcid *)sta->drv_priv;
+ 
+ 	mutex_lock(&dev->mutex);
++	spin_lock_bh(&dev->status_lock);
+ 	rcu_assign_pointer(dev->wcid[wcid->idx], NULL);
++	spin_unlock_bh(&dev->status_lock);
+ 	mutex_unlock(&dev->mutex);
+ }
+ EXPORT_SYMBOL_GPL(mt76_sta_pre_rcu_remove);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 882fb5d2517fa..522c523d5c416 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -275,7 +275,7 @@ struct mt76_wcid {
+ };
+ 
+ struct mt76_txq {
+-	struct mt76_wcid *wcid;
++	u16 wcid;
+ 
+ 	u16 agg_ssn;
+ 	bool send_bar;
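/*
 * [Editor's note] struct mt76_txq now records the wcid *index* (u16)
 * rather than a struct mt76_wcid pointer, and the mac80211.c hunk above
 * clears dev->wcid[idx] under status_lock in mt76_sta_pre_rcu_remove().
 * Users must therefore look the station up through the RCU-protected
 * dev->wcid[] table instead of caching a pointer that could outlive the
 * station entry. Lookup shape (sketch):
 *
 *	struct mt76_wcid *wcid = rcu_dereference(dev->wcid[mtxq->wcid]);
 *
 *	if (!wcid)
 *		return;		// station already removed
 */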
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/main.c b/drivers/net/wireless/mediatek/mt76/mt7603/main.c
+index 83c5eec5b1633..1d098e9799ddc 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7603/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7603/main.c
+@@ -75,7 +75,7 @@ mt7603_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ 	mt7603_wtbl_init(dev, idx, mvif->idx, bc_addr);
+ 
+ 	mtxq = (struct mt76_txq *)vif->txq->drv_priv;
+-	mtxq->wcid = &mvif->sta.wcid;
++	mtxq->wcid = idx;
+ 	rcu_assign_pointer(dev->mt76.wcid[idx], &mvif->sta.wcid);
+ 
+ out:
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+index d79cbdbd5a051..6b8e3e7ae4a26 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+@@ -234,7 +234,7 @@ static int mt7615_add_interface(struct ieee80211_hw *hw,
+ 	rcu_assign_pointer(dev->mt76.wcid[idx], &mvif->sta.wcid);
+ 	if (vif->txq) {
+ 		mtxq = (struct mt76_txq *)vif->txq->drv_priv;
+-		mtxq->wcid = &mvif->sta.wcid;
++		mtxq->wcid = idx;
+ 	}
+ 
+ 	ret = mt7615_mcu_add_dev_info(phy, vif, true);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
+index dd30f537676da..be1d27de993ae 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
+@@ -292,7 +292,8 @@ mt76x02_vif_init(struct mt76x02_dev *dev, struct ieee80211_vif *vif,
+ 	mt76_packet_id_init(&mvif->group_wcid);
+ 
+ 	mtxq = (struct mt76_txq *)vif->txq->drv_priv;
+-	mtxq->wcid = &mvif->group_wcid;
++	rcu_assign_pointer(dev->mt76.wcid[MT_VIF_WCID(idx)], &mvif->group_wcid);
++	mtxq->wcid = MT_VIF_WCID(idx);
+ }
+ 
+ int
+@@ -345,6 +346,7 @@ void mt76x02_remove_interface(struct ieee80211_hw *hw,
+ 	struct mt76x02_vif *mvif = (struct mt76x02_vif *)vif->drv_priv;
+ 
+ 	dev->mt76.vif_mask &= ~BIT(mvif->idx);
++	rcu_assign_pointer(dev->mt76.wcid[mvif->group_wcid.idx], NULL);
+ 	mt76_packet_id_flush(&dev->mt76, &mvif->group_wcid);
+ }
+ EXPORT_SYMBOL_GPL(mt76x02_remove_interface);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+index 4e1ecaec8f4fb..dece0a6e00b33 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/debugfs.c
+@@ -95,7 +95,7 @@ mt7915_muru_debug_set(void *data, u64 val)
+ 	struct mt7915_dev *dev = data;
+ 
+ 	dev->muru_debug = val;
+-	mt7915_mcu_muru_debug_set(dev, data);
++	mt7915_mcu_muru_debug_set(dev, dev->muru_debug);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+index 5b133bcdab17d..4b1a9811646fd 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+@@ -152,6 +152,8 @@ static void mt7915_eeprom_parse_band_config(struct mt7915_phy *phy)
+ 			phy->mt76->cap.has_2ghz = true;
+ 			return;
+ 		}
++	} else if (val == MT_EE_BAND_SEL_DEFAULT && dev->dbdc_support) {
++		val = phy->band_idx ? MT_EE_BAND_SEL_5GHZ : MT_EE_BAND_SEL_2GHZ;
+ 	}
+ 
+ 	switch (val) {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index e9e7efbf350d5..45169a027fdac 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -309,7 +309,7 @@ mt7915_mac_decode_he_mu_radiotap(struct sk_buff *skb, __le32 *rxv)
+ }
+ 
+ static void
+-mt7915_mac_decode_he_radiotap(struct sk_buff *skb, __le32 *rxv, u32 mode)
++mt7915_mac_decode_he_radiotap(struct sk_buff *skb, __le32 *rxv, u8 mode)
+ {
+ 	struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
+ 	static const struct ieee80211_radiotap_he known = {
+@@ -474,10 +474,10 @@ static int
+ mt7915_mac_fill_rx_rate(struct mt7915_dev *dev,
+ 			struct mt76_rx_status *status,
+ 			struct ieee80211_supported_band *sband,
+-			__le32 *rxv)
++			__le32 *rxv, u8 *mode)
+ {
+ 	u32 v0, v2;
+-	u8 stbc, gi, bw, dcm, mode, nss;
++	u8 stbc, gi, bw, dcm, nss;
+ 	int i, idx;
+ 	bool cck = false;
+ 
+@@ -490,18 +490,18 @@ mt7915_mac_fill_rx_rate(struct mt7915_dev *dev,
+ 	if (!is_mt7915(&dev->mt76)) {
+ 		stbc = FIELD_GET(MT_PRXV_HT_STBC, v0);
+ 		gi = FIELD_GET(MT_PRXV_HT_SHORT_GI, v0);
+-		mode = FIELD_GET(MT_PRXV_TX_MODE, v0);
++		*mode = FIELD_GET(MT_PRXV_TX_MODE, v0);
+ 		dcm = FIELD_GET(MT_PRXV_DCM, v0);
+ 		bw = FIELD_GET(MT_PRXV_FRAME_MODE, v0);
+ 	} else {
+ 		stbc = FIELD_GET(MT_CRXV_HT_STBC, v2);
+ 		gi = FIELD_GET(MT_CRXV_HT_SHORT_GI, v2);
+-		mode = FIELD_GET(MT_CRXV_TX_MODE, v2);
++		*mode = FIELD_GET(MT_CRXV_TX_MODE, v2);
+ 		dcm = !!(idx & GENMASK(3, 0) & MT_PRXV_TX_DCM);
+ 		bw = FIELD_GET(MT_CRXV_FRAME_MODE, v2);
+ 	}
+ 
+-	switch (mode) {
++	switch (*mode) {
+ 	case MT_PHY_TYPE_CCK:
+ 		cck = true;
+ 		fallthrough;
+@@ -521,7 +521,7 @@ mt7915_mac_fill_rx_rate(struct mt7915_dev *dev,
+ 		status->encoding = RX_ENC_VHT;
+ 		if (gi)
+ 			status->enc_flags |= RX_ENC_FLAG_SHORT_GI;
+-		if (i > 9)
++		if (i > 11)
+ 			return -EINVAL;
+ 		break;
+ 	case MT_PHY_TYPE_HE_MU:
+@@ -546,7 +546,7 @@ mt7915_mac_fill_rx_rate(struct mt7915_dev *dev,
+ 	case IEEE80211_STA_RX_BW_20:
+ 		break;
+ 	case IEEE80211_STA_RX_BW_40:
+-		if (mode & MT_PHY_TYPE_HE_EXT_SU &&
++		if (*mode & MT_PHY_TYPE_HE_EXT_SU &&
+ 		    (idx & MT_PRXV_TX_ER_SU_106T)) {
+ 			status->bw = RATE_INFO_BW_HE_RU;
+ 			status->he_ru =
+@@ -566,7 +566,7 @@ mt7915_mac_fill_rx_rate(struct mt7915_dev *dev,
+ 	}
+ 
+ 	status->enc_flags |= RX_ENC_FLAG_STBC_MASK * stbc;
+-	if (mode < MT_PHY_TYPE_HE_SU && gi)
++	if (*mode < MT_PHY_TYPE_HE_SU && gi)
+ 		status->enc_flags |= RX_ENC_FLAG_SHORT_GI;
+ 
+ 	return 0;
+@@ -581,7 +581,6 @@ mt7915_mac_fill_rx(struct mt7915_dev *dev, struct sk_buff *skb)
+ 	struct ieee80211_supported_band *sband;
+ 	__le32 *rxd = (__le32 *)skb->data;
+ 	__le32 *rxv = NULL;
+-	u32 mode = 0;
+ 	u32 rxd0 = le32_to_cpu(rxd[0]);
+ 	u32 rxd1 = le32_to_cpu(rxd[1]);
+ 	u32 rxd2 = le32_to_cpu(rxd[2]);
+@@ -590,10 +589,10 @@ mt7915_mac_fill_rx(struct mt7915_dev *dev, struct sk_buff *skb)
+ 	u32 csum_mask = MT_RXD0_NORMAL_IP_SUM | MT_RXD0_NORMAL_UDP_TCP_SUM;
+ 	bool unicast, insert_ccmp_hdr = false;
+ 	u8 remove_pad, amsdu_info;
++	u8 mode = 0, qos_ctl = 0;
+ 	bool hdr_trans;
+ 	u16 hdr_gap;
+ 	u16 seq_ctrl = 0;
+-	u8 qos_ctl = 0;
+ 	__le16 fc = 0;
+ 	int idx;
+ 
+@@ -766,7 +765,8 @@ mt7915_mac_fill_rx(struct mt7915_dev *dev, struct sk_buff *skb)
+ 		}
+ 
+ 		if (!is_mt7915(&dev->mt76) || (rxd1 & MT_RXD1_NORMAL_GROUP_5)) {
+-			ret = mt7915_mac_fill_rx_rate(dev, status, sband, rxv);
++			ret = mt7915_mac_fill_rx_rate(dev, status, sband, rxv,
++						      &mode);
+ 			if (ret < 0)
+ 				return ret;
+ 		}
+@@ -864,8 +864,11 @@ mt7915_mac_fill_rx_vector(struct mt7915_dev *dev, struct sk_buff *skb)
+ 	int i;
+ 
+ 	band_idx = le32_get_bits(rxv_hdr[1], MT_RXV_HDR_BAND_IDX);
+-	if (band_idx && !phy->band_idx)
++	if (band_idx && !phy->band_idx) {
+ 		phy = mt7915_ext_phy(dev);
++		if (!phy)
++			goto out;
++	}
+ 
+ 	rcpi = le32_to_cpu(rxv[6]);
+ 	ib_rssi = le32_to_cpu(rxv[7]);
+@@ -890,8 +893,8 @@ mt7915_mac_fill_rx_vector(struct mt7915_dev *dev, struct sk_buff *skb)
+ 
+ 	phy->test.last_freq_offset = foe;
+ 	phy->test.last_snr = snr;
++out:
+ #endif
+-
+ 	dev_kfree_skb(skb);
+ }
+ 
+@@ -1017,6 +1020,7 @@ mt7915_mac_write_txwi_8023(struct mt7915_dev *dev, __le32 *txwi,
+ 
+ 	u8 tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK;
+ 	u8 fc_type, fc_stype;
++	u16 ethertype;
+ 	bool wmm = false;
+ 	u32 val;
+ 
+@@ -1030,7 +1034,8 @@ mt7915_mac_write_txwi_8023(struct mt7915_dev *dev, __le32 *txwi,
+ 	val = FIELD_PREP(MT_TXD1_HDR_FORMAT, MT_HDR_FORMAT_802_3) |
+ 	      FIELD_PREP(MT_TXD1_TID, tid);
+ 
+-	if (be16_to_cpu(skb->protocol) >= ETH_P_802_3_MIN)
++	ethertype = get_unaligned_be16(&skb->data[12]);
++	if (ethertype >= ETH_P_802_3_MIN)
+ 		val |= MT_TXD1_ETH_802_3;
+ 
+ 	txwi[1] |= cpu_to_le32(val);
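
In the 802.3 TX descriptor path above, skb->protocol is not a reliable source of the EtherType for every frame that reaches the driver, so the fix reads the field straight out of the Ethernet header at offset 12. A small standalone sketch of that unaligned big-endian read, assuming the usual 14-byte header layout (ETH_P_802_3_MIN is 0x0600):

    #include <stdio.h>
    #include <stdint.h>

    #define ETH_P_802_3_MIN 0x0600

    /* Byte-wise big-endian read; safe on any alignment, like the kernel's
     * get_unaligned_be16(). */
    static uint16_t get_be16(const uint8_t *p)
    {
            return (uint16_t)((p[0] << 8) | p[1]);
    }

    int main(void)
    {
            /* dst(6) + src(6) + EtherType; 0x0800 = IPv4 */
            uint8_t frame[14] = { [12] = 0x08, [13] = 0x00 };
            uint16_t ethertype = get_be16(&frame[12]);

            printf("EtherType %#06x -> %s\n", ethertype,
                   ethertype >= ETH_P_802_3_MIN ? "Ethernet II" : "802.3 length");
            return 0;
    }
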
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+index c3f44d801e7fe..187cf4ccd36e1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+@@ -246,7 +246,7 @@ static int mt7915_add_interface(struct ieee80211_hw *hw,
+ 	rcu_assign_pointer(dev->mt76.wcid[idx], &mvif->sta.wcid);
+ 	if (vif->txq) {
+ 		mtxq = (struct mt76_txq *)vif->txq->drv_priv;
+-		mtxq->wcid = &mvif->sta.wcid;
++		mtxq->wcid = idx;
+ 	}
+ 
+ 	if (vif->type != NL80211_IFTYPE_AP &&
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+index e7a6f80e77551..736c9c342baaa 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+@@ -1854,7 +1854,8 @@ mt7915_mcu_beacon_mbss(struct sk_buff *rskb, struct sk_buff *skb,
+ 			continue;
+ 
+ 		for_each_element(sub_elem, elem->data + 1, elem->datalen - 1) {
+-			const u8 *data;
++			const struct ieee80211_bssid_index *idx;
++			const u8 *idx_ie;
+ 
+ 			if (sub_elem->id || sub_elem->datalen < 4)
+ 				continue; /* not a valid BSS profile */
+@@ -1862,14 +1863,19 @@ mt7915_mcu_beacon_mbss(struct sk_buff *rskb, struct sk_buff *skb,
+ 			/* Find WLAN_EID_MULTI_BSSID_IDX
+ 			 * in the merged nontransmitted profile
+ 			 */
+-			data = cfg80211_find_ie(WLAN_EID_MULTI_BSSID_IDX,
+-						sub_elem->data,
+-						sub_elem->datalen);
+-			if (!data || data[1] < 1 || !data[2])
++			idx_ie = cfg80211_find_ie(WLAN_EID_MULTI_BSSID_IDX,
++						  sub_elem->data,
++						  sub_elem->datalen);
++			if (!idx_ie || idx_ie[1] < sizeof(*idx))
+ 				continue;
+ 
+-			mbss->offset[data[2]] = cpu_to_le16(data - skb->data);
+-			mbss->bitmap |= cpu_to_le32(BIT(data[2]));
++			idx = (void *)(idx_ie + 2);
++			if (!idx->bssid_index || idx->bssid_index > 31)
++				continue;
++
++			mbss->offset[idx->bssid_index] =
++				cpu_to_le16(idx_ie - skb->data);
++			mbss->bitmap |= cpu_to_le32(BIT(idx->bssid_index));
+ 		}
+ 	}
+ }
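
The reworked loop above validates the Multiple-BSSID Index element before using it: the element body must be at least as long as the index structure, and the index itself must fall in 1..31 before it selects a slot in mbss->offset[]. A hedged standalone sketch of that bounds check (the element ID and limit are taken from the hunk; the one-byte struct and helper are illustrative, the kernel's struct ieee80211_bssid_index carries further optional members):

    #include <stdio.h>
    #include <stdint.h>

    struct bssid_index { uint8_t bssid_index; };

    #define WLAN_EID_MULTI_BSSID_IDX 85
    #define MAX_BSSID_IDX 31

    /* ie[0] = element id, ie[1] = body length, body starts at ie + 2 */
    static int parse_mbssid_idx(const uint8_t *ie)
    {
            const struct bssid_index *idx;

            if (ie[0] != WLAN_EID_MULTI_BSSID_IDX || ie[1] < sizeof(*idx))
                    return -1;              /* wrong element or too short */

            idx = (const void *)(ie + 2);
            if (!idx->bssid_index || idx->bssid_index > MAX_BSSID_IDX)
                    return -1;              /* 0 or >31 would overrun offset[] */

            return idx->bssid_index;
    }

    int main(void)
    {
            const uint8_t good[] = { WLAN_EID_MULTI_BSSID_IDX, 1, 3 };
            const uint8_t bad[]  = { WLAN_EID_MULTI_BSSID_IDX, 1, 40 };

            printf("good -> %d\n", parse_mbssid_idx(good));
            printf("bad  -> %d\n", parse_mbssid_idx(bad));
            return 0;
    }
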
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+index 6efa0a2e23458..4b6eda958ef36 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+@@ -319,7 +319,7 @@ struct mt7915_dev {
+ 	void *cal;
+ 
+ 	struct {
+-		u8 table_mask;
++		u16 table_mask;
+ 		u8 n_agrt;
+ 	} twt;
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/soc.c b/drivers/net/wireless/mediatek/mt76/mt7915/soc.c
+index 3028c02cb840e..be448d471b03b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/soc.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/soc.c
+@@ -210,6 +210,8 @@ static int mt7986_wmac_gpio_setup(struct mt7915_dev *dev)
+ 		if (IS_ERR_OR_NULL(state))
+ 			return -EINVAL;
+ 		break;
++	default:
++		return -EINVAL;
+ 	}
+ 
+ 	ret = pinctrl_select_state(pinctrl, state);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+index 233998ca48573..c5350e7a11e62 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+@@ -696,7 +696,7 @@ mt7921_mac_fill_rx(struct mt7921_dev *dev, struct sk_buff *skb)
+ 			status->nss =
+ 				FIELD_GET(MT_PRXV_NSTS, v0) + 1;
+ 			status->encoding = RX_ENC_VHT;
+-			if (i > 9)
++			if (i > 11)
+ 				return -EINVAL;
+ 			break;
+ 		case MT_PHY_TYPE_HE_MU:
+@@ -814,6 +814,7 @@ mt7921_mac_write_txwi_8023(struct mt7921_dev *dev, __le32 *txwi,
+ {
+ 	u8 tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK;
+ 	u8 fc_type, fc_stype;
++	u16 ethertype;
+ 	bool wmm = false;
+ 	u32 val;
+ 
+@@ -827,7 +828,8 @@ mt7921_mac_write_txwi_8023(struct mt7921_dev *dev, __le32 *txwi,
+ 	val = FIELD_PREP(MT_TXD1_HDR_FORMAT, MT_HDR_FORMAT_802_3) |
+ 	      FIELD_PREP(MT_TXD1_TID, tid);
+ 
+-	if (be16_to_cpu(skb->protocol) >= ETH_P_802_3_MIN)
++	ethertype = get_unaligned_be16(&skb->data[12]);
++	if (ethertype >= ETH_P_802_3_MIN)
+ 		val |= MT_TXD1_ETH_802_3;
+ 
+ 	txwi[1] |= cpu_to_le32(val);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index fdaf2451bc1de..9b9e80f56eda7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -330,7 +330,7 @@ static int mt7921_add_interface(struct ieee80211_hw *hw,
+ 	rcu_assign_pointer(dev->mt76.wcid[idx], &mvif->sta.wcid);
+ 	if (vif->txq) {
+ 		mtxq = (struct mt76_txq *)vif->txq->drv_priv;
+-		mtxq->wcid = &mvif->sta.wcid;
++		mtxq->wcid = idx;
+ 	}
+ 
+ out:
+@@ -489,8 +489,8 @@ mt7921_sniffer_interface_iter(void *priv, u8 *mac, struct ieee80211_vif *vif)
+ 	bool monitor = !!(hw->conf.flags & IEEE80211_CONF_MONITOR);
+ 
+ 	mt7921_mcu_set_sniffer(dev, vif, monitor);
+-	pm->enable = !monitor;
+-	pm->ds_enable = !monitor;
++	pm->enable = pm->enable_user && !monitor;
++	pm->ds_enable = pm->ds_enable_user && !monitor;
+ 
+ 	mt76_connac_mcu_set_deep_sleep(&dev->mt76, pm->ds_enable);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+index 1a01d025bbe59..b5fb22b8e0869 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+@@ -119,7 +119,6 @@ static void mt7921e_unregister_device(struct mt7921_dev *dev)
+ 	mt7921_mcu_exit(dev);
+ 
+ 	tasklet_disable(&dev->irq_tasklet);
+-	mt76_free_device(&dev->mt76);
+ }
+ 
+ static u32 __mt7921_reg_addr(struct mt7921_dev *dev, u32 addr)
+@@ -302,8 +301,10 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ 	dev->bus_ops = dev->mt76.bus;
+ 	bus_ops = devm_kmemdup(dev->mt76.dev, dev->bus_ops, sizeof(*bus_ops),
+ 			       GFP_KERNEL);
+-	if (!bus_ops)
+-		return -ENOMEM;
++	if (!bus_ops) {
++		ret = -ENOMEM;
++		goto err_free_dev;
++	}
+ 
+ 	bus_ops->rr = mt7921_rr;
+ 	bus_ops->wr = mt7921_wr;
+@@ -312,7 +313,7 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
+ 
+ 	ret = __mt7921e_mcu_drv_pmctrl(dev);
+ 	if (ret)
+-		return ret;
++		goto err_free_dev;
+ 
+ 	mdev->rev = (mt7921_l1_rr(dev, MT_HW_CHIPID) << 16) |
+ 		    (mt7921_l1_rr(dev, MT_HW_REV) & 0xff);
+@@ -354,6 +355,7 @@ static void mt7921_pci_remove(struct pci_dev *pdev)
+ 
+ 	mt7921e_unregister_device(dev);
+ 	devm_free_irq(&pdev->dev, pdev->irq, dev);
++	mt76_free_device(&dev->mt76);
+ 	pci_free_irq_vectors(pdev);
+ }
+ 
+diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c
+index 6b8c9dc805425..5ed2d60debfb3 100644
+--- a/drivers/net/wireless/mediatek/mt76/tx.c
++++ b/drivers/net/wireless/mediatek/mt76/tx.c
+@@ -120,7 +120,7 @@ mt76_tx_status_skb_add(struct mt76_dev *dev, struct mt76_wcid *wcid,
+ 
+ 	memset(cb, 0, sizeof(*cb));
+ 
+-	if (!wcid)
++	if (!wcid || !rcu_access_pointer(dev->wcid[wcid->idx]))
+ 		return MT_PACKET_ID_NO_ACK;
+ 
+ 	if (info->flags & IEEE80211_TX_CTL_NO_ACK)
+@@ -436,12 +436,11 @@ mt76_txq_stopped(struct mt76_queue *q)
+ 
+ static int
+ mt76_txq_send_burst(struct mt76_phy *phy, struct mt76_queue *q,
+-		    struct mt76_txq *mtxq)
++		    struct mt76_txq *mtxq, struct mt76_wcid *wcid)
+ {
+ 	struct mt76_dev *dev = phy->dev;
+ 	struct ieee80211_txq *txq = mtxq_to_txq(mtxq);
+ 	enum mt76_txq_id qid = mt76_txq_get_qid(txq);
+-	struct mt76_wcid *wcid = mtxq->wcid;
+ 	struct ieee80211_tx_info *info;
+ 	struct sk_buff *skb;
+ 	int n_frames = 1;
+@@ -521,8 +520,8 @@ mt76_txq_schedule_list(struct mt76_phy *phy, enum mt76_txq_id qid)
+ 			break;
+ 
+ 		mtxq = (struct mt76_txq *)txq->drv_priv;
+-		wcid = mtxq->wcid;
+-		if (wcid && test_bit(MT_WCID_FLAG_PS, &wcid->flags))
++		wcid = rcu_dereference(dev->wcid[mtxq->wcid]);
++		if (!wcid || test_bit(MT_WCID_FLAG_PS, &wcid->flags))
+ 			continue;
+ 
+ 		spin_lock_bh(&q->lock);
+@@ -541,7 +540,7 @@ mt76_txq_schedule_list(struct mt76_phy *phy, enum mt76_txq_id qid)
+ 		}
+ 
+ 		if (!mt76_txq_stopped(q))
+-			n_frames = mt76_txq_send_burst(phy, q, mtxq);
++			n_frames = mt76_txq_send_burst(phy, q, mtxq, wcid);
+ 
+ 		spin_unlock_bh(&q->lock);
+ 
+diff --git a/drivers/net/wireless/microchip/wilc1000/mon.c b/drivers/net/wireless/microchip/wilc1000/mon.c
+index 6bd63934c2d84..b5a1b65c087ca 100644
+--- a/drivers/net/wireless/microchip/wilc1000/mon.c
++++ b/drivers/net/wireless/microchip/wilc1000/mon.c
+@@ -233,7 +233,7 @@ struct net_device *wilc_wfi_init_mon_interface(struct wilc *wl,
+ 	wl->monitor_dev->netdev_ops = &wilc_wfi_netdev_ops;
+ 	wl->monitor_dev->needs_free_netdev = true;
+ 
+-	if (cfg80211_register_netdevice(wl->monitor_dev)) {
++	if (register_netdevice(wl->monitor_dev)) {
+ 		netdev_err(real_dev, "register_netdevice failed\n");
+ 		free_netdev(wl->monitor_dev);
+ 		return NULL;
+@@ -251,7 +251,7 @@ void wilc_wfi_deinit_mon_interface(struct wilc *wl, bool rtnl_locked)
+ 		return;
+ 
+ 	if (rtnl_locked)
+-		cfg80211_unregister_netdevice(wl->monitor_dev);
++		unregister_netdevice(wl->monitor_dev);
+ 	else
+ 		unregister_netdev(wl->monitor_dev);
+ 	wl->monitor_dev = NULL;
+diff --git a/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c b/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c
+index 2477e18c7caec..025619cd14e82 100644
+--- a/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c
++++ b/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c
+@@ -460,8 +460,10 @@ static void rtl8180_tx(struct ieee80211_hw *dev,
+ 	struct rtl8180_priv *priv = dev->priv;
+ 	struct rtl8180_tx_ring *ring;
+ 	struct rtl8180_tx_desc *entry;
++	unsigned int prio = 0;
+ 	unsigned long flags;
+-	unsigned int idx, prio, hw_prio;
++	unsigned int idx, hw_prio;
++
+ 	dma_addr_t mapping;
+ 	u32 tx_flags;
+ 	u8 rc_flags;
+@@ -470,7 +472,9 @@ static void rtl8180_tx(struct ieee80211_hw *dev,
+ 	/* do arithmetic and then convert to le16 */
+ 	u16 frame_duration = 0;
+ 
+-	prio = skb_get_queue_mapping(skb);
++	/* rtl8180/rtl8185 only has one usable tx queue */
++	if (dev->queues > IEEE80211_AC_BK)
++		prio = skb_get_queue_mapping(skb);
+ 	ring = &priv->tx_ring[prio];
+ 
+ 	mapping = dma_map_single(&priv->pdev->dev, skb->data, skb->len,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.c b/drivers/net/wireless/realtek/rtlwifi/usb.c
+index 86a2368732547..a8eebafb9a7ee 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/usb.c
++++ b/drivers/net/wireless/realtek/rtlwifi/usb.c
+@@ -1014,7 +1014,7 @@ int rtl_usb_probe(struct usb_interface *intf,
+ 	hw = ieee80211_alloc_hw(sizeof(struct rtl_priv) +
+ 				sizeof(struct rtl_usb_priv), &rtl_ops);
+ 	if (!hw) {
+-		WARN_ONCE(true, "rtl_usb: ieee80211 alloc failed\n");
++		pr_warn("rtl_usb: ieee80211 alloc failed\n");
+ 		return -ENOMEM;
+ 	}
+ 	rtlpriv = hw->priv;
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.c b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+index 99eee128ae945..ec38a7c849517 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+@@ -512,6 +512,7 @@ static s8 get_cck_rx_pwr(struct rtw_dev *rtwdev, u8 lna_idx, u8 vga_idx)
+ static void query_phy_status_page0(struct rtw_dev *rtwdev, u8 *phy_status,
+ 				   struct rtw_rx_pkt_stat *pkt_stat)
+ {
++	struct rtw_dm_info *dm_info = &rtwdev->dm_info;
+ 	s8 rx_power;
+ 	u8 lna_idx = 0;
+ 	u8 vga_idx = 0;
+@@ -523,6 +524,7 @@ static void query_phy_status_page0(struct rtw_dev *rtwdev, u8 *phy_status,
+ 
+ 	pkt_stat->rx_power[RF_PATH_A] = rx_power;
+ 	pkt_stat->rssi = rtw_phy_rf_power_2_rssi(pkt_stat->rx_power, 1);
++	dm_info->rssi[RF_PATH_A] = pkt_stat->rssi;
+ 	pkt_stat->bw = RTW_CHANNEL_WIDTH_20;
+ 	pkt_stat->signal_power = rx_power;
+ }
+@@ -530,6 +532,7 @@ static void query_phy_status_page0(struct rtw_dev *rtwdev, u8 *phy_status,
+ static void query_phy_status_page1(struct rtw_dev *rtwdev, u8 *phy_status,
+ 				   struct rtw_rx_pkt_stat *pkt_stat)
+ {
++	struct rtw_dm_info *dm_info = &rtwdev->dm_info;
+ 	u8 rxsc, bw;
+ 	s8 min_rx_power = -120;
+ 
+@@ -549,6 +552,7 @@ static void query_phy_status_page1(struct rtw_dev *rtwdev, u8 *phy_status,
+ 
+ 	pkt_stat->rx_power[RF_PATH_A] = GET_PHY_STAT_P1_PWDB_A(phy_status) - 110;
+ 	pkt_stat->rssi = rtw_phy_rf_power_2_rssi(pkt_stat->rx_power, 1);
++	dm_info->rssi[RF_PATH_A] = pkt_stat->rssi;
+ 	pkt_stat->bw = bw;
+ 	pkt_stat->signal_power = max(pkt_stat->rx_power[RF_PATH_A],
+ 				     min_rx_power);
+diff --git a/drivers/net/wireless/realtek/rtw88/rx.c b/drivers/net/wireless/realtek/rtw88/rx.c
+index d2d607e22198d..84aedabdf2853 100644
+--- a/drivers/net/wireless/realtek/rtw88/rx.c
++++ b/drivers/net/wireless/realtek/rtw88/rx.c
+@@ -158,7 +158,8 @@ void rtw_rx_fill_rx_status(struct rtw_dev *rtwdev,
+ 	memset(rx_status, 0, sizeof(*rx_status));
+ 	rx_status->freq = hw->conf.chandef.chan->center_freq;
+ 	rx_status->band = hw->conf.chandef.chan->band;
+-	if (rtw_fw_feature_check(&rtwdev->fw, FW_FEATURE_SCAN_OFFLOAD))
++	if (rtw_fw_feature_check(&rtwdev->fw, FW_FEATURE_SCAN_OFFLOAD) &&
++	    test_bit(RTW_FLAG_SCANNING, rtwdev->flags))
+ 		rtw_set_rx_freq_by_pktstat(pkt_stat, rx_status);
+ 	if (pkt_stat->crc_err)
+ 		rx_status->flag |= RX_FLAG_FAILED_FCS_CRC;
+diff --git a/drivers/net/wireless/realtek/rtw89/cam.c b/drivers/net/wireless/realtek/rtw89/cam.c
+index 305dbbebff6bb..26bef9fdd2053 100644
+--- a/drivers/net/wireless/realtek/rtw89/cam.c
++++ b/drivers/net/wireless/realtek/rtw89/cam.c
+@@ -421,10 +421,8 @@ static void rtw89_cam_reset_key_iter(struct ieee80211_hw *hw,
+ 				     void *data)
+ {
+ 	struct rtw89_dev *rtwdev = (struct rtw89_dev *)data;
+-	struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ 
+ 	rtw89_cam_sec_key_del(rtwdev, vif, sta, key, false);
+-	rtw89_cam_deinit(rtwdev, rtwvif);
+ }
+ 
+ void rtw89_cam_deinit_addr_cam(struct rtw89_dev *rtwdev,
+@@ -480,6 +478,12 @@ int rtw89_cam_init_addr_cam(struct rtw89_dev *rtwdev,
+ 	int i;
+ 	int ret;
+ 
++	if (unlikely(addr_cam->valid)) {
++		rtw89_debug(rtwdev, RTW89_DBG_FW,
++			    "addr cam is already valid; skip init\n");
++		return 0;
++	}
++
+ 	ret = rtw89_cam_get_avail_addr_cam(rtwdev, &addr_cam_idx);
+ 	if (ret) {
+ 		rtw89_err(rtwdev, "failed to get available addr cam\n");
+@@ -531,6 +535,12 @@ static int rtw89_cam_init_bssid_cam(struct rtw89_dev *rtwdev,
+ 	u8 bssid_cam_idx;
+ 	int ret;
+ 
++	if (unlikely(bssid_cam->valid)) {
++		rtw89_debug(rtwdev, RTW89_DBG_FW,
++			    "bssid cam is already valid; skip init\n");
++		return 0;
++	}
++
+ 	ret = rtw89_cam_get_avail_bssid_cam(rtwdev, &bssid_cam_idx);
+ 	if (ret) {
+ 		rtw89_err(rtwdev, "failed to get available bssid cam\n");
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index 6deaf8eec6b47..a9b5315a517e8 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -2065,7 +2065,7 @@ static void rtw89_hw_scan_add_chan(struct rtw89_dev *rtwdev, int chan_type,
+ 		ch_info->num_pkt = 0;
+ 		break;
+ 	case RTW89_CHAN_DFS:
+-		ch_info->period = min_t(u8, ch_info->period,
++		ch_info->period = max_t(u8, ch_info->period,
+ 					RTW89_DFS_CHAN_TIME);
+ 		ch_info->dwell_time = RTW89_DWELL_TIME;
+ 		break;
+diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
+index ac211d8973118..8414f30184b9e 100644
+--- a/drivers/net/wireless/realtek/rtw89/phy.c
++++ b/drivers/net/wireless/realtek/rtw89/phy.c
+@@ -2213,6 +2213,11 @@ void rtw89_phy_cfo_parse(struct rtw89_dev *rtwdev, s16 cfo_val,
+ 	struct rtw89_cfo_tracking_info *cfo = &rtwdev->cfo_tracking;
+ 	u8 macid = phy_ppdu->mac_id;
+ 
++	if (macid >= CFO_TRACK_MAX_USER) {
++		rtw89_warn(rtwdev, "mac_id %d is out of range\n", macid);
++		return;
++	}
++
+ 	cfo->cfo_tail[macid] += cfo_val;
+ 	cfo->cfo_cnt[macid]++;
+ 	cfo->packet_count++;
+diff --git a/drivers/net/wireless/realtek/rtw89/ser.c b/drivers/net/wireless/realtek/rtw89/ser.c
+index 837cdc366a61a..e86f3d89ef1bf 100644
+--- a/drivers/net/wireless/realtek/rtw89/ser.c
++++ b/drivers/net/wireless/realtek/rtw89/ser.c
+@@ -220,11 +220,32 @@ static void ser_reset_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ 	rtwvif->trigger = false;
+ }
+ 
++static void ser_sta_deinit_addr_cam_iter(void *data, struct ieee80211_sta *sta)
++{
++	struct rtw89_dev *rtwdev = (struct rtw89_dev *)data;
++	struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++
++	rtw89_cam_deinit_addr_cam(rtwdev, &rtwsta->addr_cam);
++}
++
++static void ser_deinit_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++{
++	if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE)
++		ieee80211_iterate_stations_atomic(rtwdev->hw,
++						  ser_sta_deinit_addr_cam_iter,
++						  rtwdev);
++
++	rtw89_cam_deinit(rtwdev, rtwvif);
++}
++
+ static void ser_reset_mac_binding(struct rtw89_dev *rtwdev)
+ {
+ 	struct rtw89_vif *rtwvif;
+ 
+ 	rtw89_cam_reset_keys(rtwdev);
++	rtw89_for_each_rtwvif(rtwdev, rtwvif)
++		ser_deinit_cam(rtwdev, rtwvif);
++
+ 	rtw89_core_release_all_bits_map(rtwdev->mac_id_map, RTW89_MAX_MAC_ID_NUM);
+ 	rtw89_for_each_rtwvif(rtwdev, rtwvif)
+ 		ser_reset_vif(rtwdev, rtwvif);
+diff --git a/drivers/net/wireless/ti/wl1251/event.c b/drivers/net/wireless/ti/wl1251/event.c
+index e6d426edab56b..e945aafd88ee5 100644
+--- a/drivers/net/wireless/ti/wl1251/event.c
++++ b/drivers/net/wireless/ti/wl1251/event.c
+@@ -169,11 +169,9 @@ int wl1251_event_wait(struct wl1251 *wl, u32 mask, int timeout_ms)
+ 		msleep(1);
+ 
+ 		/* read from both event fields */
+-		wl1251_mem_read(wl, wl->mbox_ptr[0], &events_vector,
+-				sizeof(events_vector));
++		events_vector = wl1251_mem_read32(wl, wl->mbox_ptr[0]);
+ 		event = events_vector & mask;
+-		wl1251_mem_read(wl, wl->mbox_ptr[1], &events_vector,
+-				sizeof(events_vector));
++		events_vector = wl1251_mem_read32(wl, wl->mbox_ptr[1]);
+ 		event |= events_vector & mask;
+ 	} while (!event);
+ 
+@@ -202,7 +200,7 @@ void wl1251_event_mbox_config(struct wl1251 *wl)
+ 
+ int wl1251_event_handle(struct wl1251 *wl, u8 mbox_num)
+ {
+-	struct event_mailbox mbox;
++	struct event_mailbox *mbox;
+ 	int ret;
+ 
+ 	wl1251_debug(DEBUG_EVENT, "EVENT on mbox %d", mbox_num);
+@@ -210,12 +208,20 @@ int wl1251_event_handle(struct wl1251 *wl, u8 mbox_num)
+ 	if (mbox_num > 1)
+ 		return -EINVAL;
+ 
++	mbox = kmalloc(sizeof(*mbox), GFP_KERNEL);
++	if (!mbox) {
++		wl1251_error("can not allocate mbox buffer");
++		return -ENOMEM;
++	}
++
+ 	/* first we read the mbox descriptor */
+-	wl1251_mem_read(wl, wl->mbox_ptr[mbox_num], &mbox,
+-			    sizeof(struct event_mailbox));
++	wl1251_mem_read(wl, wl->mbox_ptr[mbox_num], mbox,
++			sizeof(*mbox));
+ 
+ 	/* process the descriptor */
+-	ret = wl1251_event_process(wl, &mbox);
++	ret = wl1251_event_process(wl, mbox);
++	kfree(mbox);
++
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/net/wireless/ti/wl1251/io.c b/drivers/net/wireless/ti/wl1251/io.c
+index 5ebe7958ed5c7..e8d567af74b4b 100644
+--- a/drivers/net/wireless/ti/wl1251/io.c
++++ b/drivers/net/wireless/ti/wl1251/io.c
+@@ -121,7 +121,13 @@ void wl1251_set_partition(struct wl1251 *wl,
+ 			  u32 mem_start, u32 mem_size,
+ 			  u32 reg_start, u32 reg_size)
+ {
+-	struct wl1251_partition partition[2];
++	struct wl1251_partition_set *partition;
++
++	partition = kmalloc(sizeof(*partition), GFP_KERNEL);
++	if (!partition) {
++		wl1251_error("can not allocate partition buffer");
++		return;
++	}
+ 
+ 	wl1251_debug(DEBUG_SPI, "mem_start %08X mem_size %08X",
+ 		     mem_start, mem_size);
+@@ -164,10 +170,10 @@ void wl1251_set_partition(struct wl1251 *wl,
+ 			     reg_start, reg_size);
+ 	}
+ 
+-	partition[0].start = mem_start;
+-	partition[0].size  = mem_size;
+-	partition[1].start = reg_start;
+-	partition[1].size  = reg_size;
++	partition->mem.start = mem_start;
++	partition->mem.size  = mem_size;
++	partition->reg.start = reg_start;
++	partition->reg.size  = reg_size;
+ 
+ 	wl->physical_mem_addr = mem_start;
+ 	wl->physical_reg_addr = reg_start;
+@@ -176,5 +182,7 @@ void wl1251_set_partition(struct wl1251 *wl,
+ 	wl->virtual_reg_addr = mem_size;
+ 
+ 	wl->if_ops->write(wl, HW_ACCESS_PART0_SIZE_ADDR, partition,
+-		sizeof(partition));
++		sizeof(*partition));
++
++	kfree(partition);
+ }
+diff --git a/drivers/net/wireless/ti/wl1251/tx.c b/drivers/net/wireless/ti/wl1251/tx.c
+index 98cd39619d579..e9dc3c72bb110 100644
+--- a/drivers/net/wireless/ti/wl1251/tx.c
++++ b/drivers/net/wireless/ti/wl1251/tx.c
+@@ -443,19 +443,25 @@ static void wl1251_tx_packet_cb(struct wl1251 *wl,
+ void wl1251_tx_complete(struct wl1251 *wl)
+ {
+ 	int i, result_index, num_complete = 0, queue_len;
+-	struct tx_result result[FW_TX_CMPLT_BLOCK_SIZE], *result_ptr;
++	struct tx_result *result, *result_ptr;
+ 	unsigned long flags;
+ 
+ 	if (unlikely(wl->state != WL1251_STATE_ON))
+ 		return;
+ 
++	result = kmalloc_array(FW_TX_CMPLT_BLOCK_SIZE, sizeof(*result), GFP_KERNEL);
++	if (!result) {
++		wl1251_error("can not allocate result buffer");
++		return;
++	}
++
+ 	/* First we read the result */
+-	wl1251_mem_read(wl, wl->data_path->tx_complete_addr,
+-			    result, sizeof(result));
++	wl1251_mem_read(wl, wl->data_path->tx_complete_addr, result,
++			FW_TX_CMPLT_BLOCK_SIZE * sizeof(*result));
+ 
+ 	result_index = wl->next_tx_complete;
+ 
+-	for (i = 0; i < ARRAY_SIZE(result); i++) {
++	for (i = 0; i < FW_TX_CMPLT_BLOCK_SIZE; i++) {
+ 		result_ptr = &result[result_index];
+ 
+ 		if (result_ptr->done_1 == 1 &&
+@@ -538,6 +544,7 @@ void wl1251_tx_complete(struct wl1251 *wl)
+ 
+ 	}
+ 
++	kfree(result);
+ 	wl->next_tx_complete = result_index;
+ }
+ 
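
All three wl1251 hunks above share one theme: buffers handed to wl1251_mem_read()/write() move from the stack to the heap, since the underlying bus transfer may DMA into them and structures like event_mailbox are too large for the kernel stack anyway. A compact userspace sketch of the pattern, with the bus transfer stubbed out:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define FW_TX_CMPLT_BLOCK_SIZE 16

    struct tx_result { unsigned char done_1, done_2; /* ... */ };

    /* Stub for the bus transfer; the driver reads chip memory here, and
     * the bus layer may DMA straight into the buffer. */
    static void bus_read(void *buf, size_t len)
    {
            memset(buf, 0, len);
    }

    int main(void)
    {
            /* Heap allocation instead of a large on-stack array. */
            struct tx_result *result =
                    calloc(FW_TX_CMPLT_BLOCK_SIZE, sizeof(*result));

            if (!result)
                    return 1;

            bus_read(result, FW_TX_CMPLT_BLOCK_SIZE * sizeof(*result));
            printf("result[0] done flags: %u/%u\n",
                   result[0].done_1, result[0].done_2);
            free(result);
            return 0;
    }
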
+diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c
+index c922f10d0d7b9..7e213f8ddc98b 100644
+--- a/drivers/nfc/st21nfca/se.c
++++ b/drivers/nfc/st21nfca/se.c
+@@ -241,7 +241,7 @@ int st21nfca_hci_se_io(struct nfc_hci_dev *hdev, u32 se_idx,
+ }
+ EXPORT_SYMBOL(st21nfca_hci_se_io);
+ 
+-static void st21nfca_se_wt_timeout(struct timer_list *t)
++static void st21nfca_se_wt_work(struct work_struct *work)
+ {
+ 	/*
+ 	 * No answer from the secure element
+@@ -254,8 +254,9 @@ static void st21nfca_se_wt_timeout(struct timer_list *t)
+ 	 */
+ 	/* hardware reset managed through VCC_UICC_OUT power supply */
+ 	u8 param = 0x01;
+-	struct st21nfca_hci_info *info = from_timer(info, t,
+-						    se_info.bwi_timer);
++	struct st21nfca_hci_info *info = container_of(work,
++						struct st21nfca_hci_info,
++						se_info.timeout_work);
+ 
+ 	info->se_info.bwi_active = false;
+ 
+@@ -271,6 +272,13 @@ static void st21nfca_se_wt_timeout(struct timer_list *t)
+ 	info->se_info.cb(info->se_info.cb_context, NULL, 0, -ETIME);
+ }
+ 
++static void st21nfca_se_wt_timeout(struct timer_list *t)
++{
++	struct st21nfca_hci_info *info = from_timer(info, t, se_info.bwi_timer);
++
++	schedule_work(&info->se_info.timeout_work);
++}
++
+ static void st21nfca_se_activation_timeout(struct timer_list *t)
+ {
+ 	struct st21nfca_hci_info *info = from_timer(info, t,
+@@ -360,6 +368,7 @@ int st21nfca_apdu_reader_event_received(struct nfc_hci_dev *hdev,
+ 	switch (event) {
+ 	case ST21NFCA_EVT_TRANSMIT_DATA:
+ 		del_timer_sync(&info->se_info.bwi_timer);
++		cancel_work_sync(&info->se_info.timeout_work);
+ 		info->se_info.bwi_active = false;
+ 		r = nfc_hci_send_event(hdev, ST21NFCA_DEVICE_MGNT_GATE,
+ 				ST21NFCA_EVT_SE_END_OF_APDU_TRANSFER, NULL, 0);
+@@ -389,6 +398,7 @@ void st21nfca_se_init(struct nfc_hci_dev *hdev)
+ 	struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev);
+ 
+ 	init_completion(&info->se_info.req_completion);
++	INIT_WORK(&info->se_info.timeout_work, st21nfca_se_wt_work);
+ 	/* initialize timers */
+ 	timer_setup(&info->se_info.bwi_timer, st21nfca_se_wt_timeout, 0);
+ 	info->se_info.bwi_active = false;
+@@ -416,6 +426,7 @@ void st21nfca_se_deinit(struct nfc_hci_dev *hdev)
+ 	if (info->se_info.se_active)
+ 		del_timer_sync(&info->se_info.se_active_timer);
+ 
++	cancel_work_sync(&info->se_info.timeout_work);
+ 	info->se_info.bwi_active = false;
+ 	info->se_info.se_active = false;
+ }
+diff --git a/drivers/nfc/st21nfca/st21nfca.h b/drivers/nfc/st21nfca/st21nfca.h
+index cb6ad916be911..ae6771cc9894a 100644
+--- a/drivers/nfc/st21nfca/st21nfca.h
++++ b/drivers/nfc/st21nfca/st21nfca.h
+@@ -141,6 +141,7 @@ struct st21nfca_se_info {
+ 
+ 	se_io_cb_t cb;
+ 	void *cb_context;
++	struct work_struct timeout_work;
+ };
+ 
+ struct st21nfca_hci_info {
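
The st21nfca change above moves a sleeping operation out of timer context: st21nfca_se_wt_timeout() now only schedules se_info.timeout_work, the HCI I/O runs from the workqueue, and cancel_work_sync() is added on the teardown paths. Roughly the same split, modeled in portable C with a worker thread (the usleep() stands in for the blocking HCI call; names are illustrative):

    #include <stdio.h>
    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static int work_pending;

    /* Worker context: allowed to block, like a kernel workqueue item. */
    static void *timeout_worker(void *arg)
    {
            (void)arg;
            pthread_mutex_lock(&lock);
            while (!work_pending)
                    pthread_cond_wait(&cond, &lock);
            pthread_mutex_unlock(&lock);

            usleep(1000);   /* stands in for the sleeping HCI I/O */
            printf("timeout handled in worker context\n");
            return NULL;
    }

    /* "Timer" context: must not block, so it only flags the work. */
    static void bwi_timer_cb(void)
    {
            pthread_mutex_lock(&lock);
            work_pending = 1;
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
            pthread_t worker;

            pthread_create(&worker, NULL, timeout_worker, NULL);
            bwi_timer_cb();
            pthread_join(worker, NULL);     /* loose cancel_work_sync() analogue */
            return 0;
    }
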
+diff --git a/drivers/nvdimm/core.c b/drivers/nvdimm/core.c
+index 69a03358817f1..681cc28703a3e 100644
+--- a/drivers/nvdimm/core.c
++++ b/drivers/nvdimm/core.c
+@@ -368,9 +368,7 @@ static ssize_t capability_show(struct device *dev,
+ 	if (!nd_desc->fw_ops)
+ 		return -EOPNOTSUPP;
+ 
+-	nvdimm_bus_lock(dev);
+ 	cap = nd_desc->fw_ops->capability(nd_desc);
+-	nvdimm_bus_unlock(dev);
+ 
+ 	switch (cap) {
+ 	case NVDIMM_FWA_CAP_QUIESCE:
+@@ -395,10 +393,8 @@ static ssize_t activate_show(struct device *dev,
+ 	if (!nd_desc->fw_ops)
+ 		return -EOPNOTSUPP;
+ 
+-	nvdimm_bus_lock(dev);
+ 	cap = nd_desc->fw_ops->capability(nd_desc);
+ 	state = nd_desc->fw_ops->activate_state(nd_desc);
+-	nvdimm_bus_unlock(dev);
+ 
+ 	if (cap < NVDIMM_FWA_CAP_QUIESCE)
+ 		return -EOPNOTSUPP;
+@@ -443,7 +439,6 @@ static ssize_t activate_store(struct device *dev,
+ 	else
+ 		return -EINVAL;
+ 
+-	nvdimm_bus_lock(dev);
+ 	state = nd_desc->fw_ops->activate_state(nd_desc);
+ 
+ 	switch (state) {
+@@ -461,7 +456,6 @@ static ssize_t activate_store(struct device *dev,
+ 	default:
+ 		rc = -ENXIO;
+ 	}
+-	nvdimm_bus_unlock(dev);
+ 
+ 	if (rc == 0)
+ 		rc = len;
+@@ -484,10 +478,7 @@ static umode_t nvdimm_bus_firmware_visible(struct kobject *kobj, struct attribut
+ 	if (!nd_desc->fw_ops)
+ 		return 0;
+ 
+-	nvdimm_bus_lock(dev);
+ 	cap = nd_desc->fw_ops->capability(nd_desc);
+-	nvdimm_bus_unlock(dev);
+-
+ 	if (cap < NVDIMM_FWA_CAP_QUIESCE)
+ 		return 0;
+ 
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index 58d95242a836b..4aa17132a5572 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -158,36 +158,20 @@ static blk_status_t pmem_do_write(struct pmem_device *pmem,
+ 			struct page *page, unsigned int page_off,
+ 			sector_t sector, unsigned int len)
+ {
+-	blk_status_t rc = BLK_STS_OK;
+-	bool bad_pmem = false;
+ 	phys_addr_t pmem_off = sector * 512 + pmem->data_offset;
+ 	void *pmem_addr = pmem->virt_addr + pmem_off;
+ 
+-	if (unlikely(is_bad_pmem(&pmem->bb, sector, len)))
+-		bad_pmem = true;
++	if (unlikely(is_bad_pmem(&pmem->bb, sector, len))) {
++		blk_status_t rc = pmem_clear_poison(pmem, pmem_off, len);
++
++		if (rc != BLK_STS_OK)
++			return rc;
++	}
+ 
+-	/*
+-	 * Note that we write the data both before and after
+-	 * clearing poison.  The write before clear poison
+-	 * handles situations where the latest written data is
+-	 * preserved and the clear poison operation simply marks
+-	 * the address range as valid without changing the data.
+-	 * In this case application software can assume that an
+-	 * interrupted write will either return the new good
+-	 * data or an error.
+-	 *
+-	 * However, if pmem_clear_poison() leaves the data in an
+-	 * indeterminate state we need to perform the write
+-	 * after clear poison.
+-	 */
+ 	flush_dcache_page(page);
+ 	write_pmem(pmem_addr, page, page_off, len);
+-	if (unlikely(bad_pmem)) {
+-		rc = pmem_clear_poison(pmem, pmem_off, len);
+-		write_pmem(pmem_addr, page, page_off, len);
+-	}
+ 
+-	return rc;
++	return BLK_STS_OK;
+ }
+ 
+ static void pmem_submit_bio(struct bio *bio)
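
The rewritten pmem_do_write() above drops the old write/clear/re-write dance: if the target range is flagged bad it clears the poison first and fails early if that doesn't succeed, then writes the data exactly once, so the media is never left holding the payload while the range is still marked invalid. The control flow, reduced to a runnable sketch (the poison flag and return codes are stand-ins for the badblocks lookup and BLK_STS_* values):

    #include <stdio.h>
    #include <string.h>

    static int range_is_poisoned;   /* stand-in for the badblocks lookup */

    static int clear_poison(void)
    {
            range_is_poisoned = 0;  /* range is valid again */
            return 0;               /* 0 == success, like BLK_STS_OK */
    }

    /* Fixed flow: clear poison first, fail early, then write exactly once. */
    static int pmem_write(char *media, const char *data, size_t len)
    {
            if (range_is_poisoned && clear_poison() != 0)
                    return -1;
            memcpy(media, data, len);
            return 0;
    }

    int main(void)
    {
            char media[16] = "old";

            range_is_poisoned = 1;
            if (pmem_write(media, "new", 4) == 0)
                    printf("media now: %s\n", media);
            return 0;
    }
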
+diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c
+index 4b80150e4afa7..b5aa55c614616 100644
+--- a/drivers/nvdimm/security.c
++++ b/drivers/nvdimm/security.c
+@@ -379,11 +379,6 @@ static int security_overwrite(struct nvdimm *nvdimm, unsigned int keyid)
+ 			|| !nvdimm->sec.flags)
+ 		return -EOPNOTSUPP;
+ 
+-	if (dev->driver == NULL) {
+-		dev_dbg(dev, "Unable to overwrite while DIMM active.\n");
+-		return -EINVAL;
+-	}
+-
+ 	rc = check_security_state(nvdimm);
+ 	if (rc)
+ 		return rc;
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index e1846d04817f3..2d6a01853109b 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1771,7 +1771,7 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
+ 		blk_queue_max_segments(q, min_t(u32, max_segments, USHRT_MAX));
+ 	}
+ 	blk_queue_virt_boundary(q, NVME_CTRL_PAGE_SIZE - 1);
+-	blk_queue_dma_alignment(q, 7);
++	blk_queue_dma_alignment(q, 3);
+ 	blk_queue_write_cache(q, vwc, vwc);
+ }
+ 
+@@ -3080,10 +3080,6 @@ int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = nvme_init_non_mdts_limits(ctrl);
+-	if (ret < 0)
+-		return ret;
+-
+ 	ret = nvme_configure_apst(ctrl);
+ 	if (ret < 0)
+ 		return ret;
+@@ -4237,11 +4233,26 @@ static void nvme_scan_work(struct work_struct *work)
+ {
+ 	struct nvme_ctrl *ctrl =
+ 		container_of(work, struct nvme_ctrl, scan_work);
++	int ret;
+ 
+ 	/* No tagset on a live ctrl means IO queues could not be created */
+ 	if (ctrl->state != NVME_CTRL_LIVE || !ctrl->tagset)
+ 		return;
+ 
++	/*
++	 * Identify-controller limits can change at controller reset due to a
++	 * new firmware download; although this is not common, we cannot ignore
++	 * such a scenario. The controller's non-MDTS limits are reported in
++	 * units of logical blocks, which depend on the format of the attached
++	 * namespace. Hence re-read the limits at the time of ns allocation.
++	 */
++	ret = nvme_init_non_mdts_limits(ctrl);
++	if (ret < 0) {
++		dev_warn(ctrl->device,
++			"reading non-mdts-limits failed: %d\n", ret);
++		return;
++	}
++
+ 	if (test_and_clear_bit(NVME_AER_NOTICE_NS_CHANGED, &ctrl->events)) {
+ 		dev_info(ctrl->device, "rescanning namespaces.\n");
+ 		nvme_clear_changed_ns_log(ctrl);
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 3aacf1c0d5a5f..17aeb7d5c4852 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1775,6 +1775,7 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev)
+ 		dev->ctrl.admin_q = blk_mq_init_queue(&dev->admin_tagset);
+ 		if (IS_ERR(dev->ctrl.admin_q)) {
+ 			blk_mq_free_tag_set(&dev->admin_tagset);
++			dev->ctrl.admin_q = NULL;
+ 			return -ENOMEM;
+ 		}
+ 		if (!blk_get_queue(dev->ctrl.admin_q)) {
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index ec315b060cd50..0f30496ce80bf 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -1105,6 +1105,9 @@ int __init early_init_dt_scan_memory(void)
+ 		if (type == NULL || strcmp(type, "memory") != 0)
+ 			continue;
+ 
++		if (!of_fdt_device_is_available(fdt, node))
++			continue;
++
+ 		reg = of_get_flat_dt_prop(node, "linux,usable-memory", &l);
+ 		if (reg == NULL)
+ 			reg = of_get_flat_dt_prop(node, "reg", &l);
+diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c
+index b9bd1cff17938..8d374cc552be5 100644
+--- a/drivers/of/kexec.c
++++ b/drivers/of/kexec.c
+@@ -386,6 +386,15 @@ void *of_kexec_alloc_and_setup_fdt(const struct kimage *image,
+ 				crashk_res.end - crashk_res.start + 1);
+ 		if (ret)
+ 			goto out;
++
++		if (crashk_low_res.end) {
++			ret = fdt_appendprop_addrrange(fdt, 0, chosen_node,
++					"linux,usable-memory-range",
++					crashk_low_res.start,
++					crashk_low_res.end - crashk_low_res.start + 1);
++			if (ret)
++				goto out;
++		}
+ 	}
+ 
+ 	/* add bootargs */
+diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c
+index d80160cf34bb7..d1187123c4fc4 100644
+--- a/drivers/of/overlay.c
++++ b/drivers/of/overlay.c
+@@ -170,9 +170,7 @@ static int overlay_notify(struct overlay_changeset *ovcs,
+ 
+ 		ret = blocking_notifier_call_chain(&overlay_notify_chain,
+ 						   action, &nd);
+-		if (ret == NOTIFY_OK || ret == NOTIFY_STOP)
+-			return 0;
+-		if (ret) {
++		if (notifier_to_errno(ret)) {
+ 			ret = notifier_to_errno(ret);
+ 			pr_err("overlay changeset %s notifier error %d, target: %pOF\n",
+ 			       of_overlay_action_name[action], ret, nd.target);
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index 440ab5a03df9f..95b184fc33727 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -437,11 +437,11 @@ static int _bandwidth_supported(struct device *dev, struct opp_table *opp_table)
+ 
+ 	/* Checking only first OPP is sufficient */
+ 	np = of_get_next_available_child(opp_np, NULL);
++	of_node_put(opp_np);
+ 	if (!np) {
+ 		dev_err(dev, "OPP table empty\n");
+ 		return -EINVAL;
+ 	}
+-	of_node_put(opp_np);
+ 
+ 	prop = of_find_property(np, "opp-peak-kBps", NULL);
+ 	of_node_put(np);
+diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
+index 768d33f9ebc87..a82f845cc4b52 100644
+--- a/drivers/pci/controller/cadence/pci-j721e.c
++++ b/drivers/pci/controller/cadence/pci-j721e.c
+@@ -69,6 +69,7 @@ struct j721e_pcie_data {
+ 	enum j721e_pcie_mode	mode;
+ 	unsigned int		quirk_retrain_flag:1;
+ 	unsigned int		quirk_detect_quiet_flag:1;
++	unsigned int		quirk_disable_flr:1;
+ 	u32			linkdown_irq_regfield;
+ 	unsigned int		byte_access_allowed:1;
+ };
+@@ -307,6 +308,7 @@ static const struct j721e_pcie_data j7200_pcie_rc_data = {
+ static const struct j721e_pcie_data j7200_pcie_ep_data = {
+ 	.mode = PCI_MODE_EP,
+ 	.quirk_detect_quiet_flag = true,
++	.quirk_disable_flr = true,
+ };
+ 
+ static const struct j721e_pcie_data am64_pcie_rc_data = {
+@@ -405,6 +407,7 @@ static int j721e_pcie_probe(struct platform_device *pdev)
+ 			return -ENOMEM;
+ 
+ 		ep->quirk_detect_quiet_flag = data->quirk_detect_quiet_flag;
++		ep->quirk_disable_flr = data->quirk_disable_flr;
+ 
+ 		cdns_pcie = &ep->pcie;
+ 		cdns_pcie->dev = dev;
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+index 88e05b9c2e5b8..b8b655d4047ec 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+@@ -187,8 +187,7 @@ static int cdns_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn,
+ 	struct cdns_pcie *pcie = &ep->pcie;
+ 	u32 r;
+ 
+-	r = find_first_zero_bit(&ep->ob_region_map,
+-				sizeof(ep->ob_region_map) * BITS_PER_LONG);
++	r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG);
+ 	if (r >= ep->max_regions - 1) {
+ 		dev_err(&epc->dev, "no free outbound region\n");
+ 		return -EINVAL;
+@@ -565,7 +564,8 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
+ 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
+ 	struct cdns_pcie *pcie = &ep->pcie;
+ 	struct device *dev = pcie->dev;
+-	int ret;
++	int max_epfs = sizeof(epc->function_num_map) * 8;
++	int ret, value, epf;
+ 
+ 	/*
+ 	 * BIT(0) is hardwired to 1, hence function 0 is always enabled
+@@ -573,6 +573,21 @@ static int cdns_pcie_ep_start(struct pci_epc *epc)
+ 	 */
+ 	cdns_pcie_writel(pcie, CDNS_PCIE_LM_EP_FUNC_CFG, epc->function_num_map);
+ 
++	if (ep->quirk_disable_flr) {
++		for (epf = 0; epf < max_epfs; epf++) {
++			if (!(epc->function_num_map & BIT(epf)))
++				continue;
++
++			value = cdns_pcie_ep_fn_readl(pcie, epf,
++					CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET +
++					PCI_EXP_DEVCAP);
++			value &= ~PCI_EXP_DEVCAP_FLR;
++			cdns_pcie_ep_fn_writel(pcie, epf,
++					CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET +
++					PCI_EXP_DEVCAP, value);
++		}
++	}
++
+ 	ret = cdns_pcie_start_link(pcie);
+ 	if (ret) {
+ 		dev_err(dev, "Failed to start link\n");
+diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
+index c8a27b6290cea..d9c785365da3b 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence.h
++++ b/drivers/pci/controller/cadence/pcie-cadence.h
+@@ -123,6 +123,7 @@
+ 
+ #define CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET	0x90
+ #define CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET	0xb0
++#define CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET	0xc0
+ #define CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET	0x200
+ 
+ /*
+@@ -357,6 +358,7 @@ struct cdns_pcie_epf {
+  *        minimize time between read and write
+  * @epf: Structure to hold info about endpoint function
+  * @quirk_detect_quiet_flag: LTSSM Detect Quiet min delay set as quirk
++ * @quirk_disable_flr: Disable FLR (Function Level Reset) quirk flag
+  */
+ struct cdns_pcie_ep {
+ 	struct cdns_pcie	pcie;
+@@ -372,6 +374,7 @@ struct cdns_pcie_ep {
+ 	spinlock_t		lock;
+ 	struct cdns_pcie_epf	*epf;
+ 	unsigned int		quirk_detect_quiet_flag:1;
++	unsigned int		quirk_disable_flr:1;
+ };
+ 
+ 
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index 6619e3caffe2d..7a285fb0f6199 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -408,6 +408,11 @@ static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie)
+ 			dev_err(dev, "failed to disable vpcie regulator: %d\n",
+ 				ret);
+ 	}
++
++	/* Some boards don't have PCIe reset GPIO. */
++	if (gpio_is_valid(imx6_pcie->reset_gpio))
++		gpio_set_value_cansleep(imx6_pcie->reset_gpio,
++					imx6_pcie->gpio_active_high);
+ }
+ 
+ static unsigned int imx6_pcie_grp_offset(const struct imx6_pcie *imx6_pcie)
+@@ -540,15 +545,6 @@ static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie)
+ 	/* allow the clocks to stabilize */
+ 	usleep_range(200, 500);
+ 
+-	/* Some boards don't have PCIe reset GPIO. */
+-	if (gpio_is_valid(imx6_pcie->reset_gpio)) {
+-		gpio_set_value_cansleep(imx6_pcie->reset_gpio,
+-					imx6_pcie->gpio_active_high);
+-		msleep(100);
+-		gpio_set_value_cansleep(imx6_pcie->reset_gpio,
+-					!imx6_pcie->gpio_active_high);
+-	}
+-
+ 	switch (imx6_pcie->drvdata->variant) {
+ 	case IMX8MQ:
+ 		reset_control_deassert(imx6_pcie->pciephy_reset);
+@@ -595,6 +591,15 @@ static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie)
+ 		break;
+ 	}
+ 
++	/* Some boards don't have PCIe reset GPIO. */
++	if (gpio_is_valid(imx6_pcie->reset_gpio)) {
++		msleep(100);
++		gpio_set_value_cansleep(imx6_pcie->reset_gpio,
++					!imx6_pcie->gpio_active_high);
++		/* Wait for 100ms after PERST# deassertion (PCIe r5.0, 6.6.1) */
++		msleep(100);
++	}
++
+ 	return;
+ 
+ err_ref_clk:
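
The imx6 reordering above asserts the PERST# GPIO while the core is still in reset and only releases it after clocks and PHY are up, then waits the 100 ms that PCIe r5.0 sec 6.6.1 requires before configuration requests. The sequence, reduced to a sketch (the GPIO call is a stub; the delays mirror the msleep(100) pairs in the hunk):

    #include <stdio.h>
    #include <unistd.h>

    static int perst_asserted;

    static void perst_gpio_set(int assert)
    {
            perst_asserted = assert;   /* stub for gpio_set_value_cansleep() */
    }

    int main(void)
    {
            perst_gpio_set(1);      /* assert PERST# while the core resets */
            /* ...enable clocks, bring up the PHY, deassert core resets... */
            usleep(100 * 1000);     /* let power and clocks settle */

            perst_gpio_set(0);      /* release PERST# */
            usleep(100 * 1000);     /* PCIe r5.0 sec 6.6.1: wait 100 ms
                                     * before configuration requests */
            printf("PERST# deasserted (%d); link training may begin\n",
                   perst_asserted);
            return 0;
    }
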
+diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
+index 2fa86f32d9642..9979302532b72 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-host.c
++++ b/drivers/pci/controller/dwc/pcie-designware-host.c
+@@ -396,7 +396,8 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 						      sizeof(pp->msi_msg),
+ 						      DMA_FROM_DEVICE,
+ 						      DMA_ATTR_SKIP_CPU_SYNC);
+-			if (dma_mapping_error(pci->dev, pp->msi_data)) {
++			ret = dma_mapping_error(pci->dev, pp->msi_data);
++			if (ret) {
+ 				dev_err(pci->dev, "Failed to map MSI data\n");
+ 				pp->msi_data = 0;
+ 				goto err_free_msi;
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 816028c0f6edb..ed55421eb9ba9 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1238,12 +1238,6 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
+ 		goto err_disable_clocks;
+ 	}
+ 
+-	ret = clk_prepare_enable(res->pipe_clk);
+-	if (ret) {
+-		dev_err(dev, "cannot prepare/enable pipe clock\n");
+-		goto err_disable_clocks;
+-	}
+-
+ 	/* Wait for reset to complete, required on SM8450 */
+ 	usleep_range(1000, 1500);
+ 
+@@ -1627,22 +1621,21 @@ static int qcom_pcie_probe(struct platform_device *pdev)
+ 	pp->ops = &qcom_pcie_dw_ops;
+ 
+ 	ret = phy_init(pcie->phy);
+-	if (ret) {
+-		pm_runtime_disable(&pdev->dev);
++	if (ret)
+ 		goto err_pm_runtime_put;
+-	}
+ 
+ 	platform_set_drvdata(pdev, pcie);
+ 
+ 	ret = dw_pcie_host_init(pp);
+ 	if (ret) {
+ 		dev_err(dev, "cannot initialize host\n");
+-		pm_runtime_disable(&pdev->dev);
+-		goto err_pm_runtime_put;
++		goto err_phy_exit;
+ 	}
+ 
+ 	return 0;
+ 
++err_phy_exit:
++	phy_exit(pcie->phy);
+ err_pm_runtime_put:
+ 	pm_runtime_put(dev);
+ 	pm_runtime_disable(dev);
+diff --git a/drivers/pci/controller/pcie-mediatek-gen3.c b/drivers/pci/controller/pcie-mediatek-gen3.c
+index 3e8d70bfabc6a..5d9fd36b02d18 100644
+--- a/drivers/pci/controller/pcie-mediatek-gen3.c
++++ b/drivers/pci/controller/pcie-mediatek-gen3.c
+@@ -838,6 +838,14 @@ static int mtk_pcie_setup(struct mtk_gen3_pcie *pcie)
+ 	if (err)
+ 		return err;
+ 
++	/*
++	 * The controller may have been left out of reset by the bootloader
++	 * so make sure that we get a clean start by asserting resets here.
++	 */
++	reset_control_assert(pcie->phy_reset);
++	reset_control_assert(pcie->mac_reset);
++	usleep_range(10, 20);
++
+ 	/* Don't touch the hardware registers before power up */
+ 	err = mtk_pcie_power_up(pcie);
+ 	if (err)
+diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
+index ddfbd4aebdeca..be8bd919cb88f 100644
+--- a/drivers/pci/controller/pcie-mediatek.c
++++ b/drivers/pci/controller/pcie-mediatek.c
+@@ -1008,6 +1008,7 @@ static int mtk_pcie_subsys_powerup(struct mtk_pcie *pcie)
+ 					   "mediatek,generic-pciecfg");
+ 	if (cfg_node) {
+ 		pcie->cfg = syscon_node_to_regmap(cfg_node);
++		of_node_put(cfg_node);
+ 		if (IS_ERR(pcie->cfg))
+ 			return PTR_ERR(pcie->cfg);
+ 	}
+diff --git a/drivers/pci/controller/pcie-microchip-host.c b/drivers/pci/controller/pcie-microchip-host.c
+index 29d8e81e41810..2c52a8cef7260 100644
+--- a/drivers/pci/controller/pcie-microchip-host.c
++++ b/drivers/pci/controller/pcie-microchip-host.c
+@@ -406,6 +406,7 @@ static void mc_pcie_enable_msi(struct mc_pcie *port, void __iomem *base)
+ static void mc_handle_msi(struct irq_desc *desc)
+ {
+ 	struct mc_pcie *port = irq_desc_get_handler_data(desc);
++	struct irq_chip *chip = irq_desc_get_chip(desc);
+ 	struct device *dev = port->dev;
+ 	struct mc_msi *msi = &port->msi;
+ 	void __iomem *bridge_base_addr =
+@@ -414,8 +415,11 @@ static void mc_handle_msi(struct irq_desc *desc)
+ 	u32 bit;
+ 	int ret;
+ 
++	chained_irq_enter(chip, desc);
++
+ 	status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
+ 	if (status & PM_MSI_INT_MSI_MASK) {
++		writel_relaxed(status & PM_MSI_INT_MSI_MASK, bridge_base_addr + ISTATUS_LOCAL);
+ 		status = readl_relaxed(bridge_base_addr + ISTATUS_MSI);
+ 		for_each_set_bit(bit, &status, msi->num_vectors) {
+ 			ret = generic_handle_domain_irq(msi->dev_domain, bit);
+@@ -424,6 +428,8 @@ static void mc_handle_msi(struct irq_desc *desc)
+ 						    bit);
+ 		}
+ 	}
++
++	chained_irq_exit(chip, desc);
+ }
+ 
+ static void mc_msi_bottom_irq_ack(struct irq_data *data)
+@@ -432,13 +438,8 @@ static void mc_msi_bottom_irq_ack(struct irq_data *data)
+ 	void __iomem *bridge_base_addr =
+ 		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+ 	u32 bitpos = data->hwirq;
+-	unsigned long status;
+ 
+ 	writel_relaxed(BIT(bitpos), bridge_base_addr + ISTATUS_MSI);
+-	status = readl_relaxed(bridge_base_addr + ISTATUS_MSI);
+-	if (!status)
+-		writel_relaxed(BIT(PM_MSI_INT_MSI_SHIFT),
+-			       bridge_base_addr + ISTATUS_LOCAL);
+ }
+ 
+ static void mc_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+@@ -563,6 +564,7 @@ static int mc_allocate_msi_domains(struct mc_pcie *port)
+ static void mc_handle_intx(struct irq_desc *desc)
+ {
+ 	struct mc_pcie *port = irq_desc_get_handler_data(desc);
++	struct irq_chip *chip = irq_desc_get_chip(desc);
+ 	struct device *dev = port->dev;
+ 	void __iomem *bridge_base_addr =
+ 		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+@@ -570,6 +572,8 @@ static void mc_handle_intx(struct irq_desc *desc)
+ 	u32 bit;
+ 	int ret;
+ 
++	chained_irq_enter(chip, desc);
++
+ 	status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
+ 	if (status & PM_MSI_INT_INTX_MASK) {
+ 		status &= PM_MSI_INT_INTX_MASK;
+@@ -581,6 +585,8 @@ static void mc_handle_intx(struct irq_desc *desc)
+ 						    bit);
+ 		}
+ 	}
++
++	chained_irq_exit(chip, desc);
+ }
+ 
+ static void mc_ack_intx_irq(struct irq_data *data)
+diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c
+index 5fb9ce6e536e0..d1a200b93b2bf 100644
+--- a/drivers/pci/controller/pcie-rockchip-ep.c
++++ b/drivers/pci/controller/pcie-rockchip-ep.c
+@@ -264,8 +264,7 @@ static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn,
+ 	struct rockchip_pcie *pcie = &ep->rockchip;
+ 	u32 r;
+ 
+-	r = find_first_zero_bit(&ep->ob_region_map,
+-				sizeof(ep->ob_region_map) * BITS_PER_LONG);
++	r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG);
+ 	/*
+ 	 * Region 0 is reserved for configuration space and shouldn't
+ 	 * be used elsewhere per TRM, so leave it out.
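
The Cadence and Rockchip endpoint hunks above fix the same off-by-8x: ob_region_map is a single unsigned long, so its size in bits is BITS_PER_LONG, while the old expression multiplied the byte count by BITS_PER_LONG and told find_first_zero_bit() the bitmap was eight times larger than it really is. A sketch that makes the two bounds visible (the helper is a minimal single-word stand-in for the kernel function):

    #include <stdio.h>
    #include <limits.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    /* Minimal single-word find_first_zero_bit(), bounded by 'size' bits. */
    static unsigned int find_first_zero_bit(unsigned long word,
                                            unsigned int size)
    {
            unsigned int bit;

            for (bit = 0; bit < size; bit++)
                    if (!(word & (1UL << bit)))
                            return bit;
            return size;            /* no free bit within bounds */
    }

    int main(void)
    {
            unsigned long ob_region_map = 0x7;      /* regions 0..2 in use */

            /* The old bound: bytes * bits-per-long, e.g. 8 * 64 = 512. */
            printf("buggy bound:   %zu bits\n",
                   sizeof(ob_region_map) * BITS_PER_LONG);
            printf("correct bound: %zu bits, first free region: %u\n",
                   BITS_PER_LONG,
                   find_first_zero_bit(ob_region_map, BITS_PER_LONG));
            return 0;
    }
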
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index 1f15ab7eabf81..3ae435beaf0a9 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -974,9 +974,11 @@ bool acpi_pci_power_manageable(struct pci_dev *dev)
+ 
+ bool acpi_pci_bridge_d3(struct pci_dev *dev)
+ {
+-	const union acpi_object *obj;
+-	struct acpi_device *adev;
+ 	struct pci_dev *rpdev;
++	struct acpi_device *adev;
++	acpi_status status;
++	unsigned long long state;
++	const union acpi_object *obj;
+ 
+ 	if (acpi_pci_disabled || !dev->is_hotplug_bridge)
+ 		return false;
+@@ -985,12 +987,6 @@ bool acpi_pci_bridge_d3(struct pci_dev *dev)
+ 	if (acpi_pci_power_manageable(dev))
+ 		return true;
+ 
+-	/*
+-	 * The ACPI firmware will provide the device-specific properties through
+-	 * _DSD configuration object. Look for the 'HotPlugSupportInD3' property
+-	 * for the root port and if it is set we know the hierarchy behind it
+-	 * supports D3 just fine.
+-	 */
+ 	rpdev = pcie_find_root_port(dev);
+ 	if (!rpdev)
+ 		return false;
+@@ -999,11 +995,34 @@ bool acpi_pci_bridge_d3(struct pci_dev *dev)
+ 	if (!adev)
+ 		return false;
+ 
+-	if (acpi_dev_get_property(adev, "HotPlugSupportInD3",
+-				   ACPI_TYPE_INTEGER, &obj) < 0)
++	/*
++	 * If the Root Port cannot signal wakeup at all, i.e., it
++	 * doesn't supply a wakeup GPE via _PRW, it cannot signal hotplug
++	 * events from low-power states including D3hot and D3cold.
++	 */
++	if (!adev->wakeup.flags.valid)
+ 		return false;
+ 
+-	return obj->integer.value == 1;
++	/*
++	 * If the Root Port cannot wake itself from D3hot or D3cold, we
++	 * can't use D3.
++	 */
++	status = acpi_evaluate_integer(adev->handle, "_S0W", NULL, &state);
++	if (ACPI_SUCCESS(status) && state < ACPI_STATE_D3_HOT)
++		return false;
++
++	/*
++	 * The "HotPlugSupportInD3" property in a Root Port _DSD indicates
++	 * the Port can signal hotplug events while in D3.  We assume any
++	 * bridges *below* that Root Port can also signal hotplug events
++	 * while in D3.
++	 */
++	if (!acpi_dev_get_property(adev, "HotPlugSupportInD3",
++				   ACPI_TYPE_INTEGER, &obj) &&
++	    obj->integer.value == 1)
++		return true;
++
++	return false;
+ }
+ 
+ int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state)
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index d25122fbe98ab..8ac110d6c6f4b 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -2920,6 +2920,8 @@ static const struct dmi_system_id bridge_d3_blacklist[] = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co., Ltd."),
+ 			DMI_MATCH(DMI_BOARD_NAME, "X299 DESIGNARE EX-CF"),
+ 		},
++	},
++	{
+ 		/*
+ 		 * Downstream device is not accessible after putting a root port
+ 		 * into D3cold and back into D0 on Elo i2.
+@@ -5113,19 +5115,19 @@ static int pci_reset_bus_function(struct pci_dev *dev, bool probe)
+ 
+ void pci_dev_lock(struct pci_dev *dev)
+ {
+-	pci_cfg_access_lock(dev);
+ 	/* block PM suspend, driver probe, etc. */
+ 	device_lock(&dev->dev);
++	pci_cfg_access_lock(dev);
+ }
+ EXPORT_SYMBOL_GPL(pci_dev_lock);
+ 
+ /* Return 1 on successful lock, 0 on contention */
+ int pci_dev_trylock(struct pci_dev *dev)
+ {
+-	if (pci_cfg_access_trylock(dev)) {
+-		if (device_trylock(&dev->dev))
++	if (device_trylock(&dev->dev)) {
++		if (pci_cfg_access_trylock(dev))
+ 			return 1;
+-		pci_cfg_access_unlock(dev);
++		device_unlock(&dev->dev);
+ 	}
+ 
+ 	return 0;
+@@ -5134,8 +5136,8 @@ EXPORT_SYMBOL_GPL(pci_dev_trylock);
+ 
+ void pci_dev_unlock(struct pci_dev *dev)
+ {
+-	device_unlock(&dev->dev);
+ 	pci_cfg_access_unlock(dev);
++	device_unlock(&dev->dev);
+ }
+ EXPORT_SYMBOL_GPL(pci_dev_unlock);
+ 
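
The pci_dev_lock() family above is reordered so that every path takes device_lock before pci_cfg_access_lock and releases in reverse, and the trylock variant backs out cleanly on contention; a single consistent order across lock/trylock/unlock is what rules out an ABBA deadlock against paths that already hold the device lock. The same discipline in a runnable pthread sketch:

    #include <stdio.h>
    #include <pthread.h>

    static pthread_mutex_t device_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t cfg_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Every path acquires device_lock first, then cfg_lock... */
    static void dev_lock(void)
    {
            pthread_mutex_lock(&device_lock);
            pthread_mutex_lock(&cfg_lock);
    }

    /* ...and releases in the opposite order. */
    static void dev_unlock(void)
    {
            pthread_mutex_unlock(&cfg_lock);
            pthread_mutex_unlock(&device_lock);
    }

    /* The trylock variant backs out the first lock on contention,
     * preserving the same acquisition order. */
    static int dev_trylock(void)
    {
            if (pthread_mutex_trylock(&device_lock))
                    return 0;
            if (pthread_mutex_trylock(&cfg_lock)) {
                    pthread_mutex_unlock(&device_lock);
                    return 0;
            }
            return 1;
    }

    int main(void)
    {
            dev_lock();
            dev_unlock();
            printf("trylock: %d\n", dev_trylock());
            dev_unlock();
            return 0;
    }
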
+diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
+index 9fa1f97e5b270..7952e5efd6cf3 100644
+--- a/drivers/pci/pcie/aer.c
++++ b/drivers/pci/pcie/aer.c
+@@ -101,6 +101,11 @@ struct aer_stats {
+ #define ERR_COR_ID(d)			(d & 0xffff)
+ #define ERR_UNCOR_ID(d)			(d >> 16)
+ 
++#define AER_ERR_STATUS_MASK		(PCI_ERR_ROOT_UNCOR_RCV |	\
++					PCI_ERR_ROOT_COR_RCV |		\
++					PCI_ERR_ROOT_MULTI_COR_RCV |	\
++					PCI_ERR_ROOT_MULTI_UNCOR_RCV)
++
+ static int pcie_aer_disable;
+ static pci_ers_result_t aer_root_reset(struct pci_dev *dev);
+ 
+@@ -1196,7 +1201,7 @@ static irqreturn_t aer_irq(int irq, void *context)
+ 	struct aer_err_source e_src = {};
+ 
+ 	pci_read_config_dword(rp, aer + PCI_ERR_ROOT_STATUS, &e_src.status);
+-	if (!(e_src.status & (PCI_ERR_ROOT_UNCOR_RCV|PCI_ERR_ROOT_COR_RCV)))
++	if (!(e_src.status & AER_ERR_STATUS_MASK))
+ 		return IRQ_NONE;
+ 
+ 	pci_read_config_dword(rp, aer + PCI_ERR_ROOT_ERR_SRC, &e_src.id);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index da829274fc66d..41aeaa2351322 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -12,6 +12,7 @@
+  * file, where their drivers can use them.
+  */
+ 
++#include <linux/bitfield.h>
+ #include <linux/types.h>
+ #include <linux/kernel.h>
+ #include <linux/export.h>
+@@ -5895,3 +5896,49 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1533, rom_bar_overlap_defect);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1536, rom_bar_overlap_defect);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1537, rom_bar_overlap_defect);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1538, rom_bar_overlap_defect);
++
++#ifdef CONFIG_PCIEASPM
++/*
++ * Several Intel DG2 graphics devices advertise that they can only tolerate
++ * 1us latency when transitioning from L1 to L0, which may prevent ASPM L1
++ * from being enabled.  But in fact these devices can tolerate unlimited
++ * latency.  Override their Device Capabilities value to allow ASPM L1 to
++ * be enabled.
++ */
++static void aspm_l1_acceptable_latency(struct pci_dev *dev)
++{
++	u32 l1_lat = FIELD_GET(PCI_EXP_DEVCAP_L1, dev->devcap);
++
++	if (l1_lat < 7) {
++		dev->devcap |= FIELD_PREP(PCI_EXP_DEVCAP_L1, 7);
++		pci_info(dev, "ASPM: overriding L1 acceptable latency from %#x to 0x7\n",
++			 l1_lat);
++	}
++}
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f80, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f81, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f82, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f83, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f84, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f85, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f86, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f87, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x4f88, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5690, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5691, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5692, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5693, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5694, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x5695, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a0, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a1, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a2, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a3, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a4, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a5, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56a6, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56b0, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56b1, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56c0, aspm_l1_acceptable_latency);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x56c1, aspm_l1_acceptable_latency);
++#endif
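
A note on the quirk above (a sketch, not part of the patch): PCI_EXP_DEVCAP_L1 is the 3-bit L1 Acceptable Latency field of the Device Capabilities register, encoded from 0 (at most 1us tolerated) up to 7 (no limit), and the <linux/bitfield.h> helpers extract and build field values from a mask. A minimal illustration against a hand-rolled mask; the name L1_LATENCY and the function are invented here, with the field position assumed to match the real define:

	#include <linux/bits.h>
	#include <linux/bitfield.h>

	#define L1_LATENCY	GENMASK(11, 9)	/* stand-in for PCI_EXP_DEVCAP_L1 */

	static u32 force_unlimited_l1(u32 devcap)
	{
		/* decode the 3-bit field: 0 = 1us tolerated ... 7 = no limit */
		u32 l1_lat = FIELD_GET(L1_LATENCY, devcap);

		if (l1_lat < 7)
			devcap |= FIELD_PREP(L1_LATENCY, 7);
		return devcap;
	}

ORing the new value in, as the quirk does, is only correct because the maximum encoding sets every bit of the field; writing an arbitrary value would require clearing the field first.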
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c
+index b144ae1f729ab..9afac02e0eaa2 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp.c
+@@ -5818,6 +5818,11 @@ static const struct phy_ops qcom_qmp_pcie_ufs_ops = {
+ 	.owner		= THIS_MODULE,
+ };
+ 
++static void qcom_qmp_reset_control_put(void *data)
++{
++	reset_control_put(data);
++}
++
+ static
+ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id,
+ 			void __iomem *serdes, const struct qmp_phy_cfg *cfg)
+@@ -5890,7 +5895,7 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id,
+ 	 * all phys that don't need this.
+ 	 */
+ 	snprintf(prop_name, sizeof(prop_name), "pipe%d", id);
+-	qphy->pipe_clk = of_clk_get_by_name(np, prop_name);
++	qphy->pipe_clk = devm_get_clk_from_child(dev, np, prop_name);
+ 	if (IS_ERR(qphy->pipe_clk)) {
+ 		if (cfg->type == PHY_TYPE_PCIE ||
+ 		    cfg->type == PHY_TYPE_USB3) {
+@@ -5912,6 +5917,10 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id,
+ 			dev_err(dev, "failed to get lane%d reset\n", id);
+ 			return PTR_ERR(qphy->lane_rst);
+ 		}
++		ret = devm_add_action_or_reset(dev, qcom_qmp_reset_control_put,
++					       qphy->lane_rst);
++		if (ret)
++			return ret;
+ 	}
+ 
+ 	if (cfg->type == PHY_TYPE_UFS || cfg->type == PHY_TYPE_PCIE)
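
Both qcom-qmp changes above tie cleanup to the device lifetime: the pipe clock now comes from devm_get_clk_from_child() (released automatically), and the lane reset reference is handed to devm_add_action_or_reset(), which either registers reset_control_put() to run at unbind time or, if registration itself fails, calls it immediately and returns the error. The general shape of that pattern, sketched outside the driver (function names and the "lane" reset id are invented):

	#include <linux/device.h>
	#include <linux/of.h>
	#include <linux/reset.h>

	static void sketch_reset_put(void *data)
	{
		reset_control_put(data);	/* runs on unbind or probe failure */
	}

	static int sketch_grab_reset(struct device *dev, struct device_node *np)
	{
		struct reset_control *rst;

		rst = of_reset_control_get_exclusive(np, "lane");
		if (IS_ERR(rst))
			return PTR_ERR(rst);

		/* no explicit error-path cleanup is needed past this point */
		return devm_add_action_or_reset(dev, sketch_reset_put, rst);
	}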
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+index 47e433e09c5ce..dad4530547768 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -358,6 +358,22 @@ static int bcm2835_gpio_direction_output(struct gpio_chip *chip,
+ 	return 0;
+ }
+ 
++static int bcm2835_of_gpio_ranges_fallback(struct gpio_chip *gc,
++					   struct device_node *np)
++{
++	struct pinctrl_dev *pctldev = of_pinctrl_get(np);
++
++	of_node_put(np);
++
++	if (!pctldev)
++		return 0;
++
++	gpiochip_add_pin_range(gc, pinctrl_dev_get_devname(pctldev), 0, 0,
++			       gc->ngpio);
++
++	return 0;
++}
++
+ static const struct gpio_chip bcm2835_gpio_chip = {
+ 	.label = MODULE_NAME,
+ 	.owner = THIS_MODULE,
+@@ -372,6 +388,7 @@ static const struct gpio_chip bcm2835_gpio_chip = {
+ 	.base = -1,
+ 	.ngpio = BCM2835_NUM_GPIOS,
+ 	.can_sleep = false,
++	.of_gpio_ranges_fallback = bcm2835_of_gpio_ranges_fallback,
+ };
+ 
+ static const struct gpio_chip bcm2711_gpio_chip = {
+@@ -388,6 +405,7 @@ static const struct gpio_chip bcm2711_gpio_chip = {
+ 	.base = -1,
+ 	.ngpio = BCM2711_NUM_GPIOS,
+ 	.can_sleep = false,
++	.of_gpio_ranges_fallback = bcm2835_of_gpio_ranges_fallback,
+ };
+ 
+ static void bcm2835_gpio_irq_handle_bank(struct bcm2835_pinctrl *pc,
+diff --git a/drivers/pinctrl/mediatek/Kconfig b/drivers/pinctrl/mediatek/Kconfig
+index 40accd110c3d8..b3074082c56d3 100644
+--- a/drivers/pinctrl/mediatek/Kconfig
++++ b/drivers/pinctrl/mediatek/Kconfig
+@@ -166,6 +166,7 @@ config PINCTRL_MT8195
+ 	bool "Mediatek MT8195 pin control"
+ 	depends on OF
+ 	depends on ARM64 || COMPILE_TEST
++	default ARM64 && ARCH_MEDIATEK
+ 	select PINCTRL_MTK_PARIS
+ 
+ config PINCTRL_MT8365
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+index 08cad14042e2e..adccf03b3e5af 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c
+@@ -773,7 +773,7 @@ static int armada_37xx_irqchip_register(struct platform_device *pdev,
+ 	for (i = 0; i < nr_irq_parent; i++) {
+ 		int irq = irq_of_parse_and_map(np, i);
+ 
+-		if (irq < 0)
++		if (!irq)
+ 			continue;
+ 		girq->parents[i] = irq;
+ 	}
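
The armada-37xx fix above corrects a sign confusion: irq_of_parse_and_map() returns an unsigned virq and uses 0, not a negative errno, to signal failure, so the old "irq < 0" test was dead code. For contrast, a sketch of the two common lookup APIs side by side (the function is hypothetical):

	#include <linux/of_irq.h>
	#include <linux/platform_device.h>

	static int sketch_get_irqs(struct platform_device *pdev,
				   struct device_node *np)
	{
		unsigned int virq = irq_of_parse_and_map(np, 0);
		int irq;

		if (!virq)		/* 0 means "no mapping"; never negative */
			return -ENXIO;

		irq = platform_get_irq(pdev, 0);
		if (irq < 0)		/* this one does return -errno */
			return irq;

		return 0;
	}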
+diff --git a/drivers/pinctrl/pinctrl-apple-gpio.c b/drivers/pinctrl/pinctrl-apple-gpio.c
+index 72f4dd2466e11..6d1bff9588d99 100644
+--- a/drivers/pinctrl/pinctrl-apple-gpio.c
++++ b/drivers/pinctrl/pinctrl-apple-gpio.c
+@@ -72,6 +72,7 @@ struct regmap_config regmap_config = {
+ 	.max_register = 512 * sizeof(u32),
+ 	.num_reg_defaults_raw = 512,
+ 	.use_relaxed_mmio = true,
++	.use_raw_spinlock = true,
+ };
+ 
+ /* No locking needed to mask/unmask IRQs as the interrupt mode is per pin-register. */
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index 2cb79e649fcf3..e1f58451984ff 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -2110,19 +2110,20 @@ static bool rockchip_pinconf_pull_valid(struct rockchip_pin_ctrl *ctrl,
+ 	return false;
+ }
+ 
+-static int rockchip_pinconf_defer_output(struct rockchip_pin_bank *bank,
+-					 unsigned int pin, u32 arg)
++static int rockchip_pinconf_defer_pin(struct rockchip_pin_bank *bank,
++					 unsigned int pin, u32 param, u32 arg)
+ {
+-	struct rockchip_pin_output_deferred *cfg;
++	struct rockchip_pin_deferred *cfg;
+ 
+ 	cfg = kzalloc(sizeof(*cfg), GFP_KERNEL);
+ 	if (!cfg)
+ 		return -ENOMEM;
+ 
+ 	cfg->pin = pin;
++	cfg->param = param;
+ 	cfg->arg = arg;
+ 
+-	list_add_tail(&cfg->head, &bank->deferred_output);
++	list_add_tail(&cfg->head, &bank->deferred_pins);
+ 
+ 	return 0;
+ }
+@@ -2143,6 +2144,25 @@ static int rockchip_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ 		param = pinconf_to_config_param(configs[i]);
+ 		arg = pinconf_to_config_argument(configs[i]);
+ 
++		if (param == PIN_CONFIG_OUTPUT || param == PIN_CONFIG_INPUT_ENABLE) {
++			/*
++			 * Check for gpio driver not being probed yet.
++			 * The lock makes sure that either gpio-probe has completed
++			 * or the gpio driver hasn't probed yet.
++			 */
++			mutex_lock(&bank->deferred_lock);
++			if (!gpio || !gpio->direction_output) {
++				rc = rockchip_pinconf_defer_pin(bank, pin - bank->pin_base, param,
++								arg);
++				mutex_unlock(&bank->deferred_lock);
++				if (rc)
++					return rc;
++
++				break;
++			}
++			mutex_unlock(&bank->deferred_lock);
++		}
++
+ 		switch (param) {
+ 		case PIN_CONFIG_BIAS_DISABLE:
+ 			rc =  rockchip_set_pull(bank, pin - bank->pin_base,
+@@ -2171,27 +2191,21 @@ static int rockchip_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ 			if (rc != RK_FUNC_GPIO)
+ 				return -EINVAL;
+ 
+-			/*
+-			 * Check for gpio driver not being probed yet.
+-			 * The lock makes sure that either gpio-probe has completed
+-			 * or the gpio driver hasn't probed yet.
+-			 */
+-			mutex_lock(&bank->deferred_lock);
+-			if (!gpio || !gpio->direction_output) {
+-				rc = rockchip_pinconf_defer_output(bank, pin - bank->pin_base, arg);
+-				mutex_unlock(&bank->deferred_lock);
+-				if (rc)
+-					return rc;
+-
+-				break;
+-			}
+-			mutex_unlock(&bank->deferred_lock);
+-
+ 			rc = gpio->direction_output(gpio, pin - bank->pin_base,
+ 						    arg);
+ 			if (rc)
+ 				return rc;
+ 			break;
++		case PIN_CONFIG_INPUT_ENABLE:
++			rc = rockchip_set_mux(bank, pin - bank->pin_base,
++					      RK_FUNC_GPIO);
++			if (rc != RK_FUNC_GPIO)
++				return -EINVAL;
++
++			rc = gpio->direction_input(gpio, pin - bank->pin_base);
++			if (rc)
++				return rc;
++			break;
+ 		case PIN_CONFIG_DRIVE_STRENGTH:
+ 			/* rk3288 is the first with per-pin drive-strength */
+ 			if (!info->ctrl->drv_calc_reg)
+@@ -2500,7 +2514,7 @@ static int rockchip_pinctrl_register(struct platform_device *pdev,
+ 			pdesc++;
+ 		}
+ 
+-		INIT_LIST_HEAD(&pin_bank->deferred_output);
++		INIT_LIST_HEAD(&pin_bank->deferred_pins);
+ 		mutex_init(&pin_bank->deferred_lock);
+ 	}
+ 
+@@ -2763,7 +2777,7 @@ static int rockchip_pinctrl_remove(struct platform_device *pdev)
+ {
+ 	struct rockchip_pinctrl *info = platform_get_drvdata(pdev);
+ 	struct rockchip_pin_bank *bank;
+-	struct rockchip_pin_output_deferred *cfg;
++	struct rockchip_pin_deferred *cfg;
+ 	int i;
+ 
+ 	of_platform_depopulate(&pdev->dev);
+@@ -2772,9 +2786,9 @@ static int rockchip_pinctrl_remove(struct platform_device *pdev)
+ 		bank = &info->ctrl->pin_banks[i];
+ 
+ 		mutex_lock(&bank->deferred_lock);
+-		while (!list_empty(&bank->deferred_output)) {
+-			cfg = list_first_entry(&bank->deferred_output,
+-					       struct rockchip_pin_output_deferred, head);
++		while (!list_empty(&bank->deferred_pins)) {
++			cfg = list_first_entry(&bank->deferred_pins,
++					       struct rockchip_pin_deferred, head);
+ 			list_del(&cfg->head);
+ 			kfree(cfg);
+ 		}
+diff --git a/drivers/pinctrl/pinctrl-rockchip.h b/drivers/pinctrl/pinctrl-rockchip.h
+index 91f10279d0844..98a01a616da67 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.h
++++ b/drivers/pinctrl/pinctrl-rockchip.h
+@@ -171,7 +171,7 @@ struct rockchip_pin_bank {
+ 	u32				toggle_edge_mode;
+ 	u32				recalced_mask;
+ 	u32				route_mask;
+-	struct list_head		deferred_output;
++	struct list_head		deferred_pins;
+ 	struct mutex			deferred_lock;
+ };
+ 
+@@ -247,9 +247,12 @@ struct rockchip_pin_config {
+ 	unsigned int		nconfigs;
+ };
+ 
+-struct rockchip_pin_output_deferred {
++enum pin_config_param;
++
++struct rockchip_pin_deferred {
+ 	struct list_head head;
+ 	unsigned int pin;
++	enum pin_config_param param;
+ 	u32 arg;
+ };
+ 
+diff --git a/drivers/pinctrl/renesas/core.c b/drivers/pinctrl/renesas/core.c
+index d0d4714731c14..3d8bf521c3e77 100644
+--- a/drivers/pinctrl/renesas/core.c
++++ b/drivers/pinctrl/renesas/core.c
+@@ -71,12 +71,11 @@ static int sh_pfc_map_resources(struct sh_pfc *pfc,
+ 
+ 	/* Fill them. */
+ 	for (i = 0; i < num_windows; i++) {
+-		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+-		windows->phys = res->start;
+-		windows->size = resource_size(res);
+-		windows->virt = devm_ioremap_resource(pfc->dev, res);
++		windows->virt = devm_platform_get_and_ioremap_resource(pdev, i, &res);
+ 		if (IS_ERR(windows->virt))
+ 			return -ENOMEM;
++		windows->phys = res->start;
++		windows->size = resource_size(res);
+ 		windows++;
+ 	}
+ 	for (i = 0; i < num_irqs; i++)
+diff --git a/drivers/pinctrl/renesas/pfc-r8a779a0.c b/drivers/pinctrl/renesas/pfc-r8a779a0.c
+index 4a668a04b7ca6..0c26e95ba7db1 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a779a0.c
++++ b/drivers/pinctrl/renesas/pfc-r8a779a0.c
+@@ -629,7 +629,36 @@ enum {
+ };
+ 
+ static const u16 pinmux_data[] = {
++/* Using GP_2_[2-15] requires disabling I2C in MOD_SEL2 */
++#define GP_2_2_FN	GP_2_2_FN,	FN_SEL_I2C0_0
++#define GP_2_3_FN	GP_2_3_FN,	FN_SEL_I2C0_0
++#define GP_2_4_FN	GP_2_4_FN,	FN_SEL_I2C1_0
++#define GP_2_5_FN	GP_2_5_FN,	FN_SEL_I2C1_0
++#define GP_2_6_FN	GP_2_6_FN,	FN_SEL_I2C2_0
++#define GP_2_7_FN	GP_2_7_FN,	FN_SEL_I2C2_0
++#define GP_2_8_FN	GP_2_8_FN,	FN_SEL_I2C3_0
++#define GP_2_9_FN	GP_2_9_FN,	FN_SEL_I2C3_0
++#define GP_2_10_FN	GP_2_10_FN,	FN_SEL_I2C4_0
++#define GP_2_11_FN	GP_2_11_FN,	FN_SEL_I2C4_0
++#define GP_2_12_FN	GP_2_12_FN,	FN_SEL_I2C5_0
++#define GP_2_13_FN	GP_2_13_FN,	FN_SEL_I2C5_0
++#define GP_2_14_FN	GP_2_14_FN,	FN_SEL_I2C6_0
++#define GP_2_15_FN	GP_2_15_FN,	FN_SEL_I2C6_0
+ 	PINMUX_DATA_GP_ALL(),
++#undef GP_2_2_FN
++#undef GP_2_3_FN
++#undef GP_2_4_FN
++#undef GP_2_5_FN
++#undef GP_2_6_FN
++#undef GP_2_7_FN
++#undef GP_2_8_FN
++#undef GP_2_9_FN
++#undef GP_2_10_FN
++#undef GP_2_11_FN
++#undef GP_2_12_FN
++#undef GP_2_13_FN
++#undef GP_2_14_FN
++#undef GP_2_15_FN
+ 
+ 	PINMUX_SINGLE(MMC_D7),
+ 	PINMUX_SINGLE(MMC_D6),
+diff --git a/drivers/pinctrl/renesas/pfc-r8a779f0.c b/drivers/pinctrl/renesas/pfc-r8a779f0.c
+index 91860608242c5..3b4ca9622bbe1 100644
+--- a/drivers/pinctrl/renesas/pfc-r8a779f0.c
++++ b/drivers/pinctrl/renesas/pfc-r8a779f0.c
+@@ -257,7 +257,28 @@ enum {
+ };
+ 
+ static const u16 pinmux_data[] = {
++/* Using GP_1_[0-9] requires disabling I2C in MOD_SEL1 */
++#define GP_1_0_FN	GP_1_0_FN,	FN_SEL_I2C0_0
++#define GP_1_1_FN	GP_1_1_FN,	FN_SEL_I2C0_0
++#define GP_1_2_FN	GP_1_2_FN,	FN_SEL_I2C1_0
++#define GP_1_3_FN	GP_1_3_FN,	FN_SEL_I2C1_0
++#define GP_1_4_FN	GP_1_4_FN,	FN_SEL_I2C2_0
++#define GP_1_5_FN	GP_1_5_FN,	FN_SEL_I2C2_0
++#define GP_1_6_FN	GP_1_6_FN,	FN_SEL_I2C3_0
++#define GP_1_7_FN	GP_1_7_FN,	FN_SEL_I2C3_0
++#define GP_1_8_FN	GP_1_8_FN,	FN_SEL_I2C4_0
++#define GP_1_9_FN	GP_1_9_FN,	FN_SEL_I2C4_0
+ 	PINMUX_DATA_GP_ALL(),
++#undef GP_1_0_FN
++#undef GP_1_1_FN
++#undef GP_1_2_FN
++#undef GP_1_3_FN
++#undef GP_1_4_FN
++#undef GP_1_5_FN
++#undef GP_1_6_FN
++#undef GP_1_7_FN
++#undef GP_1_8_FN
++#undef GP_1_9_FN
+ 
+ 	PINMUX_SINGLE(SD_WP),
+ 	PINMUX_SINGLE(SD_CD),
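
Both pfc hunks above rely on the same preprocessor trick: an entry macro is temporarily redefined in terms of itself plus an extra FN_SEL_* token, wrapped around a single expansion of PINMUX_DATA_GP_ALL(), then #undef'd so later tables are unaffected. This works because the C preprocessor never re-expands a macro name found inside its own expansion, so the inner name is left alone and resolves to the enum constant of the same name. A tiny standalone demonstration (illustrative only):

	enum { FOO = 1, EXTRA = 2 };

	#define FOO	FOO, EXTRA	/* self-reference is not re-expanded */

	static const int table[] = { FOO };	/* { FOO, EXTRA }, i.e. { 1, 2 } */

	#undef FOO			/* later uses of FOO are untouched */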
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzn1.c b/drivers/pinctrl/renesas/pinctrl-rzn1.c
+index ef5fb25b6016d..849d091205d4d 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzn1.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzn1.c
+@@ -865,17 +865,15 @@ static int rzn1_pinctrl_probe(struct platform_device *pdev)
+ 	ipctl->mdio_func[0] = -1;
+ 	ipctl->mdio_func[1] = -1;
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	ipctl->lev1_protect_phys = (u32)res->start + 0x400;
+-	ipctl->lev1 = devm_ioremap_resource(&pdev->dev, res);
++	ipctl->lev1 = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ 	if (IS_ERR(ipctl->lev1))
+ 		return PTR_ERR(ipctl->lev1);
++	ipctl->lev1_protect_phys = (u32)res->start + 0x400;
+ 
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+-	ipctl->lev2_protect_phys = (u32)res->start + 0x400;
+-	ipctl->lev2 = devm_ioremap_resource(&pdev->dev, res);
++	ipctl->lev2 = devm_platform_get_and_ioremap_resource(pdev, 1, &res);
+ 	if (IS_ERR(ipctl->lev2))
+ 		return PTR_ERR(ipctl->lev2);
++	ipctl->lev2_protect_phys = (u32)res->start + 0x400;
+ 
+ 	ipctl->clk = devm_clk_get(&pdev->dev, NULL);
+ 	if (IS_ERR(ipctl->clk))
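
The renesas/core.c and pinctrl-rzn1 hunks fix the same ordering hazard: the old code read res->start before checking that the mapping succeeded, and platform_get_resource() itself can return NULL. With devm_platform_get_and_ioremap_resource(), the resource pointer is only trustworthy after the IS_ERR() check on the returned mapping. The safe shape, sketched with an invented helper:

	#include <linux/err.h>
	#include <linux/io.h>
	#include <linux/platform_device.h>

	static int sketch_map_regs(struct platform_device *pdev,
				   void __iomem **base, resource_size_t *phys)
	{
		struct resource *res;

		*base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
		if (IS_ERR(*base))		/* check before touching res */
			return PTR_ERR(*base);

		*phys = res->start;		/* res is valid only on success */
		return 0;
	}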
+diff --git a/drivers/platform/chrome/cros_ec.c b/drivers/platform/chrome/cros_ec.c
+index d49a4efe46c8a..a5cc8f24299eb 100644
+--- a/drivers/platform/chrome/cros_ec.c
++++ b/drivers/platform/chrome/cros_ec.c
+@@ -189,6 +189,8 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ 	ec_dev->max_request = sizeof(struct ec_params_hello);
+ 	ec_dev->max_response = sizeof(struct ec_response_get_protocol_info);
+ 	ec_dev->max_passthru = 0;
++	ec_dev->ec = NULL;
++	ec_dev->pd = NULL;
+ 
+ 	ec_dev->din = devm_kzalloc(dev, ec_dev->din_size, GFP_KERNEL);
+ 	if (!ec_dev->din)
+@@ -245,18 +247,16 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ 		if (IS_ERR(ec_dev->pd)) {
+ 			dev_err(ec_dev->dev,
+ 				"Failed to create CrOS PD platform device\n");
+-			platform_device_unregister(ec_dev->ec);
+-			return PTR_ERR(ec_dev->pd);
++			err = PTR_ERR(ec_dev->pd);
++			goto exit;
+ 		}
+ 	}
+ 
+ 	if (IS_ENABLED(CONFIG_OF) && dev->of_node) {
+ 		err = devm_of_platform_populate(dev);
+ 		if (err) {
+-			platform_device_unregister(ec_dev->pd);
+-			platform_device_unregister(ec_dev->ec);
+ 			dev_err(dev, "Failed to register sub-devices\n");
+-			return err;
++			goto exit;
+ 		}
+ 	}
+ 
+@@ -278,7 +278,7 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ 		err = blocking_notifier_chain_register(&ec_dev->event_notifier,
+ 						      &ec_dev->notifier_ready);
+ 		if (err)
+-			return err;
++			goto exit;
+ 	}
+ 
+ 	dev_info(dev, "Chrome EC device registered\n");
+@@ -291,6 +291,10 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ 		cros_ec_irq_thread(0, ec_dev);
+ 
+ 	return 0;
++exit:
++	platform_device_unregister(ec_dev->ec);
++	platform_device_unregister(ec_dev->pd);
++	return err;
+ }
+ EXPORT_SYMBOL(cros_ec_register);
+ 
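The cros_ec_register() rework replaces per-callsite unwinding with a single exit label, and clearing ec_dev->ec and ec_dev->pd up front is what makes that safe: platform_device_unregister() checks its argument internally, so the call is a no-op for a child device that was never created. In outline (a sketch of the control flow, not the full function):

	/* at entry: mark both children as "not registered yet" */
	ec_dev->ec = NULL;
	ec_dev->pd = NULL;

	/* ... any later failure jumps here ... */
	exit:
		/* each call is a no-op for a still-NULL field */
		platform_device_unregister(ec_dev->ec);
		platform_device_unregister(ec_dev->pd);
		return err;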
+diff --git a/drivers/platform/chrome/cros_ec_chardev.c b/drivers/platform/chrome/cros_ec_chardev.c
+index e0bce869c49a9..fd33de546aee0 100644
+--- a/drivers/platform/chrome/cros_ec_chardev.c
++++ b/drivers/platform/chrome/cros_ec_chardev.c
+@@ -301,7 +301,7 @@ static long cros_ec_chardev_ioctl_xcmd(struct cros_ec_dev *ec, void __user *arg)
+ 	}
+ 
+ 	s_cmd->command += ec->cmd_offset;
+-	ret = cros_ec_cmd_xfer_status(ec->ec_dev, s_cmd);
++	ret = cros_ec_cmd_xfer(ec->ec_dev, s_cmd);
+ 	/* Only copy data to userland if data was received. */
+ 	if (ret < 0)
+ 		goto exit;
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index c4caf2e2de825..ac1419881ff35 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -560,22 +560,28 @@ exit:
+ EXPORT_SYMBOL(cros_ec_query_all);
+ 
+ /**
+- * cros_ec_cmd_xfer_status() - Send a command to the ChromeOS EC.
++ * cros_ec_cmd_xfer() - Send a command to the ChromeOS EC.
+  * @ec_dev: EC device.
+  * @msg: Message to write.
+  *
+- * Call this to send a command to the ChromeOS EC. This should be used instead of calling the EC's
+- * cmd_xfer() callback directly. It returns success status only if both the command was transmitted
+- * successfully and the EC replied with success status.
++ * Call this to send a command to the ChromeOS EC. This should be used instead
++ * of calling the EC's cmd_xfer() callback directly. This function does not
++ * convert EC command execution error codes to Linux error codes. Most
++ * in-kernel users will want to use cros_ec_cmd_xfer_status() instead since
++ * that function implements the conversion.
+  *
+  * Return:
+- * >=0 - The number of bytes transferred
+- * <0 - Linux error code
++ * >0 - EC command was executed successfully. The return value is the number
++ *      of bytes returned by the EC (excluding the header).
++ * =0 - EC communication was successful. EC command execution results are
++ *      reported in msg->result. The result will be EC_RES_SUCCESS if the
++ *      command was executed successfully or report an EC command execution
++ *      error.
++ * <0 - EC communication error. Return value is the Linux error code.
+  */
+-int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev,
+-			    struct cros_ec_command *msg)
++int cros_ec_cmd_xfer(struct cros_ec_device *ec_dev, struct cros_ec_command *msg)
+ {
+-	int ret, mapped;
++	int ret;
+ 
+ 	mutex_lock(&ec_dev->lock);
+ 	if (ec_dev->proto_version == EC_PROTO_VERSION_UNKNOWN) {
+@@ -616,6 +622,32 @@ int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev,
+ 	ret = send_command(ec_dev, msg);
+ 	mutex_unlock(&ec_dev->lock);
+ 
++	return ret;
++}
++EXPORT_SYMBOL(cros_ec_cmd_xfer);
++
++/**
++ * cros_ec_cmd_xfer_status() - Send a command to the ChromeOS EC.
++ * @ec_dev: EC device.
++ * @msg: Message to write.
++ *
++ * Call this to send a command to the ChromeOS EC. This should be used instead of calling the EC's
++ * cmd_xfer() callback directly. It returns success status only if both the command was transmitted
++ * successfully and the EC replied with success status.
++ *
++ * Return:
++ * >=0 - The number of bytes transferred.
++ * <0 - Linux error code
++ */
++int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev,
++			    struct cros_ec_command *msg)
++{
++	int ret, mapped;
++
++	ret = cros_ec_cmd_xfer(ec_dev, msg);
++	if (ret < 0)
++		return ret;
++
+ 	mapped = cros_ec_map_error(msg->result);
+ 	if (mapped) {
+ 		dev_dbg(ec_dev->dev, "Command result (err: %d [%d])\n",
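
After the split above, cros_ec_cmd_xfer() reports transport status only and leaves the EC's own result code in msg->result, while cros_ec_cmd_xfer_status() additionally maps that result into a Linux errno; the chardev ioctl switches to the raw variant so userspace sees EC result codes unmapped. A caller-side sketch of the distinction (functions invented; declarations assumed to come from linux/platform_data/cros_ec_proto.h):

	#include <linux/platform_data/cros_ec_proto.h>

	/* raw variant: transport status only; EC status left in msg->result */
	static int sketch_raw(struct cros_ec_device *ec,
			      struct cros_ec_command *msg)
	{
		int ret = cros_ec_cmd_xfer(ec, msg);

		if (ret < 0)
			return ret;		/* communication failed */
		return msg->result;		/* e.g. EC_RES_SUCCESS, unmapped */
	}

	/* mapped variant: EC errors already folded into a Linux errno */
	static int sketch_mapped(struct cros_ec_device *ec,
				 struct cros_ec_command *msg)
	{
		int ret = cros_ec_cmd_xfer_status(ec, msg);

		return ret < 0 ? ret : 0;	/* ret >= 0: response byte count */
	}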
+diff --git a/drivers/platform/mips/cpu_hwmon.c b/drivers/platform/mips/cpu_hwmon.c
+index 386389ffec419..d8c5f9195f85f 100644
+--- a/drivers/platform/mips/cpu_hwmon.c
++++ b/drivers/platform/mips/cpu_hwmon.c
+@@ -55,55 +55,6 @@ out:
+ static int nr_packages;
+ static struct device *cpu_hwmon_dev;
+ 
+-static SENSOR_DEVICE_ATTR(name, 0444, NULL, NULL, 0);
+-
+-static struct attribute *cpu_hwmon_attributes[] = {
+-	&sensor_dev_attr_name.dev_attr.attr,
+-	NULL
+-};
+-
+-/* Hwmon device attribute group */
+-static struct attribute_group cpu_hwmon_attribute_group = {
+-	.attrs = cpu_hwmon_attributes,
+-};
+-
+-static ssize_t get_cpu_temp(struct device *dev,
+-			struct device_attribute *attr, char *buf);
+-static ssize_t cpu_temp_label(struct device *dev,
+-			struct device_attribute *attr, char *buf);
+-
+-static SENSOR_DEVICE_ATTR(temp1_input, 0444, get_cpu_temp, NULL, 1);
+-static SENSOR_DEVICE_ATTR(temp1_label, 0444, cpu_temp_label, NULL, 1);
+-static SENSOR_DEVICE_ATTR(temp2_input, 0444, get_cpu_temp, NULL, 2);
+-static SENSOR_DEVICE_ATTR(temp2_label, 0444, cpu_temp_label, NULL, 2);
+-static SENSOR_DEVICE_ATTR(temp3_input, 0444, get_cpu_temp, NULL, 3);
+-static SENSOR_DEVICE_ATTR(temp3_label, 0444, cpu_temp_label, NULL, 3);
+-static SENSOR_DEVICE_ATTR(temp4_input, 0444, get_cpu_temp, NULL, 4);
+-static SENSOR_DEVICE_ATTR(temp4_label, 0444, cpu_temp_label, NULL, 4);
+-
+-static const struct attribute *hwmon_cputemp[4][3] = {
+-	{
+-		&sensor_dev_attr_temp1_input.dev_attr.attr,
+-		&sensor_dev_attr_temp1_label.dev_attr.attr,
+-		NULL
+-	},
+-	{
+-		&sensor_dev_attr_temp2_input.dev_attr.attr,
+-		&sensor_dev_attr_temp2_label.dev_attr.attr,
+-		NULL
+-	},
+-	{
+-		&sensor_dev_attr_temp3_input.dev_attr.attr,
+-		&sensor_dev_attr_temp3_label.dev_attr.attr,
+-		NULL
+-	},
+-	{
+-		&sensor_dev_attr_temp4_input.dev_attr.attr,
+-		&sensor_dev_attr_temp4_label.dev_attr.attr,
+-		NULL
+-	}
+-};
+-
+ static ssize_t cpu_temp_label(struct device *dev,
+ 			struct device_attribute *attr, char *buf)
+ {
+@@ -121,24 +72,47 @@ static ssize_t get_cpu_temp(struct device *dev,
+ 	return sprintf(buf, "%d\n", value);
+ }
+ 
+-static int create_sysfs_cputemp_files(struct kobject *kobj)
+-{
+-	int i, ret = 0;
+-
+-	for (i = 0; i < nr_packages; i++)
+-		ret = sysfs_create_files(kobj, hwmon_cputemp[i]);
++static SENSOR_DEVICE_ATTR(temp1_input, 0444, get_cpu_temp, NULL, 1);
++static SENSOR_DEVICE_ATTR(temp1_label, 0444, cpu_temp_label, NULL, 1);
++static SENSOR_DEVICE_ATTR(temp2_input, 0444, get_cpu_temp, NULL, 2);
++static SENSOR_DEVICE_ATTR(temp2_label, 0444, cpu_temp_label, NULL, 2);
++static SENSOR_DEVICE_ATTR(temp3_input, 0444, get_cpu_temp, NULL, 3);
++static SENSOR_DEVICE_ATTR(temp3_label, 0444, cpu_temp_label, NULL, 3);
++static SENSOR_DEVICE_ATTR(temp4_input, 0444, get_cpu_temp, NULL, 4);
++static SENSOR_DEVICE_ATTR(temp4_label, 0444, cpu_temp_label, NULL, 4);
+ 
+-	return ret;
+-}
++static struct attribute *cpu_hwmon_attributes[] = {
++	&sensor_dev_attr_temp1_input.dev_attr.attr,
++	&sensor_dev_attr_temp1_label.dev_attr.attr,
++	&sensor_dev_attr_temp2_input.dev_attr.attr,
++	&sensor_dev_attr_temp2_label.dev_attr.attr,
++	&sensor_dev_attr_temp3_input.dev_attr.attr,
++	&sensor_dev_attr_temp3_label.dev_attr.attr,
++	&sensor_dev_attr_temp4_input.dev_attr.attr,
++	&sensor_dev_attr_temp4_label.dev_attr.attr,
++	NULL
++};
+ 
+-static void remove_sysfs_cputemp_files(struct kobject *kobj)
++static umode_t cpu_hwmon_is_visible(struct kobject *kobj,
++				    struct attribute *attr, int i)
+ {
+-	int i;
++	int id = i / 2;
+ 
+-	for (i = 0; i < nr_packages; i++)
+-		sysfs_remove_files(kobj, hwmon_cputemp[i]);
++	if (id < nr_packages)
++		return attr->mode;
++	return 0;
+ }
+ 
++static struct attribute_group cpu_hwmon_group = {
++	.attrs = cpu_hwmon_attributes,
++	.is_visible = cpu_hwmon_is_visible,
++};
++
++static const struct attribute_group *cpu_hwmon_groups[] = {
++	&cpu_hwmon_group,
++	NULL
++};
++
+ #define CPU_THERMAL_THRESHOLD 90000
+ static struct delayed_work thermal_work;
+ 
+@@ -159,50 +133,31 @@ static void do_thermal_timer(struct work_struct *work)
+ 
+ static int __init loongson_hwmon_init(void)
+ {
+-	int ret;
+-
+ 	pr_info("Loongson Hwmon Enter...\n");
+ 
+ 	if (cpu_has_csr())
+ 		csr_temp_enable = csr_readl(LOONGSON_CSR_FEATURES) &
+ 				  LOONGSON_CSRF_TEMP;
+ 
+-	cpu_hwmon_dev = hwmon_device_register_with_info(NULL, "cpu_hwmon", NULL, NULL, NULL);
+-	if (IS_ERR(cpu_hwmon_dev)) {
+-		ret = PTR_ERR(cpu_hwmon_dev);
+-		pr_err("hwmon_device_register fail!\n");
+-		goto fail_hwmon_device_register;
+-	}
+-
+ 	nr_packages = loongson_sysconf.nr_cpus /
+ 		loongson_sysconf.cores_per_package;
+ 
+-	ret = create_sysfs_cputemp_files(&cpu_hwmon_dev->kobj);
+-	if (ret) {
+-		pr_err("fail to create cpu temperature interface!\n");
+-		goto fail_create_sysfs_cputemp_files;
++	cpu_hwmon_dev = hwmon_device_register_with_groups(NULL, "cpu_hwmon",
++							  NULL, cpu_hwmon_groups);
++	if (IS_ERR(cpu_hwmon_dev)) {
++		pr_err("hwmon_device_register fail!\n");
++		return PTR_ERR(cpu_hwmon_dev);
+ 	}
+ 
+ 	INIT_DEFERRABLE_WORK(&thermal_work, do_thermal_timer);
+ 	schedule_delayed_work(&thermal_work, msecs_to_jiffies(20000));
+ 
+-	return ret;
+-
+-fail_create_sysfs_cputemp_files:
+-	sysfs_remove_group(&cpu_hwmon_dev->kobj,
+-				&cpu_hwmon_attribute_group);
+-	hwmon_device_unregister(cpu_hwmon_dev);
+-
+-fail_hwmon_device_register:
+-	return ret;
++	return 0;
+ }
+ 
+ static void __exit loongson_hwmon_exit(void)
+ {
+ 	cancel_delayed_work_sync(&thermal_work);
+-	remove_sysfs_cputemp_files(&cpu_hwmon_dev->kobj);
+-	sysfs_remove_group(&cpu_hwmon_dev->kobj,
+-				&cpu_hwmon_attribute_group);
+ 	hwmon_device_unregister(cpu_hwmon_dev);
+ }
+ 
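The cpu_hwmon conversion drops the manual sysfs_create_files()/sysfs_remove_files() bookkeeping in favor of one attribute array plus an .is_visible callback, letting hwmon_device_register_with_groups() create and tear down exactly the attributes that apply. The callback contract: return 0 to hide attribute i, or its mode to expose it; since the array holds (input, label) pairs, i/2 is the package index. A compact sketch of the wiring (names invented):

	#include <linux/hwmon.h>
	#include <linux/sysfs.h>

	static int nr_packages;			/* probed at init time */

	static struct attribute *sketch_attrs[] = {
		/* (tempN_input, tempN_label) pairs would go here */
		NULL
	};

	static umode_t sketch_is_visible(struct kobject *kobj,
					 struct attribute *attr, int i)
	{
		/* pair i/2 belongs to package i/2; returning 0 hides it */
		return (i / 2 < nr_packages) ? attr->mode : 0;
	}

	static const struct attribute_group sketch_group = {
		.attrs = sketch_attrs,
		.is_visible = sketch_is_visible,
	};
	__ATTRIBUTE_GROUPS(sketch);	/* defines sketch_groups[] for
					   hwmon_device_register_with_groups() */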
+diff --git a/drivers/platform/x86/intel/chtwc_int33fe.c b/drivers/platform/x86/intel/chtwc_int33fe.c
+index 0de509fbf0209..c52ac23e23315 100644
+--- a/drivers/platform/x86/intel/chtwc_int33fe.c
++++ b/drivers/platform/x86/intel/chtwc_int33fe.c
+@@ -389,6 +389,8 @@ static int cht_int33fe_typec_probe(struct platform_device *pdev)
+ 		goto out_unregister_fusb302;
+ 	}
+ 
++	platform_set_drvdata(pdev, data);
++
+ 	return 0;
+ 
+ out_unregister_fusb302:
+diff --git a/drivers/platform/x86/intel/hid.c b/drivers/platform/x86/intel/hid.c
+index 2def562c6e1de..216d31e3403dd 100644
+--- a/drivers/platform/x86/intel/hid.c
++++ b/drivers/platform/x86/intel/hid.c
+@@ -238,7 +238,7 @@ static bool intel_hid_evaluate_method(acpi_handle handle,
+ 
+ 	method_name = (char *)intel_hid_dsm_fn_to_method[fn_index];
+ 
+-	if (!(intel_hid_dsm_fn_mask & fn_index))
++	if (!(intel_hid_dsm_fn_mask & BIT(fn_index)))
+ 		goto skip_dsm_eval;
+ 
+ 	obj = acpi_evaluate_dsm_typed(handle, &intel_dsm_guid,
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index d2553970a67ba..c4d844ffad7a6 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -2133,10 +2133,13 @@ struct regulator *_regulator_get(struct device *dev, const char *id,
+ 		rdev->exclusive = 1;
+ 
+ 		ret = _regulator_is_enabled(rdev);
+-		if (ret > 0)
++		if (ret > 0) {
+ 			rdev->use_count = 1;
+-		else
++			regulator->enable_count = 1;
++		} else {
+ 			rdev->use_count = 0;
++			regulator->enable_count = 0;
++		}
+ 	}
+ 
+ 	link = device_link_add(dev, &rdev->dev, DL_FLAG_STATELESS);
+diff --git a/drivers/regulator/da9121-regulator.c b/drivers/regulator/da9121-regulator.c
+index eb9df485bd8aa..76e0e23bf598c 100644
+--- a/drivers/regulator/da9121-regulator.c
++++ b/drivers/regulator/da9121-regulator.c
+@@ -1030,6 +1030,8 @@ static int da9121_assign_chip_model(struct i2c_client *i2c,
+ 		chip->variant_id = DA9121_TYPE_DA9142;
+ 		regmap = &da9121_2ch_regmap_config;
+ 		break;
++	default:
++		return -EINVAL;
+ 	}
+ 
+ 	/* Set these up for of_regulator_match call which may want .of_map_modes */
+diff --git a/drivers/regulator/pfuze100-regulator.c b/drivers/regulator/pfuze100-regulator.c
+index d60d7d1b7fa25..aa55cfca9e400 100644
+--- a/drivers/regulator/pfuze100-regulator.c
++++ b/drivers/regulator/pfuze100-regulator.c
+@@ -521,6 +521,7 @@ static int pfuze_parse_regulators_dt(struct pfuze_chip *chip)
+ 	parent = of_get_child_by_name(np, "regulators");
+ 	if (!parent) {
+ 		dev_err(dev, "regulators node not found\n");
++		of_node_put(np);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -550,6 +551,7 @@ static int pfuze_parse_regulators_dt(struct pfuze_chip *chip)
+ 	}
+ 
+ 	of_node_put(parent);
++	of_node_put(np);
+ 	if (ret < 0) {
+ 		dev_err(dev, "Error parsing regulator init data: %d\n",
+ 			ret);
+diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c
+index 8490aa8eecb1a..7dff94a2eb7e9 100644
+--- a/drivers/regulator/qcom_smd-regulator.c
++++ b/drivers/regulator/qcom_smd-regulator.c
+@@ -944,32 +944,31 @@ static const struct rpm_regulator_data rpm_pm8950_regulators[] = {
+ 	{ "s2", QCOM_SMD_RPM_SMPA, 2, &pm8950_hfsmps, "vdd_s2" },
+ 	{ "s3", QCOM_SMD_RPM_SMPA, 3, &pm8950_hfsmps, "vdd_s3" },
+ 	{ "s4", QCOM_SMD_RPM_SMPA, 4, &pm8950_hfsmps, "vdd_s4" },
+-	{ "s5", QCOM_SMD_RPM_SMPA, 5, &pm8950_ftsmps2p5, "vdd_s5" },
++	/* S5 is managed via SPMI. */
+ 	{ "s6", QCOM_SMD_RPM_SMPA, 6, &pm8950_hfsmps, "vdd_s6" },
+ 
+ 	{ "l1", QCOM_SMD_RPM_LDOA, 1, &pm8950_ult_nldo, "vdd_l1_l19" },
+ 	{ "l2", QCOM_SMD_RPM_LDOA, 2, &pm8950_ult_nldo, "vdd_l2_l23" },
+ 	{ "l3", QCOM_SMD_RPM_LDOA, 3, &pm8950_ult_nldo, "vdd_l3" },
+-	{ "l4", QCOM_SMD_RPM_LDOA, 4, &pm8950_ult_pldo, "vdd_l4_l5_l6_l7_l16" },
+-	{ "l5", QCOM_SMD_RPM_LDOA, 5, &pm8950_pldo_lv, "vdd_l4_l5_l6_l7_l16" },
+-	{ "l6", QCOM_SMD_RPM_LDOA, 6, &pm8950_pldo_lv, "vdd_l4_l5_l6_l7_l16" },
+-	{ "l7", QCOM_SMD_RPM_LDOA, 7, &pm8950_pldo_lv, "vdd_l4_l5_l6_l7_l16" },
++	/* L4 seems not to exist. */
++	{ "l5", QCOM_SMD_RPM_LDOA, 5, &pm8950_pldo_lv, "vdd_l5_l6_l7_l16" },
++	{ "l6", QCOM_SMD_RPM_LDOA, 6, &pm8950_pldo_lv, "vdd_l5_l6_l7_l16" },
++	{ "l7", QCOM_SMD_RPM_LDOA, 7, &pm8950_pldo_lv, "vdd_l5_l6_l7_l16" },
+ 	{ "l8", QCOM_SMD_RPM_LDOA, 8, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" },
+ 	{ "l9", QCOM_SMD_RPM_LDOA, 9, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" },
+ 	{ "l10", QCOM_SMD_RPM_LDOA, 10, &pm8950_ult_nldo, "vdd_l9_l10_l13_l14_l15_l18"},
+-	{ "l11", QCOM_SMD_RPM_LDOA, 11, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22"},
+-	{ "l12", QCOM_SMD_RPM_LDOA, 12, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22"},
+-	{ "l13", QCOM_SMD_RPM_LDOA, 13, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"},
+-	{ "l14", QCOM_SMD_RPM_LDOA, 14, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"},
+-	{ "l15", QCOM_SMD_RPM_LDOA, 15, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"},
+-	{ "l16", QCOM_SMD_RPM_LDOA, 16, &pm8950_ult_pldo, "vdd_l4_l5_l6_l7_l16"},
+-	{ "l17", QCOM_SMD_RPM_LDOA, 17, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22"},
+-	{ "l18", QCOM_SMD_RPM_LDOA, 18, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"},
+-	{ "l19", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l1_l19"},
+-	{ "l20", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l20"},
+-	{ "l21", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l21"},
+-	{ "l22", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l8_l11_l12_l17_l22"},
+-	{ "l23", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l2_l23"},
++	{ "l11", QCOM_SMD_RPM_LDOA, 11, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" },
++	{ "l12", QCOM_SMD_RPM_LDOA, 12, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" },
++	{ "l13", QCOM_SMD_RPM_LDOA, 13, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" },
++	{ "l14", QCOM_SMD_RPM_LDOA, 14, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" },
++	{ "l15", QCOM_SMD_RPM_LDOA, 15, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" },
++	{ "l16", QCOM_SMD_RPM_LDOA, 16, &pm8950_ult_pldo, "vdd_l5_l6_l7_l16" },
++	{ "l17", QCOM_SMD_RPM_LDOA, 17, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" },
++	/* L18 seems not to exist. */
++	{ "l19", QCOM_SMD_RPM_LDOA, 19, &pm8950_pldo, "vdd_l1_l19" },
++	/* L20 & L21 seem not to exist. */
++	{ "l22", QCOM_SMD_RPM_LDOA, 22, &pm8950_pldo, "vdd_l8_l11_l12_l17_l22" },
++	{ "l23", QCOM_SMD_RPM_LDOA, 23, &pm8950_pldo, "vdd_l2_l23" },
+ 	{}
+ };
+ 
+diff --git a/drivers/regulator/scmi-regulator.c b/drivers/regulator/scmi-regulator.c
+index 1f02f60ad1366..41ae7ac27ff6a 100644
+--- a/drivers/regulator/scmi-regulator.c
++++ b/drivers/regulator/scmi-regulator.c
+@@ -352,7 +352,7 @@ static int scmi_regulator_probe(struct scmi_device *sdev)
+ 			return ret;
+ 		}
+ 	}
+-
++	of_node_put(np);
+ 	/*
+ 	 * Register a regulator for each valid regulator-DT-entry that we
+ 	 * can successfully reach via SCMI and has a valid associated voltage
+diff --git a/drivers/s390/cio/chsc.c b/drivers/s390/cio/chsc.c
+index 297fb399363cc..620a917cd3a15 100644
+--- a/drivers/s390/cio/chsc.c
++++ b/drivers/s390/cio/chsc.c
+@@ -1255,7 +1255,7 @@ exit:
+ EXPORT_SYMBOL_GPL(css_general_characteristics);
+ EXPORT_SYMBOL_GPL(css_chsc_characteristics);
+ 
+-int chsc_sstpc(void *page, unsigned int op, u16 ctrl, u64 *clock_delta)
++int chsc_sstpc(void *page, unsigned int op, u16 ctrl, long *clock_delta)
+ {
+ 	struct {
+ 		struct chsc_header request;
+@@ -1266,7 +1266,7 @@ int chsc_sstpc(void *page, unsigned int op, u16 ctrl, u64 *clock_delta)
+ 		unsigned int rsvd2[5];
+ 		struct chsc_header response;
+ 		unsigned int rsvd3[3];
+-		u64 clock_delta;
++		s64 clock_delta;
+ 		unsigned int rsvd4[2];
+ 	} *rr;
+ 	int rc;
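
The chsc_sstpc() change is a signedness fix: the clock delta reported by the channel subsystem is a signed 64-bit quantity (the response word becomes s64), and funnelling it through a u64 out-parameter turns a small negative delta into an enormous positive one. In miniature:

	s64 delta = -42;		/* as reported by the hardware */
	u64 as_unsigned = delta;	/* 0xffffffffffffffd6: nonsense offset */
	long as_signed = delta;		/* -42, usable for steering the clock */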
+diff --git a/drivers/scsi/dc395x.c b/drivers/scsi/dc395x.c
+index 67a89715c8630..670a836a6ba19 100644
+--- a/drivers/scsi/dc395x.c
++++ b/drivers/scsi/dc395x.c
+@@ -3585,10 +3585,19 @@ static struct DeviceCtlBlk *device_alloc(struct AdapterCtlBlk *acb,
+ #endif
+ 	if (dcb->target_lun != 0) {
+ 		/* Copy settings */
+-		struct DeviceCtlBlk *p;
+-		list_for_each_entry(p, &acb->dcb_list, list)
+-			if (p->target_id == dcb->target_id)
++		struct DeviceCtlBlk *p = NULL, *iter;
++
++		list_for_each_entry(iter, &acb->dcb_list, list)
++			if (iter->target_id == dcb->target_id) {
++				p = iter;
+ 				break;
++			}
++
++		if (!p) {
++			kfree(dcb);
++			return NULL;
++		}
++
+ 		dprintkdbg(DBG_1, 
+ 		       "device_alloc: <%02i-%i> copy from <%02i-%i>\n",
+ 		       dcb->target_id, dcb->target_lun,
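
The dc395x change is one instance of the tree-wide "list iterator used after the loop" cleanup: if list_for_each_entry() terminates without a break, the iterator does not point at a valid element but at the list head offset-cast to the entry type, so a separate found pointer is required, and here the not-found case now frees the half-initialized dcb instead of copying from a bogus pointer. The generic shape (a self-contained sketch):

	#include <linux/list.h>

	struct item { struct list_head list; int id; };

	static struct item *find_item(struct list_head *q, int id)
	{
		struct item *found = NULL, *iter;

		list_for_each_entry(iter, q, list) {
			if (iter->id == id) {
				found = iter;
				break;
			}
		}
		/* 'iter' must not be used here: without a break it is not
		 * a valid 'struct item' */
		return found;
	}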
+diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
+index 1756a0ac6f083..558f3f4e18593 100644
+--- a/drivers/scsi/fcoe/fcoe_ctlr.c
++++ b/drivers/scsi/fcoe/fcoe_ctlr.c
+@@ -1969,7 +1969,7 @@ EXPORT_SYMBOL(fcoe_ctlr_recv_flogi);
+  *
+  * Returns: u64 fc world wide name
+  */
+-u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN],
++u64 fcoe_wwn_from_mac(unsigned char mac[ETH_ALEN],
+ 		      unsigned int scheme, unsigned int port)
+ {
+ 	u64 wwn;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index 4bda2f6cb3526..849cc5fc86af6 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -446,6 +446,8 @@ void hisi_sas_task_deliver(struct hisi_hba *hisi_hba,
+ 		return;
+ 	}
+ 
++	/* Make slot memories observable before marking as ready */
++	smp_wmb();
+ 	WRITE_ONCE(slot->ready, 1);
+ 
+ 	spin_lock(&dq->lock);
+@@ -709,8 +711,6 @@ static int hisi_sas_init_device(struct domain_device *device)
+ 	struct scsi_lun lun;
+ 	int retry = HISI_SAS_DISK_RECOVER_CNT;
+ 	struct hisi_hba *hisi_hba = dev_to_hisi_hba(device);
+-	struct device *dev = hisi_hba->dev;
+-	struct sas_phy *local_phy;
+ 
+ 	switch (device->dev_type) {
+ 	case SAS_END_DEVICE:
+@@ -729,30 +729,18 @@ static int hisi_sas_init_device(struct domain_device *device)
+ 	case SAS_SATA_PM_PORT:
+ 	case SAS_SATA_PENDING:
+ 		/*
+-		 * send HARD RESET to clear previous affiliation of
+-		 * STP target port
++		 * If an expander is swapped when a SATA disk is attached then
++		 * we should issue a hard reset to clear previous affiliation
++		 * of STP target port, see SPL (chapter 6.19.4).
++		 *
++		 * However we don't need to issue a hard reset here for these
++		 * reasons:
++		 * a. When probing the device, libsas/libata already issues a
++		 * hard reset in sas_probe_sata() -> ata_sas_async_probe().
++		 * Note that in hisi_sas_debug_I_T_nexus_reset() we take care
++		 * to issue a hard reset by checking the dev status (== INIT).
++		 * b. When resetting the controller, this is simply unnecessary.
+ 		 */
+-		local_phy = sas_get_local_phy(device);
+-		if (!scsi_is_sas_phy_local(local_phy) &&
+-		    !test_bit(HISI_SAS_RESETTING_BIT, &hisi_hba->flags)) {
+-			unsigned long deadline = ata_deadline(jiffies, 20000);
+-			struct sata_device *sata_dev = &device->sata_dev;
+-			struct ata_host *ata_host = sata_dev->ata_host;
+-			struct ata_port_operations *ops = ata_host->ops;
+-			struct ata_port *ap = sata_dev->ap;
+-			struct ata_link *link;
+-			unsigned int classes;
+-
+-			ata_for_each_link(link, ap, EDGE)
+-				rc = ops->hardreset(link, &classes,
+-						    deadline);
+-		}
+-		sas_put_local_phy(local_phy);
+-		if (rc) {
+-			dev_warn(dev, "SATA disk hardreset fail: %d\n", rc);
+-			return rc;
+-		}
+-
+ 		while (retry-- > 0) {
+ 			rc = hisi_sas_softreset_ata_disk(device);
+ 			if (!rc)
+@@ -768,15 +756,19 @@ static int hisi_sas_init_device(struct domain_device *device)
+ 
+ int hisi_sas_slave_alloc(struct scsi_device *sdev)
+ {
+-	struct domain_device *ddev;
++	struct domain_device *ddev = sdev_to_domain_dev(sdev);
++	struct hisi_sas_device *sas_dev = ddev->lldd_dev;
+ 	int rc;
+ 
+ 	rc = sas_slave_alloc(sdev);
+ 	if (rc)
+ 		return rc;
+-	ddev = sdev_to_domain_dev(sdev);
+ 
+-	return hisi_sas_init_device(ddev);
++	rc = hisi_sas_init_device(ddev);
++	if (rc)
++		return rc;
++	sas_dev->dev_status = HISI_SAS_DEV_NORMAL;
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(hisi_sas_slave_alloc);
+ 
+@@ -826,7 +818,6 @@ static int hisi_sas_dev_found(struct domain_device *device)
+ 	dev_info(dev, "dev[%d:%x] found\n",
+ 		sas_dev->device_id, sas_dev->dev_type);
+ 
+-	sas_dev->dev_status = HISI_SAS_DEV_NORMAL;
+ 	return 0;
+ 
+ err_out:
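
The smp_wmb() added in hisi_sas_task_deliver() orders the slot initialization before the WRITE_ONCE(slot->ready, 1) publish; the reader side (not shown in these hunks) is assumed to pair it with a read barrier, or an acquire load, after observing ready != 0. The generic publish/consume pairing looks like this (a sketch, not the driver's code):

	#include <linux/compiler.h>	/* READ_ONCE / WRITE_ONCE */
	#include <asm/barrier.h>	/* smp_wmb / smp_rmb */

	struct slot { int data; int ready; };

	static void publish(struct slot *s, int v)
	{
		s->data = v;
		smp_wmb();			/* order data before ready */
		WRITE_ONCE(s->ready, 1);
	}

	static int consume(struct slot *s)
	{
		if (!READ_ONCE(s->ready))
			return -EAGAIN;
		smp_rmb();			/* pairs with smp_wmb() above */
		return s->data;
	}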
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 79f87d7c3e682..7d819fc0395e4 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -1563,9 +1563,15 @@ static irqreturn_t phy_up_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
+ 
+ 	phy->port_id = port_id;
+ 
+-	/* Call pm_runtime_put_sync() with pairs in hisi_sas_phyup_pm_work() */
++	/*
++	 * Call pm_runtime_get_noresume() which pairs with
++	 * hisi_sas_phyup_pm_work() -> pm_runtime_put_sync().
++	 * For failure call pm_runtime_put() as we are in a hardirq context.
++	 */
+ 	pm_runtime_get_noresume(dev);
+-	hisi_sas_notify_phy_event(phy, HISI_PHYE_PHY_UP_PM);
++	res = hisi_sas_notify_phy_event(phy, HISI_PHYE_PHY_UP_PM);
++	if (!res)
++		pm_runtime_put(dev);
+ 
+ 	res = IRQ_HANDLED;
+ 
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 0025760230e51..da5e91a911510 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -1025,6 +1025,7 @@ struct lpfc_hba {
+ #define LS_MDS_LINK_DOWN      0x8	/* MDS Diagnostics Link Down */
+ #define LS_MDS_LOOPBACK       0x10	/* MDS Diagnostics Link Up (Loopback) */
+ #define LS_CT_VEN_RPA         0x20	/* Vendor RPA sent to switch */
++#define LS_EXTERNAL_LOOPBACK  0x40	/* External loopback plug inserted */
+ 
+ 	uint32_t hba_flag;	/* hba generic flags */
+ #define HBA_ERATT_HANDLED	0x1 /* This flag is set when eratt handled */
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 872a26376ccbb..892b3da1ba450 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -1387,6 +1387,9 @@ lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ 
+ 	phba->hba_flag |= (HBA_FLOGI_ISSUED | HBA_FLOGI_OUTSTANDING);
+ 
++	/* Clear external loopback plug detected flag */
++	phba->link_flag &= ~LS_EXTERNAL_LOOPBACK;
++
+ 	/* Check for a deferred FLOGI ACC condition */
+ 	if (phba->defer_flogi_acc_flag) {
+ 		/* lookup ndlp for received FLOGI */
+@@ -1532,10 +1535,13 @@ lpfc_initial_flogi(struct lpfc_vport *vport)
+ 	}
+ 
+ 	if (lpfc_issue_els_flogi(vport, ndlp, 0)) {
+-		/* This decrement of reference count to node shall kick off
+-		 * the release of the node.
++		/* A node reference should be retained while registered with a
++		 * transport or dev-loss-evt work is pending.
++		 * Otherwise, decrement node reference to trigger release.
+ 		 */
+-		lpfc_nlp_put(ndlp);
++		if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD)) &&
++		    !(ndlp->nlp_flag & NLP_IN_DEV_LOSS))
++			lpfc_nlp_put(ndlp);
+ 		return 0;
+ 	}
+ 	return 1;
+@@ -1578,10 +1584,13 @@ lpfc_initial_fdisc(struct lpfc_vport *vport)
+ 	}
+ 
+ 	if (lpfc_issue_els_fdisc(vport, ndlp, 0)) {
+-		/* decrement node reference count to trigger the release of
+-		 * the node.
++		/* A node reference should be retained while registered with a
++		 * transport or dev-loss-evt work is pending.
++		 * Otherwise, decrement node reference to trigger release.
+ 		 */
+-		lpfc_nlp_put(ndlp);
++		if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD)) &&
++		    !(ndlp->nlp_flag & NLP_IN_DEV_LOSS))
++			lpfc_nlp_put(ndlp);
+ 		return 0;
+ 	}
+ 	return 1;
+@@ -1983,6 +1992,7 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 	int disc;
+ 	struct serv_parm *sp = NULL;
+ 	u32 ulp_status, ulp_word4, did, iotag;
++	bool release_node = false;
+ 
+ 	/* we pass cmdiocb to state machine which needs rspiocb as well */
+ 	cmdiocb->context_un.rsp_iocb = rspiocb;
+@@ -2071,19 +2081,21 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 			spin_unlock_irq(&ndlp->lock);
+ 			goto out;
+ 		}
+-		spin_unlock_irq(&ndlp->lock);
+ 
+ 		/* No PLOGI collision and the node is not registered with the
+ 		 * scsi or nvme transport. It is no longer an active node. Just
+ 		 * start the device remove process.
+ 		 */
+ 		if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD))) {
+-			spin_lock_irq(&ndlp->lock);
+ 			ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
+-			spin_unlock_irq(&ndlp->lock);
++			if (!(ndlp->nlp_flag & NLP_IN_DEV_LOSS))
++				release_node = true;
++		}
++		spin_unlock_irq(&ndlp->lock);
++
++		if (release_node)
+ 			lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+ 						NLP_EVT_DEVICE_RM);
+-		}
+ 	} else {
+ 		/* Good status, call state machine */
+ 		prsp = list_entry(((struct lpfc_dmabuf *)
+@@ -2294,6 +2306,7 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 	u32 loglevel;
+ 	u32 ulp_status;
+ 	u32 ulp_word4;
++	bool release_node = false;
+ 
+ 	/* we pass cmdiocb to state machine which needs rspiocb as well */
+ 	cmdiocb->context_un.rsp_iocb = rspiocb;
+@@ -2370,14 +2383,18 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 		 * it is no longer an active node.  Otherwise devloss
+ 		 * handles the final cleanup.
+ 		 */
++		spin_lock_irq(&ndlp->lock);
+ 		if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD)) &&
+ 		    !ndlp->fc4_prli_sent) {
+-			spin_lock_irq(&ndlp->lock);
+ 			ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
+-			spin_unlock_irq(&ndlp->lock);
++			if (!(ndlp->nlp_flag & NLP_IN_DEV_LOSS))
++				release_node = true;
++		}
++		spin_unlock_irq(&ndlp->lock);
++
++		if (release_node)
+ 			lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+ 						NLP_EVT_DEVICE_RM);
+-		}
+ 	} else {
+ 		/* Good status, call state machine.  However, if another
+ 		 * PRLI is outstanding, don't call the state machine
+@@ -2749,6 +2766,7 @@ lpfc_cmpl_els_adisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 	struct lpfc_nodelist *ndlp;
+ 	int  disc;
+ 	u32 ulp_status, ulp_word4, tmo;
++	bool release_node = false;
+ 
+ 	/* we pass cmdiocb to state machine which needs rspiocb as well */
+ 	cmdiocb->context_un.rsp_iocb = rspiocb;
+@@ -2815,13 +2833,17 @@ lpfc_cmpl_els_adisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 		 * transport, it is no longer an active node. Otherwise
+ 		 * devloss handles the final cleanup.
+ 		 */
++		spin_lock_irq(&ndlp->lock);
+ 		if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD))) {
+-			spin_lock_irq(&ndlp->lock);
+ 			ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;
+-			spin_unlock_irq(&ndlp->lock);
++			if (!(ndlp->nlp_flag & NLP_IN_DEV_LOSS))
++				release_node = true;
++		}
++		spin_unlock_irq(&ndlp->lock);
++
++		if (release_node)
+ 			lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+ 						NLP_EVT_DEVICE_RM);
+-		}
+ 	} else
+ 		/* Good status, call state machine */
+ 		lpfc_disc_state_machine(vport, ndlp, cmdiocb,
+@@ -3855,9 +3877,6 @@ lpfc_least_capable_settings(struct lpfc_hba *phba,
+ {
+ 	u32 rsp_sig_cap = 0, drv_sig_cap = 0;
+ 	u32 rsp_sig_freq_cyc = 0, rsp_sig_freq_scale = 0;
+-	struct lpfc_cgn_info *cp;
+-	u32 crc;
+-	u16 sig_freq;
+ 
+ 	/* Get rsp signal and frequency capabilities.  */
+ 	rsp_sig_cap = be32_to_cpu(pcgd->xmt_signal_capability);
+@@ -3913,25 +3932,7 @@ lpfc_least_capable_settings(struct lpfc_hba *phba,
+ 		}
+ 	}
+ 
+-	if (!phba->cgn_i)
+-		return;
+-
+-	/* Update signal frequency in congestion info buffer */
+-	cp = (struct lpfc_cgn_info *)phba->cgn_i->virt;
+-
+-	/* Frequency (in ms) Signal Warning/Signal Congestion Notifications
+-	 * are received by the HBA
+-	 */
+-	sig_freq = phba->cgn_sig_freq;
+-
+-	if (phba->cgn_reg_signal == EDC_CG_SIG_WARN_ONLY)
+-		cp->cgn_warn_freq = cpu_to_le16(sig_freq);
+-	if (phba->cgn_reg_signal == EDC_CG_SIG_WARN_ALARM) {
+-		cp->cgn_alarm_freq = cpu_to_le16(sig_freq);
+-		cp->cgn_warn_freq = cpu_to_le16(sig_freq);
+-	}
+-	crc = lpfc_cgn_calc_crc32(cp, LPFC_CGN_INFO_SZ, LPFC_CGN_CRC32_SEED);
+-	cp->cgn_info_crc = cpu_to_le32(crc);
++	/* We are NOT recording signal frequency in congestion info buffer */
+ 	return;
+ 
+ out_no_support:
+@@ -8163,6 +8164,9 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ 	uint32_t fc_flag = 0;
+ 	uint32_t port_state = 0;
+ 
++	/* Clear external loopback plug detected flag */
++	phba->link_flag &= ~LS_EXTERNAL_LOOPBACK;
++
+ 	cmd = *lp++;
+ 	sp = (struct serv_parm *) lp;
+ 
+@@ -8214,6 +8218,12 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
+ 			return 1;
+ 		}
+ 
++		/* External loopback plug insertion detected */
++		phba->link_flag |= LS_EXTERNAL_LOOPBACK;
++
++		lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS | LOG_LIBDFC,
++				 "1119 External Loopback plug detected\n");
++
+ 		/* abort the flogi coming back to ourselves
+ 		 * due to external loopback on the port.
+ 		 */
+@@ -9940,11 +9950,14 @@ lpfc_els_rcv_fpin_cgn(struct lpfc_hba *phba, struct fc_tlv_desc *tlv)
+ 			/* Take action here for an Alarm event */
+ 			if (phba->cmf_active_mode != LPFC_CFG_OFF) {
+ 				if (phba->cgn_reg_fpin & LPFC_CGN_FPIN_ALARM) {
+-					/* Track of alarm cnt for cgn_info */
+-					atomic_inc(&phba->cgn_fabric_alarm_cnt);
+ 					/* Track of alarm cnt for SYNC_WQE */
+ 					atomic_inc(&phba->cgn_sync_alarm_cnt);
+ 				}
++				/* Track alarm cnt for cgn_info regardless
++				 * of whether CMF is configured for Signals
++				 * or FPINs.
++				 */
++				atomic_inc(&phba->cgn_fabric_alarm_cnt);
+ 				goto cleanup;
+ 			}
+ 			break;
+@@ -9952,11 +9965,14 @@ lpfc_els_rcv_fpin_cgn(struct lpfc_hba *phba, struct fc_tlv_desc *tlv)
+ 			/* Take action here for a Warning event */
+ 			if (phba->cmf_active_mode != LPFC_CFG_OFF) {
+ 				if (phba->cgn_reg_fpin & LPFC_CGN_FPIN_WARN) {
+-					/* Track of warning cnt for cgn_info */
+-					atomic_inc(&phba->cgn_fabric_warn_cnt);
+ 					/* Track of warning cnt for SYNC_WQE */
+ 					atomic_inc(&phba->cgn_sync_warn_cnt);
+ 				}
++				/* Track warning cnt and freq for cgn_info
++				 * regardless of whether CMF is configured for
++				 * Signals or FPINs.
++				 */
++				atomic_inc(&phba->cgn_fabric_warn_cnt);
+ cleanup:
+ 				/* Save frequency in ms */
+ 				phba->cgn_fpin_frequency =
+@@ -9965,14 +9981,10 @@ cleanup:
+ 				if (phba->cgn_i) {
+ 					cp = (struct lpfc_cgn_info *)
+ 						phba->cgn_i->virt;
+-					if (phba->cgn_reg_fpin &
+-						LPFC_CGN_FPIN_ALARM)
+-						cp->cgn_alarm_freq =
+-							cpu_to_le16(value);
+-					if (phba->cgn_reg_fpin &
+-						LPFC_CGN_FPIN_WARN)
+-						cp->cgn_warn_freq =
+-							cpu_to_le16(value);
++					cp->cgn_alarm_freq =
++						cpu_to_le16(value);
++					cp->cgn_warn_freq =
++						cpu_to_le16(value);
+ 					crc = lpfc_cgn_calc_crc32
+ 						(cp,
+ 						LPFC_CGN_INFO_SZ,
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 2b877dff5ed4f..6b6b3790d7b58 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -1221,6 +1221,9 @@ lpfc_linkdown(struct lpfc_hba *phba)
+ 
+ 	phba->defer_flogi_acc_flag = false;
+ 
++	/* Clear external loopback plug detected flag */
++	phba->link_flag &= ~LS_EXTERNAL_LOOPBACK;
++
+ 	spin_lock_irq(&phba->hbalock);
+ 	phba->fcf.fcf_flag &= ~(FCF_AVAILABLE | FCF_SCAN_DONE);
+ 	spin_unlock_irq(&phba->hbalock);
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 461d333b1b3a8..011849c1ed3c9 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -5866,21 +5866,8 @@ lpfc_cgn_save_evt_cnt(struct lpfc_hba *phba)
+ 
+ 	/* Use the frequency found in the last rcv'ed FPIN */
+ 	value = phba->cgn_fpin_frequency;
+-	if (phba->cgn_reg_fpin & LPFC_CGN_FPIN_WARN)
+-		cp->cgn_warn_freq = cpu_to_le16(value);
+-	if (phba->cgn_reg_fpin & LPFC_CGN_FPIN_ALARM)
+-		cp->cgn_alarm_freq = cpu_to_le16(value);
+-
+-	/* Frequency (in ms) Signal Warning/Signal Congestion Notifications
+-	 * are received by the HBA
+-	 */
+-	value = phba->cgn_sig_freq;
+-
+-	if (phba->cgn_reg_signal == EDC_CG_SIG_WARN_ONLY ||
+-	    phba->cgn_reg_signal == EDC_CG_SIG_WARN_ALARM)
+-		cp->cgn_warn_freq = cpu_to_le16(value);
+-	if (phba->cgn_reg_signal == EDC_CG_SIG_WARN_ALARM)
+-		cp->cgn_alarm_freq = cpu_to_le16(value);
++	cp->cgn_warn_freq = cpu_to_le16(value);
++	cp->cgn_alarm_freq = cpu_to_le16(value);
+ 
+ 	lvalue = lpfc_cgn_calc_crc32(cp, LPFC_CGN_INFO_SZ,
+ 				     LPFC_CGN_CRC32_SEED);
+@@ -6595,9 +6582,6 @@ lpfc_sli4_async_sli_evt(struct lpfc_hba *phba, struct lpfc_acqe_sli *acqe_sli)
+ 		/* Alarm overrides warning, so check that first */
+ 		if (cgn_signal->alarm_cnt) {
+ 			if (phba->cgn_reg_signal == EDC_CG_SIG_WARN_ALARM) {
+-				/* Keep track of alarm cnt for cgn_info */
+-				atomic_add(cgn_signal->alarm_cnt,
+-					   &phba->cgn_fabric_alarm_cnt);
+ 				/* Keep track of alarm cnt for CMF_SYNC_WQE */
+ 				atomic_add(cgn_signal->alarm_cnt,
+ 					   &phba->cgn_sync_alarm_cnt);
+@@ -6606,8 +6590,6 @@ lpfc_sli4_async_sli_evt(struct lpfc_hba *phba, struct lpfc_acqe_sli *acqe_sli)
+ 			/* signal action needs to be taken */
+ 			if (phba->cgn_reg_signal == EDC_CG_SIG_WARN_ONLY ||
+ 			    phba->cgn_reg_signal == EDC_CG_SIG_WARN_ALARM) {
+-				/* Keep track of warning cnt for cgn_info */
+-				atomic_add(cnt, &phba->cgn_fabric_warn_cnt);
+ 				/* Keep track of warning cnt for CMF_SYNC_WQE */
+ 				atomic_add(cnt, &phba->cgn_sync_warn_cnt);
+ 			}
+@@ -15700,34 +15682,7 @@ void lpfc_dmp_dbg(struct lpfc_hba *phba)
+ 	unsigned int temp_idx;
+ 	int i;
+ 	int j = 0;
+-	unsigned long rem_nsec, iflags;
+-	bool log_verbose = false;
+-	struct lpfc_vport *port_iterator;
+-
+-	/* Don't dump messages if we explicitly set log_verbose for the
+-	 * physical port or any vport.
+-	 */
+-	if (phba->cfg_log_verbose)
+-		return;
+-
+-	spin_lock_irqsave(&phba->port_list_lock, iflags);
+-	list_for_each_entry(port_iterator, &phba->port_list, listentry) {
+-		if (port_iterator->load_flag & FC_UNLOADING)
+-			continue;
+-		if (scsi_host_get(lpfc_shost_from_vport(port_iterator))) {
+-			if (port_iterator->cfg_log_verbose)
+-				log_verbose = true;
+-
+-			scsi_host_put(lpfc_shost_from_vport(port_iterator));
+-
+-			if (log_verbose) {
+-				spin_unlock_irqrestore(&phba->port_list_lock,
+-						       iflags);
+-				return;
+-			}
+-		}
+-	}
+-	spin_unlock_irqrestore(&phba->port_list_lock, iflags);
++	unsigned long rem_nsec;
+ 
+ 	if (atomic_cmpxchg(&phba->dbg_log_dmping, 0, 1) != 0)
+ 		return;
+diff --git a/drivers/scsi/lpfc/lpfc_logmsg.h b/drivers/scsi/lpfc/lpfc_logmsg.h
+index 7d480c7987942..a5aafe230c74f 100644
+--- a/drivers/scsi/lpfc/lpfc_logmsg.h
++++ b/drivers/scsi/lpfc/lpfc_logmsg.h
+@@ -73,7 +73,7 @@ do { \
+ #define lpfc_printf_vlog(vport, level, mask, fmt, arg...) \
+ do { \
+ 	{ if (((mask) & (vport)->cfg_log_verbose) || (level[1] <= '3')) { \
+-		if ((mask) & LOG_TRACE_EVENT) \
++		if ((mask) & LOG_TRACE_EVENT && !(vport)->cfg_log_verbose) \
+ 			lpfc_dmp_dbg((vport)->phba); \
+ 		dev_printk(level, &((vport)->phba->pcidev)->dev, "%d:(%d):" \
+ 			   fmt, (vport)->phba->brd_no, vport->vpi, ##arg);  \
+@@ -89,11 +89,11 @@ do { \
+ 				 (phba)->pport->cfg_log_verbose : \
+ 				 (phba)->cfg_log_verbose; \
+ 	if (((mask) & log_verbose) || (level[1] <= '3')) { \
+-		if ((mask) & LOG_TRACE_EVENT) \
++		if ((mask) & LOG_TRACE_EVENT && !log_verbose) \
+ 			lpfc_dmp_dbg(phba); \
+ 		dev_printk(level, &((phba)->pcidev)->dev, "%d:" \
+ 			fmt, phba->brd_no, ##arg); \
+-	} else  if (!(phba)->cfg_log_verbose)\
++	} else if (!log_verbose)\
+ 		lpfc_dbg_print(phba, "%d:" fmt, phba->brd_no, ##arg); \
+ 	} \
+ } while (0)
+diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
+index c4e1a07066a2e..4b065c51ee1b0 100644
+--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
+@@ -614,9 +614,15 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ 		stat.un.b.lsRjtRsnCode = LSRJT_INVALID_CMD;
+ 		stat.un.b.lsRjtRsnCodeExp = LSEXP_NOTHING_MORE;
+ 		rc = lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb,
+-			ndlp, login_mbox);
+-		if (rc)
++					 ndlp, login_mbox);
++		if (rc) {
++			mp = (struct lpfc_dmabuf *)login_mbox->ctx_buf;
++			if (mp) {
++				lpfc_mbuf_free(phba, mp->virt, mp->phys);
++				kfree(mp);
++			}
+ 			mempool_free(login_mbox, phba->mbox_mem_pool);
++		}
+ 		return 1;
+ 	}
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index ba9dbb51b75f0..f617a2ef6b0f4 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -3835,7 +3835,7 @@ lpfc_update_cmf_cmpl(struct lpfc_hba *phba,
+ 		else
+ 			time = div_u64(time + 500, 1000); /* round it */
+ 
+-		cgs = this_cpu_ptr(phba->cmf_stat);
++		cgs = per_cpu_ptr(phba->cmf_stat, raw_smp_processor_id());
+ 		atomic64_add(size, &cgs->rcv_bytes);
+ 		atomic64_add(time, &cgs->rx_latency);
+ 		atomic_inc(&cgs->rx_io_cnt);
+@@ -3879,7 +3879,7 @@ lpfc_update_cmf_cmd(struct lpfc_hba *phba, uint32_t size)
+ 			atomic_set(&phba->rx_max_read_cnt, size);
+ 	}
+ 
+-	cgs = this_cpu_ptr(phba->cmf_stat);
++	cgs = per_cpu_ptr(phba->cmf_stat, raw_smp_processor_id());
+ 	atomic64_add(size, &cgs->total_bytes);
+ 	return 0;
+ }
+@@ -5864,25 +5864,25 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
+ 	if (!lpfc_cmd)
+ 		return ret;
+ 
+-	spin_lock_irqsave(&phba->hbalock, flags);
++	/* Guard against IO completion being called at same time */
++	spin_lock_irqsave(&lpfc_cmd->buf_lock, flags);
++
++	spin_lock(&phba->hbalock);
+ 	/* driver queued commands are in process of being flushed */
+ 	if (phba->hba_flag & HBA_IOQ_FLUSH) {
+ 		lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
+ 			"3168 SCSI Layer abort requested I/O has been "
+ 			"flushed by LLD.\n");
+ 		ret = FAILED;
+-		goto out_unlock;
++		goto out_unlock_hba;
+ 	}
+ 
+-	/* Guard against IO completion being called at same time */
+-	spin_lock(&lpfc_cmd->buf_lock);
+-
+ 	if (!lpfc_cmd->pCmd) {
+ 		lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
+ 			 "2873 SCSI Layer I/O Abort Request IO CMPL Status "
+ 			 "x%x ID %d LUN %llu\n",
+ 			 SUCCESS, cmnd->device->id, cmnd->device->lun);
+-		goto out_unlock_buf;
++		goto out_unlock_hba;
+ 	}
+ 
+ 	iocb = &lpfc_cmd->cur_iocbq;
+@@ -5890,7 +5890,7 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
+ 		pring_s4 = phba->sli4_hba.hdwq[iocb->hba_wqidx].io_wq->pring;
+ 		if (!pring_s4) {
+ 			ret = FAILED;
+-			goto out_unlock_buf;
++			goto out_unlock_hba;
+ 		}
+ 		spin_lock(&pring_s4->ring_lock);
+ 	}
+@@ -5923,8 +5923,8 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
+ 			 "3389 SCSI Layer I/O Abort Request is pending\n");
+ 		if (phba->sli_rev == LPFC_SLI_REV4)
+ 			spin_unlock(&pring_s4->ring_lock);
+-		spin_unlock(&lpfc_cmd->buf_lock);
+-		spin_unlock_irqrestore(&phba->hbalock, flags);
++		spin_unlock(&phba->hbalock);
++		spin_unlock_irqrestore(&lpfc_cmd->buf_lock, flags);
+ 		goto wait_for_cmpl;
+ 	}
+ 
+@@ -5945,15 +5945,13 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
+ 	if (ret_val != IOCB_SUCCESS) {
+ 		/* Indicate the IO is not being aborted by the driver. */
+ 		lpfc_cmd->waitq = NULL;
+-		spin_unlock(&lpfc_cmd->buf_lock);
+-		spin_unlock_irqrestore(&phba->hbalock, flags);
+ 		ret = FAILED;
+-		goto out;
++		goto out_unlock_hba;
+ 	}
+ 
+ 	/* no longer need the lock after this point */
+-	spin_unlock(&lpfc_cmd->buf_lock);
+-	spin_unlock_irqrestore(&phba->hbalock, flags);
++	spin_unlock(&phba->hbalock);
++	spin_unlock_irqrestore(&lpfc_cmd->buf_lock, flags);
+ 
+ 	if (phba->cfg_poll & DISABLE_FCP_RING_INT)
+ 		lpfc_sli_handle_fast_ring_event(phba,
+@@ -5988,10 +5986,9 @@ wait_for_cmpl:
+ out_unlock_ring:
+ 	if (phba->sli_rev == LPFC_SLI_REV4)
+ 		spin_unlock(&pring_s4->ring_lock);
+-out_unlock_buf:
+-	spin_unlock(&lpfc_cmd->buf_lock);
+-out_unlock:
+-	spin_unlock_irqrestore(&phba->hbalock, flags);
++out_unlock_hba:
++	spin_unlock(&phba->hbalock);
++	spin_unlock_irqrestore(&lpfc_cmd->buf_lock, flags);
+ out:
+ 	lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
+ 			 "0749 SCSI Layer I/O Abort Request Status x%x ID %d "
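
The lpfc_abort_handler() hunks above rework the lock nesting so the per-command buf_lock is taken first (with IRQs disabled) and the adapter-wide hbalock is nested inside it, with every error path funnelled through out_unlock_hba so the pair is always released in reverse order; taking the same two locks in one consistent order on every path is what rules out an ABBA deadlock against the completion side. The resulting shape, reduced to a fragment:

	spin_lock_irqsave(&cmd->buf_lock, flags);	/* outer, per-command */
	spin_lock(&phba->hbalock);			/* inner, adapter-wide */

	if (some_error) {
		ret = FAILED;
		goto out_unlock_hba;
	}
	/* ... abort work under both locks ... */

	out_unlock_hba:
		spin_unlock(&phba->hbalock);
		spin_unlock_irqrestore(&cmd->buf_lock, flags);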
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 6adaf79e67cc0..331241a71452f 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -1373,7 +1373,7 @@ static void
+ __lpfc_sli_release_iocbq_s4(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq)
+ {
+ 	struct lpfc_sglq *sglq;
+-	size_t start_clean = offsetof(struct lpfc_iocbq, iocb);
++	size_t start_clean = offsetof(struct lpfc_iocbq, wqe);
+ 	unsigned long iflag = 0;
+ 	struct lpfc_sli_ring *pring;
+ 
+@@ -10800,24 +10800,15 @@ __lpfc_sli_prep_xmit_seq64_s4(struct lpfc_iocbq *cmdiocbq,
+ {
+ 	union lpfc_wqe128 *wqe;
+ 	struct ulp_bde64 *bpl;
+-	struct ulp_bde64_le *bde;
+ 
+ 	wqe = &cmdiocbq->wqe;
+ 	memset(wqe, 0, sizeof(*wqe));
+ 
+ 	/* Words 0 - 2 */
+ 	bpl = (struct ulp_bde64 *)bmp->virt;
+-	if (cmdiocbq->cmd_flag & (LPFC_IO_LIBDFC | LPFC_IO_LOOPBACK)) {
+-		wqe->xmit_sequence.bde.addrHigh = bpl->addrHigh;
+-		wqe->xmit_sequence.bde.addrLow = bpl->addrLow;
+-		wqe->xmit_sequence.bde.tus.w = bpl->tus.w;
+-	} else {
+-		bde = (struct ulp_bde64_le *)&wqe->xmit_sequence.bde;
+-		bde->addr_low = cpu_to_le32(putPaddrLow(bmp->phys));
+-		bde->addr_high = cpu_to_le32(putPaddrHigh(bmp->phys));
+-		bde->type_size = cpu_to_le32(bpl->tus.f.bdeSize);
+-		bde->type_size |= cpu_to_le32(ULP_BDE64_TYPE_BDE_64);
+-	}
++	wqe->xmit_sequence.bde.addrHigh = bpl->addrHigh;
++	wqe->xmit_sequence.bde.addrLow = bpl->addrLow;
++	wqe->xmit_sequence.bde.tus.w = bpl->tus.w;
+ 
+ 	/* Word 5 */
+ 	bf_set(wqe_ls, &wqe->xmit_sequence.wge_ctl, last_seq);
+@@ -12066,6 +12057,8 @@ lpfc_ignore_els_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ {
+ 	struct lpfc_nodelist *ndlp = NULL;
+ 	IOCB_t *irsp;
++	LPFC_MBOXQ_t *mbox;
++	struct lpfc_dmabuf *mp;
+ 	u32 ulp_command, ulp_status, ulp_word4, iotag;
+ 
+ 	ulp_command = get_job_cmnd(phba, cmdiocb);
+@@ -12077,6 +12070,21 @@ lpfc_ignore_els_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 	} else {
+ 		irsp = &rspiocb->iocb;
+ 		iotag = irsp->ulpIoTag;
++
++		/* It is possible for a PLOGI_RJT on NPIV ports to get aborted.
++		 * The MBX_REG_LOGIN64 mbox command is freed back to the
++		 * mbox_mem_pool here.
++		 */
++		if (cmdiocb->context_un.mbox) {
++			mbox = cmdiocb->context_un.mbox;
++			mp = (struct lpfc_dmabuf *)mbox->ctx_buf;
++			if (mp) {
++				lpfc_mbuf_free(phba, mp->virt, mp->phys);
++				kfree(mp);
++			}
++			mempool_free(mbox, phba->mbox_mem_pool);
++			cmdiocb->context_un.mbox = NULL;
++		}
+ 	}
+ 
+ 	/* ELS cmd tag <ulpIoTag> completes */
+@@ -12185,7 +12193,8 @@ lpfc_sli_issue_abort_iotag(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
+ 
+ 	if (phba->link_state < LPFC_LINK_UP ||
+ 	    (phba->sli_rev == LPFC_SLI_REV4 &&
+-	     phba->sli4_hba.link_state.status == LPFC_FC_LA_TYPE_LINK_DOWN))
++	     phba->sli4_hba.link_state.status == LPFC_FC_LA_TYPE_LINK_DOWN) ||
++	    (phba->link_flag & LS_EXTERNAL_LOOPBACK))
+ 		ia = true;
+ 	else
+ 		ia = false;
+@@ -12644,7 +12653,8 @@ lpfc_sli_abort_taskmgmt(struct lpfc_vport *vport, struct lpfc_sli_ring *pring,
+ 		ndlp = lpfc_cmd->rdata->pnode;
+ 
+ 		if (lpfc_is_link_up(phba) &&
+-		    (ndlp && ndlp->nlp_state == NLP_STE_MAPPED_NODE))
++		    (ndlp && ndlp->nlp_state == NLP_STE_MAPPED_NODE) &&
++		    !(phba->link_flag & LS_EXTERNAL_LOOPBACK))
+ 			ia = false;
+ 		else
+ 			ia = true;
+@@ -18107,7 +18117,6 @@ lpfc_fc_frame_check(struct lpfc_hba *phba, struct fc_frame_header *fc_hdr)
+ 	case FC_RCTL_ELS_REP:	/* extended link services reply */
+ 	case FC_RCTL_ELS4_REQ:	/* FC-4 ELS request */
+ 	case FC_RCTL_ELS4_REP:	/* FC-4 ELS reply */
+-	case FC_RCTL_BA_NOP:  	/* basic link service NOP */
+ 	case FC_RCTL_BA_ABTS: 	/* basic link service abort */
+ 	case FC_RCTL_BA_RMC: 	/* remove connection */
+ 	case FC_RCTL_BA_ACC:	/* basic accept */
+@@ -18128,6 +18137,7 @@ lpfc_fc_frame_check(struct lpfc_hba *phba, struct fc_frame_header *fc_hdr)
+ 		fc_vft_hdr = (struct fc_vft_header *)fc_hdr;
+ 		fc_hdr = &((struct fc_frame_header *)fc_vft_hdr)[1];
+ 		return lpfc_fc_frame_check(phba, fc_hdr);
++	case FC_RCTL_BA_NOP:	/* basic link service NOP */
+ 	default:
+ 		goto drop;
+ 	}
+@@ -18942,12 +18952,14 @@ lpfc_sli4_send_seq_to_ulp(struct lpfc_vport *vport,
+ 	if (!lpfc_complete_unsol_iocb(phba,
+ 				      phba->sli4_hba.els_wq->pring,
+ 				      iocbq, fc_hdr->fh_r_ctl,
+-				      fc_hdr->fh_type))
++				      fc_hdr->fh_type)) {
+ 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
+ 				"2540 Ring %d handler: unexpected Rctl "
+ 				"x%x Type x%x received\n",
+ 				LPFC_ELS_RING,
+ 				fc_hdr->fh_r_ctl, fc_hdr->fh_type);
++		lpfc_in_buf_free(phba, &seq_dmabuf->dbuf);
++	}
+ 
+ 	/* Free iocb created in lpfc_prep_seq */
+ 	list_for_each_entry_safe(curr_iocb, next_iocb,
+@@ -21107,7 +21119,7 @@ lpfc_sli4_issue_abort_iotag(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ 	abtswqe = &abtsiocb->wqe;
+ 	memset(abtswqe, 0, sizeof(*abtswqe));
+ 
+-	if (!lpfc_is_link_up(phba))
++	if (!lpfc_is_link_up(phba) || (phba->link_flag & LS_EXTERNAL_LOOPBACK))
+ 		bf_set(abort_cmd_ia, &abtswqe->abort_cmd, 1);
+ 	bf_set(abort_cmd_criteria, &abtswqe->abort_cmd, T_XRI_TAG);
+ 	abtswqe->abort_cmd.rsrvd5 = 0;
+diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c
+index a5d8cee2d5106..bf491af9f0d65 100644
+--- a/drivers/scsi/megaraid.c
++++ b/drivers/scsi/megaraid.c
+@@ -4607,7 +4607,7 @@ static int __init megaraid_init(void)
+ 	 * major number allocation.
+ 	 */
+ 	major = register_chrdev(0, "megadev_legacy", &megadev_fops);
+-	if (!major) {
++	if (major < 0) {
+ 		printk(KERN_WARNING
+ 				"megaraid: failed to register char device\n");
+ 	}
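The megaraid fix turns on register_chrdev()'s return convention: passing 0 as
the major asks for dynamic allocation, and the call then returns the
allocated major (a positive number) on success or a negative errno on
failure, so a check of "if (!major)" can never observe an error. Sketch, with
a hypothetical device name and fops:

	int major = register_chrdev(0, "mydev", &my_fops);  /* 0 = dynamic major */
	if (major < 0)
		return major;  /* negative errno on failure; never 0 in this mode */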
+diff --git a/drivers/scsi/ufs/ti-j721e-ufs.c b/drivers/scsi/ufs/ti-j721e-ufs.c
+index eafe0db98d542..122d650d08102 100644
+--- a/drivers/scsi/ufs/ti-j721e-ufs.c
++++ b/drivers/scsi/ufs/ti-j721e-ufs.c
+@@ -29,11 +29,9 @@ static int ti_j721e_ufs_probe(struct platform_device *pdev)
+ 		return PTR_ERR(regbase);
+ 
+ 	pm_runtime_enable(dev);
+-	ret = pm_runtime_get_sync(dev);
+-	if (ret < 0) {
+-		pm_runtime_put_noidle(dev);
++	ret = pm_runtime_resume_and_get(dev);
++	if (ret < 0)
+ 		goto disable_pm;
+-	}
+ 
+ 	/* Select MPHY refclk frequency */
+ 	clk = devm_clk_get(dev, NULL);
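pm_runtime_resume_and_get(), adopted in the hunk above, drops the usage count
itself when the resume fails, which is why the explicit
pm_runtime_put_noidle() disappears from the error path. The resulting idiom:

	pm_runtime_enable(dev);
	ret = pm_runtime_resume_and_get(dev);  /* count already dropped on failure */
	if (ret < 0)
		goto disable_pm;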
+diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c
+index 586c0e567ff9a..e1b6c9e7a7f25 100644
+--- a/drivers/scsi/ufs/ufs-qcom.c
++++ b/drivers/scsi/ufs/ufs-qcom.c
+@@ -641,12 +641,7 @@ static int ufs_qcom_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
+ 			return err;
+ 	}
+ 
+-	err = ufs_qcom_ice_resume(host);
+-	if (err)
+-		return err;
+-
+-	hba->is_sys_suspended = false;
+-	return 0;
++	return ufs_qcom_ice_resume(host);
+ }
+ 
+ static void ufs_qcom_dev_ref_clk_ctrl(struct ufs_qcom_host *host, bool enable)
+@@ -687,8 +682,11 @@ static void ufs_qcom_dev_ref_clk_ctrl(struct ufs_qcom_host *host, bool enable)
+ 
+ 		writel_relaxed(temp, host->dev_ref_clk_ctrl_mmio);
+ 
+-		/* ensure that ref_clk is enabled/disabled before we return */
+-		wmb();
++		/*
++		 * Make sure the write to ref_clk reaches the destination and
++		 * not stored in a Write Buffer (WB).
++		 */
++		readl(host->dev_ref_clk_ctrl_mmio);
+ 
+ 		/*
+ 		 * If we call hibern8 exit after this, we need to make sure that
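The ufs-qcom change swaps wmb() for a read-back because a write barrier only
orders CPU stores; it does not push a posted MMIO write out of the
interconnect's write buffers, while a non-relaxed read from the same device
does. Generic sketch (REG_CTRL is a placeholder):

	writel_relaxed(val, base + REG_CTRL);
	/* A read from the same device flushes the posted write. */
	(void)readl(base + REG_CTRL);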
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 3f9caafa91bfa..4c9eb4be449ca 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -113,8 +113,13 @@ int ufshcd_dump_regs(struct ufs_hba *hba, size_t offset, size_t len,
+ 	if (!regs)
+ 		return -ENOMEM;
+ 
+-	for (pos = 0; pos < len; pos += 4)
++	for (pos = 0; pos < len; pos += 4) {
++		if (offset == 0 &&
++		    pos >= REG_UIC_ERROR_CODE_PHY_ADAPTER_LAYER &&
++		    pos <= REG_UIC_ERROR_CODE_DME)
++			continue;
+ 		regs[pos / 4] = ufshcd_readl(hba, offset + pos);
++	}
+ 
+ 	ufshcd_hex_dump(prefix, regs, len);
+ 	kfree(regs);
+diff --git a/drivers/soc/bcm/bcm63xx/bcm-pmb.c b/drivers/soc/bcm/bcm63xx/bcm-pmb.c
+index 7bbe46ea5f945..9407cac47fdbe 100644
+--- a/drivers/soc/bcm/bcm63xx/bcm-pmb.c
++++ b/drivers/soc/bcm/bcm63xx/bcm-pmb.c
+@@ -312,6 +312,9 @@ static int bcm_pmb_probe(struct platform_device *pdev)
+ 	for (e = table; e->name; e++) {
+ 		struct bcm_pmb_pm_domain *pd = devm_kzalloc(dev, sizeof(*pd), GFP_KERNEL);
+ 
++		if (!pd)
++			return -ENOMEM;
++
+ 		pd->pmb = pmb;
+ 		pd->data = e;
+ 		pd->genpd.name = e->name;
+diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
+index eecafeded56ff..85ba8209b1826 100644
+--- a/drivers/soc/qcom/llcc-qcom.c
++++ b/drivers/soc/qcom/llcc-qcom.c
+@@ -749,6 +749,7 @@ static const struct of_device_id qcom_llcc_of_match[] = {
+ 	{ .compatible = "qcom,sm8450-llcc", .data = &sm8450_cfg },
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, qcom_llcc_of_match);
+ 
+ static struct platform_driver qcom_llcc_driver = {
+ 	.driver = {
+diff --git a/drivers/soc/qcom/smp2p.c b/drivers/soc/qcom/smp2p.c
+index 4a157240f419e..59dbf4b61e6c2 100644
+--- a/drivers/soc/qcom/smp2p.c
++++ b/drivers/soc/qcom/smp2p.c
+@@ -493,6 +493,7 @@ static int smp2p_parse_ipc(struct qcom_smp2p *smp2p)
+ 	}
+ 
+ 	smp2p->ipc_regmap = syscon_node_to_regmap(syscon);
++	of_node_put(syscon);
+ 	if (IS_ERR(smp2p->ipc_regmap))
+ 		return PTR_ERR(smp2p->ipc_regmap);
+ 
+diff --git a/drivers/soc/qcom/smsm.c b/drivers/soc/qcom/smsm.c
+index ef15d014c03a3..9df9bba242f3e 100644
+--- a/drivers/soc/qcom/smsm.c
++++ b/drivers/soc/qcom/smsm.c
+@@ -374,6 +374,7 @@ static int smsm_parse_ipc(struct qcom_smsm *smsm, unsigned host_id)
+ 		return 0;
+ 
+ 	host->ipc_regmap = syscon_node_to_regmap(syscon);
++	of_node_put(syscon);
+ 	if (IS_ERR(host->ipc_regmap))
+ 		return PTR_ERR(host->ipc_regmap);
+ 
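Both the smp2p and smsm hunks plug the same device-node leak:
of_parse_phandle() hands back the node with its refcount raised, and
syscon_node_to_regmap() takes its own internal reference, so the caller's
reference must be dropped regardless of the regmap result. The balanced
pattern, sketched with an illustrative property name:

	struct device_node *syscon;
	struct regmap *map;

	syscon = of_parse_phandle(np, "qcom,ipc", 0);  /* takes a reference */
	if (!syscon)
		return -ENODEV;

	map = syscon_node_to_regmap(syscon);
	of_node_put(syscon);  /* the regmap holds its own reference; drop ours */
	if (IS_ERR(map))
		return PTR_ERR(map);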
+diff --git a/drivers/soc/ti/ti_sci_pm_domains.c b/drivers/soc/ti/ti_sci_pm_domains.c
+index 8afb3f45d2637..a33ec7eaf23d1 100644
+--- a/drivers/soc/ti/ti_sci_pm_domains.c
++++ b/drivers/soc/ti/ti_sci_pm_domains.c
+@@ -183,6 +183,8 @@ static int ti_sci_pm_domain_probe(struct platform_device *pdev)
+ 		devm_kcalloc(dev, max_id + 1,
+ 			     sizeof(*pd_provider->data.domains),
+ 			     GFP_KERNEL);
++	if (!pd_provider->data.domains)
++		return -ENOMEM;
+ 
+ 	pd_provider->data.num_domains = max_id + 1;
+ 	pd_provider->data.xlate = ti_sci_pd_xlate;
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 19686fb47bb35..ec53b807909e7 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1865,7 +1865,7 @@ static const struct cqspi_driver_platdata intel_lgm_qspi = {
+ };
+ 
+ static const struct cqspi_driver_platdata socfpga_qspi = {
+-	.quirks = CQSPI_NO_SUPPORT_WR_COMPLETION,
++	.quirks = CQSPI_DISABLE_DAC_MODE | CQSPI_NO_SUPPORT_WR_COMPLETION,
+ };
+ 
+ static const struct cqspi_driver_platdata versal_ospi = {
+diff --git a/drivers/spi/spi-fsl-qspi.c b/drivers/spi/spi-fsl-qspi.c
+index 9851551ebbe05..46ae46a944c5c 100644
+--- a/drivers/spi/spi-fsl-qspi.c
++++ b/drivers/spi/spi-fsl-qspi.c
+@@ -876,6 +876,10 @@ static int fsl_qspi_probe(struct platform_device *pdev)
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ 					"QuadSPI-memory");
++	if (!res) {
++		ret = -EINVAL;
++		goto err_put_ctrl;
++	}
+ 	q->memmap_phy = res->start;
+ 	/* Since there are 4 cs, map size required is 4 times ahb_buf_size */
+ 	q->ahb_addr = devm_ioremap(dev, q->memmap_phy,
+diff --git a/drivers/spi/spi-img-spfi.c b/drivers/spi/spi-img-spfi.c
+index 5f05d519fbbd0..71376b6df89db 100644
+--- a/drivers/spi/spi-img-spfi.c
++++ b/drivers/spi/spi-img-spfi.c
+@@ -731,7 +731,7 @@ static int img_spfi_resume(struct device *dev)
+ 	int ret;
+ 
+ 	ret = pm_runtime_get_sync(dev);
+-	if (ret) {
++	if (ret < 0) {
+ 		pm_runtime_put_noidle(dev);
+ 		return ret;
+ 	}
+diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
+index cdc16eecaf6b5..a08215eb9e148 100644
+--- a/drivers/spi/spi-rockchip.c
++++ b/drivers/spi/spi-rockchip.c
+@@ -196,6 +196,8 @@ struct rockchip_spi {
+ 
+ 	bool slave_abort;
+ 	bool cs_inactive; /* spi slave transmission stops when cs inactive */
++	bool cs_high_supported; /* native CS supports active-high polarity */
++
+ 	struct spi_transfer *xfer; /* Store xfer temporarily */
+ };
+ 
+@@ -719,6 +721,11 @@ static int rockchip_spi_setup(struct spi_device *spi)
+ 	struct rockchip_spi *rs = spi_controller_get_devdata(spi->controller);
+ 	u32 cr0;
+ 
++	if (!spi->cs_gpiod && (spi->mode & SPI_CS_HIGH) && !rs->cs_high_supported) {
++		dev_warn(&spi->dev, "setup: non GPIO CS can't be active-high\n");
++		return -EINVAL;
++	}
++
+ 	pm_runtime_get_sync(rs->dev);
+ 
+ 	cr0 = readl_relaxed(rs->regs + ROCKCHIP_SPI_CTRLR0);
+@@ -899,6 +906,7 @@ static int rockchip_spi_probe(struct platform_device *pdev)
+ 
+ 	switch (readl_relaxed(rs->regs + ROCKCHIP_SPI_VERSION)) {
+ 	case ROCKCHIP_SPI_VER2_TYPE2:
++		rs->cs_high_supported = true;
+ 		ctlr->mode_bits |= SPI_CS_HIGH;
+ 		if (ctlr->can_dma && slave_mode)
+ 			rs->cs_inactive = true;
+diff --git a/drivers/spi/spi-rspi.c b/drivers/spi/spi-rspi.c
+index bd5708d7e5a15..7a014eeec2d0d 100644
+--- a/drivers/spi/spi-rspi.c
++++ b/drivers/spi/spi-rspi.c
+@@ -1108,14 +1108,11 @@ static struct dma_chan *rspi_request_dma_chan(struct device *dev,
+ 	}
+ 
+ 	memset(&cfg, 0, sizeof(cfg));
++	cfg.dst_addr = port_addr + RSPI_SPDR;
++	cfg.src_addr = port_addr + RSPI_SPDR;
++	cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
++	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ 	cfg.direction = dir;
+-	if (dir == DMA_MEM_TO_DEV) {
+-		cfg.dst_addr = port_addr;
+-		cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+-	} else {
+-		cfg.src_addr = port_addr;
+-		cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+-	}
+ 
+ 	ret = dmaengine_slave_config(chan, &cfg);
+ 	if (ret) {
+@@ -1146,12 +1143,12 @@ static int rspi_request_dma(struct device *dev, struct spi_controller *ctlr,
+ 	}
+ 
+ 	ctlr->dma_tx = rspi_request_dma_chan(dev, DMA_MEM_TO_DEV, dma_tx_id,
+-					     res->start + RSPI_SPDR);
++					     res->start);
+ 	if (!ctlr->dma_tx)
+ 		return -ENODEV;
+ 
+ 	ctlr->dma_rx = rspi_request_dma_chan(dev, DMA_DEV_TO_MEM, dma_rx_id,
+-					     res->start + RSPI_SPDR);
++					     res->start);
+ 	if (!ctlr->dma_rx) {
+ 		dma_release_channel(ctlr->dma_tx);
+ 		ctlr->dma_tx = NULL;
+diff --git a/drivers/spi/spi-stm32-qspi.c b/drivers/spi/spi-stm32-qspi.c
+index ffdc55f87e821..dd38cb8ffbc20 100644
+--- a/drivers/spi/spi-stm32-qspi.c
++++ b/drivers/spi/spi-stm32-qspi.c
+@@ -308,7 +308,8 @@ static int stm32_qspi_wait_cmd(struct stm32_qspi *qspi,
+ 	if (!op->data.nbytes)
+ 		goto wait_nobusy;
+ 
+-	if (readl_relaxed(qspi->io_base + QSPI_SR) & SR_TCF)
++	if ((readl_relaxed(qspi->io_base + QSPI_SR) & SR_TCF) ||
++	    qspi->fmode == CCR_FMODE_APM)
+ 		goto out;
+ 
+ 	reinit_completion(&qspi->data_completion);
+diff --git a/drivers/spi/spi-ti-qspi.c b/drivers/spi/spi-ti-qspi.c
+index e06aafe169e0c..081da1fd3fd7e 100644
+--- a/drivers/spi/spi-ti-qspi.c
++++ b/drivers/spi/spi-ti-qspi.c
+@@ -448,6 +448,7 @@ static int ti_qspi_dma_xfer(struct ti_qspi *qspi, dma_addr_t dma_dst,
+ 	enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
+ 	struct dma_async_tx_descriptor *tx;
+ 	int ret;
++	unsigned long time_left;
+ 
+ 	tx = dmaengine_prep_dma_memcpy(chan, dma_dst, dma_src, len, flags);
+ 	if (!tx) {
+@@ -467,9 +468,9 @@ static int ti_qspi_dma_xfer(struct ti_qspi *qspi, dma_addr_t dma_dst,
+ 	}
+ 
+ 	dma_async_issue_pending(chan);
+-	ret = wait_for_completion_timeout(&qspi->transfer_complete,
++	time_left = wait_for_completion_timeout(&qspi->transfer_complete,
+ 					  msecs_to_jiffies(len));
+-	if (ret <= 0) {
++	if (time_left == 0) {
+ 		dmaengine_terminate_sync(chan);
+ 		dev_err(qspi->dev, "DMA wait_for_completion_timeout\n");
+ 		return -ETIMEDOUT;
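wait_for_completion_timeout() returns an unsigned long: 0 when the timeout
elapsed, otherwise the number of jiffies left. Storing that in a signed int
named ret and testing ret <= 0 mixed up the types, which the hunk untangles.
In isolation (done and timeout_ms are placeholders):

	unsigned long time_left;

	time_left = wait_for_completion_timeout(&done,
						msecs_to_jiffies(timeout_ms));
	if (time_left == 0)
		return -ETIMEDOUT;  /* 0 means the wait timed out */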
+diff --git a/drivers/staging/media/hantro/hantro_drv.c b/drivers/staging/media/hantro/hantro_drv.c
+index dc768884cb79a..bd7d11032c94a 100644
+--- a/drivers/staging/media/hantro/hantro_drv.c
++++ b/drivers/staging/media/hantro/hantro_drv.c
+@@ -56,6 +56,10 @@ dma_addr_t hantro_get_ref(struct hantro_ctx *ctx, u64 ts)
+ 	return hantro_get_dec_buf_addr(ctx, buf);
+ }
+ 
++static const struct v4l2_event hantro_eos_event = {
++	.type = V4L2_EVENT_EOS
++};
++
+ static void hantro_job_finish_no_pm(struct hantro_dev *vpu,
+ 				    struct hantro_ctx *ctx,
+ 				    enum vb2_buffer_state result)
+@@ -73,6 +77,12 @@ static void hantro_job_finish_no_pm(struct hantro_dev *vpu,
+ 	src->sequence = ctx->sequence_out++;
+ 	dst->sequence = ctx->sequence_cap++;
+ 
++	if (v4l2_m2m_is_last_draining_src_buf(ctx->fh.m2m_ctx, src)) {
++		dst->flags |= V4L2_BUF_FLAG_LAST;
++		v4l2_event_queue_fh(&ctx->fh, &hantro_eos_event);
++		v4l2_m2m_mark_stopped(ctx->fh.m2m_ctx);
++	}
++
+ 	v4l2_m2m_buf_done_and_job_finish(ctx->dev->m2m_dev, ctx->fh.m2m_ctx,
+ 					 result);
+ }
+@@ -809,10 +819,13 @@ static int hantro_add_func(struct hantro_dev *vpu, unsigned int funcid)
+ 	snprintf(vfd->name, sizeof(vfd->name), "%s-%s", match->compatible,
+ 		 funcid == MEDIA_ENT_F_PROC_VIDEO_ENCODER ? "enc" : "dec");
+ 
+-	if (funcid == MEDIA_ENT_F_PROC_VIDEO_ENCODER)
++	if (funcid == MEDIA_ENT_F_PROC_VIDEO_ENCODER) {
+ 		vpu->encoder = func;
+-	else
++	} else {
+ 		vpu->decoder = func;
++		v4l2_disable_ioctl(vfd, VIDIOC_TRY_ENCODER_CMD);
++		v4l2_disable_ioctl(vfd, VIDIOC_ENCODER_CMD);
++	}
+ 
+ 	video_set_drvdata(vfd, vpu);
+ 
+diff --git a/drivers/staging/media/hantro/hantro_g2_hevc_dec.c b/drivers/staging/media/hantro/hantro_g2_hevc_dec.c
+index c524af41baf59..5f3178bac9c80 100644
+--- a/drivers/staging/media/hantro/hantro_g2_hevc_dec.c
++++ b/drivers/staging/media/hantro/hantro_g2_hevc_dec.c
+@@ -60,7 +60,7 @@ static void prepare_tile_info_buffer(struct hantro_ctx *ctx)
+ 					no_chroma = 1;
+ 				for (j = 0, tmp_w = 0; j < num_tile_cols - 1; j++) {
+ 					tmp_w += pps->column_width_minus1[j] + 1;
+-					*p++ = pps->column_width_minus1[j + 1];
++					*p++ = pps->column_width_minus1[j] + 1;
+ 					*p++ = h;
+ 					if (i == 0 && h == 1 && ctb_size == 16)
+ 						no_chroma = 1;
+@@ -180,13 +180,8 @@ static void set_params(struct hantro_ctx *ctx)
+ 		hantro_reg_write(vpu, &g2_max_cu_qpd_depth, 0);
+ 	}
+ 
+-	if (pps->flags & V4L2_HEVC_PPS_FLAG_PPS_SLICE_CHROMA_QP_OFFSETS_PRESENT) {
+-		hantro_reg_write(vpu, &g2_cb_qp_offset, pps->pps_cb_qp_offset);
+-		hantro_reg_write(vpu, &g2_cr_qp_offset, pps->pps_cr_qp_offset);
+-	} else {
+-		hantro_reg_write(vpu, &g2_cb_qp_offset, 0);
+-		hantro_reg_write(vpu, &g2_cr_qp_offset, 0);
+-	}
++	hantro_reg_write(vpu, &g2_cb_qp_offset, pps->pps_cb_qp_offset);
++	hantro_reg_write(vpu, &g2_cr_qp_offset, pps->pps_cr_qp_offset);
+ 
+ 	hantro_reg_write(vpu, &g2_filt_offset_beta, pps->pps_beta_offset_div2);
+ 	hantro_reg_write(vpu, &g2_filt_offset_tc, pps->pps_tc_offset_div2);
+diff --git a/drivers/staging/media/hantro/hantro_h264.c b/drivers/staging/media/hantro/hantro_h264.c
+index 0b4d2491be3b8..228629fb3cdf9 100644
+--- a/drivers/staging/media/hantro/hantro_h264.c
++++ b/drivers/staging/media/hantro/hantro_h264.c
+@@ -354,8 +354,6 @@ u16 hantro_h264_get_ref_nbr(struct hantro_ctx *ctx, unsigned int dpb_idx)
+ 
+ 	if (!(dpb->flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE))
+ 		return 0;
+-	if (dpb->flags & V4L2_H264_DPB_ENTRY_FLAG_LONG_TERM)
+-		return dpb->pic_num;
+ 	return dpb->frame_num;
+ }
+ 
+diff --git a/drivers/staging/media/hantro/hantro_v4l2.c b/drivers/staging/media/hantro/hantro_v4l2.c
+index 67148ba346f52..71a6279750bf7 100644
+--- a/drivers/staging/media/hantro/hantro_v4l2.c
++++ b/drivers/staging/media/hantro/hantro_v4l2.c
+@@ -628,6 +628,38 @@ static int vidioc_s_selection(struct file *file, void *priv,
+ 	return 0;
+ }
+ 
++static const struct v4l2_event hantro_eos_event = {
++	.type = V4L2_EVENT_EOS
++};
++
++static int vidioc_encoder_cmd(struct file *file, void *priv,
++			      struct v4l2_encoder_cmd *ec)
++{
++	struct hantro_ctx *ctx = fh_to_ctx(priv);
++	int ret;
++
++	ret = v4l2_m2m_ioctl_try_encoder_cmd(file, priv, ec);
++	if (ret < 0)
++		return ret;
++
++	if (!vb2_is_streaming(v4l2_m2m_get_src_vq(ctx->fh.m2m_ctx)) ||
++	    !vb2_is_streaming(v4l2_m2m_get_dst_vq(ctx->fh.m2m_ctx)))
++		return 0;
++
++	ret = v4l2_m2m_ioctl_encoder_cmd(file, priv, ec);
++	if (ret < 0)
++		return ret;
++
++	if (ec->cmd == V4L2_ENC_CMD_STOP &&
++	    v4l2_m2m_has_stopped(ctx->fh.m2m_ctx))
++		v4l2_event_queue_fh(&ctx->fh, &hantro_eos_event);
++
++	if (ec->cmd == V4L2_ENC_CMD_START)
++		vb2_clear_last_buffer_dequeued(&ctx->fh.m2m_ctx->cap_q_ctx.q);
++
++	return 0;
++}
++
+ const struct v4l2_ioctl_ops hantro_ioctl_ops = {
+ 	.vidioc_querycap = vidioc_querycap,
+ 	.vidioc_enum_framesizes = vidioc_enum_framesizes,
+@@ -657,6 +689,9 @@ const struct v4l2_ioctl_ops hantro_ioctl_ops = {
+ 
+ 	.vidioc_g_selection = vidioc_g_selection,
+ 	.vidioc_s_selection = vidioc_s_selection,
++
++	.vidioc_try_encoder_cmd = v4l2_m2m_ioctl_try_encoder_cmd,
++	.vidioc_encoder_cmd = vidioc_encoder_cmd,
+ };
+ 
+ static int
+@@ -733,8 +768,12 @@ static int hantro_buf_prepare(struct vb2_buffer *vb)
+ 	 * (for OUTPUT buffers, if userspace passes 0 bytesused, v4l2-core sets
+ 	 * it to buffer length).
+ 	 */
+-	if (V4L2_TYPE_IS_CAPTURE(vq->type))
+-		vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage);
++	if (V4L2_TYPE_IS_CAPTURE(vq->type)) {
++		if (ctx->is_encoder)
++			vb2_set_plane_payload(vb, 0, 0);
++		else
++			vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage);
++	}
+ 
+ 	return 0;
+ }
+@@ -744,6 +783,22 @@ static void hantro_buf_queue(struct vb2_buffer *vb)
+ 	struct hantro_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+ 	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ 
++	if (V4L2_TYPE_IS_CAPTURE(vb->vb2_queue->type) &&
++	    vb2_is_streaming(vb->vb2_queue) &&
++	    v4l2_m2m_dst_buf_is_last(ctx->fh.m2m_ctx)) {
++		unsigned int i;
++
++		for (i = 0; i < vb->num_planes; i++)
++			vb2_set_plane_payload(vb, i, 0);
++
++		vbuf->field = V4L2_FIELD_NONE;
++		vbuf->sequence = ctx->sequence_cap++;
++
++		v4l2_m2m_last_buffer_done(ctx->fh.m2m_ctx, vbuf);
++		v4l2_event_queue_fh(&ctx->fh, &hantro_eos_event);
++		return;
++	}
++
+ 	v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vbuf);
+ }
+ 
+@@ -759,6 +814,8 @@ static int hantro_start_streaming(struct vb2_queue *q, unsigned int count)
+ 	struct hantro_ctx *ctx = vb2_get_drv_priv(q);
+ 	int ret = 0;
+ 
++	v4l2_m2m_update_start_streaming_state(ctx->fh.m2m_ctx, q);
++
+ 	if (V4L2_TYPE_IS_OUTPUT(q->type))
+ 		ctx->sequence_out = 0;
+ 	else
+@@ -831,6 +888,12 @@ static void hantro_stop_streaming(struct vb2_queue *q)
+ 		hantro_return_bufs(q, v4l2_m2m_src_buf_remove);
+ 	else
+ 		hantro_return_bufs(q, v4l2_m2m_dst_buf_remove);
++
++	v4l2_m2m_update_stop_streaming_state(ctx->fh.m2m_ctx, q);
++
++	if (V4L2_TYPE_IS_OUTPUT(q->type) &&
++	    v4l2_m2m_has_stopped(ctx->fh.m2m_ctx))
++		v4l2_event_queue_fh(&ctx->fh, &hantro_eos_event);
+ }
+ 
+ static void hantro_buf_request_complete(struct vb2_buffer *vb)
+diff --git a/drivers/staging/media/rkvdec/rkvdec-h264.c b/drivers/staging/media/rkvdec/rkvdec-h264.c
+index 951e19231da21..22b4bf9e9ef40 100644
+--- a/drivers/staging/media/rkvdec/rkvdec-h264.c
++++ b/drivers/staging/media/rkvdec/rkvdec-h264.c
+@@ -112,6 +112,7 @@ struct rkvdec_h264_run {
+ 	const struct v4l2_ctrl_h264_sps *sps;
+ 	const struct v4l2_ctrl_h264_pps *pps;
+ 	const struct v4l2_ctrl_h264_scaling_matrix *scaling_matrix;
++	int ref_buf_idx[V4L2_H264_NUM_DPB_ENTRIES];
+ };
+ 
+ struct rkvdec_h264_ctx {
+@@ -661,8 +662,8 @@ static void assemble_hw_pps(struct rkvdec_ctx *ctx,
+ 	WRITE_PPS(0xff, PROFILE_IDC);
+ 	WRITE_PPS(1, CONSTRAINT_SET3_FLAG);
+ 	WRITE_PPS(sps->chroma_format_idc, CHROMA_FORMAT_IDC);
+-	WRITE_PPS(sps->bit_depth_luma_minus8 + 8, BIT_DEPTH_LUMA);
+-	WRITE_PPS(sps->bit_depth_chroma_minus8 + 8, BIT_DEPTH_CHROMA);
++	WRITE_PPS(sps->bit_depth_luma_minus8, BIT_DEPTH_LUMA);
++	WRITE_PPS(sps->bit_depth_chroma_minus8, BIT_DEPTH_CHROMA);
+ 	WRITE_PPS(0, QPPRIME_Y_ZERO_TRANSFORM_BYPASS_FLAG);
+ 	WRITE_PPS(sps->log2_max_frame_num_minus4, LOG2_MAX_FRAME_NUM_MINUS4);
+ 	WRITE_PPS(sps->max_num_ref_frames, MAX_NUM_REF_FRAMES);
+@@ -725,6 +726,26 @@ static void assemble_hw_pps(struct rkvdec_ctx *ctx,
+ 	}
+ }
+ 
++static void lookup_ref_buf_idx(struct rkvdec_ctx *ctx,
++			       struct rkvdec_h264_run *run)
++{
++	const struct v4l2_ctrl_h264_decode_params *dec_params = run->decode_params;
++	u32 i;
++
++	for (i = 0; i < ARRAY_SIZE(dec_params->dpb); i++) {
++		struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx;
++		const struct v4l2_h264_dpb_entry *dpb = run->decode_params->dpb;
++		struct vb2_queue *cap_q = &m2m_ctx->cap_q_ctx.q;
++		int buf_idx = -1;
++
++		if (dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE)
++			buf_idx = vb2_find_timestamp(cap_q,
++						     dpb[i].reference_ts, 0);
++
++		run->ref_buf_idx[i] = buf_idx;
++	}
++}
++
+ static void assemble_hw_rps(struct rkvdec_ctx *ctx,
+ 			    struct rkvdec_h264_run *run)
+ {
+@@ -762,7 +783,7 @@ static void assemble_hw_rps(struct rkvdec_ctx *ctx,
+ 
+ 	for (j = 0; j < RKVDEC_NUM_REFLIST; j++) {
+ 		for (i = 0; i < h264_ctx->reflists.num_valid; i++) {
+-			u8 dpb_valid = 0;
++			bool dpb_valid = run->ref_buf_idx[i] >= 0;
+ 			u8 idx = 0;
+ 
+ 			switch (j) {
+@@ -779,8 +800,6 @@ static void assemble_hw_rps(struct rkvdec_ctx *ctx,
+ 
+ 			if (idx >= ARRAY_SIZE(dec_params->dpb))
+ 				continue;
+-			dpb_valid = !!(dpb[idx].flags &
+-				       V4L2_H264_DPB_ENTRY_FLAG_ACTIVE);
+ 
+ 			set_ps_field(hw_rps, DPB_INFO(i, j),
+ 				     idx | dpb_valid << 4);
+@@ -859,13 +878,8 @@ get_ref_buf(struct rkvdec_ctx *ctx, struct rkvdec_h264_run *run,
+ 	    unsigned int dpb_idx)
+ {
+ 	struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx;
+-	const struct v4l2_h264_dpb_entry *dpb = run->decode_params->dpb;
+ 	struct vb2_queue *cap_q = &m2m_ctx->cap_q_ctx.q;
+-	int buf_idx = -1;
+-
+-	if (dpb[dpb_idx].flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE)
+-		buf_idx = vb2_find_timestamp(cap_q,
+-					     dpb[dpb_idx].reference_ts, 0);
++	int buf_idx = run->ref_buf_idx[dpb_idx];
+ 
+ 	/*
+ 	 * If a DPB entry is unused or invalid, address of current destination
+@@ -1102,6 +1116,7 @@ static int rkvdec_h264_run(struct rkvdec_ctx *ctx)
+ 
+ 	assemble_hw_scaling_list(ctx, &run);
+ 	assemble_hw_pps(ctx, &run);
++	lookup_ref_buf_idx(ctx, &run);
+ 	assemble_hw_rps(ctx, &run);
+ 	config_registers(ctx, &run);
+ 
+diff --git a/drivers/staging/r8188eu/os_dep/ioctl_linux.c b/drivers/staging/r8188eu/os_dep/ioctl_linux.c
+index eb9375b0c6606..60bd1cc2b3afd 100644
+--- a/drivers/staging/r8188eu/os_dep/ioctl_linux.c
++++ b/drivers/staging/r8188eu/os_dep/ioctl_linux.c
+@@ -1131,9 +1131,11 @@ static int rtw_wx_set_scan(struct net_device *dev, struct iw_request_info *a,
+ 						break;
+ 					}
+ 					sec_len = *(pos++); len -= 1;
+-					if (sec_len > 0 && sec_len <= len) {
++					if (sec_len > 0 &&
++					    sec_len <= len &&
++					    sec_len <= 32) {
+ 						ssid[ssid_index].SsidLength = sec_len;
+-						memcpy(ssid[ssid_index].Ssid, pos, ssid[ssid_index].SsidLength);
++						memcpy(ssid[ssid_index].Ssid, pos, sec_len);
+ 						ssid_index++;
+ 					}
+ 					pos += sec_len;
+@@ -1886,88 +1888,6 @@ static int rtw_wx_get_nick(struct net_device *dev,
+ 	return 0;
+ }
+ 
+-static int rtw_wx_read32(struct net_device *dev,
+-			    struct iw_request_info *info,
+-			    union iwreq_data *wrqu, char *extra)
+-{
+-	struct adapter *padapter;
+-	struct iw_point *p;
+-	u16 len;
+-	u32 addr;
+-	u32 data32;
+-	u32 bytes;
+-	u8 *ptmp;
+-	int ret;
+-
+-	padapter = (struct adapter *)rtw_netdev_priv(dev);
+-	p = &wrqu->data;
+-	len = p->length;
+-	ptmp = memdup_user(p->pointer, len);
+-	if (IS_ERR(ptmp))
+-		return PTR_ERR(ptmp);
+-
+-	bytes = 0;
+-	addr = 0;
+-	sscanf(ptmp, "%d,%x", &bytes, &addr);
+-
+-	switch (bytes) {
+-	case 1:
+-		data32 = rtw_read8(padapter, addr);
+-		sprintf(extra, "0x%02X", data32);
+-		break;
+-	case 2:
+-		data32 = rtw_read16(padapter, addr);
+-		sprintf(extra, "0x%04X", data32);
+-		break;
+-	case 4:
+-		data32 = rtw_read32(padapter, addr);
+-		sprintf(extra, "0x%08X", data32);
+-		break;
+-	default:
+-		ret = -EINVAL;
+-		goto err_free_ptmp;
+-	}
+-
+-	kfree(ptmp);
+-	return 0;
+-
+-err_free_ptmp:
+-	kfree(ptmp);
+-	return ret;
+-}
+-
+-static int rtw_wx_write32(struct net_device *dev,
+-			    struct iw_request_info *info,
+-			    union iwreq_data *wrqu, char *extra)
+-{
+-	struct adapter *padapter = (struct adapter *)rtw_netdev_priv(dev);
+-
+-	u32 addr;
+-	u32 data32;
+-	u32 bytes;
+-
+-	bytes = 0;
+-	addr = 0;
+-	data32 = 0;
+-	sscanf(extra, "%d,%x,%x", &bytes, &addr, &data32);
+-
+-	switch (bytes) {
+-	case 1:
+-		rtw_write8(padapter, addr, (u8)data32);
+-		break;
+-	case 2:
+-		rtw_write16(padapter, addr, (u16)data32);
+-		break;
+-	case 4:
+-		rtw_write32(padapter, addr, data32);
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	return 0;
+-}
+-
+ static int rtw_wx_read_rf(struct net_device *dev,
+ 			    struct iw_request_info *info,
+ 			    union iwreq_data *wrqu, char *extra)
+@@ -3895,8 +3815,8 @@ static const struct iw_priv_args rtw_private_args[] = {
+ };
+ 
+ static iw_handler rtw_private_handler[] = {
+-rtw_wx_write32,				/* 0x00 */
+-rtw_wx_read32,				/* 0x01 */
++	NULL,				/* 0x00 */
++	NULL,				/* 0x01 */
+ 	NULL,				/* 0x02 */
+ NULL,					/* 0x03 */
+ /*  for MM DTV platform */
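The scan-request fix above is the usual rule for copying a wire-supplied
length into a fixed buffer: bound it by both the remaining input and the
destination size (an SSID is at most 32 bytes, IW_ESSID_MAX_SIZE). A generic
sketch, with dst and skip_entry standing in for the real SSID slot and exit
label:

	/* Never trust a length parsed from the request buffer. */
	if (sec_len == 0 || sec_len > len || sec_len > sizeof(dst->Ssid))
		goto skip_entry;
	memcpy(dst->Ssid, pos, sec_len);
	dst->SsidLength = sec_len;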
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index 44bb380e7390c..fa866acef5bb2 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -850,7 +850,6 @@ bool target_configure_unmap_from_queue(struct se_dev_attrib *attrib,
+ 	attrib->unmap_granularity = q->limits.discard_granularity / block_size;
+ 	attrib->unmap_granularity_alignment = q->limits.discard_alignment /
+ 								block_size;
+-	attrib->unmap_zeroes_data = !!(q->limits.max_write_zeroes_sectors);
+ 	return true;
+ }
+ EXPORT_SYMBOL(target_configure_unmap_from_queue);
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index fd7267baa7078..3deaeecb712e3 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -20,6 +20,7 @@
+ #include <linux/configfs.h>
+ #include <linux/mutex.h>
+ #include <linux/workqueue.h>
++#include <linux/pagemap.h>
+ #include <net/genetlink.h>
+ #include <scsi/scsi_common.h>
+ #include <scsi/scsi_proto.h>
+@@ -1660,17 +1661,37 @@ static int tcmu_check_and_free_pending_cmd(struct tcmu_cmd *cmd)
+ static u32 tcmu_blocks_release(struct tcmu_dev *udev, unsigned long first,
+ 				unsigned long last)
+ {
+-	XA_STATE(xas, &udev->data_pages, first * udev->data_pages_per_blk);
+ 	struct page *page;
++	unsigned long dpi;
+ 	u32 pages_freed = 0;
+ 
+-	xas_lock(&xas);
+-	xas_for_each(&xas, page, (last + 1) * udev->data_pages_per_blk - 1) {
+-		xas_store(&xas, NULL);
++	first = first * udev->data_pages_per_blk;
++	last = (last + 1) * udev->data_pages_per_blk - 1;
++	xa_for_each_range(&udev->data_pages, dpi, page, first, last) {
++		xa_erase(&udev->data_pages, dpi);
++		/*
++		 * While reaching here there may be page faults occurring on
++		 * the to-be-released pages. A race condition may occur if
++		 * unmap_mapping_range() is called before page faults on these
++		 * pages have completed; a valid but stale map is created.
++		 *
++		 * If another command subsequently runs and needs to extend
++		 * dbi_thresh, it may reuse the slot corresponding to the
++		 * previous page in data_bitmap. Though we will allocate a new
++		 * page for the slot in data_area, no page fault will happen
++		 * because we have a valid map. Therefore the command's data
++		 * will be lost.
++		 *
++		 * We lock and unlock pages that are to be released to ensure
++		 * all page faults have completed. This way
++		 * unmap_mapping_range() can ensure stale maps are cleanly
++		 * removed.
++		 */
++		lock_page(page);
++		unlock_page(page);
+ 		__free_page(page);
+ 		pages_freed++;
+ 	}
+-	xas_unlock(&xas);
+ 
+ 	atomic_sub(pages_freed, &global_page_count);
+ 
+@@ -1822,6 +1843,7 @@ static struct page *tcmu_try_get_data_page(struct tcmu_dev *udev, uint32_t dpi)
+ 	page = xa_load(&udev->data_pages, dpi);
+ 	if (likely(page)) {
+ 		get_page(page);
++		lock_page(page);
+ 		mutex_unlock(&udev->cmdr_lock);
+ 		return page;
+ 	}
+@@ -1863,6 +1885,7 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
+ 	struct page *page;
+ 	unsigned long offset;
+ 	void *addr;
++	vm_fault_t ret = 0;
+ 
+ 	int mi = tcmu_find_mem_index(vmf->vma);
+ 	if (mi < 0)
+@@ -1887,10 +1910,11 @@ static vm_fault_t tcmu_vma_fault(struct vm_fault *vmf)
+ 		page = tcmu_try_get_data_page(udev, dpi);
+ 		if (!page)
+ 			return VM_FAULT_SIGBUS;
++		ret = VM_FAULT_LOCKED;
+ 	}
+ 
+ 	vmf->page = page;
+-	return 0;
++	return ret;
+ }
+ 
+ static const struct vm_operations_struct tcmu_vm_ops = {
+@@ -3205,12 +3229,22 @@ static void find_free_blocks(void)
+ 			udev->dbi_max = block;
+ 		}
+ 
++		/*
++		 * Release the block pages.
++		 *
++		 * Also note that since tcmu_vma_fault() gets an extra page
++		 * refcount, tcmu_blocks_release() won't free pages if pages
++		 * are mapped. This means it is safe to call
++		 * tcmu_blocks_release() before unmap_mapping_range() which
++		 * drops the refcount of any pages it unmaps and thus releases
++		 * them.
++		 */
++		pages_freed = tcmu_blocks_release(udev, start, end - 1);
++
+ 		/* Here will truncate the data area from off */
+ 		off = udev->data_off + (loff_t)start * udev->data_blk_size;
+ 		unmap_mapping_range(udev->inode->i_mapping, off, 0, 1);
+ 
+-		/* Release the block pages */
+-		pages_freed = tcmu_blocks_release(udev, start, end - 1);
+ 		mutex_unlock(&udev->cmdr_lock);
+ 
+ 		total_pages_freed += pages_freed;
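The tcmu rework leans on the ->fault() locking contract: returning
VM_FAULT_LOCKED tells the core mm that the handler holds the page lock, and
the core only unlocks it after the mapping is installed. That is what makes
the lock_page()/unlock_page() pair in tcmu_blocks_release() wait out any
in-flight faults before the page is freed. A stripped-down handler following
the contract (the lookup helper is hypothetical):

	static vm_fault_t demo_fault(struct vm_fault *vmf)
	{
		struct page *page = demo_lookup_page(vmf->pgoff);

		if (!page)
			return VM_FAULT_SIGBUS;
		get_page(page);
		lock_page(page);  /* stays locked until the PTE is in place */
		vmf->page = page;
		return VM_FAULT_LOCKED;
	}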
+diff --git a/drivers/thermal/broadcom/bcm2711_thermal.c b/drivers/thermal/broadcom/bcm2711_thermal.c
+index 1ec57d9ecf539..e9bef5c3414b6 100644
+--- a/drivers/thermal/broadcom/bcm2711_thermal.c
++++ b/drivers/thermal/broadcom/bcm2711_thermal.c
+@@ -38,7 +38,6 @@ static int bcm2711_get_temp(void *data, int *temp)
+ 	int offset = thermal_zone_get_offset(priv->thermal);
+ 	u32 val;
+ 	int ret;
+-	long t;
+ 
+ 	ret = regmap_read(priv->regmap, AVS_RO_TEMP_STATUS, &val);
+ 	if (ret)
+@@ -50,9 +49,7 @@ static int bcm2711_get_temp(void *data, int *temp)
+ 	val &= AVS_RO_TEMP_STATUS_DATA_MSK;
+ 
+ 	/* Convert a HW code to a temperature reading (millidegree celsius) */
+-	t = slope * val + offset;
+-
+-	*temp = t < 0 ? 0 : t;
++	*temp = slope * val + offset;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/thermal/broadcom/sr-thermal.c b/drivers/thermal/broadcom/sr-thermal.c
+index 475ce29007713..85ab9edd580cc 100644
+--- a/drivers/thermal/broadcom/sr-thermal.c
++++ b/drivers/thermal/broadcom/sr-thermal.c
+@@ -60,6 +60,9 @@ static int sr_thermal_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++	if (!res)
++		return -ENOENT;
++
+ 	sr_thermal->regs = (void __iomem *)devm_memremap(&pdev->dev, res->start,
+ 							 resource_size(res),
+ 							 MEMREMAP_WB);
+diff --git a/drivers/thermal/devfreq_cooling.c b/drivers/thermal/devfreq_cooling.c
+index 4310cb342a9fb..d38a80adec733 100644
+--- a/drivers/thermal/devfreq_cooling.c
++++ b/drivers/thermal/devfreq_cooling.c
+@@ -358,21 +358,28 @@ of_devfreq_cooling_register_power(struct device_node *np, struct devfreq *df,
+ 	struct thermal_cooling_device *cdev;
+ 	struct device *dev = df->dev.parent;
+ 	struct devfreq_cooling_device *dfc;
++	struct thermal_cooling_device_ops *ops;
+ 	char *name;
+ 	int err, num_opps;
+ 
+-	dfc = kzalloc(sizeof(*dfc), GFP_KERNEL);
+-	if (!dfc)
++	ops = kmemdup(&devfreq_cooling_ops, sizeof(*ops), GFP_KERNEL);
++	if (!ops)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	dfc = kzalloc(sizeof(*dfc), GFP_KERNEL);
++	if (!dfc) {
++		err = -ENOMEM;
++		goto free_ops;
++	}
++
+ 	dfc->devfreq = df;
+ 
+ 	dfc->em_pd = em_pd_get(dev);
+ 	if (dfc->em_pd) {
+-		devfreq_cooling_ops.get_requested_power =
++		ops->get_requested_power =
+ 			devfreq_cooling_get_requested_power;
+-		devfreq_cooling_ops.state2power = devfreq_cooling_state2power;
+-		devfreq_cooling_ops.power2state = devfreq_cooling_power2state;
++		ops->state2power = devfreq_cooling_state2power;
++		ops->power2state = devfreq_cooling_power2state;
+ 
+ 		dfc->power_ops = dfc_power;
+ 
+@@ -407,8 +414,7 @@ of_devfreq_cooling_register_power(struct device_node *np, struct devfreq *df,
+ 	if (!name)
+ 		goto remove_qos_req;
+ 
+-	cdev = thermal_of_cooling_device_register(np, name, dfc,
+-						  &devfreq_cooling_ops);
++	cdev = thermal_of_cooling_device_register(np, name, dfc, ops);
+ 	kfree(name);
+ 
+ 	if (IS_ERR(cdev)) {
+@@ -429,6 +435,8 @@ free_table:
+ 	kfree(dfc->freq_table);
+ free_dfc:
+ 	kfree(dfc);
++free_ops:
++	kfree(ops);
+ 
+ 	return ERR_PTR(err);
+ }
+@@ -510,11 +518,13 @@ EXPORT_SYMBOL_GPL(devfreq_cooling_em_register);
+ void devfreq_cooling_unregister(struct thermal_cooling_device *cdev)
+ {
+ 	struct devfreq_cooling_device *dfc;
++	const struct thermal_cooling_device_ops *ops;
+ 	struct device *dev;
+ 
+ 	if (IS_ERR_OR_NULL(cdev))
+ 		return;
+ 
++	ops = cdev->ops;
+ 	dfc = cdev->devdata;
+ 	dev = dfc->devfreq->dev.parent;
+ 
+@@ -525,5 +535,6 @@ void devfreq_cooling_unregister(struct thermal_cooling_device *cdev)
+ 
+ 	kfree(dfc->freq_table);
+ 	kfree(dfc);
++	kfree(ops);
+ }
+ EXPORT_SYMBOL_GPL(devfreq_cooling_unregister);
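The devfreq_cooling fix stops writing through the shared, file-scope
devfreq_cooling_ops: with two devices registered, one with an energy model
and one without, the later registration would silently rewire the callbacks
of the earlier one. Duplicating the ops per instance is the general remedy,
sketched with hypothetical names:

	struct thermal_cooling_device_ops *ops;

	ops = kmemdup(&template_ops, sizeof(*ops), GFP_KERNEL);  /* per device */
	if (!ops)
		return ERR_PTR(-ENOMEM);
	if (dev_has_energy_model)
		ops->get_requested_power = my_get_requested_power;

The copy is then freed in the unregister path alongside the device data, as
the hunk above does.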
+diff --git a/drivers/thermal/imx_sc_thermal.c b/drivers/thermal/imx_sc_thermal.c
+index 8d76dbfde6a9f..331a241eb0ef3 100644
+--- a/drivers/thermal/imx_sc_thermal.c
++++ b/drivers/thermal/imx_sc_thermal.c
+@@ -94,8 +94,8 @@ static int imx_sc_thermal_probe(struct platform_device *pdev)
+ 		sensor = devm_kzalloc(&pdev->dev, sizeof(*sensor), GFP_KERNEL);
+ 		if (!sensor) {
+ 			of_node_put(child);
+-			of_node_put(sensor_np);
+-			return -ENOMEM;
++			ret = -ENOMEM;
++			goto put_node;
+ 		}
+ 
+ 		ret = thermal_zone_of_get_sensor_id(child,
+@@ -124,7 +124,9 @@ static int imx_sc_thermal_probe(struct platform_device *pdev)
+ 			dev_warn(&pdev->dev, "failed to add hwmon sysfs attributes\n");
+ 	}
+ 
++put_node:
+ 	of_node_put(sensor_np);
++	of_node_put(np);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 82654dc8382b8..cdc0552e8c42e 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -947,6 +947,7 @@ __thermal_cooling_device_register(struct device_node *np,
+ 	return cdev;
+ 
+ out_kfree_type:
++	thermal_cooling_device_destroy_sysfs(cdev);
+ 	kfree(cdev->type);
+ 	put_device(&cdev->device);
+ 	cdev = NULL;
+diff --git a/drivers/tty/goldfish.c b/drivers/tty/goldfish.c
+index ad13532e92fe2..9e8ccb8ed6d69 100644
+--- a/drivers/tty/goldfish.c
++++ b/drivers/tty/goldfish.c
+@@ -61,13 +61,13 @@ static void do_rw_io(struct goldfish_tty *qtty,
+ 	spin_lock_irqsave(&qtty->lock, irq_flags);
+ 	gf_write_ptr((void *)address, base + GOLDFISH_TTY_REG_DATA_PTR,
+ 		     base + GOLDFISH_TTY_REG_DATA_PTR_HIGH);
+-	__raw_writel(count, base + GOLDFISH_TTY_REG_DATA_LEN);
++	gf_iowrite32(count, base + GOLDFISH_TTY_REG_DATA_LEN);
+ 
+ 	if (is_write)
+-		__raw_writel(GOLDFISH_TTY_CMD_WRITE_BUFFER,
++		gf_iowrite32(GOLDFISH_TTY_CMD_WRITE_BUFFER,
+ 		       base + GOLDFISH_TTY_REG_CMD);
+ 	else
+-		__raw_writel(GOLDFISH_TTY_CMD_READ_BUFFER,
++		gf_iowrite32(GOLDFISH_TTY_CMD_READ_BUFFER,
+ 		       base + GOLDFISH_TTY_REG_CMD);
+ 
+ 	spin_unlock_irqrestore(&qtty->lock, irq_flags);
+@@ -142,7 +142,7 @@ static irqreturn_t goldfish_tty_interrupt(int irq, void *dev_id)
+ 	unsigned char *buf;
+ 	u32 count;
+ 
+-	count = __raw_readl(base + GOLDFISH_TTY_REG_BYTES_READY);
++	count = gf_ioread32(base + GOLDFISH_TTY_REG_BYTES_READY);
+ 	if (count == 0)
+ 		return IRQ_NONE;
+ 
+@@ -159,7 +159,7 @@ static int goldfish_tty_activate(struct tty_port *port, struct tty_struct *tty)
+ {
+ 	struct goldfish_tty *qtty = container_of(port, struct goldfish_tty,
+ 									port);
+-	__raw_writel(GOLDFISH_TTY_CMD_INT_ENABLE, qtty->base + GOLDFISH_TTY_REG_CMD);
++	gf_iowrite32(GOLDFISH_TTY_CMD_INT_ENABLE, qtty->base + GOLDFISH_TTY_REG_CMD);
+ 	return 0;
+ }
+ 
+@@ -167,7 +167,7 @@ static void goldfish_tty_shutdown(struct tty_port *port)
+ {
+ 	struct goldfish_tty *qtty = container_of(port, struct goldfish_tty,
+ 									port);
+-	__raw_writel(GOLDFISH_TTY_CMD_INT_DISABLE, qtty->base + GOLDFISH_TTY_REG_CMD);
++	gf_iowrite32(GOLDFISH_TTY_CMD_INT_DISABLE, qtty->base + GOLDFISH_TTY_REG_CMD);
+ }
+ 
+ static int goldfish_tty_open(struct tty_struct *tty, struct file *filp)
+@@ -202,7 +202,7 @@ static unsigned int goldfish_tty_chars_in_buffer(struct tty_struct *tty)
+ {
+ 	struct goldfish_tty *qtty = &goldfish_ttys[tty->index];
+ 	void __iomem *base = qtty->base;
+-	return __raw_readl(base + GOLDFISH_TTY_REG_BYTES_READY);
++	return gf_ioread32(base + GOLDFISH_TTY_REG_BYTES_READY);
+ }
+ 
+ static void goldfish_tty_console_write(struct console *co, const char *b,
+@@ -355,7 +355,7 @@ static int goldfish_tty_probe(struct platform_device *pdev)
+ 	 * on Ranchu emulator (qemu2) returns 1 here and
+ 	 * driver will use physical addresses.
+ 	 */
+-	qtty->version = __raw_readl(base + GOLDFISH_TTY_REG_VERSION);
++	qtty->version = gf_ioread32(base + GOLDFISH_TTY_REG_VERSION);
+ 
+ 	/*
+ 	 * Goldfish TTY device on Ranchu emulator (qemu2)
+@@ -374,7 +374,7 @@ static int goldfish_tty_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	__raw_writel(GOLDFISH_TTY_CMD_INT_DISABLE, base + GOLDFISH_TTY_REG_CMD);
++	gf_iowrite32(GOLDFISH_TTY_CMD_INT_DISABLE, base + GOLDFISH_TTY_REG_CMD);
+ 
+ 	ret = request_irq(irq, goldfish_tty_interrupt, IRQF_SHARED,
+ 			  "goldfish_tty", qtty);
+@@ -436,7 +436,7 @@ static int goldfish_tty_remove(struct platform_device *pdev)
+ #ifdef CONFIG_GOLDFISH_TTY_EARLY_CONSOLE
+ static void gf_early_console_putchar(struct uart_port *port, unsigned char ch)
+ {
+-	__raw_writel(ch, port->membase);
++	gf_iowrite32(ch, port->membase);
+ }
+ 
+ static void gf_early_write(struct console *con, const char *s, unsigned int n)
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index fd8b86dde5255..ea5381dedb07c 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -444,6 +444,25 @@ static u8 gsm_encode_modem(const struct gsm_dlci *dlci)
+ 	return modembits;
+ }
+ 
++static void gsm_hex_dump_bytes(const char *fname, const u8 *data,
++			       unsigned long len)
++{
++	char *prefix;
++
++	if (!fname) {
++		print_hex_dump(KERN_INFO, "", DUMP_PREFIX_NONE, 16, 1, data, len,
++			       true);
++		return;
++	}
++
++	prefix = kasprintf(GFP_KERNEL, "%s: ", fname);
++	if (!prefix)
++		return;
++	print_hex_dump(KERN_INFO, prefix, DUMP_PREFIX_OFFSET, 16, 1, data, len,
++		       true);
++	kfree(prefix);
++}
++
+ /**
+  *	gsm_print_packet	-	display a frame for debug
+  *	@hdr: header to print before decode
+@@ -508,7 +527,7 @@ static void gsm_print_packet(const char *hdr, int addr, int cr,
+ 	else
+ 		pr_cont("(F)");
+ 
+-	print_hex_dump_bytes("", DUMP_PREFIX_NONE, data, dlen);
++	gsm_hex_dump_bytes(NULL, data, dlen);
+ }
+ 
+ 
+@@ -698,9 +717,7 @@ static void gsm_data_kick(struct gsm_mux *gsm, struct gsm_dlci *dlci)
+ 		}
+ 
+ 		if (debug & 4)
+-			print_hex_dump_bytes("gsm_data_kick: ",
+-					     DUMP_PREFIX_OFFSET,
+-					     gsm->txframe, len);
++			gsm_hex_dump_bytes(__func__, gsm->txframe, len);
+ 		if (gsmld_output(gsm, gsm->txframe, len) <= 0)
+ 			break;
+ 		/* FIXME: Can eliminate one SOF in many more cases */
+@@ -2448,8 +2465,7 @@ static int gsmld_output(struct gsm_mux *gsm, u8 *data, int len)
+ 		return -ENOSPC;
+ 	}
+ 	if (debug & 4)
+-		print_hex_dump_bytes("gsmld_output: ", DUMP_PREFIX_OFFSET,
+-				     data, len);
++		gsm_hex_dump_bytes(__func__, data, len);
+ 	return gsm->tty->ops->write(gsm->tty, data, len);
+ }
+ 
+@@ -2525,8 +2541,7 @@ static void gsmld_receive_buf(struct tty_struct *tty, const unsigned char *cp,
+ 	char flags = TTY_NORMAL;
+ 
+ 	if (debug & 4)
+-		print_hex_dump_bytes("gsmld_receive: ", DUMP_PREFIX_OFFSET,
+-				     cp, count);
++		gsm_hex_dump_bytes(__func__, cp, count);
+ 
+ 	for (; count; count--, cp++) {
+ 		if (fp)
+diff --git a/drivers/tty/serial/pch_uart.c b/drivers/tty/serial/pch_uart.c
+index affe71f8b50c7..937636ecdc3c3 100644
+--- a/drivers/tty/serial/pch_uart.c
++++ b/drivers/tty/serial/pch_uart.c
+@@ -624,22 +624,6 @@ static int push_rx(struct eg20t_port *priv, const unsigned char *buf,
+ 	return 0;
+ }
+ 
+-static int pop_tx_x(struct eg20t_port *priv, unsigned char *buf)
+-{
+-	int ret = 0;
+-	struct uart_port *port = &priv->port;
+-
+-	if (port->x_char) {
+-		dev_dbg(priv->port.dev, "%s:X character send %02x (%lu)\n",
+-			__func__, port->x_char, jiffies);
+-		buf[0] = port->x_char;
+-		port->x_char = 0;
+-		ret = 1;
+-	}
+-
+-	return ret;
+-}
+-
+ static int dma_push_rx(struct eg20t_port *priv, int size)
+ {
+ 	int room;
+@@ -889,9 +873,10 @@ static unsigned int handle_tx(struct eg20t_port *priv)
+ 
+ 	fifo_size = max(priv->fifo_size, 1);
+ 	tx_empty = 1;
+-	if (pop_tx_x(priv, xmit->buf)) {
+-		pch_uart_hal_write(priv, xmit->buf, 1);
++	if (port->x_char) {
++		pch_uart_hal_write(priv, &port->x_char, 1);
+ 		port->icount.tx++;
++		port->x_char = 0;
+ 		tx_empty = 0;
+ 		fifo_size--;
+ 	}
+@@ -946,9 +931,11 @@ static unsigned int dma_handle_tx(struct eg20t_port *priv)
+ 	}
+ 
+ 	fifo_size = max(priv->fifo_size, 1);
+-	if (pop_tx_x(priv, xmit->buf)) {
+-		pch_uart_hal_write(priv, xmit->buf, 1);
++
++	if (port->x_char) {
++		pch_uart_hal_write(priv, &port->x_char, 1);
+ 		port->icount.tx++;
++		port->x_char = 0;
+ 		fifo_size--;
+ 	}
+ 
+diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
+index 646510476c304..bfa431a8e6902 100644
+--- a/drivers/tty/tty_buffer.c
++++ b/drivers/tty/tty_buffer.c
+@@ -175,7 +175,8 @@ static struct tty_buffer *tty_buffer_alloc(struct tty_port *port, size_t size)
+ 	 */
+ 	if (atomic_read(&port->buf.mem_used) > port->buf.mem_limit)
+ 		return NULL;
+-	p = kmalloc(sizeof(struct tty_buffer) + 2 * size, GFP_ATOMIC);
++	p = kmalloc(sizeof(struct tty_buffer) + 2 * size,
++		    GFP_ATOMIC | __GFP_NOWARN);
+ 	if (p == NULL)
+ 		return NULL;
+ 
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index d9712c2602afe..06eea8848ccc2 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -2816,6 +2816,7 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ {
+ 	int retval;
+ 	struct usb_device *rhdev;
++	struct usb_hcd *shared_hcd;
+ 
+ 	if (!hcd->skip_phy_initialization && usb_hcd_is_primary_hcd(hcd)) {
+ 		hcd->phy_roothub = usb_phy_roothub_alloc(hcd->self.sysdev);
+@@ -2976,13 +2977,26 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ 		goto err_hcd_driver_start;
+ 	}
+ 
++	/* starting here, usbcore will pay attention to the shared HCD roothub */
++	shared_hcd = hcd->shared_hcd;
++	if (!usb_hcd_is_primary_hcd(hcd) && shared_hcd && HCD_DEFER_RH_REGISTER(shared_hcd)) {
++		retval = register_root_hub(shared_hcd);
++		if (retval != 0)
++			goto err_register_root_hub;
++
++		if (shared_hcd->uses_new_polling && HCD_POLL_RH(shared_hcd))
++			usb_hcd_poll_rh_status(shared_hcd);
++	}
++
+ 	/* starting here, usbcore will pay attention to this root hub */
+-	retval = register_root_hub(hcd);
+-	if (retval != 0)
+-		goto err_register_root_hub;
++	if (!HCD_DEFER_RH_REGISTER(hcd)) {
++		retval = register_root_hub(hcd);
++		if (retval != 0)
++			goto err_register_root_hub;
+ 
+-	if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
+-		usb_hcd_poll_rh_status(hcd);
++		if (hcd->uses_new_polling && HCD_POLL_RH(hcd))
++			usb_hcd_poll_rh_status(hcd);
++	}
+ 
+ 	return retval;
+ 
+@@ -3020,6 +3034,7 @@ EXPORT_SYMBOL_GPL(usb_add_hcd);
+ void usb_remove_hcd(struct usb_hcd *hcd)
+ {
+ 	struct usb_device *rhdev = hcd->self.root_hub;
++	bool rh_registered;
+ 
+ 	dev_info(hcd->self.controller, "remove, state %x\n", hcd->state);
+ 
+@@ -3030,6 +3045,7 @@ void usb_remove_hcd(struct usb_hcd *hcd)
+ 
+ 	dev_dbg(hcd->self.controller, "roothub graceful disconnect\n");
+ 	spin_lock_irq (&hcd_root_hub_lock);
++	rh_registered = hcd->rh_registered;
+ 	hcd->rh_registered = 0;
+ 	spin_unlock_irq (&hcd_root_hub_lock);
+ 
+@@ -3039,7 +3055,8 @@ void usb_remove_hcd(struct usb_hcd *hcd)
+ 	cancel_work_sync(&hcd->died_work);
+ 
+ 	mutex_lock(&usb_bus_idr_lock);
+-	usb_disconnect(&rhdev);		/* Sets rhdev to NULL */
++	if (rh_registered)
++		usb_disconnect(&rhdev);		/* Sets rhdev to NULL */
+ 	mutex_unlock(&usb_bus_idr_lock);
+ 
+ 	/*
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 97b44a68668a5..f99a65a64588f 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -510,6 +510,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* DJI CineSSD */
+ 	{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
+ 
++	/* DELL USB GEN2 */
++	{ USB_DEVICE(0x413c, 0xb062), .driver_info = USB_QUIRK_NO_LPM | USB_QUIRK_RESET_RESUME },
++
+ 	/* VCOM device */
+ 	{ USB_DEVICE(0x4296, 0x7570), .driver_info = USB_QUIRK_CONFIG_INTF_STRINGS },
+ 
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 0b9c2493844a8..026fc360cc506 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3380,14 +3380,14 @@ static bool dwc3_gadget_endpoint_trbs_complete(struct dwc3_ep *dep,
+ 	struct dwc3		*dwc = dep->dwc;
+ 	bool			no_started_trb = true;
+ 
+-	if (!dep->endpoint.desc)
+-		return no_started_trb;
+-
+ 	dwc3_gadget_ep_cleanup_completed_requests(dep, event, status);
+ 
+ 	if (dep->flags & DWC3_EP_END_TRANSFER_PENDING)
+ 		goto out;
+ 
++	if (!dep->endpoint.desc)
++		return no_started_trb;
++
+ 	if (usb_endpoint_xfer_isoc(dep->endpoint.desc) &&
+ 		list_empty(&dep->started_list) &&
+ 		(list_empty(&dep->pending_list) || status == -EXDEV))
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index d7e0e6ebf0800..d57c5ff5ae1f4 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -59,6 +59,7 @@
+ #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI		0x9a13
+ #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI		0x1138
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI		0x461e
++#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI		0x464e
+ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI	0x51ed
+ 
+ #define PCI_DEVICE_ID_AMD_RENOIR_XHCI			0x1639
+@@ -268,6 +269,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI))
+ 		xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+ 
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 25b87e99b4dd4..2be38d9de8df4 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -696,6 +696,8 @@ int xhci_run(struct usb_hcd *hcd)
+ 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+ 			"Finished xhci_run for USB2 roothub");
+ 
++	set_bit(HCD_FLAG_DEFER_RH_REGISTER, &hcd->flags);
++
+ 	xhci_create_dbc_dev(xhci);
+ 
+ 	xhci_debugfs_init(xhci);
+diff --git a/drivers/usb/isp1760/isp1760-core.c b/drivers/usb/isp1760/isp1760-core.c
+index d1d9a7d5da175..af88f4fe00d27 100644
+--- a/drivers/usb/isp1760/isp1760-core.c
++++ b/drivers/usb/isp1760/isp1760-core.c
+@@ -251,6 +251,8 @@ static const struct reg_field isp1760_hc_reg_fields[] = {
+ 	[HW_DM_PULLDOWN]	= REG_FIELD(ISP176x_HC_OTG_CTRL, 2, 2),
+ 	[HW_DP_PULLDOWN]	= REG_FIELD(ISP176x_HC_OTG_CTRL, 1, 1),
+ 	[HW_DP_PULLUP]		= REG_FIELD(ISP176x_HC_OTG_CTRL, 0, 0),
++	/* Make sure the array is sized properly during compilation */
++	[HC_FIELD_MAX]		= {},
+ };
+ 
+ static const struct reg_field isp1763_hc_reg_fields[] = {
+@@ -321,6 +323,8 @@ static const struct reg_field isp1763_hc_reg_fields[] = {
+ 	[HW_DM_PULLDOWN_CLEAR]	= REG_FIELD(ISP1763_HC_OTG_CTRL_CLEAR, 2, 2),
+ 	[HW_DP_PULLDOWN_CLEAR]	= REG_FIELD(ISP1763_HC_OTG_CTRL_CLEAR, 1, 1),
+ 	[HW_DP_PULLUP_CLEAR]	= REG_FIELD(ISP1763_HC_OTG_CTRL_CLEAR, 0, 0),
++	/* Make sure the array is sized properly during compilation */
++	[HC_FIELD_MAX]		= {},
+ };
+ 
+ static const struct regmap_range isp1763_hc_volatile_ranges[] = {
+@@ -405,6 +409,8 @@ static const struct reg_field isp1761_dc_reg_fields[] = {
+ 	[DC_CHIP_ID_HIGH]	= REG_FIELD(ISP176x_DC_CHIPID, 16, 31),
+ 	[DC_CHIP_ID_LOW]	= REG_FIELD(ISP176x_DC_CHIPID, 0, 15),
+ 	[DC_SCRATCH]		= REG_FIELD(ISP176x_DC_SCRATCH, 0, 15),
++	/* Make sure the array is sized properly during compilation */
++	[DC_FIELD_MAX]		= {},
+ };
+ 
+ static const struct regmap_range isp1763_dc_volatile_ranges[] = {
+@@ -458,6 +464,8 @@ static const struct reg_field isp1763_dc_reg_fields[] = {
+ 	[DC_CHIP_ID_HIGH]	= REG_FIELD(ISP1763_DC_CHIPID_HIGH, 0, 15),
+ 	[DC_CHIP_ID_LOW]	= REG_FIELD(ISP1763_DC_CHIPID_LOW, 0, 15),
+ 	[DC_SCRATCH]		= REG_FIELD(ISP1763_DC_SCRATCH, 0, 15),
++	/* Make sure the array is sized properly during compilation */
++	[DC_FIELD_MAX]		= {},
+ };
+ 
+ static const struct regmap_config isp1763_dc_regmap_conf = {
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 152ad882657d7..e60425bbf5376 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1137,6 +1137,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0620, 0xff, 0xff, 0x30) },	/* EM160R-GL */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0620, 0xff, 0, 0) },
++	{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, 0x0700, 0xff), /* BG95 */
++	  .driver_info = RSVD(3) | ZLP },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x30) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10),
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 1d878d05a6584..3506c47e1eef0 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -421,6 +421,9 @@ static int pl2303_detect_type(struct usb_serial *serial)
+ 	bcdUSB = le16_to_cpu(desc->bcdUSB);
+ 
+ 	switch (bcdUSB) {
++	case 0x101:
++		/* USB 1.0.1? Let's assume they meant 1.1... */
++		fallthrough;
+ 	case 0x110:
+ 		switch (bcdDevice) {
+ 		case 0x300:
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+index ddbe142af09ae..881f9864c437c 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+@@ -353,11 +353,14 @@ static void vdpasim_set_vq_ready(struct vdpa_device *vdpa, u16 idx, bool ready)
+ {
+ 	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+ 	struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx];
++	bool old_ready;
+ 
+ 	spin_lock(&vdpasim->lock);
++	old_ready = vq->ready;
+ 	vq->ready = ready;
+-	if (vq->ready)
++	if (vq->ready && !old_ready) {
+ 		vdpasim_queue_ready(vdpasim, idx);
++	}
+ 	spin_unlock(&vdpasim->lock);
+ }
+ 
+diff --git a/drivers/video/console/sticon.c b/drivers/video/console/sticon.c
+index 40496e9e9b438..f304163e87e99 100644
+--- a/drivers/video/console/sticon.c
++++ b/drivers/video/console/sticon.c
+@@ -46,6 +46,7 @@
+ #include <linux/slab.h>
+ #include <linux/font.h>
+ #include <linux/crc32.h>
++#include <linux/fb.h>
+ 
+ #include <asm/io.h>
+ 
+@@ -392,7 +393,9 @@ static int __init sticonsole_init(void)
+     for (i = 0; i < MAX_NR_CONSOLES; i++)
+ 	font_data[i] = STI_DEF_FONT;
+ 
+-    pr_info("sticon: Initializing STI text console.\n");
++    pr_info("sticon: Initializing STI text console on %s at [%s]\n",
++	sticon_sti->sti_data->inq_outptr.dev_name,
++	sticon_sti->pa_path);
+     console_lock();
+     err = do_take_over_console(&sti_con, 0, MAX_NR_CONSOLES - 1,
+ 		PAGE0->mem_cons.cl_class != CL_DUPLEX);
+diff --git a/drivers/video/console/sticore.c b/drivers/video/console/sticore.c
+index f869b723494f1..6a947ff96d6eb 100644
+--- a/drivers/video/console/sticore.c
++++ b/drivers/video/console/sticore.c
+@@ -30,10 +30,11 @@
+ #include <asm/pdc.h>
+ #include <asm/cacheflush.h>
+ #include <asm/grfioctl.h>
++#include <asm/fb.h>
+ 
+ #include "../fbdev/sticore.h"
+ 
+-#define STI_DRIVERVERSION "Version 0.9b"
++#define STI_DRIVERVERSION "Version 0.9c"
+ 
+ static struct sti_struct *default_sti __read_mostly;
+ 
+@@ -502,7 +503,7 @@ sti_select_fbfont(struct sti_cooked_rom *cooked_rom, const char *fbfont_name)
+ 	if (!fbfont)
+ 		return NULL;
+ 
+-	pr_info("STI selected %ux%u framebuffer font %s for sticon\n",
++	pr_info("    using %ux%u framebuffer font %s\n",
+ 			fbfont->width, fbfont->height, fbfont->name);
+ 			
+ 	bpc = ((fbfont->width+7)/8) * fbfont->height; 
+@@ -946,6 +947,7 @@ out_err:
+ 
+ static void sticore_check_for_default_sti(struct sti_struct *sti, char *path)
+ {
++	pr_info("    located at [%s]\n", sti->pa_path);
+ 	if (strcmp (path, default_sti_path) == 0)
+ 		default_sti = sti;
+ }
+@@ -957,7 +959,6 @@ static void sticore_check_for_default_sti(struct sti_struct *sti, char *path)
+  */
+ static int __init sticore_pa_init(struct parisc_device *dev)
+ {
+-	char pa_path[21];
+ 	struct sti_struct *sti = NULL;
+ 	int hpa = dev->hpa.start;
+ 
+@@ -970,8 +971,8 @@ static int __init sticore_pa_init(struct parisc_device *dev)
+ 	if (!sti)
+ 		return 1;
+ 
+-	print_pa_hwpath(dev, pa_path);
+-	sticore_check_for_default_sti(sti, pa_path);
++	print_pa_hwpath(dev, sti->pa_path);
++	sticore_check_for_default_sti(sti, sti->pa_path);
+ 	return 0;
+ }
+ 
+@@ -1007,9 +1008,8 @@ static int sticore_pci_init(struct pci_dev *pd, const struct pci_device_id *ent)
+ 
+ 	sti = sti_try_rom_generic(rom_base, fb_base, pd);
+ 	if (sti) {
+-		char pa_path[30];
+-		print_pci_hwpath(pd, pa_path);
+-		sticore_check_for_default_sti(sti, pa_path);
++		print_pci_hwpath(pd, sti->pa_path);
++		sticore_check_for_default_sti(sti, sti->pa_path);
+ 	}
+ 	
+ 	if (!sti) {
+@@ -1127,6 +1127,22 @@ int sti_call(const struct sti_struct *sti, unsigned long func,
+ 	return ret;
+ }
+ 
++/* check if given fb_info is the primary device */
++int fb_is_primary_device(struct fb_info *info)
++{
++	struct sti_struct *sti;
++
++	sti = sti_get_rom(0);
++
++	/* if no built-in graphics card found, allow any fb driver as default */
++	if (!sti)
++		return true;
++
++	/* return true if it's the default built-in framebuffer driver */
++	return (sti->info == info);
++}
++EXPORT_SYMBOL(fb_is_primary_device);
++
+ MODULE_AUTHOR("Philipp Rumpf, Helge Deller, Thomas Bogendoerfer");
+ MODULE_DESCRIPTION("Core STI driver for HP's NGLE series graphics cards in HP PARISC machines");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/video/fbdev/amba-clcd.c b/drivers/video/fbdev/amba-clcd.c
+index 9ec969e136bfd..8080116aea844 100644
+--- a/drivers/video/fbdev/amba-clcd.c
++++ b/drivers/video/fbdev/amba-clcd.c
+@@ -758,12 +758,15 @@ static int clcdfb_of_vram_setup(struct clcd_fb *fb)
+ 		return -ENODEV;
+ 
+ 	fb->fb.screen_base = of_iomap(memory, 0);
+-	if (!fb->fb.screen_base)
++	if (!fb->fb.screen_base) {
++		of_node_put(memory);
+ 		return -ENOMEM;
++	}
+ 
+ 	fb->fb.fix.smem_start = of_translate_address(memory,
+ 			of_get_address(memory, 0, &size, NULL));
+ 	fb->fb.fix.smem_len = size;
++	of_node_put(memory);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
+index 842c66b3e33d0..6aaf6d0abf399 100644
+--- a/drivers/video/fbdev/core/fb_defio.c
++++ b/drivers/video/fbdev/core/fb_defio.c
+@@ -59,7 +59,6 @@ static vm_fault_t fb_deferred_io_fault(struct vm_fault *vmf)
+ 		printk(KERN_ERR "no mapping available\n");
+ 
+ 	BUG_ON(!page->mapping);
+-	INIT_LIST_HEAD(&page->lru);
+ 	page->index = vmf->pgoff;
+ 
+ 	vmf->page = page;
+@@ -213,6 +212,8 @@ static void fb_deferred_io_work(struct work_struct *work)
+ void fb_deferred_io_init(struct fb_info *info)
+ {
+ 	struct fb_deferred_io *fbdefio = info->fbdefio;
++	struct page *page;
++	unsigned int i;
+ 
+ 	BUG_ON(!fbdefio);
+ 	mutex_init(&fbdefio->lock);
+@@ -220,6 +221,12 @@ void fb_deferred_io_init(struct fb_info *info)
+ 	INIT_LIST_HEAD(&fbdefio->pagelist);
+ 	if (fbdefio->delay == 0) /* set a default of 1 s */
+ 		fbdefio->delay = HZ;
++
++	/* initialize all the page lists one time */
++	for (i = 0; i < info->fix.smem_len; i += PAGE_SIZE) {
++		page = fb_deferred_io_page(info, i);
++		INIT_LIST_HEAD(&page->lru);
++	}
+ }
+ EXPORT_SYMBOL_GPL(fb_deferred_io_init);
+ 
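
The fb_defio change above hoists INIT_LIST_HEAD() out of the fault handler
and into fb_deferred_io_init(), which walks the framebuffer once in
PAGE_SIZE strides so every backing page's list head is set up exactly one
time at setup rather than on each fault. A sketch of the stride walk,
assuming 4 KiB pages and a made-up buffer size:

	#include <stdio.h>

	#define PAGE_SIZE 4096u

	int main(void)
	{
		unsigned int smem_len = 3 * PAGE_SIZE + 123; /* made up */
		unsigned int i, pages = 0;

		for (i = 0; i < smem_len; i += PAGE_SIZE)
			pages++; /* driver: INIT_LIST_HEAD(&page->lru) */

		printf("%u pages initialized once\n", pages); /* prints 4 */
		return 0;
	}

Note the walk covers a partial final page too, since the loop condition is
on the byte offset, not on whole pages.
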
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 2fc1b80a26ad9..9a8ae6fa6ecbb 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -3265,6 +3265,9 @@ static void fbcon_register_existing_fbs(struct work_struct *work)
+ 
+ 	console_lock();
+ 
++	deferred_takeover = false;
++	logo_shown = FBCON_LOGO_DONTSHOW;
++
+ 	for_each_registered_fb(i)
+ 		fbcon_fb_registered(registered_fb[i]);
+ 
+@@ -3282,8 +3285,6 @@ static int fbcon_output_notifier(struct notifier_block *nb,
+ 	pr_info("fbcon: Taking over console\n");
+ 
+ 	dummycon_unregister_output_notifier(&fbcon_output_nb);
+-	deferred_takeover = false;
+-	logo_shown = FBCON_LOGO_DONTSHOW;
+ 
+ 	/* We may get called in atomic context */
+ 	schedule_work(&fbcon_deferred_takeover_work);
+diff --git a/drivers/video/fbdev/sticore.h b/drivers/video/fbdev/sticore.h
+index c338f7848ae2b..0ebdd28a0b813 100644
+--- a/drivers/video/fbdev/sticore.h
++++ b/drivers/video/fbdev/sticore.h
+@@ -370,6 +370,9 @@ struct sti_struct {
+ 
+ 	/* pointer to all internal data */
+ 	struct sti_all_data *sti_data;
++
++	/* pa_path of this device */
++	char pa_path[24];
+ };
+ 
+ 
+diff --git a/drivers/video/fbdev/stifb.c b/drivers/video/fbdev/stifb.c
+index bebb2eea6448b..38a861e22c339 100644
+--- a/drivers/video/fbdev/stifb.c
++++ b/drivers/video/fbdev/stifb.c
+@@ -1358,11 +1358,11 @@ static int __init stifb_init_fb(struct sti_struct *sti, int bpp_pref)
+ 		goto out_err3;
+ 	}
+ 
++	/* save for primary gfx device detection & unregister_framebuffer() */
++	sti->info = info;
+ 	if (register_framebuffer(&fb->info) < 0)
+ 		goto out_err4;
+ 
+-	sti->info = info; /* save for unregister_framebuffer() */
+-
+ 	fb_info(&fb->info, "%s %dx%d-%d frame buffer device, %s, id: %04x, mmio: 0x%04lx\n",
+ 		fix->id,
+ 		var->xres, 
+diff --git a/drivers/video/fbdev/vesafb.c b/drivers/video/fbdev/vesafb.c
+index e25e8de5ff672..929d4775cb4bc 100644
+--- a/drivers/video/fbdev/vesafb.c
++++ b/drivers/video/fbdev/vesafb.c
+@@ -490,11 +490,12 @@ static int vesafb_remove(struct platform_device *pdev)
+ {
+ 	struct fb_info *info = platform_get_drvdata(pdev);
+ 
+-	/* vesafb_destroy takes care of info cleanup */
+-	unregister_framebuffer(info);
+ 	if (((struct vesafb_par *)(info->par))->region)
+ 		release_region(0x3c0, 32);
+ 
++	/* vesafb_destroy takes care of info cleanup */
++	unregister_framebuffer(info);
++
+ 	return 0;
+ }
+ 
+diff --git a/fs/afs/misc.c b/fs/afs/misc.c
+index 1d1a8debe4723..933e67fcdab1a 100644
+--- a/fs/afs/misc.c
++++ b/fs/afs/misc.c
+@@ -163,8 +163,11 @@ void afs_prioritise_error(struct afs_error *e, int error, u32 abort_code)
+ 		return;
+ 
+ 	case -ECONNABORTED:
++		error = afs_abort_to_error(abort_code);
++		fallthrough;
++	case -ENETRESET: /* Responded, but we seem to have changed address */
+ 		e->responded = true;
+-		e->error = afs_abort_to_error(abort_code);
++		e->error = error;
+ 		return;
+ 	}
+ }
+diff --git a/fs/afs/rotate.c b/fs/afs/rotate.c
+index 79e1a5f6701be..a840c3588ebbb 100644
+--- a/fs/afs/rotate.c
++++ b/fs/afs/rotate.c
+@@ -292,6 +292,10 @@ bool afs_select_fileserver(struct afs_operation *op)
+ 		op->error = error;
+ 		goto iterate_address;
+ 
++	case -ENETRESET:
++		pr_warn("kAFS: Peer reset %s (op=%x)\n",
++			op->type ? op->type->name : "???", op->debug_id);
++		fallthrough;
+ 	case -ECONNRESET:
+ 		_debug("call reset");
+ 		op->error = error;
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index 23a1a92d64bb5..a5434f3e57c68 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -537,6 +537,8 @@ static void afs_deliver_to_call(struct afs_call *call)
+ 		case -ENODATA:
+ 		case -EBADMSG:
+ 		case -EMSGSIZE:
++		case -ENOMEM:
++		case -EFAULT:
+ 			abort_code = RXGEN_CC_UNMARSHAL;
+ 			if (state != AFS_CALL_CL_AWAIT_REPLY)
+ 				abort_code = RXGEN_SS_UNMARSHAL;
+@@ -544,7 +546,7 @@ static void afs_deliver_to_call(struct afs_call *call)
+ 						abort_code, ret, "KUM");
+ 			goto local_abort;
+ 		default:
+-			abort_code = RX_USER_ABORT;
++			abort_code = RX_CALL_DEAD;
+ 			rxrpc_kernel_abort_call(call->net->socket, call->rxcall,
+ 						abort_code, ret, "KER");
+ 			goto local_abort;
+@@ -836,7 +838,7 @@ void afs_send_empty_reply(struct afs_call *call)
+ 	case -ENOMEM:
+ 		_debug("oom");
+ 		rxrpc_kernel_abort_call(net->socket, call->rxcall,
+-					RX_USER_ABORT, -ENOMEM, "KOO");
++					RXGEN_SS_MARSHAL, -ENOMEM, "KOO");
+ 		fallthrough;
+ 	default:
+ 		_leave(" [error]");
+@@ -878,7 +880,7 @@ void afs_send_simple_reply(struct afs_call *call, const void *buf, size_t len)
+ 	if (n == -ENOMEM) {
+ 		_debug("oom");
+ 		rxrpc_kernel_abort_call(net->socket, call->rxcall,
+-					RX_USER_ABORT, -ENOMEM, "KOO");
++					RXGEN_SS_MARSHAL, -ENOMEM, "KOO");
+ 	}
+ 	_leave(" [error]");
+ }
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index 4763132ca57e7..c1bc52ac7de11 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -636,6 +636,7 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
+ 	case -EKEYEXPIRED:
+ 	case -EKEYREJECTED:
+ 	case -EKEYREVOKED:
++	case -ENETRESET:
+ 		afs_redirty_pages(wbc, mapping, start, len);
+ 		mapping_set_error(mapping, ret);
+ 		break;
+diff --git a/fs/binfmt_flat.c b/fs/binfmt_flat.c
+index 6268981500112..dca0b6875f9c3 100644
+--- a/fs/binfmt_flat.c
++++ b/fs/binfmt_flat.c
+@@ -440,6 +440,30 @@ static void old_reloc(unsigned long rl)
+ 
+ /****************************************************************************/
+ 
++static inline u32 __user *skip_got_header(u32 __user *rp)
++{
++	if (IS_ENABLED(CONFIG_RISCV)) {
++		/*
++		 * RISC-V has a 16 byte GOT PLT header for elf64-riscv
++		 * and 8 byte GOT PLT header for elf32-riscv.
++		 * Skip the whole GOT PLT header, since it is reserved
++		 * for the dynamic linker (ld.so).
++		 */
++		u32 rp_val0, rp_val1;
++
++		if (get_user(rp_val0, rp))
++			return rp;
++		if (get_user(rp_val1, rp + 1))
++			return rp;
++
++		if (rp_val0 == 0xffffffff && rp_val1 == 0xffffffff)
++			rp += 4;
++		else if (rp_val0 == 0xffffffff)
++			rp += 2;
++	}
++	return rp;
++}
++
+ static int load_flat_file(struct linux_binprm *bprm,
+ 		struct lib_info *libinfo, int id, unsigned long *extra_stack)
+ {
+@@ -789,7 +813,8 @@ static int load_flat_file(struct linux_binprm *bprm,
+ 	 * image.
+ 	 */
+ 	if (flags & FLAT_FLAG_GOTPIC) {
+-		for (rp = (u32 __user *)datapos; ; rp++) {
++		rp = skip_got_header((u32 __user *) datapos);
++		for (; ; rp++) {
+ 			u32 addr, rp_val;
+ 			if (get_user(rp_val, rp))
+ 				return -EFAULT;
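
The skip_got_header() helper above identifies the RISC-V GOT PLT header by
its reserved marker words: a 16-byte header (four u32 slots) when the first
two u32s are both 0xffffffff (elf64-riscv), an 8-byte header (two u32
slots) when only the first one is (elf32-riscv). A standalone sketch of the
same test over a plain array; the trailing slot values are made up:

	#include <stdint.h>
	#include <stdio.h>

	/* how many u32 slots to skip before the real GOT entries */
	static unsigned int got_header_slots(const uint32_t *rp)
	{
		if (rp[0] == 0xffffffff && rp[1] == 0xffffffff)
			return 4;	/* elf64-riscv: 16-byte header */
		if (rp[0] == 0xffffffff)
			return 2;	/* elf32-riscv: 8-byte header  */
		return 0;		/* no header present           */
	}

	int main(void)
	{
		uint32_t got64[] = { 0xffffffff, 0xffffffff, 0, 0, 0x1000 };
		uint32_t got32[] = { 0xffffffff, 0, 0x1000 };

		printf("elf64: skip %u\n", got_header_slots(got64)); /* 4 */
		printf("elf32: skip %u\n", got_header_slots(got32)); /* 2 */
		return 0;
	}
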
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 0dd6de9941999..667b7025d5030 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1367,6 +1367,14 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info)
+ 			goto next;
+ 		}
+ 
++		ret = btrfs_zone_finish(block_group);
++		if (ret < 0) {
++			btrfs_dec_block_group_ro(block_group);
++			if (ret == -EAGAIN)
++				ret = 0;
++			goto next;
++		}
++
+ 		/*
+ 		 * Want to do this before we do anything else so we can recover
+ 		 * properly if we fail to join the transaction.
+diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
+index e8308f2ad07d1..19db5693175fe 100644
+--- a/fs/btrfs/block-group.h
++++ b/fs/btrfs/block-group.h
+@@ -212,6 +212,8 @@ struct btrfs_block_group {
+ 	u64 meta_write_pointer;
+ 	struct map_lookup *physical_map;
+ 	struct list_head active_bg_list;
++	struct work_struct zone_finish_work;
++	struct extent_buffer *last_eb;
+ };
+ 
+ static inline u64 btrfs_block_group_end(struct btrfs_block_group *block_group)
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 31c3f592e5875..30d0bbfdb3bca 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3611,7 +3611,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 		~BTRFS_FEATURE_INCOMPAT_SUPP;
+ 	if (features) {
+ 		btrfs_err(fs_info,
+-		    "cannot mount because of unsupported optional features (%llx)",
++		    "cannot mount because of unsupported optional features (0x%llx)",
+ 		    features);
+ 		err = -EINVAL;
+ 		goto fail_alloc;
+@@ -3649,7 +3649,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
+ 		~BTRFS_FEATURE_COMPAT_RO_SUPP;
+ 	if (!sb_rdonly(sb) && features) {
+ 		btrfs_err(fs_info,
+-	"cannot mount read-write because of unsupported optional features (%llx)",
++	"cannot mount read-write because of unsupported optional features (0x%llx)",
+ 		       features);
+ 		err = -EINVAL;
+ 		goto fail_alloc;
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 33c19f51d79b0..a23a42ba88cae 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -3743,8 +3743,12 @@ int btrfs_do_readpage(struct page *page, struct extent_map **em_cached,
+ 					 this_bio_flag,
+ 					 force_bio_submit);
+ 		if (ret) {
+-			unlock_extent(tree, cur, cur + iosize - 1);
+-			end_page_read(page, false, cur, iosize);
++			/*
++			 * We have to unlock the remaining range, or the page
++			 * will never be unlocked.
++			 */
++			unlock_extent(tree, cur, end);
++			end_page_read(page, false, cur, end + 1 - cur);
+ 			goto out;
+ 		}
+ 		cur = cur + iosize;
+@@ -3920,10 +3924,12 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
+ 	u64 extent_offset;
+ 	u64 block_start;
+ 	struct extent_map *em;
++	int saved_ret = 0;
+ 	int ret = 0;
+ 	int nr = 0;
+ 	u32 opf = REQ_OP_WRITE;
+ 	const unsigned int write_flags = wbc_to_write_flags(wbc);
++	bool has_error = false;
+ 	bool compressed;
+ 
+ 	ret = btrfs_writepage_cow_fixup(page);
+@@ -3973,6 +3979,9 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
+ 		if (IS_ERR(em)) {
+ 			btrfs_page_set_error(fs_info, page, cur, end - cur + 1);
+ 			ret = PTR_ERR_OR_ZERO(em);
++			has_error = true;
++			if (!saved_ret)
++				saved_ret = ret;
+ 			break;
+ 		}
+ 
+@@ -4036,6 +4045,10 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
+ 					 end_bio_extent_writepage,
+ 					 0, 0, false);
+ 		if (ret) {
++			has_error = true;
++			if (!saved_ret)
++				saved_ret = ret;
++
+ 			btrfs_page_set_error(fs_info, page, cur, iosize);
+ 			if (PageWriteback(page))
+ 				btrfs_page_clear_writeback(fs_info, page, cur,
+@@ -4049,8 +4062,10 @@ static noinline_for_stack int __extent_writepage_io(struct btrfs_inode *inode,
+ 	 * If we finish without problem, we should not only clear page dirty,
+ 	 * but also empty subpage dirty bits
+ 	 */
+-	if (!ret)
++	if (!has_error)
+ 		btrfs_page_assert_not_dirty(fs_info, page);
++	else
++		ret = saved_ret;
+ 	*nr_ret = nr;
+ 	return ret;
+ }
+@@ -4181,9 +4196,6 @@ void wait_on_extent_buffer_writeback(struct extent_buffer *eb)
+ 
+ static void end_extent_buffer_writeback(struct extent_buffer *eb)
+ {
+-	if (test_bit(EXTENT_BUFFER_ZONE_FINISH, &eb->bflags))
+-		btrfs_zone_finish_endio(eb->fs_info, eb->start, eb->len);
+-
+ 	clear_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags);
+ 	smp_mb__after_atomic();
+ 	wake_up_bit(&eb->bflags, EXTENT_BUFFER_WRITEBACK);
+@@ -4803,8 +4815,7 @@ static int submit_eb_page(struct page *page, struct writeback_control *wbc,
+ 		/*
+ 		 * Implies write in zoned mode. Mark the last eb in a block group.
+ 		 */
+-		if (cache->seq_zone && eb->start + eb->len == cache->zone_capacity)
+-			set_bit(EXTENT_BUFFER_ZONE_FINISH, &eb->bflags);
++		btrfs_schedule_zone_finish_bg(cache, eb);
+ 		btrfs_put_block_group(cache);
+ 	}
+ 	ret = write_one_eb(eb, wbc, epd);
+diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
+index 151e9da5da2dc..c37a3e5f5eb98 100644
+--- a/fs/btrfs/extent_io.h
++++ b/fs/btrfs/extent_io.h
+@@ -32,7 +32,6 @@ enum {
+ 	/* write IO error */
+ 	EXTENT_BUFFER_WRITE_ERR,
+ 	EXTENT_BUFFER_NO_CHECK,
+-	EXTENT_BUFFER_ZONE_FINISH,
+ };
+ 
+ /* these are flags for __process_pages_contig */
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 95c499b8424e7..861b9748a0d9f 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -64,6 +64,8 @@ struct btrfs_iget_args {
+ struct btrfs_dio_data {
+ 	ssize_t submitted;
+ 	struct extent_changeset *data_reserved;
++	bool data_space_reserved;
++	bool nocow_done;
+ };
+ 
+ struct btrfs_rename_ctx {
+@@ -7489,15 +7491,25 @@ static int btrfs_get_blocks_direct_write(struct extent_map **map,
+ 			ret = PTR_ERR(em2);
+ 			goto out;
+ 		}
++
++		dio_data->nocow_done = true;
+ 	} else {
+ 		/* Our caller expects us to free the input extent map. */
+ 		free_extent_map(em);
+ 		*map = NULL;
+ 
+-		/* We have to COW, so need to reserve metadata and data space. */
+-		ret = btrfs_delalloc_reserve_space(BTRFS_I(inode),
+-						   &dio_data->data_reserved,
+-						   start, len);
++		/*
++		 * If we could not allocate data space before locking the file
++		 * range and we can't do a NOCOW write, then we have to fail.
++		 */
++		if (!dio_data->data_space_reserved)
++			return -ENOSPC;
++
++		/*
++		 * We have to COW and we have already reserved data space before,
++		 * so now we reserve only metadata.
++		 */
++		ret = btrfs_delalloc_reserve_metadata(BTRFS_I(inode), len, len);
+ 		if (ret < 0)
+ 			goto out;
+ 		space_reserved = true;
+@@ -7510,10 +7522,8 @@ static int btrfs_get_blocks_direct_write(struct extent_map **map,
+ 		*map = em;
+ 		len = min(len, em->len - (start - em->start));
+ 		if (len < prev_len)
+-			btrfs_delalloc_release_space(BTRFS_I(inode),
+-						     dio_data->data_reserved,
+-						     start + len, prev_len - len,
+-						     true);
++			btrfs_delalloc_release_metadata(BTRFS_I(inode),
++							prev_len - len, true);
+ 	}
+ 
+ 	/*
+@@ -7531,15 +7541,7 @@ static int btrfs_get_blocks_direct_write(struct extent_map **map,
+ out:
+ 	if (ret && space_reserved) {
+ 		btrfs_delalloc_release_extents(BTRFS_I(inode), len);
+-		if (can_nocow) {
+-			btrfs_delalloc_release_metadata(BTRFS_I(inode), len, true);
+-		} else {
+-			btrfs_delalloc_release_space(BTRFS_I(inode),
+-						     dio_data->data_reserved,
+-						     start, len, true);
+-			extent_changeset_free(dio_data->data_reserved);
+-			dio_data->data_reserved = NULL;
+-		}
++		btrfs_delalloc_release_metadata(BTRFS_I(inode), len, true);
+ 	}
+ 	return ret;
+ }
+@@ -7556,6 +7558,7 @@ static int btrfs_dio_iomap_begin(struct inode *inode, loff_t start,
+ 	const bool write = !!(flags & IOMAP_WRITE);
+ 	int ret = 0;
+ 	u64 len = length;
++	const u64 data_alloc_len = length;
+ 	bool unlock_extents = false;
+ 
+ 	if (!write)
+@@ -7584,6 +7587,25 @@ static int btrfs_dio_iomap_begin(struct inode *inode, loff_t start,
+ 
+ 	iomap->private = dio_data;
+ 
++	/*
++	 * We always try to allocate data space and must do it before locking
++	 * the file range, to avoid deadlocks with concurrent writes to the same
++	 * range if the range has several extents and the writes don't expand the
++	 * current i_size (the inode lock is taken in shared mode). If we fail to
++	 * allocate data space here we continue and later, after locking the
++	 * file range, we fail with ENOSPC only if we figure out we cannot do a
++	 * NOCOW write.
++	 */
++	if (write && !(flags & IOMAP_NOWAIT)) {
++		ret = btrfs_check_data_free_space(BTRFS_I(inode),
++						  &dio_data->data_reserved,
++						  start, data_alloc_len);
++		if (!ret)
++			dio_data->data_space_reserved = true;
++		else if (ret && !(BTRFS_I(inode)->flags &
++				  (BTRFS_INODE_NODATACOW | BTRFS_INODE_PREALLOC)))
++			goto err;
++	}
+ 
+ 	/*
+ 	 * If this errors out it's because we couldn't invalidate pagecache for
+@@ -7658,6 +7680,24 @@ static int btrfs_dio_iomap_begin(struct inode *inode, loff_t start,
+ 		unlock_extents = true;
+ 		/* Recalc len in case the new em is smaller than requested */
+ 		len = min(len, em->len - (start - em->start));
++		if (dio_data->data_space_reserved) {
++			u64 release_offset;
++			u64 release_len = 0;
++
++			if (dio_data->nocow_done) {
++				release_offset = start;
++				release_len = data_alloc_len;
++			} else if (len < data_alloc_len) {
++				release_offset = start + len;
++				release_len = data_alloc_len - len;
++			}
++
++			if (release_len > 0)
++				btrfs_free_reserved_data_space(BTRFS_I(inode),
++							       dio_data->data_reserved,
++							       release_offset,
++							       release_len);
++		}
+ 	} else {
+ 		/*
+ 		 * We need to unlock only the end area that we aren't using.
+@@ -7702,6 +7742,13 @@ unlock_err:
+ 	unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart, lockend,
+ 			     &cached_state);
+ err:
++	if (dio_data->data_space_reserved) {
++		btrfs_free_reserved_data_space(BTRFS_I(inode),
++					       dio_data->data_reserved,
++					       start, data_alloc_len);
++		extent_changeset_free(dio_data->data_reserved);
++	}
++
+ 	kfree(dio_data);
+ 
+ 	return ret;
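
The direct-IO rework above follows a reserve-early, release-late shape:
data space is reserved optimistically before the extent range is locked
(keeping the lock ordering deadlock-free), and whatever turns out to be
unneeded is handed back afterwards -- everything for a NOCOW write, or just
the tail when the mapped extent is shorter than requested. A standalone
sketch of that release computation (names and numbers illustrative):

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	static void unused_reservation(uint64_t start, uint64_t len,
				       uint64_t alloc_len, bool nocow_done,
				       uint64_t *off, uint64_t *rel)
	{
		*off = 0;
		*rel = 0;
		if (nocow_done) {		/* NOCOW: none of it needed */
			*off = start;
			*rel = alloc_len;
		} else if (len < alloc_len) {	/* COW of a shorter extent  */
			*off = start + len;
			*rel = alloc_len - len;
		}
	}

	int main(void)
	{
		uint64_t off, rel;

		unused_reservation(0, 4096, 16384, false, &off, &rel);
		printf("release %llu bytes at offset %llu\n",
		       (unsigned long long)rel, (unsigned long long)off);
		return 0;
	}

This prints "release 12288 bytes at offset 4096": only the first 4 KiB of
the 16 KiB optimistic reservation is kept.
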
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index be6c24577dbe0..777801902511e 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -561,7 +561,7 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ 	struct timespec64 cur_time = current_time(dir);
+ 	struct inode *inode;
+ 	int ret;
+-	dev_t anon_dev = 0;
++	dev_t anon_dev;
+ 	u64 objectid;
+ 	u64 index = 0;
+ 
+@@ -571,11 +571,7 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ 
+ 	ret = btrfs_get_free_objectid(fs_info->tree_root, &objectid);
+ 	if (ret)
+-		goto fail_free;
+-
+-	ret = get_anon_bdev(&anon_dev);
+-	if (ret < 0)
+-		goto fail_free;
++		goto out_root_item;
+ 
+ 	/*
+ 	 * Don't create subvolume whose level is not zero. Or qgroup will be
+@@ -583,9 +579,13 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ 	 */
+ 	if (btrfs_qgroup_level(objectid)) {
+ 		ret = -ENOSPC;
+-		goto fail_free;
++		goto out_root_item;
+ 	}
+ 
++	ret = get_anon_bdev(&anon_dev);
++	if (ret < 0)
++		goto out_root_item;
++
+ 	btrfs_init_block_rsv(&block_rsv, BTRFS_BLOCK_RSV_TEMP);
+ 	/*
+ 	 * The same as the snapshot creation, please see the comment
+@@ -593,26 +593,26 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ 	 */
+ 	ret = btrfs_subvolume_reserve_metadata(root, &block_rsv, 8, false);
+ 	if (ret)
+-		goto fail_free;
++		goto out_anon_dev;
+ 
+ 	trans = btrfs_start_transaction(root, 0);
+ 	if (IS_ERR(trans)) {
+ 		ret = PTR_ERR(trans);
+ 		btrfs_subvolume_release_metadata(root, &block_rsv);
+-		goto fail_free;
++		goto out_anon_dev;
+ 	}
+ 	trans->block_rsv = &block_rsv;
+ 	trans->bytes_reserved = block_rsv.size;
+ 
+ 	ret = btrfs_qgroup_inherit(trans, 0, objectid, inherit);
+ 	if (ret)
+-		goto fail;
++		goto out;
+ 
+ 	leaf = btrfs_alloc_tree_block(trans, root, 0, objectid, NULL, 0, 0, 0,
+ 				      BTRFS_NESTING_NORMAL);
+ 	if (IS_ERR(leaf)) {
+ 		ret = PTR_ERR(leaf);
+-		goto fail;
++		goto out;
+ 	}
+ 
+ 	btrfs_mark_buffer_dirty(leaf);
+@@ -667,7 +667,7 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ 		btrfs_tree_unlock(leaf);
+ 		btrfs_free_tree_block(trans, objectid, leaf, 0, 1);
+ 		free_extent_buffer(leaf);
+-		goto fail;
++		goto out;
+ 	}
+ 
+ 	free_extent_buffer(leaf);
+@@ -676,19 +676,18 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ 	key.offset = (u64)-1;
+ 	new_root = btrfs_get_new_fs_root(fs_info, objectid, anon_dev);
+ 	if (IS_ERR(new_root)) {
+-		free_anon_bdev(anon_dev);
+ 		ret = PTR_ERR(new_root);
+ 		btrfs_abort_transaction(trans, ret);
+-		goto fail;
++		goto out;
+ 	}
+-	/* Freeing will be done in btrfs_put_root() of new_root */
++	/* anon_dev is owned by new_root now. */
+ 	anon_dev = 0;
+ 
+ 	ret = btrfs_record_root_in_trans(trans, new_root);
+ 	if (ret) {
+ 		btrfs_put_root(new_root);
+ 		btrfs_abort_transaction(trans, ret);
+-		goto fail;
++		goto out;
+ 	}
+ 
+ 	ret = btrfs_create_subvol_root(trans, new_root, root, mnt_userns);
+@@ -696,7 +695,7 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ 	if (ret) {
+ 		/* We potentially lose an unused inode item here */
+ 		btrfs_abort_transaction(trans, ret);
+-		goto fail;
++		goto out;
+ 	}
+ 
+ 	/*
+@@ -705,28 +704,28 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ 	ret = btrfs_set_inode_index(BTRFS_I(dir), &index);
+ 	if (ret) {
+ 		btrfs_abort_transaction(trans, ret);
+-		goto fail;
++		goto out;
+ 	}
+ 
+ 	ret = btrfs_insert_dir_item(trans, name, namelen, BTRFS_I(dir), &key,
+ 				    BTRFS_FT_DIR, index);
+ 	if (ret) {
+ 		btrfs_abort_transaction(trans, ret);
+-		goto fail;
++		goto out;
+ 	}
+ 
+ 	btrfs_i_size_write(BTRFS_I(dir), dir->i_size + namelen * 2);
+ 	ret = btrfs_update_inode(trans, root, BTRFS_I(dir));
+ 	if (ret) {
+ 		btrfs_abort_transaction(trans, ret);
+-		goto fail;
++		goto out;
+ 	}
+ 
+ 	ret = btrfs_add_root_ref(trans, objectid, root->root_key.objectid,
+ 				 btrfs_ino(BTRFS_I(dir)), index, name, namelen);
+ 	if (ret) {
+ 		btrfs_abort_transaction(trans, ret);
+-		goto fail;
++		goto out;
+ 	}
+ 
+ 	ret = btrfs_uuid_tree_add(trans, root_item->uuid,
+@@ -734,8 +733,7 @@ static noinline int create_subvol(struct user_namespace *mnt_userns,
+ 	if (ret)
+ 		btrfs_abort_transaction(trans, ret);
+ 
+-fail:
+-	kfree(root_item);
++out:
+ 	trans->block_rsv = NULL;
+ 	trans->bytes_reserved = 0;
+ 	btrfs_subvolume_release_metadata(root, &block_rsv);
+@@ -751,11 +749,10 @@ fail:
+ 			return PTR_ERR(inode);
+ 		d_instantiate(dentry, inode);
+ 	}
+-	return ret;
+-
+-fail_free:
++out_anon_dev:
+ 	if (anon_dev)
+ 		free_anon_bdev(anon_dev);
++out_root_item:
+ 	kfree(root_item);
+ 	return ret;
+ }
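
The create_subvol() cleanup above is the staged-unwind idiom: error labels
are ordered so each frees exactly what was acquired before the failing
step, and an ownership transfer (anon_dev = 0 once new_root holds it)
removes a resource from the unwind path. A generic standalone sketch of the
pattern; the two allocations merely stand in for root_item and anon_dev:

	#include <stdio.h>
	#include <stdlib.h>

	static char *owner;	/* whatever takes over b on success */

	static int do_setup(void)
	{
		char *a = NULL, *b = NULL;
		int ret = -1;

		a = malloc(16);
		if (!a)
			goto out;
		b = malloc(16);
		if (!b)
			goto out_a;

		owner = b;	/* transferred, like anon_dev = 0 ...   */
		b = NULL;	/* ... so the unwind must not free it   */
		ret = 0;
	out_a:
		free(b);	/* NULL on success: harmless no-op      */
	out:
		free(a);	/* temporary on both paths              */
		return ret;
	}

	int main(void)
	{
		int ret = do_setup();

		free(owner);
		return ret ? 1 : 0;
	}
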
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index a8cc736731fdb..659575526e9fe 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -7671,12 +7671,12 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info)
+ 	 * do another round of validation checks.
+ 	 */
+ 	if (total_dev != fs_info->fs_devices->total_devices) {
+-		btrfs_err(fs_info,
+-	   "super_num_devices %llu mismatch with num_devices %llu found here",
++		btrfs_warn(fs_info,
++"super block num_devices %llu mismatch with DEV_ITEM count %llu, will be repaired on next transaction commit",
+ 			  btrfs_super_num_devices(fs_info->super_copy),
+ 			  total_dev);
+-		ret = -EINVAL;
+-		goto error;
++		fs_info->fs_devices->total_devices = total_dev;
++		btrfs_set_super_num_devices(fs_info->super_copy, total_dev);
+ 	}
+ 	if (btrfs_super_total_bytes(fs_info->super_copy) <
+ 	    fs_info->fs_devices->total_rw_bytes) {
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index d31b0eda210f1..b5236f5afca7c 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -1896,7 +1896,7 @@ int btrfs_zone_finish(struct btrfs_block_group *block_group)
+ 	/* Check if we have unwritten allocated space */
+ 	if ((block_group->flags &
+ 	     (BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_SYSTEM)) &&
+-	    block_group->alloc_offset > block_group->meta_write_pointer) {
++	    block_group->start + block_group->alloc_offset > block_group->meta_write_pointer) {
+ 		spin_unlock(&block_group->lock);
+ 		return -EAGAIN;
+ 	}
+@@ -2000,6 +2000,7 @@ void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical, u64 len
+ 	struct btrfs_block_group *block_group;
+ 	struct map_lookup *map;
+ 	struct btrfs_device *device;
++	u64 min_alloc_bytes;
+ 	u64 physical;
+ 
+ 	if (!btrfs_is_zoned(fs_info))
+@@ -2008,7 +2009,15 @@ void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical, u64 len
+ 	block_group = btrfs_lookup_block_group(fs_info, logical);
+ 	ASSERT(block_group);
+ 
+-	if (logical + length < block_group->start + block_group->zone_capacity)
++	/* No MIXED_BG on zoned btrfs. */
++	if (block_group->flags & BTRFS_BLOCK_GROUP_DATA)
++		min_alloc_bytes = fs_info->sectorsize;
++	else
++		min_alloc_bytes = fs_info->nodesize;
++
++	/* Bail out if we can allocate more data from this block group. */
++	if (logical + length + min_alloc_bytes <=
++	    block_group->start + block_group->zone_capacity)
+ 		goto out;
+ 
+ 	spin_lock(&block_group->lock);
+@@ -2046,6 +2055,37 @@ out:
+ 	btrfs_put_block_group(block_group);
+ }
+ 
++static void btrfs_zone_finish_endio_workfn(struct work_struct *work)
++{
++	struct btrfs_block_group *bg =
++		container_of(work, struct btrfs_block_group, zone_finish_work);
++
++	wait_on_extent_buffer_writeback(bg->last_eb);
++	free_extent_buffer(bg->last_eb);
++	btrfs_zone_finish_endio(bg->fs_info, bg->start, bg->length);
++	btrfs_put_block_group(bg);
++}
++
++void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
++				   struct extent_buffer *eb)
++{
++	if (!bg->seq_zone || eb->start + eb->len * 2 <= bg->start + bg->zone_capacity)
++		return;
++
++	if (WARN_ON(bg->zone_finish_work.func == btrfs_zone_finish_endio_workfn)) {
++		btrfs_err(bg->fs_info, "double scheduling of bg %llu zone finishing",
++			  bg->start);
++		return;
++	}
++
++	/* For the work */
++	btrfs_get_block_group(bg);
++	atomic_inc(&eb->refs);
++	bg->last_eb = eb;
++	INIT_WORK(&bg->zone_finish_work, btrfs_zone_finish_endio_workfn);
++	queue_work(system_unbound_wq, &bg->zone_finish_work);
++}
++
+ void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg)
+ {
+ 	struct btrfs_fs_info *fs_info = bg->fs_info;
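
The reworked predicate in btrfs_zone_finish_endio() above finishes a zone
as soon as not even one more minimum-sized allocation (sectorsize for data,
nodesize for metadata) fits behind the just-written range; as the hunk
shows, the old test only fired when the write reached the very end of the
zone capacity, leaving zones with an unusably small tail active. A
standalone sketch of the check, with made-up sizes:

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	static bool can_still_allocate(uint64_t logical, uint64_t length,
				       uint64_t min_alloc,
				       uint64_t bg_start, uint64_t capacity)
	{
		return logical + length + min_alloc <= bg_start + capacity;
	}

	int main(void)
	{
		/* illustrative: 256 MiB zone, 16 KiB nodesize */
		uint64_t cap = 256ull << 20, node = 16 << 10;

		/* exactly one nodesize left: still usable */
		printf("%d\n", can_still_allocate(0, cap - node, node, 0, cap));
		/* one byte less than a nodesize left: finish the zone */
		printf("%d\n", can_still_allocate(0, cap - node + 1, node, 0, cap));
		return 0;
	}
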
+diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
+index 6dee76248cb4d..2d898970aec5f 100644
+--- a/fs/btrfs/zoned.h
++++ b/fs/btrfs/zoned.h
+@@ -76,6 +76,8 @@ int btrfs_zone_finish(struct btrfs_block_group *block_group);
+ bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags);
+ void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info, u64 logical,
+ 			     u64 length);
++void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
++				   struct extent_buffer *eb);
+ void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg);
+ void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info);
+ #else /* CONFIG_BLK_DEV_ZONED */
+@@ -233,6 +235,9 @@ static inline bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices,
+ static inline void btrfs_zone_finish_endio(struct btrfs_fs_info *fs_info,
+ 					   u64 logical, u64 length) { }
+ 
++static inline void btrfs_schedule_zone_finish_bg(struct btrfs_block_group *bg,
++						 struct extent_buffer *eb) { }
++
+ static inline void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg) { }
+ 
+ static inline void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info) { }
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 00c3de177dd66..1bd3e1bb0fdf0 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -3375,13 +3375,17 @@ static void handle_session(struct ceph_mds_session *session,
+ 	}
+ 
+ 	if (msg_version >= 5) {
+-		u32 flags;
+-		/* version >= 4, struct_v, struct_cv, len, metric_spec */
+-	        ceph_decode_skip_n(&p, end, 2 + sizeof(u32) * 2, bad);
++		u32 flags, len;
++
++		/* version >= 4 */
++		ceph_decode_skip_16(&p, end, bad); /* struct_v, struct_cv */
++		ceph_decode_32_safe(&p, end, len, bad); /* len */
++		ceph_decode_skip_n(&p, end, len, bad); /* metric_spec */
++
+ 		/* version >= 5, flags   */
+-                ceph_decode_32_safe(&p, end, flags, bad);
++		ceph_decode_32_safe(&p, end, flags, bad);
+ 		if (flags & CEPH_SESSION_BLOCKLISTED) {
+-		        pr_warn("mds%d session blocklisted\n", session->s_mds);
++			pr_warn("mds%d session blocklisted\n", session->s_mds);
+ 			blocklisted = true;
+ 		}
+ 	}
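
The ceph decode fix above replaces a hard-coded skip with proper
length-prefixed parsing: read the u32 length of metric_spec, then skip
exactly that many bytes, so the flags field that follows is located
correctly however large the spec grows. A minimal sketch of the pattern
over a flat byte buffer (bounds checks omitted, layout values made up,
little-endian host assumed):

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		/* struct_v, struct_cv, le32 len, <len> bytes, le32 flags */
		uint8_t buf[] = { 1, 1, 3, 0, 0, 0,
				  0xaa, 0xbb, 0xcc, 5, 0, 0, 0 };
		const uint8_t *p = buf;
		uint32_t len, flags;

		p += 2;			/* skip struct_v, struct_cv */
		memcpy(&len, p, 4);	/* decode the length prefix */
		p += 4 + len;		/* skip prefix plus payload */
		memcpy(&flags, p, 4);

		printf("metric_spec len=%u flags=%u\n", len, flags); /* 3, 5 */
		return 0;
	}
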
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 2b1a1c029c75e..6d150bb87aaf8 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -836,7 +836,7 @@ cifs_smb3_do_mount(struct file_system_type *fs_type,
+ 	      int flags, struct smb3_fs_context *old_ctx)
+ {
+ 	int rc;
+-	struct super_block *sb;
++	struct super_block *sb = NULL;
+ 	struct cifs_sb_info *cifs_sb = NULL;
+ 	struct cifs_mnt_data mnt_data;
+ 	struct dentry *root;
+@@ -932,9 +932,11 @@ out_super:
+ 	return root;
+ out:
+ 	if (cifs_sb) {
+-		kfree(cifs_sb->prepath);
+-		smb3_cleanup_fs_context(cifs_sb->ctx);
+-		kfree(cifs_sb);
++		if (!sb || IS_ERR(sb)) {  /* otherwise kill_sb will handle */
++			kfree(cifs_sb->prepath);
++			smb3_cleanup_fs_context(cifs_sb->ctx);
++			kfree(cifs_sb);
++		}
+ 	}
+ 	return root;
+ }
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 8de977c359b11..5024b6792dab6 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -944,7 +944,7 @@ struct cifs_ses {
+ 				   and after mount option parsing we fill it */
+ 	char *domainName;
+ 	char *password;
+-	char *workstation_name;
++	char workstation_name[CIFS_MAX_WORKSTATION_LEN];
+ 	struct session_key auth_key;
+ 	struct ntlmssp_auth *ntlmssp; /* ciphertext, flags, server challenge */
+ 	enum securityEnum sectype; /* what security flavor was specified? */
+@@ -1979,4 +1979,17 @@ static inline bool cifs_is_referral_server(struct cifs_tcon *tcon,
+ 	return is_tcon_dfs(tcon) || (ref && (ref->flags & DFSREF_REFERRAL_SERVER));
+ }
+ 
++static inline size_t ntlmssp_workstation_name_size(const struct cifs_ses *ses)
++{
++	if (WARN_ON_ONCE(!ses || !ses->server))
++		return 0;
++	/*
++	 * Make workstation name no more than 15 chars when using insecure dialects as some legacy
++	 * servers do require it during NTLMSSP.
++	 */
++	if (ses->server->dialect <= SMB20_PROT_ID)
++		return min_t(size_t, sizeof(ses->workstation_name), RFC1001_NAME_LEN_WITH_NULL);
++	return sizeof(ses->workstation_name);
++}
++
+ #endif	/* _CIFS_GLOB_H */
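
ntlmssp_workstation_name_size() above keeps the 15-character NetBIOS-era
limit only for the legacy dialects that require it during NTLMSSP, while
newer dialects may use the whole fixed buffer. A standalone sketch of the
clamp; the constants here are illustrative stand-ins, not the CIFS values:

	#include <stddef.h>
	#include <stdio.h>

	#define WORKSTATION_BUF_LEN 65	    /* stand-in buffer size        */
	#define RFC1001_NAME_LEN    16	    /* 15 chars + NUL, per RFC1001 */
	#define OLD_DIALECT_MAX     0x0202  /* stand-in for SMB20_PROT_ID  */

	static size_t workstation_name_size(unsigned int dialect)
	{
		if (dialect <= OLD_DIALECT_MAX)
			return RFC1001_NAME_LEN < WORKSTATION_BUF_LEN ?
			       RFC1001_NAME_LEN : WORKSTATION_BUF_LEN;
		return WORKSTATION_BUF_LEN;
	}

	int main(void)
	{
		printf("old dialect: %zu\n", workstation_name_size(0x0202));
		printf("new dialect: %zu\n", workstation_name_size(0x0210));
		return 0;
	}
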
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 42e14f408856d..aa2d4c49e2a5b 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -2037,18 +2037,7 @@ cifs_set_cifscreds(struct smb3_fs_context *ctx, struct cifs_ses *ses)
+ 		}
+ 	}
+ 
+-	ctx->workstation_name = kstrdup(ses->workstation_name, GFP_KERNEL);
+-	if (!ctx->workstation_name) {
+-		cifs_dbg(FYI, "Unable to allocate memory for workstation_name\n");
+-		rc = -ENOMEM;
+-		kfree(ctx->username);
+-		ctx->username = NULL;
+-		kfree_sensitive(ctx->password);
+-		ctx->password = NULL;
+-		kfree(ctx->domainname);
+-		ctx->domainname = NULL;
+-		goto out_key_put;
+-	}
++	strscpy(ctx->workstation_name, ses->workstation_name, sizeof(ctx->workstation_name));
+ 
+ out_key_put:
+ 	up_read(&key->sem);
+@@ -2157,12 +2146,9 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ 		if (!ses->domainName)
+ 			goto get_ses_fail;
+ 	}
+-	if (ctx->workstation_name) {
+-		ses->workstation_name = kstrdup(ctx->workstation_name,
+-						GFP_KERNEL);
+-		if (!ses->workstation_name)
+-			goto get_ses_fail;
+-	}
++
++	strscpy(ses->workstation_name, ctx->workstation_name, sizeof(ses->workstation_name));
++
+ 	if (ctx->domainauto)
+ 		ses->domainAuto = ctx->domainauto;
+ 	ses->cred_uid = ctx->cred_uid;
+@@ -3420,8 +3406,9 @@ cifs_are_all_path_components_accessible(struct TCP_Server_Info *server,
+ }
+ 
+ /*
+- * Check if path is remote (e.g. a DFS share). Return -EREMOTE if it is,
+- * otherwise 0.
++ * Check if path is remote (i.e. a DFS share).
++ *
++ * Return -EREMOTE if it is, otherwise 0 or -errno.
+  */
+ static int is_path_remote(struct mount_ctx *mnt_ctx)
+ {
+@@ -3432,6 +3419,7 @@ static int is_path_remote(struct mount_ctx *mnt_ctx)
+ 	struct cifs_tcon *tcon = mnt_ctx->tcon;
+ 	struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
+ 	char *full_path;
++	bool nodfs = cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS;
+ 
+ 	if (!server->ops->is_path_accessible)
+ 		return -EOPNOTSUPP;
+@@ -3449,14 +3437,20 @@ static int is_path_remote(struct mount_ctx *mnt_ctx)
+ 	rc = server->ops->is_path_accessible(xid, tcon, cifs_sb,
+ 					     full_path);
+ #ifdef CONFIG_CIFS_DFS_UPCALL
++	if (nodfs) {
++		if (rc == -EREMOTE)
++			rc = -EOPNOTSUPP;
++		goto out;
++	}
++
++	/* path *might* exist with non-ASCII characters in DFS root;
++	 * try again with full path (only if nodfs is not set) */
+ 	if (rc == -ENOENT && is_tcon_dfs(tcon))
+ 		rc = cifs_dfs_query_info_nonascii_quirk(xid, tcon, cifs_sb,
+ 							full_path);
+ #endif
+-	if (rc != 0 && rc != -EREMOTE) {
+-		kfree(full_path);
+-		return rc;
+-	}
++	if (rc != 0 && rc != -EREMOTE)
++		goto out;
+ 
+ 	if (rc != -EREMOTE) {
+ 		rc = cifs_are_all_path_components_accessible(server, xid, tcon,
+@@ -3468,6 +3462,7 @@ static int is_path_remote(struct mount_ctx *mnt_ctx)
+ 		}
+ 	}
+ 
++out:
+ 	kfree(full_path);
+ 	return rc;
+ }
+@@ -3703,6 +3698,7 @@ int cifs_mount(struct cifs_sb_info *cifs_sb, struct smb3_fs_context *ctx)
+ 	if (!isdfs)
+ 		goto out;
+ 
++	/* proceed as DFS mount */
+ 	uuid_gen(&mnt_ctx.mount_id);
+ 	rc = connect_dfs_root(&mnt_ctx, &tl);
+ 	dfs_cache_free_tgts(&tl);
+@@ -3960,7 +3956,7 @@ cifs_negotiate_protocol(const unsigned int xid, struct cifs_ses *ses,
+ 	if (rc == 0) {
+ 		spin_lock(&cifs_tcp_ses_lock);
+ 		if (server->tcpStatus == CifsInNegotiate)
+-			server->tcpStatus = CifsNeedSessSetup;
++			server->tcpStatus = CifsGood;
+ 		else
+ 			rc = -EHOSTDOWN;
+ 		spin_unlock(&cifs_tcp_ses_lock);
+@@ -3983,19 +3979,18 @@ cifs_setup_session(const unsigned int xid, struct cifs_ses *ses,
+ 	bool is_binding = false;
+ 
+ 	/* only send once per connect */
++	spin_lock(&ses->chan_lock);
++	is_binding = !CIFS_ALL_CHANS_NEED_RECONNECT(ses);
++	spin_unlock(&ses->chan_lock);
++
+ 	spin_lock(&cifs_tcp_ses_lock);
+-	if ((server->tcpStatus != CifsNeedSessSetup) &&
+-	    (ses->status == CifsGood)) {
++	if (ses->status == CifsExiting) {
+ 		spin_unlock(&cifs_tcp_ses_lock);
+ 		return 0;
+ 	}
+-	server->tcpStatus = CifsInSessSetup;
++	ses->status = CifsInSessSetup;
+ 	spin_unlock(&cifs_tcp_ses_lock);
+ 
+-	spin_lock(&ses->chan_lock);
+-	is_binding = !CIFS_ALL_CHANS_NEED_RECONNECT(ses);
+-	spin_unlock(&ses->chan_lock);
+-
+ 	if (!is_binding) {
+ 		ses->capabilities = server->capabilities;
+ 		if (!linuxExtEnabled)
+@@ -4019,13 +4014,13 @@ cifs_setup_session(const unsigned int xid, struct cifs_ses *ses,
+ 	if (rc) {
+ 		cifs_server_dbg(VFS, "Send error in SessSetup = %d\n", rc);
+ 		spin_lock(&cifs_tcp_ses_lock);
+-		if (server->tcpStatus == CifsInSessSetup)
+-			server->tcpStatus = CifsNeedSessSetup;
++		if (ses->status == CifsInSessSetup)
++			ses->status = CifsNeedSessSetup;
+ 		spin_unlock(&cifs_tcp_ses_lock);
+ 	} else {
+ 		spin_lock(&cifs_tcp_ses_lock);
+-		if (server->tcpStatus == CifsInSessSetup)
+-			server->tcpStatus = CifsGood;
++		if (ses->status == CifsInSessSetup)
++			ses->status = CifsGood;
+ 		/* Even if one channel is active, session is in good state */
+ 		ses->status = CifsGood;
+ 		spin_unlock(&cifs_tcp_ses_lock);
+diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
+index 956f8e5cf3e74..c5dd6f7305bd1 100644
+--- a/fs/cifs/dfs_cache.c
++++ b/fs/cifs/dfs_cache.c
+@@ -654,7 +654,7 @@ static struct cache_entry *__lookup_cache_entry(const char *path, unsigned int h
+ 			return ce;
+ 		}
+ 	}
+-	return ERR_PTR(-EEXIST);
++	return ERR_PTR(-ENOENT);
+ }
+ 
+ /*
+@@ -662,7 +662,7 @@ static struct cache_entry *__lookup_cache_entry(const char *path, unsigned int h
+  *
+  * Use whole path components in the match.  Must be called with htable_rw_lock held.
+  *
+- * Return ERR_PTR(-EEXIST) if the entry is not found.
++ * Return ERR_PTR(-ENOENT) if the entry is not found.
+  */
+ static struct cache_entry *lookup_cache_entry(const char *path)
+ {
+@@ -710,7 +710,7 @@ static struct cache_entry *lookup_cache_entry(const char *path)
+ 		while (e > s && *e != sep)
+ 			e--;
+ 	}
+-	return ERR_PTR(-EEXIST);
++	return ERR_PTR(-ENOENT);
+ }
+ 
+ /**
+diff --git a/fs/cifs/fs_context.c b/fs/cifs/fs_context.c
+index a92e9eec521f3..fbb0e98c7d2c4 100644
+--- a/fs/cifs/fs_context.c
++++ b/fs/cifs/fs_context.c
+@@ -312,7 +312,6 @@ smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx
+ 	new_ctx->password = NULL;
+ 	new_ctx->server_hostname = NULL;
+ 	new_ctx->domainname = NULL;
+-	new_ctx->workstation_name = NULL;
+ 	new_ctx->UNC = NULL;
+ 	new_ctx->source = NULL;
+ 	new_ctx->iocharset = NULL;
+@@ -327,7 +326,6 @@ smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx
+ 	DUP_CTX_STR(UNC);
+ 	DUP_CTX_STR(source);
+ 	DUP_CTX_STR(domainname);
+-	DUP_CTX_STR(workstation_name);
+ 	DUP_CTX_STR(nodename);
+ 	DUP_CTX_STR(iocharset);
+ 
+@@ -766,8 +764,7 @@ static int smb3_verify_reconfigure_ctx(struct fs_context *fc,
+ 		cifs_errorf(fc, "can not change domainname during remount\n");
+ 		return -EINVAL;
+ 	}
+-	if (new_ctx->workstation_name &&
+-	    (!old_ctx->workstation_name || strcmp(new_ctx->workstation_name, old_ctx->workstation_name))) {
++	if (strcmp(new_ctx->workstation_name, old_ctx->workstation_name)) {
+ 		cifs_errorf(fc, "can not change workstation_name during remount\n");
+ 		return -EINVAL;
+ 	}
+@@ -814,7 +811,6 @@ static int smb3_reconfigure(struct fs_context *fc)
+ 	STEAL_STRING(cifs_sb, ctx, username);
+ 	STEAL_STRING(cifs_sb, ctx, password);
+ 	STEAL_STRING(cifs_sb, ctx, domainname);
+-	STEAL_STRING(cifs_sb, ctx, workstation_name);
+ 	STEAL_STRING(cifs_sb, ctx, nodename);
+ 	STEAL_STRING(cifs_sb, ctx, iocharset);
+ 
+@@ -1467,22 +1463,15 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ 
+ int smb3_init_fs_context(struct fs_context *fc)
+ {
+-	int rc;
+ 	struct smb3_fs_context *ctx;
+ 	char *nodename = utsname()->nodename;
+ 	int i;
+ 
+ 	ctx = kzalloc(sizeof(struct smb3_fs_context), GFP_KERNEL);
+-	if (unlikely(!ctx)) {
+-		rc = -ENOMEM;
+-		goto err_exit;
+-	}
++	if (unlikely(!ctx))
++		return -ENOMEM;
+ 
+-	ctx->workstation_name = kstrdup(nodename, GFP_KERNEL);
+-	if (unlikely(!ctx->workstation_name)) {
+-		rc = -ENOMEM;
+-		goto err_exit;
+-	}
++	strscpy(ctx->workstation_name, nodename, sizeof(ctx->workstation_name));
+ 
+ 	/*
+ 	 * does not have to be perfect mapping since field is
+@@ -1555,14 +1544,6 @@ int smb3_init_fs_context(struct fs_context *fc)
+ 	fc->fs_private = ctx;
+ 	fc->ops = &smb3_fs_context_ops;
+ 	return 0;
+-
+-err_exit:
+-	if (ctx) {
+-		kfree(ctx->workstation_name);
+-		kfree(ctx);
+-	}
+-
+-	return rc;
+ }
+ 
+ void
+@@ -1588,8 +1569,6 @@ smb3_cleanup_fs_context_contents(struct smb3_fs_context *ctx)
+ 	ctx->source = NULL;
+ 	kfree(ctx->domainname);
+ 	ctx->domainname = NULL;
+-	kfree(ctx->workstation_name);
+-	ctx->workstation_name = NULL;
+ 	kfree(ctx->nodename);
+ 	ctx->nodename = NULL;
+ 	kfree(ctx->iocharset);
+diff --git a/fs/cifs/fs_context.h b/fs/cifs/fs_context.h
+index e54090d9ef368..3a156c1439254 100644
+--- a/fs/cifs/fs_context.h
++++ b/fs/cifs/fs_context.h
+@@ -170,7 +170,7 @@ struct smb3_fs_context {
+ 	char *server_hostname;
+ 	char *UNC;
+ 	char *nodename;
+-	char *workstation_name;
++	char workstation_name[CIFS_MAX_WORKSTATION_LEN];
+ 	char *iocharset;  /* local code page for mapping to and from Unicode */
+ 	char source_rfc1001_name[RFC1001_NAME_LEN_WITH_NULL]; /* clnt nb name */
+ 	char target_rfc1001_name[RFC1001_NAME_LEN_WITH_NULL]; /* srvr nb name */
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index afaf59c221936..5a803d6861464 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -95,7 +95,6 @@ sesInfoFree(struct cifs_ses *buf_to_free)
+ 	kfree_sensitive(buf_to_free->password);
+ 	kfree(buf_to_free->user_name);
+ 	kfree(buf_to_free->domainName);
+-	kfree(buf_to_free->workstation_name);
+ 	kfree_sensitive(buf_to_free->auth_key.response);
+ 	kfree(buf_to_free->iface_list);
+ 	kfree_sensitive(buf_to_free);
+@@ -1309,7 +1308,7 @@ int cifs_update_super_prepath(struct cifs_sb_info *cifs_sb, char *prefix)
+  * for "\<server>\<dfsname>\<linkpath>" DFS reference,
+  * where <dfsname> contains non-ASCII unicode symbols.
+  *
+- * Check such DFS reference and emulate -ENOENT if it is actual.
++ * Check such DFS reference.
+  */
+ int cifs_dfs_query_info_nonascii_quirk(const unsigned int xid,
+ 				       struct cifs_tcon *tcon,
+@@ -1341,10 +1340,6 @@ int cifs_dfs_query_info_nonascii_quirk(const unsigned int xid,
+ 		cifs_dbg(FYI, "DFS ref '%s' is found, emulate -EREMOTE\n",
+ 			 dfspath);
+ 		rc = -EREMOTE;
+-	} else if (rc == -EEXIST) {
+-		cifs_dbg(FYI, "DFS ref '%s' is not found, emulate -ENOENT\n",
+-			 dfspath);
+-		rc = -ENOENT;
+ 	} else {
+ 		cifs_dbg(FYI, "%s: dfs_cache_find returned %d\n", __func__, rc);
+ 	}
+diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c
+index 32f478c7a66d8..1a0995bb5d90c 100644
+--- a/fs/cifs/sess.c
++++ b/fs/cifs/sess.c
+@@ -714,9 +714,9 @@ static int size_of_ntlmssp_blob(struct cifs_ses *ses, int base_size)
+ 	else
+ 		sz += sizeof(__le16);
+ 
+-	if (ses->workstation_name)
++	if (ses->workstation_name[0])
+ 		sz += sizeof(__le16) * strnlen(ses->workstation_name,
+-			CIFS_MAX_WORKSTATION_LEN);
++					       ntlmssp_workstation_name_size(ses));
+ 	else
+ 		sz += sizeof(__le16);
+ 
+@@ -960,7 +960,7 @@ int build_ntlmssp_auth_blob(unsigned char **pbuffer,
+ 
+ 	cifs_security_buffer_from_str(&sec_blob->WorkstationName,
+ 				      ses->workstation_name,
+-				      CIFS_MAX_WORKSTATION_LEN,
++				      ntlmssp_workstation_name_size(ses),
+ 				      *pbuffer, &tmp,
+ 				      nls_cp);
+ 
+diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
+index fe5bfa245fa7e..1b89b9b8a212a 100644
+--- a/fs/cifs/smb2inode.c
++++ b/fs/cifs/smb2inode.c
+@@ -362,8 +362,6 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ 	num_rqst++;
+ 
+ 	if (cfile) {
+-		cifsFileInfo_put(cfile);
+-		cfile = NULL;
+ 		rc = compound_send_recv(xid, ses, server,
+ 					flags, num_rqst - 2,
+ 					&rqst[1], &resp_buftype[1],
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index d6aaeff4a30a5..861291662c95d 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -760,8 +760,8 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ 		struct cifs_sb_info *cifs_sb,
+ 		struct cached_fid **cfid)
+ {
+-	struct cifs_ses *ses = tcon->ses;
+-	struct TCP_Server_Info *server = ses->server;
++	struct cifs_ses *ses;
++	struct TCP_Server_Info *server;
+ 	struct cifs_open_parms oparms;
+ 	struct smb2_create_rsp *o_rsp = NULL;
+ 	struct smb2_query_info_rsp *qi_rsp = NULL;
+@@ -779,6 +779,9 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ 	if (tcon->nohandlecache)
+ 		return -ENOTSUPP;
+ 
++	ses = tcon->ses;
++	server = ses->server;
++
+ 	if (cifs_sb->root == NULL)
+ 		return -ENOENT;
+ 
+@@ -3837,7 +3840,7 @@ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
+ 		if (rc)
+ 			goto out;
+ 
+-		if ((cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE) == 0)
++		if (cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE)
+ 			smb2_set_sparse(xid, tcon, cfile, inode, false);
+ 
+ 		eof = cpu_to_le64(off + len);
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 1b7ad0c095668..f5321a3500f38 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -3899,7 +3899,8 @@ SMB2_echo(struct TCP_Server_Info *server)
+ 	cifs_dbg(FYI, "In echo request for conn_id %lld\n", server->conn_id);
+ 
+ 	spin_lock(&cifs_tcp_ses_lock);
+-	if (server->tcpStatus == CifsNeedNegotiate) {
++	if (server->ops->need_neg &&
++	    server->ops->need_neg(server)) {
+ 		spin_unlock(&cifs_tcp_ses_lock);
+ 		/* No need to send echo on newly established connections */
+ 		mod_delayed_work(cifsiod_wq, &server->reconnect, 0);
+diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
+index 2af79093b78bc..01b732641edb2 100644
+--- a/fs/cifs/smb2transport.c
++++ b/fs/cifs/smb2transport.c
+@@ -641,7 +641,8 @@ smb2_sign_rqst(struct smb_rqst *rqst, struct TCP_Server_Info *server)
+ 	if (!is_signed)
+ 		return 0;
+ 	spin_lock(&cifs_tcp_ses_lock);
+-	if (server->tcpStatus == CifsNeedNegotiate) {
++	if (server->ops->need_neg &&
++	    server->ops->need_neg(server)) {
+ 		spin_unlock(&cifs_tcp_ses_lock);
+ 		return 0;
+ 	}
+diff --git a/fs/dax.c b/fs/dax.c
+index 67a08a32fccb7..a372304c9695b 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -845,7 +845,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
+ 			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
+ 				goto unlock_pmd;
+ 
+-			flush_cache_page(vma, address, pfn);
++			flush_cache_range(vma, address,
++					  address + HPAGE_PMD_SIZE);
+ 			pmd = pmdp_invalidate(vma, address, pmdp);
+ 			pmd = pmd_wrprotect(pmd);
+ 			pmd = pmd_mkclean(pmd);
+diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
+index bdb51d209ba25..5b485cd96c931 100644
+--- a/fs/dlm/lock.c
++++ b/fs/dlm/lock.c
+@@ -1559,6 +1559,7 @@ static int _remove_from_waiters(struct dlm_lkb *lkb, int mstype,
+ 		lkb->lkb_wait_type = 0;
+ 		lkb->lkb_flags &= ~DLM_IFL_OVERLAP_CANCEL;
+ 		lkb->lkb_wait_count--;
++		unhold_lkb(lkb);
+ 		goto out_del;
+ 	}
+ 
+@@ -1585,6 +1586,7 @@ static int _remove_from_waiters(struct dlm_lkb *lkb, int mstype,
+ 		log_error(ls, "remwait error %x reply %d wait_type %d overlap",
+ 			  lkb->lkb_id, mstype, lkb->lkb_wait_type);
+ 		lkb->lkb_wait_count--;
++		unhold_lkb(lkb);
+ 		lkb->lkb_wait_type = 0;
+ 	}
+ 
+@@ -1795,7 +1797,6 @@ static void shrink_bucket(struct dlm_ls *ls, int b)
+ 		memcpy(ls->ls_remove_name, name, DLM_RESNAME_MAXLEN);
+ 		spin_unlock(&ls->ls_remove_spin);
+ 		spin_unlock(&ls->ls_rsbtbl[b].lock);
+-		wake_up(&ls->ls_remove_wait);
+ 
+ 		send_remove(r);
+ 
+@@ -1804,6 +1805,7 @@ static void shrink_bucket(struct dlm_ls *ls, int b)
+ 		ls->ls_remove_len = 0;
+ 		memset(ls->ls_remove_name, 0, DLM_RESNAME_MAXLEN);
+ 		spin_unlock(&ls->ls_remove_spin);
++		wake_up(&ls->ls_remove_wait);
+ 
+ 		dlm_free_rsb(r);
+ 	}
+@@ -4079,7 +4081,6 @@ static void send_repeat_remove(struct dlm_ls *ls, char *ms_name, int len)
+ 	memcpy(ls->ls_remove_name, name, DLM_RESNAME_MAXLEN);
+ 	spin_unlock(&ls->ls_remove_spin);
+ 	spin_unlock(&ls->ls_rsbtbl[b].lock);
+-	wake_up(&ls->ls_remove_wait);
+ 
+ 	rv = _create_message(ls, sizeof(struct dlm_message) + len,
+ 			     dir_nodeid, DLM_MSG_REMOVE, &ms, &mh);
+@@ -4095,6 +4096,7 @@ static void send_repeat_remove(struct dlm_ls *ls, char *ms_name, int len)
+ 	ls->ls_remove_len = 0;
+ 	memset(ls->ls_remove_name, 0, DLM_RESNAME_MAXLEN);
+ 	spin_unlock(&ls->ls_remove_spin);
++	wake_up(&ls->ls_remove_wait);
+ }
+ 
+ static int receive_request(struct dlm_ls *ls, struct dlm_message *ms)
+@@ -5331,11 +5333,16 @@ int dlm_recover_waiters_post(struct dlm_ls *ls)
+ 		lkb->lkb_flags &= ~DLM_IFL_OVERLAP_UNLOCK;
+ 		lkb->lkb_flags &= ~DLM_IFL_OVERLAP_CANCEL;
+ 		lkb->lkb_wait_type = 0;
+-		lkb->lkb_wait_count = 0;
++		/* drop all wait_count references; we still
++		 * hold a reference for this iteration.
++		 */
++		while (lkb->lkb_wait_count) {
++			lkb->lkb_wait_count--;
++			unhold_lkb(lkb);
++		}
+ 		mutex_lock(&ls->ls_waiters_mutex);
+ 		list_del_init(&lkb->lkb_wait_reply);
+ 		mutex_unlock(&ls->ls_waiters_mutex);
+-		unhold_lkb(lkb); /* for waiters list */
+ 
+ 		if (oc || ou) {
+ 			/* do an unlock or cancel instead of resending */
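
The recovery fix above stops zeroing lkb_wait_count outright: every count
was taken together with a hold on the lkb, so each one has to be returned
through unhold_lkb() or the reference count stays permanently inflated. A
tiny sketch of the pairing, with plain integers standing in for the
kernel's hold/unhold:

	#include <stdio.h>

	static int refs;	/* stand-in for the lkb reference count */

	static void hold(void)   { refs++; }
	static void unhold(void) { refs--; }

	int main(void)
	{
		int i, wait_count = 0;

		for (i = 0; i < 3; i++) {	/* three waits, three holds */
			hold();
			wait_count++;
		}

		while (wait_count) {		/* drain: one unhold each   */
			wait_count--;
			unhold();
		}

		printf("refs=%d (balanced)\n", refs);	/* prints 0 */
		return 0;
	}
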
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index e284d696c1fdc..6ed935ad82471 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -1789,7 +1789,7 @@ static int dlm_listen_for_all(void)
+ 				  SOCK_STREAM, dlm_proto_ops->proto, &sock);
+ 	if (result < 0) {
+ 		log_print("Can't create comms socket: %d", result);
+-		goto out;
++		return result;
+ 	}
+ 
+ 	sock_set_mark(sock->sk, dlm_config.ci_mark);
+diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
+index c38b2b8ffd1d3..a10d2bcfe75a8 100644
+--- a/fs/dlm/plock.c
++++ b/fs/dlm/plock.c
+@@ -23,11 +23,11 @@ struct plock_op {
+ 	struct list_head list;
+ 	int done;
+ 	struct dlm_plock_info info;
++	int (*callback)(struct file_lock *fl, int result);
+ };
+ 
+ struct plock_xop {
+ 	struct plock_op xop;
+-	int (*callback)(struct file_lock *fl, int result);
+ 	void *fl;
+ 	void *file;
+ 	struct file_lock flc;
+@@ -129,19 +129,18 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
+ 		/* fl_owner is lockd which doesn't distinguish
+ 		   processes on the nfs client */
+ 		op->info.owner	= (__u64) fl->fl_pid;
+-		xop->callback	= fl->fl_lmops->lm_grant;
++		op->callback	= fl->fl_lmops->lm_grant;
+ 		locks_init_lock(&xop->flc);
+ 		locks_copy_lock(&xop->flc, fl);
+ 		xop->fl		= fl;
+ 		xop->file	= file;
+ 	} else {
+ 		op->info.owner	= (__u64)(long) fl->fl_owner;
+-		xop->callback	= NULL;
+ 	}
+ 
+ 	send_op(op);
+ 
+-	if (xop->callback == NULL) {
++	if (!op->callback) {
+ 		rv = wait_event_interruptible(recv_wq, (op->done != 0));
+ 		if (rv == -ERESTARTSYS) {
+ 			log_debug(ls, "dlm_posix_lock: wait killed %llx",
+@@ -203,7 +202,7 @@ static int dlm_plock_callback(struct plock_op *op)
+ 	file = xop->file;
+ 	flc = &xop->flc;
+ 	fl = xop->fl;
+-	notify = xop->callback;
++	notify = op->callback;
+ 
+ 	if (op->info.rv) {
+ 		notify(fl, op->info.rv);
+@@ -436,10 +435,9 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
+ 		if (op->info.fsid == info.fsid &&
+ 		    op->info.number == info.number &&
+ 		    op->info.owner == info.owner) {
+-			struct plock_xop *xop = (struct plock_xop *)op;
+ 			list_del_init(&op->list);
+ 			memcpy(&op->info, &info, sizeof(info));
+-			if (xop->callback)
++			if (op->callback)
+ 				do_callback = 1;
+ 			else
+ 				op->done = 1;
+diff --git a/fs/erofs/decompressor.c b/fs/erofs/decompressor.c
+index 3efa686c76441..0e0d1fc0f1301 100644
+--- a/fs/erofs/decompressor.c
++++ b/fs/erofs/decompressor.c
+@@ -322,6 +322,7 @@ static int z_erofs_shifted_transform(struct z_erofs_decompress_req *rq,
+ 		PAGE_ALIGN(rq->pageofs_out + rq->outputsize) >> PAGE_SHIFT;
+ 	const unsigned int righthalf = min_t(unsigned int, rq->outputsize,
+ 					     PAGE_SIZE - rq->pageofs_out);
++	const unsigned int lefthalf = rq->outputsize - righthalf;
+ 	unsigned char *src, *dst;
+ 
+ 	if (nrpages_out > 2) {
+@@ -344,10 +345,10 @@ static int z_erofs_shifted_transform(struct z_erofs_decompress_req *rq,
+ 	if (nrpages_out == 2) {
+ 		DBG_BUGON(!rq->out[1]);
+ 		if (rq->out[1] == *rq->in) {
+-			memmove(src, src + righthalf, rq->pageofs_out);
++			memmove(src, src + righthalf, lefthalf);
+ 		} else {
+ 			dst = kmap_atomic(rq->out[1]);
+-			memcpy(dst, src + righthalf, rq->pageofs_out);
++			memcpy(dst, src + righthalf, lefthalf);
+ 			kunmap_atomic(dst);
+ 		}
+ 	}
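
The erofs fix above corrects the length of the spill into the second output
page: with outputsize bytes landing at pageofs_out inside the first page,
righthalf = min(outputsize, PAGE_SIZE - pageofs_out) stays in page 0 and
lefthalf = outputsize - righthalf belongs to page 1. The old code copied
pageofs_out bytes instead, which only coincides with lefthalf when
outputsize equals PAGE_SIZE. A worked sketch, assuming 4 KiB pages and
made-up offsets:

	#include <stdio.h>

	#define PAGE_SIZE 4096u

	int main(void)
	{
		unsigned int pageofs_out = 3072, outputsize = 2048;
		unsigned int room = PAGE_SIZE - pageofs_out;
		unsigned int righthalf = outputsize < room ? outputsize : room;
		unsigned int lefthalf = outputsize - righthalf;

		/* page 0 gets righthalf bytes, page 1 gets lefthalf bytes */
		printf("righthalf=%u lefthalf=%u (old length was %u)\n",
		       righthalf, lefthalf, pageofs_out);
		return 0;
	}

Here the split is 1024/1024, while the old length of 3072 would have read
past the valid decompressed data.
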
+diff --git a/fs/exec.c b/fs/exec.c
+index e3e55d5e0be1f..75eb6e0ee7b2f 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1308,8 +1308,6 @@ int begin_new_exec(struct linux_binprm * bprm)
+ 	if (retval)
+ 		goto out_unlock;
+ 
+-	if (me->flags & PF_KTHREAD)
+-		free_kthread_struct(me);
+ 	me->flags &= ~(PF_RANDOMIZE | PF_FORKNOEXEC | PF_KTHREAD |
+ 					PF_NOFREEZE | PF_NO_SETAFFINITY);
+ 	flush_thread();
+@@ -1955,6 +1953,10 @@ int kernel_execve(const char *kernel_filename,
+ 	int fd = AT_FDCWD;
+ 	int retval;
+ 
++	if (WARN_ON_ONCE((current->flags & PF_KTHREAD) &&
++			(current->worker_private)))
++		return -EINVAL;
++
+ 	filename = getname_kernel(kernel_filename);
+ 	if (IS_ERR(filename))
+ 		return PTR_ERR(filename);
+diff --git a/fs/exportfs/expfs.c b/fs/exportfs/expfs.c
+index 0106eba46d5af..3ef80d000e13d 100644
+--- a/fs/exportfs/expfs.c
++++ b/fs/exportfs/expfs.c
+@@ -145,7 +145,7 @@ static struct dentry *reconnect_one(struct vfsmount *mnt,
+ 	if (err)
+ 		goto out_err;
+ 	dprintk("%s: found name: %s\n", __func__, nbuf);
+-	tmp = lookup_one_len_unlocked(nbuf, parent, strlen(nbuf));
++	tmp = lookup_one_unlocked(mnt_user_ns(mnt), nbuf, parent, strlen(nbuf));
+ 	if (IS_ERR(tmp)) {
+ 		dprintk("%s: lookup failed: %d\n", __func__, PTR_ERR(tmp));
+ 		err = PTR_ERR(tmp);
+@@ -525,7 +525,8 @@ exportfs_decode_fh_raw(struct vfsmount *mnt, struct fid *fid, int fh_len,
+ 		}
+ 
+ 		inode_lock(target_dir->d_inode);
+-		nresult = lookup_one_len(nbuf, target_dir, strlen(nbuf));
++		nresult = lookup_one(mnt_user_ns(mnt), nbuf,
++				     target_dir, strlen(nbuf));
+ 		if (!IS_ERR(nresult)) {
+ 			if (unlikely(nresult->d_inode != result->d_inode)) {
+ 				dput(nresult);
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index a743b1e3b89ec..f6d6661817b63 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1440,12 +1440,6 @@ struct ext4_super_block {
+ 
+ #ifdef __KERNEL__
+ 
+-#ifdef CONFIG_FS_ENCRYPTION
+-#define DUMMY_ENCRYPTION_ENABLED(sbi) ((sbi)->s_dummy_enc_policy.policy != NULL)
+-#else
+-#define DUMMY_ENCRYPTION_ENABLED(sbi) (0)
+-#endif
+-
+ /* Number of quota types we support */
+ #define EXT4_MAXQUOTAS 3
+ 
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index e473fde6b64b4..c148bb97b5273 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -372,7 +372,7 @@ static int ext4_valid_extent_entries(struct inode *inode,
+ {
+ 	unsigned short entries;
+ 	ext4_lblk_t lblock = 0;
+-	ext4_lblk_t prev = 0;
++	ext4_lblk_t cur = 0;
+ 
+ 	if (eh->eh_entries == 0)
+ 		return 1;
+@@ -396,11 +396,11 @@ static int ext4_valid_extent_entries(struct inode *inode,
+ 
+ 			/* Check for overlapping extents */
+ 			lblock = le32_to_cpu(ext->ee_block);
+-			if ((lblock <= prev) && prev) {
++			if (lblock < cur) {
+ 				*pblk = ext4_ext_pblock(ext);
+ 				return 0;
+ 			}
+-			prev = lblock + ext4_ext_get_actual_len(ext) - 1;
++			cur = lblock + ext4_ext_get_actual_len(ext);
+ 			ext++;
+ 			entries--;
+ 		}
+@@ -420,13 +420,13 @@ static int ext4_valid_extent_entries(struct inode *inode,
+ 
+ 			/* Check for overlapping index extents */
+ 			lblock = le32_to_cpu(ext_idx->ei_block);
+-			if ((lblock <= prev) && prev) {
++			if (lblock < cur) {
+ 				*pblk = ext4_idx_pblock(ext_idx);
+ 				return 0;
+ 			}
+ 			ext_idx++;
+ 			entries--;
+-			prev = lblock;
++			cur = lblock + 1;
+ 		}
+ 	}
+ 	return 1;
+@@ -4693,15 +4693,17 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
+ 		     FALLOC_FL_INSERT_RANGE))
+ 		return -EOPNOTSUPP;
+ 
++	inode_lock(inode);
++	ret = ext4_convert_inline_data(inode);
++	inode_unlock(inode);
++	if (ret)
++		goto exit;
++
+ 	if (mode & FALLOC_FL_PUNCH_HOLE) {
+ 		ret = ext4_punch_hole(file, offset, len);
+ 		goto exit;
+ 	}
+ 
+-	ret = ext4_convert_inline_data(inode);
+-	if (ret)
+-		goto exit;
+-
+ 	if (mode & FALLOC_FL_COLLAPSE_RANGE) {
+ 		ret = ext4_collapse_range(file, offset, len);
+ 		goto exit;
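
The extent validation rewrite above tracks cur, the first block at which
the next extent may legally begin, instead of prev, the last block of the
previous extent; with prev, the guard (lblock <= prev) && prev never fired
when the previous extent ended at block 0 (prev == 0), so an overlap at the
very start of a file slipped through. A standalone sketch of the corrected
scan:

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	struct ext { uint32_t lblk; uint32_t len; };

	/* true when the extents are sorted and non-overlapping */
	static bool extents_valid(const struct ext *e, unsigned int n)
	{
		uint32_t cur = 0;	/* next legal start; covers block 0 */
		unsigned int i;

		for (i = 0; i < n; i++) {
			if (e[i].lblk < cur)
				return false;
			cur = e[i].lblk + e[i].len;
		}
		return true;
	}

	int main(void)
	{
		/* both cover block 0; the old prev check passed this */
		struct ext bad[]  = { { 0, 1 }, { 0, 4 } };
		struct ext good[] = { { 0, 1 }, { 1, 4 } };

		printf("bad:  %d\n", extents_valid(bad, 2));	/* 0 */
		printf("good: %d\n", extents_valid(good, 2));	/* 1 */
		return 0;
	}
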
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 9c076262770d9..e9ef5cf309694 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -2005,6 +2005,18 @@ int ext4_convert_inline_data(struct inode *inode)
+ 	if (!ext4_has_inline_data(inode)) {
+ 		ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
+ 		return 0;
++	} else if (!ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) {
++		/*
++		 * Inode has inline data but EXT4_STATE_MAY_INLINE_DATA is
++	 * cleared. This means we are in the middle of moving inline
++	 * data to a delayed-allocated block. Just force writeout
++		 * here to finish conversion.
++		 */
++		error = filemap_flush(inode->i_mapping);
++		if (error)
++			return error;
++		if (!ext4_has_inline_data(inode))
++			return 0;
+ 	}
+ 
+ 	needed_blocks = ext4_writepage_trans_blocks(inode);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 646ece9b3455f..beed9e32571c0 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -3967,15 +3967,6 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
+ 
+ 	trace_ext4_punch_hole(inode, offset, length, 0);
+ 
+-	ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
+-	if (ext4_has_inline_data(inode)) {
+-		filemap_invalidate_lock(mapping);
+-		ret = ext4_convert_inline_data(inode);
+-		filemap_invalidate_unlock(mapping);
+-		if (ret)
+-			return ret;
+-	}
+-
+ 	/*
+ 	 * Write out all dirty pages to avoid race conditions
+ 	 * Then release them.
+@@ -5398,6 +5389,7 @@ int ext4_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
+ 	if (attr->ia_valid & ATTR_SIZE) {
+ 		handle_t *handle;
+ 		loff_t oldsize = inode->i_size;
++		loff_t old_disksize;
+ 		int shrink = (attr->ia_size < inode->i_size);
+ 
+ 		if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) {
+@@ -5469,6 +5461,7 @@ int ext4_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
+ 					inode->i_sb->s_blocksize_bits);
+ 
+ 			down_write(&EXT4_I(inode)->i_data_sem);
++			old_disksize = EXT4_I(inode)->i_disksize;
+ 			EXT4_I(inode)->i_disksize = attr->ia_size;
+ 			rc = ext4_mark_inode_dirty(handle, inode);
+ 			if (!error)
+@@ -5480,6 +5473,8 @@ int ext4_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
+ 			 */
+ 			if (!error)
+ 				i_size_write(inode, attr->ia_size);
++			else
++				EXT4_I(inode)->i_disksize = old_disksize;
+ 			up_write(&EXT4_I(inode)->i_data_sem);
+ 			ext4_journal_stop(handle);
+ 			if (error)
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 252c168454c7f..87d85ce04d58b 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -6398,6 +6398,7 @@ __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
+  * @start:		first group block to examine
+  * @max:		last group block to examine
+  * @minblocks:		minimum extent block count
++ * @set_trimmed:	set the trimmed flag if at least one block is trimmed
+  *
+  * ext4_trim_all_free walks through group's block bitmap searching for free
+  * extents. When the free extent is found, mark it as used in group buddy
+@@ -6407,7 +6408,7 @@ __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
+ static ext4_grpblk_t
+ ext4_trim_all_free(struct super_block *sb, ext4_group_t group,
+ 		   ext4_grpblk_t start, ext4_grpblk_t max,
+-		   ext4_grpblk_t minblocks)
++		   ext4_grpblk_t minblocks, bool set_trimmed)
+ {
+ 	struct ext4_buddy e4b;
+ 	int ret;
+@@ -6426,7 +6427,7 @@ ext4_trim_all_free(struct super_block *sb, ext4_group_t group,
+ 	if (!EXT4_MB_GRP_WAS_TRIMMED(e4b.bd_info) ||
+ 	    minblocks < EXT4_SB(sb)->s_last_trim_minblks) {
+ 		ret = ext4_try_to_trim_range(sb, &e4b, start, max, minblocks);
+-		if (ret >= 0)
++		if (ret >= 0 && set_trimmed)
+ 			EXT4_MB_GRP_SET_TRIMMED(e4b.bd_info);
+ 	} else {
+ 		ret = 0;
+@@ -6463,6 +6464,7 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 	ext4_fsblk_t first_data_blk =
+ 			le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block);
+ 	ext4_fsblk_t max_blks = ext4_blocks_count(EXT4_SB(sb)->s_es);
++	bool whole_group, eof = false;
+ 	int ret = 0;
+ 
+ 	start = range->start >> sb->s_blocksize_bits;
+@@ -6481,8 +6483,10 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 		if (minlen > EXT4_CLUSTERS_PER_GROUP(sb))
+ 			goto out;
+ 	}
+-	if (end >= max_blks)
++	if (end >= max_blks - 1) {
+ 		end = max_blks - 1;
++		eof = true;
++	}
+ 	if (end <= first_data_blk)
+ 		goto out;
+ 	if (start < first_data_blk)
+@@ -6496,6 +6500,7 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 
+ 	/* end now represents the last cluster to discard in this group */
+ 	end = EXT4_CLUSTERS_PER_GROUP(sb) - 1;
++	whole_group = true;
+ 
+ 	for (group = first_group; group <= last_group; group++) {
+ 		grp = ext4_get_group_info(sb, group);
+@@ -6512,12 +6517,13 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 		 * change it for the last group, note that last_cluster is
+ 		 * already computed earlier by ext4_get_group_no_and_offset()
+ 		 */
+-		if (group == last_group)
++		if (group == last_group) {
+ 			end = last_cluster;
+-
++			whole_group = eof ? true : end == EXT4_CLUSTERS_PER_GROUP(sb) - 1;
++		}
+ 		if (grp->bb_free >= minlen) {
+ 			cnt = ext4_trim_all_free(sb, group, first_cluster,
+-						end, minlen);
++						 end, minlen, whole_group);
+ 			if (cnt < 0) {
+ 				ret = cnt;
+ 				break;
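
Annotation: in the ext4_trim_fs() hunk above, the cached per-group "already trimmed" flag may only be set when a group is trimmed end to end; the last group in the range qualifies only if the request reaches end-of-filesystem (eof) or the group's final cluster. A hedged standalone sketch of that decision, with an invented CLUSTERS_PER_GROUP:

/* illustration only: constants and signature are simplified stand-ins */
#include <stdbool.h>
#include <stdio.h>

#define CLUSTERS_PER_GROUP 32768

static bool trims_whole_group(unsigned int group, unsigned int last_group,
			      unsigned int end_in_group, bool range_hits_eof)
{
	if (group != last_group)
		return true;	/* middle groups are always fully covered */
	return range_hits_eof || end_in_group == CLUSTERS_PER_GROUP - 1;
}

int main(void)
{
	/* range stops halfway through the last group, short of EOF */
	printf("%d\n", trims_whole_group(5, 5, 1000, false));	/* 0: don't cache */
	/* range runs to the end of the filesystem */
	printf("%d\n", trims_whole_group(5, 5, 1000, true));	/* 1: safe to cache */
	return 0;
}
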
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 767b4bfe39c38..e9cba12e5e128 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -277,9 +277,9 @@ static struct dx_frame *dx_probe(struct ext4_filename *fname,
+ 				 struct dx_hash_info *hinfo,
+ 				 struct dx_frame *frame);
+ static void dx_release(struct dx_frame *frames);
+-static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de,
+-		       unsigned blocksize, struct dx_hash_info *hinfo,
+-		       struct dx_map_entry map[]);
++static int dx_make_map(struct inode *dir, struct buffer_head *bh,
++		       struct dx_hash_info *hinfo,
++		       struct dx_map_entry *map_tail);
+ static void dx_sort_map(struct dx_map_entry *map, unsigned count);
+ static struct ext4_dir_entry_2 *dx_move_dirents(struct inode *dir, char *from,
+ 					char *to, struct dx_map_entry *offsets,
+@@ -777,12 +777,14 @@ static struct dx_frame *
+ dx_probe(struct ext4_filename *fname, struct inode *dir,
+ 	 struct dx_hash_info *hinfo, struct dx_frame *frame_in)
+ {
+-	unsigned count, indirect;
++	unsigned count, indirect, level, i;
+ 	struct dx_entry *at, *entries, *p, *q, *m;
+ 	struct dx_root *root;
+ 	struct dx_frame *frame = frame_in;
+ 	struct dx_frame *ret_err = ERR_PTR(ERR_BAD_DX_DIR);
+ 	u32 hash;
++	ext4_lblk_t block;
++	ext4_lblk_t blocks[EXT4_HTREE_LEVEL];
+ 
+ 	memset(frame_in, 0, EXT4_HTREE_LEVEL * sizeof(frame_in[0]));
+ 	frame->bh = ext4_read_dirblock(dir, 0, INDEX);
+@@ -854,6 +856,8 @@ dx_probe(struct ext4_filename *fname, struct inode *dir,
+ 	}
+ 
+ 	dxtrace(printk("Look up %x", hash));
++	level = 0;
++	blocks[0] = 0;
+ 	while (1) {
+ 		count = dx_get_count(entries);
+ 		if (!count || count > dx_get_limit(entries)) {
+@@ -882,15 +886,27 @@ dx_probe(struct ext4_filename *fname, struct inode *dir,
+ 			       dx_get_block(at)));
+ 		frame->entries = entries;
+ 		frame->at = at;
+-		if (!indirect--)
++
++		block = dx_get_block(at);
++		for (i = 0; i <= level; i++) {
++			if (blocks[i] == block) {
++				ext4_warning_inode(dir,
++					"dx entry: tree cycle block %u points back to block %u",
++					blocks[level], block);
++				goto fail;
++			}
++		}
++		if (++level > indirect)
+ 			return frame;
++		blocks[level] = block;
+ 		frame++;
+-		frame->bh = ext4_read_dirblock(dir, dx_get_block(at), INDEX);
++		frame->bh = ext4_read_dirblock(dir, block, INDEX);
+ 		if (IS_ERR(frame->bh)) {
+ 			ret_err = (struct dx_frame *) frame->bh;
+ 			frame->bh = NULL;
+ 			goto fail;
+ 		}
++
+ 		entries = ((struct dx_node *) frame->bh->b_data)->entries;
+ 
+ 		if (dx_get_limit(entries) != dx_node_limit(dir)) {
+@@ -1249,15 +1265,23 @@ static inline int search_dirblock(struct buffer_head *bh,
+  * Create map of hash values, offsets, and sizes, stored at end of block.
+  * Returns number of entries mapped.
+  */
+-static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de,
+-		       unsigned blocksize, struct dx_hash_info *hinfo,
++static int dx_make_map(struct inode *dir, struct buffer_head *bh,
++		       struct dx_hash_info *hinfo,
+ 		       struct dx_map_entry *map_tail)
+ {
+ 	int count = 0;
+-	char *base = (char *) de;
++	struct ext4_dir_entry_2 *de = (struct ext4_dir_entry_2 *)bh->b_data;
++	unsigned int buflen = bh->b_size;
++	char *base = bh->b_data;
+ 	struct dx_hash_info h = *hinfo;
+ 
+-	while ((char *) de < base + blocksize) {
++	if (ext4_has_metadata_csum(dir->i_sb))
++		buflen -= sizeof(struct ext4_dir_entry_tail);
++
++	while ((char *) de < base + buflen) {
++		if (ext4_check_dir_entry(dir, NULL, de, bh, base, buflen,
++					 ((char *)de) - base))
++			return -EFSCORRUPTED;
+ 		if (de->name_len && de->inode) {
+ 			if (ext4_hash_in_dirent(dir))
+ 				h.hash = EXT4_DIRENT_HASH(de);
+@@ -1270,8 +1294,7 @@ static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de,
+ 			count++;
+ 			cond_resched();
+ 		}
+-		/* XXX: do we need to check rec_len == 0 case? -Chris */
+-		de = ext4_next_entry(de, blocksize);
++		de = ext4_next_entry(de, dir->i_sb->s_blocksize);
+ 	}
+ 	return count;
+ }
+@@ -1943,8 +1966,11 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
+ 
+ 	/* create map in the end of data2 block */
+ 	map = (struct dx_map_entry *) (data2 + blocksize);
+-	count = dx_make_map(dir, (struct ext4_dir_entry_2 *) data1,
+-			     blocksize, hinfo, map);
++	count = dx_make_map(dir, *bh, hinfo, map);
++	if (count < 0) {
++		err = count;
++		goto journal_error;
++	}
+ 	map -= count;
+ 	dx_sort_map(map, count);
+ 	/* Ensure that neither split block is over half full */
+@@ -3455,6 +3481,9 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle,
+ 	struct buffer_head *bh;
+ 
+ 	if (!ext4_has_inline_data(inode)) {
++		struct ext4_dir_entry_2 *de;
++		unsigned int offset;
++
+ 		/* The first directory block must not be a hole, so
+ 		 * treat it as DIRENT_HTREE
+ 		 */
+@@ -3463,9 +3492,30 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle,
+ 			*retval = PTR_ERR(bh);
+ 			return NULL;
+ 		}
+-		*parent_de = ext4_next_entry(
+-					(struct ext4_dir_entry_2 *)bh->b_data,
+-					inode->i_sb->s_blocksize);
++
++		de = (struct ext4_dir_entry_2 *) bh->b_data;
++		if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data,
++					 bh->b_size, 0) ||
++		    le32_to_cpu(de->inode) != inode->i_ino ||
++		    strcmp(".", de->name)) {
++			EXT4_ERROR_INODE(inode, "directory missing '.'");
++			brelse(bh);
++			*retval = -EFSCORRUPTED;
++			return NULL;
++		}
++		offset = ext4_rec_len_from_disk(de->rec_len,
++						inode->i_sb->s_blocksize);
++		de = ext4_next_entry(de, inode->i_sb->s_blocksize);
++		if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data,
++					 bh->b_size, offset) ||
++		    le32_to_cpu(de->inode) == 0 || strcmp("..", de->name)) {
++			EXT4_ERROR_INODE(inode, "directory missing '..'");
++			brelse(bh);
++			*retval = -EFSCORRUPTED;
++			return NULL;
++		}
++		*parent_de = de;
++
+ 		return bh;
+ 	}
+ 
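
Annotation: the dx_probe() changes above defend against crafted directories whose htree index blocks form a loop: the block visited at each tree level is recorded, and a child block matching any ancestor is rejected before it can cause unbounded descent. A minimal sketch of the check, assuming a small fixed level count (HTREE_LEVELS stands in for EXT4_HTREE_LEVEL):

/* illustration only: not the kernel code */
#include <stdio.h>

#define HTREE_LEVELS 3

static int tree_cycle(const unsigned int *blocks, int level, unsigned int next)
{
	int i;

	for (i = 0; i <= level; i++)
		if (blocks[i] == next)
			return 1;	/* child points back at an ancestor */
	return 0;
}

int main(void)
{
	unsigned int blocks[HTREE_LEVELS] = { 0 };	/* root index lives in block 0 */
	int level = 0;

	blocks[++level] = 17;				/* descend into block 17 */
	printf("%d\n", tree_cycle(blocks, level, 42));	/* 0: fresh block, keep going */
	printf("%d\n", tree_cycle(blocks, level, 17));	/* 1: loop, bail out */
	return 0;
}
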
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 1466fbdbc8e34..a0c79304f92ff 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -1913,6 +1913,7 @@ static const struct mount_opts {
+ 	 MOPT_EXT4_ONLY | MOPT_CLEAR},
+ 	{Opt_warn_on_error, EXT4_MOUNT_WARN_ON_ERROR, MOPT_SET},
+ 	{Opt_nowarn_on_error, EXT4_MOUNT_WARN_ON_ERROR, MOPT_CLEAR},
++	{Opt_commit, 0, MOPT_NO_EXT2},
+ 	{Opt_nojournal_checksum, EXT4_MOUNT_JOURNAL_CHECKSUM,
+ 	 MOPT_EXT4_ONLY | MOPT_CLEAR},
+ 	{Opt_journal_checksum, EXT4_MOUNT_JOURNAL_CHECKSUM,
+@@ -2427,11 +2428,12 @@ static int ext4_parse_param(struct fs_context *fc, struct fs_parameter *param)
+ 		ctx->spec |= EXT4_SPEC_DUMMY_ENCRYPTION;
+ 		ctx->test_dummy_enc_arg = kmemdup_nul(param->string, param->size,
+ 						      GFP_KERNEL);
++		return 0;
+ #else
+ 		ext4_msg(NULL, KERN_WARNING,
+-			 "Test dummy encryption mount option ignored");
++			 "test_dummy_encryption option not supported");
++		return -EINVAL;
+ #endif
+-		return 0;
+ 	case Opt_dax:
+ 	case Opt_dax_type:
+ #ifdef CONFIG_FS_DAX
+@@ -2625,8 +2627,10 @@ parse_failed:
+ 	ret = ext4_apply_options(fc, sb);
+ 
+ out_free:
+-	kfree(s_ctx);
+-	kfree(fc);
++	if (fc) {
++		ext4_fc_free(fc);
++		kfree(fc);
++	}
+ 	kfree(s_mount_opts);
+ 	return ret;
+ }
+@@ -2786,12 +2790,44 @@ err_jquota_specified:
+ #endif
+ }
+ 
++static int ext4_check_test_dummy_encryption(const struct fs_context *fc,
++					    struct super_block *sb)
++{
++#ifdef CONFIG_FS_ENCRYPTION
++	const struct ext4_fs_context *ctx = fc->fs_private;
++	const struct ext4_sb_info *sbi = EXT4_SB(sb);
++
++	if (!(ctx->spec & EXT4_SPEC_DUMMY_ENCRYPTION))
++		return 0;
++
++	if (!ext4_has_feature_encrypt(sb)) {
++		ext4_msg(NULL, KERN_WARNING,
++			 "test_dummy_encryption requires encrypt feature");
++		return -EINVAL;
++	}
++	/*
++	 * This mount option is just for testing, and it's not worthwhile to
++	 * implement the extra complexity (e.g. RCU protection) that would be
++	 * needed to allow it to be set or changed during remount.  We do allow
++	 * it to be specified during remount, but only if there is no change.
++	 */
++	if (fc->purpose == FS_CONTEXT_FOR_RECONFIGURE &&
++	    !sbi->s_dummy_enc_policy.policy) {
++		ext4_msg(NULL, KERN_WARNING,
++			 "Can't set test_dummy_encryption on remount");
++		return -EINVAL;
++	}
++#endif /* CONFIG_FS_ENCRYPTION */
++	return 0;
++}
++
+ static int ext4_check_opt_consistency(struct fs_context *fc,
+ 				      struct super_block *sb)
+ {
+ 	struct ext4_fs_context *ctx = fc->fs_private;
+ 	struct ext4_sb_info *sbi = fc->s_fs_info;
+ 	int is_remount = fc->purpose == FS_CONTEXT_FOR_RECONFIGURE;
++	int err;
+ 
+ 	if ((ctx->opt_flags & MOPT_NO_EXT2) && IS_EXT2_SB(sb)) {
+ 		ext4_msg(NULL, KERN_ERR,
+@@ -2821,20 +2857,9 @@ static int ext4_check_opt_consistency(struct fs_context *fc,
+ 				 "for blocksize < PAGE_SIZE");
+ 	}
+ 
+-#ifdef CONFIG_FS_ENCRYPTION
+-	/*
+-	 * This mount option is just for testing, and it's not worthwhile to
+-	 * implement the extra complexity (e.g. RCU protection) that would be
+-	 * needed to allow it to be set or changed during remount.  We do allow
+-	 * it to be specified during remount, but only if there is no change.
+-	 */
+-	if ((ctx->spec & EXT4_SPEC_DUMMY_ENCRYPTION) &&
+-	    is_remount && !sbi->s_dummy_enc_policy.policy) {
+-		ext4_msg(NULL, KERN_WARNING,
+-			 "Can't set test_dummy_encryption on remount");
+-		return -1;
+-	}
+-#endif
++	err = ext4_check_test_dummy_encryption(fc, sb);
++	if (err)
++		return err;
+ 
+ 	if ((ctx->spec & EXT4_SPEC_DATAJ) && is_remount) {
+ 		if (!sbi->s_journal) {
+@@ -4409,7 +4434,8 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ 	int silent = fc->sb_flags & SB_SILENT;
+ 
+ 	/* Set defaults for the variables that will be set during parsing */
+-	ctx->journal_ioprio = DEFAULT_JOURNAL_IOPRIO;
++	if (!(ctx->spec & EXT4_SPEC_JOURNAL_IOPRIO))
++		ctx->journal_ioprio = DEFAULT_JOURNAL_IOPRIO;
+ 
+ 	sbi->s_inode_readahead_blks = EXT4_DEF_INODE_READAHEAD_BLKS;
+ 	sbi->s_sectors_written_start =
+@@ -4886,7 +4912,7 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
+ 					sbi->s_inodes_per_block;
+ 	sbi->s_desc_per_block = blocksize / EXT4_DESC_SIZE(sb);
+ 	sbi->s_sbh = bh;
+-	sbi->s_mount_state = le16_to_cpu(es->s_state);
++	sbi->s_mount_state = le16_to_cpu(es->s_state) & ~EXT4_FC_REPLAY;
+ 	sbi->s_addr_per_block_bits = ilog2(EXT4_ADDR_PER_BLOCK(sb));
+ 	sbi->s_desc_per_block_bits = ilog2(EXT4_DESC_PER_BLOCK(sb));
+ 
+@@ -5279,12 +5305,6 @@ no_journal:
+ 		goto failed_mount_wq;
+ 	}
+ 
+-	if (DUMMY_ENCRYPTION_ENABLED(sbi) && !sb_rdonly(sb) &&
+-	    !ext4_has_feature_encrypt(sb)) {
+-		ext4_set_feature_encrypt(sb);
+-		ext4_commit_super(sb);
+-	}
+-
+ 	/*
+ 	 * Get the # of file system overhead blocks from the
+ 	 * superblock if present.
+@@ -6276,7 +6296,6 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ 	char *to_free[EXT4_MAXQUOTAS];
+ #endif
+ 
+-	ctx->journal_ioprio = DEFAULT_JOURNAL_IOPRIO;
+ 
+ 	/* Store the original options */
+ 	old_sb_flags = sb->s_flags;
+@@ -6302,9 +6321,14 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ 		} else
+ 			old_opts.s_qf_names[i] = NULL;
+ #endif
+-	if (sbi->s_journal && sbi->s_journal->j_task->io_context)
+-		ctx->journal_ioprio =
+-			sbi->s_journal->j_task->io_context->ioprio;
++	if (!(ctx->spec & EXT4_SPEC_JOURNAL_IOPRIO)) {
++		if (sbi->s_journal && sbi->s_journal->j_task->io_context)
++			ctx->journal_ioprio =
++				sbi->s_journal->j_task->io_context->ioprio;
++		else
++			ctx->journal_ioprio = DEFAULT_JOURNAL_IOPRIO;
++
++	}
+ 
+ 	ext4_apply_options(fc, sb);
+ 
+@@ -6445,7 +6469,8 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ 				if (err)
+ 					goto restore_opts;
+ 			}
+-			sbi->s_mount_state = le16_to_cpu(es->s_state);
++			sbi->s_mount_state = (le16_to_cpu(es->s_state) &
++					      ~EXT4_FC_REPLAY);
+ 
+ 			err = ext4_setup_super(sb, es, 0);
+ 			if (err)
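
Annotation: both super.c hunks above gate the journal_ioprio default behind the EXT4_SPEC_JOURNAL_IOPRIO flag so a value parsed from the mount options is no longer clobbered at mount or remount time. A sketch of the apply-default-only-if-unset pattern, with simplified stand-in names:

/* illustration only: flag and field names are invented stand-ins */
#include <stdio.h>

#define SPEC_JOURNAL_IOPRIO 0x01	/* "user passed the option" marker */
#define DEFAULT_IOPRIO 4

struct ctx { unsigned int spec; int journal_ioprio; };

static void set_default_ioprio(struct ctx *c)
{
	if (!(c->spec & SPEC_JOURNAL_IOPRIO))
		c->journal_ioprio = DEFAULT_IOPRIO;
}

int main(void)
{
	struct ctx explicit_opt = { SPEC_JOURNAL_IOPRIO, 7 };
	struct ctx no_opt = { 0, 0 };

	set_default_ioprio(&explicit_opt);
	set_default_ioprio(&no_opt);
	printf("%d %d\n", explicit_opt.journal_ioprio, no_opt.journal_ioprio);	/* 7 4 */
	return 0;
}
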
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index a0e51937d92eb..d5bd7932fb642 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -82,7 +82,8 @@ int f2fs_init_casefolded_name(const struct inode *dir,
+ #if IS_ENABLED(CONFIG_UNICODE)
+ 	struct super_block *sb = dir->i_sb;
+ 
+-	if (IS_CASEFOLDED(dir)) {
++	if (IS_CASEFOLDED(dir) &&
++	    !is_dot_dotdot(fname->usr_fname->name, fname->usr_fname->len)) {
+ 		fname->cf_name.name = f2fs_kmem_cache_alloc(f2fs_cf_name_slab,
+ 					GFP_NOFS, false, F2FS_SB(sb));
+ 		if (!fname->cf_name.name)
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 8c570de21ed5a..6ec8c6d4711f4 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -508,11 +508,11 @@ struct f2fs_filename {
+ #if IS_ENABLED(CONFIG_UNICODE)
+ 	/*
+ 	 * For casefolded directories: the casefolded name, but it's left NULL
+-	 * if the original name is not valid Unicode, if the directory is both
+-	 * casefolded and encrypted and its encryption key is unavailable, or if
+-	 * the filesystem is doing an internal operation where usr_fname is also
+-	 * NULL.  In all these cases we fall back to treating the name as an
+-	 * opaque byte sequence.
++	 * if the original name is not valid Unicode, if the original name is
++	 * "." or "..", if the directory is both casefolded and encrypted and
++	 * its encryption key is unavailable, or if the filesystem is doing an
++	 * internal operation where usr_fname is also NULL.  In all these cases
++	 * we fall back to treating the name as an opaque byte sequence.
+ 	 */
+ 	struct fscrypt_str cf_name;
+ #endif
+@@ -1117,8 +1117,8 @@ enum count_type {
+  */
+ #define PAGE_TYPE_OF_BIO(type)	((type) > META ? META : (type))
+ enum page_type {
+-	DATA,
+-	NODE,
++	DATA = 0,
++	NODE = 1,	/* should not change this */
+ 	META,
+ 	NR_PAGE_TYPE,
+ 	META_FLUSH,
+@@ -2605,11 +2605,17 @@ static inline void dec_valid_node_count(struct f2fs_sb_info *sbi,
+ {
+ 	spin_lock(&sbi->stat_lock);
+ 
+-	f2fs_bug_on(sbi, !sbi->total_valid_block_count);
+-	f2fs_bug_on(sbi, !sbi->total_valid_node_count);
++	if (unlikely(!sbi->total_valid_block_count ||
++			!sbi->total_valid_node_count)) {
++		f2fs_warn(sbi, "dec_valid_node_count: inconsistent block counts, total_valid_block:%u, total_valid_node:%u",
++			  sbi->total_valid_block_count,
++			  sbi->total_valid_node_count);
++		set_sbi_flag(sbi, SBI_NEED_FSCK);
++	} else {
++		sbi->total_valid_block_count--;
++		sbi->total_valid_node_count--;
++	}
+ 
+-	sbi->total_valid_node_count--;
+-	sbi->total_valid_block_count--;
+ 	if (sbi->reserved_blocks &&
+ 		sbi->current_reserved_blocks < sbi->reserved_blocks)
+ 		sbi->current_reserved_blocks++;
+@@ -4046,6 +4052,7 @@ extern struct kmem_cache *f2fs_inode_entry_slab;
+  * inline.c
+  */
+ bool f2fs_may_inline_data(struct inode *inode);
++bool f2fs_sanity_check_inline_data(struct inode *inode);
+ bool f2fs_may_inline_dentry(struct inode *inode);
+ void f2fs_do_read_inline_data(struct page *page, struct page *ipage);
+ void f2fs_truncate_inline_inode(struct inode *inode,
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 5b89af0f27f05..176e97b985e61 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1437,11 +1437,19 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start,
+ 			ret = -ENOSPC;
+ 			break;
+ 		}
+-		if (dn->data_blkaddr != NEW_ADDR) {
+-			f2fs_invalidate_blocks(sbi, dn->data_blkaddr);
+-			dn->data_blkaddr = NEW_ADDR;
+-			f2fs_set_data_blkaddr(dn);
++
++		if (dn->data_blkaddr == NEW_ADDR)
++			continue;
++
++		if (!f2fs_is_valid_blkaddr(sbi, dn->data_blkaddr,
++					DATA_GENERIC_ENHANCE)) {
++			ret = -EFSCORRUPTED;
++			break;
+ 		}
++
++		f2fs_invalidate_blocks(sbi, dn->data_blkaddr);
++		dn->data_blkaddr = NEW_ADDR;
++		f2fs_set_data_blkaddr(dn);
+ 	}
+ 
+ 	f2fs_update_extent_cache_range(dn, start, 0, index - start);
+@@ -1766,6 +1774,10 @@ static long f2fs_fallocate(struct file *file, int mode,
+ 
+ 	inode_lock(inode);
+ 
++	ret = file_modified(file);
++	if (ret)
++		goto out;
++
+ 	if (mode & FALLOC_FL_PUNCH_HOLE) {
+ 		if (offset >= inode->i_size)
+ 			goto out;
+diff --git a/fs/f2fs/hash.c b/fs/f2fs/hash.c
+index 3cb1e7a24740f..049ce50cec9b0 100644
+--- a/fs/f2fs/hash.c
++++ b/fs/f2fs/hash.c
+@@ -91,7 +91,7 @@ static u32 TEA_hash_name(const u8 *p, size_t len)
+ /*
+  * Compute @fname->hash.  For all directories, @fname->disk_name must be set.
+  * For casefolded directories, @fname->usr_fname must be set, and also
+- * @fname->cf_name if the filename is valid Unicode.
++ * @fname->cf_name if the filename is valid Unicode and is not "." or "..".
+  */
+ void f2fs_hash_filename(const struct inode *dir, struct f2fs_filename *fname)
+ {
+@@ -110,10 +110,11 @@ void f2fs_hash_filename(const struct inode *dir, struct f2fs_filename *fname)
+ 		/*
+ 		 * If the casefolded name is provided, hash it instead of the
+ 		 * on-disk name.  If the casefolded name is *not* provided, that
+-		 * should only be because the name wasn't valid Unicode, so fall
+-		 * back to treating the name as an opaque byte sequence.  Note
+-		 * that to handle encrypted directories, the fallback must use
+-		 * usr_fname (plaintext) rather than disk_name (ciphertext).
++		 * should only be because the name wasn't valid Unicode or was
++		 * "." or "..", so fall back to treating the name as an opaque
++		 * byte sequence.  Note that to handle encrypted directories,
++		 * the fallback must use usr_fname (plaintext) rather than
++		 * disk_name (ciphertext).
+ 		 */
+ 		WARN_ON_ONCE(!fname->usr_fname->name);
+ 		if (fname->cf_name.name) {
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index a578bf83b803b..bf46a7dfbea2f 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -14,21 +14,40 @@
+ #include "node.h"
+ #include <trace/events/f2fs.h>
+ 
+-bool f2fs_may_inline_data(struct inode *inode)
++static bool support_inline_data(struct inode *inode)
+ {
+ 	if (f2fs_is_atomic_file(inode))
+ 		return false;
+-
+ 	if (!S_ISREG(inode->i_mode) && !S_ISLNK(inode->i_mode))
+ 		return false;
+-
+ 	if (i_size_read(inode) > MAX_INLINE_DATA(inode))
+ 		return false;
++	return true;
++}
+ 
+-	if (f2fs_post_read_required(inode))
++bool f2fs_may_inline_data(struct inode *inode)
++{
++	if (!support_inline_data(inode))
+ 		return false;
+ 
+-	return true;
++	return !f2fs_post_read_required(inode);
++}
++
++bool f2fs_sanity_check_inline_data(struct inode *inode)
++{
++	if (!f2fs_has_inline_data(inode))
++		return false;
++
++	if (!support_inline_data(inode))
++		return true;
++
++	/*
++	 * used by sanity_check_inode(), when disk layout fields have not
++	 * been synchronized to the in-memory fields yet.
++	 */
++	return (S_ISREG(inode->i_mode) &&
++		(file_is_encrypt(inode) || file_is_verity(inode) ||
++		(F2FS_I(inode)->i_flags & F2FS_COMPR_FL)));
+ }
+ 
+ bool f2fs_may_inline_dentry(struct inode *inode)
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 83639238a1fe9..e9818723103c6 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -276,8 +276,7 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page)
+ 		}
+ 	}
+ 
+-	if (f2fs_has_inline_data(inode) &&
+-			(!S_ISREG(inode->i_mode) && !S_ISLNK(inode->i_mode))) {
++	if (f2fs_sanity_check_inline_data(inode)) {
+ 		set_sbi_flag(sbi, SBI_NEED_FSCK);
+ 		f2fs_warn(sbi, "%s: inode (ino=%lx, mode=%u) should not have inline_data, run fsck to fix",
+ 			  __func__, inode->i_ino, inode->i_mode);
+@@ -796,8 +795,22 @@ retry:
+ 		f2fs_lock_op(sbi);
+ 		err = f2fs_remove_inode_page(inode);
+ 		f2fs_unlock_op(sbi);
+-		if (err == -ENOENT)
++		if (err == -ENOENT) {
+ 			err = 0;
++
++			/*
++			 * in fuzzed image, another node may has the same
++			 * in a fuzzed image, another node may have the same
++			 * block address as the inode's; if it was truncated
++			 * previously, truncation of the inode node will fail.
++			if (is_inode_flag_set(inode, FI_DIRTY_INODE)) {
++				f2fs_warn(F2FS_I_SB(inode),
++					"f2fs_evict_inode: inconsistent node id, ino:%lu",
++					inode->i_ino);
++				f2fs_inode_synced(inode);
++				set_sbi_flag(sbi, SBI_NEED_FSCK);
++			}
++		}
+ 	}
+ 
+ 	/* give more chances, if ENOMEM case */
+diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c
+index 5ed79b29999fc..fffafd2aa4387 100644
+--- a/fs/f2fs/namei.c
++++ b/fs/f2fs/namei.c
+@@ -461,6 +461,13 @@ static int __recover_dot_dentries(struct inode *dir, nid_t pino)
+ 		return 0;
+ 	}
+ 
++	if (!S_ISDIR(dir->i_mode)) {
++		f2fs_err(sbi, "inconsistent inode status, skip recovering inline_dots inode (ino:%lu, i_mode:%u, pino:%u)",
++			  dir->i_ino, dir->i_mode, pino);
++		set_sbi_flag(sbi, SBI_NEED_FSCK);
++		return -ENOTDIR;
++	}
++
+ 	err = f2fs_dquot_initialize(dir);
+ 	if (err)
+ 		return err;
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index bd9731cdec565..aa0162664a1ec 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -355,16 +355,19 @@ void f2fs_drop_inmem_page(struct inode *inode, struct page *page)
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct list_head *head = &fi->inmem_pages;
+ 	struct inmem_pages *cur = NULL;
++	struct inmem_pages *tmp;
+ 
+ 	f2fs_bug_on(sbi, !page_private_atomic(page));
+ 
+ 	mutex_lock(&fi->inmem_lock);
+-	list_for_each_entry(cur, head, list) {
+-		if (cur->page == page)
++	list_for_each_entry(tmp, head, list) {
++		if (tmp->page == page) {
++			cur = tmp;
+ 			break;
++		}
+ 	}
+ 
+-	f2fs_bug_on(sbi, list_empty(head) || cur->page != page);
++	f2fs_bug_on(sbi, !cur);
+ 	list_del(&cur->list);
+ 	mutex_unlock(&fi->inmem_lock);
+ 
+@@ -4457,7 +4460,7 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ 	unsigned int i, start, end;
+ 	unsigned int readed, start_blk = 0;
+ 	int err = 0;
+-	block_t total_node_blocks = 0;
++	block_t sit_valid_blocks[2] = {0, 0};
+ 
+ 	do {
+ 		readed = f2fs_ra_meta_pages(sbi, start_blk, BIO_MAX_VECS,
+@@ -4482,8 +4485,8 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ 			if (err)
+ 				return err;
+ 			seg_info_from_raw_sit(se, &sit);
+-			if (IS_NODESEG(se->type))
+-				total_node_blocks += se->valid_blocks;
++
++			sit_valid_blocks[SE_PAGETYPE(se)] += se->valid_blocks;
+ 
+ 			if (f2fs_block_unit_discard(sbi)) {
+ 				/* build discard map only one time */
+@@ -4523,15 +4526,15 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ 		sit = sit_in_journal(journal, i);
+ 
+ 		old_valid_blocks = se->valid_blocks;
+-		if (IS_NODESEG(se->type))
+-			total_node_blocks -= old_valid_blocks;
++
++		sit_valid_blocks[SE_PAGETYPE(se)] -= old_valid_blocks;
+ 
+ 		err = check_block_count(sbi, start, &sit);
+ 		if (err)
+ 			break;
+ 		seg_info_from_raw_sit(se, &sit);
+-		if (IS_NODESEG(se->type))
+-			total_node_blocks += se->valid_blocks;
++
++		sit_valid_blocks[SE_PAGETYPE(se)] += se->valid_blocks;
+ 
+ 		if (f2fs_block_unit_discard(sbi)) {
+ 			if (is_set_ckpt_flags(sbi, CP_TRIMMED_FLAG)) {
+@@ -4553,13 +4556,24 @@ static int build_sit_entries(struct f2fs_sb_info *sbi)
+ 	}
+ 	up_read(&curseg->journal_rwsem);
+ 
+-	if (!err && total_node_blocks != valid_node_count(sbi)) {
++	if (err)
++		return err;
++
++	if (sit_valid_blocks[NODE] != valid_node_count(sbi)) {
+ 		f2fs_err(sbi, "SIT is corrupted node# %u vs %u",
+-			 total_node_blocks, valid_node_count(sbi));
+-		err = -EFSCORRUPTED;
++			 sit_valid_blocks[NODE], valid_node_count(sbi));
++		return -EFSCORRUPTED;
+ 	}
+ 
+-	return err;
++	if (sit_valid_blocks[DATA] + sit_valid_blocks[NODE] >
++				valid_user_blocks(sbi)) {
++		f2fs_err(sbi, "SIT is corrupted data# %u %u vs %u",
++			 sit_valid_blocks[DATA], sit_valid_blocks[NODE],
++			 valid_user_blocks(sbi));
++		return -EFSCORRUPTED;
++	}
++
++	return 0;
+ }
+ 
+ static void init_free_segmap(struct f2fs_sb_info *sbi)
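
Annotation: the f2fs_drop_inmem_page() hunk above fixes a classic list-iteration pitfall: after a full list_for_each_entry() traversal the cursor is never NULL, it points at the container of the list head, so the old "cur->page != page" check could dereference an invalid entry on a miss. A standalone sketch of the separate-result-pointer idiom on a plain singly linked list (types invented):

/* illustration only: not the kernel list API */
#include <stdio.h>

struct node { int key; struct node *next; };

static struct node *find(struct node *head, int key)
{
	struct node *cur = NULL;	/* stays NULL on a miss */
	struct node *tmp;

	for (tmp = head; tmp; tmp = tmp->next) {
		if (tmp->key == key) {
			cur = tmp;
			break;
		}
	}
	return cur;	/* callers can safely assert on NULL */
}

int main(void)
{
	struct node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };

	printf("%s\n", find(&a, 2) ? "found" : "missing");	/* found */
	printf("%s\n", find(&a, 9) ? "found" : "missing");	/* missing */
	return 0;
}
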
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 5c94caf0c0a1d..1fa26a9603cb8 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -24,6 +24,7 @@
+ 
+ #define IS_DATASEG(t)	((t) <= CURSEG_COLD_DATA)
+ #define IS_NODESEG(t)	((t) >= CURSEG_HOT_NODE && (t) <= CURSEG_COLD_NODE)
++#define SE_PAGETYPE(se)	((IS_NODESEG((se)->type) ? NODE : DATA))
+ 
+ static inline void sanity_check_seg_type(struct f2fs_sb_info *sbi,
+ 						unsigned short seg_type)
+@@ -572,11 +573,10 @@ static inline int reserved_sections(struct f2fs_sb_info *sbi)
+ 	return GET_SEC_FROM_SEG(sbi, reserved_segments(sbi));
+ }
+ 
+-static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi)
++static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
++			unsigned int node_blocks, unsigned int dent_blocks)
+ {
+-	unsigned int node_blocks = get_pages(sbi, F2FS_DIRTY_NODES) +
+-					get_pages(sbi, F2FS_DIRTY_DENTS);
+-	unsigned int dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS);
++
+ 	unsigned int segno, left_blocks;
+ 	int i;
+ 
+@@ -602,19 +602,28 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi)
+ static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi,
+ 					int freed, int needed)
+ {
+-	int node_secs = get_blocktype_secs(sbi, F2FS_DIRTY_NODES);
+-	int dent_secs = get_blocktype_secs(sbi, F2FS_DIRTY_DENTS);
+-	int imeta_secs = get_blocktype_secs(sbi, F2FS_DIRTY_IMETA);
++	unsigned int total_node_blocks = get_pages(sbi, F2FS_DIRTY_NODES) +
++					get_pages(sbi, F2FS_DIRTY_DENTS) +
++					get_pages(sbi, F2FS_DIRTY_IMETA);
++	unsigned int total_dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS);
++	unsigned int node_secs = total_node_blocks / BLKS_PER_SEC(sbi);
++	unsigned int dent_secs = total_dent_blocks / BLKS_PER_SEC(sbi);
++	unsigned int node_blocks = total_node_blocks % BLKS_PER_SEC(sbi);
++	unsigned int dent_blocks = total_dent_blocks % BLKS_PER_SEC(sbi);
++	unsigned int free, need_lower, need_upper;
+ 
+ 	if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
+ 		return false;
+ 
+-	if (free_sections(sbi) + freed == reserved_sections(sbi) + needed &&
+-			has_curseg_enough_space(sbi))
++	free = free_sections(sbi) + freed;
++	need_lower = node_secs + dent_secs + reserved_sections(sbi) + needed;
++	need_upper = need_lower + (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0);
++
++	if (free > need_upper)
+ 		return false;
+-	return (free_sections(sbi) + freed) <=
+-		(node_secs + 2 * dent_secs + imeta_secs +
+-		reserved_sections(sbi) + needed);
++	else if (free <= need_lower)
++		return true;
++	return !has_curseg_enough_space(sbi, node_blocks, dent_blocks);
+ }
+ 
+ static inline bool f2fs_is_checkpoint_ready(struct f2fs_sb_info *sbi)
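
Annotation: the rewritten has_not_enough_free_secs() above splits dirty node and dentry blocks into whole sections plus a partial remainder, then compares free sections against a lower and an upper bound, falling through to the current-segment probe only in the ambiguous middle. A worked example with invented numbers (BLKS_PER_SEC fixed at 512 here):

/* illustration only: all values are made up */
#include <stdio.h>

#define BLKS_PER_SEC 512

int main(void)
{
	unsigned int total_node_blocks = 1300, total_dent_blocks = 200;
	unsigned int reserved = 10, needed = 0, free_secs = 13;

	unsigned int node_secs = total_node_blocks / BLKS_PER_SEC;	/* 2 */
	unsigned int dent_secs = total_dent_blocks / BLKS_PER_SEC;	/* 0 */
	unsigned int node_blocks = total_node_blocks % BLKS_PER_SEC;	/* 276 */
	unsigned int dent_blocks = total_dent_blocks % BLKS_PER_SEC;	/* 200 */

	unsigned int need_lower = node_secs + dent_secs + reserved + needed;	/* 12 */
	unsigned int need_upper = need_lower + (node_blocks ? 1 : 0)
					     + (dent_blocks ? 1 : 0);		/* 14 */

	if (free_secs > need_upper)
		puts("enough space");
	else if (free_secs <= need_lower)
		puts("not enough space");
	else
		puts("depends on room left in the current segments");	/* this case */
	return 0;
}
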
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 4368f90571bd6..1b59f95606c72 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -2684,7 +2684,8 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ 		if (!sb_has_quota_active(sb, cnt))
+ 			continue;
+ 
+-		inode_lock(dqopt->files[cnt]);
++		if (!f2fs_sb_has_quota_ino(sbi))
++			inode_lock(dqopt->files[cnt]);
+ 
+ 		/*
+ 		 * do_quotactl
+@@ -2703,7 +2704,8 @@ int f2fs_quota_sync(struct super_block *sb, int type)
+ 		f2fs_up_read(&sbi->quota_sem);
+ 		f2fs_unlock_op(sbi);
+ 
+-		inode_unlock(dqopt->files[cnt]);
++		if (!f2fs_sb_has_quota_ino(sbi))
++			inode_unlock(dqopt->files[cnt]);
+ 
+ 		if (ret)
+ 			break;
+diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c
+index 978ac6751aeb7..1db348f8f887a 100644
+--- a/fs/fat/fatent.c
++++ b/fs/fat/fatent.c
+@@ -94,7 +94,8 @@ static int fat12_ent_bread(struct super_block *sb, struct fat_entry *fatent,
+ err_brelse:
+ 	brelse(bhs[0]);
+ err:
+-	fat_msg(sb, KERN_ERR, "FAT read failed (blocknr %llu)", (llu)blocknr);
++	fat_msg_ratelimit(sb, KERN_ERR, "FAT read failed (blocknr %llu)",
++			  (llu)blocknr);
+ 	return -EIO;
+ }
+ 
+@@ -107,8 +108,8 @@ static int fat_ent_bread(struct super_block *sb, struct fat_entry *fatent,
+ 	fatent->fat_inode = MSDOS_SB(sb)->fat_inode;
+ 	fatent->bhs[0] = sb_bread(sb, blocknr);
+ 	if (!fatent->bhs[0]) {
+-		fat_msg(sb, KERN_ERR, "FAT read failed (blocknr %llu)",
+-		       (llu)blocknr);
++		fat_msg_ratelimit(sb, KERN_ERR, "FAT read failed (blocknr %llu)",
++				  (llu)blocknr);
+ 		return -EIO;
+ 	}
+ 	fatent->nr_bhs = 1;
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index 1fae0196292a1..a1074a26e784d 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -1779,11 +1779,12 @@ static long writeback_sb_inodes(struct super_block *sb,
+ 	};
+ 	unsigned long start_time = jiffies;
+ 	long write_chunk;
+-	long wrote = 0;  /* count both pages and inodes */
++	long total_wrote = 0;  /* count both pages and inodes */
+ 
+ 	while (!list_empty(&wb->b_io)) {
+ 		struct inode *inode = wb_inode(wb->b_io.prev);
+ 		struct bdi_writeback *tmp_wb;
++		long wrote;
+ 
+ 		if (inode->i_sb != sb) {
+ 			if (work->sb) {
+@@ -1859,7 +1860,9 @@ static long writeback_sb_inodes(struct super_block *sb,
+ 
+ 		wbc_detach_inode(&wbc);
+ 		work->nr_pages -= write_chunk - wbc.nr_to_write;
+-		wrote += write_chunk - wbc.nr_to_write;
++		wrote = write_chunk - wbc.nr_to_write - wbc.pages_skipped;
++		wrote = wrote < 0 ? 0 : wrote;
++		total_wrote += wrote;
+ 
+ 		if (need_resched()) {
+ 			/*
+@@ -1881,7 +1884,7 @@ static long writeback_sb_inodes(struct super_block *sb,
+ 		tmp_wb = inode_to_wb_and_lock_list(inode);
+ 		spin_lock(&inode->i_lock);
+ 		if (!(inode->i_state & I_DIRTY_ALL))
+-			wrote++;
++			total_wrote++;
+ 		requeue_inode(inode, tmp_wb, &wbc);
+ 		inode_sync_complete(inode);
+ 		spin_unlock(&inode->i_lock);
+@@ -1895,14 +1898,14 @@ static long writeback_sb_inodes(struct super_block *sb,
+ 		 * bail out to wb_writeback() often enough to check
+ 		 * background threshold and other termination conditions.
+ 		 */
+-		if (wrote) {
++		if (total_wrote) {
+ 			if (time_is_before_jiffies(start_time + HZ / 10UL))
+ 				break;
+ 			if (work->nr_pages <= 0)
+ 				break;
+ 		}
+ 	}
+-	return wrote;
++	return total_wrote;
+ }
+ 
+ static long __writeback_inodes_wb(struct bdi_writeback *wb,
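
Annotation: writeback_sb_inodes() above now subtracts wbc.pages_skipped from each inode's progress and clamps the result at zero, so an inode whose pages were all skipped no longer inflates the total and the HZ/10 bail-out still fires. A tiny numeric illustration (values invented):

/* illustration only */
#include <stdio.h>

int main(void)
{
	long write_chunk = 1024, nr_to_write = 1000, pages_skipped = 30;
	long wrote = write_chunk - nr_to_write - pages_skipped;	/* 24 - 30 = -6 */

	wrote = wrote < 0 ? 0 : wrote;
	printf("progress counted: %ld\n", wrote);	/* 0, not -6 */
	return 0;
}
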
+diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
+index be0997e24d60b..dc77080a82bbf 100644
+--- a/fs/gfs2/quota.c
++++ b/fs/gfs2/quota.c
+@@ -531,34 +531,42 @@ static void qdsb_put(struct gfs2_quota_data *qd)
+  */
+ int gfs2_qa_get(struct gfs2_inode *ip)
+ {
+-	int error = 0;
+ 	struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
++	struct inode *inode = &ip->i_inode;
+ 
+ 	if (sdp->sd_args.ar_quota == GFS2_QUOTA_OFF)
+ 		return 0;
+ 
+-	down_write(&ip->i_rw_mutex);
++	spin_lock(&inode->i_lock);
+ 	if (ip->i_qadata == NULL) {
+-		ip->i_qadata = kmem_cache_zalloc(gfs2_qadata_cachep, GFP_NOFS);
+-		if (!ip->i_qadata) {
+-			error = -ENOMEM;
+-			goto out;
+-		}
++		struct gfs2_qadata *tmp;
++
++		spin_unlock(&inode->i_lock);
++		tmp = kmem_cache_zalloc(gfs2_qadata_cachep, GFP_NOFS);
++		if (!tmp)
++			return -ENOMEM;
++
++		spin_lock(&inode->i_lock);
++		if (ip->i_qadata == NULL)
++			ip->i_qadata = tmp;
++		else
++			kmem_cache_free(gfs2_qadata_cachep, tmp);
+ 	}
+ 	ip->i_qadata->qa_ref++;
+-out:
+-	up_write(&ip->i_rw_mutex);
+-	return error;
++	spin_unlock(&inode->i_lock);
++	return 0;
+ }
+ 
+ void gfs2_qa_put(struct gfs2_inode *ip)
+ {
+-	down_write(&ip->i_rw_mutex);
++	struct inode *inode = &ip->i_inode;
++
++	spin_lock(&inode->i_lock);
+ 	if (ip->i_qadata && --ip->i_qadata->qa_ref == 0) {
+ 		kmem_cache_free(gfs2_qadata_cachep, ip->i_qadata);
+ 		ip->i_qadata = NULL;
+ 	}
+-	up_write(&ip->i_rw_mutex);
++	spin_unlock(&inode->i_lock);
+ }
+ 
+ int gfs2_quota_hold(struct gfs2_inode *ip, kuid_t uid, kgid_t gid)
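
Annotation: gfs2_qa_get() above adopts the allocate-outside-the-lock pattern: drop the spinlock before the sleeping allocation, retake it, and free the buffer if a racer installed one in the meantime. A userspace sketch of the same pattern using pthreads (deliberately not the kernel API); build with -lpthread:

/* illustration only */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_spinlock_t lock;
static void *qadata;	/* lazily allocated, shared state */

static int qa_get(void)
{
	pthread_spin_lock(&lock);
	if (!qadata) {
		void *tmp;

		pthread_spin_unlock(&lock);
		tmp = calloc(1, 64);	/* may block or fail: do it unlocked */
		if (!tmp)
			return -1;
		pthread_spin_lock(&lock);
		if (!qadata)
			qadata = tmp;	/* we won the race */
		else
			free(tmp);	/* a racer installed one; discard ours */
	}
	/* the real code takes a reference here */
	pthread_spin_unlock(&lock);
	return 0;
}

int main(void)
{
	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	printf("%d %d\n", qa_get(), qa_get());	/* 0 0; second call reuses */
	return 0;
}
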
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index dd3a088db11d1..591599829e2a6 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -1048,12 +1048,12 @@ static int hugetlbfs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 		if (sbinfo->spool) {
+ 			long free_pages;
+ 
+-			spin_lock(&sbinfo->spool->lock);
++			spin_lock_irq(&sbinfo->spool->lock);
+ 			buf->f_blocks = sbinfo->spool->max_hpages;
+ 			free_pages = sbinfo->spool->max_hpages
+ 				- sbinfo->spool->used_hpages;
+ 			buf->f_bavail = buf->f_bfree = free_pages;
+-			spin_unlock(&sbinfo->spool->lock);
++			spin_unlock_irq(&sbinfo->spool->lock);
+ 			buf->f_files = sbinfo->max_inodes;
+ 			buf->f_ffree = sbinfo->free_inodes;
+ 		}
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index e0823f58f7959..9e247335e70d5 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -5981,6 +5981,7 @@ static void io_poll_cancel_req(struct io_kiocb *req)
+ 
+ #define wqe_to_req(wait)	((void *)((unsigned long) (wait)->private & ~1))
+ #define wqe_is_double(wait)	((unsigned long) (wait)->private & 1)
++#define IO_ASYNC_POLL_COMMON	(EPOLLONESHOT | POLLPRI)
+ 
+ static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ 			void *key)
+@@ -6015,7 +6016,7 @@ static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ 	}
+ 
+ 	/* for instances that support it check for an event match first */
+-	if (mask && !(mask & poll->events))
++	if (mask && !(mask & (poll->events & ~IO_ASYNC_POLL_COMMON)))
+ 		return 0;
+ 
+ 	if (io_poll_get_ownership(req)) {
+@@ -6171,7 +6172,7 @@ static int io_arm_poll_handler(struct io_kiocb *req, unsigned issue_flags)
+ 	struct io_ring_ctx *ctx = req->ctx;
+ 	struct async_poll *apoll;
+ 	struct io_poll_table ipt;
+-	__poll_t mask = EPOLLONESHOT | POLLERR | POLLPRI;
++	__poll_t mask = IO_ASYNC_POLL_COMMON | POLLERR;
+ 	int ret;
+ 
+ 	if (!def->pollin && !def->pollout)
+@@ -7327,6 +7328,8 @@ fail:
+ 		 * wait for request slots on the block side.
+ 		 */
+ 		if (!needs_poll) {
++			if (!(req->ctx->flags & IORING_SETUP_IOPOLL))
++				break;
+ 			cond_resched();
+ 			continue;
+ 		}
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index 8ce8720093b9a..358ee1fb6f0db 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -531,7 +531,8 @@ iomap_write_failed(struct inode *inode, loff_t pos, unsigned len)
+ 	 * write started inside the existing inode size.
+ 	 */
+ 	if (pos + len > i_size)
+-		truncate_pagecache_range(inode, max(pos, i_size), pos + len);
++		truncate_pagecache_range(inode, max(pos, i_size),
++					 pos + len - 1);
+ }
+ 
+ static int iomap_read_folio_sync(loff_t block_start, struct folio *folio,
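
Annotation: the iomap_write_failed() fix above reflects that truncate_pagecache_range() takes an inclusive end offset, so undoing a failed write of len bytes at pos must stop at pos + len - 1; pos + len would invalidate one byte of the following, unrelated range. A trivial illustration:

/* illustration only */
#include <stdio.h>

int main(void)
{
	unsigned long pos = 4096, len = 4096;

	/* inclusive-end convention: [start, end] covers end - start + 1 bytes */
	printf("invalidate [%lu, %lu]\n", pos, pos + len - 1);	/* 4096..8191 */
	return 0;
}
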
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index d8502f4989d9d..e75f31b81d634 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -385,7 +385,8 @@ int dbFree(struct inode *ip, s64 blkno, s64 nblocks)
+ 	}
+ 
+ 	/* write the last buffer. */
+-	write_metapage(mp);
++	if (mp)
++		write_metapage(mp);
+ 
+ 	IREAD_UNLOCK(ipbmap);
+ 
+diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c
+index 208d2cff7bd37..bc6050b67256d 100644
+--- a/fs/ksmbd/connection.c
++++ b/fs/ksmbd/connection.c
+@@ -62,7 +62,7 @@ struct ksmbd_conn *ksmbd_conn_alloc(void)
+ 	atomic_set(&conn->req_running, 0);
+ 	atomic_set(&conn->r_count, 0);
+ 	conn->total_credits = 1;
+-	conn->outstanding_credits = 1;
++	conn->outstanding_credits = 0;
+ 
+ 	init_waitqueue_head(&conn->req_running_q);
+ 	INIT_LIST_HEAD(&conn->conns_list);
+diff --git a/fs/ksmbd/smb2misc.c b/fs/ksmbd/smb2misc.c
+index 4a9460153b595..f8f456377a51d 100644
+--- a/fs/ksmbd/smb2misc.c
++++ b/fs/ksmbd/smb2misc.c
+@@ -338,7 +338,7 @@ static int smb2_validate_credit_charge(struct ksmbd_conn *conn,
+ 		ret = 1;
+ 	}
+ 
+-	if ((u64)conn->outstanding_credits + credit_charge > conn->vals->max_credits) {
++	if ((u64)conn->outstanding_credits + credit_charge > conn->total_credits) {
+ 		ksmbd_debug(SMB, "Limits exceeding the maximum allowable outstanding requests, given : %u, pending : %u\n",
+ 			    credit_charge, conn->outstanding_credits);
+ 		ret = 1;
+diff --git a/fs/ksmbd/smb_common.c b/fs/ksmbd/smb_common.c
+index 9a7e211dbf4f4..7f8ab14fb8ec1 100644
+--- a/fs/ksmbd/smb_common.c
++++ b/fs/ksmbd/smb_common.c
+@@ -140,8 +140,10 @@ int ksmbd_verify_smb_message(struct ksmbd_work *work)
+ 
+ 	hdr = work->request_buf;
+ 	if (*(__le32 *)hdr->Protocol == SMB1_PROTO_NUMBER &&
+-	    hdr->Command == SMB_COM_NEGOTIATE)
++	    hdr->Command == SMB_COM_NEGOTIATE) {
++		work->conn->outstanding_credits++;
+ 		return 0;
++	}
+ 
+ 	return -EINVAL;
+ }
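
Annotation: the ksmbd hunks above bound a request's credit charge by what the server has actually granted (total_credits) rather than the protocol maximum, and start outstanding_credits at 0 so the initial SMB1 negotiate is counted explicitly. A hedged sketch of the window check (struct and numbers invented):

/* illustration only: not the ksmbd structures */
#include <stdbool.h>
#include <stdio.h>

struct conn { unsigned int total_credits, outstanding_credits; };

static bool charge_ok(struct conn *c, unsigned int charge)
{
	/* widen before adding so the comparison cannot overflow */
	if ((unsigned long long)c->outstanding_credits + charge > c->total_credits)
		return false;
	c->outstanding_credits += charge;
	return true;
}

int main(void)
{
	struct conn c = { .total_credits = 1, .outstanding_credits = 0 };

	printf("%d\n", charge_ok(&c, 1));	/* 1: first request fits */
	printf("%d\n", charge_ok(&c, 1));	/* 0: window exhausted */
	return 0;
}
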
+diff --git a/fs/namei.c b/fs/namei.c
+index 509657fdf4f56..fd3c95ac261bd 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -2768,7 +2768,8 @@ struct dentry *lookup_one(struct user_namespace *mnt_userns, const char *name,
+ EXPORT_SYMBOL(lookup_one);
+ 
+ /**
+- * lookup_one_len_unlocked - filesystem helper to lookup single pathname component
++ * lookup_one_unlocked - filesystem helper to lookup single pathname component
++ * @mnt_userns:	idmapping of the mount the lookup is performed from
+  * @name:	pathname component to lookup
+  * @base:	base directory to lookup from
+  * @len:	maximum length @len should be interpreted to
+@@ -2779,14 +2780,15 @@ EXPORT_SYMBOL(lookup_one);
+  * Unlike lookup_one_len, it should be called without the parent
+  * i_mutex held, and will take the i_mutex itself if necessary.
+  */
+-struct dentry *lookup_one_len_unlocked(const char *name,
+-				       struct dentry *base, int len)
++struct dentry *lookup_one_unlocked(struct user_namespace *mnt_userns,
++				   const char *name, struct dentry *base,
++				   int len)
+ {
+ 	struct qstr this;
+ 	int err;
+ 	struct dentry *ret;
+ 
+-	err = lookup_one_common(&init_user_ns, name, base, len, &this);
++	err = lookup_one_common(mnt_userns, name, base, len, &this);
+ 	if (err)
+ 		return ERR_PTR(err);
+ 
+@@ -2795,6 +2797,59 @@ struct dentry *lookup_one_len_unlocked(const char *name,
+ 		ret = lookup_slow(&this, base, 0);
+ 	return ret;
+ }
++EXPORT_SYMBOL(lookup_one_unlocked);
++
++/**
++ * lookup_one_positive_unlocked - filesystem helper to lookup single
++ *				  pathname component
++ * @mnt_userns:	idmapping of the mount the lookup is performed from
++ * @name:	pathname component to lookup
++ * @base:	base directory to lookup from
++ * @len:	maximum length @len should be interpreted to
++ *
++ * This helper will yield ERR_PTR(-ENOENT) on negatives.  It returns a known
++ * positive dentry or an ERR_PTR(), which is what most of the users want.
++ *
++ * Note that a pinned negative with an unlocked parent _can_ become positive at
++ * any time, so callers of lookup_one_unlocked() need to be very careful; pinned
++ * positives have ->d_inode stable, so this one avoids such problems.
++ *
++ * Note that this routine is purely a helper for filesystem usage and should
++ * not be called by generic code.
++ *
++ * The helper should be called without i_mutex held.
++ */
++struct dentry *lookup_one_positive_unlocked(struct user_namespace *mnt_userns,
++					    const char *name,
++					    struct dentry *base, int len)
++{
++	struct dentry *ret = lookup_one_unlocked(mnt_userns, name, base, len);
++
++	if (!IS_ERR(ret) && d_flags_negative(smp_load_acquire(&ret->d_flags))) {
++		dput(ret);
++		ret = ERR_PTR(-ENOENT);
++	}
++	return ret;
++}
++EXPORT_SYMBOL(lookup_one_positive_unlocked);
++
++/**
++ * lookup_one_len_unlocked - filesystem helper to lookup single pathname component
++ * @name:	pathname component to lookup
++ * @base:	base directory to lookup from
++ * @len:	maximum length @len should be interpreted to
++ *
++ * Note that this routine is purely a helper for filesystem usage and should
++ * not be called by generic code.
++ *
++ * Unlike lookup_one_len, it should be called without the parent
++ * i_mutex held, and will take the i_mutex itself if necessary.
++ */
++struct dentry *lookup_one_len_unlocked(const char *name,
++				       struct dentry *base, int len)
++{
++	return lookup_one_unlocked(&init_user_ns, name, base, len);
++}
+ EXPORT_SYMBOL(lookup_one_len_unlocked);
+ 
+ /*
+@@ -2808,12 +2863,7 @@ EXPORT_SYMBOL(lookup_one_len_unlocked);
+ struct dentry *lookup_positive_unlocked(const char *name,
+ 				       struct dentry *base, int len)
+ {
+-	struct dentry *ret = lookup_one_len_unlocked(name, base, len);
+-	if (!IS_ERR(ret) && d_flags_negative(smp_load_acquire(&ret->d_flags))) {
+-		dput(ret);
+-		ret = ERR_PTR(-ENOENT);
+-	}
+-	return ret;
++	return lookup_one_positive_unlocked(&init_user_ns, name, base, len);
+ }
+ EXPORT_SYMBOL(lookup_positive_unlocked);
+ 
+diff --git a/fs/namespace.c b/fs/namespace.c
+index afe2b64b14f1f..41461f55c0390 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -4026,8 +4026,9 @@ static int can_idmap_mount(const struct mount_kattr *kattr, struct mount *mnt)
+ static inline bool mnt_allow_writers(const struct mount_kattr *kattr,
+ 				     const struct mount *mnt)
+ {
+-	return !(kattr->attr_set & MNT_READONLY) ||
+-	       (mnt->mnt.mnt_flags & MNT_READONLY);
++	return (!(kattr->attr_set & MNT_READONLY) ||
++		(mnt->mnt.mnt_flags & MNT_READONLY)) &&
++	       !kattr->mnt_userns;
+ }
+ 
+ static int mount_setattr_prepare(struct mount_kattr *kattr, struct mount *mnt)
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index 150b7fa8f0a73..3f17748eaf290 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -204,15 +204,16 @@ static int
+ nfs_file_fsync_commit(struct file *file, int datasync)
+ {
+ 	struct inode *inode = file_inode(file);
+-	int ret;
++	int ret, ret2;
+ 
+ 	dprintk("NFS: fsync file(%pD2) datasync %d\n", file, datasync);
+ 
+ 	nfs_inc_stats(inode, NFSIOS_VFSFSYNC);
+ 	ret = nfs_commit_inode(inode, FLUSH_SYNC);
+-	if (ret < 0)
+-		return ret;
+-	return file_check_and_advance_wb_err(file);
++	ret2 = file_check_and_advance_wb_err(file);
++	if (ret2 < 0)
++		return ret2;
++	return ret;
+ }
+ 
+ int
+@@ -385,11 +386,8 @@ static int nfs_write_end(struct file *file, struct address_space *mapping,
+ 		return status;
+ 	NFS_I(mapping->host)->write_io += copied;
+ 
+-	if (nfs_ctx_key_to_expire(ctx, mapping->host)) {
+-		status = nfs_wb_all(mapping->host);
+-		if (status < 0)
+-			return status;
+-	}
++	if (nfs_ctx_key_to_expire(ctx, mapping->host))
++		nfs_wb_all(mapping->host);
+ 
+ 	return copied;
+ }
+@@ -597,18 +595,6 @@ static const struct vm_operations_struct nfs_file_vm_ops = {
+ 	.page_mkwrite = nfs_vm_page_mkwrite,
+ };
+ 
+-static int nfs_need_check_write(struct file *filp, struct inode *inode,
+-				int error)
+-{
+-	struct nfs_open_context *ctx;
+-
+-	ctx = nfs_file_open_context(filp);
+-	if (nfs_error_is_fatal_on_server(error) ||
+-	    nfs_ctx_key_to_expire(ctx, inode))
+-		return 1;
+-	return 0;
+-}
+-
+ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ {
+ 	struct file *file = iocb->ki_filp;
+@@ -636,7 +622,7 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ 	if (iocb->ki_flags & IOCB_APPEND || iocb->ki_pos > i_size_read(inode)) {
+ 		result = nfs_revalidate_file_size(inode, file);
+ 		if (result)
+-			goto out;
++			return result;
+ 	}
+ 
+ 	nfs_clear_invalid_mapping(file->f_mapping);
+@@ -655,6 +641,7 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ 
+ 	written = result;
+ 	iocb->ki_pos += written;
++	nfs_add_stats(inode, NFSIOS_NORMALWRITTENBYTES, written);
+ 
+ 	if (mntflags & NFS_MOUNT_WRITE_EAGER) {
+ 		result = filemap_fdatawrite_range(file->f_mapping,
+@@ -672,17 +659,22 @@ ssize_t nfs_file_write(struct kiocb *iocb, struct iov_iter *from)
+ 	}
+ 	result = generic_write_sync(iocb, written);
+ 	if (result < 0)
+-		goto out;
++		return result;
+ 
++out:
+ 	/* Return error values */
+ 	error = filemap_check_wb_err(file->f_mapping, since);
+-	if (nfs_need_check_write(file, inode, error)) {
+-		int err = nfs_wb_all(inode);
+-		if (err < 0)
+-			result = err;
++	switch (error) {
++	default:
++		break;
++	case -EDQUOT:
++	case -EFBIG:
++	case -ENOSPC:
++		nfs_wb_all(inode);
++		error = file_check_and_advance_wb_err(file);
++		if (error < 0)
++			result = error;
+ 	}
+-	nfs_add_stats(inode, NFSIOS_NORMALWRITTENBYTES, written);
+-out:
+ 	return result;
+ 
+ out_swapfile:
+diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
+index f73c09a9cf0a9..e861d7bae305f 100644
+--- a/fs/nfs/fscache.c
++++ b/fs/nfs/fscache.c
+@@ -231,11 +231,10 @@ void nfs_fscache_release_file(struct inode *inode, struct file *filp)
+ {
+ 	struct nfs_fscache_inode_auxdata auxdata;
+ 	struct fscache_cookie *cookie = nfs_i_fscache(inode);
++	loff_t i_size = i_size_read(inode);
+ 
+-	if (fscache_cookie_valid(cookie)) {
+-		nfs_fscache_update_auxdata(&auxdata, inode);
+-		fscache_unuse_cookie(cookie, &auxdata, NULL);
+-	}
++	nfs_fscache_update_auxdata(&auxdata, inode);
++	fscache_unuse_cookie(cookie, &auxdata, &i_size);
+ }
+ 
+ /*
+diff --git a/fs/nfs/nfs4namespace.c b/fs/nfs/nfs4namespace.c
+index 3680c8da510c9..f2dbf904c5989 100644
+--- a/fs/nfs/nfs4namespace.c
++++ b/fs/nfs/nfs4namespace.c
+@@ -417,6 +417,9 @@ static int nfs_do_refmount(struct fs_context *fc, struct rpc_clnt *client)
+ 	fs_locations = kmalloc(sizeof(struct nfs4_fs_locations), GFP_KERNEL);
+ 	if (!fs_locations)
+ 		goto out_free;
++	fs_locations->fattr = nfs_alloc_fattr();
++	if (!fs_locations->fattr)
++		goto out_free_2;
+ 
+ 	/* Get locations */
+ 	dentry = ctx->clone_data.dentry;
+@@ -427,14 +430,16 @@ static int nfs_do_refmount(struct fs_context *fc, struct rpc_clnt *client)
+ 	err = nfs4_proc_fs_locations(client, d_inode(parent), &dentry->d_name, fs_locations, page);
+ 	dput(parent);
+ 	if (err != 0)
+-		goto out_free_2;
++		goto out_free_3;
+ 
+ 	err = -ENOENT;
+ 	if (fs_locations->nlocations <= 0 ||
+ 	    fs_locations->fs_path.ncomponents <= 0)
+-		goto out_free_2;
++		goto out_free_3;
+ 
+ 	err = nfs_follow_referral(fc, fs_locations);
++out_free_3:
++	kfree(fs_locations->fattr);
+ out_free_2:
+ 	kfree(fs_locations);
+ out_free:
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index a79f66432bd39..8c5907287c161 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -1162,7 +1162,7 @@ static int nfs4_call_sync_sequence(struct rpc_clnt *clnt,
+ {
+ 	unsigned short task_flags = 0;
+ 
+-	if (server->nfs_client->cl_minorversion)
++	if (server->caps & NFS_CAP_MOVEABLE)
+ 		task_flags = RPC_TASK_MOVEABLE;
+ 	return nfs4_do_call_sync(clnt, server, msg, args, res, task_flags);
+ }
+@@ -2568,7 +2568,7 @@ static int nfs4_run_open_task(struct nfs4_opendata *data,
+ 	};
+ 	int status;
+ 
+-	if (server->nfs_client->cl_minorversion)
++	if (nfs_server_capable(dir, NFS_CAP_MOVEABLE))
+ 		task_setup_data.flags |= RPC_TASK_MOVEABLE;
+ 
+ 	kref_get(&data->kref);
+@@ -3733,7 +3733,7 @@ int nfs4_do_close(struct nfs4_state *state, gfp_t gfp_mask, int wait)
+ 	};
+ 	int status = -ENOMEM;
+ 
+-	if (server->nfs_client->cl_minorversion)
++	if (nfs_server_capable(state->inode, NFS_CAP_MOVEABLE))
+ 		task_setup_data.flags |= RPC_TASK_MOVEABLE;
+ 
+ 	nfs4_state_protect(server->nfs_client, NFS_SP4_MACH_CRED_CLEANUP,
+@@ -4243,6 +4243,8 @@ static int nfs4_get_referral(struct rpc_clnt *client, struct inode *dir,
+ 	if (locations == NULL)
+ 		goto out;
+ 
++	locations->fattr = fattr;
++
+ 	status = nfs4_proc_fs_locations(client, dir, name, locations, page);
+ 	if (status != 0)
+ 		goto out;
+@@ -4252,17 +4254,14 @@ static int nfs4_get_referral(struct rpc_clnt *client, struct inode *dir,
+ 	 * referral.  Cause us to drop into the exception handler, which
+ 	 * will kick off migration recovery.
+ 	 */
+-	if (nfs_fsid_equal(&NFS_SERVER(dir)->fsid, &locations->fattr.fsid)) {
++	if (nfs_fsid_equal(&NFS_SERVER(dir)->fsid, &fattr->fsid)) {
+ 		dprintk("%s: server did not return a different fsid for"
+ 			" a referral at %s\n", __func__, name->name);
+ 		status = -NFS4ERR_MOVED;
+ 		goto out;
+ 	}
+ 	/* Fixup attributes for the nfs_lookup() call to nfs_fhget() */
+-	nfs_fixup_referral_attributes(&locations->fattr);
+-
+-	/* replace the lookup nfs_fattr with the locations nfs_fattr */
+-	memcpy(fattr, &locations->fattr, sizeof(struct nfs_fattr));
++	nfs_fixup_referral_attributes(fattr);
+ 	memset(fhandle, 0, sizeof(struct nfs_fh));
+ out:
+ 	if (page)
+@@ -4404,7 +4403,7 @@ static int _nfs4_proc_lookup(struct rpc_clnt *clnt, struct inode *dir,
+ 	};
+ 	unsigned short task_flags = 0;
+ 
+-	if (server->nfs_client->cl_minorversion)
++	if (nfs_server_capable(dir, NFS_CAP_MOVEABLE))
+ 		task_flags = RPC_TASK_MOVEABLE;
+ 
+ 	/* Is this is an attribute revalidation, subject to softreval? */
+@@ -6612,10 +6611,13 @@ static int _nfs4_proc_delegreturn(struct inode *inode, const struct cred *cred,
+ 		.rpc_client = server->client,
+ 		.rpc_message = &msg,
+ 		.callback_ops = &nfs4_delegreturn_ops,
+-		.flags = RPC_TASK_ASYNC | RPC_TASK_TIMEOUT | RPC_TASK_MOVEABLE,
++		.flags = RPC_TASK_ASYNC | RPC_TASK_TIMEOUT,
+ 	};
+ 	int status = 0;
+ 
++	if (nfs_server_capable(inode, NFS_CAP_MOVEABLE))
++		task_setup_data.flags |= RPC_TASK_MOVEABLE;
++
+ 	data = kzalloc(sizeof(*data), GFP_KERNEL);
+ 	if (data == NULL)
+ 		return -ENOMEM;
+@@ -6929,10 +6931,8 @@ static struct rpc_task *nfs4_do_unlck(struct file_lock *fl,
+ 		.workqueue = nfsiod_workqueue,
+ 		.flags = RPC_TASK_ASYNC,
+ 	};
+-	struct nfs_client *client =
+-		NFS_SERVER(lsp->ls_state->inode)->nfs_client;
+ 
+-	if (client->cl_minorversion)
++	if (nfs_server_capable(lsp->ls_state->inode, NFS_CAP_MOVEABLE))
+ 		task_setup_data.flags |= RPC_TASK_MOVEABLE;
+ 
+ 	nfs4_state_protect(NFS_SERVER(lsp->ls_state->inode)->nfs_client,
+@@ -7203,9 +7203,8 @@ static int _nfs4_do_setlk(struct nfs4_state *state, int cmd, struct file_lock *f
+ 		.flags = RPC_TASK_ASYNC | RPC_TASK_CRED_NOREF,
+ 	};
+ 	int ret;
+-	struct nfs_client *client = NFS_SERVER(state->inode)->nfs_client;
+ 
+-	if (client->cl_minorversion)
++	if (nfs_server_capable(state->inode, NFS_CAP_MOVEABLE))
+ 		task_setup_data.flags |= RPC_TASK_MOVEABLE;
+ 
+ 	data = nfs4_alloc_lockdata(fl, nfs_file_open_context(fl->fl_file),
+@@ -7902,7 +7901,7 @@ static int _nfs4_proc_fs_locations(struct rpc_clnt *client, struct inode *dir,
+ 	else
+ 		bitmask[1] &= ~FATTR4_WORD1_MOUNTED_ON_FILEID;
+ 
+-	nfs_fattr_init(&fs_locations->fattr);
++	nfs_fattr_init(fs_locations->fattr);
+ 	fs_locations->server = server;
+ 	fs_locations->nlocations = 0;
+ 	status = nfs4_call_sync(client, server, &msg, &args.seq_args, &res.seq_res, 0);
+@@ -7967,7 +7966,7 @@ static int _nfs40_proc_get_locations(struct nfs_server *server,
+ 	unsigned long now = jiffies;
+ 	int status;
+ 
+-	nfs_fattr_init(&locations->fattr);
++	nfs_fattr_init(locations->fattr);
+ 	locations->server = server;
+ 	locations->nlocations = 0;
+ 
+@@ -8032,7 +8031,7 @@ static int _nfs41_proc_get_locations(struct nfs_server *server,
+ 	};
+ 	int status;
+ 
+-	nfs_fattr_init(&locations->fattr);
++	nfs_fattr_init(locations->fattr);
+ 	locations->server = server;
+ 	locations->nlocations = 0;
+ 
+@@ -10391,7 +10390,8 @@ static const struct nfs4_minor_version_ops nfs_v4_1_minor_ops = {
+ 		| NFS_CAP_POSIX_LOCK
+ 		| NFS_CAP_STATEID_NFSV41
+ 		| NFS_CAP_ATOMIC_OPEN_V1
+-		| NFS_CAP_LGOPEN,
++		| NFS_CAP_LGOPEN
++		| NFS_CAP_MOVEABLE,
+ 	.init_client = nfs41_init_client,
+ 	.shutdown_client = nfs41_shutdown_client,
+ 	.match_stateid = nfs41_match_stateid,
+@@ -10426,7 +10426,8 @@ static const struct nfs4_minor_version_ops nfs_v4_2_minor_ops = {
+ 		| NFS_CAP_LAYOUTSTATS
+ 		| NFS_CAP_CLONE
+ 		| NFS_CAP_LAYOUTERROR
+-		| NFS_CAP_READ_PLUS,
++		| NFS_CAP_READ_PLUS
++		| NFS_CAP_MOVEABLE,
+ 	.init_client = nfs41_init_client,
+ 	.shutdown_client = nfs41_shutdown_client,
+ 	.match_stateid = nfs41_match_stateid,
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 9e1c987c81e7f..9656d40bb4887 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -2106,6 +2106,11 @@ static int nfs4_try_migration(struct nfs_server *server, const struct cred *cred
+ 		dprintk("<-- %s: no memory\n", __func__);
+ 		goto out;
+ 	}
++	locations->fattr = nfs_alloc_fattr();
++	if (locations->fattr == NULL) {
++		dprintk("<-- %s: no memory\n", __func__);
++		goto out;
++	}
+ 
+ 	inode = d_inode(server->super->s_root);
+ 	result = nfs4_proc_get_locations(server, NFS_FH(inode), locations,
+@@ -2120,7 +2125,7 @@ static int nfs4_try_migration(struct nfs_server *server, const struct cred *cred
+ 	if (!locations->nlocations)
+ 		goto out;
+ 
+-	if (!(locations->fattr.valid & NFS_ATTR_FATTR_V4_LOCATIONS)) {
++	if (!(locations->fattr->valid & NFS_ATTR_FATTR_V4_LOCATIONS)) {
+ 		dprintk("<-- %s: No fs_locations data, migration skipped\n",
+ 			__func__);
+ 		goto out;
+@@ -2145,6 +2150,8 @@ static int nfs4_try_migration(struct nfs_server *server, const struct cred *cred
+ out:
+ 	if (page != NULL)
+ 		__free_page(page);
++	if (locations != NULL)
++		kfree(locations->fattr);
+ 	kfree(locations);
+ 	if (result) {
+ 		pr_err("NFS: migration recovery failed (server %s)\n",
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index 86a5f6516928e..5d822594336dc 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -7051,7 +7051,7 @@ static int nfs4_xdr_dec_fs_locations(struct rpc_rqst *req,
+ 	if (res->migration) {
+ 		xdr_enter_page(xdr, PAGE_SIZE);
+ 		status = decode_getfattr_generic(xdr,
+-					&res->fs_locations->fattr,
++					res->fs_locations->fattr,
+ 					 NULL, res->fs_locations,
+ 					 res->fs_locations->server);
+ 		if (status)
+@@ -7064,7 +7064,7 @@ static int nfs4_xdr_dec_fs_locations(struct rpc_rqst *req,
+ 			goto out;
+ 		xdr_enter_page(xdr, PAGE_SIZE);
+ 		status = decode_getfattr_generic(xdr,
+-					&res->fs_locations->fattr,
++					res->fs_locations->fattr,
+ 					 NULL, res->fs_locations,
+ 					 res->fs_locations->server);
+ 	}
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index 9157dd19b8b4f..317cedfa52bf6 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -767,6 +767,9 @@ int nfs_initiate_pgio(struct rpc_clnt *clnt, struct nfs_pgio_header *hdr,
+ 		.flags = RPC_TASK_ASYNC | flags,
+ 	};
+ 
++	if (nfs_server_capable(hdr->inode, NFS_CAP_MOVEABLE))
++		task_setup_data.flags |= RPC_TASK_MOVEABLE;
++
+ 	hdr->rw_ops->rw_initiate(hdr, &msg, rpc_ops, &task_setup_data, how);
+ 
+ 	dprintk("NFS: initiated pgio call "
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 856c962273c71..68a87be3e6f96 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -2000,6 +2000,7 @@ lookup_again:
+ 	lo = pnfs_find_alloc_layout(ino, ctx, gfp_flags);
+ 	if (lo == NULL) {
+ 		spin_unlock(&ino->i_lock);
++		lseg = ERR_PTR(-ENOMEM);
+ 		trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg,
+ 				 PNFS_UPDATE_LAYOUT_NOMEM);
+ 		goto out;
+@@ -2128,6 +2129,7 @@ lookup_again:
+ 
+ 	lgp = pnfs_alloc_init_layoutget_args(ino, ctx, &stateid, &arg, gfp_flags);
+ 	if (!lgp) {
++		lseg = ERR_PTR(-ENOMEM);
+ 		trace_pnfs_update_layout(ino, pos, count, iomode, lo, NULL,
+ 					 PNFS_UPDATE_LAYOUT_NOMEM);
+ 		nfs_layoutget_end(lo);
+diff --git a/fs/nfs/unlink.c b/fs/nfs/unlink.c
+index 6f325e10056ce..9697cd5d2561c 100644
+--- a/fs/nfs/unlink.c
++++ b/fs/nfs/unlink.c
+@@ -102,6 +102,10 @@ static void nfs_do_call_unlink(struct inode *inode, struct nfs_unlinkdata *data)
+ 	};
+ 	struct rpc_task *task;
+ 	struct inode *dir = d_inode(data->dentry->d_parent);
++
++	if (nfs_server_capable(inode, NFS_CAP_MOVEABLE))
++		task_setup_data.flags |= RPC_TASK_MOVEABLE;
++
+ 	nfs_sb_active(dir->i_sb);
+ 	data->args.fh = NFS_FH(dir);
+ 	nfs_fattr_init(data->res.dir_attr);
+@@ -344,6 +348,10 @@ nfs_async_rename(struct inode *old_dir, struct inode *new_dir,
+ 		.flags = RPC_TASK_ASYNC | RPC_TASK_CRED_NOREF,
+ 	};
+ 
++	if (nfs_server_capable(old_dir, NFS_CAP_MOVEABLE) &&
++	    nfs_server_capable(new_dir, NFS_CAP_MOVEABLE))
++		task_setup_data.flags |= RPC_TASK_MOVEABLE;
++
+ 	data = kzalloc(sizeof(*data), GFP_KERNEL);
+ 	if (data == NULL)
+ 		return ERR_PTR(-ENOMEM);
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index f00d45cf80ef3..1c706465d090b 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -603,8 +603,9 @@ static void nfs_write_error(struct nfs_page *req, int error)
+  * Find an associated nfs write request, and prepare to flush it out
+  * May return an error if the user signalled nfs_wait_on_request().
+  */
+-static int nfs_page_async_flush(struct nfs_pageio_descriptor *pgio,
+-				struct page *page)
++static int nfs_page_async_flush(struct page *page,
++				struct writeback_control *wbc,
++				struct nfs_pageio_descriptor *pgio)
+ {
+ 	struct nfs_page *req;
+ 	int ret = 0;
+@@ -630,11 +631,11 @@ static int nfs_page_async_flush(struct nfs_pageio_descriptor *pgio,
+ 		/*
+ 		 * Remove the problematic req upon fatal errors on the server
+ 		 */
+-		if (nfs_error_is_fatal(ret)) {
+-			if (nfs_error_is_fatal_on_server(ret))
+-				goto out_launder;
+-		} else
+-			ret = -EAGAIN;
++		if (nfs_error_is_fatal_on_server(ret))
++			goto out_launder;
++		if (wbc->sync_mode == WB_SYNC_NONE)
++			ret = AOP_WRITEPAGE_ACTIVATE;
++		redirty_page_for_writepage(wbc, page);
+ 		nfs_redirty_request(req);
+ 		pgio->pg_error = 0;
+ 	} else
+@@ -650,15 +651,8 @@ out_launder:
+ static int nfs_do_writepage(struct page *page, struct writeback_control *wbc,
+ 			    struct nfs_pageio_descriptor *pgio)
+ {
+-	int ret;
+-
+ 	nfs_pageio_cond_complete(pgio, page_index(page));
+-	ret = nfs_page_async_flush(pgio, page);
+-	if (ret == -EAGAIN) {
+-		redirty_page_for_writepage(wbc, page);
+-		ret = AOP_WRITEPAGE_ACTIVATE;
+-	}
+-	return ret;
++	return nfs_page_async_flush(page, wbc, pgio);
+ }
+ 
+ /*
+@@ -681,11 +675,7 @@ static int nfs_writepage_locked(struct page *page,
+ 	err = nfs_do_writepage(page, wbc, &pgio);
+ 	pgio.pg_error = 0;
+ 	nfs_pageio_complete(&pgio);
+-	if (err < 0)
+-		return err;
+-	if (nfs_error_is_fatal(pgio.pg_error))
+-		return pgio.pg_error;
+-	return 0;
++	return err;
+ }
+ 
+ int nfs_writepage(struct page *page, struct writeback_control *wbc)
+@@ -737,19 +727,19 @@ int nfs_writepages(struct address_space *mapping, struct writeback_control *wbc)
+ 		priority = wb_priority(wbc);
+ 	}
+ 
+-	nfs_pageio_init_write(&pgio, inode, priority, false,
+-				&nfs_async_write_completion_ops);
+-	pgio.pg_io_completion = ioc;
+-	err = write_cache_pages(mapping, wbc, nfs_writepages_callback, &pgio);
+-	pgio.pg_error = 0;
+-	nfs_pageio_complete(&pgio);
++	do {
++		nfs_pageio_init_write(&pgio, inode, priority, false,
++				      &nfs_async_write_completion_ops);
++		pgio.pg_io_completion = ioc;
++		err = write_cache_pages(mapping, wbc, nfs_writepages_callback,
++					&pgio);
++		pgio.pg_error = 0;
++		nfs_pageio_complete(&pgio);
++	} while (err < 0 && !nfs_error_is_fatal(err));
+ 	nfs_io_completion_put(ioc);
+ 
+ 	if (err < 0)
+ 		goto out_err;
+-	err = pgio.pg_error;
+-	if (nfs_error_is_fatal(err))
+-		goto out_err;
+ 	return 0;
+ out_err:
+ 	return err;
+@@ -1444,7 +1434,7 @@ static void nfs_async_write_error(struct list_head *head, int error)
+ 	while (!list_empty(head)) {
+ 		req = nfs_list_entry(head->next);
+ 		nfs_list_remove_request(req);
+-		if (nfs_error_is_fatal(error))
++		if (nfs_error_is_fatal_on_server(error))
+ 			nfs_write_error(req, error);
+ 		else
+ 			nfs_redirty_request(req);
+@@ -1719,6 +1709,10 @@ int nfs_initiate_commit(struct rpc_clnt *clnt, struct nfs_commit_data *data,
+ 		.flags = RPC_TASK_ASYNC | flags,
+ 		.priority = priority,
+ 	};
++
++	if (nfs_server_capable(data->inode, NFS_CAP_MOVEABLE))
++		task_setup_data.flags |= RPC_TASK_MOVEABLE;
++
+ 	/* Set up the initial task struct.  */
+ 	nfs_ops->commit_setup(data, &msg, &task_setup_data.rpc_client);
+ 	trace_nfs_initiate_commit(data);
+diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
+index 0b3f12aa37ff5..7da88bdc0d6c3 100644
+--- a/fs/nfsd/nfscache.c
++++ b/fs/nfsd/nfscache.c
+@@ -206,7 +206,6 @@ void nfsd_reply_cache_shutdown(struct nfsd_net *nn)
+ 	struct svc_cacherep	*rp;
+ 	unsigned int i;
+ 
+-	nfsd_reply_cache_stats_destroy(nn);
+ 	unregister_shrinker(&nn->nfsd_reply_cache_shrinker);
+ 
+ 	for (i = 0; i < nn->drc_hashsize; i++) {
+@@ -217,6 +216,7 @@ void nfsd_reply_cache_shutdown(struct nfsd_net *nn)
+ 									rp, nn);
+ 		}
+ 	}
++	nfsd_reply_cache_stats_destroy(nn);
+ 
+ 	kvfree(nn->drc_hashtbl);
+ 	nn->drc_hashtbl = NULL;
+diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
+index a792e21c53099..16d8fc84713a4 100644
+--- a/fs/notify/fanotify/fanotify_user.c
++++ b/fs/notify/fanotify/fanotify_user.c
+@@ -264,7 +264,7 @@ static int create_fd(struct fsnotify_group *group, struct path *path,
+ 	 * originally opened O_WRONLY.
+ 	 */
+ 	new_file = dentry_open(path,
+-			       group->fanotify_data.f_flags | FMODE_NONOTIFY,
++			       group->fanotify_data.f_flags | __FMODE_NONOTIFY,
+ 			       current_cred());
+ 	if (IS_ERR(new_file)) {
+ 		/*
+@@ -1348,7 +1348,7 @@ SYSCALL_DEFINE2(fanotify_init, unsigned int, flags, unsigned int, event_f_flags)
+ 	    (!(fid_mode & FAN_REPORT_NAME) || !(fid_mode & FAN_REPORT_FID)))
+ 		return -EINVAL;
+ 
+-	f_flags = O_RDWR | FMODE_NONOTIFY;
++	f_flags = O_RDWR | __FMODE_NONOTIFY;
+ 	if (flags & FAN_CLOEXEC)
+ 		f_flags |= O_CLOEXEC;
+ 	if (flags & FAN_NONBLOCK)
+diff --git a/fs/notify/fdinfo.c b/fs/notify/fdinfo.c
+index 57f0d5d9f934e..3451708fd035c 100644
+--- a/fs/notify/fdinfo.c
++++ b/fs/notify/fdinfo.c
+@@ -83,16 +83,9 @@ static void inotify_fdinfo(struct seq_file *m, struct fsnotify_mark *mark)
+ 	inode_mark = container_of(mark, struct inotify_inode_mark, fsn_mark);
+ 	inode = igrab(fsnotify_conn_inode(mark->connector));
+ 	if (inode) {
+-		/*
+-		 * IN_ALL_EVENTS represents all of the mask bits
+-		 * that we expose to userspace.  There is at
+-		 * least one bit (FS_EVENT_ON_CHILD) which is
+-		 * used only internally to the kernel.
+-		 */
+-		u32 mask = mark->mask & IN_ALL_EVENTS;
+-		seq_printf(m, "inotify wd:%x ino:%lx sdev:%x mask:%x ignored_mask:%x ",
++		seq_printf(m, "inotify wd:%x ino:%lx sdev:%x mask:%x ignored_mask:0 ",
+ 			   inode_mark->wd, inode->i_ino, inode->i_sb->s_dev,
+-			   mask, mark->ignored_mask);
++			   inotify_mark_user_mask(mark));
+ 		show_mark_fhandle(m, inode);
+ 		seq_putc(m, '\n');
+ 		iput(inode);
+diff --git a/fs/notify/inotify/inotify.h b/fs/notify/inotify/inotify.h
+index 2007e37119160..8f00151eb731f 100644
+--- a/fs/notify/inotify/inotify.h
++++ b/fs/notify/inotify/inotify.h
+@@ -22,6 +22,18 @@ static inline struct inotify_event_info *INOTIFY_E(struct fsnotify_event *fse)
+ 	return container_of(fse, struct inotify_event_info, fse);
+ }
+ 
++/*
++ * INOTIFY_USER_MASK represents all of the mask bits that we expose to
++ * userspace.  There is at least one bit (FS_EVENT_ON_CHILD) which is
++ * used only internally to the kernel.
++ */
++#define INOTIFY_USER_MASK (IN_ALL_EVENTS | IN_ONESHOT | IN_EXCL_UNLINK)
++
++static inline __u32 inotify_mark_user_mask(struct fsnotify_mark *fsn_mark)
++{
++	return fsn_mark->mask & INOTIFY_USER_MASK;
++}
++
+ extern void inotify_ignored_and_remove_idr(struct fsnotify_mark *fsn_mark,
+ 					   struct fsnotify_group *group);
+ extern int inotify_handle_inode_event(struct fsnotify_mark *inode_mark,
+diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
+index 54583f62dc440..3ef57db0ec9d6 100644
+--- a/fs/notify/inotify/inotify_user.c
++++ b/fs/notify/inotify/inotify_user.c
+@@ -110,7 +110,7 @@ static inline __u32 inotify_arg_to_mask(struct inode *inode, u32 arg)
+ 		mask |= FS_EVENT_ON_CHILD;
+ 
+ 	/* mask off the flags used to open the fd */
+-	mask |= (arg & (IN_ALL_EVENTS | IN_ONESHOT | IN_EXCL_UNLINK));
++	mask |= (arg & INOTIFY_USER_MASK);
+ 
+ 	return mask;
+ }
+diff --git a/fs/notify/mark.c b/fs/notify/mark.c
+index 4853184f7ddef..c86982be2d505 100644
+--- a/fs/notify/mark.c
++++ b/fs/notify/mark.c
+@@ -452,7 +452,7 @@ void fsnotify_free_mark(struct fsnotify_mark *mark)
+ void fsnotify_destroy_mark(struct fsnotify_mark *mark,
+ 			   struct fsnotify_group *group)
+ {
+-	mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
++	mutex_lock(&group->mark_mutex);
+ 	fsnotify_detach_mark(mark);
+ 	mutex_unlock(&group->mark_mutex);
+ 	fsnotify_free_mark(mark);
+@@ -770,7 +770,7 @@ void fsnotify_clear_marks_by_group(struct fsnotify_group *group,
+ 	 * move marks to free to to_free list in one go and then free marks in
+ 	 * to_free list one by one.
+ 	 */
+-	mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
++	mutex_lock(&group->mark_mutex);
+ 	list_for_each_entry_safe(mark, lmark, &group->marks_list, g_list) {
+ 		if (mark->connector->type == obj_type)
+ 			list_move(&mark->g_list, &to_free);
+@@ -779,7 +779,7 @@ void fsnotify_clear_marks_by_group(struct fsnotify_group *group,
+ 
+ clear:
+ 	while (1) {
+-		mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
++		mutex_lock(&group->mark_mutex);
+ 		if (list_empty(head)) {
+ 			mutex_unlock(&group->mark_mutex);
+ 			break;
+diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
+index 787b53b984ee1..3bae76930e68a 100644
+--- a/fs/ntfs3/file.c
++++ b/fs/ntfs3/file.c
+@@ -495,7 +495,7 @@ static int ntfs_truncate(struct inode *inode, loff_t new_size)
+ 
+ 	down_write(&ni->file.run_lock);
+ 	err = attr_set_size(ni, ATTR_DATA, NULL, 0, &ni->file.run, new_size,
+-			    &new_valid, true, NULL);
++			    &new_valid, ni->mi.sbi->options->prealloc, NULL);
+ 	up_write(&ni->file.run_lock);
+ 
+ 	if (new_valid < ni->i_valid)
+@@ -662,7 +662,13 @@ static long ntfs_fallocate(struct file *file, int mode, loff_t vbo, loff_t len)
+ 		/*
+ 		 * Normal file: Allocate clusters, do not change 'valid' size.
+ 		 */
+-		err = ntfs_set_size(inode, max(end, i_size));
++		loff_t new_size = max(end, i_size);
++
++		err = inode_newsize_ok(inode, new_size);
++		if (err)
++			goto out;
++
++		err = ntfs_set_size(inode, new_size);
+ 		if (err)
+ 			goto out;
+ 
+@@ -762,7 +768,7 @@ int ntfs3_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
+ 		}
+ 		inode_dio_wait(inode);
+ 
+-		if (attr->ia_size < oldsize)
++		if (attr->ia_size <= oldsize)
+ 			err = ntfs_truncate(inode, attr->ia_size);
+ 		else if (attr->ia_size > oldsize)
+ 			err = ntfs_extend(inode, attr->ia_size, 0, NULL);
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index 6f47a9c17f896..18842998c8fa3 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -1964,10 +1964,8 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ 
+ 		vcn += clen;
+ 
+-		if (vbo + bytes >= end) {
++		if (vbo + bytes >= end)
+ 			bytes = end - vbo;
+-			flags |= FIEMAP_EXTENT_LAST;
+-		}
+ 
+ 		if (vbo + bytes <= valid) {
+ 			;
+@@ -1977,6 +1975,9 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ 			/* vbo < valid && valid < vbo + bytes */
+ 			u64 dlen = valid - vbo;
+ 
++			if (vbo + dlen >= end)
++				flags |= FIEMAP_EXTENT_LAST;
++
+ 			err = fiemap_fill_next_extent(fieinfo, vbo, lbo, dlen,
+ 						      flags);
+ 			if (err < 0)
+@@ -1995,6 +1996,9 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ 			flags |= FIEMAP_EXTENT_UNWRITTEN;
+ 		}
+ 
++		if (vbo + bytes >= end)
++			flags |= FIEMAP_EXTENT_LAST;
++
+ 		err = fiemap_fill_next_extent(fieinfo, vbo, lbo, bytes, flags);
+ 		if (err < 0)
+ 			break;
+diff --git a/fs/ntfs3/fslog.c b/fs/ntfs3/fslog.c
+index 06492f088d602..49b7df6167785 100644
+--- a/fs/ntfs3/fslog.c
++++ b/fs/ntfs3/fslog.c
+@@ -1185,8 +1185,6 @@ static int log_read_rst(struct ntfs_log *log, u32 l_size, bool first,
+ 	if (!r_page)
+ 		return -ENOMEM;
+ 
+-	memset(info, 0, sizeof(struct restart_info));
+-
+ 	/* Determine which restart area we are looking for. */
+ 	if (first) {
+ 		vbo = 0;
+@@ -3791,10 +3789,11 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 	if (!log)
+ 		return -ENOMEM;
+ 
++	memset(&rst_info, 0, sizeof(struct restart_info));
++
+ 	log->ni = ni;
+ 	log->l_size = l_size;
+ 	log->one_page_buf = kmalloc(page_size, GFP_NOFS);
+-
+ 	if (!log->one_page_buf) {
+ 		err = -ENOMEM;
+ 		goto out;
+@@ -3842,6 +3841,7 @@ int log_replay(struct ntfs_inode *ni, bool *initialized)
+ 	if (rst_info.vbo)
+ 		goto check_restart_area;
+ 
++	memset(&rst_info2, 0, sizeof(struct restart_info));
+ 	err = log_read_rst(log, l_size, false, &rst_info2);
+ 
+ 	/* Determine which restart area to use. */
+@@ -4085,8 +4085,10 @@ process_log:
+ 		if (client == LFS_NO_CLIENT_LE) {
+ 			/* Insert "NTFS" client LogFile. */
+ 			client = ra->client_idx[0];
+-			if (client == LFS_NO_CLIENT_LE)
+-				return -EINVAL;
++			if (client == LFS_NO_CLIENT_LE) {
++				err = -EINVAL;
++				goto out;
++			}
+ 
+ 			t16 = le16_to_cpu(client);
+ 			cr = ca + t16;
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index 9eab11e3b0341..38045264a61bd 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -757,6 +757,7 @@ static ssize_t ntfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ 	loff_t vbo = iocb->ki_pos;
+ 	loff_t end;
+ 	int wr = iov_iter_rw(iter) & WRITE;
++	size_t iter_count = iov_iter_count(iter);
+ 	loff_t valid;
+ 	ssize_t ret;
+ 
+@@ -770,10 +771,13 @@ static ssize_t ntfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ 				 wr ? ntfs_get_block_direct_IO_W
+ 				    : ntfs_get_block_direct_IO_R);
+ 
+-	if (ret <= 0)
++	if (ret > 0)
++		end = vbo + ret;
++	else if (wr && ret == -EIOCBQUEUED)
++		end = vbo + iter_count;
++	else
+ 		goto out;
+ 
+-	end = vbo + ret;
+ 	valid = ni->i_valid;
+ 	if (wr) {
+ 		if (end > valid && !S_ISBLK(inode->i_mode)) {
+@@ -1951,6 +1955,7 @@ const struct address_space_operations ntfs_aops = {
+ 	.direct_IO	= ntfs_direct_IO,
+ 	.bmap		= ntfs_bmap,
+ 	.dirty_folio	= block_dirty_folio,
++	.invalidate_folio = block_invalidate_folio,
+ };
+ 
+ const struct address_space_operations ntfs_aops_cmpr = {
+diff --git a/fs/ntfs3/xattr.c b/fs/ntfs3/xattr.c
+index afd0ddad826ff..0968565ff2ca0 100644
+--- a/fs/ntfs3/xattr.c
++++ b/fs/ntfs3/xattr.c
+@@ -112,7 +112,7 @@ static int ntfs_read_ea(struct ntfs_inode *ni, struct EA_FULL **ea,
+ 		return -ENOMEM;
+ 
+ 	if (!size) {
+-		;
++		/* EA info persists, but the xattr data is empty; likely an EA problem. */
+ 	} else if (attr_ea->non_res) {
+ 		struct runs_tree run;
+ 
+@@ -541,7 +541,7 @@ struct posix_acl *ntfs_get_acl(struct inode *inode, int type, bool rcu)
+ 
+ static noinline int ntfs_set_acl_ex(struct user_namespace *mnt_userns,
+ 				    struct inode *inode, struct posix_acl *acl,
+-				    int type)
++				    int type, bool init_acl)
+ {
+ 	const char *name;
+ 	size_t size, name_len;
+@@ -554,8 +554,9 @@ static noinline int ntfs_set_acl_ex(struct user_namespace *mnt_userns,
+ 
+ 	switch (type) {
+ 	case ACL_TYPE_ACCESS:
+-		if (acl) {
+-			umode_t mode = inode->i_mode;
++		/* Do not change i_mode if we are in init_acl */
++		if (acl && !init_acl) {
++			umode_t mode;
+ 
+ 			err = posix_acl_update_mode(mnt_userns, inode, &mode,
+ 						    &acl);
+@@ -616,7 +617,68 @@ out:
+ int ntfs_set_acl(struct user_namespace *mnt_userns, struct inode *inode,
+ 		 struct posix_acl *acl, int type)
+ {
+-	return ntfs_set_acl_ex(mnt_userns, inode, acl, type);
++	return ntfs_set_acl_ex(mnt_userns, inode, acl, type, false);
++}
++
++static int ntfs_xattr_get_acl(struct user_namespace *mnt_userns,
++			      struct inode *inode, int type, void *buffer,
++			      size_t size)
++{
++	struct posix_acl *acl;
++	int err;
++
++	if (!(inode->i_sb->s_flags & SB_POSIXACL)) {
++		ntfs_inode_warn(inode, "add mount option \"acl\" to use acl");
++		return -EOPNOTSUPP;
++	}
++
++	acl = ntfs_get_acl(inode, type, false);
++	if (IS_ERR(acl))
++		return PTR_ERR(acl);
++
++	if (!acl)
++		return -ENODATA;
++
++	err = posix_acl_to_xattr(mnt_userns, acl, buffer, size);
++	posix_acl_release(acl);
++
++	return err;
++}
++
++static int ntfs_xattr_set_acl(struct user_namespace *mnt_userns,
++			      struct inode *inode, int type, const void *value,
++			      size_t size)
++{
++	struct posix_acl *acl;
++	int err;
++
++	if (!(inode->i_sb->s_flags & SB_POSIXACL)) {
++		ntfs_inode_warn(inode, "add mount option \"acl\" to use acl");
++		return -EOPNOTSUPP;
++	}
++
++	if (!inode_owner_or_capable(mnt_userns, inode))
++		return -EPERM;
++
++	if (!value) {
++		acl = NULL;
++	} else {
++		acl = posix_acl_from_xattr(mnt_userns, value, size);
++		if (IS_ERR(acl))
++			return PTR_ERR(acl);
++
++		if (acl) {
++			err = posix_acl_valid(mnt_userns, acl);
++			if (err)
++				goto release_and_out;
++		}
++	}
++
++	err = ntfs_set_acl(mnt_userns, inode, acl, type);
++
++release_and_out:
++	posix_acl_release(acl);
++	return err;
+ }
+ 
+ /*
+@@ -636,7 +698,7 @@ int ntfs_init_acl(struct user_namespace *mnt_userns, struct inode *inode,
+ 
+ 	if (default_acl) {
+ 		err = ntfs_set_acl_ex(mnt_userns, inode, default_acl,
+-				      ACL_TYPE_DEFAULT);
++				      ACL_TYPE_DEFAULT, true);
+ 		posix_acl_release(default_acl);
+ 	} else {
+ 		inode->i_default_acl = NULL;
+@@ -647,7 +709,7 @@ int ntfs_init_acl(struct user_namespace *mnt_userns, struct inode *inode,
+ 	else {
+ 		if (!err)
+ 			err = ntfs_set_acl_ex(mnt_userns, inode, acl,
+-					      ACL_TYPE_ACCESS);
++					      ACL_TYPE_ACCESS, true);
+ 		posix_acl_release(acl);
+ 	}
+ 
+@@ -785,6 +847,23 @@ static int ntfs_getxattr(const struct xattr_handler *handler, struct dentry *de,
+ 		goto out;
+ 	}
+ 
++#ifdef CONFIG_NTFS3_FS_POSIX_ACL
++	if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
++	     !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
++		     sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
++	    (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
++	     !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
++		     sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
++		/* TODO: init_user_ns? */
++		err = ntfs_xattr_get_acl(
++			&init_user_ns, inode,
++			name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1
++				? ACL_TYPE_ACCESS
++				: ACL_TYPE_DEFAULT,
++			buffer, size);
++		goto out;
++	}
++#endif
+ 	/* Deal with NTFS extended attribute. */
+ 	err = ntfs_get_ea(inode, name, name_len, buffer, size, NULL);
+ 
+@@ -897,10 +976,29 @@ set_new_fa:
+ 		goto out;
+ 	}
+ 
++#ifdef CONFIG_NTFS3_FS_POSIX_ACL
++	if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
++	     !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
++		     sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
++	    (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
++	     !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
++		     sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
++		err = ntfs_xattr_set_acl(
++			mnt_userns, inode,
++			name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1
++				? ACL_TYPE_ACCESS
++				: ACL_TYPE_DEFAULT,
++			value, size);
++		goto out;
++	}
++#endif
+ 	/* Deal with NTFS extended attribute. */
+ 	err = ntfs_set_ea(inode, name, name_len, value, size, flags);
+ 
+ out:
++	inode->i_ctime = current_time(inode);
++	mark_inode_dirty(inode);
++
+ 	return err;
+ }
+ 
+diff --git a/fs/ocfs2/dlmfs/userdlm.c b/fs/ocfs2/dlmfs/userdlm.c
+index 29f183a15798e..c1d67c806e1d3 100644
+--- a/fs/ocfs2/dlmfs/userdlm.c
++++ b/fs/ocfs2/dlmfs/userdlm.c
+@@ -433,6 +433,11 @@ again:
+ 	}
+ 
+ 	spin_lock(&lockres->l_lock);
++	if (lockres->l_flags & USER_LOCK_IN_TEARDOWN) {
++		spin_unlock(&lockres->l_lock);
++		status = -EAGAIN;
++		goto bail;
++	}
+ 
+ 	/* We only compare against the currently granted level
+ 	 * here. If the lock is blocked waiting on a downconvert,
+@@ -595,7 +600,7 @@ int user_dlm_destroy_lock(struct user_lock_res *lockres)
+ 	spin_lock(&lockres->l_lock);
+ 	if (lockres->l_flags & USER_LOCK_IN_TEARDOWN) {
+ 		spin_unlock(&lockres->l_lock);
+-		return 0;
++		goto bail;
+ 	}
+ 
+ 	lockres->l_flags |= USER_LOCK_IN_TEARDOWN;
+@@ -609,12 +614,17 @@ int user_dlm_destroy_lock(struct user_lock_res *lockres)
+ 	}
+ 
+ 	if (lockres->l_ro_holders || lockres->l_ex_holders) {
++		lockres->l_flags &= ~USER_LOCK_IN_TEARDOWN;
+ 		spin_unlock(&lockres->l_lock);
+ 		goto bail;
+ 	}
+ 
+ 	status = 0;
+ 	if (!(lockres->l_flags & USER_LOCK_ATTACHED)) {
++		/*
++		 * The lock was never requested; leave USER_LOCK_IN_TEARDOWN
++		 * set so that no new lock requests come in.
++		 */
+ 		spin_unlock(&lockres->l_lock);
+ 		goto bail;
+ 	}
+@@ -625,6 +635,10 @@ int user_dlm_destroy_lock(struct user_lock_res *lockres)
+ 
+ 	status = ocfs2_dlm_unlock(conn, &lockres->l_lksb, DLM_LKF_VALBLK);
+ 	if (status) {
++		spin_lock(&lockres->l_lock);
++		lockres->l_flags &= ~USER_LOCK_IN_TEARDOWN;
++		lockres->l_flags &= ~USER_LOCK_BUSY;
++		spin_unlock(&lockres->l_lock);
+ 		user_log_dlm_error("ocfs2_dlm_unlock", status, lockres);
+ 		goto bail;
+ 	}
+diff --git a/fs/ocfs2/inode.c b/fs/ocfs2/inode.c
+index 5739dc3015698..bb116c39b5813 100644
+--- a/fs/ocfs2/inode.c
++++ b/fs/ocfs2/inode.c
+@@ -125,6 +125,7 @@ struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 blkno, unsigned flags,
+ 	struct inode *inode = NULL;
+ 	struct super_block *sb = osb->sb;
+ 	struct ocfs2_find_inode_args args;
++	journal_t *journal = osb->journal->j_journal;
+ 
+ 	trace_ocfs2_iget_begin((unsigned long long)blkno, flags,
+ 			       sysfile_type);
+@@ -171,11 +172,10 @@ struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 blkno, unsigned flags,
+ 	 * part of the transaction - the inode could have been reclaimed and
+ 	 * now it is reread from disk.
+ 	 */
+-	if (osb->journal) {
++	if (journal) {
+ 		transaction_t *transaction;
+ 		tid_t tid;
+ 		struct ocfs2_inode_info *oi = OCFS2_I(inode);
+-		journal_t *journal = osb->journal->j_journal;
+ 
+ 		read_lock(&journal->j_state_lock);
+ 		if (journal->j_running_transaction)
+diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
+index 1887a27087097..fa87d89cf7542 100644
+--- a/fs/ocfs2/journal.c
++++ b/fs/ocfs2/journal.c
+@@ -810,22 +810,20 @@ void ocfs2_set_journal_params(struct ocfs2_super *osb)
+ 	write_unlock(&journal->j_state_lock);
+ }
+ 
+-int ocfs2_journal_init(struct ocfs2_super *osb, int *dirty)
++/*
++ * alloc & initialize skeleton for journal structure.
++ * ocfs2_journal_init() will make fs have journal ability.
++ */
++int ocfs2_journal_alloc(struct ocfs2_super *osb)
+ {
+-	int status = -1;
+-	struct inode *inode = NULL; /* the journal inode */
+-	journal_t *j_journal = NULL;
+-	struct ocfs2_journal *journal = NULL;
+-	struct ocfs2_dinode *di = NULL;
+-	struct buffer_head *bh = NULL;
+-	int inode_lock = 0;
++	int status = 0;
++	struct ocfs2_journal *journal;
+ 
+-	/* initialize our journal structure */
+ 	journal = kzalloc(sizeof(struct ocfs2_journal), GFP_KERNEL);
+ 	if (!journal) {
+ 		mlog(ML_ERROR, "unable to alloc journal\n");
+ 		status = -ENOMEM;
+-		goto done;
++		goto bail;
+ 	}
+ 	osb->journal = journal;
+ 	journal->j_osb = osb;
+@@ -839,6 +837,21 @@ int ocfs2_journal_init(struct ocfs2_super *osb, int *dirty)
+ 	INIT_WORK(&journal->j_recovery_work, ocfs2_complete_recovery);
+ 	journal->j_state = OCFS2_JOURNAL_FREE;
+ 
++bail:
++	return status;
++}
++
++int ocfs2_journal_init(struct ocfs2_super *osb, int *dirty)
++{
++	int status = -1;
++	struct inode *inode = NULL; /* the journal inode */
++	journal_t *j_journal = NULL;
++	struct ocfs2_journal *journal = osb->journal;
++	struct ocfs2_dinode *di = NULL;
++	struct buffer_head *bh = NULL;
++	int inode_lock = 0;
++
++	BUG_ON(!journal);
+ 	/* already have the inode for our journal */
+ 	inode = ocfs2_get_system_file_inode(osb, JOURNAL_SYSTEM_INODE,
+ 					    osb->slot_num);
+diff --git a/fs/ocfs2/journal.h b/fs/ocfs2/journal.h
+index 8dcb2f2cadbc5..969d0aa287187 100644
+--- a/fs/ocfs2/journal.h
++++ b/fs/ocfs2/journal.h
+@@ -154,6 +154,7 @@ int ocfs2_compute_replay_slots(struct ocfs2_super *osb);
+  *  Journal Control:
+  *  Initialize, Load, Shutdown, Wipe a journal.
+  *
++ *  ocfs2_journal_alloc    - Allocate and initialize the journal skeleton.
+  *  ocfs2_journal_init     - Initialize journal structures in the OSB.
+  *  ocfs2_journal_load     - Load the given journal off disk. Replay it if
+  *                          there's transactions still in there.
+@@ -167,6 +168,7 @@ int ocfs2_compute_replay_slots(struct ocfs2_super *osb);
+  *  ocfs2_start_checkpoint - Kick the commit thread to do a checkpoint.
+  */
+ void   ocfs2_set_journal_params(struct ocfs2_super *osb);
++int    ocfs2_journal_alloc(struct ocfs2_super *osb);
+ int    ocfs2_journal_init(struct ocfs2_super *osb, int *dirty);
+ void   ocfs2_journal_shutdown(struct ocfs2_super *osb);
+ int    ocfs2_journal_wipe(struct ocfs2_journal *journal,
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index 477cdf94122e7..311433c69a3f5 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -2195,6 +2195,15 @@ static int ocfs2_initialize_super(struct super_block *sb,
+ 
+ 	get_random_bytes(&osb->s_next_generation, sizeof(u32));
+ 
++	/*
++	 * FIXME
++	 * This should be done in ocfs2_journal_init(), but any inode
++	 * writeback operation will cause the filesystem to crash.
++	 */
++	status = ocfs2_journal_alloc(osb);
++	if (status < 0)
++		goto bail;
++
+ 	INIT_WORK(&osb->dquot_drop_work, ocfs2_drop_dquot_refs);
+ 	init_llist_head(&osb->dquot_drop_list);
+ 
+@@ -2483,6 +2492,12 @@ static void ocfs2_delete_osb(struct ocfs2_super *osb)
+ 
+ 	kfree(osb->osb_orphan_wipes);
+ 	kfree(osb->slot_recovery_generations);
++	/* FIXME
++	 * This belongs in journal shutdown, but because we have to
++	 * allocate osb->journal in the middle of ocfs2_initialize_super(),
++	 * we free it here.
++	 */
++	kfree(osb->journal);
+ 	kfree(osb->local_alloc_copy);
+ 	kfree(osb->uuid_str);
+ 	kfree(osb->vol_label);
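Taken together, the ocfs2 hunks split journal setup into an early allocation and a later on-disk initialization, with the matching free moved into ocfs2_delete_osb(). The resulting lifecycle, sketched from the patch (call sites abbreviated):

	ocfs2_initialize_super()  ->  ocfs2_journal_alloc();  /* kzalloc skeleton   */
	/* later on the mount path */ ocfs2_journal_init();   /* open journal inode */
	/* ... */
	ocfs2_delete_osb()        ->  kfree(osb->journal);    /* free skeleton      */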
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index f2132407e1335..587b91d9d998f 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -448,6 +448,9 @@ static struct proc_dir_entry *__proc_create(struct proc_dir_entry **parent,
+ 	proc_set_user(ent, (*parent)->uid, (*parent)->gid);
+ 
+ 	ent->proc_dops = &proc_misc_dentry_ops;
++	/* Revalidate everything under /proc/${pid}/net */
++	if ((*parent)->proc_dops == &proc_net_dentry_ops)
++		pde_force_lookup(ent);
+ 
+ out:
+ 	return ent;
+diff --git a/fs/proc/proc_net.c b/fs/proc/proc_net.c
+index e1cfeda397f3f..913e5acefbb66 100644
+--- a/fs/proc/proc_net.c
++++ b/fs/proc/proc_net.c
+@@ -376,6 +376,9 @@ static __net_init int proc_net_ns_init(struct net *net)
+ 
+ 	proc_set_user(netd, uid, gid);
+ 
++	/* Seed dentry revalidation for /proc/${pid}/net */
++	pde_force_lookup(netd);
++
+ 	err = -EEXIST;
+ 	net_statd = proc_net_mkdir(net, "stat", netd);
+ 	if (!net_statd)
+diff --git a/fs/seq_file.c b/fs/seq_file.c
+index 7ab8a58c29b61..9456a2032224a 100644
+--- a/fs/seq_file.c
++++ b/fs/seq_file.c
+@@ -931,6 +931,38 @@ struct list_head *seq_list_next(void *v, struct list_head *head, loff_t *ppos)
+ }
+ EXPORT_SYMBOL(seq_list_next);
+ 
++struct list_head *seq_list_start_rcu(struct list_head *head, loff_t pos)
++{
++	struct list_head *lh;
++
++	list_for_each_rcu(lh, head)
++		if (pos-- == 0)
++			return lh;
++
++	return NULL;
++}
++EXPORT_SYMBOL(seq_list_start_rcu);
++
++struct list_head *seq_list_start_head_rcu(struct list_head *head, loff_t pos)
++{
++	if (!pos)
++		return head;
++
++	return seq_list_start_rcu(head, pos - 1);
++}
++EXPORT_SYMBOL(seq_list_start_head_rcu);
++
++struct list_head *seq_list_next_rcu(void *v, struct list_head *head,
++				    loff_t *ppos)
++{
++	struct list_head *lh;
++
++	lh = list_next_rcu((struct list_head *)v);
++	++*ppos;
++	return lh == head ? NULL : lh;
++}
++EXPORT_SYMBOL(seq_list_next_rcu);
++
+ /**
+  * seq_hlist_start - start an iteration of a hlist
+  * @head: the head of the hlist
+diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h
+index 144c495b99c48..d6b2aeb342119 100644
+--- a/include/drm/drm_edid.h
++++ b/include/drm/drm_edid.h
+@@ -121,7 +121,7 @@ struct detailed_data_monitor_range {
+ 			u8 supported_scalings;
+ 			u8 preferred_refresh;
+ 		} __attribute__((packed)) cvt;
+-	} formula;
++	} __attribute__((packed)) formula;
+ } __attribute__((packed));
+ 
+ struct detailed_data_wpindex {
+@@ -154,7 +154,7 @@ struct detailed_non_pixel {
+ 		struct detailed_data_wpindex color;
+ 		struct std_timing timings[6];
+ 		struct cvt_timing cvt[4];
+-	} data;
++	} __attribute__((packed)) data;
+ } __attribute__((packed));
+ 
+ #define EDID_DETAIL_EST_TIMINGS 0xf7
+@@ -172,7 +172,7 @@ struct detailed_timing {
+ 	union {
+ 		struct detailed_pixel_timing pixel_data;
+ 		struct detailed_non_pixel other_data;
+-	} data;
++	} __attribute__((packed)) data;
+ } __attribute__((packed));
+ 
+ #define DRM_EDID_INPUT_SERRATION_VSYNC (1 << 0)
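The nested packed attributes matter because packing only the outermost struct does not keep the inner anonymous unions from being padded up to their natural alignment on ABIs such as ARM OABI, which inflates the structures past the 18 bytes an EDID detailed-timing descriptor occupies on the wire. A hypothetical compile-time check (not part of the patch) that would catch the regression:

	BUILD_BUG_ON(sizeof(struct detailed_timing) != 18);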
+diff --git a/include/drm/drm_format_helper.h b/include/drm/drm_format_helper.h
+index 0b0937c0b2f63..55145eca07828 100644
+--- a/include/drm/drm_format_helper.h
++++ b/include/drm/drm_format_helper.h
+@@ -43,8 +43,7 @@ int drm_fb_blit_toio(void __iomem *dst, unsigned int dst_pitch, uint32_t dst_for
+ 		     const void *vmap, const struct drm_framebuffer *fb,
+ 		     const struct drm_rect *rect);
+ 
+-void drm_fb_xrgb8888_to_mono_reversed(void *dst, unsigned int dst_pitch, const void *src,
+-				      const struct drm_framebuffer *fb,
+-				      const struct drm_rect *clip);
++void drm_fb_xrgb8888_to_mono(void *dst, unsigned int dst_pitch, const void *src,
++			     const struct drm_framebuffer *fb, const struct drm_rect *clip);
+ 
+ #endif /* __LINUX_DRM_FORMAT_HELPER_H */
+diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
+index 1973ef9bd40fc..4fa359c2c01fb 100644
+--- a/include/linux/blk_types.h
++++ b/include/linux/blk_types.h
+@@ -246,9 +246,8 @@ typedef unsigned int blk_qc_t;
+ struct bio {
+ 	struct bio		*bi_next;	/* request queue link */
+ 	struct block_device	*bi_bdev;
+-	unsigned int		bi_opf;		/* bottom bits req flags,
+-						 * top bits REQ_OP. Use
+-						 * accessors.
++	unsigned int		bi_opf;		/* bottom bits REQ_OP, top bits
++						 * req_flags.
+ 						 */
+ 	unsigned short		bi_flags;	/* BIO_* below */
+ 	unsigned short		bi_ioprio;
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index bdb5298735ce9..83bd5598ec4df 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -672,7 +672,7 @@ struct btf_func_model {
+ #define BPF_TRAMP_F_RET_FENTRY_RET	BIT(4)
+ 
+ /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
+- * bytes on x86.  Pick a number to fit into BPF_IMAGE_SIZE / 2
++ * bytes on x86.
+  */
+ #define BPF_MAX_TRAMP_PROGS 38
+ 
+@@ -1221,7 +1221,7 @@ u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
+ /* an array of programs to be executed under rcu_lock.
+  *
+  * Typical usage:
+- * ret = BPF_PROG_RUN_ARRAY(&bpf_prog_array, ctx, bpf_prog_run);
++ * ret = bpf_prog_run_array(rcu_dereference(&bpf_prog_array), ctx, bpf_prog_run);
+  *
+  * the structure returned by bpf_prog_array_alloc() should be populated
+  * with program pointers and the last pointer must be NULL.
+@@ -1315,83 +1315,22 @@ static inline void bpf_reset_run_ctx(struct bpf_run_ctx *old_ctx)
+ 
+ typedef u32 (*bpf_prog_run_fn)(const struct bpf_prog *prog, const void *ctx);
+ 
+-static __always_inline int
+-BPF_PROG_RUN_ARRAY_CG_FLAGS(const struct bpf_prog_array __rcu *array_rcu,
+-			    const void *ctx, bpf_prog_run_fn run_prog,
+-			    int retval, u32 *ret_flags)
+-{
+-	const struct bpf_prog_array_item *item;
+-	const struct bpf_prog *prog;
+-	const struct bpf_prog_array *array;
+-	struct bpf_run_ctx *old_run_ctx;
+-	struct bpf_cg_run_ctx run_ctx;
+-	u32 func_ret;
+-
+-	run_ctx.retval = retval;
+-	migrate_disable();
+-	rcu_read_lock();
+-	array = rcu_dereference(array_rcu);
+-	item = &array->items[0];
+-	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
+-	while ((prog = READ_ONCE(item->prog))) {
+-		run_ctx.prog_item = item;
+-		func_ret = run_prog(prog, ctx);
+-		if (!(func_ret & 1) && !IS_ERR_VALUE((long)run_ctx.retval))
+-			run_ctx.retval = -EPERM;
+-		*(ret_flags) |= (func_ret >> 1);
+-		item++;
+-	}
+-	bpf_reset_run_ctx(old_run_ctx);
+-	rcu_read_unlock();
+-	migrate_enable();
+-	return run_ctx.retval;
+-}
+-
+-static __always_inline int
+-BPF_PROG_RUN_ARRAY_CG(const struct bpf_prog_array __rcu *array_rcu,
+-		      const void *ctx, bpf_prog_run_fn run_prog,
+-		      int retval)
+-{
+-	const struct bpf_prog_array_item *item;
+-	const struct bpf_prog *prog;
+-	const struct bpf_prog_array *array;
+-	struct bpf_run_ctx *old_run_ctx;
+-	struct bpf_cg_run_ctx run_ctx;
+-
+-	run_ctx.retval = retval;
+-	migrate_disable();
+-	rcu_read_lock();
+-	array = rcu_dereference(array_rcu);
+-	item = &array->items[0];
+-	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
+-	while ((prog = READ_ONCE(item->prog))) {
+-		run_ctx.prog_item = item;
+-		if (!run_prog(prog, ctx) && !IS_ERR_VALUE((long)run_ctx.retval))
+-			run_ctx.retval = -EPERM;
+-		item++;
+-	}
+-	bpf_reset_run_ctx(old_run_ctx);
+-	rcu_read_unlock();
+-	migrate_enable();
+-	return run_ctx.retval;
+-}
+-
+ static __always_inline u32
+-BPF_PROG_RUN_ARRAY(const struct bpf_prog_array __rcu *array_rcu,
++bpf_prog_run_array(const struct bpf_prog_array *array,
+ 		   const void *ctx, bpf_prog_run_fn run_prog)
+ {
+ 	const struct bpf_prog_array_item *item;
+ 	const struct bpf_prog *prog;
+-	const struct bpf_prog_array *array;
+ 	struct bpf_run_ctx *old_run_ctx;
+ 	struct bpf_trace_run_ctx run_ctx;
+ 	u32 ret = 1;
+ 
+-	migrate_disable();
+-	rcu_read_lock();
+-	array = rcu_dereference(array_rcu);
++	RCU_LOCKDEP_WARN(!rcu_read_lock_held(), "no rcu lock held");
++
+ 	if (unlikely(!array))
+-		goto out;
++		return ret;
++
++	migrate_disable();
+ 	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
+ 	item = &array->items[0];
+ 	while ((prog = READ_ONCE(item->prog))) {
+@@ -1400,50 +1339,10 @@ BPF_PROG_RUN_ARRAY(const struct bpf_prog_array __rcu *array_rcu,
+ 		item++;
+ 	}
+ 	bpf_reset_run_ctx(old_run_ctx);
+-out:
+-	rcu_read_unlock();
+ 	migrate_enable();
+ 	return ret;
+ }
+ 
+-/* To be used by __cgroup_bpf_run_filter_skb for EGRESS BPF progs
+- * so BPF programs can request cwr for TCP packets.
+- *
+- * Current cgroup skb programs can only return 0 or 1 (0 to drop the
+- * packet. This macro changes the behavior so the low order bit
+- * indicates whether the packet should be dropped (0) or not (1)
+- * and the next bit is a congestion notification bit. This could be
+- * used by TCP to call tcp_enter_cwr()
+- *
+- * Hence, new allowed return values of CGROUP EGRESS BPF programs are:
+- *   0: drop packet
+- *   1: keep packet
+- *   2: drop packet and cn
+- *   3: keep packet and cn
+- *
+- * This macro then converts it to one of the NET_XMIT or an error
+- * code that is then interpreted as drop packet (and no cn):
+- *   0: NET_XMIT_SUCCESS  skb should be transmitted
+- *   1: NET_XMIT_DROP     skb should be dropped and cn
+- *   2: NET_XMIT_CN       skb should be transmitted and cn
+- *   3: -err              skb should be dropped
+- */
+-#define BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY(array, ctx, func)		\
+-	({						\
+-		u32 _flags = 0;				\
+-		bool _cn;				\
+-		u32 _ret;				\
+-		_ret = BPF_PROG_RUN_ARRAY_CG_FLAGS(array, ctx, func, 0, &_flags); \
+-		_cn = _flags & BPF_RET_SET_CN;		\
+-		if (_ret && !IS_ERR_VALUE((long)_ret))	\
+-			_ret = -EFAULT;			\
+-		if (!_ret)				\
+-			_ret = (_cn ? NET_XMIT_CN : NET_XMIT_SUCCESS);	\
+-		else					\
+-			_ret = (_cn ? NET_XMIT_DROP : _ret);		\
+-		_ret;					\
+-	})
+-
+ #ifdef CONFIG_BPF_SYSCALL
+ DECLARE_PER_CPU(int, bpf_prog_active);
+ extern struct mutex bpf_stats_enabled_mutex;
+@@ -2085,6 +1984,8 @@ void bpf_offload_dev_netdev_unregister(struct bpf_offload_dev *offdev,
+ 				       struct net_device *netdev);
+ bool bpf_offload_dev_match(struct bpf_prog *prog, struct net_device *netdev);
+ 
++void unpriv_ebpf_notify(int new_state);
++
+ #if defined(CONFIG_NET) && defined(CONFIG_BPF_SYSCALL)
+ int bpf_prog_offload_init(struct bpf_prog *prog, union bpf_attr *attr);
+ 
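The bpf_prog_run_array() rewrite above moves RCU locking out to the caller: the function now takes an already-dereferenced array and asserts rcu_read_lock_held() rather than taking the lock itself. A sketch of the resulting call pattern, where prog_array stands in for whatever __rcu pointer the caller owns:

	u32 ret;

	rcu_read_lock();
	ret = bpf_prog_run_array(rcu_dereference(prog_array), ctx,
				 bpf_prog_run);
	rcu_read_unlock();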
+diff --git a/include/linux/compat.h b/include/linux/compat.h
+index 1c758b0e03598..01fddf72a81f0 100644
+--- a/include/linux/compat.h
++++ b/include/linux/compat.h
+@@ -235,6 +235,7 @@ typedef struct compat_siginfo {
+ 				struct {
+ 					compat_ulong_t _data;
+ 					u32 _type;
++					u32 _flags;
+ 				} _perf;
+ 			};
+ 		} _sigfault;
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index ccd4d3f91c98c..cc6d2be2ffd51 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -213,6 +213,8 @@ struct capsule_info {
+ 	size_t			page_bytes_remain;
+ };
+ 
++int efi_capsule_setup_info(struct capsule_info *cap_info, void *kbuff,
++                           size_t hdr_bytes);
+ int __efi_capsule_setup_info(struct capsule_info *cap_info);
+ 
+ /*
+diff --git a/include/linux/fwnode.h b/include/linux/fwnode.h
+index 3a532ba66f6c2..7defac04f9a38 100644
+--- a/include/linux/fwnode.h
++++ b/include/linux/fwnode.h
+@@ -148,12 +148,12 @@ struct fwnode_operations {
+ 	int (*add_links)(struct fwnode_handle *fwnode);
+ };
+ 
+-#define fwnode_has_op(fwnode, op)				\
+-	((fwnode) && (fwnode)->ops && (fwnode)->ops->op)
++#define fwnode_has_op(fwnode, op)					\
++	(!IS_ERR_OR_NULL(fwnode) && (fwnode)->ops && (fwnode)->ops->op)
++
+ #define fwnode_call_int_op(fwnode, op, ...)				\
+-	(fwnode ? (fwnode_has_op(fwnode, op) ?				\
+-		   (fwnode)->ops->op(fwnode, ## __VA_ARGS__) : -ENXIO) : \
+-	 -EINVAL)
++	(fwnode_has_op(fwnode, op) ?					\
++	 (fwnode)->ops->op(fwnode, ## __VA_ARGS__) : (IS_ERR_OR_NULL(fwnode) ? -EINVAL : -ENXIO))
+ 
+ #define fwnode_call_bool_op(fwnode, op, ...)		\
+ 	(fwnode_has_op(fwnode, op) ?			\
+diff --git a/include/linux/goldfish.h b/include/linux/goldfish.h
+index 12be1601fd845..bcc17f95b9066 100644
+--- a/include/linux/goldfish.h
++++ b/include/linux/goldfish.h
+@@ -8,14 +8,21 @@
+ 
+ /* Helpers for Goldfish virtual platform */
+ 
++#ifndef gf_ioread32
++#define gf_ioread32 ioread32
++#endif
++#ifndef gf_iowrite32
++#define gf_iowrite32 iowrite32
++#endif
++
+ static inline void gf_write_ptr(const void *ptr, void __iomem *portl,
+ 				void __iomem *porth)
+ {
+ 	const unsigned long addr = (unsigned long)ptr;
+ 
+-	__raw_writel(lower_32_bits(addr), portl);
++	gf_iowrite32(lower_32_bits(addr), portl);
+ #ifdef CONFIG_64BIT
+-	__raw_writel(upper_32_bits(addr), porth);
++	gf_iowrite32(upper_32_bits(addr), porth);
+ #endif
+ }
+ 
+@@ -23,9 +30,9 @@ static inline void gf_write_dma_addr(const dma_addr_t addr,
+ 				     void __iomem *portl,
+ 				     void __iomem *porth)
+ {
+-	__raw_writel(lower_32_bits(addr), portl);
++	gf_iowrite32(lower_32_bits(addr), portl);
+ #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+-	__raw_writel(upper_32_bits(addr), porth);
++	gf_iowrite32(upper_32_bits(addr), porth);
+ #endif
+ }
+ 
+diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
+index 874aabd270c9b..48d03eb4e5d8a 100644
+--- a/include/linux/gpio/driver.h
++++ b/include/linux/gpio/driver.h
+@@ -501,6 +501,18 @@ struct gpio_chip {
+ 	 */
+ 	int (*of_xlate)(struct gpio_chip *gc,
+ 			const struct of_phandle_args *gpiospec, u32 *flags);
++
++	/**
++	 * @of_gpio_ranges_fallback:
++	 *
++	 * Optional hook for the case that no gpio-ranges property is defined
++	 * within the device tree node "np" (usually DT before introduction
++	 * of gpio-ranges). So this callback is helpful to provide the
++	 * necessary backward compatibility for the pin ranges.
++	 */
++	int (*of_gpio_ranges_fallback)(struct gpio_chip *gc,
++				       struct device_node *np);
++
+ #endif /* CONFIG_OF_GPIO */
+ };
+ 
+diff --git a/include/linux/ipmi_smi.h b/include/linux/ipmi_smi.h
+index 9277d21c2690c..5d69820d8b027 100644
+--- a/include/linux/ipmi_smi.h
++++ b/include/linux/ipmi_smi.h
+@@ -125,6 +125,12 @@ struct ipmi_smi_msg {
+ 	void (*done)(struct ipmi_smi_msg *msg);
+ };
+ 
++#define INIT_IPMI_SMI_MSG(done_handler) \
++{						\
++	.done = done_handler,			\
++	.type = IPMI_SMI_MSG_TYPE_NORMAL	\
++}
++
+ struct ipmi_smi_handlers {
+ 	struct module *owner;
+ 
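INIT_IPMI_SMI_MSG lets the statically pre-allocated messages used on panic and poweroff paths be initialized at build time with their completion handler and the normal message type. A sketch, with my_done as a placeholder callback:

	static void my_done(struct ipmi_smi_msg *msg)
	{
	}

	static struct ipmi_smi_msg halt_msg = INIT_IPMI_SMI_MSG(my_done);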
+diff --git a/include/linux/kexec.h b/include/linux/kexec.h
+index 58d1b58a971e3..fcd5035209f19 100644
+--- a/include/linux/kexec.h
++++ b/include/linux/kexec.h
+@@ -193,14 +193,6 @@ void *kexec_purgatory_get_symbol_addr(struct kimage *image, const char *name);
+ int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
+ 				  unsigned long buf_len);
+ void *arch_kexec_kernel_image_load(struct kimage *image);
+-int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
+-				     Elf_Shdr *section,
+-				     const Elf_Shdr *relsec,
+-				     const Elf_Shdr *symtab);
+-int arch_kexec_apply_relocations(struct purgatory_info *pi,
+-				 Elf_Shdr *section,
+-				 const Elf_Shdr *relsec,
+-				 const Elf_Shdr *symtab);
+ int arch_kimage_file_post_load_cleanup(struct kimage *image);
+ #ifdef CONFIG_KEXEC_SIG
+ int arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
+@@ -229,6 +221,44 @@ extern int crash_exclude_mem_range(struct crash_mem *mem,
+ 				   unsigned long long mend);
+ extern int crash_prepare_elf64_headers(struct crash_mem *mem, int kernel_map,
+ 				       void **addr, unsigned long *sz);
++
++#ifndef arch_kexec_apply_relocations_add
++/*
++ * arch_kexec_apply_relocations_add - apply relocations of type RELA
++ * @pi:		Purgatory to be relocated.
++ * @section:	Section the relocations apply to.
++ * @relsec:	Section containing RELAs.
++ * @symtab:	Corresponding symtab.
++ *
++ * Return: 0 on success, negative errno on error.
++ */
++static inline int
++arch_kexec_apply_relocations_add(struct purgatory_info *pi, Elf_Shdr *section,
++				 const Elf_Shdr *relsec, const Elf_Shdr *symtab)
++{
++	pr_err("RELA relocation unsupported.\n");
++	return -ENOEXEC;
++}
++#endif
++
++#ifndef arch_kexec_apply_relocations
++/*
++ * arch_kexec_apply_relocations - apply relocations of type REL
++ * @pi:		Purgatory to be relocated.
++ * @section:	Section the relocations apply to.
++ * @relsec:	Section containing RELs.
++ * @symtab:	Corresponding symtab.
++ *
++ * Return: 0 on success, negative errno on error.
++ */
++static inline int
++arch_kexec_apply_relocations(struct purgatory_info *pi, Elf_Shdr *section,
++			     const Elf_Shdr *relsec, const Elf_Shdr *symtab)
++{
++	pr_err("REL relocation unsupported.\n");
++	return -ENOEXEC;
++}
++#endif
+ #endif /* CONFIG_KEXEC_FILE */
+ 
+ #ifdef CONFIG_KEXEC_ELF
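These kexec hunks turn the two weak relocation hooks into header-provided #ifndef defaults, so an architecture now overrides them by supplying its own function and defining the macro to itself, per the usual kernel convention. Sketch of the opt-in in a (hypothetical) <asm/kexec.h>:

	int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
					     Elf_Shdr *section,
					     const Elf_Shdr *relsec,
					     const Elf_Shdr *symtab);
	/* Suppress the generic -ENOEXEC fallback above: */
	#define arch_kexec_apply_relocations_add arch_kexec_apply_relocations_add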
+diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
+index 157168769fc26..55041d2f884de 100644
+--- a/include/linux/kprobes.h
++++ b/include/linux/kprobes.h
+@@ -424,7 +424,7 @@ void unregister_kretprobe(struct kretprobe *rp);
+ int register_kretprobes(struct kretprobe **rps, int num);
+ void unregister_kretprobes(struct kretprobe **rps, int num);
+ 
+-#ifdef CONFIG_KRETPROBE_ON_RETHOOK
++#if defined(CONFIG_KRETPROBE_ON_RETHOOK) || !defined(CONFIG_KRETPROBES)
+ #define kprobe_flush_task(tk)	do {} while (0)
+ #else
+ void kprobe_flush_task(struct task_struct *tk);
+diff --git a/include/linux/linkage.h b/include/linux/linkage.h
+index acb1ad2356f1b..1feab6136b5b5 100644
+--- a/include/linux/linkage.h
++++ b/include/linux/linkage.h
+@@ -171,12 +171,9 @@
+ 
+ /* SYM_ALIAS -- use only if you have to */
+ #ifndef SYM_ALIAS
+-#define SYM_ALIAS(alias, name, sym_type, linkage)			\
+-	linkage(alias) ASM_NL						\
+-	.set alias, name ASM_NL						\
+-	.type alias sym_type ASM_NL					\
+-	.set .L__sym_size_##alias, .L__sym_size_##name ASM_NL		\
+-	.size alias, .L__sym_size_##alias
++#define SYM_ALIAS(alias, name, linkage)			\
++	linkage(alias) ASM_NL				\
++	.set alias, name ASM_NL
+ #endif
+ 
+ /* === code annotations === */
+@@ -261,7 +258,7 @@
+  */
+ #ifndef SYM_FUNC_ALIAS
+ #define SYM_FUNC_ALIAS(alias, name)					\
+-	SYM_ALIAS(alias, name, SYM_T_FUNC, SYM_L_GLOBAL)
++	SYM_ALIAS(alias, name, SYM_L_GLOBAL)
+ #endif
+ 
+ /*
+@@ -269,7 +266,7 @@
+  */
+ #ifndef SYM_FUNC_ALIAS_LOCAL
+ #define SYM_FUNC_ALIAS_LOCAL(alias, name)				\
+-	SYM_ALIAS(alias, name, SYM_T_FUNC, SYM_L_LOCAL)
++	SYM_ALIAS(alias, name, SYM_L_LOCAL)
+ #endif
+ 
+ /*
+@@ -277,7 +274,7 @@
+  */
+ #ifndef SYM_FUNC_ALIAS_WEAK
+ #define SYM_FUNC_ALIAS_WEAK(alias, name)				\
+-	SYM_ALIAS(alias, name, SYM_T_FUNC, SYM_L_WEAK)
++	SYM_ALIAS(alias, name, SYM_L_WEAK)
+ #endif
+ 
+ /* SYM_CODE_START -- use for non-C (special) functions */
+diff --git a/include/linux/list.h b/include/linux/list.h
+index dd6c2041d09c1..0df13cb03028b 100644
+--- a/include/linux/list.h
++++ b/include/linux/list.h
+@@ -35,7 +35,7 @@
+ static inline void INIT_LIST_HEAD(struct list_head *list)
+ {
+ 	WRITE_ONCE(list->next, list);
+-	list->prev = list;
++	WRITE_ONCE(list->prev, list);
+ }
+ 
+ #ifdef CONFIG_DEBUG_LIST
+@@ -306,7 +306,7 @@ static inline int list_empty(const struct list_head *head)
+ static inline void list_del_init_careful(struct list_head *entry)
+ {
+ 	__list_del_entry(entry);
+-	entry->prev = entry;
++	WRITE_ONCE(entry->prev, entry);
+ 	smp_store_release(&entry->next, entry);
+ }
+ 
+@@ -326,7 +326,7 @@ static inline void list_del_init_careful(struct list_head *entry)
+ static inline int list_empty_careful(const struct list_head *head)
+ {
+ 	struct list_head *next = smp_load_acquire(&head->next);
+-	return list_is_head(next, head) && (next == head->prev);
++	return list_is_head(next, head) && (next == READ_ONCE(head->prev));
+ }
+ 
+ /**
+@@ -579,6 +579,16 @@ static inline void list_splice_tail_init(struct list_head *list,
+ #define list_for_each(pos, head) \
+ 	for (pos = (head)->next; !list_is_head(pos, (head)); pos = pos->next)
+ 
++/**
++ * list_for_each_rcu - Iterate over a list in an RCU-safe fashion
++ * @pos:	the &struct list_head to use as a loop cursor.
++ * @head:	the head for your list.
++ */
++#define list_for_each_rcu(pos, head)		  \
++	for (pos = rcu_dereference((head)->next); \
++	     !list_is_head(pos, (head)); \
++	     pos = rcu_dereference(pos->next))
++
+ /**
+  * list_for_each_continue - continue iteration over a list
+  * @pos:	the &struct list_head to use as a loop cursor.
+diff --git a/include/linux/mailbox_controller.h b/include/linux/mailbox_controller.h
+index 36d6ce673503c..6fee33cb52f58 100644
+--- a/include/linux/mailbox_controller.h
++++ b/include/linux/mailbox_controller.h
+@@ -83,6 +83,7 @@ struct mbox_controller {
+ 				      const struct of_phandle_args *sp);
+ 	/* Internal to API */
+ 	struct hrtimer poll_hrt;
++	spinlock_t poll_hrt_lock;
+ 	struct list_head node;
+ };
+ 
+diff --git a/include/linux/module.h b/include/linux/module.h
+index 1e135fd5c076a..d5e9066990ca0 100644
+--- a/include/linux/module.h
++++ b/include/linux/module.h
+@@ -290,8 +290,7 @@ extern typeof(name) __mod_##type##__##name##_device_table		\
+  * files require multiple MODULE_FIRMWARE() specifiers */
+ #define MODULE_FIRMWARE(_firmware) MODULE_INFO(firmware, _firmware)
+ 
+-#define _MODULE_IMPORT_NS(ns)	MODULE_INFO(import_ns, #ns)
+-#define MODULE_IMPORT_NS(ns)	_MODULE_IMPORT_NS(ns)
++#define MODULE_IMPORT_NS(ns)	MODULE_INFO(import_ns, __stringify(ns))
+ 
+ struct notifier_block;
+ 
+diff --git a/include/linux/mtd/cfi.h b/include/linux/mtd/cfi.h
+index fd1ecb8211060..d88bb56c18e2e 100644
+--- a/include/linux/mtd/cfi.h
++++ b/include/linux/mtd/cfi.h
+@@ -286,6 +286,7 @@ struct cfi_private {
+ 	map_word sector_erase_cmd;
+ 	unsigned long chipshift; /* Because they're of the same type */
+ 	const char *im_name;	 /* inter_module name for cmdset_setup */
++	unsigned long quirks;
+ 	struct flchip chips[];  /* per-chip data structure for each chip */
+ };
+ 
+diff --git a/include/linux/namei.h b/include/linux/namei.h
+index e89329bb3134e..caeb08a98536c 100644
+--- a/include/linux/namei.h
++++ b/include/linux/namei.h
+@@ -69,6 +69,12 @@ extern struct dentry *lookup_one_len(const char *, struct dentry *, int);
+ extern struct dentry *lookup_one_len_unlocked(const char *, struct dentry *, int);
+ extern struct dentry *lookup_positive_unlocked(const char *, struct dentry *, int);
+ struct dentry *lookup_one(struct user_namespace *, const char *, struct dentry *, int);
++struct dentry *lookup_one_unlocked(struct user_namespace *mnt_userns,
++				   const char *name, struct dentry *base,
++				   int len);
++struct dentry *lookup_one_positive_unlocked(struct user_namespace *mnt_userns,
++					    const char *name,
++					    struct dentry *base, int len);
+ 
+ extern int follow_down_one(struct path *);
+ extern int follow_down(struct path *);
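The new lookup_one_unlocked()/lookup_one_positive_unlocked() declarations are idmapped-mount-aware counterparts of the existing *_len_unlocked helpers, taking the mount's user namespace explicitly. A sketch of a call site, where mnt_userns, name, and parent are whatever the caller already holds:

	struct dentry *child;

	child = lookup_one_positive_unlocked(mnt_userns, name, parent,
					     strlen(name));
	if (IS_ERR(child))
		return PTR_ERR(child);
	/* ... use child, then drop the reference ... */
	dput(child);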
+diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
+index 157d2bd6b2417..ea2f7e6b1b0b5 100644
+--- a/include/linux/nfs_fs_sb.h
++++ b/include/linux/nfs_fs_sb.h
+@@ -287,4 +287,5 @@ struct nfs_server {
+ #define NFS_CAP_XATTR		(1U << 28)
+ #define NFS_CAP_READ_PLUS	(1U << 29)
+ #define NFS_CAP_FS_LOCATIONS	(1U << 30)
++#define NFS_CAP_MOVEABLE	(1U << 31)
+ #endif
+diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
+index 2863e5a69c6ab..20e97329fe463 100644
+--- a/include/linux/nfs_xdr.h
++++ b/include/linux/nfs_xdr.h
+@@ -1212,7 +1212,7 @@ struct nfs4_fs_location {
+ 
+ #define NFS4_FS_LOCATIONS_MAXENTRIES 10
+ struct nfs4_fs_locations {
+-	struct nfs_fattr fattr;
++	struct nfs_fattr *fattr;
+ 	const struct nfs_server *server;
+ 	struct nfs4_pathname fs_path;
+ 	int nlocations;
+diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
+index 567c3ddba2c42..c6199dbe25913 100644
+--- a/include/linux/nodemask.h
++++ b/include/linux/nodemask.h
+@@ -375,14 +375,13 @@ static inline void __nodes_fold(nodemask_t *dstp, const nodemask_t *origp,
+ }
+ 
+ #if MAX_NUMNODES > 1
+-#define for_each_node_mask(node, mask)			\
+-	for ((node) = first_node(mask);			\
+-		(node) < MAX_NUMNODES;			\
+-		(node) = next_node((node), (mask)))
++#define for_each_node_mask(node, mask)				    \
++	for ((node) = first_node(mask);				    \
++	     (node >= 0) && (node) < MAX_NUMNODES;		    \
++	     (node) = next_node((node), (mask)))
+ #else /* MAX_NUMNODES == 1 */
+-#define for_each_node_mask(node, mask)			\
+-	if (!nodes_empty(mask))				\
+-		for ((node) = 0; (node) < 1; (node)++)
++#define for_each_node_mask(node, mask)                                  \
++	for ((node) = 0; (node) < 1 && !nodes_empty(mask); (node)++)
+ #endif /* MAX_NUMNODES */
+ 
+ /*
+diff --git a/include/linux/platform_data/cros_ec_proto.h b/include/linux/platform_data/cros_ec_proto.h
+index df3c78c92ca2f..16931569adce1 100644
+--- a/include/linux/platform_data/cros_ec_proto.h
++++ b/include/linux/platform_data/cros_ec_proto.h
+@@ -216,6 +216,9 @@ int cros_ec_prepare_tx(struct cros_ec_device *ec_dev,
+ int cros_ec_check_result(struct cros_ec_device *ec_dev,
+ 			 struct cros_ec_command *msg);
+ 
++int cros_ec_cmd_xfer(struct cros_ec_device *ec_dev,
++		     struct cros_ec_command *msg);
++
+ int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev,
+ 			    struct cros_ec_command *msg);
+ 
+diff --git a/include/linux/ptp_classify.h b/include/linux/ptp_classify.h
+index fefa7790dc460..2b6ea36ad1626 100644
+--- a/include/linux/ptp_classify.h
++++ b/include/linux/ptp_classify.h
+@@ -43,6 +43,9 @@
+ #define OFF_PTP_SOURCE_UUID	22 /* PTPv1 only */
+ #define OFF_PTP_SEQUENCE_ID	30
+ 
++/* PTP header flag fields */
++#define PTP_FLAG_TWOSTEP	BIT(1)
++
+ /* Below defines should actually be removed at some point in time. */
+ #define IP6_HLEN	40
+ #define UDP_HLEN	8
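PTP_FLAG_TWOSTEP names bit 1 of the first flag-field octet in a PTPv2 header (the twoStepFlag), which tells a driver whether the transmit timestamp arrives in a follow-up message. A sketch of a driver-side check, assuming skb and the classification type from ptp_classify_raw():

	struct ptp_header *hdr = ptp_parse_header(skb, type);

	if (hdr && (hdr->flag_field[0] & PTP_FLAG_TWOSTEP))
		; /* two-step: timestamp comes in a follow-up message */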
+diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
+index 15b3d176b6b4d..c952c5ba8fab6 100644
+--- a/include/linux/ptrace.h
++++ b/include/linux/ptrace.h
+@@ -30,7 +30,6 @@ extern int ptrace_access_vm(struct task_struct *tsk, unsigned long addr,
+ 
+ #define PT_SEIZED	0x00010000	/* SEIZE used, enable new behavior */
+ #define PT_PTRACED	0x00000001
+-#define PT_DTRACE	0x00000002	/* delayed trace (used on m68k, i386) */
+ 
+ #define PT_OPT_FLAG_SHIFT	3
+ /* PT_TRACE_* event enable flags */
+@@ -47,12 +46,6 @@ extern int ptrace_access_vm(struct task_struct *tsk, unsigned long addr,
+ #define PT_EXITKILL		(PTRACE_O_EXITKILL << PT_OPT_FLAG_SHIFT)
+ #define PT_SUSPEND_SECCOMP	(PTRACE_O_SUSPEND_SECCOMP << PT_OPT_FLAG_SHIFT)
+ 
+-/* single stepping state bits (used on ARM and PA-RISC) */
+-#define PT_SINGLESTEP_BIT	31
+-#define PT_SINGLESTEP		(1<<PT_SINGLESTEP_BIT)
+-#define PT_BLOCKSTEP_BIT	30
+-#define PT_BLOCKSTEP		(1<<PT_BLOCKSTEP_BIT)
+-
+ extern long arch_ptrace(struct task_struct *child, long request,
+ 			unsigned long addr, unsigned long data);
+ extern int ptrace_readdata(struct task_struct *tsk, unsigned long src, char __user *dst, int len);
+diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
+index 3c8b34876744b..bab7cc56b13a8 100644
+--- a/include/linux/sched/signal.h
++++ b/include/linux/sched/signal.h
+@@ -320,7 +320,7 @@ int send_sig_mceerr(int code, void __user *, short, struct task_struct *);
+ 
+ int force_sig_bnderr(void __user *addr, void __user *lower, void __user *upper);
+ int force_sig_pkuerr(void __user *addr, u32 pkey);
+-int force_sig_perf(void __user *addr, u32 type, u64 sig_data);
++int send_sig_perf(void __user *addr, u32 type, u64 sig_data);
+ 
+ int force_sig_ptrace_errno_trap(int errno, void __user *addr);
+ int force_sig_fault_trapno(int sig, int code, void __user *addr, int trapno);
+diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
+index 719c9a6cac8d8..4492266935dd7 100644
+--- a/include/linux/sched/task.h
++++ b/include/linux/sched/task.h
+@@ -32,6 +32,7 @@ struct kernel_clone_args {
+ 	size_t set_tid_size;
+ 	int cgroup;
+ 	int io_thread;
++	int kthread;
+ 	struct cgroup *cgrp;
+ 	struct css_set *cset;
+ };
+@@ -89,6 +90,7 @@ struct task_struct *create_io_thread(int (*fn)(void *), void *arg, int node);
+ struct task_struct *fork_idle(int);
+ struct mm_struct *copy_init_mm(void);
+ extern pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
++extern pid_t user_mode_thread(int (*fn)(void *), void *arg, unsigned long flags);
+ extern long kernel_wait4(pid_t, int __user *, int, struct rusage *);
+ int kernel_wait(pid_t pid, int *stat);
+ 
+diff --git a/include/linux/seq_file.h b/include/linux/seq_file.h
+index 60820ab511d22..bd023dd38ae6f 100644
+--- a/include/linux/seq_file.h
++++ b/include/linux/seq_file.h
+@@ -277,6 +277,10 @@ extern struct list_head *seq_list_start_head(struct list_head *head,
+ extern struct list_head *seq_list_next(void *v, struct list_head *head,
+ 		loff_t *ppos);
+ 
++extern struct list_head *seq_list_start_rcu(struct list_head *head, loff_t pos);
++extern struct list_head *seq_list_start_head_rcu(struct list_head *head, loff_t pos);
++extern struct list_head *seq_list_next_rcu(void *v, struct list_head *head, loff_t *ppos);
++
+ /*
+  * Helpers for iteration over hlist_head-s in seq_files
+  */
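These _rcu variants let a seq_file iterator walk a list whose writers are
serialized elsewhere, so the reader only needs rcu_read_lock(). A minimal
sketch of the intended callback wiring, assuming a hypothetical my_list
updated under RCU:

    static void *my_seq_start(struct seq_file *m, loff_t *pos)
            __acquires(RCU)
    {
            rcu_read_lock();
            return seq_list_start_rcu(&my_list, *pos);
    }

    static void *my_seq_next(struct seq_file *m, void *v, loff_t *pos)
    {
            return seq_list_next_rcu(v, &my_list, pos);
    }

    static void my_seq_stop(struct seq_file *m, void *v)
            __releases(RCU)
    {
            rcu_read_unlock();
    }
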
+diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
+index f36be5166c197..369769ce7399d 100644
+--- a/include/linux/set_memory.h
++++ b/include/linux/set_memory.h
+@@ -42,14 +42,14 @@ static inline bool can_set_direct_map(void)
+ #endif
+ #endif /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
+ 
+-#ifndef set_mce_nospec
+-static inline int set_mce_nospec(unsigned long pfn, bool unmap)
++#ifdef CONFIG_X86_64
++int set_mce_nospec(unsigned long pfn);
++int clear_mce_nospec(unsigned long pfn);
++#else
++static inline int set_mce_nospec(unsigned long pfn)
+ {
+ 	return 0;
+ }
+-#endif
+-
+-#ifndef clear_mce_nospec
+ static inline int clear_mce_nospec(unsigned long pfn)
+ {
+ 	return 0;
+diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
+index 548a028f2dabb..2c1fc9212cf28 100644
+--- a/include/linux/usb/hcd.h
++++ b/include/linux/usb/hcd.h
+@@ -124,6 +124,7 @@ struct usb_hcd {
+ #define HCD_FLAG_RH_RUNNING		5	/* root hub is running? */
+ #define HCD_FLAG_DEAD			6	/* controller has died? */
+ #define HCD_FLAG_INTF_AUTHORIZED	7	/* authorize interfaces? */
++#define HCD_FLAG_DEFER_RH_REGISTER	8	/* Defer roothub registration */
+ 
+ 	/* The flags can be tested using these macros; they are likely to
+ 	 * be slightly faster than test_bit().
+@@ -134,6 +135,7 @@ struct usb_hcd {
+ #define HCD_WAKEUP_PENDING(hcd)	((hcd)->flags & (1U << HCD_FLAG_WAKEUP_PENDING))
+ #define HCD_RH_RUNNING(hcd)	((hcd)->flags & (1U << HCD_FLAG_RH_RUNNING))
+ #define HCD_DEAD(hcd)		((hcd)->flags & (1U << HCD_FLAG_DEAD))
++#define HCD_DEFER_RH_REGISTER(hcd) ((hcd)->flags & (1U << HCD_FLAG_DEFER_RH_REGISTER))
+ 
+ 	/*
+ 	 * Specifies if interfaces are authorized by default
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 69ef31cea5822..62a9bb022aedf 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -265,6 +265,15 @@ enum {
+ 	 * runtime suspend, because event filtering takes place there.
+ 	 */
+ 	HCI_QUIRK_BROKEN_FILTER_CLEAR_ALL,
++
++	/*
++	 * When this quirk is set, disables the use of
++	 * HCI_OP_ENHANCED_SETUP_SYNC_CONN command to setup SCO connections.
++	 *
++	 * This quirk can be set before hci_register_dev is called or
++	 * during the hdev->setup vendor callback.
++	 */
++	HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN,
+ };
+ 
+ /* HCI device flags */
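Like the other HCI quirks, this one is a plain bit in hdev->quirks, so a
driver that knows its controller mishandles the enhanced command can opt
out from its ->setup callback. A sketch, with vendor_setup() standing in
for a real driver hook:

    static int vendor_setup(struct hci_dev *hdev)
    {
            /* Firmware rejects HCI_OP_ENHANCED_SETUP_SYNC_CONN; force
             * the legacy SCO setup path instead. */
            set_bit(HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN,
                    &hdev->quirks);
            return 0;
    }
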
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 62d7b81b1cb74..5a52a2018b56a 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1495,8 +1495,12 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ #define privacy_mode_capable(dev) (use_ll_privacy(dev) && \
+ 				   (hdev->commands[39] & 0x04))
+ 
+-/* Use enhanced synchronous connection if command is supported */
+-#define enhanced_sco_capable(dev) ((dev)->commands[29] & 0x08)
++/* Use enhanced synchronous connection if command is supported and its quirk
++ * has not been set.
++ */
++#define enhanced_sync_conn_capable(dev) \
++	(((dev)->commands[29] & 0x08) && \
++	 !test_bit(HCI_QUIRK_BROKEN_ENHANCED_SETUP_SYNC_CONN, &(dev)->quirks))
+ 
+ /* Use ext scanning if set ext scan param and ext scan enable is supported */
+ #define use_ext_scan(dev) (((dev)->commands[37] & 0x20) && \
+diff --git a/include/net/if_inet6.h b/include/net/if_inet6.h
+index 4cfdef6ca4f64..c8490729b4aea 100644
+--- a/include/net/if_inet6.h
++++ b/include/net/if_inet6.h
+@@ -64,6 +64,14 @@ struct inet6_ifaddr {
+ 
+ 	struct hlist_node	addr_lst;
+ 	struct list_head	if_list;
++	/*
++	 * Used to safely traverse idev->addr_list in process context
++	 * if the idev->lock needed to protect idev->addr_list cannot be held.
++	 * In that case, add the items to this list temporarily and iterate
++	 * without holding idev->lock.
++	 * See addrconf_ifdown and dev_forward_change.
++	 */
++	struct list_head	if_list_aux;
+ 
+ 	struct list_head	tmp_list;
+ 	struct inet6_ifaddr	*ifpub;
+diff --git a/include/net/ip.h b/include/net/ip.h
+index 0161137914cf9..26fffda78cca4 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -94,7 +94,7 @@ static inline void ipcm_init_sk(struct ipcm_cookie *ipcm,
+ 
+ 	ipcm->sockc.mark = inet->sk.sk_mark;
+ 	ipcm->sockc.tsflags = inet->sk.sk_tsflags;
+-	ipcm->oif = inet->sk.sk_bound_dev_if;
++	ipcm->oif = READ_ONCE(inet->sk.sk_bound_dev_if);
+ 	ipcm->addr = inet->inet_saddr;
+ }
+ 
+diff --git a/include/net/sock.h b/include/net/sock.h
+index c4b91fc19b9ca..3c4fb8f03fd99 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -2866,13 +2866,14 @@ static inline void sk_pacing_shift_update(struct sock *sk, int val)
+  */
+ static inline bool sk_dev_equal_l3scope(struct sock *sk, int dif)
+ {
++	int bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);
+ 	int mdif;
+ 
+-	if (!sk->sk_bound_dev_if || sk->sk_bound_dev_if == dif)
++	if (!bound_dev_if || bound_dev_if == dif)
+ 		return true;
+ 
+ 	mdif = l3mdev_master_ifindex_by_index(sock_net(sk), dif);
+-	if (mdif && mdif == sk->sk_bound_dev_if)
++	if (mdif && mdif == bound_dev_if)
+ 		return true;
+ 
+ 	return false;
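sk_bound_dev_if can be rewritten concurrently (e.g. by SO_BINDTODEVICE)
while lockless readers like the one above run, so the reader snapshots it
once with READ_ONCE() and uses that value for both comparisons; every
store then needs a matching WRITE_ONCE(). A sketch of the writer side of
the pairing (bind_to_dev() is illustrative, not a real helper):

    static void bind_to_dev(struct sock *sk, int ifindex)
    {
            /* pairs with the READ_ONCE() in lockless readers */
            WRITE_ONCE(sk->sk_bound_dev_if, ifindex);
    }
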
+diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h
+index fac8e89aed81d..310e0dbffda99 100644
+--- a/include/scsi/libfcoe.h
++++ b/include/scsi/libfcoe.h
+@@ -249,7 +249,8 @@ int fcoe_ctlr_recv_flogi(struct fcoe_ctlr *, struct fc_lport *,
+ 			 struct fc_frame *);
+ 
+ /* libfcoe funcs */
+-u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN], unsigned int, unsigned int);
++u64 fcoe_wwn_from_mac(unsigned char mac[ETH_ALEN], unsigned int scheme,
++		      unsigned int port);
+ int fcoe_libfc_config(struct fc_lport *, struct fcoe_ctlr *,
+ 		      const struct libfc_function_template *, int init_fcp);
+ u32 fcoe_fc_crc(struct fc_frame *fp);
+diff --git a/include/scsi/libiscsi.h b/include/scsi/libiscsi.h
+index d0a24779c52dc..c0703cd20a993 100644
+--- a/include/scsi/libiscsi.h
++++ b/include/scsi/libiscsi.h
+@@ -54,9 +54,9 @@ enum {
+ #define ISID_SIZE			6
+ 
+ /* Connection flags */
+-#define ISCSI_CONN_FLAG_SUSPEND_TX	BIT(0)
+-#define ISCSI_CONN_FLAG_SUSPEND_RX	BIT(1)
+-#define ISCSI_CONN_FLAG_BOUND		BIT(2)
++#define ISCSI_CONN_FLAG_SUSPEND_TX	0
++#define ISCSI_CONN_FLAG_SUSPEND_RX	1
++#define ISCSI_CONN_FLAG_BOUND		2
+ 
+ #define ISCSI_ITT_MASK			0x1fff
+ #define ISCSI_TOTAL_CMDS_MAX		4096
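These constants are passed to set_bit()/test_bit()/clear_bit(), which take
a bit *number*, not a mask, so defining them with BIT() was a double shift
that quietly operated on bits 1, 2 and 4. With the fix they are plain
indices again; a sketch of the consuming pattern
(iscsi_suspend_sketch() is illustrative):

    static void iscsi_suspend_sketch(struct iscsi_conn *conn)
    {
            /* bit number, not mask: BIT() here would double-shift */
            set_bit(ISCSI_CONN_FLAG_SUSPEND_TX, &conn->flags);
            if (test_bit(ISCSI_CONN_FLAG_SUSPEND_RX, &conn->flags))
                    clear_bit(ISCSI_CONN_FLAG_SUSPEND_RX, &conn->flags);
    }
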
+diff --git a/include/sound/cs35l41.h b/include/sound/cs35l41.h
+index bf7f9a9aeba04..9341130257ea6 100644
+--- a/include/sound/cs35l41.h
++++ b/include/sound/cs35l41.h
+@@ -536,7 +536,6 @@
+ 
+ #define CS35L41_MAX_CACHE_REG		36
+ #define CS35L41_OTP_SIZE_WORDS		32
+-#define CS35L41_NUM_OTP_ELEM		100
+ 
+ #define CS35L41_VALID_PDATA		0x80000000
+ #define CS35L41_NUM_SUPPLIES            2
+diff --git a/include/sound/jack.h b/include/sound/jack.h
+index 1181f536557eb..1ed90e2109e9b 100644
+--- a/include/sound/jack.h
++++ b/include/sound/jack.h
+@@ -62,6 +62,7 @@ struct snd_jack {
+ 	const char *id;
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
+ 	struct input_dev *input_dev;
++	struct mutex input_dev_lock;
+ 	int registered;
+ 	int type;
+ 	char name[100];
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index 4a3ab0ed6e062..1c714336b8635 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -1509,7 +1509,7 @@ TRACE_EVENT(rxrpc_call_reset,
+ 		    __entry->call_serial = call->rx_serial;
+ 		    __entry->conn_serial = call->conn->hi_serial;
+ 		    __entry->tx_seq = call->tx_hard_ack;
+-		    __entry->rx_seq = call->ackr_seen;
++		    __entry->rx_seq = call->rx_hard_ack;
+ 			   ),
+ 
+ 	    TP_printk("c=%08x %08x:%08x r=%08x/%08x tx=%08x rx=%08x",
+diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
+index de136dbd623ac..e7d28dc549dab 100644
+--- a/include/trace/events/vmscan.h
++++ b/include/trace/events/vmscan.h
+@@ -297,7 +297,7 @@ TRACE_EVENT(mm_vmscan_lru_isolate,
+ 		__field(unsigned long, nr_scanned)
+ 		__field(unsigned long, nr_skipped)
+ 		__field(unsigned long, nr_taken)
+-		__field(isolate_mode_t, isolate_mode)
++		__field(unsigned int, isolate_mode)
+ 		__field(int, lru)
+ 	),
+ 
+@@ -308,7 +308,7 @@ TRACE_EVENT(mm_vmscan_lru_isolate,
+ 		__entry->nr_scanned = nr_scanned;
+ 		__entry->nr_skipped = nr_skipped;
+ 		__entry->nr_taken = nr_taken;
+-		__entry->isolate_mode = isolate_mode;
++		__entry->isolate_mode = (__force unsigned int)isolate_mode;
+ 		__entry->lru = lru;
+ 	),
+ 
+diff --git a/include/uapi/asm-generic/siginfo.h b/include/uapi/asm-generic/siginfo.h
+index 3ba180f550d7c..ffbe4cec9f32d 100644
+--- a/include/uapi/asm-generic/siginfo.h
++++ b/include/uapi/asm-generic/siginfo.h
+@@ -99,6 +99,7 @@ union __sifields {
+ 			struct {
+ 				unsigned long _data;
+ 				__u32 _type;
++				__u32 _flags;
+ 			} _perf;
+ 		};
+ 	} _sigfault;
+@@ -164,6 +165,7 @@ typedef struct siginfo {
+ #define si_pkey		_sifields._sigfault._addr_pkey._pkey
+ #define si_perf_data	_sifields._sigfault._perf._data
+ #define si_perf_type	_sifields._sigfault._perf._type
++#define si_perf_flags	_sifields._sigfault._perf._flags
+ #define si_band		_sifields._sigpoll._band
+ #define si_fd		_sifields._sigpoll._fd
+ #define si_call_addr	_sifields._sigsys._call_addr
+@@ -270,6 +272,11 @@ typedef struct siginfo {
+  * that are of the form: ((PTRACE_EVENT_XXX << 8) | SIGTRAP)
+  */
+ 
++/*
++ * Flags for si_perf_flags if SIGTRAP si_code is TRAP_PERF.
++ */
++#define TRAP_PERF_FLAG_ASYNC (1u << 0)
++
+ /*
+  * SIGCHLD si_codes
+  */
+diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
+index 11157fae8a8e7..688bcdaeed536 100644
+--- a/include/uapi/linux/android/binder.h
++++ b/include/uapi/linux/android/binder.h
+@@ -289,7 +289,7 @@ struct binder_transaction_data {
+ 	/* General information about the transaction. */
+ 	__u32	        flags;
+ 	__kernel_pid_t	sender_pid;
+-	__kernel_uid_t	sender_euid;
++	__kernel_uid32_t	sender_euid;
+ 	binder_size_t	data_size;	/* number of bytes of data */
+ 	binder_size_t	offsets_size;	/* number of bytes of offsets */
+ 
+diff --git a/include/uapi/linux/landlock.h b/include/uapi/linux/landlock.h
+index b3d952067f59c..21c8d58283c9e 100644
+--- a/include/uapi/linux/landlock.h
++++ b/include/uapi/linux/landlock.h
+@@ -33,7 +33,9 @@ struct landlock_ruleset_attr {
+  * - %LANDLOCK_CREATE_RULESET_VERSION: Get the highest supported Landlock ABI
+  *   version.
+  */
++/* clang-format off */
+ #define LANDLOCK_CREATE_RULESET_VERSION			(1U << 0)
++/* clang-format on */
+ 
+ /**
+  * enum landlock_rule_type - Landlock rule type
+@@ -60,8 +62,9 @@ struct landlock_path_beneath_attr {
+ 	 */
+ 	__u64 allowed_access;
+ 	/**
+-	 * @parent_fd: File descriptor, open with ``O_PATH``, which identifies
+-	 * the parent directory of a file hierarchy, or just a file.
++	 * @parent_fd: File descriptor, preferably opened with ``O_PATH``,
++	 * which identifies the parent directory of a file hierarchy, or just a
++	 * file.
+ 	 */
+ 	__s32 parent_fd;
+ 	/*
+@@ -120,6 +123,7 @@ struct landlock_path_beneath_attr {
+  *   :manpage:`access(2)`.
+  *   Future Landlock evolutions will enable to restrict them.
+  */
++/* clang-format off */
+ #define LANDLOCK_ACCESS_FS_EXECUTE			(1ULL << 0)
+ #define LANDLOCK_ACCESS_FS_WRITE_FILE			(1ULL << 1)
+ #define LANDLOCK_ACCESS_FS_READ_FILE			(1ULL << 2)
+@@ -133,5 +137,6 @@ struct landlock_path_beneath_attr {
+ #define LANDLOCK_ACCESS_FS_MAKE_FIFO			(1ULL << 10)
+ #define LANDLOCK_ACCESS_FS_MAKE_BLOCK			(1ULL << 11)
+ #define LANDLOCK_ACCESS_FS_MAKE_SYM			(1ULL << 12)
++/* clang-format on */
+ 
+ #endif /* _UAPI_LINUX_LANDLOCK_H */
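The relaxed wording reflects that parent_fd only has to identify a file
hierarchy; O_PATH is the cheapest way to obtain such a descriptor, but an
ordinary O_RDONLY fd works too. A minimal user-space sketch of filling the
attribute (error handling elided; landlock_add_rule() stands for the raw
syscall, which has no libc wrapper):

    #include <fcntl.h>
    #include <linux/landlock.h>

    static int add_usr_rule(int ruleset_fd)
    {
            struct landlock_path_beneath_attr attr = {
                    .allowed_access = LANDLOCK_ACCESS_FS_READ_FILE |
                                      LANDLOCK_ACCESS_FS_READ_DIR,
            };

            attr.parent_fd = open("/usr", O_PATH | O_CLOEXEC);
            return landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
                                     &attr, 0);
    }
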
+diff --git a/include/uapi/linux/lirc.h b/include/uapi/linux/lirc.h
+index 23b0f2c8ba81e..8d7ca7c6af42e 100644
+--- a/include/uapi/linux/lirc.h
++++ b/include/uapi/linux/lirc.h
+@@ -84,6 +84,13 @@
+ #define LIRC_CAN_SEND(x) ((x)&LIRC_CAN_SEND_MASK)
+ #define LIRC_CAN_REC(x) ((x)&LIRC_CAN_REC_MASK)
+ 
++/*
++ * Unused features. These features were never implemented, in tree or
++ * out of tree. These definitions are here so as not to break the lircd build.
++ */
++#define LIRC_CAN_SET_REC_FILTER		0
++#define LIRC_CAN_NOTIFY_DECODE		0
++
+ /*** IOCTL commands for lirc driver ***/
+ 
+ #define LIRC_GET_FEATURES              _IOR('i', 0x00000000, __u32)
+diff --git a/include/uapi/linux/types.h b/include/uapi/linux/types.h
+index c4dc597f3dcf4..308433be33c26 100644
+--- a/include/uapi/linux/types.h
++++ b/include/uapi/linux/types.h
+@@ -26,6 +26,9 @@
+ #define __bitwise
+ #endif
+ 
++/* The kernel doesn't use this legacy form, but user space does */
++#define __bitwise__ __bitwise
++
+ typedef __u16 __bitwise __le16;
+ typedef __u16 __bitwise __be16;
+ typedef __u32 __bitwise __le32;
+diff --git a/init/Kconfig b/init/Kconfig
+index ddcbefe535e9e..b19e2eeaae803 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -77,6 +77,11 @@ config CC_HAS_ASM_GOTO_OUTPUT
+ 	depends on CC_HAS_ASM_GOTO
+ 	def_bool $(success,echo 'int foo(int x) { asm goto ("": "=r"(x) ::: bar); return x; bar: return 0; }' | $(CC) -x c - -c -o /dev/null)
+ 
++config CC_HAS_ASM_GOTO_TIED_OUTPUT
++	depends on CC_HAS_ASM_GOTO_OUTPUT
++	# Detect buggy gcc and clang, fixed in gcc-11 clang-14.
++	def_bool $(success,echo 'int foo(int *x) { asm goto (".long (%l[bar]) - .\n": "+m"(*x) ::: bar); return *x; bar: return 0; }' | $CC -x c - -c -o /dev/null)
++
+ config TOOLS_SUPPORT_RELR
+ 	def_bool $(success,env "CC=$(CC)" "LD=$(LD)" "NM=$(NM)" "OBJCOPY=$(OBJCOPY)" $(srctree)/scripts/tools-support-relr.sh)
+ 
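The probe feeds the compiler a minimal asm goto with a tied ("+m") output,
the exact shape that older gcc/clang miscompile. Code that wants such an
output can then guard on the symbol; a hedged sketch (inc_or_jump() is
illustrative and the x86 "incl" is only an example instruction):

    #ifdef CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT
    #define inc_or_jump(p, label) \
            asm_volatile_goto("incl %[v]" : [v] "+m" (*(p)) : : : label)
    #endif
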
+diff --git a/init/main.c b/init/main.c
+index f057c49f1d9d8..80bd217dd35ea 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -688,7 +688,7 @@ noinline void __ref rest_init(void)
+ 	 * the init task will end up wanting to create kthreads, which, if
+ 	 * we schedule it before we create kthreadd, will OOPS.
+ 	 */
+-	pid = kernel_thread(kernel_init, NULL, CLONE_FS);
++	pid = user_mode_thread(kernel_init, NULL, CLONE_FS);
+ 	/*
+ 	 * Pin init on the boot CPU. Task migration is not properly working
+ 	 * until sched_init_smp() has been run. It will set the allowed
+diff --git a/ipc/mqueue.c b/ipc/mqueue.c
+index 7c08eb3c258d2..54cb6264f8cff 100644
+--- a/ipc/mqueue.c
++++ b/ipc/mqueue.c
+@@ -45,6 +45,7 @@
+ 
+ struct mqueue_fs_context {
+ 	struct ipc_namespace	*ipc_ns;
++	bool			 newns;	/* Set if newly created ipc namespace */
+ };
+ 
+ #define MQUEUE_MAGIC	0x19800202
+@@ -427,6 +428,14 @@ static int mqueue_get_tree(struct fs_context *fc)
+ {
+ 	struct mqueue_fs_context *ctx = fc->fs_private;
+ 
++	/*
++	 * With a newly created ipc namespace, we don't need to do a search
++	 * for an ipc namespace match, but we still need to set s_fs_info.
++	 */
++	if (ctx->newns) {
++		fc->s_fs_info = ctx->ipc_ns;
++		return get_tree_nodev(fc, mqueue_fill_super);
++	}
+ 	return get_tree_keyed(fc, mqueue_fill_super, ctx->ipc_ns);
+ }
+ 
+@@ -454,6 +463,10 @@ static int mqueue_init_fs_context(struct fs_context *fc)
+ 	return 0;
+ }
+ 
++/*
++ * mq_init_ns() is currently the only caller of mq_create_mount().
++ * So the ns parameter is always a newly created ipc namespace.
++ */
+ static struct vfsmount *mq_create_mount(struct ipc_namespace *ns)
+ {
+ 	struct mqueue_fs_context *ctx;
+@@ -465,6 +478,7 @@ static struct vfsmount *mq_create_mount(struct ipc_namespace *ns)
+ 		return ERR_CAST(fc);
+ 
+ 	ctx = fc->fs_private;
++	ctx->newns = true;
+ 	put_ipc_ns(ctx->ipc_ns);
+ 	ctx->ipc_ns = get_ipc_ns(ns);
+ 	put_user_ns(fc->user_ns);
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 128028efda647..0cb6211fcb58d 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -22,6 +22,72 @@
+ DEFINE_STATIC_KEY_ARRAY_FALSE(cgroup_bpf_enabled_key, MAX_CGROUP_BPF_ATTACH_TYPE);
+ EXPORT_SYMBOL(cgroup_bpf_enabled_key);
+ 
++/* __always_inline is necessary to prevent indirect call through run_prog
++ * function pointer.
++ */
++static __always_inline int
++bpf_prog_run_array_cg_flags(const struct cgroup_bpf *cgrp,
++			    enum cgroup_bpf_attach_type atype,
++			    const void *ctx, bpf_prog_run_fn run_prog,
++			    int retval, u32 *ret_flags)
++{
++	const struct bpf_prog_array_item *item;
++	const struct bpf_prog *prog;
++	const struct bpf_prog_array *array;
++	struct bpf_run_ctx *old_run_ctx;
++	struct bpf_cg_run_ctx run_ctx;
++	u32 func_ret;
++
++	run_ctx.retval = retval;
++	migrate_disable();
++	rcu_read_lock();
++	array = rcu_dereference(cgrp->effective[atype]);
++	item = &array->items[0];
++	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
++	while ((prog = READ_ONCE(item->prog))) {
++		run_ctx.prog_item = item;
++		func_ret = run_prog(prog, ctx);
++		if (!(func_ret & 1) && !IS_ERR_VALUE((long)run_ctx.retval))
++			run_ctx.retval = -EPERM;
++		*(ret_flags) |= (func_ret >> 1);
++		item++;
++	}
++	bpf_reset_run_ctx(old_run_ctx);
++	rcu_read_unlock();
++	migrate_enable();
++	return run_ctx.retval;
++}
++
++static __always_inline int
++bpf_prog_run_array_cg(const struct cgroup_bpf *cgrp,
++		      enum cgroup_bpf_attach_type atype,
++		      const void *ctx, bpf_prog_run_fn run_prog,
++		      int retval)
++{
++	const struct bpf_prog_array_item *item;
++	const struct bpf_prog *prog;
++	const struct bpf_prog_array *array;
++	struct bpf_run_ctx *old_run_ctx;
++	struct bpf_cg_run_ctx run_ctx;
++
++	run_ctx.retval = retval;
++	migrate_disable();
++	rcu_read_lock();
++	array = rcu_dereference(cgrp->effective[atype]);
++	item = &array->items[0];
++	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
++	while ((prog = READ_ONCE(item->prog))) {
++		run_ctx.prog_item = item;
++		if (!run_prog(prog, ctx) && !IS_ERR_VALUE((long)run_ctx.retval))
++			run_ctx.retval = -EPERM;
++		item++;
++	}
++	bpf_reset_run_ctx(old_run_ctx);
++	rcu_read_unlock();
++	migrate_enable();
++	return run_ctx.retval;
++}
++
+ void cgroup_bpf_offline(struct cgroup *cgrp)
+ {
+ 	cgroup_get(cgrp);
+@@ -1075,11 +1141,38 @@ int __cgroup_bpf_run_filter_skb(struct sock *sk,
+ 	bpf_compute_and_save_data_end(skb, &saved_data_end);
+ 
+ 	if (atype == CGROUP_INET_EGRESS) {
+-		ret = BPF_PROG_CGROUP_INET_EGRESS_RUN_ARRAY(
+-			cgrp->bpf.effective[atype], skb, __bpf_prog_run_save_cb);
++		u32 flags = 0;
++		bool cn;
++
++		ret = bpf_prog_run_array_cg_flags(
++			&cgrp->bpf, atype,
++			skb, __bpf_prog_run_save_cb, 0, &flags);
++
++		/* Return values of CGROUP EGRESS BPF programs are:
++		 *   0: drop packet
++		 *   1: keep packet
++		 *   2: drop packet and cn
++		 *   3: keep packet and cn
++		 *
++		 * The returned value is then converted to one of the NET_XMIT
++		 * or an error code that is then interpreted as drop packet
++		 * (and no cn):
++		 *   0: NET_XMIT_SUCCESS  skb should be transmitted
++		 *   1: NET_XMIT_DROP     skb should be dropped and cn
++		 *   2: NET_XMIT_CN       skb should be transmitted and cn
++		 *   3: -err              skb should be dropped
++		 */
++
++		cn = flags & BPF_RET_SET_CN;
++		if (ret && !IS_ERR_VALUE((long)ret))
++			ret = -EFAULT;
++		if (!ret)
++			ret = (cn ? NET_XMIT_CN : NET_XMIT_SUCCESS);
++		else
++			ret = (cn ? NET_XMIT_DROP : ret);
+ 	} else {
+-		ret = BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[atype], skb,
+-					    __bpf_prog_run_save_cb, 0);
++		ret = bpf_prog_run_array_cg(&cgrp->bpf, atype,
++					    skb, __bpf_prog_run_save_cb, 0);
+ 		if (ret && !IS_ERR_VALUE((long)ret))
+ 			ret = -EFAULT;
+ 	}
+@@ -1109,8 +1202,7 @@ int __cgroup_bpf_run_filter_sk(struct sock *sk,
+ {
+ 	struct cgroup *cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
+ 
+-	return BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[atype], sk,
+-				     bpf_prog_run, 0);
++	return bpf_prog_run_array_cg(&cgrp->bpf, atype, sk, bpf_prog_run, 0);
+ }
+ EXPORT_SYMBOL(__cgroup_bpf_run_filter_sk);
+ 
+@@ -1155,8 +1247,8 @@ int __cgroup_bpf_run_filter_sock_addr(struct sock *sk,
+ 	}
+ 
+ 	cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
+-	return BPF_PROG_RUN_ARRAY_CG_FLAGS(cgrp->bpf.effective[atype], &ctx,
+-					   bpf_prog_run, 0, flags);
++	return bpf_prog_run_array_cg_flags(&cgrp->bpf, atype,
++					   &ctx, bpf_prog_run, 0, flags);
+ }
+ EXPORT_SYMBOL(__cgroup_bpf_run_filter_sock_addr);
+ 
+@@ -1182,8 +1274,8 @@ int __cgroup_bpf_run_filter_sock_ops(struct sock *sk,
+ {
+ 	struct cgroup *cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
+ 
+-	return BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[atype], sock_ops,
+-				     bpf_prog_run, 0);
++	return bpf_prog_run_array_cg(&cgrp->bpf, atype, sock_ops, bpf_prog_run,
++				     0);
+ }
+ EXPORT_SYMBOL(__cgroup_bpf_run_filter_sock_ops);
+ 
+@@ -1200,8 +1292,7 @@ int __cgroup_bpf_check_dev_permission(short dev_type, u32 major, u32 minor,
+ 
+ 	rcu_read_lock();
+ 	cgrp = task_dfl_cgroup(current);
+-	ret = BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[atype], &ctx,
+-				    bpf_prog_run, 0);
++	ret = bpf_prog_run_array_cg(&cgrp->bpf, atype, &ctx, bpf_prog_run, 0);
+ 	rcu_read_unlock();
+ 
+ 	return ret;
+@@ -1366,8 +1457,7 @@ int __cgroup_bpf_run_filter_sysctl(struct ctl_table_header *head,
+ 
+ 	rcu_read_lock();
+ 	cgrp = task_dfl_cgroup(current);
+-	ret = BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[atype], &ctx,
+-				    bpf_prog_run, 0);
++	ret = bpf_prog_run_array_cg(&cgrp->bpf, atype, &ctx, bpf_prog_run, 0);
+ 	rcu_read_unlock();
+ 
+ 	kfree(ctx.cur_val);
+@@ -1459,7 +1549,7 @@ int __cgroup_bpf_run_filter_setsockopt(struct sock *sk, int *level,
+ 	}
+ 
+ 	lock_sock(sk);
+-	ret = BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[CGROUP_SETSOCKOPT],
++	ret = bpf_prog_run_array_cg(&cgrp->bpf, CGROUP_SETSOCKOPT,
+ 				    &ctx, bpf_prog_run, 0);
+ 	release_sock(sk);
+ 
+@@ -1559,7 +1649,7 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ 	}
+ 
+ 	lock_sock(sk);
+-	ret = BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[CGROUP_GETSOCKOPT],
++	ret = bpf_prog_run_array_cg(&cgrp->bpf, CGROUP_GETSOCKOPT,
+ 				    &ctx, bpf_prog_run, retval);
+ 	release_sock(sk);
+ 
+@@ -1608,7 +1698,7 @@ int __cgroup_bpf_run_filter_getsockopt_kern(struct sock *sk, int level,
+ 	 * be called if that data shouldn't be "exported".
+ 	 */
+ 
+-	ret = BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[CGROUP_GETSOCKOPT],
++	ret = bpf_prog_run_array_cg(&cgrp->bpf, CGROUP_GETSOCKOPT,
+ 				    &ctx, bpf_prog_run, retval);
+ 	if (ret < 0)
+ 		return ret;
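On the BPF side, an egress program encodes the two decisions independently
in its return value: bit 0 keeps or drops the packet, bit 1 requests
congestion notification. A minimal sketch of a program using that
convention (libbpf-style; names are illustrative):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("cgroup_skb/egress")
    int egress_cn(struct __sk_buff *skb)
    {
            /* 3 = keep the packet and raise CN (-> NET_XMIT_CN) */
            return 3;
    }

    char LICENSE[] SEC("license") = "GPL";
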
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index f8ff598596b85..ac740630c79c2 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -448,7 +448,7 @@ void debug_dma_dump_mappings(struct device *dev)
+  * other hand, consumes a single dma_debug_entry, but inserts 'nents'
+  * entries into the tree.
+  */
+-static RADIX_TREE(dma_active_cacheline, GFP_NOWAIT);
++static RADIX_TREE(dma_active_cacheline, GFP_ATOMIC);
+ static DEFINE_SPINLOCK(radix_lock);
+ #define ACTIVE_CACHELINE_MAX_OVERLAP ((1 << RADIX_TREE_MAX_TAGS) - 1)
+ #define CACHELINE_PER_PAGE_SHIFT (PAGE_SHIFT - L1_CACHE_SHIFT)
+diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
+index 9743c6ccce1a9..e978f36e6be86 100644
+--- a/kernel/dma/direct.c
++++ b/kernel/dma/direct.c
+@@ -79,7 +79,7 @@ static int dma_set_decrypted(struct device *dev, void *vaddr, size_t size)
+ {
+ 	if (!force_dma_unencrypted(dev))
+ 		return 0;
+-	return set_memory_decrypted((unsigned long)vaddr, 1 << get_order(size));
++	return set_memory_decrypted((unsigned long)vaddr, PFN_UP(size));
+ }
+ 
+ static int dma_set_encrypted(struct device *dev, void *vaddr, size_t size)
+@@ -88,7 +88,7 @@ static int dma_set_encrypted(struct device *dev, void *vaddr, size_t size)
+ 
+ 	if (!force_dma_unencrypted(dev))
+ 		return 0;
+-	ret = set_memory_encrypted((unsigned long)vaddr, 1 << get_order(size));
++	ret = set_memory_encrypted((unsigned long)vaddr, PFN_UP(size));
+ 	if (ret)
+ 		pr_warn_ratelimited("leaking DMA memory that can't be re-encrypted\n");
+ 	return ret;
+@@ -115,7 +115,7 @@ static struct page *dma_direct_alloc_swiotlb(struct device *dev, size_t size)
+ }
+ 
+ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
+-		gfp_t gfp)
++		gfp_t gfp, bool allow_highmem)
+ {
+ 	int node = dev_to_node(dev);
+ 	struct page *page = NULL;
+@@ -129,9 +129,12 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
+ 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
+ 					   &phys_limit);
+ 	page = dma_alloc_contiguous(dev, size, gfp);
+-	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+-		dma_free_contiguous(dev, page, size);
+-		page = NULL;
++	if (page) {
++		if (!dma_coherent_ok(dev, page_to_phys(page), size) ||
++		    (!allow_highmem && PageHighMem(page))) {
++			dma_free_contiguous(dev, page, size);
++			page = NULL;
++		}
+ 	}
+ again:
+ 	if (!page)
+@@ -189,7 +192,7 @@ static void *dma_direct_alloc_no_mapping(struct device *dev, size_t size,
+ {
+ 	struct page *page;
+ 
+-	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
++	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
+ 	if (!page)
+ 		return NULL;
+ 
+@@ -262,7 +265,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
+ 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
+ 
+ 	/* we always manually zero the memory once we are done */
+-	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
++	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
+ 	if (!page)
+ 		return NULL;
+ 
+@@ -370,19 +373,9 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
+ 	if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
+ 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
+ 
+-	page = __dma_direct_alloc_pages(dev, size, gfp);
++	page = __dma_direct_alloc_pages(dev, size, gfp, false);
+ 	if (!page)
+ 		return NULL;
+-	if (PageHighMem(page)) {
+-		/*
+-		 * Depending on the cma= arguments and per-arch setup
+-		 * dma_alloc_contiguous could return highmem pages.
+-		 * Without remapping there is no way to return them here,
+-		 * so log an error and fail.
+-		 */
+-		dev_info(dev, "Rejecting highmem page from CMA.\n");
+-		goto out_free_pages;
+-	}
+ 
+ 	ret = page_address(page);
+ 	if (dma_set_decrypted(dev, ret, size))
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 7f1e4c5897e75..950b25c3f2103 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6428,8 +6428,8 @@ static void perf_sigtrap(struct perf_event *event)
+ 	if (current->flags & PF_EXITING)
+ 		return;
+ 
+-	force_sig_perf((void __user *)event->pending_addr,
+-		       event->attr.type, event->attr.sig_data);
++	send_sig_perf((void __user *)event->pending_addr,
++		      event->attr.type, event->attr.sig_data);
+ }
+ 
+ static void perf_pending_event_disable(struct perf_event *event)
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 35a3beff140b6..0d8abfb9e0f4a 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2157,7 +2157,7 @@ static __latent_entropy struct task_struct *copy_process(
+ 	p->io_context = NULL;
+ 	audit_set_context(p, NULL);
+ 	cgroup_fork(p);
+-	if (p->flags & PF_KTHREAD) {
++	if (args->kthread) {
+ 		if (!set_kthread_struct(p))
+ 			goto bad_fork_cleanup_delayacct;
+ 	}
+@@ -2548,7 +2548,8 @@ struct task_struct * __init fork_idle(int cpu)
+ {
+ 	struct task_struct *task;
+ 	struct kernel_clone_args args = {
+-		.flags = CLONE_VM,
++		.flags		= CLONE_VM,
++		.kthread	= 1,
+ 	};
+ 
+ 	task = copy_process(&init_struct_pid, 0, cpu_to_node(cpu), &args);
+@@ -2679,6 +2680,23 @@ pid_t kernel_clone(struct kernel_clone_args *args)
+  * Create a kernel thread.
+  */
+ pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags)
++{
++	struct kernel_clone_args args = {
++		.flags		= ((lower_32_bits(flags) | CLONE_VM |
++				    CLONE_UNTRACED) & ~CSIGNAL),
++		.exit_signal	= (lower_32_bits(flags) & CSIGNAL),
++		.stack		= (unsigned long)fn,
++		.stack_size	= (unsigned long)arg,
++		.kthread	= 1,
++	};
++
++	return kernel_clone(&args);
++}
++
++/*
++ * Create a user mode thread.
++ */
++pid_t user_mode_thread(int (*fn)(void *), void *arg, unsigned long flags)
+ {
+ 	struct kernel_clone_args args = {
+ 		.flags		= ((lower_32_bits(flags) | CLONE_VM |
+diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
+index 8347fc158d2b9..c108a2a887546 100644
+--- a/kernel/kexec_file.c
++++ b/kernel/kexec_file.c
+@@ -108,40 +108,6 @@ int __weak arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
+ }
+ #endif
+ 
+-/*
+- * arch_kexec_apply_relocations_add - apply relocations of type RELA
+- * @pi:		Purgatory to be relocated.
+- * @section:	Section relocations applying to.
+- * @relsec:	Section containing RELAs.
+- * @symtab:	Corresponding symtab.
+- *
+- * Return: 0 on success, negative errno on error.
+- */
+-int __weak
+-arch_kexec_apply_relocations_add(struct purgatory_info *pi, Elf_Shdr *section,
+-				 const Elf_Shdr *relsec, const Elf_Shdr *symtab)
+-{
+-	pr_err("RELA relocation unsupported.\n");
+-	return -ENOEXEC;
+-}
+-
+-/*
+- * arch_kexec_apply_relocations - apply relocations of type REL
+- * @pi:		Purgatory to be relocated.
+- * @section:	Section relocations applying to.
+- * @relsec:	Section containing RELs.
+- * @symtab:	Corresponding symtab.
+- *
+- * Return: 0 on success, negative errno on error.
+- */
+-int __weak
+-arch_kexec_apply_relocations(struct purgatory_info *pi, Elf_Shdr *section,
+-			     const Elf_Shdr *relsec, const Elf_Shdr *symtab)
+-{
+-	pr_err("REL relocation unsupported.\n");
+-	return -ENOEXEC;
+-}
+-
+ /*
+  * Free up memory used by kernel, initrd, and command line. This is temporary
+  * memory allocation which is not needed any more after these buffers have
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index dd58c0be9ce25..f214f8c088ede 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1257,79 +1257,6 @@ void kprobe_busy_end(void)
+ 	preempt_enable();
+ }
+ 
+-#if !defined(CONFIG_KRETPROBE_ON_RETHOOK)
+-static void free_rp_inst_rcu(struct rcu_head *head)
+-{
+-	struct kretprobe_instance *ri = container_of(head, struct kretprobe_instance, rcu);
+-
+-	if (refcount_dec_and_test(&ri->rph->ref))
+-		kfree(ri->rph);
+-	kfree(ri);
+-}
+-NOKPROBE_SYMBOL(free_rp_inst_rcu);
+-
+-static void recycle_rp_inst(struct kretprobe_instance *ri)
+-{
+-	struct kretprobe *rp = get_kretprobe(ri);
+-
+-	if (likely(rp))
+-		freelist_add(&ri->freelist, &rp->freelist);
+-	else
+-		call_rcu(&ri->rcu, free_rp_inst_rcu);
+-}
+-NOKPROBE_SYMBOL(recycle_rp_inst);
+-
+-/*
+- * This function is called from delayed_put_task_struct() when a task is
+- * dead and cleaned up to recycle any kretprobe instances associated with
+- * this task. These left over instances represent probed functions that
+- * have been called but will never return.
+- */
+-void kprobe_flush_task(struct task_struct *tk)
+-{
+-	struct kretprobe_instance *ri;
+-	struct llist_node *node;
+-
+-	/* Early boot, not yet initialized. */
+-	if (unlikely(!kprobes_initialized))
+-		return;
+-
+-	kprobe_busy_begin();
+-
+-	node = __llist_del_all(&tk->kretprobe_instances);
+-	while (node) {
+-		ri = container_of(node, struct kretprobe_instance, llist);
+-		node = node->next;
+-
+-		recycle_rp_inst(ri);
+-	}
+-
+-	kprobe_busy_end();
+-}
+-NOKPROBE_SYMBOL(kprobe_flush_task);
+-
+-static inline void free_rp_inst(struct kretprobe *rp)
+-{
+-	struct kretprobe_instance *ri;
+-	struct freelist_node *node;
+-	int count = 0;
+-
+-	node = rp->freelist.head;
+-	while (node) {
+-		ri = container_of(node, struct kretprobe_instance, freelist);
+-		node = node->next;
+-
+-		kfree(ri);
+-		count++;
+-	}
+-
+-	if (refcount_sub_and_test(count, &rp->rph->ref)) {
+-		kfree(rp->rph);
+-		rp->rph = NULL;
+-	}
+-}
+-#endif	/* !CONFIG_KRETPROBE_ON_RETHOOK */
+-
+ /* Add the new probe to 'ap->list'. */
+ static int add_new_kprobe(struct kprobe *ap, struct kprobe *p)
+ {
+@@ -1928,6 +1855,77 @@ static struct notifier_block kprobe_exceptions_nb = {
+ #ifdef CONFIG_KRETPROBES
+ 
+ #if !defined(CONFIG_KRETPROBE_ON_RETHOOK)
++static void free_rp_inst_rcu(struct rcu_head *head)
++{
++	struct kretprobe_instance *ri = container_of(head, struct kretprobe_instance, rcu);
++
++	if (refcount_dec_and_test(&ri->rph->ref))
++		kfree(ri->rph);
++	kfree(ri);
++}
++NOKPROBE_SYMBOL(free_rp_inst_rcu);
++
++static void recycle_rp_inst(struct kretprobe_instance *ri)
++{
++	struct kretprobe *rp = get_kretprobe(ri);
++
++	if (likely(rp))
++		freelist_add(&ri->freelist, &rp->freelist);
++	else
++		call_rcu(&ri->rcu, free_rp_inst_rcu);
++}
++NOKPROBE_SYMBOL(recycle_rp_inst);
++
++/*
++ * This function is called from delayed_put_task_struct() when a task is
++ * dead and cleaned up to recycle any kretprobe instances associated with
++ * this task. These left over instances represent probed functions that
++ * have been called but will never return.
++ */
++void kprobe_flush_task(struct task_struct *tk)
++{
++	struct kretprobe_instance *ri;
++	struct llist_node *node;
++
++	/* Early boot, not yet initialized. */
++	if (unlikely(!kprobes_initialized))
++		return;
++
++	kprobe_busy_begin();
++
++	node = __llist_del_all(&tk->kretprobe_instances);
++	while (node) {
++		ri = container_of(node, struct kretprobe_instance, llist);
++		node = node->next;
++
++		recycle_rp_inst(ri);
++	}
++
++	kprobe_busy_end();
++}
++NOKPROBE_SYMBOL(kprobe_flush_task);
++
++static inline void free_rp_inst(struct kretprobe *rp)
++{
++	struct kretprobe_instance *ri;
++	struct freelist_node *node;
++	int count = 0;
++
++	node = rp->freelist.head;
++	while (node) {
++		ri = container_of(node, struct kretprobe_instance, freelist);
++		node = node->next;
++
++		kfree(ri);
++		count++;
++	}
++
++	if (refcount_sub_and_test(count, &rp->rph->ref)) {
++		kfree(rp->rph);
++		rp->rph = NULL;
++	}
++}
++
+ /* This assumes the 'tsk' is the current task or is not running. */
+ static kprobe_opcode_t *__kretprobe_find_ret_addr(struct task_struct *tsk,
+ 						  struct llist_node **cur)
+diff --git a/kernel/module.c b/kernel/module.c
+index 6cea788fd965c..6529c84c536f6 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -3033,6 +3033,10 @@ static int elf_validity_check(struct load_info *info)
+ 	 * strings in the section safe.
+ 	 */
+ 	info->secstrings = (void *)info->hdr + strhdr->sh_offset;
++	if (strhdr->sh_size == 0) {
++		pr_err("empty section name table\n");
++		goto no_exec;
++	}
+ 	if (info->secstrings[strhdr->sh_size - 1] != '\0') {
+ 		pr_err("ELF Spec violation: section name table isn't null terminated\n");
+ 		goto no_exec;
+diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
+index 0153b0ca7b23e..6219aaa454b5b 100644
+--- a/kernel/power/energy_model.c
++++ b/kernel/power/energy_model.c
+@@ -259,6 +259,8 @@ static void em_cpufreq_update_efficiencies(struct device *dev)
+ 			found++;
+ 	}
+ 
++	cpufreq_cpu_put(policy);
++
+ 	if (!found)
+ 		return;
+ 
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index da03c15ecc898..1ead794fc2f47 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -746,8 +746,19 @@ static ssize_t devkmsg_read(struct file *file, char __user *buf,
+ 			goto out;
+ 		}
+ 
++		/*
++		 * Guarantee this task is visible on the waitqueue before
++		 * checking the wake condition.
++		 *
++		 * The full memory barrier within set_current_state() of
++		 * prepare_to_wait_event() pairs with the full memory barrier
++		 * within wq_has_sleeper().
++		 *
++		 * This pairs with __wake_up_klogd:A.
++		 */
+ 		ret = wait_event_interruptible(log_wait,
+-				prb_read_valid(prb, atomic64_read(&user->seq), r));
++				prb_read_valid(prb,
++					atomic64_read(&user->seq), r)); /* LMM(devkmsg_read:A) */
+ 		if (ret)
+ 			goto out;
+ 	}
+@@ -1513,7 +1524,18 @@ static int syslog_print(char __user *buf, int size)
+ 		seq = syslog_seq;
+ 
+ 		mutex_unlock(&syslog_lock);
+-		len = wait_event_interruptible(log_wait, prb_read_valid(prb, seq, NULL));
++		/*
++		 * Guarantee this task is visible on the waitqueue before
++		 * checking the wake condition.
++		 *
++		 * The full memory barrier within set_current_state() of
++		 * prepare_to_wait_event() pairs with the full memory barrier
++		 * within wq_has_sleeper().
++		 *
++		 * This pairs with __wake_up_klogd:A.
++		 */
++		len = wait_event_interruptible(log_wait,
++				prb_read_valid(prb, seq, NULL)); /* LMM(syslog_print:A) */
+ 		mutex_lock(&syslog_lock);
+ 
+ 		if (len)
+@@ -3310,28 +3332,43 @@ static void wake_up_klogd_work_func(struct irq_work *irq_work)
+ static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) =
+ 	IRQ_WORK_INIT_LAZY(wake_up_klogd_work_func);
+ 
+-void wake_up_klogd(void)
++static void __wake_up_klogd(int val)
+ {
+ 	if (!printk_percpu_data_ready())
+ 		return;
+ 
+ 	preempt_disable();
+-	if (waitqueue_active(&log_wait)) {
+-		this_cpu_or(printk_pending, PRINTK_PENDING_WAKEUP);
++	/*
++	 * Guarantee any new records can be seen by tasks preparing to wait
++	 * before this context checks if the wait queue is empty.
++	 *
++	 * The full memory barrier within wq_has_sleeper() pairs with the full
++	 * memory barrier within set_current_state() of
++	 * prepare_to_wait_event(), which is called after ___wait_event() adds
++	 * the waiter but before it has checked the wait condition.
++	 *
++	 * This pairs with devkmsg_read:A and syslog_print:A.
++	 */
++	if (wq_has_sleeper(&log_wait) || /* LMM(__wake_up_klogd:A) */
++	    (val & PRINTK_PENDING_OUTPUT)) {
++		this_cpu_or(printk_pending, val);
+ 		irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
+ 	}
+ 	preempt_enable();
+ }
+ 
+-void defer_console_output(void)
++void wake_up_klogd(void)
+ {
+-	if (!printk_percpu_data_ready())
+-		return;
++	__wake_up_klogd(PRINTK_PENDING_WAKEUP);
++}
+ 
+-	preempt_disable();
+-	this_cpu_or(printk_pending, PRINTK_PENDING_OUTPUT);
+-	irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
+-	preempt_enable();
++void defer_console_output(void)
++{
++	/*
++	 * New messages may have been added directly to the ringbuffer
++	 * using vprintk_store(), so wake any waiters as well.
++	 */
++	__wake_up_klogd(PRINTK_PENDING_WAKEUP | PRINTK_PENDING_OUTPUT);
+ }
+ 
+ void printk_trigger_flush(void)
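Stripped of the printk specifics, the comments describe the canonical
sleep/wake-up handshake: the waiter must be on the queue before it
re-checks the condition, and the waker must publish the new data before it
checks for sleepers, with a full barrier on each side. A generic sketch
(wq and cond are hypothetical):

    static DECLARE_WAIT_QUEUE_HEAD(wq);
    static int cond;

    static int waiter(void)
    {
            /* set_current_state() inside prepare_to_wait_event()
             * provides the waiter-side full barrier */
            return wait_event_interruptible(wq, READ_ONCE(cond));
    }

    static void waker(void)
    {
            WRITE_ONCE(cond, 1);
            /* wq_has_sleeper() provides the matching full barrier */
            if (wq_has_sleeper(&wq))
                    wake_up_interruptible(&wq);
    }
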
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index ccc4b465775b8..6149ca5e0e14b 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -1236,9 +1236,8 @@ int ptrace_request(struct task_struct *child, long request,
+ 		return ptrace_resume(child, request, data);
+ 
+ 	case PTRACE_KILL:
+-		if (child->exit_state)	/* already dead */
+-			return 0;
+-		return ptrace_resume(child, request, SIGKILL);
++		send_sig_info(SIGKILL, SEND_SIG_NOINFO, child);
++		return 0;
+ 
+ #ifdef CONFIG_HAVE_ARCH_TRACEHOOK
+ 	case PTRACE_GETREGSET:
+diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
+index bf8e341e75b4f..f559870fbf8b3 100644
+--- a/kernel/rcu/Kconfig
++++ b/kernel/rcu/Kconfig
+@@ -86,6 +86,7 @@ config TASKS_RCU
+ 
+ config TASKS_RUDE_RCU
+ 	def_bool 0
++	select IRQ_WORK
+ 	help
+ 	  This option enables a task-based RCU implementation that uses
+ 	  only context switch (including preemption) and user-mode
+diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
+index 99cf3a13954cf..00ff0896fb000 100644
+--- a/kernel/rcu/tasks.h
++++ b/kernel/rcu/tasks.h
+@@ -460,7 +460,7 @@ static void rcu_tasks_invoke_cbs(struct rcu_tasks *rtp, struct rcu_tasks_percpu
+ 		}
+ 	}
+ 
+-	if (rcu_segcblist_empty(&rtpcp->cblist))
++	if (rcu_segcblist_empty(&rtpcp->cblist) || !cpu_possible(cpu))
+ 		return;
+ 	raw_spin_lock_irqsave_rcu_node(rtpcp, flags);
+ 	rcu_segcblist_advance(&rtpcp->cblist, rcu_seq_current(&rtp->tasks_gp_seq));
+@@ -950,6 +950,9 @@ static void rcu_tasks_be_rude(struct work_struct *work)
+ // Wait for one rude RCU-tasks grace period.
+ static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
+ {
++	if (num_online_cpus() <= 1)
++		return;	// Fastpath for only one CPU.
++
+ 	rtp->n_ipis += cpumask_weight(cpu_online_mask);
+ 	schedule_on_each_cpu(rcu_tasks_be_rude);
+ }
+diff --git a/kernel/scftorture.c b/kernel/scftorture.c
+index dcb0410950e45..5d113aa59e773 100644
+--- a/kernel/scftorture.c
++++ b/kernel/scftorture.c
+@@ -267,9 +267,10 @@ static void scf_handler(void *scfc_in)
+ 	}
+ 	this_cpu_inc(scf_invoked_count);
+ 	if (longwait <= 0) {
+-		if (!(r & 0xffc0))
++		if (!(r & 0xffc0)) {
+ 			udelay(r & 0x3f);
+-		goto out;
++			goto out;
++		}
+ 	}
+ 	if (r & 0xfff)
+ 		goto out;
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index d58c0389eb23c..e58d894df2074 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -610,10 +610,10 @@ void double_rq_lock(struct rq *rq1, struct rq *rq2)
+ 		swap(rq1, rq2);
+ 
+ 	raw_spin_rq_lock(rq1);
+-	if (__rq_lockp(rq1) == __rq_lockp(rq2))
+-		return;
++	if (__rq_lockp(rq1) != __rq_lockp(rq2))
++		raw_spin_rq_lock_nested(rq2, SINGLE_DEPTH_NESTING);
+ 
+-	raw_spin_rq_lock_nested(rq2, SINGLE_DEPTH_NESTING);
++	double_rq_clock_clear_update(rq1, rq2);
+ }
+ #endif
+ 
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index fb4255ae0b2c8..b61281d104584 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -1832,6 +1832,7 @@ out:
+ 
+ static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused)
+ {
++	struct rq_flags rf;
+ 	struct rq *rq;
+ 
+ 	if (READ_ONCE(p->__state) != TASK_WAKING)
+@@ -1843,7 +1844,7 @@ static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused
+ 	 * from try_to_wake_up(). Hence, p->pi_lock is locked, but
+ 	 * rq->lock is not... So, lock it
+ 	 */
+-	raw_spin_rq_lock(rq);
++	rq_lock(rq, &rf);
+ 	if (p->dl.dl_non_contending) {
+ 		update_rq_clock(rq);
+ 		sub_running_bw(&p->dl, &rq->dl);
+@@ -1859,7 +1860,7 @@ static void migrate_task_rq_dl(struct task_struct *p, int new_cpu __maybe_unused
+ 			put_task_struct(p);
+ 	}
+ 	sub_rq_bw(&p->dl, &rq->dl);
+-	raw_spin_rq_unlock(rq);
++	rq_unlock(rq, &rf);
+ }
+ 
+ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index a68482d665355..cc8daa3dcc8bc 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4846,8 +4846,8 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
+ 
+ 	cfs_rq->throttle_count--;
+ 	if (!cfs_rq->throttle_count) {
+-		cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
+-					     cfs_rq->throttled_clock_task;
++		cfs_rq->throttled_clock_pelt_time += rq_clock_pelt(rq) -
++					     cfs_rq->throttled_clock_pelt;
+ 
+ 		/* Add cfs_rq with load or one or more already running entities to the list */
+ 		if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
+@@ -4864,7 +4864,7 @@ static int tg_throttle_down(struct task_group *tg, void *data)
+ 
+ 	/* group is entering throttled state, stop time */
+ 	if (!cfs_rq->throttle_count) {
+-		cfs_rq->throttled_clock_task = rq_clock_task(rq);
++		cfs_rq->throttled_clock_pelt = rq_clock_pelt(rq);
+ 		list_del_leaf_cfs_rq(cfs_rq);
+ 	}
+ 	cfs_rq->throttle_count++;
+@@ -5308,7 +5308,7 @@ static void sync_throttle(struct task_group *tg, int cpu)
+ 	pcfs_rq = tg->parent->cfs_rq[cpu];
+ 
+ 	cfs_rq->throttle_count = pcfs_rq->throttle_count;
+-	cfs_rq->throttled_clock_task = rq_clock_task(cpu_rq(cpu));
++	cfs_rq->throttled_clock_pelt = rq_clock_pelt(cpu_rq(cpu));
+ }
+ 
+ /* conditionally throttle active cfs_rq's from put_prev_entity() */
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index c336f5f481bca..4ff2ed4f8fa15 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -145,9 +145,9 @@ static inline u64 rq_clock_pelt(struct rq *rq)
+ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ {
+ 	if (unlikely(cfs_rq->throttle_count))
+-		return cfs_rq->throttled_clock_task - cfs_rq->throttled_clock_task_time;
++		return cfs_rq->throttled_clock_pelt - cfs_rq->throttled_clock_pelt_time;
+ 
+-	return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_task_time;
++	return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_pelt_time;
+ }
+ #else
+ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
+index a4fa3aadfcba6..ed9fb557dadd0 100644
+--- a/kernel/sched/psi.c
++++ b/kernel/sched/psi.c
+@@ -1060,14 +1060,17 @@ int psi_show(struct seq_file *m, struct psi_group *group, enum psi_res res)
+ 	mutex_unlock(&group->avgs_lock);
+ 
+ 	for (full = 0; full < 2; full++) {
+-		unsigned long avg[3];
+-		u64 total;
++		unsigned long avg[3] = { 0, };
++		u64 total = 0;
+ 		int w;
+ 
+-		for (w = 0; w < 3; w++)
+-			avg[w] = group->avg[res * 2 + full][w];
+-		total = div_u64(group->total[PSI_AVGS][res * 2 + full],
+-				NSEC_PER_USEC);
++		/* CPU FULL is undefined at the system level */
++		if (!(group == &psi_system && res == PSI_CPU && full)) {
++			for (w = 0; w < 3; w++)
++				avg[w] = group->avg[res * 2 + full][w];
++			total = div_u64(group->total[PSI_AVGS][res * 2 + full],
++					NSEC_PER_USEC);
++		}
+ 
+ 		seq_printf(m, "%s avg10=%lu.%02lu avg60=%lu.%02lu avg300=%lu.%02lu total=%llu\n",
+ 			   full ? "full" : "some",
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index a32c46889af89..7891c0f0e1ff7 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -871,6 +871,7 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
+ 		int enqueue = 0;
+ 		struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);
+ 		struct rq *rq = rq_of_rt_rq(rt_rq);
++		struct rq_flags rf;
+ 		int skip;
+ 
+ 		/*
+@@ -885,7 +886,7 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
+ 		if (skip)
+ 			continue;
+ 
+-		raw_spin_rq_lock(rq);
++		rq_lock(rq, &rf);
+ 		update_rq_clock(rq);
+ 
+ 		if (rt_rq->rt_time) {
+@@ -923,7 +924,7 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
+ 
+ 		if (enqueue)
+ 			sched_rt_rq_enqueue(rt_rq);
+-		raw_spin_rq_unlock(rq);
++		rq_unlock(rq, &rf);
+ 	}
+ 
+ 	if (!throttled && (!rt_bandwidth_enabled() || rt_b->rt_runtime == RUNTIME_INF))
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 8dccb34eb1908..0d2b6b758f324 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -603,8 +603,8 @@ struct cfs_rq {
+ 	s64			runtime_remaining;
+ 
+ 	u64			throttled_clock;
+-	u64			throttled_clock_task;
+-	u64			throttled_clock_task_time;
++	u64			throttled_clock_pelt;
++	u64			throttled_clock_pelt_time;
+ 	int			throttled;
+ 	int			throttle_count;
+ 	struct list_head	throttled_list;
+@@ -2478,6 +2478,24 @@ unsigned long arch_scale_freq_capacity(int cpu)
+ }
+ #endif
+ 
++#ifdef CONFIG_SCHED_DEBUG
++/*
++ * In double_lock_balance()/double_rq_lock(), we use raw_spin_rq_lock() to
++ * acquire rq lock instead of rq_lock(). So at the end of these two functions
++ * we need to call double_rq_clock_clear_update() to clear RQCF_UPDATED of
++ * rq->clock_update_flags to avoid the WARN_DOUBLE_CLOCK warning.
++ */
++static inline void double_rq_clock_clear_update(struct rq *rq1, struct rq *rq2)
++{
++	rq1->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
++	/* rq1 == rq2 for !CONFIG_SMP, so just clear RQCF_UPDATED once. */
++#ifdef CONFIG_SMP
++	rq2->clock_update_flags &= (RQCF_REQ_SKIP|RQCF_ACT_SKIP);
++#endif
++}
++#else
++static inline void double_rq_clock_clear_update(struct rq *rq1, struct rq *rq2) {}
++#endif
+ 
+ #ifdef CONFIG_SMP
+ 
+@@ -2543,14 +2561,15 @@ static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
+ 	__acquires(busiest->lock)
+ 	__acquires(this_rq->lock)
+ {
+-	if (__rq_lockp(this_rq) == __rq_lockp(busiest))
+-		return 0;
+-
+-	if (likely(raw_spin_rq_trylock(busiest)))
++	if (__rq_lockp(this_rq) == __rq_lockp(busiest) ||
++	    likely(raw_spin_rq_trylock(busiest))) {
++		double_rq_clock_clear_update(this_rq, busiest);
+ 		return 0;
++	}
+ 
+ 	if (rq_order_less(this_rq, busiest)) {
+ 		raw_spin_rq_lock_nested(busiest, SINGLE_DEPTH_NESTING);
++		double_rq_clock_clear_update(this_rq, busiest);
+ 		return 0;
+ 	}
+ 
+@@ -2644,6 +2663,7 @@ static inline void double_rq_lock(struct rq *rq1, struct rq *rq2)
+ 	BUG_ON(rq1 != rq2);
+ 	raw_spin_rq_lock(rq1);
+ 	__acquire(rq2->lock);	/* Fake it out ;) */
++	double_rq_clock_clear_update(rq1, rq2);
+ }
+ 
+ /*
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 30cd1ca43bcd5..e43bc2a692f5e 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1805,7 +1805,7 @@ int force_sig_pkuerr(void __user *addr, u32 pkey)
+ }
+ #endif
+ 
+-int force_sig_perf(void __user *addr, u32 type, u64 sig_data)
++int send_sig_perf(void __user *addr, u32 type, u64 sig_data)
+ {
+ 	struct kernel_siginfo info;
+ 
+@@ -1817,7 +1817,18 @@ int force_sig_perf(void __user *addr, u32 type, u64 sig_data)
+ 	info.si_perf_data = sig_data;
+ 	info.si_perf_type = type;
+ 
+-	return force_sig_info(&info);
++	/*
++	 * Signals generated by perf events should not terminate the whole
++	 * process if SIGTRAP is blocked, however, delivering the signal
++	 * asynchronously is better than not delivering at all. But tell user
++	 * space if the signal was asynchronous, so it can clearly be
++	 * distinguished from normal synchronous ones.
++	 */
++	info.si_perf_flags = sigismember(&current->blocked, info.si_signo) ?
++				     TRAP_PERF_FLAG_ASYNC :
++				     0;
++
++	return send_sig_info(info.si_signo, &info, current);
+ }
+ 
+ /**
+@@ -3432,6 +3443,7 @@ void copy_siginfo_to_external32(struct compat_siginfo *to,
+ 		to->si_addr = ptr_to_compat(from->si_addr);
+ 		to->si_perf_data = from->si_perf_data;
+ 		to->si_perf_type = from->si_perf_type;
++		to->si_perf_flags = from->si_perf_flags;
+ 		break;
+ 	case SIL_CHLD:
+ 		to->si_pid = from->si_pid;
+@@ -3509,6 +3521,7 @@ static int post_copy_siginfo_from_user32(kernel_siginfo_t *to,
+ 		to->si_addr = compat_ptr(from->si_addr);
+ 		to->si_perf_data = from->si_perf_data;
+ 		to->si_perf_type = from->si_perf_type;
++		to->si_perf_flags = from->si_perf_flags;
+ 		break;
+ 	case SIL_CHLD:
+ 		to->si_pid    = from->si_pid;
+@@ -4722,6 +4735,7 @@ static inline void siginfo_buildtime_checks(void)
+ 	CHECK_OFFSET(si_pkey);
+ 	CHECK_OFFSET(si_perf_data);
+ 	CHECK_OFFSET(si_perf_type);
++	CHECK_OFFSET(si_perf_flags);
+ 
+ 	/* sigpoll */
+ 	CHECK_OFFSET(si_band);
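User space can use the new flag to tell an after-the-fact delivery (the
signal was blocked when the event fired) apart from a synchronous trap on
the offending instruction. A handler sketch, assuming kernel headers new
enough to expose si_perf_flags and TRAP_PERF_FLAG_ASYNC:

    #include <signal.h>
    #include <unistd.h>

    static void sigtrap_handler(int sig, siginfo_t *info, void *uc)
    {
            if (info->si_code != TRAP_PERF)
                    return;
            if (info->si_perf_flags & TRAP_PERF_FLAG_ASYNC)
                    /* registers in 'uc' are unrelated to the event */
                    write(STDERR_FILENO, "async perf SIGTRAP\n", 19);
    }
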
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index d8553f46caa29..6b58fc6813dfc 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -129,7 +129,10 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
+ 	 * out of events when it was updated in between this and the
+ 	 * rcu_dereference() which is accepted risk.
+ 	 */
+-	ret = BPF_PROG_RUN_ARRAY(call->prog_array, ctx, bpf_prog_run);
++	rcu_read_lock();
++	ret = bpf_prog_run_array(rcu_dereference(call->prog_array),
++				 ctx, bpf_prog_run);
++	rcu_read_unlock();
+ 
+  out:
+ 	__this_cpu_dec(bpf_prog_active);
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index af899b058c8d0..9c2031941b460 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -4465,7 +4465,7 @@ int ftrace_func_mapper_add_ip(struct ftrace_func_mapper *mapper,
+  * @ip: The instruction pointer address to remove the data from
+  *
+  * Returns the data if it is found, otherwise NULL.
+- * Note, if the data pointer is used as the data itself, (see 
++ * Note, if the data pointer is used as the data itself, (see
+  * ftrace_func_mapper_find_ip(), then the return value may be meaningless,
+  * if the data pointer was set to zero.
+  */
+@@ -5195,8 +5195,6 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
+ 		goto out_unlock;
+ 
+ 	ret = ftrace_set_filter_ip(&direct_ops, ip, 0, 0);
+-	if (ret)
+-		remove_hash_entry(direct_functions, entry);
+ 
+ 	if (!ret && !(direct_ops.flags & FTRACE_OPS_FL_ENABLED)) {
+ 		ret = register_ftrace_function(&direct_ops);
+@@ -5205,6 +5203,7 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
+ 	}
+ 
+ 	if (ret) {
++		remove_hash_entry(direct_functions, entry);
+ 		kfree(entry);
+ 		if (!direct->count) {
+ 			list_del_rcu(&direct->next);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index f4de111fa18ff..f6fb04d79eba6 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -721,13 +721,16 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
+ 		pos = 0;
+ 
+ 		ret = trace_get_user(&parser, ubuf, cnt, &pos);
+-		if (ret < 0 || !trace_parser_loaded(&parser))
++		if (ret < 0)
+ 			break;
+ 
+ 		read += ret;
+ 		ubuf += ret;
+ 		cnt -= ret;
+ 
++		if (!trace_parser_loaded(&parser))
++			break;
++
+ 		ret = -EINVAL;
+ 		if (kstrtoul(parser.buffer, 0, &val))
+ 			break;
+@@ -753,7 +756,6 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
+ 	if (!nr_pids) {
+ 		/* Cleared the list of pids */
+ 		trace_pid_list_free(pid_list);
+-		read = ret;
+ 		pid_list = NULL;
+ 	}
+ 
+diff --git a/kernel/trace/trace_boot.c b/kernel/trace/trace_boot.c
+index 0580287d7a0d1..778200dd8edea 100644
+--- a/kernel/trace/trace_boot.c
++++ b/kernel/trace/trace_boot.c
+@@ -300,7 +300,7 @@ trace_boot_hist_add_handlers(struct xbc_node *hnode, char **bufp,
+ {
+ 	struct xbc_node *node;
+ 	const char *p, *handler;
+-	int ret;
++	int ret = 0;
+ 
+ 	handler = xbc_node_get_data(hnode);
+ 
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index f97de82d1342a..f5c1dfb4b0117 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -392,12 +392,6 @@ static void test_event_printk(struct trace_event_call *call)
+ 			if (!(dereference_flags & (1ULL << arg)))
+ 				goto next_arg;
+ 
+-			/* Check for __get_sockaddr */;
+-			if (str_has_prefix(fmt + i, "__get_sockaddr(")) {
+-				dereference_flags &= ~(1ULL << arg);
+-				goto next_arg;
+-			}
+-
+ 			/* Find the REC-> in the argument */
+ 			c = strchr(fmt + i, ',');
+ 			r = strstr(fmt + i, "REC->");
+@@ -413,7 +407,14 @@ static void test_event_printk(struct trace_event_call *call)
+ 				a = strchr(fmt + i, '&');
+ 				if ((a && (a < r)) || test_field(r, call))
+ 					dereference_flags &= ~(1ULL << arg);
++			} else if ((r = strstr(fmt + i, "__get_dynamic_array(")) &&
++				   (!c || r < c)) {
++				dereference_flags &= ~(1ULL << arg);
++			} else if ((r = strstr(fmt + i, "__get_sockaddr(")) &&
++				   (!c || r < c)) {
++				dereference_flags &= ~(1ULL << arg);
+ 			}
++
+ 		next_arg:
+ 			i--;
+ 			arg++;
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 44db5ba9cabb8..a0e41906d9ce1 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -2093,8 +2093,11 @@ static int init_var_ref(struct hist_field *ref_field,
+ 	return err;
+  free:
+ 	kfree(ref_field->system);
++	ref_field->system = NULL;
+ 	kfree(ref_field->event_name);
++	ref_field->event_name = NULL;
+ 	kfree(ref_field->name);
++	ref_field->name = NULL;
+ 
+ 	goto out;
+ }
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index afb92e2f0aeab..d8e8167a079ff 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1578,11 +1578,12 @@ static enum hrtimer_restart timerlat_irq(struct hrtimer *timer)
+ 
+ 	trace_timerlat_sample(&s);
+ 
+-	notify_new_max_latency(diff);
+-
+-	if (osnoise_data.stop_tracing)
+-		if (time_to_us(diff) >= osnoise_data.stop_tracing)
++	if (osnoise_data.stop_tracing) {
++		if (time_to_us(diff) >= osnoise_data.stop_tracing) {
+ 			osnoise_stop_tracing();
++			notify_new_max_latency(diff);
++		}
++	}
+ 
+ 	wake_up_process(tlat->kthread);
+ 
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index abcadbe933bb7..a2d301f58ceda 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -895,6 +895,9 @@ trace_selftest_startup_function_graph(struct tracer *trace,
+ 		ret = -1;
+ 		goto out;
+ 	}
++
++	/* Enable tracing on all functions again */
++	ftrace_set_global_filter(NULL, 0, 1);
+ #endif
+ 
+ 	/* Don't test dynamic tracing, the function tracer already did */
+diff --git a/kernel/umh.c b/kernel/umh.c
+index 36c123360ab88..b989736e87074 100644
+--- a/kernel/umh.c
++++ b/kernel/umh.c
+@@ -132,7 +132,7 @@ static void call_usermodehelper_exec_sync(struct subprocess_info *sub_info)
+ 
+ 	/* If SIGCLD is ignored do_wait won't populate the status. */
+ 	kernel_sigaction(SIGCHLD, SIG_DFL);
+-	pid = kernel_thread(call_usermodehelper_exec_async, sub_info, SIGCHLD);
++	pid = user_mode_thread(call_usermodehelper_exec_async, sub_info, SIGCHLD);
+ 	if (pid < 0)
+ 		sub_info->retval = pid;
+ 	else
+@@ -171,8 +171,8 @@ static void call_usermodehelper_exec_work(struct work_struct *work)
+ 		 * want to pollute current->children, and we need a parent
+ 		 * that always ignores SIGCHLD to ensure auto-reaping.
+ 		 */
+-		pid = kernel_thread(call_usermodehelper_exec_async, sub_info,
+-				    CLONE_PARENT | SIGCHLD);
++		pid = user_mode_thread(call_usermodehelper_exec_async, sub_info,
++				       CLONE_PARENT | SIGCHLD);
+ 		if (pid < 0) {
+ 			sub_info->retval = pid;
+ 			umh_complete(sub_info);
+diff --git a/lib/kunit/debugfs.c b/lib/kunit/debugfs.c
+index b71db0abc12bf..1048ef1b8d6ec 100644
+--- a/lib/kunit/debugfs.c
++++ b/lib/kunit/debugfs.c
+@@ -52,7 +52,7 @@ static void debugfs_print_result(struct seq_file *seq,
+ static int debugfs_print_results(struct seq_file *seq, void *v)
+ {
+ 	struct kunit_suite *suite = (struct kunit_suite *)seq->private;
+-	bool success = kunit_suite_has_succeeded(suite);
++	enum kunit_status success = kunit_suite_has_succeeded(suite);
+ 	struct kunit_case *test_case;
+ 
+ 	if (!suite || !suite->log)
+diff --git a/lib/kunit/executor.c b/lib/kunit/executor.c
+index 22640c9ee8198..96f96e42ce062 100644
+--- a/lib/kunit/executor.c
++++ b/lib/kunit/executor.c
+@@ -71,9 +71,13 @@ kunit_filter_tests(struct kunit_suite *const suite, const char *test_glob)
+ 
+ 	/* Use memcpy to workaround copy->name being const. */
+ 	copy = kmalloc(sizeof(*copy), GFP_KERNEL);
++	if (!copy)
++		return ERR_PTR(-ENOMEM);
+ 	memcpy(copy, suite, sizeof(*copy));
+ 
+ 	filtered = kcalloc(n + 1, sizeof(*filtered), GFP_KERNEL);
++	if (!filtered) {
++		kfree(copy);
++		return ERR_PTR(-ENOMEM);
++	}
+ 
+ 	n = 0;
+ 	kunit_suite_for_each_test_case(suite, test_case) {
+@@ -106,14 +110,16 @@ kunit_filter_subsuite(struct kunit_suite * const * const subsuite,
+ 
+ 	filtered = kmalloc_array(n + 1, sizeof(*filtered), GFP_KERNEL);
+ 	if (!filtered)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	n = 0;
+ 	for (i = 0; subsuite[i] != NULL; ++i) {
+ 		if (!glob_match(filter->suite_glob, subsuite[i]->name))
+ 			continue;
+ 		filtered_suite = kunit_filter_tests(subsuite[i], filter->test_glob);
+-		if (filtered_suite)
++		if (IS_ERR(filtered_suite))
++			return ERR_CAST(filtered_suite);
++		else if (filtered_suite)
+ 			filtered[n++] = filtered_suite;
+ 	}
+ 	filtered[n] = NULL;
+@@ -146,7 +152,8 @@ static void kunit_free_suite_set(struct suite_set suite_set)
+ }
+ 
+ static struct suite_set kunit_filter_suites(const struct suite_set *suite_set,
+-					    const char *filter_glob)
++					    const char *filter_glob,
++					    int *err)
+ {
+ 	int i;
+ 	struct kunit_suite * const **copy, * const *filtered_subsuite;
+@@ -166,6 +173,10 @@ static struct suite_set kunit_filter_suites(const struct suite_set *suite_set,
+ 
+ 	for (i = 0; i < max; ++i) {
+ 		filtered_subsuite = kunit_filter_subsuite(suite_set->start[i], &filter);
++		if (IS_ERR(filtered_subsuite)) {
++			*err = PTR_ERR(filtered_subsuite);
++			return filtered;
++		}
+ 		if (filtered_subsuite)
+ 			*copy++ = filtered_subsuite;
+ 	}
+@@ -236,9 +247,15 @@ int kunit_run_all_tests(void)
+ 		.start = __kunit_suites_start,
+ 		.end = __kunit_suites_end,
+ 	};
++	int err = 0;
+ 
+-	if (filter_glob_param)
+-		suite_set = kunit_filter_suites(&suite_set, filter_glob_param);
++	if (filter_glob_param) {
++		suite_set = kunit_filter_suites(&suite_set, filter_glob_param, &err);
++		if (err) {
++			pr_err("kunit executor: error filtering suites: %d\n", err);
++			goto out;
++		}
++	}
+ 
+ 	if (!action_param)
+ 		kunit_exec_run_tests(&suite_set);
+@@ -251,9 +268,10 @@ int kunit_run_all_tests(void)
+ 		kunit_free_suite_set(suite_set);
+ 	}
+ 
+-	kunit_handle_shutdown();
+ 
+-	return 0;
++out:
++	kunit_handle_shutdown();
++	return err;
+ }
+ 
+ #if IS_BUILTIN(CONFIG_KUNIT_TEST)
+diff --git a/lib/kunit/executor_test.c b/lib/kunit/executor_test.c
+index 4ed57fd94e427..eac6ff4802738 100644
+--- a/lib/kunit/executor_test.c
++++ b/lib/kunit/executor_test.c
+@@ -137,14 +137,16 @@ static void filter_suites_test(struct kunit *test)
+ 		.end = suites + 2,
+ 	};
+ 	struct suite_set filtered = {.start = NULL, .end = NULL};
++	int err = 0;
+ 
+ 	/* Emulate two files, each having one suite */
+ 	subsuites[0][0] = alloc_fake_suite(test, "suite0", dummy_test_cases);
+ 	subsuites[1][0] = alloc_fake_suite(test, "suite1", dummy_test_cases);
+ 
+ 	/* Filter out suite1 */
+-	filtered = kunit_filter_suites(&suite_set, "suite0");
++	filtered = kunit_filter_suites(&suite_set, "suite0", &err);
+ 	kfree_subsuites_at_end(test, &filtered); /* let us use ASSERTs without leaking */
++	KUNIT_EXPECT_EQ(test, err, 0);
+ 	KUNIT_ASSERT_EQ(test, filtered.end - filtered.start, (ptrdiff_t)1);
+ 
+ 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, filtered.start);
+diff --git a/lib/string_helpers.c b/lib/string_helpers.c
+index 4f877e9551d5b..5ed3beb066e6d 100644
+--- a/lib/string_helpers.c
++++ b/lib/string_helpers.c
+@@ -757,6 +757,9 @@ char **devm_kasprintf_strarray(struct device *dev, const char *prefix, size_t n)
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 
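++	/* Record the element count and register the devres entry so the
++	 * strings are released automatically when the device is unbound.
++	 */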
++	ptr->n = n;
++	devres_add(dev, ptr);
++
+ 	return ptr->array;
+ }
+ EXPORT_SYMBOL_GPL(devm_kasprintf_strarray);
+diff --git a/mm/cma.c b/mm/cma.c
+index eaa4b5c920a20..4a978e09547a8 100644
+--- a/mm/cma.c
++++ b/mm/cma.c
+@@ -37,6 +37,7 @@
+ 
+ struct cma cma_areas[MAX_CMA_AREAS];
+ unsigned cma_area_count;
++static DEFINE_MUTEX(cma_mutex);
+ 
+ phys_addr_t cma_get_base(const struct cma *cma)
+ {
+@@ -468,9 +469,10 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
+ 		spin_unlock_irq(&cma->lock);
+ 
+ 		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
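++		/*
++		 * Serialize alloc_contig_range(): a concurrent caller
++		 * working on a range that shares a pageblock can see
++		 * pages this caller has temporarily isolated and fail
++		 * spuriously.
++		 */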
++		mutex_lock(&cma_mutex);
+ 		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
+ 				     GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
+-
++		mutex_unlock(&cma_mutex);
+ 		if (ret == 0) {
+ 			page = pfn_to_page(pfn);
+ 			break;
+diff --git a/mm/compaction.c b/mm/compaction.c
+index fe915db6149b9..de42b8e487589 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -1858,6 +1858,8 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc)
+ 
+ 				update_fast_start_pfn(cc, free_pfn);
+ 				pfn = pageblock_start_pfn(free_pfn);
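++				/*
++				 * pageblock_start_pfn() rounds down and can
++				 * return a pfn below the zone; clamp it to
++				 * the zone start.
++				 */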
++				if (pfn < cc->zone->zone_start_pfn)
++					pfn = cc->zone->zone_start_pfn;
+ 				cc->fast_search_fail = 0;
+ 				found_block = true;
+ 				set_pageblock_skip(freepage);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 3fc721789743e..410bbb0aee321 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -6562,7 +6562,14 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
+ 	pud_clear(pud);
+ 	put_page(virt_to_page(ptep));
+ 	mm_dec_nr_pmds(mm);
+-	*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
++	/*
++	 * This update of the passed address optimizes loops that
++	 * sequentially process addresses in increments of huge page size
++	 * (PMD_SIZE in this case).  By clearing the pud, a PUD_SIZE area
++	 * is unmapped.  Update the address to the 'last page' in the
++	 * cleared area so that the calling loop can move to the first
++	 * page past this area.
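++	 *
++	 * For example, with 2MB PMDs and 1GB PUDs (typical x86-64 sizes,
++	 * assumed here for illustration), '*addr |= PUD_SIZE - PMD_SIZE'
++	 * leaves *addr at the last 2MB page of its 1GB region; the
++	 * caller's PMD_SIZE increment then lands on the first page past
++	 * the unmapped area.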
++	 */
++	*addr |= PUD_SIZE - PMD_SIZE;
+ 	return 1;
+ }
+ 
+diff --git a/mm/memremap.c b/mm/memremap.c
+index af0223605e697..2554a6b07007f 100644
+--- a/mm/memremap.c
++++ b/mm/memremap.c
+@@ -214,7 +214,7 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
+ 
+ 	if (!mhp_range_allowed(range->start, range_len(range), !is_private)) {
+ 		error = -EINVAL;
+-		goto err_pfn_remap;
++		goto err_kasan;
+ 	}
+ 
+ 	mem_hotplug_begin();
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 0e42038382c12..5ced6cb260ed1 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -5324,8 +5324,8 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
+ 		page = __rmqueue_pcplist(zone, 0, ac.migratetype, alloc_flags,
+ 								pcp, pcp_list);
+ 		if (unlikely(!page)) {
+-			/* Try and get at least one page */
+-			if (!nr_populated)
++			/* Try and allocate at least one page */
++			if (!nr_account)
+ 				goto failed_irq;
+ 			break;
+ 		}
+diff --git a/mm/page_owner.c b/mm/page_owner.c
+index fb3a05fdebdbf..19bc559e49040 100644
+--- a/mm/page_owner.c
++++ b/mm/page_owner.c
+@@ -168,7 +168,7 @@ static inline void __set_page_owner_handle(struct page_ext *page_ext,
+ 		page_owner->pid = current->pid;
+ 		page_owner->tgid = current->tgid;
+ 		page_owner->ts_nsec = local_clock();
+-		strlcpy(page_owner->comm, current->comm,
++		strscpy(page_owner->comm, current->comm,
+ 			sizeof(page_owner->comm));
+ 		__set_bit(PAGE_EXT_OWNER, &page_ext->flags);
+ 		__set_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags);
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index fe803bee419a9..ac06c9724c7f3 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -481,7 +481,7 @@ static bool hci_setup_sync_conn(struct hci_conn *conn, __u16 handle)
+ 
+ bool hci_setup_sync(struct hci_conn *conn, __u16 handle)
+ {
+-	if (enhanced_sco_capable(conn->hdev))
++	if (enhanced_sync_conn_capable(conn->hdev))
+ 		return hci_enhanced_setup_sync_conn(conn, handle);
+ 
+ 	return hci_setup_sync_conn(conn, handle);
+@@ -943,10 +943,11 @@ static void create_le_conn_complete(struct hci_dev *hdev, void *data, int err)
+ 
+ 	bt_dev_err(hdev, "request failed to create LE connection: err %d", err);
+ 
+-	if (!conn)
++	/* Check if connection is still pending */
++	if (conn != hci_lookup_le_connect(hdev))
+ 		goto done;
+ 
+-	hci_le_conn_failed(conn, err);
++	hci_conn_failed(conn, err);
+ 
+ done:
+ 	hci_dev_unlock(hdev);
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 66451661283c2..af17dfb20e017 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -1835,7 +1835,9 @@ static u8 hci_cc_le_clear_accept_list(struct hci_dev *hdev, void *data,
+ 	if (rp->status)
+ 		return rp->status;
+ 
++	hci_dev_lock(hdev);
+ 	hci_bdaddr_list_clear(&hdev->le_accept_list);
++	hci_dev_unlock(hdev);
+ 
+ 	return rp->status;
+ }
+@@ -1855,8 +1857,10 @@ static u8 hci_cc_le_add_to_accept_list(struct hci_dev *hdev, void *data,
+ 	if (!sent)
+ 		return rp->status;
+ 
++	hci_dev_lock(hdev);
+ 	hci_bdaddr_list_add(&hdev->le_accept_list, &sent->bdaddr,
+ 			    sent->bdaddr_type);
++	hci_dev_unlock(hdev);
+ 
+ 	return rp->status;
+ }
+@@ -1876,8 +1880,10 @@ static u8 hci_cc_le_del_from_accept_list(struct hci_dev *hdev, void *data,
+ 	if (!sent)
+ 		return rp->status;
+ 
++	hci_dev_lock(hdev);
+ 	hci_bdaddr_list_del(&hdev->le_accept_list, &sent->bdaddr,
+ 			    sent->bdaddr_type);
++	hci_dev_unlock(hdev);
+ 
+ 	return rp->status;
+ }
+@@ -1949,9 +1955,11 @@ static u8 hci_cc_le_add_to_resolv_list(struct hci_dev *hdev, void *data,
+ 	if (!sent)
+ 		return rp->status;
+ 
++	hci_dev_lock(hdev);
+ 	hci_bdaddr_list_add_with_irk(&hdev->le_resolv_list, &sent->bdaddr,
+ 				sent->bdaddr_type, sent->peer_irk,
+ 				sent->local_irk);
++	hci_dev_unlock(hdev);
+ 
+ 	return rp->status;
+ }
+@@ -1971,8 +1979,10 @@ static u8 hci_cc_le_del_from_resolv_list(struct hci_dev *hdev, void *data,
+ 	if (!sent)
+ 		return rp->status;
+ 
++	hci_dev_lock(hdev);
+ 	hci_bdaddr_list_del_with_irk(&hdev->le_resolv_list, &sent->bdaddr,
+ 			    sent->bdaddr_type);
++	hci_dev_unlock(hdev);
+ 
+ 	return rp->status;
+ }
+@@ -1987,7 +1997,9 @@ static u8 hci_cc_le_clear_resolv_list(struct hci_dev *hdev, void *data,
+ 	if (rp->status)
+ 		return rp->status;
+ 
++	hci_dev_lock(hdev);
+ 	hci_bdaddr_list_clear(&hdev->le_resolv_list);
++	hci_dev_unlock(hdev);
+ 
+ 	return rp->status;
+ }
+@@ -3225,10 +3237,12 @@ static void hci_conn_request_evt(struct hci_dev *hdev, void *data,
+ 		return;
+ 	}
+ 
++	hci_dev_lock(hdev);
++
+ 	if (hci_bdaddr_list_lookup(&hdev->reject_list, &ev->bdaddr,
+ 				   BDADDR_BREDR)) {
+ 		hci_reject_conn(hdev, &ev->bdaddr);
+-		return;
++		goto unlock;
+ 	}
+ 
+ 	/* Require HCI_CONNECTABLE or an accept list entry to accept the
+@@ -3240,13 +3254,11 @@ static void hci_conn_request_evt(struct hci_dev *hdev, void *data,
+ 	    !hci_bdaddr_list_lookup_with_flags(&hdev->accept_list, &ev->bdaddr,
+ 					       BDADDR_BREDR)) {
+ 		hci_reject_conn(hdev, &ev->bdaddr);
+-		return;
++		goto unlock;
+ 	}
+ 
+ 	/* Connection accepted */
+ 
+-	hci_dev_lock(hdev);
+-
+ 	ie = hci_inquiry_cache_lookup(hdev, &ev->bdaddr);
+ 	if (ie)
+ 		memcpy(ie->data.dev_class, ev->dev_class, 3);
+@@ -3258,8 +3270,7 @@ static void hci_conn_request_evt(struct hci_dev *hdev, void *data,
+ 				    HCI_ROLE_SLAVE);
+ 		if (!conn) {
+ 			bt_dev_err(hdev, "no memory for new connection");
+-			hci_dev_unlock(hdev);
+-			return;
++			goto unlock;
+ 		}
+ 	}
+ 
+@@ -3299,6 +3310,10 @@ static void hci_conn_request_evt(struct hci_dev *hdev, void *data,
+ 		conn->state = BT_CONNECT2;
+ 		hci_connect_cfm(conn, 0);
+ 	}
++
++	return;
++unlock:
++	hci_dev_unlock(hdev);
+ }
+ 
+ static u8 hci_to_mgmt_reason(u8 err)
+@@ -5617,10 +5632,12 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
+ 		status = HCI_ERROR_INVALID_PARAMETERS;
+ 	}
+ 
+-	if (status) {
+-		hci_conn_failed(conn, status);
++	/* All connection failure handling is taken care of by the
++	 * hci_conn_failed function which is triggered by the HCI
++	 * request completion callbacks used for connecting.
++	 */
++	if (status)
+ 		goto unlock;
+-	}
+ 
+ 	if (conn->dst_type == ADDR_LE_DEV_PUBLIC)
+ 		addr_type = BDADDR_LE_PUBLIC;
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index 42c8047a9897d..f4afe482e3004 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -2260,6 +2260,7 @@ static int active_scan(struct hci_request *req, unsigned long opt)
+ 	if (err < 0)
+ 		own_addr_type = ADDR_LE_DEV_PUBLIC;
+ 
++	hci_dev_lock(hdev);
+ 	if (hci_is_adv_monitoring(hdev)) {
+ 		/* Duplicate filter should be disabled when some advertisement
+ 		 * monitor is activated, otherwise AdvMon can only receive one
+@@ -2276,6 +2277,7 @@ static int active_scan(struct hci_request *req, unsigned long opt)
+ 		 */
+ 		filter_dup = LE_SCAN_FILTER_DUP_DISABLE;
+ 	}
++	hci_dev_unlock(hdev);
+ 
+ 	hci_req_start_scan(req, LE_SCAN_ACTIVE, interval,
+ 			   hdev->le_scan_window_discovery, own_addr_type,
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 8eabf41b29939..1111da4e2f2bd 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -574,19 +574,24 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen
+ 	    addr->sa_family != AF_BLUETOOTH)
+ 		return -EINVAL;
+ 
+-	if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND)
+-		return -EBADFD;
++	lock_sock(sk);
++	if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND) {
++		err = -EBADFD;
++		goto done;
++	}
+ 
+-	if (sk->sk_type != SOCK_SEQPACKET)
+-		return -EINVAL;
++	if (sk->sk_type != SOCK_SEQPACKET) {
++		err = -EINVAL;
++		goto done;
++	}
+ 
+ 	hdev = hci_get_route(&sa->sco_bdaddr, &sco_pi(sk)->src, BDADDR_BREDR);
+-	if (!hdev)
+-		return -EHOSTUNREACH;
++	if (!hdev) {
++		err = -EHOSTUNREACH;
++		goto done;
++	}
+ 	hci_dev_lock(hdev);
+ 
+-	lock_sock(sk);
+-
+ 	/* Set destination address and psm */
+ 	bacpy(&sco_pi(sk)->dst, &sa->sco_bdaddr);
+ 
+@@ -885,7 +890,7 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
+ 			err = -EBADFD;
+ 			break;
+ 		}
+-		if (enhanced_sco_capable(hdev) &&
++		if (enhanced_sync_conn_capable(hdev) &&
+ 		    voice.setting == BT_VOICE_TRANSPARENT)
+ 			sco_pi(sk)->codec.id = BT_CODEC_TRANSPARENT;
+ 		hci_dev_put(hdev);
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 2771fd22dc6ae..0784c339cd7d8 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3215,11 +3215,15 @@ int skb_checksum_help(struct sk_buff *skb)
+ 	}
+ 
+ 	offset = skb_checksum_start_offset(skb);
+-	BUG_ON(offset >= skb_headlen(skb));
++	ret = -EINVAL;
++	if (WARN_ON_ONCE(offset >= skb_headlen(skb)))
++		goto out;
++
+ 	csum = skb_checksum(skb, offset, skb->len - offset, 0);
+ 
+ 	offset += skb->csum_offset;
+-	BUG_ON(offset + sizeof(__sum16) > skb_headlen(skb));
++	if (WARN_ON_ONCE(offset + sizeof(__sum16) > skb_headlen(skb)))
++		goto out;
+ 
+ 	ret = skb_ensure_writable(skb, offset + sizeof(__sum16));
+ 	if (ret)
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 60f99e9fb6d12..1f3ce7aea7168 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -5711,7 +5711,7 @@ static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb,
+ 						  &tp->last_oow_ack_time))
+ 				tcp_send_dupack(sk, skb);
+ 		} else if (tcp_reset_check(sk, skb)) {
+-			tcp_reset(sk, skb);
++			goto reset;
+ 		}
+ 		goto discard;
+ 	}
+@@ -5747,17 +5747,16 @@ static bool tcp_validate_incoming(struct sock *sk, struct sk_buff *skb,
+ 		}
+ 
+ 		if (rst_seq_match)
+-			tcp_reset(sk, skb);
+-		else {
+-			/* Disable TFO if RST is out-of-order
+-			 * and no data has been received
+-			 * for current active TFO socket
+-			 */
+-			if (tp->syn_fastopen && !tp->data_segs_in &&
+-			    sk->sk_state == TCP_ESTABLISHED)
+-				tcp_fastopen_active_disable(sk);
+-			tcp_send_challenge_ack(sk);
+-		}
++			goto reset;
++
++		/* Disable TFO if RST is out-of-order
++		 * and no data has been received
++		 * for current active TFO socket
++		 */
++		if (tp->syn_fastopen && !tp->data_segs_in &&
++		    sk->sk_state == TCP_ESTABLISHED)
++			tcp_fastopen_active_disable(sk);
++		tcp_send_challenge_ack(sk);
+ 		goto discard;
+ 	}
+ 
+@@ -5782,6 +5781,11 @@ syn_challenge:
+ discard:
+ 	tcp_drop(sk, skb);
+ 	return false;
++
++reset:
++	tcp_reset(sk, skb);
++	__kfree_skb(skb);
++	return false;
+ }
+ 
+ /*
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index b225041765885..51e77dc6571a2 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -797,6 +797,7 @@ static void dev_forward_change(struct inet6_dev *idev)
+ {
+ 	struct net_device *dev;
+ 	struct inet6_ifaddr *ifa;
++	LIST_HEAD(tmp_addr_list);
+ 
+ 	if (!idev)
+ 		return;
+@@ -815,14 +816,24 @@ static void dev_forward_change(struct inet6_dev *idev)
+ 		}
+ 	}
+ 
++	read_lock_bh(&idev->lock);
+ 	list_for_each_entry(ifa, &idev->addr_list, if_list) {
+ 		if (ifa->flags&IFA_F_TENTATIVE)
+ 			continue;
++		list_add_tail(&ifa->if_list_aux, &tmp_addr_list);
++	}
++	read_unlock_bh(&idev->lock);
++
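++	/* Process the snapshot with idev->lock released; the join/leave
++	 * helpers below may take the lock themselves.
++	 */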
++	while (!list_empty(&tmp_addr_list)) {
++		ifa = list_first_entry(&tmp_addr_list,
++				       struct inet6_ifaddr, if_list_aux);
++		list_del(&ifa->if_list_aux);
+ 		if (idev->cnf.forwarding)
+ 			addrconf_join_anycast(ifa);
+ 		else
+ 			addrconf_leave_anycast(ifa);
+ 	}
++
+ 	inet6_netconf_notify_devconf(dev_net(dev), RTM_NEWNETCONF,
+ 				     NETCONFA_FORWARDING,
+ 				     dev->ifindex, &idev->cnf);
+@@ -3728,7 +3739,8 @@ static int addrconf_ifdown(struct net_device *dev, bool unregister)
+ 	unsigned long event = unregister ? NETDEV_UNREGISTER : NETDEV_DOWN;
+ 	struct net *net = dev_net(dev);
+ 	struct inet6_dev *idev;
+-	struct inet6_ifaddr *ifa, *tmp;
++	struct inet6_ifaddr *ifa;
++	LIST_HEAD(tmp_addr_list);
+ 	bool keep_addr = false;
+ 	bool was_ready;
+ 	int state, i;
+@@ -3820,16 +3832,23 @@ restart:
+ 		write_lock_bh(&idev->lock);
+ 	}
+ 
+-	list_for_each_entry_safe(ifa, tmp, &idev->addr_list, if_list) {
++	list_for_each_entry(ifa, &idev->addr_list, if_list)
++		list_add_tail(&ifa->if_list_aux, &tmp_addr_list);
++	write_unlock_bh(&idev->lock);
++
++	while (!list_empty(&tmp_addr_list)) {
+ 		struct fib6_info *rt = NULL;
+ 		bool keep;
+ 
++		ifa = list_first_entry(&tmp_addr_list,
++				       struct inet6_ifaddr, if_list_aux);
++		list_del(&ifa->if_list_aux);
++
+ 		addrconf_del_dad_work(ifa);
+ 
+ 		keep = keep_addr && (ifa->flags & IFA_F_PERMANENT) &&
+ 			!addr_is_local(&ifa->addr);
+ 
+-		write_unlock_bh(&idev->lock);
+ 		spin_lock_bh(&ifa->lock);
+ 
+ 		if (keep) {
+@@ -3860,15 +3879,14 @@ restart:
+ 			addrconf_leave_solict(ifa->idev, &ifa->addr);
+ 		}
+ 
+-		write_lock_bh(&idev->lock);
+ 		if (!keep) {
++			write_lock_bh(&idev->lock);
+ 			list_del_rcu(&ifa->if_list);
++			write_unlock_bh(&idev->lock);
+ 			in6_ifa_put(ifa);
+ 		}
+ 	}
+ 
+-	write_unlock_bh(&idev->lock);
+-
+ 	/* Step 5: Discard anycast and multicast list */
+ 	if (unregister) {
+ 		ipv6_ac_destroy_dev(idev);
+@@ -4201,7 +4219,8 @@ static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id,
+ 	send_rs = send_mld &&
+ 		  ipv6_accept_ra(ifp->idev) &&
+ 		  ifp->idev->cnf.rtr_solicits != 0 &&
+-		  (dev->flags&IFF_LOOPBACK) == 0;
++		  (dev->flags & IFF_LOOPBACK) == 0 &&
++		  (dev->type != ARPHRD_TUNNEL);
+ 	read_unlock_bh(&ifp->idev->lock);
+ 
+ 	/* While dad is in progress mld report's source address is in6_addrany.
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index 206f66310a88d..0324e26850169 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -218,11 +218,11 @@ ipv4_connected:
+ 				err = -EINVAL;
+ 				goto out;
+ 			}
+-			sk->sk_bound_dev_if = usin->sin6_scope_id;
++			WRITE_ONCE(sk->sk_bound_dev_if, usin->sin6_scope_id);
+ 		}
+ 
+ 		if (!sk->sk_bound_dev_if && (addr_type & IPV6_ADDR_MULTICAST))
+-			sk->sk_bound_dev_if = np->mcast_oif;
++			WRITE_ONCE(sk->sk_bound_dev_if, np->mcast_oif);
+ 
+ 		/* Connect to link-local address requires an interface */
+ 		if (!sk->sk_bound_dev_if) {
+@@ -798,7 +798,7 @@ int ip6_datagram_send_ctl(struct net *net, struct sock *sk,
+ 			if (src_idx) {
+ 				if (fl6->flowi6_oif &&
+ 				    src_idx != fl6->flowi6_oif &&
+-				    (sk->sk_bound_dev_if != fl6->flowi6_oif ||
++				    (READ_ONCE(sk->sk_bound_dev_if) != fl6->flowi6_oif ||
+ 				     !sk_dev_equal_l3scope(sk, src_idx)))
+ 					return -EINVAL;
+ 				fl6->flowi6_oif = src_idx;
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 7f0fa9bd9ffe0..a535c3f2e4af4 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -105,7 +105,7 @@ static int compute_score(struct sock *sk, struct net *net,
+ 			 const struct in6_addr *daddr, unsigned short hnum,
+ 			 int dif, int sdif)
+ {
+-	int score;
++	int bound_dev_if, score;
+ 	struct inet_sock *inet;
+ 	bool dev_match;
+ 
+@@ -132,10 +132,11 @@ static int compute_score(struct sock *sk, struct net *net,
+ 		score++;
+ 	}
+ 
+-	dev_match = udp_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif);
++	bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);
++	dev_match = udp_sk_bound_dev_eq(net, bound_dev_if, dif, sdif);
+ 	if (!dev_match)
+ 		return -1;
+-	if (sk->sk_bound_dev_if)
++	if (bound_dev_if)
+ 		score++;
+ 
+ 	if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())
+@@ -789,7 +790,7 @@ static bool __udp_v6_is_mcast_sock(struct net *net, struct sock *sk,
+ 	    (inet->inet_dport && inet->inet_dport != rmt_port) ||
+ 	    (!ipv6_addr_any(&sk->sk_v6_daddr) &&
+ 		    !ipv6_addr_equal(&sk->sk_v6_daddr, rmt_addr)) ||
+-	    !udp_sk_bound_dev_eq(net, sk->sk_bound_dev_if, dif, sdif) ||
++	    !udp_sk_bound_dev_eq(net, READ_ONCE(sk->sk_bound_dev_if), dif, sdif) ||
+ 	    (!ipv6_addr_any(&sk->sk_v6_rcv_saddr) &&
+ 		    !ipv6_addr_equal(&sk->sk_v6_rcv_saddr, loc_addr)))
+ 		return false;
+@@ -1433,7 +1434,7 @@ do_udp_sendmsg:
+ 	}
+ 
+ 	if (!fl6->flowi6_oif)
+-		fl6->flowi6_oif = sk->sk_bound_dev_if;
++		fl6->flowi6_oif = READ_ONCE(sk->sk_bound_dev_if);
+ 
+ 	if (!fl6->flowi6_oif)
+ 		fl6->flowi6_oif = np->sticky_pktinfo.ipi6_ifindex;
+diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c
+index e26d42de14ec2..3bb88c6b6d849 100644
+--- a/net/mac80211/chan.c
++++ b/net/mac80211/chan.c
+@@ -1749,12 +1749,9 @@ int ieee80211_vif_use_reserved_context(struct ieee80211_sub_if_data *sdata)
+ 
+ 	if (new_ctx->replace_state == IEEE80211_CHANCTX_REPLACE_NONE) {
+ 		if (old_ctx)
+-			err = ieee80211_vif_use_reserved_reassign(sdata);
+-		else
+-			err = ieee80211_vif_use_reserved_assign(sdata);
++			return ieee80211_vif_use_reserved_reassign(sdata);
+ 
+-		if (err)
+-			return err;
++		return ieee80211_vif_use_reserved_assign(sdata);
+ 	}
+ 
+ 	/*
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index d4a7ba4a82025..e58aa6fa58f24 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1148,6 +1148,9 @@ struct tpt_led_trigger {
+  *	a scan complete for an aborted scan.
+  * @SCAN_HW_CANCELLED: Set for our scan work function when the scan is being
+  *	cancelled.
++ * @SCAN_BEACON_WAIT: Set whenever we're passive scanning because of radar/no-IR
++ *	and could send a probe request after receiving a beacon.
++ * @SCAN_BEACON_DONE: Beacon received, we can now send a probe request
+  */
+ enum {
+ 	SCAN_SW_SCANNING,
+@@ -1156,6 +1159,8 @@ enum {
+ 	SCAN_COMPLETED,
+ 	SCAN_ABORTED,
+ 	SCAN_HW_CANCELLED,
++	SCAN_BEACON_WAIT,
++	SCAN_BEACON_DONE,
+ };
+ 
+ /**
+diff --git a/net/mac80211/rc80211_minstrel_ht.c b/net/mac80211/rc80211_minstrel_ht.c
+index 9c6ace858107a..5a6bf46a42483 100644
+--- a/net/mac80211/rc80211_minstrel_ht.c
++++ b/net/mac80211/rc80211_minstrel_ht.c
+@@ -362,6 +362,9 @@ minstrel_ht_get_stats(struct minstrel_priv *mp, struct minstrel_ht_sta *mi,
+ 
+ 	group = MINSTREL_CCK_GROUP;
+ 	for (idx = 0; idx < ARRAY_SIZE(mp->cck_rates); idx++) {
++		if (!(mi->supported[group] & BIT(idx)))
++			continue;
++
+ 		if (rate->idx != mp->cck_rates[idx])
+ 			continue;
+ 
+diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
+index 5e6b275afc9e6..b698756887eb5 100644
+--- a/net/mac80211/scan.c
++++ b/net/mac80211/scan.c
+@@ -281,6 +281,16 @@ void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb)
+ 	if (likely(!sdata1 && !sdata2))
+ 		return;
+ 
++	if (test_and_clear_bit(SCAN_BEACON_WAIT, &local->scanning)) {
++		/*
++		 * we were passive scanning because of radar/no-IR, but
++		 * the beacon/proberesp rx gives us an opportunity to upgrade
++		 * to active scan
++		 */
++		set_bit(SCAN_BEACON_DONE, &local->scanning);
++		ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0);
++	}
++
+ 	if (ieee80211_is_probe_resp(mgmt->frame_control)) {
+ 		struct cfg80211_scan_request *scan_req;
+ 		struct cfg80211_sched_scan_request *sched_scan_req;
+@@ -787,6 +797,8 @@ static int __ieee80211_start_scan(struct ieee80211_sub_if_data *sdata,
+ 						IEEE80211_CHAN_RADAR)) ||
+ 		    !req->n_ssids) {
+ 			next_delay = IEEE80211_PASSIVE_CHANNEL_TIME;
++			if (req->n_ssids)
++				set_bit(SCAN_BEACON_WAIT, &local->scanning);
+ 		} else {
+ 			ieee80211_scan_state_send_probe(local, &next_delay);
+ 			next_delay = IEEE80211_CHANNEL_TIME;
+@@ -998,6 +1010,8 @@ set_channel:
+ 	    !scan_req->n_ssids) {
+ 		*next_delay = IEEE80211_PASSIVE_CHANNEL_TIME;
+ 		local->next_scan_state = SCAN_DECISION;
++		if (scan_req->n_ssids)
++			set_bit(SCAN_BEACON_WAIT, &local->scanning);
+ 		return;
+ 	}
+ 
+@@ -1090,6 +1104,8 @@ void ieee80211_scan_work(struct work_struct *work)
+ 			goto out;
+ 	}
+ 
++	clear_bit(SCAN_BEACON_WAIT, &local->scanning);
++
+ 	/*
+ 	 * as long as no delay is required advance immediately
+ 	 * without scheduling a new work
+@@ -1100,6 +1116,10 @@ void ieee80211_scan_work(struct work_struct *work)
+ 			goto out_complete;
+ 		}
+ 
++		if (test_and_clear_bit(SCAN_BEACON_DONE, &local->scanning) &&
++		    local->next_scan_state == SCAN_DECISION)
++			local->next_scan_state = SCAN_SEND_PROBE;
++
+ 		switch (local->next_scan_state) {
+ 		case SCAN_DECISION:
+ 			/* if no more bands/channels left, complete scan */
+diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
+index aa51b100e0335..4d6a61acc4870 100644
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -261,14 +261,25 @@ void mptcp_pm_rm_addr_received(struct mptcp_sock *msk,
+ 	spin_unlock_bh(&pm->lock);
+ }
+ 
+-void mptcp_pm_mp_prio_received(struct sock *sk, u8 bkup)
++void mptcp_pm_mp_prio_received(struct sock *ssk, u8 bkup)
+ {
+-	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
++	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
++	struct sock *sk = subflow->conn;
++	struct mptcp_sock *msk;
+ 
+ 	pr_debug("subflow->backup=%d, bkup=%d\n", subflow->backup, bkup);
+-	subflow->backup = bkup;
++	msk = mptcp_sk(sk);
++	if (subflow->backup != bkup) {
++		subflow->backup = bkup;
++		mptcp_data_lock(sk);
++		if (!sock_owned_by_user(sk))
++			msk->last_snd = NULL;
++		else
++			__set_bit(MPTCP_RESET_SCHEDULER,  &msk->cb_flags);
++		mptcp_data_unlock(sk);
++	}
+ 
+-	mptcp_event(MPTCP_EVENT_SUB_PRIORITY, mptcp_sk(subflow->conn), sk, GFP_ATOMIC);
++	mptcp_event(MPTCP_EVENT_SUB_PRIORITY, msk, ssk, GFP_ATOMIC);
+ }
+ 
+ void mptcp_pm_mp_fail_received(struct sock *sk, u64 fail_seq)
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index b5e8de6f75076..e3dcc5501579f 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -727,6 +727,8 @@ static int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
+ 		if (!addresses_equal(&local, addr, addr->port))
+ 			continue;
+ 
++		if (subflow->backup != bkup)
++			msk->last_snd = NULL;
+ 		subflow->backup = bkup;
+ 		subflow->send_mp_prio = 1;
+ 		subflow->request_bkup = bkup;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 0cbea3b6d0a42..8f54293c1d887 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -3092,15 +3092,19 @@ static void mptcp_release_cb(struct sock *sk)
+ 		spin_lock_bh(&sk->sk_lock.slock);
+ 	}
+ 
+-	/* be sure to set the current sk state before tacking actions
+-	 * depending on sk_state
+-	 */
+-	if (__test_and_clear_bit(MPTCP_CONNECTED, &msk->cb_flags))
+-		__mptcp_set_connected(sk);
+ 	if (__test_and_clear_bit(MPTCP_CLEAN_UNA, &msk->cb_flags))
+ 		__mptcp_clean_una_wakeup(sk);
+-	if (__test_and_clear_bit(MPTCP_ERROR_REPORT, &msk->cb_flags))
+-		__mptcp_error_report(sk);
++	if (unlikely(msk->cb_flags)) {
++		/* be sure to set the current sk state before taking actions
++		 * depending on sk_state, that is processing MPTCP_ERROR_REPORT
++		 */
++		if (__test_and_clear_bit(MPTCP_CONNECTED, &msk->cb_flags))
++			__mptcp_set_connected(sk);
++		if (__test_and_clear_bit(MPTCP_ERROR_REPORT, &msk->cb_flags))
++			__mptcp_error_report(sk);
++		if (__test_and_clear_bit(MPTCP_RESET_SCHEDULER, &msk->cb_flags))
++			msk->last_snd = NULL;
++	}
+ 
+ 	__mptcp_update_rmem(sk);
+ }
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 5655a63aa6a8b..9ac63fa4866ef 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -124,6 +124,7 @@
+ #define MPTCP_RETRANSMIT	4
+ #define MPTCP_FLUSH_JOIN_LIST	5
+ #define MPTCP_CONNECTED		6
++#define MPTCP_RESET_SCHEDULER	7
+ 
+ static inline bool before64(__u64 seq1, __u64 seq2)
+ {
+diff --git a/net/nfc/core.c b/net/nfc/core.c
+index 5b286e1e0a6ff..6ff3e10ff8e35 100644
+--- a/net/nfc/core.c
++++ b/net/nfc/core.c
+@@ -1166,6 +1166,7 @@ void nfc_unregister_device(struct nfc_dev *dev)
+ 	if (dev->rfkill) {
+ 		rfkill_unregister(dev->rfkill);
+ 		rfkill_destroy(dev->rfkill);
++		dev->rfkill = NULL;
+ 	}
+ 	dev->shutting_down = true;
+ 	device_unlock(&dev->dev);
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 969e532f77a90..f2d593e27b64f 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -68,7 +68,7 @@ struct rxrpc_net {
+ 	struct proc_dir_entry	*proc_net;	/* Subdir in /proc/net */
+ 	u32			epoch;		/* Local epoch for detecting local-end reset */
+ 	struct list_head	calls;		/* List of calls active in this namespace */
+-	rwlock_t		call_lock;	/* Lock for ->calls */
++	spinlock_t		call_lock;	/* Lock for ->calls */
+ 	atomic_t		nr_calls;	/* Count of allocated calls */
+ 
+ 	atomic_t		nr_conns;
+@@ -676,13 +676,12 @@ struct rxrpc_call {
+ 
+ 	spinlock_t		input_lock;	/* Lock for packet input to this call */
+ 
+-	/* receive-phase ACK management */
++	/* Receive-phase ACK management (ACKs we send). */
+ 	u8			ackr_reason;	/* reason to ACK */
+ 	rxrpc_serial_t		ackr_serial;	/* serial of packet being ACK'd */
+-	rxrpc_serial_t		ackr_first_seq;	/* first sequence number received */
+-	rxrpc_seq_t		ackr_prev_seq;	/* previous sequence number received */
+-	rxrpc_seq_t		ackr_consumed;	/* Highest packet shown consumed */
+-	rxrpc_seq_t		ackr_seen;	/* Highest packet shown seen */
++	rxrpc_seq_t		ackr_highest_seq; /* Highest sequence number received */
++	atomic_t		ackr_nr_unacked; /* Number of unacked packets */
++	atomic_t		ackr_nr_consumed; /* Number of packets needing hard ACK */
+ 
+ 	/* RTT management */
+ 	rxrpc_serial_t		rtt_serial[4];	/* Serial number of DATA or PING sent */
+@@ -692,8 +691,10 @@ struct rxrpc_call {
+ #define RXRPC_CALL_RTT_AVAIL_MASK	0xf
+ #define RXRPC_CALL_RTT_PEND_SHIFT	8
+ 
+-	/* transmission-phase ACK management */
++	/* Transmission-phase ACK management (ACKs we've received). */
+ 	ktime_t			acks_latest_ts;	/* Timestamp of latest ACK received */
++	rxrpc_seq_t		acks_first_seq;	/* first sequence number received */
++	rxrpc_seq_t		acks_prev_seq;	/* Highest previousPacket received */
+ 	rxrpc_seq_t		acks_lowest_nak; /* Lowest NACK in the buffer (or ==tx_hard_ack) */
+ 	rxrpc_seq_t		acks_lost_top;	/* tx_top at the time lost-ack ping sent */
+ 	rxrpc_serial_t		acks_lost_ping;	/* Serial number of probe ACK */
+diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
+index 1ae90fb979362..8b24ffbc72efb 100644
+--- a/net/rxrpc/call_accept.c
++++ b/net/rxrpc/call_accept.c
+@@ -140,9 +140,9 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
+ 	write_unlock(&rx->call_lock);
+ 
+ 	rxnet = call->rxnet;
+-	write_lock(&rxnet->call_lock);
+-	list_add_tail(&call->link, &rxnet->calls);
+-	write_unlock(&rxnet->call_lock);
++	spin_lock_bh(&rxnet->call_lock);
++	list_add_tail_rcu(&call->link, &rxnet->calls);
++	spin_unlock_bh(&rxnet->call_lock);
+ 
+ 	b->call_backlog[call_head] = call;
+ 	smp_store_release(&b->call_backlog_head, (call_head + 1) & (size - 1));
+diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
+index 22e05de5d1ca9..f8ecad2b730e8 100644
+--- a/net/rxrpc/call_event.c
++++ b/net/rxrpc/call_event.c
+@@ -377,9 +377,9 @@ recheck_state:
+ 		if (test_bit(RXRPC_CALL_RX_HEARD, &call->flags) &&
+ 		    (int)call->conn->hi_serial - (int)call->rx_serial > 0) {
+ 			trace_rxrpc_call_reset(call);
+-			rxrpc_abort_call("EXP", call, 0, RX_USER_ABORT, -ECONNRESET);
++			rxrpc_abort_call("EXP", call, 0, RX_CALL_DEAD, -ECONNRESET);
+ 		} else {
+-			rxrpc_abort_call("EXP", call, 0, RX_USER_ABORT, -ETIME);
++			rxrpc_abort_call("EXP", call, 0, RX_CALL_TIMEOUT, -ETIME);
+ 		}
+ 		set_bit(RXRPC_CALL_EV_ABORT, &call->events);
+ 		goto recheck_state;
+@@ -406,7 +406,8 @@ recheck_state:
+ 		goto recheck_state;
+ 	}
+ 
+-	if (test_and_clear_bit(RXRPC_CALL_EV_RESEND, &call->events)) {
++	if (test_and_clear_bit(RXRPC_CALL_EV_RESEND, &call->events) &&
++	    call->state != RXRPC_CALL_CLIENT_RECV_REPLY) {
+ 		rxrpc_resend(call, now);
+ 		goto recheck_state;
+ 	}
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index 043508fd8d8a5..25c9a2cbf048c 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -337,9 +337,9 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
+ 	write_unlock(&rx->call_lock);
+ 
+ 	rxnet = call->rxnet;
+-	write_lock(&rxnet->call_lock);
+-	list_add_tail(&call->link, &rxnet->calls);
+-	write_unlock(&rxnet->call_lock);
++	spin_lock_bh(&rxnet->call_lock);
++	list_add_tail_rcu(&call->link, &rxnet->calls);
++	spin_unlock_bh(&rxnet->call_lock);
+ 
+ 	/* From this point on, the call is protected by its own lock. */
+ 	release_sock(&rx->sk);
+@@ -631,9 +631,9 @@ void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
+ 		ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);
+ 
+ 		if (!list_empty(&call->link)) {
+-			write_lock(&rxnet->call_lock);
++			spin_lock_bh(&rxnet->call_lock);
+ 			list_del_init(&call->link);
+-			write_unlock(&rxnet->call_lock);
++			spin_unlock_bh(&rxnet->call_lock);
+ 		}
+ 
+ 		rxrpc_cleanup_call(call);
+@@ -705,7 +705,7 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet)
+ 	_enter("");
+ 
+ 	if (!list_empty(&rxnet->calls)) {
+-		write_lock(&rxnet->call_lock);
++		spin_lock_bh(&rxnet->call_lock);
+ 
+ 		while (!list_empty(&rxnet->calls)) {
+ 			call = list_entry(rxnet->calls.next,
+@@ -720,12 +720,12 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet)
+ 			       rxrpc_call_states[call->state],
+ 			       call->flags, call->events);
+ 
+-			write_unlock(&rxnet->call_lock);
++			spin_unlock_bh(&rxnet->call_lock);
+ 			cond_resched();
+-			write_lock(&rxnet->call_lock);
++			spin_lock_bh(&rxnet->call_lock);
+ 		}
+ 
+-		write_unlock(&rxnet->call_lock);
++		spin_unlock_bh(&rxnet->call_lock);
+ 	}
+ 
+ 	atomic_dec(&rxnet->nr_calls);
+diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
+index b2159dbf5412c..660cd9b1a4658 100644
+--- a/net/rxrpc/conn_object.c
++++ b/net/rxrpc/conn_object.c
+@@ -183,7 +183,7 @@ void __rxrpc_disconnect_call(struct rxrpc_connection *conn,
+ 			chan->last_type = RXRPC_PACKET_TYPE_ABORT;
+ 			break;
+ 		default:
+-			chan->last_abort = RX_USER_ABORT;
++			chan->last_abort = RX_CALL_DEAD;
+ 			chan->last_type = RXRPC_PACKET_TYPE_ABORT;
+ 			break;
+ 		}
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index dc201363f2c48..3521ebd0ee41c 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -412,8 +412,8 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ {
+ 	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+ 	enum rxrpc_call_state state;
+-	unsigned int j, nr_subpackets;
+-	rxrpc_serial_t serial = sp->hdr.serial, ack_serial = 0;
++	unsigned int j, nr_subpackets, nr_unacked = 0;
++	rxrpc_serial_t serial = sp->hdr.serial, ack_serial = serial;
+ 	rxrpc_seq_t seq0 = sp->hdr.seq, hard_ack;
+ 	bool immediate_ack = false, jumbo_bad = false;
+ 	u8 ack = 0;
+@@ -453,7 +453,6 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ 	    !rxrpc_receiving_reply(call))
+ 		goto unlock;
+ 
+-	call->ackr_prev_seq = seq0;
+ 	hard_ack = READ_ONCE(call->rx_hard_ack);
+ 
+ 	nr_subpackets = sp->nr_subpackets;
+@@ -534,6 +533,9 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ 			ack_serial = serial;
+ 		}
+ 
++		if (after(seq0, call->ackr_highest_seq))
++			call->ackr_highest_seq = seq0;
++
+ 		/* Queue the packet.  We use a couple of memory barriers here as need
+ 		 * to make sure that rx_top is perceived to be set after the buffer
+ 		 * pointer and that the buffer pointer is set after the annotation and
+@@ -567,6 +569,8 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ 			sp = NULL;
+ 		}
+ 
++		nr_unacked++;
++
+ 		if (last) {
+ 			set_bit(RXRPC_CALL_RX_LAST, &call->flags);
+ 			if (!ack) {
+@@ -586,9 +590,14 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
+ 			}
+ 			call->rx_expect_next = seq + 1;
+ 		}
++		if (!ack)
++			ack_serial = serial;
+ 	}
+ 
+ ack:
++	if (atomic_add_return(nr_unacked, &call->ackr_nr_unacked) > 2 && !ack)
++		ack = RXRPC_ACK_IDLE;
++
+ 	if (ack)
+ 		rxrpc_propose_ACK(call, ack, ack_serial,
+ 				  immediate_ack, true,
+@@ -812,7 +821,7 @@ static void rxrpc_input_soft_acks(struct rxrpc_call *call, u8 *acks,
+ static bool rxrpc_is_ack_valid(struct rxrpc_call *call,
+ 			       rxrpc_seq_t first_pkt, rxrpc_seq_t prev_pkt)
+ {
+-	rxrpc_seq_t base = READ_ONCE(call->ackr_first_seq);
++	rxrpc_seq_t base = READ_ONCE(call->acks_first_seq);
+ 
+ 	if (after(first_pkt, base))
+ 		return true; /* The window advanced */
+@@ -820,7 +829,7 @@ static bool rxrpc_is_ack_valid(struct rxrpc_call *call,
+ 	if (before(first_pkt, base))
+ 		return false; /* firstPacket regressed */
+ 
+-	if (after_eq(prev_pkt, call->ackr_prev_seq))
++	if (after_eq(prev_pkt, call->acks_prev_seq))
+ 		return true; /* previousPacket hasn't regressed. */
+ 
+ 	/* Some rx implementations put a serial number in previousPacket. */
+@@ -903,11 +912,38 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ 				  rxrpc_propose_ack_respond_to_ack);
+ 	}
+ 
++	/* If we get an EXCEEDS_WINDOW ACK from the server, it probably
++	 * indicates that the client address changed due to NAT.  The server
++	 * lost the call because it switched to a different peer.
++	 */
++	if (unlikely(buf.ack.reason == RXRPC_ACK_EXCEEDS_WINDOW) &&
++	    first_soft_ack == 1 &&
++	    prev_pkt == 0 &&
++	    rxrpc_is_client_call(call)) {
++		rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED,
++					  0, -ENETRESET);
++		return;
++	}
++
++	/* If we get an OUT_OF_SEQUENCE ACK from the server, that can also
++	 * indicate a change of address.  However, we can retransmit the call
++	 * if we still have it buffered from the beginning.
++	 */
++	if (unlikely(buf.ack.reason == RXRPC_ACK_OUT_OF_SEQUENCE) &&
++	    first_soft_ack == 1 &&
++	    prev_pkt == 0 &&
++	    call->tx_hard_ack == 0 &&
++	    rxrpc_is_client_call(call)) {
++		rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED,
++					  0, -ENETRESET);
++		return;
++	}
++
+ 	/* Discard any out-of-order or duplicate ACKs (outside lock). */
+ 	if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
+ 		trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial,
+-					   first_soft_ack, call->ackr_first_seq,
+-					   prev_pkt, call->ackr_prev_seq);
++					   first_soft_ack, call->acks_first_seq,
++					   prev_pkt, call->acks_prev_seq);
+ 		return;
+ 	}
+ 
+@@ -922,14 +958,14 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ 	/* Discard any out-of-order or duplicate ACKs (inside lock). */
+ 	if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
+ 		trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial,
+-					   first_soft_ack, call->ackr_first_seq,
+-					   prev_pkt, call->ackr_prev_seq);
++					   first_soft_ack, call->acks_first_seq,
++					   prev_pkt, call->acks_prev_seq);
+ 		goto out;
+ 	}
+ 	call->acks_latest_ts = skb->tstamp;
+ 
+-	call->ackr_first_seq = first_soft_ack;
+-	call->ackr_prev_seq = prev_pkt;
++	call->acks_first_seq = first_soft_ack;
++	call->acks_prev_seq = prev_pkt;
+ 
+ 	/* Parse rwind and mtu sizes if provided. */
+ 	if (buf.info.rxMTU)
+diff --git a/net/rxrpc/net_ns.c b/net/rxrpc/net_ns.c
+index cc7e30733feb0..e4d6d432515bc 100644
+--- a/net/rxrpc/net_ns.c
++++ b/net/rxrpc/net_ns.c
+@@ -50,7 +50,7 @@ static __net_init int rxrpc_init_net(struct net *net)
+ 	rxnet->epoch |= RXRPC_RANDOM_EPOCH;
+ 
+ 	INIT_LIST_HEAD(&rxnet->calls);
+-	rwlock_init(&rxnet->call_lock);
++	spin_lock_init(&rxnet->call_lock);
+ 	atomic_set(&rxnet->nr_calls, 1);
+ 
+ 	atomic_set(&rxnet->nr_conns, 1);
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index a45c83f22236e..9683617db7049 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -74,11 +74,18 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn,
+ 				 u8 reason)
+ {
+ 	rxrpc_serial_t serial;
++	unsigned int tmp;
+ 	rxrpc_seq_t hard_ack, top, seq;
+ 	int ix;
+ 	u32 mtu, jmax;
+ 	u8 *ackp = pkt->acks;
+ 
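++	/* Suppress a DELAY/IDLE ACK if no packets have been received or
++	 * consumed since the last ACK went out.
++	 */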
++	tmp = atomic_xchg(&call->ackr_nr_unacked, 0);
++	tmp |= atomic_xchg(&call->ackr_nr_consumed, 0);
++	if (!tmp && (reason == RXRPC_ACK_DELAY ||
++		     reason == RXRPC_ACK_IDLE))
++		return 0;
++
+ 	/* Barrier against rxrpc_input_data(). */
+ 	serial = call->ackr_serial;
+ 	hard_ack = READ_ONCE(call->rx_hard_ack);
+@@ -89,7 +96,7 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn,
+ 	pkt->ack.bufferSpace	= htons(8);
+ 	pkt->ack.maxSkew	= htons(0);
+ 	pkt->ack.firstPacket	= htonl(hard_ack + 1);
+-	pkt->ack.previousPacket	= htonl(call->ackr_prev_seq);
++	pkt->ack.previousPacket	= htonl(call->ackr_highest_seq);
+ 	pkt->ack.serial		= htonl(serial);
+ 	pkt->ack.reason		= reason;
+ 	pkt->ack.nAcks		= top - hard_ack;
+@@ -223,6 +230,10 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ 	n = rxrpc_fill_out_ack(conn, call, pkt, &hard_ack, &top, reason);
+ 
+ 	spin_unlock_bh(&call->lock);
++	if (n == 0) {
++		kfree(pkt);
++		return 0;
++	}
+ 
+ 	iov[0].iov_base	= pkt;
+ 	iov[0].iov_len	= sizeof(pkt->whdr) + sizeof(pkt->ack) + n;
+@@ -259,13 +270,6 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping,
+ 					  ntohl(pkt->ack.serial),
+ 					  false, true,
+ 					  rxrpc_propose_ack_retry_tx);
+-		} else {
+-			spin_lock_bh(&call->lock);
+-			if (after(hard_ack, call->ackr_consumed))
+-				call->ackr_consumed = hard_ack;
+-			if (after(top, call->ackr_seen))
+-				call->ackr_seen = top;
+-			spin_unlock_bh(&call->lock);
+ 		}
+ 
+ 		rxrpc_set_keepalive(call);
+diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c
+index e2f990754f882..5a67955cc00f6 100644
+--- a/net/rxrpc/proc.c
++++ b/net/rxrpc/proc.c
+@@ -26,29 +26,23 @@ static const char *const rxrpc_conn_states[RXRPC_CONN__NR_STATES] = {
+  */
+ static void *rxrpc_call_seq_start(struct seq_file *seq, loff_t *_pos)
+ 	__acquires(rcu)
+-	__acquires(rxnet->call_lock)
+ {
+ 	struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
+ 
+ 	rcu_read_lock();
+-	read_lock(&rxnet->call_lock);
+-	return seq_list_start_head(&rxnet->calls, *_pos);
++	return seq_list_start_head_rcu(&rxnet->calls, *_pos);
+ }
+ 
+ static void *rxrpc_call_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+ {
+ 	struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
+ 
+-	return seq_list_next(v, &rxnet->calls, pos);
++	return seq_list_next_rcu(v, &rxnet->calls, pos);
+ }
+ 
+ static void rxrpc_call_seq_stop(struct seq_file *seq, void *v)
+-	__releases(rxnet->call_lock)
+ 	__releases(rcu)
+ {
+-	struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
+-
+-	read_unlock(&rxnet->call_lock);
+ 	rcu_read_unlock();
+ }
+ 
+diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c
+index eca6dda26c77e..250f23bc1c076 100644
+--- a/net/rxrpc/recvmsg.c
++++ b/net/rxrpc/recvmsg.c
+@@ -260,11 +260,9 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call)
+ 		rxrpc_end_rx_phase(call, serial);
+ 	} else {
+ 		/* Check to see if there's an ACK that needs sending. */
+-		if (after_eq(hard_ack, call->ackr_consumed + 2) ||
+-		    after_eq(top, call->ackr_seen + 2) ||
+-		    (hard_ack == top && after(hard_ack, call->ackr_consumed)))
+-			rxrpc_propose_ACK(call, RXRPC_ACK_DELAY, serial,
+-					  true, true,
++		if (atomic_inc_return(&call->ackr_nr_consumed) > 2)
++			rxrpc_propose_ACK(call, RXRPC_ACK_IDLE, serial,
++					  true, false,
+ 					  rxrpc_propose_ack_rotate_rx);
+ 		if (call->ackr_reason && call->ackr_reason != RXRPC_ACK_DELAY)
+ 			rxrpc_send_ack_packet(call, false, NULL);
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index af8ad6c30b9fb..1d38e279e2efa 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -444,6 +444,12 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
+ 
+ success:
+ 	ret = copied;
++	if (READ_ONCE(call->state) == RXRPC_CALL_COMPLETE) {
++		read_lock_bh(&call->state_lock);
++		if (call->error < 0)
++			ret = call->error;
++		read_unlock_bh(&call->state_lock);
++	}
+ out:
+ 	call->tx_pending = skb;
+ 	_leave(" = %d", ret);
+diff --git a/net/rxrpc/sysctl.c b/net/rxrpc/sysctl.c
+index 540351d6a5f47..555e0910786bc 100644
+--- a/net/rxrpc/sysctl.c
++++ b/net/rxrpc/sysctl.c
+@@ -12,7 +12,7 @@
+ 
+ static struct ctl_table_header *rxrpc_sysctl_reg_table;
+ static const unsigned int four = 4;
+-static const unsigned int thirtytwo = 32;
++static const unsigned int max_backlog = RXRPC_BACKLOG_MAX - 1;
+ static const unsigned int n_65535 = 65535;
+ static const unsigned int n_max_acks = RXRPC_RXTX_BUFF_SIZE - 1;
+ static const unsigned long one_jiffy = 1;
+@@ -89,7 +89,7 @@ static struct ctl_table rxrpc_sysctl_table[] = {
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_dointvec_minmax,
+ 		.extra1		= (void *)&four,
+-		.extra2		= (void *)&thirtytwo,
++		.extra2		= (void *)&max_backlog,
+ 	},
+ 	{
+ 		.procname	= "rx_window_size",
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index 90e12bafdd489..4f43afa8678f9 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -92,6 +92,7 @@ int sctp_rcv(struct sk_buff *skb)
+ 	struct sctp_chunk *chunk;
+ 	union sctp_addr src;
+ 	union sctp_addr dest;
++	int bound_dev_if;
+ 	int family;
+ 	struct sctp_af *af;
+ 	struct net *net = dev_net(skb->dev);
+@@ -169,7 +170,8 @@ int sctp_rcv(struct sk_buff *skb)
+ 	 * If a frame arrives on an interface and the receiving socket is
+ 	 * bound to another interface, via SO_BINDTODEVICE, treat it as OOTB
+ 	 */
+-	if (sk->sk_bound_dev_if && (sk->sk_bound_dev_if != af->skb_iif(skb))) {
++	bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);
++	if (bound_dev_if && (bound_dev_if != af->skb_iif(skb))) {
+ 		if (transport) {
+ 			sctp_transport_put(transport);
+ 			asoc = NULL;
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index fce16b9d6e1a4..45a24d24210f0 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1564,9 +1564,9 @@ static int smc_connect(struct socket *sock, struct sockaddr *addr,
+ 	if (rc && rc != -EINPROGRESS)
+ 		goto out;
+ 
+-	sock_hold(&smc->sk); /* sock put in passive closing */
+ 	if (smc->use_fallback)
+ 		goto out;
++	sock_hold(&smc->sk); /* sock put in passive closing */
+ 	if (flags & O_NONBLOCK) {
+ 		if (queue_work(smc_hs_wq, &smc->connect_work))
+ 			smc->connect_nonblock = 1;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 1a3551b6d18bb..fd8b48b889681 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -3719,6 +3719,7 @@ static int nl80211_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flag
+ 	wdev_lock(wdev);
+ 	switch (wdev->iftype) {
+ 	case NL80211_IFTYPE_AP:
++	case NL80211_IFTYPE_P2P_GO:
+ 		if (wdev->ssid_len &&
+ 		    nla_put(msg, NL80211_ATTR_SSID, wdev->ssid_len, wdev->ssid))
+ 			goto nla_put_failure_locked;
+@@ -16286,8 +16287,7 @@ static const struct genl_small_ops nl80211_small_ops[] = {
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ 		.doit = nl80211_color_change,
+ 		.flags = GENL_UNS_ADMIN_PERM,
+-		.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
+-				  NL80211_FLAG_NEED_RTNL,
++		.internal_flags = NL80211_FLAG_NEED_NETDEV_UP,
+ 	},
+ 	{
+ 		.cmd = NL80211_CMD_SET_FILS_AAD,
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index c76cd973f06e4..58e83ce642ad2 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -807,6 +807,8 @@ static int __init load_builtin_regdb_keys(void)
+ 	return 0;
+ }
+ 
++MODULE_FIRMWARE("regulatory.db.p7s");
++
+ static bool regdb_has_valid_signature(const u8 *data, unsigned int size)
+ {
+ 	const struct firmware *sig;
+@@ -1078,6 +1080,8 @@ static void regdb_fw_cb(const struct firmware *fw, void *context)
+ 	release_firmware(fw);
+ }
+ 
++MODULE_FIRMWARE("regulatory.db");
++
+ static int query_regdb_file(const char *alpha2)
+ {
+ 	ASSERT_RTNL();
+diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
+index 38638845db9d7..72bb85c18804f 100644
+--- a/samples/bpf/Makefile
++++ b/samples/bpf/Makefile
+@@ -368,16 +368,15 @@ VMLINUX_BTF ?= $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS))))
+ 
+ $(obj)/vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL)
+ ifeq ($(VMLINUX_H),)
++ifeq ($(VMLINUX_BTF),)
++	$(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)",\
++		build the kernel or set VMLINUX_BTF or VMLINUX_H variable)
++endif
+ 	$(Q)$(BPFTOOL) btf dump file $(VMLINUX_BTF) format c > $@
+ else
+ 	$(Q)cp "$(VMLINUX_H)" $@
+ endif
+ 
+-ifeq ($(VMLINUX_BTF),)
+-	$(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)",\
+-		build the kernel or set VMLINUX_BTF variable)
+-endif
+-
+ clean-files += vmlinux.h
+ 
+ # Get Clang's default includes on this system, as opposed to those seen by
+diff --git a/samples/landlock/sandboxer.c b/samples/landlock/sandboxer.c
+index 8859fc1935428..c089e9cdaf328 100644
+--- a/samples/landlock/sandboxer.c
++++ b/samples/landlock/sandboxer.c
+@@ -22,9 +22,9 @@
+ #include <unistd.h>
+ 
+ #ifndef landlock_create_ruleset
+-static inline int landlock_create_ruleset(
+-		const struct landlock_ruleset_attr *const attr,
+-		const size_t size, const __u32 flags)
++static inline int
++landlock_create_ruleset(const struct landlock_ruleset_attr *const attr,
++			const size_t size, const __u32 flags)
+ {
+ 	return syscall(__NR_landlock_create_ruleset, attr, size, flags);
+ }
+@@ -32,17 +32,18 @@ static inline int landlock_create_ruleset(
+ 
+ #ifndef landlock_add_rule
+ static inline int landlock_add_rule(const int ruleset_fd,
+-		const enum landlock_rule_type rule_type,
+-		const void *const rule_attr, const __u32 flags)
++				    const enum landlock_rule_type rule_type,
++				    const void *const rule_attr,
++				    const __u32 flags)
+ {
+-	return syscall(__NR_landlock_add_rule, ruleset_fd, rule_type,
+-			rule_attr, flags);
++	return syscall(__NR_landlock_add_rule, ruleset_fd, rule_type, rule_attr,
++		       flags);
+ }
+ #endif
+ 
+ #ifndef landlock_restrict_self
+ static inline int landlock_restrict_self(const int ruleset_fd,
+-		const __u32 flags)
++					 const __u32 flags)
+ {
+ 	return syscall(__NR_landlock_restrict_self, ruleset_fd, flags);
+ }
+@@ -70,14 +71,17 @@ static int parse_path(char *env_path, const char ***const path_list)
+ 	return num_paths;
+ }
+ 
++/* clang-format off */
++
+ #define ACCESS_FILE ( \
+ 	LANDLOCK_ACCESS_FS_EXECUTE | \
+ 	LANDLOCK_ACCESS_FS_WRITE_FILE | \
+ 	LANDLOCK_ACCESS_FS_READ_FILE)
+ 
+-static int populate_ruleset(
+-		const char *const env_var, const int ruleset_fd,
+-		const __u64 allowed_access)
++/* clang-format on */
++
++static int populate_ruleset(const char *const env_var, const int ruleset_fd,
++			    const __u64 allowed_access)
+ {
+ 	int num_paths, i, ret = 1;
+ 	char *env_path_name;
+@@ -107,12 +111,10 @@ static int populate_ruleset(
+ 	for (i = 0; i < num_paths; i++) {
+ 		struct stat statbuf;
+ 
+-		path_beneath.parent_fd = open(path_list[i], O_PATH |
+-				O_CLOEXEC);
++		path_beneath.parent_fd = open(path_list[i], O_PATH | O_CLOEXEC);
+ 		if (path_beneath.parent_fd < 0) {
+ 			fprintf(stderr, "Failed to open \"%s\": %s\n",
+-					path_list[i],
+-					strerror(errno));
++				path_list[i], strerror(errno));
+ 			goto out_free_name;
+ 		}
+ 		if (fstat(path_beneath.parent_fd, &statbuf)) {
+@@ -123,9 +125,10 @@ static int populate_ruleset(
+ 		if (!S_ISDIR(statbuf.st_mode))
+ 			path_beneath.allowed_access &= ACCESS_FILE;
+ 		if (landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+-					&path_beneath, 0)) {
+-			fprintf(stderr, "Failed to update the ruleset with \"%s\": %s\n",
+-					path_list[i], strerror(errno));
++				      &path_beneath, 0)) {
++			fprintf(stderr,
++				"Failed to update the ruleset with \"%s\": %s\n",
++				path_list[i], strerror(errno));
+ 			close(path_beneath.parent_fd);
+ 			goto out_free_name;
+ 		}
+@@ -139,6 +142,8 @@ out_free_name:
+ 	return ret;
+ }
+ 
++/* clang-format off */
++
+ #define ACCESS_FS_ROUGHLY_READ ( \
+ 	LANDLOCK_ACCESS_FS_EXECUTE | \
+ 	LANDLOCK_ACCESS_FS_READ_FILE | \
+@@ -156,6 +161,8 @@ out_free_name:
+ 	LANDLOCK_ACCESS_FS_MAKE_BLOCK | \
+ 	LANDLOCK_ACCESS_FS_MAKE_SYM)
+ 
++/* clang-format on */
++
+ int main(const int argc, char *const argv[], char *const *const envp)
+ {
+ 	const char *cmd_path;
+@@ -163,55 +170,64 @@ int main(const int argc, char *const argv[], char *const *const envp)
+ 	int ruleset_fd;
+ 	struct landlock_ruleset_attr ruleset_attr = {
+ 		.handled_access_fs = ACCESS_FS_ROUGHLY_READ |
+-			ACCESS_FS_ROUGHLY_WRITE,
++				     ACCESS_FS_ROUGHLY_WRITE,
+ 	};
+ 
+ 	if (argc < 2) {
+-		fprintf(stderr, "usage: %s=\"...\" %s=\"...\" %s <cmd> [args]...\n\n",
+-				ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]);
+-		fprintf(stderr, "Launch a command in a restricted environment.\n\n");
++		fprintf(stderr,
++			"usage: %s=\"...\" %s=\"...\" %s <cmd> [args]...\n\n",
++			ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]);
++		fprintf(stderr,
++			"Launch a command in a restricted environment.\n\n");
+ 		fprintf(stderr, "Environment variables containing paths, "
+ 				"each separated by a colon:\n");
+-		fprintf(stderr, "* %s: list of paths allowed to be used in a read-only way.\n",
+-				ENV_FS_RO_NAME);
+-		fprintf(stderr, "* %s: list of paths allowed to be used in a read-write way.\n",
+-				ENV_FS_RW_NAME);
+-		fprintf(stderr, "\nexample:\n"
+-				"%s=\"/bin:/lib:/usr:/proc:/etc:/dev/urandom\" "
+-				"%s=\"/dev/null:/dev/full:/dev/zero:/dev/pts:/tmp\" "
+-				"%s bash -i\n",
+-				ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]);
++		fprintf(stderr,
++			"* %s: list of paths allowed to be used in a read-only way.\n",
++			ENV_FS_RO_NAME);
++		fprintf(stderr,
++			"* %s: list of paths allowed to be used in a read-write way.\n",
++			ENV_FS_RW_NAME);
++		fprintf(stderr,
++			"\nexample:\n"
++			"%s=\"/bin:/lib:/usr:/proc:/etc:/dev/urandom\" "
++			"%s=\"/dev/null:/dev/full:/dev/zero:/dev/pts:/tmp\" "
++			"%s bash -i\n",
++			ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]);
+ 		return 1;
+ 	}
+ 
+-	ruleset_fd = landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
++	ruleset_fd =
++		landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ 	if (ruleset_fd < 0) {
+ 		const int err = errno;
+ 
+ 		perror("Failed to create a ruleset");
+ 		switch (err) {
+ 		case ENOSYS:
+-			fprintf(stderr, "Hint: Landlock is not supported by the current kernel. "
+-					"To support it, build the kernel with "
+-					"CONFIG_SECURITY_LANDLOCK=y and prepend "
+-					"\"landlock,\" to the content of CONFIG_LSM.\n");
++			fprintf(stderr,
++				"Hint: Landlock is not supported by the current kernel. "
++				"To support it, build the kernel with "
++				"CONFIG_SECURITY_LANDLOCK=y and prepend "
++				"\"landlock,\" to the content of CONFIG_LSM.\n");
+ 			break;
+ 		case EOPNOTSUPP:
+-			fprintf(stderr, "Hint: Landlock is currently disabled. "
+-					"It can be enabled in the kernel configuration by "
+-					"prepending \"landlock,\" to the content of CONFIG_LSM, "
+-					"or at boot time by setting the same content to the "
+-					"\"lsm\" kernel parameter.\n");
++			fprintf(stderr,
++				"Hint: Landlock is currently disabled. "
++				"It can be enabled in the kernel configuration by "
++				"prepending \"landlock,\" to the content of CONFIG_LSM, "
++				"or at boot time by setting the same content to the "
++				"\"lsm\" kernel parameter.\n");
+ 			break;
+ 		}
+ 		return 1;
+ 	}
+ 	if (populate_ruleset(ENV_FS_RO_NAME, ruleset_fd,
+-				ACCESS_FS_ROUGHLY_READ)) {
++			     ACCESS_FS_ROUGHLY_READ)) {
+ 		goto err_close_ruleset;
+ 	}
+ 	if (populate_ruleset(ENV_FS_RW_NAME, ruleset_fd,
+-				ACCESS_FS_ROUGHLY_READ | ACCESS_FS_ROUGHLY_WRITE)) {
++			     ACCESS_FS_ROUGHLY_READ |
++				     ACCESS_FS_ROUGHLY_WRITE)) {
+ 		goto err_close_ruleset;
+ 	}
+ 	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
+@@ -228,7 +244,7 @@ int main(const int argc, char *const argv[], char *const *const envp)
+ 	cmd_argv = argv + 1;
+ 	execvpe(cmd_path, cmd_argv, envp);
+ 	fprintf(stderr, "Failed to execute \"%s\": %s\n", cmd_path,
+-			strerror(errno));
++		strerror(errno));
+ 	fprintf(stderr, "Hint: access to the binary, the interpreter or "
+ 			"shared libraries may be denied.\n");
+ 	return 1;
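
The sandboxer hunks above are pure reformatting, but they trace the whole Landlock userspace flow: build a ruleset, add path rules, set no_new_privs, then restrict the thread. A minimal sketch of that flow with raw syscalls follows; it assumes the 5.18 uapi <linux/landlock.h> and a libc that defines the SYS_landlock_* numbers (older libcs need the __NR_* constants instead), and it denies all file writes simply by handling LANDLOCK_ACCESS_FS_WRITE_FILE without adding any rule.

#include <linux/landlock.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Raw wrappers: glibc does not ship landlock_*() functions. */
static int ll_create_ruleset(const struct landlock_ruleset_attr *attr,
			     size_t size, __u32 flags)
{
	return syscall(SYS_landlock_create_ruleset, attr, size, flags);
}

static int ll_restrict_self(const int ruleset_fd, const __u32 flags)
{
	return syscall(SYS_landlock_restrict_self, ruleset_fd, flags);
}

/* Deny LANDLOCK_ACCESS_FS_WRITE_FILE everywhere: handled, but no rule. */
int deny_file_writes(void)
{
	const struct landlock_ruleset_attr attr = {
		.handled_access_fs = LANDLOCK_ACCESS_FS_WRITE_FILE,
	};
	const int fd = ll_create_ruleset(&attr, sizeof(attr), 0);

	if (fd < 0)
		return -1;
	/* Mandatory before landlock_restrict_self() without CAP_SYS_ADMIN. */
	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) ||
	    ll_restrict_self(fd, 0)) {
		close(fd);
		return -1;
	}
	close(fd);
	return 0;
}
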
+diff --git a/scripts/faddr2line b/scripts/faddr2line
+index 6c6439f69a725..0e6268d598835 100755
+--- a/scripts/faddr2line
++++ b/scripts/faddr2line
+@@ -44,17 +44,6 @@
+ set -o errexit
+ set -o nounset
+ 
+-READELF="${CROSS_COMPILE:-}readelf"
+-ADDR2LINE="${CROSS_COMPILE:-}addr2line"
+-SIZE="${CROSS_COMPILE:-}size"
+-NM="${CROSS_COMPILE:-}nm"
+-
+-command -v awk >/dev/null 2>&1 || die "awk isn't installed"
+-command -v ${READELF} >/dev/null 2>&1 || die "readelf isn't installed"
+-command -v ${ADDR2LINE} >/dev/null 2>&1 || die "addr2line isn't installed"
+-command -v ${SIZE} >/dev/null 2>&1 || die "size isn't installed"
+-command -v ${NM} >/dev/null 2>&1 || die "nm isn't installed"
+-
+ usage() {
+ 	echo "usage: faddr2line [--list] <object file> <func+offset> <func+offset>..." >&2
+ 	exit 1
+@@ -69,6 +58,14 @@ die() {
+ 	exit 1
+ }
+ 
++READELF="${CROSS_COMPILE:-}readelf"
++ADDR2LINE="${CROSS_COMPILE:-}addr2line"
++AWK="awk"
++
++command -v ${AWK} >/dev/null 2>&1 || die "${AWK} isn't installed"
++command -v ${READELF} >/dev/null 2>&1 || die "${READELF} isn't installed"
++command -v ${ADDR2LINE} >/dev/null 2>&1 || die "${ADDR2LINE} isn't installed"
++
+ # Try to figure out the source directory prefix so we can remove it from the
+ # addr2line output.  HACK ALERT: This assumes that start_kernel() is in
+ # init/main.c!  This only works for vmlinux.  Otherwise it falls back to
+@@ -76,7 +73,7 @@ die() {
+ find_dir_prefix() {
+ 	local objfile=$1
+ 
+-	local start_kernel_addr=$(${READELF} -sW $objfile | awk '$8 == "start_kernel" {printf "0x%s", $2}')
++	local start_kernel_addr=$(${READELF} --symbols --wide $objfile | ${AWK} '$8 == "start_kernel" {printf "0x%s", $2}')
+ 	[[ -z $start_kernel_addr ]] && return
+ 
+ 	local file_line=$(${ADDR2LINE} -e $objfile $start_kernel_addr)
+@@ -97,86 +94,133 @@ __faddr2line() {
+ 	local dir_prefix=$3
+ 	local print_warnings=$4
+ 
+-	local func=${func_addr%+*}
++	local sym_name=${func_addr%+*}
+ 	local offset=${func_addr#*+}
+ 	offset=${offset%/*}
+-	local size=
+-	[[ $func_addr =~ "/" ]] && size=${func_addr#*/}
++	local user_size=
++	[[ $func_addr =~ "/" ]] && user_size=${func_addr#*/}
+ 
+-	if [[ -z $func ]] || [[ -z $offset ]] || [[ $func = $func_addr ]]; then
++	if [[ -z $sym_name ]] || [[ -z $offset ]] || [[ $sym_name = $func_addr ]]; then
+ 		warn "bad func+offset $func_addr"
+ 		DONE=1
+ 		return
+ 	fi
+ 
+ 	# Go through each of the object's symbols which match the func name.
+-	# In rare cases there might be duplicates.
+-	file_end=$(${SIZE} -Ax $objfile | awk '$1 == ".text" {print $2}')
+-	while read symbol; do
+-		local fields=($symbol)
+-		local sym_base=0x${fields[0]}
+-		local sym_type=${fields[1]}
+-		local sym_end=${fields[3]}
+-
+-		# calculate the size
+-		local sym_size=$(($sym_end - $sym_base))
++	# In rare cases there might be duplicates, in which case we print all
++	# matches.
++	while read line; do
++		local fields=($line)
++		local sym_addr=0x${fields[1]}
++		local sym_elf_size=${fields[2]}
++		local sym_sec=${fields[6]}
++
++		# Get the section size:
++		local sec_size=$(${READELF} --section-headers --wide $objfile |
++			sed 's/\[ /\[/' |
++			${AWK} -v sec=$sym_sec '$1 == "[" sec "]" { print "0x" $6; exit }')
++
++		if [[ -z $sec_size ]]; then
++			warn "bad section size: section: $sym_sec"
++			DONE=1
++			return
++		fi
++
++		# Calculate the symbol size.
++		#
++		# Unfortunately we can't use the ELF size, because kallsyms
++		# also includes the padding bytes in its size calculation.  For
++		# kallsyms, the size calculation is the distance between the
++		# symbol and the next symbol in a sorted list.
++		local sym_size
++		local cur_sym_addr
++		local found=0
++		while read line; do
++			local fields=($line)
++			cur_sym_addr=0x${fields[1]}
++			local cur_sym_elf_size=${fields[2]}
++			local cur_sym_name=${fields[7]:-}
++
++			if [[ $cur_sym_addr = $sym_addr ]] &&
++			   [[ $cur_sym_elf_size = $sym_elf_size ]] &&
++			   [[ $cur_sym_name = $sym_name ]]; then
++				found=1
++				continue
++			fi
++
++			if [[ $found = 1 ]]; then
++				sym_size=$(($cur_sym_addr - $sym_addr))
++				[[ $sym_size -lt $sym_elf_size ]] && continue;
++				found=2
++				break
++			fi
++		done < <(${READELF} --symbols --wide $objfile | ${AWK} -v sec=$sym_sec '$7 == sec' | sort --key=2)
++
++		if [[ $found = 0 ]]; then
++			warn "can't find symbol: sym_name: $sym_name sym_sec: $sym_sec sym_addr: $sym_addr sym_elf_size: $sym_elf_size"
++			DONE=1
++			return
++		fi
++
++		# If nothing was found after the symbol, assume it's the last
++		# symbol in the section.
++		[[ $found = 1 ]] && sym_size=$(($sec_size - $sym_addr))
++
+ 		if [[ -z $sym_size ]] || [[ $sym_size -le 0 ]]; then
+-			warn "bad symbol size: base: $sym_base end: $sym_end"
++			warn "bad symbol size: sym_addr: $sym_addr cur_sym_addr: $cur_sym_addr"
+ 			DONE=1
+ 			return
+ 		fi
++
+ 		sym_size=0x$(printf %x $sym_size)
+ 
+-		# calculate the address
+-		local addr=$(($sym_base + $offset))
++		# Calculate the section address from user-supplied offset:
++		local addr=$(($sym_addr + $offset))
+ 		if [[ -z $addr ]] || [[ $addr = 0 ]]; then
+-			warn "bad address: $sym_base + $offset"
++			warn "bad address: $sym_addr + $offset"
+ 			DONE=1
+ 			return
+ 		fi
+ 		addr=0x$(printf %x $addr)
+ 
+-		# weed out non-function symbols
+-		if [[ $sym_type != t ]] && [[ $sym_type != T ]]; then
+-			[[ $print_warnings = 1 ]] &&
+-				echo "skipping $func address at $addr due to non-function symbol of type '$sym_type'"
+-			continue
+-		fi
+-
+-		# if the user provided a size, make sure it matches the symbol's size
+-		if [[ -n $size ]] && [[ $size -ne $sym_size ]]; then
++		# If the user provided a size, make sure it matches the symbol's size:
++		if [[ -n $user_size ]] && [[ $user_size -ne $sym_size ]]; then
+ 			[[ $print_warnings = 1 ]] &&
+-				echo "skipping $func address at $addr due to size mismatch ($size != $sym_size)"
++				echo "skipping $sym_name address at $addr due to size mismatch ($user_size != $sym_size)"
+ 			continue;
+ 		fi
+ 
+-		# make sure the provided offset is within the symbol's range
++		# Make sure the provided offset is within the symbol's range:
+ 		if [[ $offset -gt $sym_size ]]; then
+ 			[[ $print_warnings = 1 ]] &&
+-				echo "skipping $func address at $addr due to size mismatch ($offset > $sym_size)"
++				echo "skipping $sym_name address at $addr due to size mismatch ($offset > $sym_size)"
+ 			continue
+ 		fi
+ 
+-		# separate multiple entries with a blank line
++		# In case of duplicates or multiple addresses specified on the
++		# cmdline, separate multiple entries with a blank line:
+ 		[[ $FIRST = 0 ]] && echo
+ 		FIRST=0
+ 
+-		# pass real address to addr2line
+-		echo "$func+$offset/$sym_size:"
+-		local file_lines=$(${ADDR2LINE} -fpie $objfile $addr | sed "s; $dir_prefix\(\./\)*; ;")
+-		[[ -z $file_lines ]] && return
++		echo "$sym_name+$offset/$sym_size:"
+ 
++		# Pass section address to addr2line and strip absolute paths
++		# from the output:
++		local output=$(${ADDR2LINE} -fpie $objfile $addr | sed "s; $dir_prefix\(\./\)*; ;")
++		[[ -z $output ]] && continue
++
++		# Default output (non --list):
+ 		if [[ $LIST = 0 ]]; then
+-			echo "$file_lines" | while read -r line
++			echo "$output" | while read -r line
+ 			do
+ 				echo $line
+ 			done
+ 			DONE=1;
+-			return
++			continue
+ 		fi
+ 
+-		# show each line with context
+-		echo "$file_lines" | while read -r line
++		# For --list, show each line with its corresponding source code:
++		echo "$output" | while read -r line
+ 		do
+ 			echo
+ 			echo $line
+@@ -184,12 +228,12 @@ __faddr2line() {
+ 			n1=$[$n-5]
+ 			n2=$[$n+5]
+ 			f=$(echo $line | sed 's/.*at \(.\+\):.*/\1/g')
+-			awk 'NR>=strtonum("'$n1'") && NR<=strtonum("'$n2'") { if (NR=='$n') printf(">%d<", NR); else printf(" %d ", NR); printf("\t%s\n", $0)}' $f
++			${AWK} 'NR>=strtonum("'$n1'") && NR<=strtonum("'$n2'") { if (NR=='$n') printf(">%d<", NR); else printf(" %d ", NR); printf("\t%s\n", $0)}' $f
+ 		done
+ 
+ 		DONE=1
+ 
+-	done < <(${NM} -n $objfile | awk -v fn=$func -v end=$file_end '$3 == fn { found=1; line=$0; start=$1; next } found == 1 { found=0; print line, "0x"$1 } END {if (found == 1) print line, end; }')
++	done < <(${READELF} --symbols --wide $objfile | ${AWK} -v fn=$sym_name '$4 == "FUNC" && $8 == fn')
+ }
+ 
+ [[ $# -lt 2 ]] && usage
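
The core of the faddr2line rewrite above is the kallsyms-style size calculation: instead of trusting the ELF st_size, the size of a symbol is the distance to the next symbol in an address-sorted list, so inter-function padding is attributed to the preceding symbol. A detached C sketch of that rule, with hypothetical types standing in for the readelf/awk plumbing:

#include <stddef.h>
#include <stdint.h>

struct sym {
	uint64_t addr;
	uint64_t elf_size;
};

/*
 * kallsyms-style symbol size: distance from @target to the next symbol
 * in a list sorted by address.  Entries at the same address, or closer
 * than the ELF size (overlapping aliases), are skipped; if nothing
 * follows, the symbol is assumed to run to the end of its section.
 */
static uint64_t kallsyms_size(const struct sym *syms, size_t nsyms,
			      const struct sym *target, uint64_t sec_end)
{
	size_t i;

	for (i = 0; i < nsyms; i++) {
		if (syms[i].addr <= target->addr)
			continue;
		if (syms[i].addr - target->addr < target->elf_size)
			continue;
		return syms[i].addr - target->addr;
	}
	return sec_end - target->addr;
}
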
+diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig
+index f3a9cc201c8c2..7249f16257c72 100644
+--- a/security/integrity/ima/Kconfig
++++ b/security/integrity/ima/Kconfig
+@@ -69,10 +69,9 @@ choice
+ 	  hash, defined as 20 bytes, and a null terminated pathname,
+ 	  limited to 255 characters.  The 'ima-ng' measurement list
+ 	  template permits both larger hash digests and longer
+-	  pathnames.
++	  pathnames. The configured default template can be replaced
++	  by specifying "ima_template=" on the boot command line.
+ 
+-	config IMA_TEMPLATE
+-		bool "ima"
+ 	config IMA_NG_TEMPLATE
+ 		bool "ima-ng (default)"
+ 	config IMA_SIG_TEMPLATE
+@@ -82,7 +81,6 @@ endchoice
+ config IMA_DEFAULT_TEMPLATE
+ 	string
+ 	depends on IMA
+-	default "ima" if IMA_TEMPLATE
+ 	default "ima-ng" if IMA_NG_TEMPLATE
+ 	default "ima-sig" if IMA_SIG_TEMPLATE
+ 
+@@ -102,19 +100,19 @@ choice
+ 
+ 	config IMA_DEFAULT_HASH_SHA256
+ 		bool "SHA256"
+-		depends on CRYPTO_SHA256=y && !IMA_TEMPLATE
++		depends on CRYPTO_SHA256=y
+ 
+ 	config IMA_DEFAULT_HASH_SHA512
+ 		bool "SHA512"
+-		depends on CRYPTO_SHA512=y && !IMA_TEMPLATE
++		depends on CRYPTO_SHA512=y
+ 
+ 	config IMA_DEFAULT_HASH_WP512
+ 		bool "WP512"
+-		depends on CRYPTO_WP512=y && !IMA_TEMPLATE
++		depends on CRYPTO_WP512=y
+ 
+ 	config IMA_DEFAULT_HASH_SM3
+ 		bool "SM3"
+-		depends on CRYPTO_SM3=y && !IMA_TEMPLATE
++		depends on CRYPTO_SM3=y
+ endchoice
+ 
+ config IMA_DEFAULT_HASH
+diff --git a/security/integrity/platform_certs/keyring_handler.h b/security/integrity/platform_certs/keyring_handler.h
+index 284558f30411e..212d894a8c0c0 100644
+--- a/security/integrity/platform_certs/keyring_handler.h
++++ b/security/integrity/platform_certs/keyring_handler.h
+@@ -35,3 +35,11 @@ efi_element_handler_t get_handler_for_mok(const efi_guid_t *sig_type);
+ efi_element_handler_t get_handler_for_dbx(const efi_guid_t *sig_type);
+ 
+ #endif
++
++#ifndef UEFI_QUIRK_SKIP_CERT
++#define UEFI_QUIRK_SKIP_CERT(vendor, product) \
++		 .matches = { \
++			DMI_MATCH(DMI_BOARD_VENDOR, vendor), \
++			DMI_MATCH(DMI_PRODUCT_NAME, product), \
++		},
++#endif
+diff --git a/security/integrity/platform_certs/load_uefi.c b/security/integrity/platform_certs/load_uefi.c
+index 5f45c3c07dbd4..093894a640dca 100644
+--- a/security/integrity/platform_certs/load_uefi.c
++++ b/security/integrity/platform_certs/load_uefi.c
+@@ -3,6 +3,7 @@
+ #include <linux/kernel.h>
+ #include <linux/sched.h>
+ #include <linux/cred.h>
++#include <linux/dmi.h>
+ #include <linux/err.h>
+ #include <linux/efi.h>
+ #include <linux/slab.h>
+@@ -12,6 +13,31 @@
+ #include "../integrity.h"
+ #include "keyring_handler.h"
+ 
++/*
++ * On T2 Macs reading the db and dbx efi variables to load UEFI Secure Boot
++ * certificates causes a page fault in Apple's firmware and a crash that
++ * disables EFI runtime services. The following quirk skips reading
++ * these variables.
++ */
++static const struct dmi_system_id uefi_skip_cert[] = {
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,1") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,2") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,3") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,4") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,1") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,2") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,3") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,4") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir8,1") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir8,2") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir9,1") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacMini8,1") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacPro7,1") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,1") },
++	{ UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,2") },
++	{ }
++};
++
+ /*
+  * Look to see if a UEFI variable called MokIgnoreDB exists and return true if
+  * it does.
+@@ -138,6 +164,13 @@ static int __init load_uefi_certs(void)
+ 	unsigned long dbsize = 0, dbxsize = 0, mokxsize = 0;
+ 	efi_status_t status;
+ 	int rc = 0;
++	const struct dmi_system_id *dmi_id;
++
++	dmi_id = dmi_first_match(uefi_skip_cert);
++	if (dmi_id) {
++		pr_err("Reading UEFI Secure Boot Certs is not supported on T2 Macs.\n");
++		return false;
++	}
+ 
+ 	if (!efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE))
+ 		return false;
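
The quirk above is the stock dmi_system_id pattern: a vendor/product match table checked once with dmi_first_match(). For reference, the same shape on its own, with a made-up model (the vendor and product strings here are purely illustrative):

#include <linux/dmi.h>
#include <linux/types.h>

static const struct dmi_system_id example_quirks[] = {
	{
		.matches = {
			DMI_MATCH(DMI_BOARD_VENDOR, "Example Inc."),
			DMI_MATCH(DMI_PRODUCT_NAME, "ExampleBook1,1"),
		},
	},
	{ } /* terminating empty entry */
};

static bool example_needs_quirk(void)
{
	/* Returns the first matching entry, or NULL. */
	return dmi_first_match(example_quirks) != NULL;
}
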
+diff --git a/security/landlock/cred.c b/security/landlock/cred.c
+index 6725af24c6841..ec6c37f04a191 100644
+--- a/security/landlock/cred.c
++++ b/security/landlock/cred.c
+@@ -15,7 +15,7 @@
+ #include "setup.h"
+ 
+ static int hook_cred_prepare(struct cred *const new,
+-		const struct cred *const old, const gfp_t gfp)
++			     const struct cred *const old, const gfp_t gfp)
+ {
+ 	struct landlock_ruleset *const old_dom = landlock_cred(old)->domain;
+ 
+@@ -42,5 +42,5 @@ static struct security_hook_list landlock_hooks[] __lsm_ro_after_init = {
+ __init void landlock_add_cred_hooks(void)
+ {
+ 	security_add_hooks(landlock_hooks, ARRAY_SIZE(landlock_hooks),
+-			LANDLOCK_NAME);
++			   LANDLOCK_NAME);
+ }
+diff --git a/security/landlock/cred.h b/security/landlock/cred.h
+index 5f99d3decade6..af89ab00e6d10 100644
+--- a/security/landlock/cred.h
++++ b/security/landlock/cred.h
+@@ -20,8 +20,8 @@ struct landlock_cred_security {
+ 	struct landlock_ruleset *domain;
+ };
+ 
+-static inline struct landlock_cred_security *landlock_cred(
+-		const struct cred *cred)
++static inline struct landlock_cred_security *
++landlock_cred(const struct cred *cred)
+ {
+ 	return cred->security + landlock_blob_sizes.lbs_cred;
+ }
+@@ -34,8 +34,8 @@ static inline const struct landlock_ruleset *landlock_get_current_domain(void)
+ /*
+  * The call needs to come from an RCU read-side critical section.
+  */
+-static inline const struct landlock_ruleset *landlock_get_task_domain(
+-		const struct task_struct *const task)
++static inline const struct landlock_ruleset *
++landlock_get_task_domain(const struct task_struct *const task)
+ {
+ 	return landlock_cred(__task_cred(task))->domain;
+ }
+diff --git a/security/landlock/fs.c b/security/landlock/fs.c
+index 97b8e421f6171..c5749301b37d6 100644
+--- a/security/landlock/fs.c
++++ b/security/landlock/fs.c
+@@ -141,23 +141,26 @@ retry:
+ }
+ 
+ /* All access rights that can be tied to files. */
++/* clang-format off */
+ #define ACCESS_FILE ( \
+ 	LANDLOCK_ACCESS_FS_EXECUTE | \
+ 	LANDLOCK_ACCESS_FS_WRITE_FILE | \
+ 	LANDLOCK_ACCESS_FS_READ_FILE)
++/* clang-format on */
+ 
+ /*
+  * @path: Should have been checked by get_path_from_fd().
+  */
+ int landlock_append_fs_rule(struct landlock_ruleset *const ruleset,
+-		const struct path *const path, u32 access_rights)
++			    const struct path *const path,
++			    access_mask_t access_rights)
+ {
+ 	int err;
+ 	struct landlock_object *object;
+ 
+ 	/* Files only get access rights that make sense. */
+-	if (!d_is_dir(path->dentry) && (access_rights | ACCESS_FILE) !=
+-			ACCESS_FILE)
++	if (!d_is_dir(path->dentry) &&
++	    (access_rights | ACCESS_FILE) != ACCESS_FILE)
+ 		return -EINVAL;
+ 	if (WARN_ON_ONCE(ruleset->num_layers != 1))
+ 		return -EINVAL;
+@@ -180,59 +183,93 @@ int landlock_append_fs_rule(struct landlock_ruleset *const ruleset,
+ 
+ /* Access-control management */
+ 
+-static inline u64 unmask_layers(
+-		const struct landlock_ruleset *const domain,
+-		const struct path *const path, const u32 access_request,
+-		u64 layer_mask)
++/*
++ * The lifetime of the returned rule is tied to @domain.
++ *
++ * Returns NULL if no rule is found or if @dentry is negative.
++ */
++static inline const struct landlock_rule *
++find_rule(const struct landlock_ruleset *const domain,
++	  const struct dentry *const dentry)
+ {
+ 	const struct landlock_rule *rule;
+ 	const struct inode *inode;
+-	size_t i;
+ 
+-	if (d_is_negative(path->dentry))
+-		/* Ignore nonexistent leafs. */
+-		return layer_mask;
+-	inode = d_backing_inode(path->dentry);
++	/* Ignores nonexistent leaves. */
++	if (d_is_negative(dentry))
++		return NULL;
++
++	inode = d_backing_inode(dentry);
+ 	rcu_read_lock();
+-	rule = landlock_find_rule(domain,
+-			rcu_dereference(landlock_inode(inode)->object));
++	rule = landlock_find_rule(
++		domain, rcu_dereference(landlock_inode(inode)->object));
+ 	rcu_read_unlock();
++	return rule;
++}
++
++/*
++ * @layer_masks is read and may be updated according to the access request and
++ * the matching rule.
++ *
++ * Returns true if the request is allowed (i.e. relevant layer masks for the
++ * request are empty).
++ */
++static inline bool
++unmask_layers(const struct landlock_rule *const rule,
++	      const access_mask_t access_request,
++	      layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS])
++{
++	size_t layer_level;
++
++	if (!access_request || !layer_masks)
++		return true;
+ 	if (!rule)
+-		return layer_mask;
++		return false;
+ 
+ 	/*
+ 	 * An access is granted if, for each policy layer, at least one rule
+-	 * encountered on the pathwalk grants the requested accesses,
+-	 * regardless of their position in the layer stack.  We must then check
++	 * encountered on the pathwalk grants the requested access,
++	 * regardless of its position in the layer stack.  We must then check
+ 	 * the remaining layers for each inode, from the first added layer to
+-	 * the last one.
++	 * the last one.  When there are multiple requested accesses, for each
++	 * policy layer, the full set of requested accesses may not be granted
++	 * by only one rule, but by the union (binary OR) of multiple rules.
++	 * E.g. /a/b <execute> + /a <read> => /a/b <execute + read>
+ 	 */
+-	for (i = 0; i < rule->num_layers; i++) {
+-		const struct landlock_layer *const layer = &rule->layers[i];
+-		const u64 layer_level = BIT_ULL(layer->level - 1);
+-
+-		/* Checks that the layer grants access to the full request. */
+-		if ((layer->access & access_request) == access_request) {
+-			layer_mask &= ~layer_level;
++	for (layer_level = 0; layer_level < rule->num_layers; layer_level++) {
++		const struct landlock_layer *const layer =
++			&rule->layers[layer_level];
++		const layer_mask_t layer_bit = BIT_ULL(layer->level - 1);
++		const unsigned long access_req = access_request;
++		unsigned long access_bit;
++		bool is_empty;
+ 
+-			if (layer_mask == 0)
+-				return layer_mask;
++		/*
++		 * Records in @layer_masks which layer grants access to each
++		 * requested access.
++		 */
++		is_empty = true;
++		for_each_set_bit(access_bit, &access_req,
++				 ARRAY_SIZE(*layer_masks)) {
++			if (layer->access & BIT_ULL(access_bit))
++				(*layer_masks)[access_bit] &= ~layer_bit;
++			is_empty = is_empty && !(*layer_masks)[access_bit];
+ 		}
++		if (is_empty)
++			return true;
+ 	}
+-	return layer_mask;
++	return false;
+ }
+ 
+ static int check_access_path(const struct landlock_ruleset *const domain,
+-		const struct path *const path, u32 access_request)
++			     const struct path *const path,
++			     const access_mask_t access_request)
+ {
+-	bool allowed = false;
++	layer_mask_t layer_masks[LANDLOCK_NUM_ACCESS_FS] = {};
++	bool allowed = false, has_access = false;
+ 	struct path walker_path;
+-	u64 layer_mask;
+ 	size_t i;
+ 
+-	/* Make sure all layers can be checked. */
+-	BUILD_BUG_ON(BITS_PER_TYPE(layer_mask) < LANDLOCK_MAX_NUM_LAYERS);
+-
+ 	if (!access_request)
+ 		return 0;
+ 	if (WARN_ON_ONCE(!domain || !path))
+@@ -243,20 +280,27 @@ static int check_access_path(const struct landlock_ruleset *const domain,
+ 	 * /proc/<pid>/fd/<file-descriptor> .
+ 	 */
+ 	if ((path->dentry->d_sb->s_flags & SB_NOUSER) ||
+-			(d_is_positive(path->dentry) &&
+-			 unlikely(IS_PRIVATE(d_backing_inode(path->dentry)))))
++	    (d_is_positive(path->dentry) &&
++	     unlikely(IS_PRIVATE(d_backing_inode(path->dentry)))))
+ 		return 0;
+ 	if (WARN_ON_ONCE(domain->num_layers < 1))
+ 		return -EACCES;
+ 
+ 	/* Saves all layers handling a subset of requested accesses. */
+-	layer_mask = 0;
+ 	for (i = 0; i < domain->num_layers; i++) {
+-		if (domain->fs_access_masks[i] & access_request)
+-			layer_mask |= BIT_ULL(i);
++		const unsigned long access_req = access_request;
++		unsigned long access_bit;
++
++		for_each_set_bit(access_bit, &access_req,
++				 ARRAY_SIZE(layer_masks)) {
++			if (domain->fs_access_masks[i] & BIT_ULL(access_bit)) {
++				layer_masks[access_bit] |= BIT_ULL(i);
++				has_access = true;
++			}
++		}
+ 	}
+ 	/* An access request not handled by the domain is allowed. */
+-	if (layer_mask == 0)
++	if (!has_access)
+ 		return 0;
+ 
+ 	walker_path = *path;
+@@ -268,13 +312,11 @@ static int check_access_path(const struct landlock_ruleset *const domain,
+ 	while (true) {
+ 		struct dentry *parent_dentry;
+ 
+-		layer_mask = unmask_layers(domain, &walker_path,
+-				access_request, layer_mask);
+-		if (layer_mask == 0) {
++		allowed = unmask_layers(find_rule(domain, walker_path.dentry),
++					access_request, &layer_masks);
++		if (allowed)
+ 			/* Stops when a rule from each layer grants access. */
+-			allowed = true;
+ 			break;
+-		}
+ 
+ jump_up:
+ 		if (walker_path.dentry == walker_path.mnt->mnt_root) {
+@@ -308,7 +350,7 @@ jump_up:
+ }
+ 
+ static inline int current_check_access_path(const struct path *const path,
+-		const u32 access_request)
++					    const access_mask_t access_request)
+ {
+ 	const struct landlock_ruleset *const dom =
+ 		landlock_get_current_domain();
+@@ -436,8 +478,8 @@ static void hook_sb_delete(struct super_block *const sb)
+ 	if (prev_inode)
+ 		iput(prev_inode);
+ 	/* Waits for pending iput() in release_inode(). */
+-	wait_var_event(&landlock_superblock(sb)->inode_refs, !atomic_long_read(
+-				&landlock_superblock(sb)->inode_refs));
++	wait_var_event(&landlock_superblock(sb)->inode_refs,
++		       !atomic_long_read(&landlock_superblock(sb)->inode_refs));
+ }
+ 
+ /*
+@@ -459,8 +501,8 @@ static void hook_sb_delete(struct super_block *const sb)
+  * a dedicated user space option would be required (e.g. as a ruleset flag).
+  */
+ static int hook_sb_mount(const char *const dev_name,
+-		const struct path *const path, const char *const type,
+-		const unsigned long flags, void *const data)
++			 const struct path *const path, const char *const type,
++			 const unsigned long flags, void *const data)
+ {
+ 	if (!landlock_get_current_domain())
+ 		return 0;
+@@ -468,7 +510,7 @@ static int hook_sb_mount(const char *const dev_name,
+ }
+ 
+ static int hook_move_mount(const struct path *const from_path,
+-		const struct path *const to_path)
++			   const struct path *const to_path)
+ {
+ 	if (!landlock_get_current_domain())
+ 		return 0;
+@@ -502,7 +544,7 @@ static int hook_sb_remount(struct super_block *const sb, void *const mnt_opts)
+  * view of the filesystem.
+  */
+ static int hook_sb_pivotroot(const struct path *const old_path,
+-		const struct path *const new_path)
++			     const struct path *const new_path)
+ {
+ 	if (!landlock_get_current_domain())
+ 		return 0;
+@@ -511,7 +553,7 @@ static int hook_sb_pivotroot(const struct path *const old_path,
+ 
+ /* Path hooks */
+ 
+-static inline u32 get_mode_access(const umode_t mode)
++static inline access_mask_t get_mode_access(const umode_t mode)
+ {
+ 	switch (mode & S_IFMT) {
+ 	case S_IFLNK:
+@@ -545,8 +587,8 @@ static inline u32 get_mode_access(const umode_t mode)
+  * deal with that.
+  */
+ static int hook_path_link(struct dentry *const old_dentry,
+-		const struct path *const new_dir,
+-		struct dentry *const new_dentry)
++			  const struct path *const new_dir,
++			  struct dentry *const new_dentry)
+ {
+ 	const struct landlock_ruleset *const dom =
+ 		landlock_get_current_domain();
+@@ -559,22 +601,23 @@ static int hook_path_link(struct dentry *const old_dentry,
+ 		return -EXDEV;
+ 	if (unlikely(d_is_negative(old_dentry)))
+ 		return -ENOENT;
+-	return check_access_path(dom, new_dir,
+-			get_mode_access(d_backing_inode(old_dentry)->i_mode));
++	return check_access_path(
++		dom, new_dir,
++		get_mode_access(d_backing_inode(old_dentry)->i_mode));
+ }
+ 
+-static inline u32 maybe_remove(const struct dentry *const dentry)
++static inline access_mask_t maybe_remove(const struct dentry *const dentry)
+ {
+ 	if (d_is_negative(dentry))
+ 		return 0;
+ 	return d_is_dir(dentry) ? LANDLOCK_ACCESS_FS_REMOVE_DIR :
+-		LANDLOCK_ACCESS_FS_REMOVE_FILE;
++				  LANDLOCK_ACCESS_FS_REMOVE_FILE;
+ }
+ 
+ static int hook_path_rename(const struct path *const old_dir,
+-		struct dentry *const old_dentry,
+-		const struct path *const new_dir,
+-		struct dentry *const new_dentry)
++			    struct dentry *const old_dentry,
++			    const struct path *const new_dir,
++			    struct dentry *const new_dentry)
+ {
+ 	const struct landlock_ruleset *const dom =
+ 		landlock_get_current_domain();
+@@ -588,20 +631,21 @@ static int hook_path_rename(const struct path *const old_dir,
+ 	if (unlikely(d_is_negative(old_dentry)))
+ 		return -ENOENT;
+ 	/* RENAME_EXCHANGE is handled because directories are the same. */
+-	return check_access_path(dom, old_dir, maybe_remove(old_dentry) |
+-			maybe_remove(new_dentry) |
++	return check_access_path(
++		dom, old_dir,
++		maybe_remove(old_dentry) | maybe_remove(new_dentry) |
+ 			get_mode_access(d_backing_inode(old_dentry)->i_mode));
+ }
+ 
+ static int hook_path_mkdir(const struct path *const dir,
+-		struct dentry *const dentry, const umode_t mode)
++			   struct dentry *const dentry, const umode_t mode)
+ {
+ 	return current_check_access_path(dir, LANDLOCK_ACCESS_FS_MAKE_DIR);
+ }
+ 
+ static int hook_path_mknod(const struct path *const dir,
+-		struct dentry *const dentry, const umode_t mode,
+-		const unsigned int dev)
++			   struct dentry *const dentry, const umode_t mode,
++			   const unsigned int dev)
+ {
+ 	const struct landlock_ruleset *const dom =
+ 		landlock_get_current_domain();
+@@ -612,28 +656,29 @@ static int hook_path_mknod(const struct path *const dir,
+ }
+ 
+ static int hook_path_symlink(const struct path *const dir,
+-		struct dentry *const dentry, const char *const old_name)
++			     struct dentry *const dentry,
++			     const char *const old_name)
+ {
+ 	return current_check_access_path(dir, LANDLOCK_ACCESS_FS_MAKE_SYM);
+ }
+ 
+ static int hook_path_unlink(const struct path *const dir,
+-		struct dentry *const dentry)
++			    struct dentry *const dentry)
+ {
+ 	return current_check_access_path(dir, LANDLOCK_ACCESS_FS_REMOVE_FILE);
+ }
+ 
+ static int hook_path_rmdir(const struct path *const dir,
+-		struct dentry *const dentry)
++			   struct dentry *const dentry)
+ {
+ 	return current_check_access_path(dir, LANDLOCK_ACCESS_FS_REMOVE_DIR);
+ }
+ 
+ /* File hooks */
+ 
+-static inline u32 get_file_access(const struct file *const file)
++static inline access_mask_t get_file_access(const struct file *const file)
+ {
+-	u32 access = 0;
++	access_mask_t access = 0;
+ 
+ 	if (file->f_mode & FMODE_READ) {
+ 		/* A directory can only be opened in read mode. */
+@@ -688,5 +733,5 @@ static struct security_hook_list landlock_hooks[] __lsm_ro_after_init = {
+ __init void landlock_add_fs_hooks(void)
+ {
+ 	security_add_hooks(landlock_hooks, ARRAY_SIZE(landlock_hooks),
+-			LANDLOCK_NAME);
++			   LANDLOCK_NAME);
+ }
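
The behavioral heart of this fs.c change is that check_access_path() now keeps one layer mask per access bit instead of a single mask for the whole request, so several rules along the path may each satisfy part of a multi-access request for a given layer. A userspace rendering of the unmask step, with plain loops standing in for for_each_set_bit() and illustrative widths:

#include <stdbool.h>
#include <stdint.h>

#define NUM_ACCESS 13 /* illustrative, cf. LANDLOCK_NUM_ACCESS_FS */

/*
 * For every requested access bit that this rule's layer grants, clear
 * that layer's bit in the per-access mask.  The path walk is over once
 * every requested access has an empty mask, i.e. each policy layer has
 * granted it through some rule encountered on the path.
 */
static bool unmask_one_layer(uint16_t layer_bit, uint16_t layer_access,
			     uint16_t access_request,
			     uint16_t layer_masks[NUM_ACCESS])
{
	bool is_empty = true;
	unsigned int bit;

	for (bit = 0; bit < NUM_ACCESS; bit++) {
		if (!(access_request & (1U << bit)))
			continue;
		if (layer_access & (1U << bit))
			layer_masks[bit] &= ~layer_bit;
		is_empty = is_empty && !layer_masks[bit];
	}
	return is_empty; /* true: all relevant masks drained, allow */
}
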
+diff --git a/security/landlock/fs.h b/security/landlock/fs.h
+index 187284b421c9d..8db7acf9109b6 100644
+--- a/security/landlock/fs.h
++++ b/security/landlock/fs.h
+@@ -50,14 +50,14 @@ struct landlock_superblock_security {
+ 	atomic_long_t inode_refs;
+ };
+ 
+-static inline struct landlock_inode_security *landlock_inode(
+-		const struct inode *const inode)
++static inline struct landlock_inode_security *
++landlock_inode(const struct inode *const inode)
+ {
+ 	return inode->i_security + landlock_blob_sizes.lbs_inode;
+ }
+ 
+-static inline struct landlock_superblock_security *landlock_superblock(
+-		const struct super_block *const superblock)
++static inline struct landlock_superblock_security *
++landlock_superblock(const struct super_block *const superblock)
+ {
+ 	return superblock->s_security + landlock_blob_sizes.lbs_superblock;
+ }
+@@ -65,6 +65,7 @@ static inline struct landlock_superblock_security *landlock_superblock(
+ __init void landlock_add_fs_hooks(void);
+ 
+ int landlock_append_fs_rule(struct landlock_ruleset *const ruleset,
+-		const struct path *const path, u32 access_hierarchy);
++			    const struct path *const path,
++			    access_mask_t access_hierarchy);
+ 
+ #endif /* _SECURITY_LANDLOCK_FS_H */
+diff --git a/security/landlock/limits.h b/security/landlock/limits.h
+index 2a0a1095ee27e..17c2a2e7fe1ef 100644
+--- a/security/landlock/limits.h
++++ b/security/landlock/limits.h
+@@ -9,13 +9,19 @@
+ #ifndef _SECURITY_LANDLOCK_LIMITS_H
+ #define _SECURITY_LANDLOCK_LIMITS_H
+ 
++#include <linux/bitops.h>
+ #include <linux/limits.h>
+ #include <uapi/linux/landlock.h>
+ 
+-#define LANDLOCK_MAX_NUM_LAYERS		64
++/* clang-format off */
++
++#define LANDLOCK_MAX_NUM_LAYERS		16
+ #define LANDLOCK_MAX_NUM_RULES		U32_MAX
+ 
+ #define LANDLOCK_LAST_ACCESS_FS		LANDLOCK_ACCESS_FS_MAKE_SYM
+ #define LANDLOCK_MASK_ACCESS_FS		((LANDLOCK_LAST_ACCESS_FS << 1) - 1)
++#define LANDLOCK_NUM_ACCESS_FS		__const_hweight64(LANDLOCK_MASK_ACCESS_FS)
++
++/* clang-format on */
+ 
+ #endif /* _SECURITY_LANDLOCK_LIMITS_H */
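
The limits.h hunk uses the usual contiguous-mask idioms: (last << 1) - 1 sets every bit up to and including the last access flag, and the number of access rights is the popcount of that mask, which is how LANDLOCK_NUM_ACCESS_FS is derived via __const_hweight64. In plain C with the 5.18 values (using the GCC/Clang builtin in place of the kernel helper):

#include <assert.h>

#define LAST_ACCESS	(1ULL << 12)		 /* LANDLOCK_ACCESS_FS_MAKE_SYM */
#define MASK_ACCESS	((LAST_ACCESS << 1) - 1) /* bits 0..12 set */

int main(void)
{
	/* Popcount of a contiguous mask == index of the last bit + 1. */
	assert(__builtin_popcountll(MASK_ACCESS) == 13);
	return 0;
}
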
+diff --git a/security/landlock/object.c b/security/landlock/object.c
+index d674fdf9ff04f..1f50612f01850 100644
+--- a/security/landlock/object.c
++++ b/security/landlock/object.c
+@@ -17,9 +17,9 @@
+ 
+ #include "object.h"
+ 
+-struct landlock_object *landlock_create_object(
+-		const struct landlock_object_underops *const underops,
+-		void *const underobj)
++struct landlock_object *
++landlock_create_object(const struct landlock_object_underops *const underops,
++		       void *const underobj)
+ {
+ 	struct landlock_object *new_object;
+ 
+diff --git a/security/landlock/object.h b/security/landlock/object.h
+index 3f80674c6c8d3..5f28c35e8aa8c 100644
+--- a/security/landlock/object.h
++++ b/security/landlock/object.h
+@@ -76,9 +76,9 @@ struct landlock_object {
+ 	};
+ };
+ 
+-struct landlock_object *landlock_create_object(
+-		const struct landlock_object_underops *const underops,
+-		void *const underobj);
++struct landlock_object *
++landlock_create_object(const struct landlock_object_underops *const underops,
++		       void *const underobj);
+ 
+ void landlock_put_object(struct landlock_object *const object);
+ 
+diff --git a/security/landlock/ptrace.c b/security/landlock/ptrace.c
+index f55b82446de21..4c5b9cd712861 100644
+--- a/security/landlock/ptrace.c
++++ b/security/landlock/ptrace.c
+@@ -30,7 +30,7 @@
+  * means a subset of) the @child domain.
+  */
+ static bool domain_scope_le(const struct landlock_ruleset *const parent,
+-		const struct landlock_ruleset *const child)
++			    const struct landlock_ruleset *const child)
+ {
+ 	const struct landlock_hierarchy *walker;
+ 
+@@ -48,7 +48,7 @@ static bool domain_scope_le(const struct landlock_ruleset *const parent,
+ }
+ 
+ static bool task_is_scoped(const struct task_struct *const parent,
+-		const struct task_struct *const child)
++			   const struct task_struct *const child)
+ {
+ 	bool is_scoped;
+ 	const struct landlock_ruleset *dom_parent, *dom_child;
+@@ -62,7 +62,7 @@ static bool task_is_scoped(const struct task_struct *const parent,
+ }
+ 
+ static int task_ptrace(const struct task_struct *const parent,
+-		const struct task_struct *const child)
++		       const struct task_struct *const child)
+ {
+ 	/* Quick return for non-landlocked tasks. */
+ 	if (!landlocked(parent))
+@@ -86,7 +86,7 @@ static int task_ptrace(const struct task_struct *const parent,
+  * granted, -errno if denied.
+  */
+ static int hook_ptrace_access_check(struct task_struct *const child,
+-		const unsigned int mode)
++				    const unsigned int mode)
+ {
+ 	return task_ptrace(current, child);
+ }
+@@ -116,5 +116,5 @@ static struct security_hook_list landlock_hooks[] __lsm_ro_after_init = {
+ __init void landlock_add_ptrace_hooks(void)
+ {
+ 	security_add_hooks(landlock_hooks, ARRAY_SIZE(landlock_hooks),
+-			LANDLOCK_NAME);
++			   LANDLOCK_NAME);
+ }
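
domain_scope_le() above implements the nesting check behind Landlock's ptrace policy: a tracer may act on a tracee only if the tracer's domain is an ancestor of (or equal to) the tracee's, since a child domain is always a subset of its parents. Stripped of the RCU and cred plumbing, the walk reduces to:

#include <stdbool.h>
#include <stddef.h>

struct hierarchy {
	struct hierarchy *parent;
};

/* True if @parent is NULL (no domain) or an ancestor of @child. */
static bool scope_le(const struct hierarchy *parent,
		     const struct hierarchy *child)
{
	const struct hierarchy *walker;

	if (!parent)
		return true;
	if (!child)
		return false;
	for (walker = child; walker; walker = walker->parent) {
		if (walker == parent)
			return true;
	}
	return false;
}
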
+diff --git a/security/landlock/ruleset.c b/security/landlock/ruleset.c
+index ec72b9262bf38..996484f98bfde 100644
+--- a/security/landlock/ruleset.c
++++ b/security/landlock/ruleset.c
+@@ -28,8 +28,9 @@ static struct landlock_ruleset *create_ruleset(const u32 num_layers)
+ {
+ 	struct landlock_ruleset *new_ruleset;
+ 
+-	new_ruleset = kzalloc(struct_size(new_ruleset, fs_access_masks,
+-				num_layers), GFP_KERNEL_ACCOUNT);
++	new_ruleset =
++		kzalloc(struct_size(new_ruleset, fs_access_masks, num_layers),
++			GFP_KERNEL_ACCOUNT);
+ 	if (!new_ruleset)
+ 		return ERR_PTR(-ENOMEM);
+ 	refcount_set(&new_ruleset->usage, 1);
+@@ -44,7 +45,8 @@ static struct landlock_ruleset *create_ruleset(const u32 num_layers)
+ 	return new_ruleset;
+ }
+ 
+-struct landlock_ruleset *landlock_create_ruleset(const u32 fs_access_mask)
++struct landlock_ruleset *
++landlock_create_ruleset(const access_mask_t fs_access_mask)
+ {
+ 	struct landlock_ruleset *new_ruleset;
+ 
+@@ -66,11 +68,10 @@ static void build_check_rule(void)
+ 	BUILD_BUG_ON(rule.num_layers < LANDLOCK_MAX_NUM_LAYERS);
+ }
+ 
+-static struct landlock_rule *create_rule(
+-		struct landlock_object *const object,
+-		const struct landlock_layer (*const layers)[],
+-		const u32 num_layers,
+-		const struct landlock_layer *const new_layer)
++static struct landlock_rule *
++create_rule(struct landlock_object *const object,
++	    const struct landlock_layer (*const layers)[], const u32 num_layers,
++	    const struct landlock_layer *const new_layer)
+ {
+ 	struct landlock_rule *new_rule;
+ 	u32 new_num_layers;
+@@ -85,7 +86,7 @@ static struct landlock_rule *create_rule(
+ 		new_num_layers = num_layers;
+ 	}
+ 	new_rule = kzalloc(struct_size(new_rule, layers, new_num_layers),
+-			GFP_KERNEL_ACCOUNT);
++			   GFP_KERNEL_ACCOUNT);
+ 	if (!new_rule)
+ 		return ERR_PTR(-ENOMEM);
+ 	RB_CLEAR_NODE(&new_rule->node);
+@@ -94,7 +95,7 @@ static struct landlock_rule *create_rule(
+ 	new_rule->num_layers = new_num_layers;
+ 	/* Copies the original layer stack. */
+ 	memcpy(new_rule->layers, layers,
+-			flex_array_size(new_rule, layers, num_layers));
++	       flex_array_size(new_rule, layers, num_layers));
+ 	if (new_layer)
+ 		/* Adds a copy of @new_layer on the layer stack. */
+ 		new_rule->layers[new_rule->num_layers - 1] = *new_layer;
+@@ -142,9 +143,9 @@ static void build_check_ruleset(void)
+  * access rights.
+  */
+ static int insert_rule(struct landlock_ruleset *const ruleset,
+-		struct landlock_object *const object,
+-		const struct landlock_layer (*const layers)[],
+-		size_t num_layers)
++		       struct landlock_object *const object,
++		       const struct landlock_layer (*const layers)[],
++		       size_t num_layers)
+ {
+ 	struct rb_node **walker_node;
+ 	struct rb_node *parent_node = NULL;
+@@ -156,8 +157,8 @@ static int insert_rule(struct landlock_ruleset *const ruleset,
+ 		return -ENOENT;
+ 	walker_node = &(ruleset->root.rb_node);
+ 	while (*walker_node) {
+-		struct landlock_rule *const this = rb_entry(*walker_node,
+-				struct landlock_rule, node);
++		struct landlock_rule *const this =
++			rb_entry(*walker_node, struct landlock_rule, node);
+ 
+ 		if (this->object != object) {
+ 			parent_node = *walker_node;
+@@ -194,7 +195,7 @@ static int insert_rule(struct landlock_ruleset *const ruleset,
+ 		 * ruleset and a domain.
+ 		 */
+ 		new_rule = create_rule(object, &this->layers, this->num_layers,
+-				&(*layers)[0]);
++				       &(*layers)[0]);
+ 		if (IS_ERR(new_rule))
+ 			return PTR_ERR(new_rule);
+ 		rb_replace_node(&this->node, &new_rule->node, &ruleset->root);
+@@ -228,13 +229,14 @@ static void build_check_layer(void)
+ 
+ /* @ruleset must be locked by the caller. */
+ int landlock_insert_rule(struct landlock_ruleset *const ruleset,
+-		struct landlock_object *const object, const u32 access)
++			 struct landlock_object *const object,
++			 const access_mask_t access)
+ {
+-	struct landlock_layer layers[] = {{
++	struct landlock_layer layers[] = { {
+ 		.access = access,
+ 		/* When @level is zero, insert_rule() extends @ruleset. */
+ 		.level = 0,
+-	}};
++	} };
+ 
+ 	build_check_layer();
+ 	return insert_rule(ruleset, object, &layers, ARRAY_SIZE(layers));
+@@ -257,7 +259,7 @@ static void put_hierarchy(struct landlock_hierarchy *hierarchy)
+ }
+ 
+ static int merge_ruleset(struct landlock_ruleset *const dst,
+-		struct landlock_ruleset *const src)
++			 struct landlock_ruleset *const src)
+ {
+ 	struct landlock_rule *walker_rule, *next_rule;
+ 	int err = 0;
+@@ -282,11 +284,11 @@ static int merge_ruleset(struct landlock_ruleset *const dst,
+ 	dst->fs_access_masks[dst->num_layers - 1] = src->fs_access_masks[0];
+ 
+ 	/* Merges the @src tree. */
+-	rbtree_postorder_for_each_entry_safe(walker_rule, next_rule,
+-			&src->root, node) {
+-		struct landlock_layer layers[] = {{
++	rbtree_postorder_for_each_entry_safe(walker_rule, next_rule, &src->root,
++					     node) {
++		struct landlock_layer layers[] = { {
+ 			.level = dst->num_layers,
+-		}};
++		} };
+ 
+ 		if (WARN_ON_ONCE(walker_rule->num_layers != 1)) {
+ 			err = -EINVAL;
+@@ -298,7 +300,7 @@ static int merge_ruleset(struct landlock_ruleset *const dst,
+ 		}
+ 		layers[0].access = walker_rule->layers[0].access;
+ 		err = insert_rule(dst, walker_rule->object, &layers,
+-				ARRAY_SIZE(layers));
++				  ARRAY_SIZE(layers));
+ 		if (err)
+ 			goto out_unlock;
+ 	}
+@@ -310,7 +312,7 @@ out_unlock:
+ }
+ 
+ static int inherit_ruleset(struct landlock_ruleset *const parent,
+-		struct landlock_ruleset *const child)
++			   struct landlock_ruleset *const child)
+ {
+ 	struct landlock_rule *walker_rule, *next_rule;
+ 	int err = 0;
+@@ -325,9 +327,10 @@ static int inherit_ruleset(struct landlock_ruleset *const parent,
+ 
+ 	/* Copies the @parent tree. */
+ 	rbtree_postorder_for_each_entry_safe(walker_rule, next_rule,
+-			&parent->root, node) {
++					     &parent->root, node) {
+ 		err = insert_rule(child, walker_rule->object,
+-				&walker_rule->layers, walker_rule->num_layers);
++				  &walker_rule->layers,
++				  walker_rule->num_layers);
+ 		if (err)
+ 			goto out_unlock;
+ 	}
+@@ -338,7 +341,7 @@ static int inherit_ruleset(struct landlock_ruleset *const parent,
+ 	}
+ 	/* Copies the parent layer stack and leaves a space for the new layer. */
+ 	memcpy(child->fs_access_masks, parent->fs_access_masks,
+-			flex_array_size(parent, fs_access_masks, parent->num_layers));
++	       flex_array_size(parent, fs_access_masks, parent->num_layers));
+ 
+ 	if (WARN_ON_ONCE(!parent->hierarchy)) {
+ 		err = -EINVAL;
+@@ -358,8 +361,7 @@ static void free_ruleset(struct landlock_ruleset *const ruleset)
+ 	struct landlock_rule *freeme, *next;
+ 
+ 	might_sleep();
+-	rbtree_postorder_for_each_entry_safe(freeme, next, &ruleset->root,
+-			node)
++	rbtree_postorder_for_each_entry_safe(freeme, next, &ruleset->root, node)
+ 		free_rule(freeme);
+ 	put_hierarchy(ruleset->hierarchy);
+ 	kfree(ruleset);
+@@ -397,9 +399,9 @@ void landlock_put_ruleset_deferred(struct landlock_ruleset *const ruleset)
+  * Returns the intersection of @parent and @ruleset, or returns @parent if
+  * @ruleset is empty, or returns a duplicate of @ruleset if @parent is empty.
+  */
+-struct landlock_ruleset *landlock_merge_ruleset(
+-		struct landlock_ruleset *const parent,
+-		struct landlock_ruleset *const ruleset)
++struct landlock_ruleset *
++landlock_merge_ruleset(struct landlock_ruleset *const parent,
++		       struct landlock_ruleset *const ruleset)
+ {
+ 	struct landlock_ruleset *new_dom;
+ 	u32 num_layers;
+@@ -421,8 +423,8 @@ struct landlock_ruleset *landlock_merge_ruleset(
+ 	new_dom = create_ruleset(num_layers);
+ 	if (IS_ERR(new_dom))
+ 		return new_dom;
+-	new_dom->hierarchy = kzalloc(sizeof(*new_dom->hierarchy),
+-			GFP_KERNEL_ACCOUNT);
++	new_dom->hierarchy =
++		kzalloc(sizeof(*new_dom->hierarchy), GFP_KERNEL_ACCOUNT);
+ 	if (!new_dom->hierarchy) {
+ 		err = -ENOMEM;
+ 		goto out_put_dom;
+@@ -449,9 +451,9 @@ out_put_dom:
+ /*
+  * The returned access has the same lifetime as @ruleset.
+  */
+-const struct landlock_rule *landlock_find_rule(
+-		const struct landlock_ruleset *const ruleset,
+-		const struct landlock_object *const object)
++const struct landlock_rule *
++landlock_find_rule(const struct landlock_ruleset *const ruleset,
++		   const struct landlock_object *const object)
+ {
+ 	const struct rb_node *node;
+ 
+@@ -459,8 +461,8 @@ const struct landlock_rule *landlock_find_rule(
+ 		return NULL;
+ 	node = ruleset->root.rb_node;
+ 	while (node) {
+-		struct landlock_rule *this = rb_entry(node,
+-				struct landlock_rule, node);
++		struct landlock_rule *this =
++			rb_entry(node, struct landlock_rule, node);
+ 
+ 		if (this->object == object)
+ 			return this;
+diff --git a/security/landlock/ruleset.h b/security/landlock/ruleset.h
+index 2d3ed7ec5a0ab..d43231b783e4f 100644
+--- a/security/landlock/ruleset.h
++++ b/security/landlock/ruleset.h
+@@ -9,13 +9,26 @@
+ #ifndef _SECURITY_LANDLOCK_RULESET_H
+ #define _SECURITY_LANDLOCK_RULESET_H
+ 
++#include <linux/bitops.h>
++#include <linux/build_bug.h>
+ #include <linux/mutex.h>
+ #include <linux/rbtree.h>
+ #include <linux/refcount.h>
+ #include <linux/workqueue.h>
+ 
++#include "limits.h"
+ #include "object.h"
+ 
++typedef u16 access_mask_t;
++/* Makes sure all filesystem access rights can be stored. */
++static_assert(BITS_PER_TYPE(access_mask_t) >= LANDLOCK_NUM_ACCESS_FS);
++/* Makes sure for_each_set_bit() and for_each_clear_bit() calls are OK. */
++static_assert(sizeof(unsigned long) >= sizeof(access_mask_t));
++
++typedef u16 layer_mask_t;
++/* Makes sure all layers can be checked. */
++static_assert(BITS_PER_TYPE(layer_mask_t) >= LANDLOCK_MAX_NUM_LAYERS);
++
+ /**
+  * struct landlock_layer - Access rights for a given layer
+  */
+@@ -28,7 +41,7 @@ struct landlock_layer {
+ 	 * @access: Bitfield of allowed actions on the kernel object.  They are
+ 	 * relative to the object type (e.g. %LANDLOCK_ACTION_FS_READ).
+ 	 */
+-	u16 access;
++	access_mask_t access;
+ };
+ 
+ /**
+@@ -135,26 +148,28 @@ struct landlock_ruleset {
+ 			 * layers are set once and never changed for the
+ 			 * lifetime of the ruleset.
+ 			 */
+-			u16 fs_access_masks[];
++			access_mask_t fs_access_masks[];
+ 		};
+ 	};
+ };
+ 
+-struct landlock_ruleset *landlock_create_ruleset(const u32 fs_access_mask);
++struct landlock_ruleset *
++landlock_create_ruleset(const access_mask_t fs_access_mask);
+ 
+ void landlock_put_ruleset(struct landlock_ruleset *const ruleset);
+ void landlock_put_ruleset_deferred(struct landlock_ruleset *const ruleset);
+ 
+ int landlock_insert_rule(struct landlock_ruleset *const ruleset,
+-		struct landlock_object *const object, const u32 access);
++			 struct landlock_object *const object,
++			 const access_mask_t access);
+ 
+-struct landlock_ruleset *landlock_merge_ruleset(
+-		struct landlock_ruleset *const parent,
+-		struct landlock_ruleset *const ruleset);
++struct landlock_ruleset *
++landlock_merge_ruleset(struct landlock_ruleset *const parent,
++		       struct landlock_ruleset *const ruleset);
+ 
+-const struct landlock_rule *landlock_find_rule(
+-		const struct landlock_ruleset *const ruleset,
+-		const struct landlock_object *const object);
++const struct landlock_rule *
++landlock_find_rule(const struct landlock_ruleset *const ruleset,
++		   const struct landlock_object *const object);
+ 
+ static inline void landlock_get_ruleset(struct landlock_ruleset *const ruleset)
+ {
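
The new access_mask_t/layer_mask_t typedefs above turn width assumptions into compile-time checks. The C11 equivalent of that pattern, using the limits this patch sets (16 layers, 13 filesystem access rights):

#include <limits.h>
#include <stdint.h>

typedef uint16_t access_mask_t;
typedef uint16_t layer_mask_t;

#define NUM_ACCESS_FS	13
#define MAX_NUM_LAYERS	16

/* Every filesystem access right must fit in access_mask_t... */
_Static_assert(sizeof(access_mask_t) * CHAR_BIT >= NUM_ACCESS_FS,
	       "access_mask_t too narrow");
/* ...and every layer bit in layer_mask_t. */
_Static_assert(sizeof(layer_mask_t) * CHAR_BIT >= MAX_NUM_LAYERS,
	       "layer_mask_t too narrow");
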
+diff --git a/security/landlock/syscalls.c b/security/landlock/syscalls.c
+index 7e27ce394020d..507d43827afed 100644
+--- a/security/landlock/syscalls.c
++++ b/security/landlock/syscalls.c
+@@ -43,9 +43,10 @@
+  * @src: User space pointer or NULL.
+  * @usize: (Alleged) size of the data pointed to by @src.
+  */
+-static __always_inline int copy_min_struct_from_user(void *const dst,
+-		const size_t ksize, const size_t ksize_min,
+-		const void __user *const src, const size_t usize)
++static __always_inline int
++copy_min_struct_from_user(void *const dst, const size_t ksize,
++			  const size_t ksize_min, const void __user *const src,
++			  const size_t usize)
+ {
+ 	/* Checks buffer inconsistencies. */
+ 	BUILD_BUG_ON(!dst);
+@@ -93,7 +94,7 @@ static void build_check_abi(void)
+ /* Ruleset handling */
+ 
+ static int fop_ruleset_release(struct inode *const inode,
+-		struct file *const filp)
++			       struct file *const filp)
+ {
+ 	struct landlock_ruleset *ruleset = filp->private_data;
+ 
+@@ -102,15 +103,15 @@ static int fop_ruleset_release(struct inode *const inode,
+ }
+ 
+ static ssize_t fop_dummy_read(struct file *const filp, char __user *const buf,
+-		const size_t size, loff_t *const ppos)
++			      const size_t size, loff_t *const ppos)
+ {
+ 	/* Dummy handler to enable FMODE_CAN_READ. */
+ 	return -EINVAL;
+ }
+ 
+ static ssize_t fop_dummy_write(struct file *const filp,
+-		const char __user *const buf, const size_t size,
+-		loff_t *const ppos)
++			       const char __user *const buf, const size_t size,
++			       loff_t *const ppos)
+ {
+ 	/* Dummy handler to enable FMODE_CAN_WRITE. */
+ 	return -EINVAL;
+@@ -128,7 +129,7 @@ static const struct file_operations ruleset_fops = {
+ 	.write = fop_dummy_write,
+ };
+ 
+-#define LANDLOCK_ABI_VERSION	1
++#define LANDLOCK_ABI_VERSION 1
+ 
+ /**
+  * sys_landlock_create_ruleset - Create a new ruleset
+@@ -168,22 +169,23 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
+ 		return -EOPNOTSUPP;
+ 
+ 	if (flags) {
+-		if ((flags == LANDLOCK_CREATE_RULESET_VERSION)
+-				&& !attr && !size)
++		if ((flags == LANDLOCK_CREATE_RULESET_VERSION) && !attr &&
++		    !size)
+ 			return LANDLOCK_ABI_VERSION;
+ 		return -EINVAL;
+ 	}
+ 
+ 	/* Copies raw user space buffer. */
+ 	err = copy_min_struct_from_user(&ruleset_attr, sizeof(ruleset_attr),
+-			offsetofend(typeof(ruleset_attr), handled_access_fs),
+-			attr, size);
++					offsetofend(typeof(ruleset_attr),
++						    handled_access_fs),
++					attr, size);
+ 	if (err)
+ 		return err;
+ 
+ 	/* Checks content (and 32-bits cast). */
+ 	if ((ruleset_attr.handled_access_fs | LANDLOCK_MASK_ACCESS_FS) !=
+-			LANDLOCK_MASK_ACCESS_FS)
++	    LANDLOCK_MASK_ACCESS_FS)
+ 		return -EINVAL;
+ 
+ 	/* Checks arguments and transforms to kernel struct. */
+@@ -193,7 +195,7 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
+ 
+ 	/* Creates anonymous FD referring to the ruleset. */
+ 	ruleset_fd = anon_inode_getfd("[landlock-ruleset]", &ruleset_fops,
+-			ruleset, O_RDWR | O_CLOEXEC);
++				      ruleset, O_RDWR | O_CLOEXEC);
+ 	if (ruleset_fd < 0)
+ 		landlock_put_ruleset(ruleset);
+ 	return ruleset_fd;
+@@ -204,7 +206,7 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
+  * landlock_put_ruleset() on the return value.
+  */
+ static struct landlock_ruleset *get_ruleset_from_fd(const int fd,
+-		const fmode_t mode)
++						    const fmode_t mode)
+ {
+ 	struct fd ruleset_f;
+ 	struct landlock_ruleset *ruleset;
+@@ -244,8 +246,8 @@ static int get_path_from_fd(const s32 fd, struct path *const path)
+ 	struct fd f;
+ 	int err = 0;
+ 
+-	BUILD_BUG_ON(!__same_type(fd,
+-		((struct landlock_path_beneath_attr *)NULL)->parent_fd));
++	BUILD_BUG_ON(!__same_type(
++		fd, ((struct landlock_path_beneath_attr *)NULL)->parent_fd));
+ 
+ 	/* Handles O_PATH. */
+ 	f = fdget_raw(fd);
+@@ -257,10 +259,10 @@ static int get_path_from_fd(const s32 fd, struct path *const path)
+ 	 * pipefs).
+ 	 */
+ 	if ((f.file->f_op == &ruleset_fops) ||
+-			(f.file->f_path.mnt->mnt_flags & MNT_INTERNAL) ||
+-			(f.file->f_path.dentry->d_sb->s_flags & SB_NOUSER) ||
+-			d_is_negative(f.file->f_path.dentry) ||
+-			IS_PRIVATE(d_backing_inode(f.file->f_path.dentry))) {
++	    (f.file->f_path.mnt->mnt_flags & MNT_INTERNAL) ||
++	    (f.file->f_path.dentry->d_sb->s_flags & SB_NOUSER) ||
++	    d_is_negative(f.file->f_path.dentry) ||
++	    IS_PRIVATE(d_backing_inode(f.file->f_path.dentry))) {
+ 		err = -EBADFD;
+ 		goto out_fdput;
+ 	}
+@@ -290,19 +292,18 @@ out_fdput:
+  *
+  * - EOPNOTSUPP: Landlock is supported by the kernel but disabled at boot time;
+  * - EINVAL: @flags is not 0, or inconsistent access in the rule (i.e.
+- *   &landlock_path_beneath_attr.allowed_access is not a subset of the rule's
+- *   accesses);
++ *   &landlock_path_beneath_attr.allowed_access is not a subset of the
++ *   ruleset handled accesses);
+  * - ENOMSG: Empty accesses (e.g. &landlock_path_beneath_attr.allowed_access);
+  * - EBADF: @ruleset_fd is not a file descriptor for the current thread, or a
+  *   member of @rule_attr is not a file descriptor as expected;
+  * - EBADFD: @ruleset_fd is not a ruleset file descriptor, or a member of
+- *   @rule_attr is not the expected file descriptor type (e.g. file open
+- *   without O_PATH);
++ *   @rule_attr is not the expected file descriptor type;
+  * - EPERM: @ruleset_fd has no write access to the underlying ruleset;
+  * - EFAULT: @rule_attr inconsistency.
+  */
+-SYSCALL_DEFINE4(landlock_add_rule,
+-		const int, ruleset_fd, const enum landlock_rule_type, rule_type,
++SYSCALL_DEFINE4(landlock_add_rule, const int, ruleset_fd,
++		const enum landlock_rule_type, rule_type,
+ 		const void __user *const, rule_attr, const __u32, flags)
+ {
+ 	struct landlock_path_beneath_attr path_beneath_attr;
+@@ -317,20 +318,24 @@ SYSCALL_DEFINE4(landlock_add_rule,
+ 	if (flags)
+ 		return -EINVAL;
+ 
+-	if (rule_type != LANDLOCK_RULE_PATH_BENEATH)
+-		return -EINVAL;
+-
+-	/* Copies raw user space buffer, only one type for now. */
+-	res = copy_from_user(&path_beneath_attr, rule_attr,
+-			sizeof(path_beneath_attr));
+-	if (res)
+-		return -EFAULT;
+-
+ 	/* Gets and checks the ruleset. */
+ 	ruleset = get_ruleset_from_fd(ruleset_fd, FMODE_CAN_WRITE);
+ 	if (IS_ERR(ruleset))
+ 		return PTR_ERR(ruleset);
+ 
++	if (rule_type != LANDLOCK_RULE_PATH_BENEATH) {
++		err = -EINVAL;
++		goto out_put_ruleset;
++	}
++
++	/* Copies raw user space buffer, only one type for now. */
++	res = copy_from_user(&path_beneath_attr, rule_attr,
++			     sizeof(path_beneath_attr));
++	if (res) {
++		err = -EFAULT;
++		goto out_put_ruleset;
++	}
++
+ 	/*
+ 	 * Informs about useless rule: empty allowed_access (i.e. deny rules)
+ 	 * are ignored in path walks.
+@@ -344,7 +349,7 @@ SYSCALL_DEFINE4(landlock_add_rule,
+ 	 * (ruleset->fs_access_masks[0] is automatically upgraded to 64-bits).
+ 	 */
+ 	if ((path_beneath_attr.allowed_access | ruleset->fs_access_masks[0]) !=
+-			ruleset->fs_access_masks[0]) {
++	    ruleset->fs_access_masks[0]) {
+ 		err = -EINVAL;
+ 		goto out_put_ruleset;
+ 	}
+@@ -356,7 +361,7 @@ SYSCALL_DEFINE4(landlock_add_rule,
+ 
+ 	/* Imports the new rule. */
+ 	err = landlock_append_fs_rule(ruleset, &path,
+-			path_beneath_attr.allowed_access);
++				      path_beneath_attr.allowed_access);
+ 	path_put(&path);
+ 
+ out_put_ruleset:
+@@ -389,8 +394,8 @@ out_put_ruleset:
+  * - E2BIG: The maximum number of stacked rulesets is reached for the current
+  *   thread.
+  */
+-SYSCALL_DEFINE2(landlock_restrict_self,
+-		const int, ruleset_fd, const __u32, flags)
++SYSCALL_DEFINE2(landlock_restrict_self, const int, ruleset_fd, const __u32,
++		flags)
+ {
+ 	struct landlock_ruleset *new_dom, *ruleset;
+ 	struct cred *new_cred;
+@@ -400,18 +405,18 @@ SYSCALL_DEFINE2(landlock_restrict_self,
+ 	if (!landlock_initialized)
+ 		return -EOPNOTSUPP;
+ 
+-	/* No flag for now. */
+-	if (flags)
+-		return -EINVAL;
+-
+ 	/*
+ 	 * Similar checks as for seccomp(2), except that an -EPERM may be
+ 	 * returned.
+ 	 */
+ 	if (!task_no_new_privs(current) &&
+-			!ns_capable_noaudit(current_user_ns(), CAP_SYS_ADMIN))
++	    !ns_capable_noaudit(current_user_ns(), CAP_SYS_ADMIN))
+ 		return -EPERM;
+ 
++	/* No flag for now. */
++	if (flags)
++		return -EINVAL;
++
+ 	/* Gets and checks the ruleset. */
+ 	ruleset = get_ruleset_from_fd(ruleset_fd, FMODE_CAN_READ);
+ 	if (IS_ERR(ruleset))
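
Although the syscalls.c hunks are mostly error-path reordering, they sit next to the one forward-compatibility API worth showing: probing the ABI with LANDLOCK_CREATE_RULESET_VERSION before building rulesets. A minimal probe, assuming SYS_landlock_create_ruleset is defined by the libc headers:

#include <linux/landlock.h>
#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

/*
 * Returns the Landlock ABI version (>= 1), or -1 with errno set:
 * ENOSYS if the kernel lacks Landlock, EOPNOTSUPP if it is built
 * in but disabled at boot.
 */
static long landlock_abi_version(void)
{
	return syscall(SYS_landlock_create_ruleset, NULL, 0,
		       LANDLOCK_CREATE_RULESET_VERSION);
}
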
+diff --git a/sound/core/jack.c b/sound/core/jack.c
+index d1e3055f2b6a5..88493cc31914b 100644
+--- a/sound/core/jack.c
++++ b/sound/core/jack.c
+@@ -42,8 +42,11 @@ static int snd_jack_dev_disconnect(struct snd_device *device)
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
+ 	struct snd_jack *jack = device->device_data;
+ 
+-	if (!jack->input_dev)
++	mutex_lock(&jack->input_dev_lock);
++	if (!jack->input_dev) {
++		mutex_unlock(&jack->input_dev_lock);
+ 		return 0;
++	}
+ 
+ 	/* If the input device is registered with the input subsystem
+ 	 * then we need to use a different deallocator. */
+@@ -52,6 +55,7 @@ static int snd_jack_dev_disconnect(struct snd_device *device)
+ 	else
+ 		input_free_device(jack->input_dev);
+ 	jack->input_dev = NULL;
++	mutex_unlock(&jack->input_dev_lock);
+ #endif /* CONFIG_SND_JACK_INPUT_DEV */
+ 	return 0;
+ }
+@@ -90,8 +94,11 @@ static int snd_jack_dev_register(struct snd_device *device)
+ 	snprintf(jack->name, sizeof(jack->name), "%s %s",
+ 		 card->shortname, jack->id);
+ 
+-	if (!jack->input_dev)
++	mutex_lock(&jack->input_dev_lock);
++	if (!jack->input_dev) {
++		mutex_unlock(&jack->input_dev_lock);
+ 		return 0;
++	}
+ 
+ 	jack->input_dev->name = jack->name;
+ 
+@@ -116,6 +123,7 @@ static int snd_jack_dev_register(struct snd_device *device)
+ 	if (err == 0)
+ 		jack->registered = 1;
+ 
++	mutex_unlock(&jack->input_dev_lock);
+ 	return err;
+ }
+ #endif /* CONFIG_SND_JACK_INPUT_DEV */
+@@ -517,9 +525,11 @@ int snd_jack_new(struct snd_card *card, const char *id, int type,
+ 		return -ENOMEM;
+ 	}
+ 
+-	/* don't creat input device for phantom jack */
+-	if (!phantom_jack) {
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
++	mutex_init(&jack->input_dev_lock);
++
++	/* don't create input device for phantom jack */
++	if (!phantom_jack) {
+ 		int i;
+ 
+ 		jack->input_dev = input_allocate_device();
+@@ -537,8 +547,8 @@ int snd_jack_new(struct snd_card *card, const char *id, int type,
+ 				input_set_capability(jack->input_dev, EV_SW,
+ 						     jack_switch_types[i]);
+ 
+-#endif /* CONFIG_SND_JACK_INPUT_DEV */
+ 	}
++#endif /* CONFIG_SND_JACK_INPUT_DEV */
+ 
+ 	err = snd_device_new(card, SNDRV_DEV_JACK, jack, &ops);
+ 	if (err < 0)
+@@ -578,10 +588,14 @@ EXPORT_SYMBOL(snd_jack_new);
+ void snd_jack_set_parent(struct snd_jack *jack, struct device *parent)
+ {
+ 	WARN_ON(jack->registered);
+-	if (!jack->input_dev)
++	mutex_lock(&jack->input_dev_lock);
++	if (!jack->input_dev) {
++		mutex_unlock(&jack->input_dev_lock);
+ 		return;
++	}
+ 
+ 	jack->input_dev->dev.parent = parent;
++	mutex_unlock(&jack->input_dev_lock);
+ }
+ EXPORT_SYMBOL(snd_jack_set_parent);
+ 
+@@ -629,6 +643,8 @@ EXPORT_SYMBOL(snd_jack_set_key);
+ 
+ /**
+  * snd_jack_report - Report the current status of a jack
++ * Note: This function uses mutexes and should be called from a
++ * context which can sleep (such as a workqueue).
+  *
+  * @jack:   The jack to report status for
+  * @status: The current status of the jack
+@@ -654,8 +670,11 @@ void snd_jack_report(struct snd_jack *jack, int status)
+ 					     status & jack_kctl->mask_bits);
+ 
+ #ifdef CONFIG_SND_JACK_INPUT_DEV
+-	if (!jack->input_dev)
++	mutex_lock(&jack->input_dev_lock);
++	if (!jack->input_dev) {
++		mutex_unlock(&jack->input_dev_lock);
+ 		return;
++	}
+ 
+ 	for (i = 0; i < ARRAY_SIZE(jack->key); i++) {
+ 		int testbit = ((SND_JACK_BTN_0 >> i) & ~mask_bits);
+@@ -675,6 +694,7 @@ void snd_jack_report(struct snd_jack *jack, int status)
+ 	}
+ 
+ 	input_sync(jack->input_dev);
++	mutex_unlock(&jack->input_dev_lock);
+ #endif /* CONFIG_SND_JACK_INPUT_DEV */
+ }
+ EXPORT_SYMBOL(snd_jack_report);
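
Every reader and writer of jack->input_dev above now follows the same discipline: take input_dev_lock, re-check the pointer for NULL, and unlock on the early-return path, so snd_jack_report() cannot race snd_jack_dev_disconnect() into a use-after-free. A runnable user-space analogue of that pattern, with pthreads standing in for the kernel mutex (all names illustrative):

	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	struct jack_like {
		pthread_mutex_t lock;
		char *input_dev;	/* NULL once disconnected */
	};

	static void report(struct jack_like *j)
	{
		pthread_mutex_lock(&j->lock);
		if (!j->input_dev) {		/* lost the race: bail out */
			pthread_mutex_unlock(&j->lock);
			return;
		}
		printf("reporting via %s\n", j->input_dev);
		pthread_mutex_unlock(&j->lock);
	}

	static void disconnect(struct jack_like *j)
	{
		pthread_mutex_lock(&j->lock);
		free(j->input_dev);
		j->input_dev = NULL;	/* readers see NULL, never a freed pointer */
		pthread_mutex_unlock(&j->lock);
	}

	int main(void)
	{
		struct jack_like j = { PTHREAD_MUTEX_INITIALIZER, strdup("input7") };

		report(&j);
		disconnect(&j);
		report(&j);	/* safe no-op after disconnect */
		return 0;
	}

Taking a mutex is also why the new kernel-doc note says snd_jack_report() must be called from a context that can sleep.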
+diff --git a/sound/core/pcm_memory.c b/sound/core/pcm_memory.c
+index 8848d2f3160d8..b8296b6eb2c19 100644
+--- a/sound/core/pcm_memory.c
++++ b/sound/core/pcm_memory.c
+@@ -453,7 +453,6 @@ EXPORT_SYMBOL(snd_pcm_lib_malloc_pages);
+  */
+ int snd_pcm_lib_free_pages(struct snd_pcm_substream *substream)
+ {
+-	struct snd_card *card = substream->pcm->card;
+ 	struct snd_pcm_runtime *runtime;
+ 
+ 	if (PCM_RUNTIME_CHECK(substream))
+@@ -462,6 +461,8 @@ int snd_pcm_lib_free_pages(struct snd_pcm_substream *substream)
+ 	if (runtime->dma_area == NULL)
+ 		return 0;
+ 	if (runtime->dma_buffer_p != &substream->dma_buffer) {
++		struct snd_card *card = substream->pcm->card;
++
+ 		/* it's a newly allocated buffer.  release it now. */
+ 		do_free_pages(card, runtime->dma_buffer_p);
+ 		kfree(runtime->dma_buffer_p);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index ad292df7d805c..323c74a042688 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -1981,6 +1981,7 @@ enum {
+ 	ALC1220_FIXUP_CLEVO_PB51ED_PINS,
+ 	ALC887_FIXUP_ASUS_AUDIO,
+ 	ALC887_FIXUP_ASUS_HMIC,
++	ALCS1200A_FIXUP_MIC_VREF,
+ };
+ 
+ static void alc889_fixup_coef(struct hda_codec *codec,
+@@ -2526,6 +2527,14 @@ static const struct hda_fixup alc882_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC887_FIXUP_ASUS_AUDIO,
+ 	},
++	[ALCS1200A_FIXUP_MIC_VREF] = {
++		.type = HDA_FIXUP_PINCTLS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x18, PIN_VREF50 }, /* rear mic */
++			{ 0x19, PIN_VREF50 }, /* front mic */
++			{}
++		}
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+@@ -2563,6 +2572,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x835f, "Asus Eee 1601", ALC888_FIXUP_EEE1601),
+ 	SND_PCI_QUIRK(0x1043, 0x84bc, "ASUS ET2700", ALC887_FIXUP_ASUS_BASS),
+ 	SND_PCI_QUIRK(0x1043, 0x8691, "ASUS ROG Ranger VIII", ALC882_FIXUP_GPIO3),
++	SND_PCI_QUIRK(0x1043, 0x8797, "ASUS TUF B550M-PLUS", ALCS1200A_FIXUP_MIC_VREF),
+ 	SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP),
+ 	SND_PCI_QUIRK(0x104d, 0x9044, "Sony VAIO AiO", ALC882_FIXUP_NO_PRIMARY_HP),
+ 	SND_PCI_QUIRK(0x104d, 0x9047, "Sony Vaio TT", ALC889_FIXUP_VAIO_TT),
+@@ -3131,6 +3141,7 @@ enum {
+ 	ALC269_TYPE_ALC257,
+ 	ALC269_TYPE_ALC215,
+ 	ALC269_TYPE_ALC225,
++	ALC269_TYPE_ALC245,
+ 	ALC269_TYPE_ALC287,
+ 	ALC269_TYPE_ALC294,
+ 	ALC269_TYPE_ALC300,
+@@ -3168,6 +3179,7 @@ static int alc269_parse_auto_config(struct hda_codec *codec)
+ 	case ALC269_TYPE_ALC257:
+ 	case ALC269_TYPE_ALC215:
+ 	case ALC269_TYPE_ALC225:
++	case ALC269_TYPE_ALC245:
+ 	case ALC269_TYPE_ALC287:
+ 	case ALC269_TYPE_ALC294:
+ 	case ALC269_TYPE_ALC300:
+@@ -3695,7 +3707,8 @@ static void alc225_init(struct hda_codec *codec)
+ 	hda_nid_t hp_pin = alc_get_hp_pin(spec);
+ 	bool hp1_pin_sense, hp2_pin_sense;
+ 
+-	if (spec->codec_variant != ALC269_TYPE_ALC287)
++	if (spec->codec_variant != ALC269_TYPE_ALC287 &&
++		spec->codec_variant != ALC269_TYPE_ALC245)
+ 		/* required only at boot or S3 and S4 resume time */
+ 		if (!spec->done_hp_init ||
+ 			is_s3_resume(codec) ||
+@@ -8954,6 +8967,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0a62, "Dell Precision 5560", ALC289_FIXUP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x0a9d, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0a9e, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1028, 0x0b19, "Dell XPS 15 9520", ALC289_FIXUP_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+@@ -10148,7 +10162,10 @@ static int patch_alc269(struct hda_codec *codec)
+ 	case 0x10ec0245:
+ 	case 0x10ec0285:
+ 	case 0x10ec0289:
+-		spec->codec_variant = ALC269_TYPE_ALC215;
++		if (alc_get_coef0(codec) & 0x0010)
++			spec->codec_variant = ALC269_TYPE_ALC245;
++		else
++			spec->codec_variant = ALC269_TYPE_ALC215;
+ 		spec->shutup = alc225_shutup;
+ 		spec->init_hook = alc225_init;
+ 		spec->gen.mixer_nid = 0;
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 9a767f47b89f1..959b70e8baf21 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -45,108 +45,126 @@ static struct snd_soc_card acp6x_card = {
+ 
+ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21D2"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21D3"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21D4"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21D5"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21CF"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21CG"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21CQ"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21CR"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21AW"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21AX"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21BN"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21BQ"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21CH"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21CJ"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21CK"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21CL"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21D8"),
+ 		}
+ 	},
+ 	{
++		.driver_data = &acp6x_card,
+ 		.matches = {
+ 			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21D9"),
+@@ -157,18 +175,21 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 
+ static int acp6x_probe(struct platform_device *pdev)
+ {
++	const struct dmi_system_id *dmi_id;
+ 	struct acp6x_pdm *machine = NULL;
+ 	struct snd_soc_card *card;
+ 	int ret;
+-	const struct dmi_system_id *dmi_id;
+ 
++	/* check for any DMI overrides */
+ 	dmi_id = dmi_first_match(yc_acp_quirk_table);
+-	if (!dmi_id)
++	if (dmi_id)
++		platform_set_drvdata(pdev, dmi_id->driver_data);
++
++	card = platform_get_drvdata(pdev);
++	if (!card)
+ 		return -ENODEV;
+-	card = &acp6x_card;
+ 	acp6x_card.dev = &pdev->dev;
+ 
+-	platform_set_drvdata(pdev, card);
+ 	snd_soc_card_set_drvdata(card, machine);
+ 	ret = devm_snd_soc_register_card(&pdev->dev, card);
+ 	if (ret) {
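
With .driver_data filled in, the probe path reduces to one DMI lookup: dmi_first_match() returns the first entry whose DMI_MATCH strings all match the running machine, the matched entry's card becomes the drvdata, and an unmatched machine now fails cleanly with -ENODEV. The mechanism in outline (a sketch mirroring the hunk above, not new API):

	static const struct dmi_system_id quirks[] = {
		{
			.driver_data = &acp6x_card,
			.matches = {	/* all DMI_MATCH entries must match */
				DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
				DMI_MATCH(DMI_PRODUCT_NAME, "21D2"),
			}
		},
		{}	/* terminator is mandatory */
	};

	const struct dmi_system_id *id = dmi_first_match(quirks);

	if (id)
		platform_set_drvdata(pdev, id->driver_data);
	if (!platform_get_drvdata(pdev))
		return -ENODEV;	/* no quirk entry, no card */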
+diff --git a/sound/soc/atmel/atmel-classd.c b/sound/soc/atmel/atmel-classd.c
+index a9f9f449c48c2..74b7b2611aa70 100644
+--- a/sound/soc/atmel/atmel-classd.c
++++ b/sound/soc/atmel/atmel-classd.c
+@@ -458,7 +458,6 @@ static const struct snd_soc_component_driver atmel_classd_cpu_dai_component = {
+ 	.num_controls		= ARRAY_SIZE(atmel_classd_snd_controls),
+ 	.idle_bias_on		= 1,
+ 	.use_pmdown_time	= 1,
+-	.endianness		= 1,
+ };
+ 
+ /* ASoC sound card */
+diff --git a/sound/soc/atmel/atmel-pdmic.c b/sound/soc/atmel/atmel-pdmic.c
+index 42117de299e74..ea34efac2fff5 100644
+--- a/sound/soc/atmel/atmel-pdmic.c
++++ b/sound/soc/atmel/atmel-pdmic.c
+@@ -481,7 +481,6 @@ static const struct snd_soc_component_driver atmel_pdmic_cpu_dai_component = {
+ 	.num_controls		= ARRAY_SIZE(atmel_pdmic_snd_controls),
+ 	.idle_bias_on		= 1,
+ 	.use_pmdown_time	= 1,
+-	.endianness		= 1,
+ };
+ 
+ /* ASoC sound card */
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index f46a226601032..3dea20b2c4054 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -953,7 +953,6 @@ config SND_SOC_MAX98095
+ 
+ config SND_SOC_MAX98357A
+ 	tristate "Maxim MAX98357A CODEC"
+-	depends on GPIOLIB
+ 
+ config SND_SOC_MAX98371
+ 	tristate
+@@ -1213,7 +1212,6 @@ config SND_SOC_RT1015
+ 
+ config SND_SOC_RT1015P
+ 	tristate
+-	depends on GPIOLIB
+ 
+ config SND_SOC_RT1019
+ 	tristate
+diff --git a/sound/soc/codecs/cs35l41-lib.c b/sound/soc/codecs/cs35l41-lib.c
+index aa6823fbd1a4d..17cf782f39af6 100644
+--- a/sound/soc/codecs/cs35l41-lib.c
++++ b/sound/soc/codecs/cs35l41-lib.c
+@@ -422,7 +422,7 @@ static bool cs35l41_volatile_reg(struct device *dev, unsigned int reg)
+ 	}
+ }
+ 
+-static const struct cs35l41_otp_packed_element_t otp_map_1[CS35L41_NUM_OTP_ELEM] = {
++static const struct cs35l41_otp_packed_element_t otp_map_1[] = {
+ 	/* addr         shift   size */
+ 	{ 0x00002030,	0,	4 }, /*TRIM_OSC_FREQ_TRIM*/
+ 	{ 0x00002030,	7,	1 }, /*TRIM_OSC_TRIM_DONE*/
+@@ -525,7 +525,7 @@ static const struct cs35l41_otp_packed_element_t otp_map_1[CS35L41_NUM_OTP_ELEM]
+ 	{ 0x00017044,	0,	24 }, /*LOT_NUMBER*/
+ };
+ 
+-static const struct cs35l41_otp_packed_element_t otp_map_2[CS35L41_NUM_OTP_ELEM] = {
++static const struct cs35l41_otp_packed_element_t otp_map_2[] = {
+ 	/* addr         shift   size */
+ 	{ 0x00002030,	0,	4 }, /*TRIM_OSC_FREQ_TRIM*/
+ 	{ 0x00002030,	7,	1 }, /*TRIM_OSC_TRIM_DONE*/
+@@ -671,35 +671,35 @@ static const struct cs35l41_otp_map_element_t cs35l41_otp_map_map[] = {
+ 	{
+ 		.id = 0x01,
+ 		.map = otp_map_1,
+-		.num_elements = CS35L41_NUM_OTP_ELEM,
++		.num_elements = ARRAY_SIZE(otp_map_1),
+ 		.bit_offset = 16,
+ 		.word_offset = 2,
+ 	},
+ 	{
+ 		.id = 0x02,
+ 		.map = otp_map_2,
+-		.num_elements = CS35L41_NUM_OTP_ELEM,
++		.num_elements = ARRAY_SIZE(otp_map_2),
+ 		.bit_offset = 16,
+ 		.word_offset = 2,
+ 	},
+ 	{
+ 		.id = 0x03,
+ 		.map = otp_map_2,
+-		.num_elements = CS35L41_NUM_OTP_ELEM,
++		.num_elements = ARRAY_SIZE(otp_map_2),
+ 		.bit_offset = 16,
+ 		.word_offset = 2,
+ 	},
+ 	{
+ 		.id = 0x06,
+ 		.map = otp_map_2,
+-		.num_elements = CS35L41_NUM_OTP_ELEM,
++		.num_elements = ARRAY_SIZE(otp_map_2),
+ 		.bit_offset = 16,
+ 		.word_offset = 2,
+ 	},
+ 	{
+ 		.id = 0x08,
+ 		.map = otp_map_1,
+-		.num_elements = CS35L41_NUM_OTP_ELEM,
++		.num_elements = ARRAY_SIZE(otp_map_1),
+ 		.bit_offset = 16,
+ 		.word_offset = 2,
+ 	},
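
Sizing each map with ARRAY_SIZE() instead of the shared CS35L41_NUM_OTP_ELEM bound lets num_elements track what the initializer actually contains; with a fixed [N] bound, a shorter initializer is silently zero-padded and the OTP unpacker would walk bogus all-zero trim entries. The hazard in miniature, as standalone C:

	#include <stdio.h>

	#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
	#define NUM_ELEM 4	/* stand-in for the old shared constant */

	struct elem { unsigned int addr, shift, size; };

	static const struct elem fixed[NUM_ELEM] = {	/* two real entries... */
		{ 0x2030, 0, 4 },
		{ 0x2030, 7, 1 },
	};	/* ...plus two silent all-zero entries */

	static const struct elem sized[] = {
		{ 0x2030, 0, 4 },
		{ 0x2030, 7, 1 },
	};

	int main(void)
	{
		printf("fixed: %zu, sized: %zu\n",
		       ARRAY_SIZE(fixed), ARRAY_SIZE(sized));	/* 4 vs 2 */
		return 0;
	}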
+diff --git a/sound/soc/codecs/lpass-macro-common.c b/sound/soc/codecs/lpass-macro-common.c
+index 6cede75ed3b5d..1b9082d237c13 100644
+--- a/sound/soc/codecs/lpass-macro-common.c
++++ b/sound/soc/codecs/lpass-macro-common.c
+@@ -24,42 +24,45 @@ struct lpass_macro *lpass_macro_pds_init(struct device *dev)
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	l_pds->macro_pd = dev_pm_domain_attach_by_name(dev, "macro");
+-	if (IS_ERR_OR_NULL(l_pds->macro_pd))
+-		return NULL;
+-
+-	ret = pm_runtime_get_sync(l_pds->macro_pd);
+-	if (ret < 0) {
+-		pm_runtime_put_noidle(l_pds->macro_pd);
++	if (IS_ERR_OR_NULL(l_pds->macro_pd)) {
++		ret = l_pds->macro_pd ? PTR_ERR(l_pds->macro_pd) : -ENODATA;
+ 		goto macro_err;
+ 	}
+ 
++	ret = pm_runtime_resume_and_get(l_pds->macro_pd);
++	if (ret < 0)
++		goto macro_sync_err;
++
+ 	l_pds->dcodec_pd = dev_pm_domain_attach_by_name(dev, "dcodec");
+-	if (IS_ERR_OR_NULL(l_pds->dcodec_pd))
++	if (IS_ERR_OR_NULL(l_pds->dcodec_pd)) {
++		ret = l_pds->dcodec_pd ? PTR_ERR(l_pds->dcodec_pd) : -ENODATA;
+ 		goto dcodec_err;
++	}
+ 
+-	ret = pm_runtime_get_sync(l_pds->dcodec_pd);
+-	if (ret < 0) {
+-		pm_runtime_put_noidle(l_pds->dcodec_pd);
++	ret = pm_runtime_resume_and_get(l_pds->dcodec_pd);
++	if (ret < 0)
+ 		goto dcodec_sync_err;
+-	}
+ 	return l_pds;
+ 
+ dcodec_sync_err:
+ 	dev_pm_domain_detach(l_pds->dcodec_pd, false);
+ dcodec_err:
+ 	pm_runtime_put(l_pds->macro_pd);
+-macro_err:
++macro_sync_err:
+ 	dev_pm_domain_detach(l_pds->macro_pd, false);
++macro_err:
+ 	return ERR_PTR(ret);
+ }
+ EXPORT_SYMBOL_GPL(lpass_macro_pds_init);
+ 
+ void lpass_macro_pds_exit(struct lpass_macro *pds)
+ {
+-	pm_runtime_put(pds->macro_pd);
+-	dev_pm_domain_detach(pds->macro_pd, false);
+-	pm_runtime_put(pds->dcodec_pd);
+-	dev_pm_domain_detach(pds->dcodec_pd, false);
++	if (pds) {
++		pm_runtime_put(pds->macro_pd);
++		dev_pm_domain_detach(pds->macro_pd, false);
++		pm_runtime_put(pds->dcodec_pd);
++		dev_pm_domain_detach(pds->dcodec_pd, false);
++	}
+ }
+ EXPORT_SYMBOL_GPL(lpass_macro_pds_exit);
+ 
+diff --git a/sound/soc/codecs/max98090.c b/sound/soc/codecs/max98090.c
+index 62b41ca050a20..5513acd360b8f 100644
+--- a/sound/soc/codecs/max98090.c
++++ b/sound/soc/codecs/max98090.c
+@@ -393,7 +393,8 @@ static int max98090_put_enab_tlv(struct snd_kcontrol *kcontrol,
+ 	struct soc_mixer_control *mc =
+ 		(struct soc_mixer_control *)kcontrol->private_value;
+ 	unsigned int mask = (1 << fls(mc->max)) - 1;
+-	unsigned int sel = ucontrol->value.integer.value[0];
++	int sel_unchecked = ucontrol->value.integer.value[0];
++	unsigned int sel;
+ 	unsigned int val = snd_soc_component_read(component, mc->reg);
+ 	unsigned int *select;
+ 
+@@ -413,8 +414,9 @@ static int max98090_put_enab_tlv(struct snd_kcontrol *kcontrol,
+ 
+ 	val = (val >> mc->shift) & mask;
+ 
+-	if (sel < 0 || sel > mc->max)
++	if (sel_unchecked < 0 || sel_unchecked > mc->max)
+ 		return -EINVAL;
++	sel = sel_unchecked;
+ 
+ 	*select = sel;
+ 
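
The sel_unchecked intermediate exists because ALSA delivers the control value as a long; assigning it straight into an unsigned int throws the sign away, so the old `sel < 0` arm could never fire and a negative value wrapped to a huge selector. The truncation, standalone:

	#include <stdio.h>

	int main(void)
	{
		long from_user = -1;	/* ucontrol->value.integer.value[0] is a long */
		unsigned int sel = from_user;	/* old code: sign lost before the check */

		printf("sel = %u (so sel < 0 is always false)\n", sel);
		printf("checking the long first: %s\n",
		       from_user < 0 ? "rejected" : "accepted");
		return 0;
	}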
+diff --git a/sound/soc/codecs/rk3328_codec.c b/sound/soc/codecs/rk3328_codec.c
+index 758d439e8c7a5..86b679cf7aef9 100644
+--- a/sound/soc/codecs/rk3328_codec.c
++++ b/sound/soc/codecs/rk3328_codec.c
+@@ -481,7 +481,7 @@ static int rk3328_platform_probe(struct platform_device *pdev)
+ 	ret = clk_prepare_enable(rk3328->pclk);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "failed to enable acodec pclk\n");
+-		return ret;
++		goto err_unprepare_mclk;
+ 	}
+ 
+ 	base = devm_platform_ioremap_resource(pdev, 0);
+diff --git a/sound/soc/codecs/rt5514.c b/sound/soc/codecs/rt5514.c
+index 577680df70520..92428f2b459ba 100644
+--- a/sound/soc/codecs/rt5514.c
++++ b/sound/soc/codecs/rt5514.c
+@@ -419,7 +419,7 @@ static int rt5514_dsp_voice_wake_up_put(struct snd_kcontrol *kcontrol,
+ 		}
+ 	}
+ 
+-	return 0;
++	return 1;
+ }
+ 
+ static const struct snd_kcontrol_new rt5514_snd_controls[] = {
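
The one-line change encodes the ALSA put() contract: return 0 when the control already held the written value, 1 when it changed (the core then raises a SNDRV_CTL_EVENT_MASK_VALUE notification), and a negative errno on failure. The shape of a conforming handler, with hypothetical names:

	static int example_switch_put(struct snd_kcontrol *kcontrol,
				      struct snd_ctl_elem_value *ucontrol)
	{
		struct example_priv *priv = snd_kcontrol_chip(kcontrol);
		bool on = !!ucontrol->value.integer.value[0];

		if (priv->enabled == on)
			return 0;	/* unchanged: no event to user space */
		priv->enabled = on;
		return 1;		/* changed: core notifies listeners */
	}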
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index 197c560479470..4b2e027c10331 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -4154,9 +4154,14 @@ static int rt5645_i2c_remove(struct i2c_client *i2c)
+ 	if (i2c->irq)
+ 		free_irq(i2c->irq, rt5645);
+ 
++	/*
++	 * Since the rt5645_btn_check_callback() can queue jack_detect_work,
++	 * the timer needs to be deleted first
++	 */
++	del_timer_sync(&rt5645->btn_check_timer);
++
+ 	cancel_delayed_work_sync(&rt5645->jack_detect_work);
+ 	cancel_delayed_work_sync(&rt5645->rcclock_work);
+-	del_timer_sync(&rt5645->btn_check_timer);
+ 
+ 	regulator_bulk_disable(ARRAY_SIZE(rt5645->supplies), rt5645->supplies);
+ 
+diff --git a/sound/soc/codecs/tscs454.c b/sound/soc/codecs/tscs454.c
+index 7e1826d6f06f4..32e6fa7b0a061 100644
+--- a/sound/soc/codecs/tscs454.c
++++ b/sound/soc/codecs/tscs454.c
+@@ -3120,18 +3120,17 @@ static int set_aif_sample_format(struct snd_soc_component *component,
+ 	unsigned int width;
+ 	int ret;
+ 
+-	switch (format) {
+-	case SNDRV_PCM_FORMAT_S16_LE:
++	switch (snd_pcm_format_width(format)) {
++	case 16:
+ 		width = FV_WL_16;
+ 		break;
+-	case SNDRV_PCM_FORMAT_S20_3LE:
++	case 20:
+ 		width = FV_WL_20;
+ 		break;
+-	case SNDRV_PCM_FORMAT_S24_3LE:
++	case 24:
+ 		width = FV_WL_24;
+ 		break;
+-	case SNDRV_PCM_FORMAT_S24_LE:
+-	case SNDRV_PCM_FORMAT_S32_LE:
++	case 32:
+ 		width = FV_WL_32;
+ 		break;
+ 	default:
+@@ -3326,6 +3325,7 @@ static const struct snd_soc_component_driver soc_component_dev_tscs454 = {
+ 	.num_dapm_routes = ARRAY_SIZE(tscs454_intercon),
+ 	.controls =	tscs454_snd_controls,
+ 	.num_controls = ARRAY_SIZE(tscs454_snd_controls),
++	.endianness = 1,
+ };
+ 
+ #define TSCS454_RATES SNDRV_PCM_RATE_8000_96000
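
Switching on snd_pcm_format_width() goes hand in hand with the new .endianness flag: once the core starts offering both little- and big-endian variants of each format, one case per bit width covers both where one case per SNDRV_PCM_FORMAT_* value would miss the BE twins (S16_LE and S16_BE both report a width of 16). A minimal sketch of the idiom:

	switch (snd_pcm_format_width(format)) {
	case 16:	/* S16_LE and S16_BE alike */
		width = FV_WL_16;
		break;
	case 32:	/* S32_LE and S32_BE alike */
		width = FV_WL_32;
		break;
	default:
		return -EINVAL;
	}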
+diff --git a/sound/soc/codecs/wm2000.c b/sound/soc/codecs/wm2000.c
+index 72e165cc64439..97ece3114b3dc 100644
+--- a/sound/soc/codecs/wm2000.c
++++ b/sound/soc/codecs/wm2000.c
+@@ -536,7 +536,7 @@ static int wm2000_anc_transition(struct wm2000_priv *wm2000,
+ {
+ 	struct i2c_client *i2c = wm2000->i2c;
+ 	int i, j;
+-	int ret;
++	int ret = 0;
+ 
+ 	if (wm2000->anc_mode == mode)
+ 		return 0;
+@@ -566,13 +566,13 @@ static int wm2000_anc_transition(struct wm2000_priv *wm2000,
+ 		ret = anc_transitions[i].step[j](i2c,
+ 						 anc_transitions[i].analogue);
+ 		if (ret != 0)
+-			return ret;
++			break;
+ 	}
+ 
+ 	if (anc_transitions[i].dest == ANC_OFF)
+ 		clk_disable_unprepare(wm2000->mclk);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int wm2000_anc_set_mode(struct wm2000_priv *wm2000)
+diff --git a/sound/soc/fsl/imx-hdmi.c b/sound/soc/fsl/imx-hdmi.c
+index 929f69b758af4..ec149dc739383 100644
+--- a/sound/soc/fsl/imx-hdmi.c
++++ b/sound/soc/fsl/imx-hdmi.c
+@@ -126,6 +126,7 @@ static int imx_hdmi_probe(struct platform_device *pdev)
+ 	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+ 	if (!data) {
+ 		ret = -ENOMEM;
++		put_device(&cpu_pdev->dev);
+ 		goto fail;
+ 	}
+ 
+diff --git a/sound/soc/fsl/imx-sgtl5000.c b/sound/soc/fsl/imx-sgtl5000.c
+index 8daced42d55e4..580a0d963f0eb 100644
+--- a/sound/soc/fsl/imx-sgtl5000.c
++++ b/sound/soc/fsl/imx-sgtl5000.c
+@@ -120,19 +120,19 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ 	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+ 	if (!data) {
+ 		ret = -ENOMEM;
+-		goto fail;
++		goto put_device;
+ 	}
+ 
+ 	comp = devm_kzalloc(&pdev->dev, 3 * sizeof(*comp), GFP_KERNEL);
+ 	if (!comp) {
+ 		ret = -ENOMEM;
+-		goto fail;
++		goto put_device;
+ 	}
+ 
+ 	data->codec_clk = clk_get(&codec_dev->dev, NULL);
+ 	if (IS_ERR(data->codec_clk)) {
+ 		ret = PTR_ERR(data->codec_clk);
+-		goto fail;
++		goto put_device;
+ 	}
+ 
+ 	data->clk_frequency = clk_get_rate(data->codec_clk);
+@@ -158,10 +158,10 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ 	data->card.dev = &pdev->dev;
+ 	ret = snd_soc_of_parse_card_name(&data->card, "model");
+ 	if (ret)
+-		goto fail;
++		goto put_device;
+ 	ret = snd_soc_of_parse_audio_routing(&data->card, "audio-routing");
+ 	if (ret)
+-		goto fail;
++		goto put_device;
+ 	data->card.num_links = 1;
+ 	data->card.owner = THIS_MODULE;
+ 	data->card.dai_link = &data->dai;
+@@ -174,7 +174,7 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ 	ret = devm_snd_soc_register_card(&pdev->dev, &data->card);
+ 	if (ret) {
+ 		dev_err_probe(&pdev->dev, ret, "snd_soc_register_card failed\n");
+-		goto fail;
++		goto put_device;
+ 	}
+ 
+ 	of_node_put(ssi_np);
+@@ -182,6 +182,8 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
++put_device:
++	put_device(&codec_dev->dev);
+ fail:
+ 	if (data && !IS_ERR(data->codec_clk))
+ 		clk_put(data->codec_clk);
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index d76a505052fb7..f81ae742faa78 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -773,6 +773,18 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 					BYT_RT5640_OVCD_SF_0P75 |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
++	{	/* HP Pro Tablet 408 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pro Tablet 408"),
++		},
++		.driver_data = (void *)(BYT_RT5640_DMIC1_MAP |
++					BYT_RT5640_JD_SRC_JD2_IN4N |
++					BYT_RT5640_OVCD_TH_1500UA |
++					BYT_RT5640_OVCD_SF_0P75 |
++					BYT_RT5640_SSP0_AIF1 |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{	/* HP Stream 7 */
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+diff --git a/sound/soc/intel/boards/sof_ssp_amp.c b/sound/soc/intel/boards/sof_ssp_amp.c
+index 88530e9de5435..ef70c6f27fe18 100644
+--- a/sound/soc/intel/boards/sof_ssp_amp.c
++++ b/sound/soc/intel/boards/sof_ssp_amp.c
+@@ -9,6 +9,7 @@
+ 
+ #include <linux/acpi.h>
+ #include <linux/delay.h>
++#include <linux/dmi.h>
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
+ #include <sound/core.h>
+@@ -78,6 +79,16 @@ struct sof_card_private {
+ 	bool idisp_codec;
+ };
+ 
++static const struct dmi_system_id chromebook_platforms[] = {
++	{
++		.ident = "Google Chromebooks",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Google"),
++		}
++	},
++	{},
++};
++
+ static const struct snd_soc_dapm_widget sof_ssp_amp_dapm_widgets[] = {
+ 	SND_SOC_DAPM_MIC("SoC DMIC", NULL),
+ };
+@@ -371,7 +382,7 @@ static int sof_ssp_amp_probe(struct platform_device *pdev)
+ 	struct snd_soc_dai_link *dai_links;
+ 	struct snd_soc_acpi_mach *mach;
+ 	struct sof_card_private *ctx;
+-	int dmic_be_num, hdmi_num = 0;
++	int dmic_be_num = 0, hdmi_num = 0;
+ 	int ret, ssp_codec;
+ 
+ 	ctx = devm_kzalloc(&pdev->dev, sizeof(*ctx), GFP_KERNEL);
+@@ -383,7 +394,8 @@ static int sof_ssp_amp_probe(struct platform_device *pdev)
+ 
+ 	mach = pdev->dev.platform_data;
+ 
+-	dmic_be_num = mach->mach_params.dmic_num;
++	if (dmi_check_system(chromebook_platforms) || mach->mach_params.dmic_num > 0)
++		dmic_be_num = 2;
+ 
+ 	ssp_codec = sof_ssp_amp_quirk & SOF_AMPLIFIER_SSP_MASK;
+ 
+diff --git a/sound/soc/mediatek/mt2701/mt2701-wm8960.c b/sound/soc/mediatek/mt2701/mt2701-wm8960.c
+index f56de1b918bf0..0cdf2ae362439 100644
+--- a/sound/soc/mediatek/mt2701/mt2701-wm8960.c
++++ b/sound/soc/mediatek/mt2701/mt2701-wm8960.c
+@@ -129,7 +129,8 @@ static int mt2701_wm8960_machine_probe(struct platform_device *pdev)
+ 	if (!codec_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_platform_node;
+ 	}
+ 	for_each_card_prelinks(card, i, dai_link) {
+ 		if (dai_link->codecs->name)
+@@ -140,7 +141,7 @@ static int mt2701_wm8960_machine_probe(struct platform_device *pdev)
+ 	ret = snd_soc_of_parse_audio_routing(card, "audio-routing");
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to parse audio-routing: %d\n", ret);
+-		return ret;
++		goto put_codec_node;
+ 	}
+ 
+ 	ret = devm_snd_soc_register_card(&pdev->dev, card);
+@@ -148,6 +149,10 @@ static int mt2701_wm8960_machine_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
+ 
++put_codec_node:
++	of_node_put(codec_node);
++put_platform_node:
++	of_node_put(platform_node);
+ 	return ret;
+ }
+ 
+diff --git a/sound/soc/mediatek/mt8173/mt8173-max98090.c b/sound/soc/mediatek/mt8173/mt8173-max98090.c
+index 4cb90da89262b..58778cd2e61b1 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-max98090.c
++++ b/sound/soc/mediatek/mt8173/mt8173-max98090.c
+@@ -167,7 +167,8 @@ static int mt8173_max98090_dev_probe(struct platform_device *pdev)
+ 	if (!codec_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_platform_node;
+ 	}
+ 	for_each_card_prelinks(card, i, dai_link) {
+ 		if (dai_link->codecs->name)
+@@ -179,6 +180,8 @@ static int mt8173_max98090_dev_probe(struct platform_device *pdev)
+ 	ret = devm_snd_soc_register_card(&pdev->dev, card);
+ 
+ 	of_node_put(codec_node);
++
++put_platform_node:
+ 	of_node_put(platform_node);
+ 	return ret;
+ }
+diff --git a/sound/soc/mxs/mxs-saif.c b/sound/soc/mxs/mxs-saif.c
+index 879c1221a809b..7afe1a1acc568 100644
+--- a/sound/soc/mxs/mxs-saif.c
++++ b/sound/soc/mxs/mxs-saif.c
+@@ -754,6 +754,7 @@ static int mxs_saif_probe(struct platform_device *pdev)
+ 		saif->master_id = saif->id;
+ 	} else {
+ 		ret = of_alias_get_id(master, "saif");
++		of_node_put(master);
+ 		if (ret < 0)
+ 			return ret;
+ 		else
+diff --git a/sound/soc/samsung/aries_wm8994.c b/sound/soc/samsung/aries_wm8994.c
+index 5265e546b124c..83acbe57b2489 100644
+--- a/sound/soc/samsung/aries_wm8994.c
++++ b/sound/soc/samsung/aries_wm8994.c
+@@ -585,10 +585,10 @@ static int aries_audio_probe(struct platform_device *pdev)
+ 
+ 	extcon_np = of_parse_phandle(np, "extcon", 0);
+ 	priv->usb_extcon = extcon_find_edev_by_node(extcon_np);
++	of_node_put(extcon_np);
+ 	if (IS_ERR(priv->usb_extcon))
+ 		return dev_err_probe(dev, PTR_ERR(priv->usb_extcon),
+ 				     "Failed to get extcon device");
+-	of_node_put(extcon_np);
+ 
+ 	priv->adc = devm_iio_channel_get(dev, "headset-detect");
+ 	if (IS_ERR(priv->adc))
+diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
+index 6a8fe0da7670b..af8ef2a27d341 100644
+--- a/sound/soc/sh/rcar/core.c
++++ b/sound/soc/sh/rcar/core.c
+@@ -1159,6 +1159,7 @@ void rsnd_parse_connect_common(struct rsnd_dai *rdai, char *name,
+ 		struct device_node *capture)
+ {
+ 	struct rsnd_priv *priv = rsnd_rdai_to_priv(rdai);
++	struct device *dev = rsnd_priv_to_dev(priv);
+ 	struct device_node *np;
+ 	int i;
+ 
+@@ -1169,7 +1170,11 @@ void rsnd_parse_connect_common(struct rsnd_dai *rdai, char *name,
+ 	for_each_child_of_node(node, np) {
+ 		struct rsnd_mod *mod;
+ 
+-		i = rsnd_node_fixed_index(np, name, i);
++		i = rsnd_node_fixed_index(dev, np, name, i);
++		if (i < 0) {
++			of_node_put(np);
++			break;
++		}
+ 
+ 		mod = mod_get(priv, i);
+ 
+@@ -1183,7 +1188,7 @@ void rsnd_parse_connect_common(struct rsnd_dai *rdai, char *name,
+ 	of_node_put(node);
+ }
+ 
+-int rsnd_node_fixed_index(struct device_node *node, char *name, int idx)
++int rsnd_node_fixed_index(struct device *dev, struct device_node *node, char *name, int idx)
+ {
+ 	char node_name[16];
+ 
+@@ -1210,6 +1215,8 @@ int rsnd_node_fixed_index(struct device_node *node, char *name, int idx)
+ 			return idx;
+ 	}
+ 
++	dev_err(dev, "strange node numbering (%s)",
++		of_node_full_name(node));
+ 	return -EINVAL;
+ }
+ 
+@@ -1221,10 +1228,8 @@ int rsnd_node_count(struct rsnd_priv *priv, struct device_node *node, char *name
+ 
+ 	i = 0;
+ 	for_each_child_of_node(node, np) {
+-		i = rsnd_node_fixed_index(np, name, i);
++		i = rsnd_node_fixed_index(dev, np, name, i);
+ 		if (i < 0) {
+-			dev_err(dev, "strange node numbering (%s)",
+-				of_node_full_name(node));
+ 			of_node_put(np);
+ 			return 0;
+ 		}
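
Every new early exit above pairs with an of_node_put(np): for_each_child_of_node() holds a reference on the child it hands to the loop body and only drops it when advancing to the next child, so breaking out (or returning) leaks the current child's refcount unless the caller releases it explicitly. The rule in isolation:

	for_each_child_of_node(parent, np) {
		ret = handle_child(np);		/* hypothetical */
		if (ret < 0) {
			of_node_put(np);	/* the loop won't advance to drop it */
			break;
		}
	}
	/* normal termination needs no put: the final advance dropped it */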
+diff --git a/sound/soc/sh/rcar/dma.c b/sound/soc/sh/rcar/dma.c
+index 03e0d4eca7815..463ab237d7bd4 100644
+--- a/sound/soc/sh/rcar/dma.c
++++ b/sound/soc/sh/rcar/dma.c
+@@ -240,12 +240,19 @@ static int rsnd_dmaen_start(struct rsnd_mod *mod,
+ struct dma_chan *rsnd_dma_request_channel(struct device_node *of_node, char *name,
+ 					  struct rsnd_mod *mod, char *x)
+ {
++	struct rsnd_priv *priv = rsnd_mod_to_priv(mod);
++	struct device *dev = rsnd_priv_to_dev(priv);
+ 	struct dma_chan *chan = NULL;
+ 	struct device_node *np;
+ 	int i = 0;
+ 
+ 	for_each_child_of_node(of_node, np) {
+-		i = rsnd_node_fixed_index(np, name, i);
++		i = rsnd_node_fixed_index(dev, np, name, i);
++		if (i < 0) {
++			chan = NULL;
++			of_node_put(np);
++			break;
++		}
+ 
+ 		if (i == rsnd_mod_id_raw(mod) && (!chan))
+ 			chan = of_dma_request_slave_channel(np, x);
+diff --git a/sound/soc/sh/rcar/rsnd.h b/sound/soc/sh/rcar/rsnd.h
+index 6580bab0e229b..d9cd190d7e198 100644
+--- a/sound/soc/sh/rcar/rsnd.h
++++ b/sound/soc/sh/rcar/rsnd.h
+@@ -460,7 +460,7 @@ void rsnd_parse_connect_common(struct rsnd_dai *rdai, char *name,
+ 		struct device_node *playback,
+ 		struct device_node *capture);
+ int rsnd_node_count(struct rsnd_priv *priv, struct device_node *node, char *name);
+-int rsnd_node_fixed_index(struct device_node *node, char *name, int idx);
++int rsnd_node_fixed_index(struct device *dev, struct device_node *node, char *name, int idx);
+ 
+ int rsnd_channel_normalization(int chan);
+ #define rsnd_runtime_channel_original(io) \
+diff --git a/sound/soc/sh/rcar/src.c b/sound/soc/sh/rcar/src.c
+index 42a100c6303d4..0ea84ae57c6ac 100644
+--- a/sound/soc/sh/rcar/src.c
++++ b/sound/soc/sh/rcar/src.c
+@@ -676,7 +676,12 @@ int rsnd_src_probe(struct rsnd_priv *priv)
+ 		if (!of_device_is_available(np))
+ 			goto skip;
+ 
+-		i = rsnd_node_fixed_index(np, SRC_NAME, i);
++		i = rsnd_node_fixed_index(dev, np, SRC_NAME, i);
++		if (i < 0) {
++			ret = -EINVAL;
++			of_node_put(np);
++			goto rsnd_src_probe_done;
++		}
+ 
+ 		src = rsnd_src_get(priv, i);
+ 
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index 87e606f688d3f..43c5e27dc5c86 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -1105,6 +1105,7 @@ void rsnd_parse_connect_ssi(struct rsnd_dai *rdai,
+ 			    struct device_node *capture)
+ {
+ 	struct rsnd_priv *priv = rsnd_rdai_to_priv(rdai);
++	struct device *dev = rsnd_priv_to_dev(priv);
+ 	struct device_node *node;
+ 	struct device_node *np;
+ 	int i;
+@@ -1117,7 +1118,11 @@ void rsnd_parse_connect_ssi(struct rsnd_dai *rdai,
+ 	for_each_child_of_node(node, np) {
+ 		struct rsnd_mod *mod;
+ 
+-		i = rsnd_node_fixed_index(np, SSI_NAME, i);
++		i = rsnd_node_fixed_index(dev, np, SSI_NAME, i);
++		if (i < 0) {
++			of_node_put(np);
++			break;
++		}
+ 
+ 		mod = rsnd_ssi_mod_get(priv, i);
+ 
+@@ -1182,7 +1187,12 @@ int rsnd_ssi_probe(struct rsnd_priv *priv)
+ 		if (!of_device_is_available(np))
+ 			goto skip;
+ 
+-		i = rsnd_node_fixed_index(np, SSI_NAME, i);
++		i = rsnd_node_fixed_index(dev, np, SSI_NAME, i);
++		if (i < 0) {
++			ret = -EINVAL;
++			of_node_put(np);
++			goto rsnd_ssi_probe_done;
++		}
+ 
+ 		ssi = rsnd_ssi_get(priv, i);
+ 
+diff --git a/sound/soc/sh/rcar/ssiu.c b/sound/soc/sh/rcar/ssiu.c
+index 0d8f97633dd26..4b8a63e336c77 100644
+--- a/sound/soc/sh/rcar/ssiu.c
++++ b/sound/soc/sh/rcar/ssiu.c
+@@ -102,6 +102,8 @@ bool rsnd_ssiu_busif_err_status_clear(struct rsnd_mod *mod)
+ 		shift  = 1;
+ 		offset = 1;
+ 		break;
++	default:
++		goto out;
+ 	}
+ 
+ 	for (i = 0; i < 4; i++) {
+@@ -120,7 +122,7 @@ bool rsnd_ssiu_busif_err_status_clear(struct rsnd_mod *mod)
+ 		}
+ 		rsnd_mod_write(mod, reg, val);
+ 	}
+-
++out:
+ 	return error;
+ }
+ 
+@@ -460,6 +462,7 @@ void rsnd_parse_connect_ssiu(struct rsnd_dai *rdai,
+ 			     struct device_node *capture)
+ {
+ 	struct rsnd_priv *priv = rsnd_rdai_to_priv(rdai);
++	struct device *dev = rsnd_priv_to_dev(priv);
+ 	struct device_node *node = rsnd_ssiu_of_node(priv);
+ 	struct rsnd_dai_stream *io_p = &rdai->playback;
+ 	struct rsnd_dai_stream *io_c = &rdai->capture;
+@@ -472,7 +475,11 @@ void rsnd_parse_connect_ssiu(struct rsnd_dai *rdai,
+ 		for_each_child_of_node(node, np) {
+ 			struct rsnd_mod *mod;
+ 
+-			i = rsnd_node_fixed_index(np, SSIU_NAME, i);
++			i = rsnd_node_fixed_index(dev, np, SSIU_NAME, i);
++			if (i < 0) {
++				of_node_put(np);
++				break;
++			}
+ 
+ 			mod = rsnd_ssiu_mod_get(priv, i);
+ 
+diff --git a/sound/soc/sh/rz-ssi.c b/sound/soc/sh/rz-ssi.c
+index e8edaed05d4cf..8a0c01ca06bee 100644
+--- a/sound/soc/sh/rz-ssi.c
++++ b/sound/soc/sh/rz-ssi.c
+@@ -978,22 +978,24 @@ static int rz_ssi_probe(struct platform_device *pdev)
+ 
+ 	/* Error Interrupt */
+ 	ssi->irq_int = platform_get_irq_byname(pdev, "int_req");
+-	if (ssi->irq_int < 0)
+-		return dev_err_probe(&pdev->dev, -ENODEV,
+-				     "Unable to get SSI int_req IRQ\n");
++	if (ssi->irq_int < 0) {
++		rz_ssi_release_dma_channels(ssi);
++		return ssi->irq_int;
++	}
+ 
+ 	ret = devm_request_irq(&pdev->dev, ssi->irq_int, &rz_ssi_interrupt,
+ 			       0, dev_name(&pdev->dev), ssi);
+-	if (ret < 0)
++	if (ret < 0) {
++		rz_ssi_release_dma_channels(ssi);
+ 		return dev_err_probe(&pdev->dev, ret,
+ 				     "irq request error (int_req)\n");
++	}
+ 
+ 	if (!rz_ssi_is_dma_enabled(ssi)) {
+ 		/* Tx and Rx interrupts (pio only) */
+ 		ssi->irq_tx = platform_get_irq_byname(pdev, "dma_tx");
+ 		if (ssi->irq_tx < 0)
+-			return dev_err_probe(&pdev->dev, -ENODEV,
+-					     "Unable to get SSI dma_tx IRQ\n");
++			return ssi->irq_tx;
+ 
+ 		ret = devm_request_irq(&pdev->dev, ssi->irq_tx,
+ 				       &rz_ssi_interrupt, 0,
+@@ -1004,8 +1006,7 @@ static int rz_ssi_probe(struct platform_device *pdev)
+ 
+ 		ssi->irq_rx = platform_get_irq_byname(pdev, "dma_rx");
+ 		if (ssi->irq_rx < 0)
+-			return dev_err_probe(&pdev->dev, -ENODEV,
+-					     "Unable to get SSI dma_rx IRQ\n");
++			return ssi->irq_rx;
+ 
+ 		ret = devm_request_irq(&pdev->dev, ssi->irq_rx,
+ 				       &rz_ssi_interrupt, 0,
+@@ -1016,13 +1017,16 @@ static int rz_ssi_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	ssi->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+-	if (IS_ERR(ssi->rstc))
++	if (IS_ERR(ssi->rstc)) {
++		rz_ssi_release_dma_channels(ssi);
+ 		return PTR_ERR(ssi->rstc);
++	}
+ 
+ 	reset_control_deassert(ssi->rstc);
+ 	pm_runtime_enable(&pdev->dev);
+ 	ret = pm_runtime_resume_and_get(&pdev->dev);
+ 	if (ret < 0) {
++		rz_ssi_release_dma_channels(ssi);
+ 		pm_runtime_disable(ssi->dev);
+ 		reset_control_assert(ssi->rstc);
+ 		return dev_err_probe(ssi->dev, ret, "pm_runtime_resume_and_get failed\n");
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index ca917a849c423..869c76506b669 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -3437,7 +3437,6 @@ int snd_soc_dapm_put_volsw(struct snd_kcontrol *kcontrol,
+ 			update.val = val;
+ 			card->update = &update;
+ 		}
+-		change |= reg_change;
+ 
+ 		ret = soc_dapm_mixer_update_power(card, kcontrol, connect,
+ 						  rconnect);
+@@ -3539,7 +3538,6 @@ int snd_soc_dapm_put_enum_double(struct snd_kcontrol *kcontrol,
+ 			update.val = val;
+ 			card->update = &update;
+ 		}
+-		change |= reg_change;
+ 
+ 		ret = soc_dapm_mux_update_power(card, kcontrol, item[0], e);
+ 
+diff --git a/sound/soc/sof/amd/pci-rn.c b/sound/soc/sof/amd/pci-rn.c
+index 392ffbdf64179..d809d151a38c4 100644
+--- a/sound/soc/sof/amd/pci-rn.c
++++ b/sound/soc/sof/amd/pci-rn.c
+@@ -93,6 +93,7 @@ static int acp_pci_rn_probe(struct pci_dev *pci, const struct pci_device_id *pci
+ 	res = devm_kzalloc(&pci->dev, sizeof(struct resource) * ARRAY_SIZE(renoir_res), GFP_KERNEL);
+ 	if (!res) {
+ 		sof_pci_remove(pci);
++		platform_device_unregister(dmic_dev);
+ 		return -ENOMEM;
+ 	}
+ 
+diff --git a/sound/soc/sof/ipc3-topology.c b/sound/soc/sof/ipc3-topology.c
+index 2f8450a8c0a1f..cdff48c4195f8 100644
+--- a/sound/soc/sof/ipc3-topology.c
++++ b/sound/soc/sof/ipc3-topology.c
+@@ -20,7 +20,8 @@
+ struct sof_widget_data {
+ 	int ctrl_type;
+ 	int ipc_cmd;
+-	struct sof_abi_hdr *pdata;
++	void *pdata;
++	size_t pdata_size;
+ 	struct snd_sof_control *control;
+ };
+ 
+@@ -784,16 +785,26 @@ static int sof_get_control_data(struct snd_soc_component *scomp,
+ 		}
+ 
+ 		cdata = wdata[i].control->ipc_control_data;
+-		wdata[i].pdata = cdata->data;
+-		if (!wdata[i].pdata)
+-			return -EINVAL;
+ 
+-		/* make sure data is valid - data can be updated at runtime */
+-		if (widget->dobj.widget.kcontrol_type[i] == SND_SOC_TPLG_TYPE_BYTES &&
+-		    wdata[i].pdata->magic != SOF_ABI_MAGIC)
+-			return -EINVAL;
++		if (widget->dobj.widget.kcontrol_type[i] == SND_SOC_TPLG_TYPE_BYTES) {
++			/* make sure data is valid - data can be updated at runtime */
++			if (cdata->data->magic != SOF_ABI_MAGIC)
++				return -EINVAL;
++
++			wdata[i].pdata = cdata->data->data;
++			wdata[i].pdata_size = cdata->data->size;
++		} else {
++			/* points to the control data union */
++			wdata[i].pdata = cdata->chanv;
++			/*
++			 * wdata[i].control->size is calculated with struct_size
++			 * and includes the size of struct sof_ipc_ctrl_data
++			 */
++			wdata[i].pdata_size = wdata[i].control->size -
++					      sizeof(struct sof_ipc_ctrl_data);
++		}
+ 
+-		*size += wdata[i].pdata->size;
++		*size += wdata[i].pdata_size;
+ 
+ 		/* get data type */
+ 		switch (cdata->cmd) {
+@@ -876,10 +887,12 @@ static int sof_process_load(struct snd_soc_component *scomp,
+ 	 */
+ 	if (ipc_data_size) {
+ 		for (i = 0; i < widget->num_kcontrols; i++) {
+-			memcpy(&process->data[offset],
+-			       wdata[i].pdata->data,
+-			       wdata[i].pdata->size);
+-			offset += wdata[i].pdata->size;
++			if (!wdata[i].pdata_size)
++				continue;
++
++			memcpy(&process->data[offset], wdata[i].pdata,
++			       wdata[i].pdata_size);
++			offset += wdata[i].pdata_size;
+ 		}
+ 	}
+ 
+@@ -1592,6 +1605,7 @@ static int sof_ipc3_control_load_bytes(struct snd_sof_dev *sdev, struct snd_sof_
+ 	if (scontrol->priv_size > 0) {
+ 		memcpy(cdata->data, scontrol->priv, scontrol->priv_size);
+ 		kfree(scontrol->priv);
++		scontrol->priv = NULL;
+ 
+ 		if (cdata->data->magic != SOF_ABI_MAGIC) {
+ 			dev_err(sdev->dev, "Wrong ABI magic 0x%08x.\n", cdata->data->magic);
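
The pdata_size arithmetic above leans on how bytes-vs-value controls were allocated: per the in-line comment, control->size came from struct_size(), which computes a header-plus-flexible-array footprint with overflow checking, so subtracting the header recovers just the chanv[] payload. A sketch of that pairing, assuming the allocation side looks like this:

	#include <linux/overflow.h>

	/* allocation: header plus num_channels flexible-array entries */
	scontrol->size = struct_size(cdata, chanv, num_channels);
	cdata = kzalloc(scontrol->size, GFP_KERNEL);

	/* consumer (as in sof_get_control_data() above): peel the header
	 * back off to get the channel payload length */
	pdata_size = scontrol->size - sizeof(struct sof_ipc_ctrl_data);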
+diff --git a/sound/soc/ti/j721e-evm.c b/sound/soc/ti/j721e-evm.c
+index 4077e15ec48b7..6a969874c9270 100644
+--- a/sound/soc/ti/j721e-evm.c
++++ b/sound/soc/ti/j721e-evm.c
+@@ -630,17 +630,18 @@ static int j721e_soc_probe_cpb(struct j721e_priv *priv, int *link_idx,
+ 	codec_node = of_parse_phandle(node, "ti,cpb-codec", 0);
+ 	if (!codec_node) {
+ 		dev_err(priv->dev, "CPB codec node is not provided\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_dai_node;
+ 	}
+ 
+ 	domain = &priv->audio_domains[J721E_AUDIO_DOMAIN_CPB];
+ 	ret = j721e_get_clocks(priv->dev, &domain->codec, "cpb-codec-scki");
+ 	if (ret)
+-		return ret;
++		goto put_codec_node;
+ 
+ 	ret = j721e_get_clocks(priv->dev, &domain->mcasp, "cpb-mcasp-auxclk");
+ 	if (ret)
+-		return ret;
++		goto put_codec_node;
+ 
+ 	/*
+ 	 * Common Processor Board, two links
+@@ -650,8 +651,10 @@ static int j721e_soc_probe_cpb(struct j721e_priv *priv, int *link_idx,
+ 	comp_count = 6;
+ 	compnent = devm_kzalloc(priv->dev, comp_count * sizeof(*compnent),
+ 				GFP_KERNEL);
+-	if (!compnent)
+-		return -ENOMEM;
++	if (!compnent) {
++		ret = -ENOMEM;
++		goto put_codec_node;
++	}
+ 
+ 	comp_idx = 0;
+ 	priv->dai_links[*link_idx].cpus = &compnent[comp_idx++];
+@@ -702,6 +705,12 @@ static int j721e_soc_probe_cpb(struct j721e_priv *priv, int *link_idx,
+ 	(*conf_idx)++;
+ 
+ 	return 0;
++
++put_codec_node:
++	of_node_put(codec_node);
++put_dai_node:
++	of_node_put(dai_node);
++	return ret;
+ }
+ 
+ static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx,
+@@ -726,23 +735,25 @@ static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx,
+ 	codeca_node = of_parse_phandle(node, "ti,ivi-codec-a", 0);
+ 	if (!codeca_node) {
+ 		dev_err(priv->dev, "IVI codec-a node is not provided\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_dai_node;
+ 	}
+ 
+ 	codecb_node = of_parse_phandle(node, "ti,ivi-codec-b", 0);
+ 	if (!codecb_node) {
+ 		dev_warn(priv->dev, "IVI codec-b node is not provided\n");
+-		return 0;
++		ret = 0;
++		goto put_codeca_node;
+ 	}
+ 
+ 	domain = &priv->audio_domains[J721E_AUDIO_DOMAIN_IVI];
+ 	ret = j721e_get_clocks(priv->dev, &domain->codec, "ivi-codec-scki");
+ 	if (ret)
+-		return ret;
++		goto put_codecb_node;
+ 
+ 	ret = j721e_get_clocks(priv->dev, &domain->mcasp, "ivi-mcasp-auxclk");
+ 	if (ret)
+-		return ret;
++		goto put_codecb_node;
+ 
+ 	/*
+ 	 * IVI extension, two links
+@@ -754,8 +765,10 @@ static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx,
+ 	comp_count = 8;
+ 	compnent = devm_kzalloc(priv->dev, comp_count * sizeof(*compnent),
+ 				GFP_KERNEL);
+-	if (!compnent)
+-		return -ENOMEM;
++	if (!compnent) {
++		ret = -ENOMEM;
++		goto put_codecb_node;
++	}
+ 
+ 	comp_idx = 0;
+ 	priv->dai_links[*link_idx].cpus = &compnent[comp_idx++];
+@@ -816,6 +829,15 @@ static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx,
+ 	(*conf_idx)++;
+ 
+ 	return 0;
++
++
++put_codecb_node:
++	of_node_put(codecb_node);
++put_codeca_node:
++	of_node_put(codeca_node);
++put_dai_node:
++	of_node_put(dai_node);
++	return ret;
+ }
+ 
+ static int j721e_soc_probe(struct platform_device *pdev)
+diff --git a/sound/usb/implicit.c b/sound/usb/implicit.c
+index 2d444ec742029..e1bf1b5da423c 100644
+--- a/sound/usb/implicit.c
++++ b/sound/usb/implicit.c
+@@ -45,11 +45,6 @@ struct snd_usb_implicit_fb_match {
+ 
+ /* Implicit feedback quirk table for playback */
+ static const struct snd_usb_implicit_fb_match playback_implicit_fb_quirks[] = {
+-	/* Generic matching */
+-	IMPLICIT_FB_GENERIC_DEV(0x0499, 0x1509), /* Steinberg UR22 */
+-	IMPLICIT_FB_GENERIC_DEV(0x0763, 0x2030), /* M-Audio Fast Track C400 */
+-	IMPLICIT_FB_GENERIC_DEV(0x0763, 0x2031), /* M-Audio Fast Track C600 */
+-
+ 	/* Fixed EP */
+ 	/* FIXME: check the availability of generic matching */
+ 	IMPLICIT_FB_FIXED_DEV(0x0763, 0x2080, 0x81, 2), /* M-Audio FastTrack Ultra */
+@@ -350,7 +345,8 @@ static int audioformat_implicit_fb_quirk(struct snd_usb_audio *chip,
+ 	}
+ 
+ 	/* Try the generic implicit fb if available */
+-	if (chip->generic_implicit_fb)
++	if (chip->generic_implicit_fb ||
++	    (chip->quirk_flags & QUIRK_FLAG_GENERIC_IMPLICIT_FB))
+ 		return add_generic_implicit_fb(chip, fmt, alts);
+ 
+ 	/* No quirk */
+@@ -387,6 +383,8 @@ int snd_usb_parse_implicit_fb_quirk(struct snd_usb_audio *chip,
+ 				    struct audioformat *fmt,
+ 				    struct usb_host_interface *alts)
+ {
++	if (chip->quirk_flags & QUIRK_FLAG_SKIP_IMPLICIT_FB)
++		return 0;
+ 	if (fmt->endpoint & USB_DIR_IN)
+ 		return audioformat_capture_quirk(chip, fmt, alts);
+ 	else
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index 7c6ca2b433a53..344fbeadf161b 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1145,6 +1145,9 @@ static int snd_usbmidi_output_open(struct snd_rawmidi_substream *substream)
+ 
+ static int snd_usbmidi_output_close(struct snd_rawmidi_substream *substream)
+ {
++	struct usbmidi_out_port *port = substream->runtime->private_data;
++
++	cancel_work_sync(&port->ep->work);
+ 	return substream_open(substream, 0, 0);
+ }
+ 
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index fbbe59054c3fb..e8468f9b007d1 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1793,6 +1793,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_IGNORE_CTL_ERROR),
+ 	DEVICE_FLG(0x046d, 0x09a4, /* Logitech QuickCam E 3500 */
+ 		   QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_IGNORE_CTL_ERROR),
++	DEVICE_FLG(0x0499, 0x1509, /* Steinberg UR22 */
++		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ 	DEVICE_FLG(0x04d8, 0xfeea, /* Benchmark DAC1 Pre */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x04e8, 0xa051, /* Samsung USBC Headset (AKG) */
+@@ -1826,6 +1828,10 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
+ 	DEVICE_FLG(0x074d, 0x3553, /* Outlaw RR2150 (Micronas UAC3553B) */
+ 		   QUIRK_FLAG_GET_SAMPLE_RATE),
++	DEVICE_FLG(0x0763, 0x2030, /* M-Audio Fast Track C400 */
++		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
++	DEVICE_FLG(0x0763, 0x2031, /* M-Audio Fast Track C600 */
++		   QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ 	DEVICE_FLG(0x08bb, 0x2702, /* LineX FM Transmitter */
+ 		   QUIRK_FLAG_IGNORE_CTL_ERROR),
+ 	DEVICE_FLG(0x0951, 0x16ad, /* Kingston HyperX */
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index b8359a0aa008a..044cd7ab27cbb 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -164,6 +164,10 @@ extern bool snd_usb_skip_validation;
+  *  Support generic DSD raw U32_BE format
+  * QUIRK_FLAG_SET_IFACE_FIRST:
+  *  Set up the interface at first like UAC1
++ * QUIRK_FLAG_GENERIC_IMPLICIT_FB:
++ *  Apply the generic implicit feedback sync mode (same as implicit_fb=1 option)
++ * QUIRK_FLAG_SKIP_IMPLICIT_FB:
++ *  Don't apply implicit feedback sync mode
+  */
+ 
+ #define QUIRK_FLAG_GET_SAMPLE_RATE	(1U << 0)
+@@ -183,5 +187,7 @@ extern bool snd_usb_skip_validation;
+ #define QUIRK_FLAG_IGNORE_CTL_ERROR	(1U << 14)
+ #define QUIRK_FLAG_DSD_RAW		(1U << 15)
+ #define QUIRK_FLAG_SET_IFACE_FIRST	(1U << 16)
++#define QUIRK_FLAG_GENERIC_IMPLICIT_FB	(1U << 17)
++#define QUIRK_FLAG_SKIP_IMPLICIT_FB	(1U << 18)
+ 
+ #endif /* __USBAUDIO_H */
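
Converting the generic-match list into quirk flags also makes the behavior reachable from user space for hardware that is not listed yet: the new comment equates QUIRK_FLAG_GENERIC_IMPLICIT_FB with the existing implicit_fb=1 module option, and snd-usb-audio's vid/pid-matched quirk_flags option can set the bit directly. A plausible modprobe.d override; treat the exact option spelling as an assumption to verify against the running module's parameters:

	# /etc/modprobe.d/usb-audio-quirk.conf
	# force generic implicit feedback for one device, e.g. 0499:1509
	options snd-usb-audio vid=0x0499 pid=0x1509 quirk_flags=0x20000
	# 0x20000 == (1U << 17) == QUIRK_FLAG_GENERIC_IMPLICIT_FB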
+diff --git a/tools/build/feature/test-libbpf-btf__load_from_kernel_by_id.c b/tools/build/feature/test-libbpf-btf__load_from_kernel_by_id.c
+index f7c084428735a..a17647f7d5a43 100644
+--- a/tools/build/feature/test-libbpf-btf__load_from_kernel_by_id.c
++++ b/tools/build/feature/test-libbpf-btf__load_from_kernel_by_id.c
+@@ -1,7 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0
+-#include <bpf/libbpf.h>
++#include <bpf/btf.h>
+ 
+ int main(void)
+ {
+-	return btf__load_from_kernel_by_id(20151128, NULL);
++	btf__load_from_kernel_by_id(20151128);
++	return 0;
+ }
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 809fe209cdcc0..881ea905ca815 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -4587,7 +4587,7 @@ static int probe_kern_probe_read_kernel(void)
+ 	};
+ 	int fd, insn_cnt = ARRAY_SIZE(insns);
+ 
+-	fd = bpf_prog_load(BPF_PROG_TYPE_KPROBE, NULL, "GPL", insns, insn_cnt, NULL);
++	fd = bpf_prog_load(BPF_PROG_TYPE_TRACEPOINT, NULL, "GPL", insns, insn_cnt, NULL);
+ 	return probe_fd(fd);
+ }
+ 
+@@ -5646,9 +5646,10 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
+ 		 */
+ 		prog = NULL;
+ 		for (i = 0; i < obj->nr_programs; i++) {
+-			prog = &obj->programs[i];
+-			if (strcmp(prog->sec_name, sec_name) == 0)
++			if (strcmp(obj->programs[i].sec_name, sec_name) == 0) {
++				prog = &obj->programs[i];
+ 				break;
++			}
+ 		}
+ 		if (!prog) {
+ 			pr_warn("sec '%s': failed to find a BPF program\n", sec_name);
+@@ -5665,10 +5666,17 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
+ 			insn_idx = rec->insn_off / BPF_INSN_SZ;
+ 			prog = find_prog_by_sec_insn(obj, sec_idx, insn_idx);
+ 			if (!prog) {
+-				pr_warn("sec '%s': failed to find program at insn #%d for CO-RE offset relocation #%d\n",
+-					sec_name, insn_idx, i);
+-				err = -EINVAL;
+-				goto out;
++				/* When __weak subprog is "overridden" by another instance
++				 * of the subprog from a different object file, linker still
++				 * appends all the .BTF.ext info that used to belong to that
++				 * eliminated subprogram.
++				 * This is similar to what x86-64 linker does for relocations.
++				 * So just ignore such relocations just like we ignore
++				 * subprog instructions when discovering subprograms.
++				 */
++				pr_debug("sec '%s': skipping CO-RE relocation #%d for insn #%d belonging to eliminated weak subprogram\n",
++					 sec_name, i, insn_idx);
++				continue;
+ 			}
+ 			/* no need to apply CO-RE relocation if the program is
+ 			 * not going to be loaded
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index ca5b746030089..8a0971a620f09 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -5,6 +5,7 @@
+ 
+ #include <string.h>
+ #include <stdlib.h>
++#include <inttypes.h>
+ #include <sys/mman.h>
+ 
+ #include <arch/elf.h>
+@@ -560,12 +561,12 @@ static int add_dead_ends(struct objtool_file *file)
+ 		else if (reloc->addend == reloc->sym->sec->sh.sh_size) {
+ 			insn = find_last_insn(file, reloc->sym->sec);
+ 			if (!insn) {
+-				WARN("can't find unreachable insn at %s+0x%lx",
++				WARN("can't find unreachable insn at %s+0x%" PRIx64,
+ 				     reloc->sym->sec->name, reloc->addend);
+ 				return -1;
+ 			}
+ 		} else {
+-			WARN("can't find unreachable insn at %s+0x%lx",
++			WARN("can't find unreachable insn at %s+0x%" PRIx64,
+ 			     reloc->sym->sec->name, reloc->addend);
+ 			return -1;
+ 		}
+@@ -595,12 +596,12 @@ reachable:
+ 		else if (reloc->addend == reloc->sym->sec->sh.sh_size) {
+ 			insn = find_last_insn(file, reloc->sym->sec);
+ 			if (!insn) {
+-				WARN("can't find reachable insn at %s+0x%lx",
++				WARN("can't find reachable insn at %s+0x%" PRIx64,
+ 				     reloc->sym->sec->name, reloc->addend);
+ 				return -1;
+ 			}
+ 		} else {
+-			WARN("can't find reachable insn at %s+0x%lx",
++			WARN("can't find reachable insn at %s+0x%" PRIx64,
+ 			     reloc->sym->sec->name, reloc->addend);
+ 			return -1;
+ 		}
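
The "%lx" to "%" PRIx64 conversion follows from widening the reloc addend to s64 (see the elf.h hunk below): long is 32 bits on 32-bit hosts, so "%lx" would read the wrong width off the stack there. The portable spelling, standalone:

	#include <inttypes.h>
	#include <stdio.h>

	int main(void)
	{
		int64_t addend = 0x1122334455667788;

		/* "%lx" is only correct where long is 64-bit; PRIx64
		 * expands to the right conversion on every host */
		printf("addend = 0x%" PRIx64 "\n", (uint64_t)addend);
		return 0;
	}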
+diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
+index ebf2ba5755c1e..e84cf15b8c11b 100644
+--- a/tools/objtool/elf.c
++++ b/tools/objtool/elf.c
+@@ -374,6 +374,9 @@ static void elf_add_symbol(struct elf *elf, struct symbol *sym)
+ 	struct list_head *entry;
+ 	struct rb_node *pnode;
+ 
++	INIT_LIST_HEAD(&sym->pv_target);
++	sym->alias = sym;
++
+ 	sym->type = GELF_ST_TYPE(sym->sym.st_info);
+ 	sym->bind = GELF_ST_BIND(sym->sym.st_info);
+ 
+@@ -435,8 +438,6 @@ static int read_symbols(struct elf *elf)
+ 			return -1;
+ 		}
+ 		memset(sym, 0, sizeof(*sym));
+-		INIT_LIST_HEAD(&sym->pv_target);
+-		sym->alias = sym;
+ 
+ 		sym->idx = i;
+ 
+@@ -546,7 +547,7 @@ static struct section *elf_create_reloc_section(struct elf *elf,
+ 						int reltype);
+ 
+ int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset,
+-		  unsigned int type, struct symbol *sym, long addend)
++		  unsigned int type, struct symbol *sym, s64 addend)
+ {
+ 	struct reloc *reloc;
+ 
+@@ -600,24 +601,21 @@ static void elf_dirty_reloc_sym(struct elf *elf, struct symbol *sym)
+ }
+ 
+ /*
+- * Move the first global symbol, as per sh_info, into a new, higher symbol
+- * index. This fees up the shndx for a new local symbol.
++ * The libelf API is terrible; gelf_update_sym*() takes a data block relative
++ * index value, *NOT* the symbol index. As such, iterate the data blocks and
++ * adjust index until it fits.
++ *
++ * If no data block is found, allow adding a new data block provided the index
++ * is only one past the end.
+  */
+-static int elf_move_global_symbol(struct elf *elf, struct section *symtab,
+-				  struct section *symtab_shndx)
++static int elf_update_symbol(struct elf *elf, struct section *symtab,
++			     struct section *symtab_shndx, struct symbol *sym)
+ {
+-	Elf_Data *data, *shndx_data = NULL;
+-	Elf32_Word first_non_local;
+-	struct symbol *sym;
+-	Elf_Scn *s;
+-
+-	first_non_local = symtab->sh.sh_info;
+-
+-	sym = find_symbol_by_index(elf, first_non_local);
+-	if (!sym) {
+-		WARN("no non-local symbols !?");
+-		return first_non_local;
+-	}
++	Elf32_Word shndx = sym->sec ? sym->sec->idx : SHN_UNDEF;
++	Elf_Data *symtab_data = NULL, *shndx_data = NULL;
++	Elf64_Xword entsize = symtab->sh.sh_entsize;
++	int max_idx, idx = sym->idx;
++	Elf_Scn *s, *t = NULL;
+ 
+ 	s = elf_getscn(elf->elf, symtab->idx);
+ 	if (!s) {
+@@ -625,79 +623,124 @@ static int elf_move_global_symbol(struct elf *elf, struct section *symtab,
+ 		return -1;
+ 	}
+ 
+-	data = elf_newdata(s);
+-	if (!data) {
+-		WARN_ELF("elf_newdata");
+-		return -1;
++	if (symtab_shndx) {
++		t = elf_getscn(elf->elf, symtab_shndx->idx);
++		if (!t) {
++			WARN_ELF("elf_getscn");
++			return -1;
++		}
+ 	}
+ 
+-	data->d_buf = &sym->sym;
+-	data->d_size = sizeof(sym->sym);
+-	data->d_align = 1;
+-	data->d_type = ELF_T_SYM;
++	for (;;) {
++		/* get next data descriptor for the relevant sections */
++		symtab_data = elf_getdata(s, symtab_data);
++		if (t)
++			shndx_data = elf_getdata(t, shndx_data);
+ 
+-	sym->idx = symtab->sh.sh_size / sizeof(sym->sym);
+-	elf_dirty_reloc_sym(elf, sym);
++		/* end-of-list */
++		if (!symtab_data) {
++			void *buf;
+ 
+-	symtab->sh.sh_info += 1;
+-	symtab->sh.sh_size += data->d_size;
+-	symtab->changed = true;
++			if (idx) {
++				/* we don't do holes in symbol tables */
++				WARN("index out of range");
++				return -1;
++			}
+ 
+-	if (symtab_shndx) {
+-		s = elf_getscn(elf->elf, symtab_shndx->idx);
+-		if (!s) {
+-			WARN_ELF("elf_getscn");
++			/* if @idx == 0, it's the next contiguous entry, create it */
++			symtab_data = elf_newdata(s);
++			if (t)
++				shndx_data = elf_newdata(t);
++
++			buf = calloc(1, entsize);
++			if (!buf) {
++				WARN("malloc");
++				return -1;
++			}
++
++			symtab_data->d_buf = buf;
++			symtab_data->d_size = entsize;
++			symtab_data->d_align = 1;
++			symtab_data->d_type = ELF_T_SYM;
++
++			symtab->sh.sh_size += entsize;
++			symtab->changed = true;
++
++			if (t) {
++				shndx_data->d_buf = &sym->sec->idx;
++				shndx_data->d_size = sizeof(Elf32_Word);
++				shndx_data->d_align = sizeof(Elf32_Word);
++				shndx_data->d_type = ELF_T_WORD;
++
++				symtab_shndx->sh.sh_size += sizeof(Elf32_Word);
++				symtab_shndx->changed = true;
++			}
++
++			break;
++		}
++
++		/* empty blocks should not happen */
++		if (!symtab_data->d_size) {
++			WARN("zero size data");
+ 			return -1;
+ 		}
+ 
+-		shndx_data = elf_newdata(s);
++		/* is this the right block? */
++		max_idx = symtab_data->d_size / entsize;
++		if (idx < max_idx)
++			break;
++
++		/* adjust index and try again */
++		idx -= max_idx;
++	}
++
++	/* something went side-ways */
++	if (idx < 0) {
++		WARN("negative index");
++		return -1;
++	}
++
++	/* setup extended section index magic and write the symbol */
++	if (shndx >= SHN_UNDEF && shndx < SHN_LORESERVE) {
++		sym->sym.st_shndx = shndx;
++		if (!shndx_data)
++			shndx = 0;
++	} else {
++		sym->sym.st_shndx = SHN_XINDEX;
+ 		if (!shndx_data) {
+-			WARN_ELF("elf_newshndx_data");
++			WARN("no .symtab_shndx");
+ 			return -1;
+ 		}
++	}
+ 
+-		shndx_data->d_buf = &sym->sec->idx;
+-		shndx_data->d_size = sizeof(Elf32_Word);
+-		shndx_data->d_align = 4;
+-		shndx_data->d_type = ELF_T_WORD;
+-
+-		symtab_shndx->sh.sh_size += 4;
+-		symtab_shndx->changed = true;
++	if (!gelf_update_symshndx(symtab_data, shndx_data, idx, &sym->sym, shndx)) {
++		WARN_ELF("gelf_update_symshndx");
++		return -1;
+ 	}
+ 
+-	return first_non_local;
++	return 0;
+ }
+ 
+ static struct symbol *
+ elf_create_section_symbol(struct elf *elf, struct section *sec)
+ {
+ 	struct section *symtab, *symtab_shndx;
+-	Elf_Data *shndx_data = NULL;
+-	struct symbol *sym;
+-	Elf32_Word shndx;
++	Elf32_Word first_non_local, new_idx;
++	struct symbol *sym, *old;
+ 
+ 	symtab = find_section_by_name(elf, ".symtab");
+ 	if (symtab) {
+ 		symtab_shndx = find_section_by_name(elf, ".symtab_shndx");
+-		if (symtab_shndx)
+-			shndx_data = symtab_shndx->data;
+ 	} else {
+ 		WARN("no .symtab");
+ 		return NULL;
+ 	}
+ 
+-	sym = malloc(sizeof(*sym));
++	sym = calloc(1, sizeof(*sym));
+ 	if (!sym) {
+ 		perror("malloc");
+ 		return NULL;
+ 	}
+-	memset(sym, 0, sizeof(*sym));
+-
+-	sym->idx = elf_move_global_symbol(elf, symtab, symtab_shndx);
+-	if (sym->idx < 0) {
+-		WARN("elf_move_global_symbol");
+-		return NULL;
+-	}
+ 
+ 	sym->name = sec->name;
+ 	sym->sec = sec;
+@@ -707,24 +750,41 @@ elf_create_section_symbol(struct elf *elf, struct section *sec)
+ 	// st_other 0
+ 	// st_value 0
+ 	// st_size 0
+-	shndx = sec->idx;
+-	if (shndx >= SHN_UNDEF && shndx < SHN_LORESERVE) {
+-		sym->sym.st_shndx = shndx;
+-		if (!shndx_data)
+-			shndx = 0;
+-	} else {
+-		sym->sym.st_shndx = SHN_XINDEX;
+-		if (!shndx_data) {
+-			WARN("no .symtab_shndx");
++
++	/*
++	 * Move the first global symbol, as per sh_info, into a new, higher
++	 * symbol index. This frees up a spot for a new local symbol.
++	 */
++	first_non_local = symtab->sh.sh_info;
++	new_idx = symtab->sh.sh_size / symtab->sh.sh_entsize;
++	old = find_symbol_by_index(elf, first_non_local);
++	if (old) {
++		old->idx = new_idx;
++
++		hlist_del(&old->hash);
++		elf_hash_add(symbol, &old->hash, old->idx);
++
++		elf_dirty_reloc_sym(elf, old);
++
++		if (elf_update_symbol(elf, symtab, symtab_shndx, old)) {
++			WARN("elf_update_symbol move");
+ 			return NULL;
+ 		}
++
++		new_idx = first_non_local;
+ 	}
+ 
+-	if (!gelf_update_symshndx(symtab->data, shndx_data, sym->idx, &sym->sym, shndx)) {
+-		WARN_ELF("gelf_update_symshndx");
++	sym->idx = new_idx;
++	if (elf_update_symbol(elf, symtab, symtab_shndx, sym)) {
++		WARN("elf_update_symbol");
+ 		return NULL;
+ 	}
+ 
++	/*
++	 * Either way, we added a LOCAL symbol.
++	 */
++	symtab->sh.sh_info += 1;
++
+ 	elf_add_symbol(elf, sym);
+ 
+ 	return sym;
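
For context on the hunk above: ELF requires all STB_LOCAL symbols in .symtab to
precede the global ones, and sh_info holds the index of the first non-local
symbol. A minimal standalone sketch of the move-and-insert idea, with a toy
array standing in for libelf data (all names here are illustrative, not from
the patch):

	#include <stdio.h>

	#define MAX_SYMS 16

	struct fake_sym { const char *name; int local; };

	static struct fake_sym symtab[MAX_SYMS] = {
		{ "", 1 }, { ".text", 1 },      /* locals  */
		{ "main", 0 }, { "puts", 0 },   /* globals */
	};
	static int nr_syms = 4;
	static int sh_info = 2;                 /* index of first non-local */

	static void insert_local(struct fake_sym sym)
	{
		/* Move the first global to a fresh, higher index... */
		symtab[nr_syms++] = symtab[sh_info];
		/* ...which frees its slot for the new local symbol. */
		symtab[sh_info++] = sym;
	}

	int main(void)
	{
		insert_local((struct fake_sym){ ".text.new", 1 });
		for (int i = 0; i < nr_syms; i++)
			printf("%2d %-6s %s\n", i,
			       symtab[i].local ? "LOCAL" : "GLOBAL",
			       symtab[i].name);
		return 0;
	}

The real code additionally rewrites relocations that referenced the moved
symbol by index (elf_dirty_reloc_sym) and extends .symtab_shndx for section
indices that do not fit in st_shndx.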
+diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
+index 9b36802ed86f6..82e57eb4b4c5d 100644
+--- a/tools/objtool/include/objtool/elf.h
++++ b/tools/objtool/include/objtool/elf.h
+@@ -73,7 +73,7 @@ struct reloc {
+ 	struct symbol *sym;
+ 	unsigned long offset;
+ 	unsigned int type;
+-	long addend;
++	s64 addend;
+ 	int idx;
+ 	bool jump_table_start;
+ };
+@@ -135,7 +135,7 @@ struct elf *elf_open_read(const char *name, int flags);
+ struct section *elf_create_section(struct elf *elf, const char *name, unsigned int sh_flags, size_t entsize, int nr);
+ 
+ int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset,
+-		  unsigned int type, struct symbol *sym, long addend);
++		  unsigned int type, struct symbol *sym, s64 addend);
+ int elf_add_reloc_to_insn(struct elf *elf, struct section *sec,
+ 			  unsigned long offset, unsigned int type,
+ 			  struct section *insn_sec, unsigned long insn_off);
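
The long-to-s64 widening above matters for 32-bit builds of objtool, where
long is only 32 bits and a 64-bit relocation addend read from the file would
be silently truncated. A small sketch of the failure mode (on an LP64 host
both lines print the same value; the difference appears only where long is
32-bit):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		int64_t file_addend = INT64_C(0x123456789); /* needs > 32 bits */
		long narrow = (long)file_addend;  /* truncates when long is 32-bit */
		int64_t wide = file_addend;       /* s64: always 64-bit */

		printf("long: %ld\ns64:  %lld\n", narrow, (long long)wide);
		return 0;
	}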
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 1bd64e7404b9f..c38423807d010 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -239,18 +239,33 @@ ifdef PARSER_DEBUG
+ endif
+ 
+ # Try different combinations to accommodate systems that only have
+-# python[2][-config] in weird combinations but always preferring
+-# python2 and python2-config as per pep-0394. If python2 or python
+-# aren't found, then python3 is used.
+-PYTHON_AUTO := python
+-PYTHON_AUTO := $(if $(call get-executable,python3),python3,$(PYTHON_AUTO))
+-PYTHON_AUTO := $(if $(call get-executable,python),python,$(PYTHON_AUTO))
+-PYTHON_AUTO := $(if $(call get-executable,python2),python2,$(PYTHON_AUTO))
+-override PYTHON := $(call get-executable-or-default,PYTHON,$(PYTHON_AUTO))
+-PYTHON_AUTO_CONFIG := \
+-  $(if $(call get-executable,$(PYTHON)-config),$(PYTHON)-config,python-config)
+-override PYTHON_CONFIG := \
+-  $(call get-executable-or-default,PYTHON_CONFIG,$(PYTHON_AUTO_CONFIG))
++# python[2][3]-config in weird combinations in the following order of
++# priority from lowest to highest:
++#   * python3-config
++#   * python-config
++#   * python2-config as per pep-0394.
++#   * $(PYTHON)-config (If PYTHON is user supplied but PYTHON_CONFIG isn't)
++#
++PYTHON_AUTO := python-config
++PYTHON_AUTO := $(if $(call get-executable,python3-config),python3-config,$(PYTHON_AUTO))
++PYTHON_AUTO := $(if $(call get-executable,python-config),python-config,$(PYTHON_AUTO))
++PYTHON_AUTO := $(if $(call get-executable,python2-config),python2-config,$(PYTHON_AUTO))
++
++# If PYTHON is defined but PYTHON_CONFIG isn't, then take $(PYTHON)-config as if it were the
++# user-supplied value for PYTHON_CONFIG. Because it's "user supplied", error out if it doesn't exist.
++ifdef PYTHON
++  ifndef PYTHON_CONFIG
++    PYTHON_CONFIG_AUTO := $(call get-executable,$(PYTHON)-config)
++    PYTHON_CONFIG := $(if $(PYTHON_CONFIG_AUTO),$(PYTHON_CONFIG_AUTO),\
++                          $(call $(error $(PYTHON)-config not found)))
++  endif
++endif
++
++# Select either auto detected python and python-config or use user supplied values if they are
++# defined. get-executable-or-default fails with an error if the first argument is supplied but
++# doesn't exist.
++override PYTHON_CONFIG := $(call get-executable-or-default,PYTHON_CONFIG,$(PYTHON_AUTO))
++override PYTHON := $(call get-executable-or-default,PYTHON,$(subst -config,,$(PYTHON_AUTO)))
+ 
+ grep-libs  = $(filter -l%,$(1))
+ strip-libs  = $(filter-out -l%,$(1))
+diff --git a/tools/perf/arch/x86/util/evlist.c b/tools/perf/arch/x86/util/evlist.c
+index cfc208d71f00a..75564a7df15be 100644
+--- a/tools/perf/arch/x86/util/evlist.c
++++ b/tools/perf/arch/x86/util/evlist.c
+@@ -36,7 +36,7 @@ struct evsel *arch_evlist__leader(struct list_head *list)
+ 				if (slots == first)
+ 					return first;
+ 			}
+-			if (!strncasecmp(evsel->name, "topdown", 7))
++			if (strcasestr(evsel->name, "topdown"))
+ 				has_topdown = true;
+ 			if (slots && has_topdown)
+ 				return slots;
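
The switch from strncasecmp() to strcasestr() makes the check match "topdown"
anywhere in the event name, for instance a PMU-qualified
"cpu/topdown-retiring/", not only as a prefix. A quick illustration
(strcasestr() is a GNU extension, hence _GNU_SOURCE; the event name is just an
example):

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		const char *name = "cpu/topdown-retiring/";

		/* Prefix check fails: the name does not start with "topdown". */
		printf("strncasecmp: %s\n",
		       !strncasecmp(name, "topdown", 7) ? "match" : "no match");

		/* Substring check matches anywhere in the string. */
		printf("strcasestr:  %s\n",
		       strcasestr(name, "topdown") ? "match" : "no match");
		return 0;
	}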
+diff --git a/tools/perf/arch/x86/util/evsel.c b/tools/perf/arch/x86/util/evsel.c
+index ac2899a25b7a3..0c9e56ab07b5b 100644
+--- a/tools/perf/arch/x86/util/evsel.c
++++ b/tools/perf/arch/x86/util/evsel.c
+@@ -3,6 +3,7 @@
+ #include <stdlib.h>
+ #include "util/evsel.h"
+ #include "util/env.h"
++#include "util/pmu.h"
+ #include "linux/string.h"
+ 
+ void arch_evsel__set_sample_weight(struct evsel *evsel)
+@@ -29,3 +30,14 @@ void arch_evsel__fixup_new_cycles(struct perf_event_attr *attr)
+ 
+ 	free(env.cpuid);
+ }
++
++bool arch_evsel__must_be_in_group(const struct evsel *evsel)
++{
++	if ((evsel->pmu_name && strcmp(evsel->pmu_name, "cpu")) ||
++	    !pmu_have_event("cpu", "slots"))
++		return false;
++
++	return evsel->name &&
++		(strcasestr(evsel->name, "slots") ||
++		 strcasestr(evsel->name, "topdown"));
++}
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index fbbed434014f4..8c9ffacbdd281 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -2735,9 +2735,7 @@ static int perf_c2c__report(int argc, const char **argv)
+ 		   "the input file to process"),
+ 	OPT_INCR('N', "node-info", &c2c.node_info,
+ 		 "show extra node info in report (repeat for more info)"),
+-#ifdef HAVE_SLANG_SUPPORT
+ 	OPT_BOOLEAN(0, "stdio", &c2c.use_stdio, "Use the stdio interface"),
+-#endif
+ 	OPT_BOOLEAN(0, "stats", &c2c.stats_only,
+ 		    "Display only statistic tables (implies --stdio)"),
+ 	OPT_BOOLEAN(0, "full-symbols", &c2c.symbol_full,
+@@ -2767,6 +2765,10 @@ static int perf_c2c__report(int argc, const char **argv)
+ 	if (argc)
+ 		usage_with_options(report_c2c_usage, options);
+ 
++#ifndef HAVE_SLANG_SUPPORT
++	c2c.use_stdio = true;
++#endif
++
+ 	if (c2c.stats_only)
+ 		c2c.use_stdio = true;
+ 
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index a96f106dc93a0..f058e8cddfa85 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -271,11 +271,8 @@ static void evlist__check_cpu_maps(struct evlist *evlist)
+ 			pr_warning("     %s: %s\n", evsel->name, buf);
+ 		}
+ 
+-		for_each_group_evsel(pos, leader) {
+-			evsel__set_leader(pos, pos);
+-			pos->core.nr_members = 0;
+-		}
+-		evsel->core.leader->nr_members = 0;
++		for_each_group_evsel(pos, leader)
++			evsel__remove_from_group(pos, leader);
+ 	}
+ }
+ 
+diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c
+index 159d9eab6e799..b1eb68c861e7a 100644
+--- a/tools/perf/pmu-events/jevents.c
++++ b/tools/perf/pmu-events/jevents.c
+@@ -612,7 +612,7 @@ static int json_events(const char *fn,
+ 			} else if (json_streq(map, field, "ExtSel")) {
+ 				char *code = NULL;
+ 				addfield(map, &code, "", "", val);
+-				eventcode |= strtoul(code, NULL, 0) << 21;
++				eventcode |= strtoul(code, NULL, 0) << 8;
+ 				free(code);
+ 			} else if (json_streq(map, field, "EventName")) {
+ 				addfield(map, &je.name, "", "", val);
+diff --git a/tools/perf/util/data.h b/tools/perf/util/data.h
+index c9de82af5584e..1402d9657ef27 100644
+--- a/tools/perf/util/data.h
++++ b/tools/perf/util/data.h
+@@ -4,6 +4,7 @@
+ 
+ #include <stdio.h>
+ #include <stdbool.h>
++#include <linux/types.h>
+ 
+ enum perf_data_mode {
+ 	PERF_DATA_MODE_WRITE,
+diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
+index 52ea004ba01e5..3084ec7e93254 100644
+--- a/tools/perf/util/evlist.c
++++ b/tools/perf/util/evlist.c
+@@ -1790,8 +1790,13 @@ struct evsel *evlist__reset_weak_group(struct evlist *evsel_list, struct evsel *
+ 		if (evsel__has_leader(c2, leader)) {
+ 			if (is_open && close)
+ 				perf_evsel__close(&c2->core);
+-			evsel__set_leader(c2, c2);
+-			c2->core.nr_members = 0;
++			/*
++			 * We want to close all members of the group and reopen
++			 * them. Some events, like Intel topdown, require being
++			 * in a group, so keep these in the group.
++			 */
++			evsel__remove_from_group(c2, leader);
++
+ 			/*
+ 			 * Set this for all former members of the group
+ 			 * to indicate they get reopened.
+@@ -1799,6 +1804,9 @@ struct evsel *evlist__reset_weak_group(struct evlist *evsel_list, struct evsel *
+ 			c2->reset_group = true;
+ 		}
+ 	}
++	/* Reset the leader count if all entries were removed. */
++	if (leader->core.nr_members == 1)
++		leader->core.nr_members = 0;
+ 	return leader;
+ }
+ 
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index 2a1729e7aee46..deb428ee5e509 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -3077,3 +3077,22 @@ int evsel__source_count(const struct evsel *evsel)
+ 	}
+ 	return count;
+ }
++
++bool __weak arch_evsel__must_be_in_group(const struct evsel *evsel __maybe_unused)
++{
++	return false;
++}
++
++/*
++ * Remove an event from a given group (leader).
++ * Some events, e.g., perf metrics Topdown events,
++ * must always be grouped; such events are skipped here.
++ */
++void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader)
++{
++	if (!arch_evsel__must_be_in_group(evsel) && evsel != leader) {
++		evsel__set_leader(evsel, evsel);
++		evsel->core.nr_members = 0;
++		leader->core.nr_members--;
++	}
++}
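
The new helper keeps two invariants: a removed member becomes its own leader
with nr_members reset, and the old leader's member count drops by one, while
must-stay events (Intel topdown) are left grouped. A toy model of that
bookkeeping, with plain structs standing in for perf's evsel types and a
must_stay flag playing the role of arch_evsel__must_be_in_group():

	#include <stdbool.h>
	#include <stdio.h>

	struct toy_evsel {
		const char *name;
		struct toy_evsel *leader;
		int nr_members;   /* group size, counted on the leader */
		bool must_stay;
	};

	static void toy_remove_from_group(struct toy_evsel *e,
					  struct toy_evsel *leader)
	{
		if (!e->must_stay && e != leader) {
			e->leader = e;       /* becomes its own leader */
			e->nr_members = 0;
			leader->nr_members--;
		}
	}

	int main(void)
	{
		struct toy_evsel lead = { "cycles", &lead, 3, false };
		struct toy_evsel a = { "instructions", &lead, 0, false };
		struct toy_evsel b = { "topdown-retiring", &lead, 0, true };

		toy_remove_from_group(&a, &lead); /* leaves the group */
		toy_remove_from_group(&b, &lead); /* must stay grouped */
		printf("leader nr_members=%d, b's leader=%s\n",
		       lead.nr_members, b.leader->name);
		return 0;
	}

The evlist.c hunk earlier adds the matching cleanup: once only the leader
itself is left (nr_members == 1), the count is dropped to zero.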
+diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
+index 041b42d33bf5a..47f65f8e7c749 100644
+--- a/tools/perf/util/evsel.h
++++ b/tools/perf/util/evsel.h
+@@ -483,6 +483,9 @@ bool evsel__has_leader(struct evsel *evsel, struct evsel *leader);
+ bool evsel__is_leader(struct evsel *evsel);
+ void evsel__set_leader(struct evsel *evsel, struct evsel *leader);
+ int evsel__source_count(const struct evsel *evsel);
++void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader);
++
++bool arch_evsel__must_be_in_group(const struct evsel *evsel);
+ 
+ /*
+  * Macro to swap the bit-field position and size.
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index bc5ae0872fed9..babede4486dec 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -4376,6 +4376,7 @@ static double rapl_dram_energy_units_probe(int model, double rapl_energy_units)
+ 	case INTEL_FAM6_BROADWELL_X:	/* BDX */
+ 	case INTEL_FAM6_SKYLAKE_X:	/* SKX */
+ 	case INTEL_FAM6_XEON_PHI_KNL:	/* KNL */
++	case INTEL_FAM6_ICELAKE_X:	/* ICX */
+ 		return (rapl_dram_energy_units = 15.3 / 1000000);
+ 	default:
+ 		return (rapl_energy_units);
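
The constant added for ICELAKE_X encodes a fixed DRAM energy resolution of
15.3 microjoules per counter tick, overriding the generic units read from the
RAPL power-unit MSR. Converting a raw counter delta into energy and average
power is then plain arithmetic (the counter values below are made up):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		const double dram_energy_units = 15.3 / 1000000; /* J per tick */
		uint64_t before = 1000000, after = 1653000;      /* raw MSR reads */
		double interval_sec = 1.0;

		double joules = (double)(after - before) * dram_energy_units;
		printf("DRAM energy: %.3f J, avg power: %.3f W\n",
		       joules, joules / interval_sec);
		return 0;
	}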
+diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
+index 05ff334761dd0..2f93ed1d7f990 100644
+--- a/tools/testing/kunit/kunit_parser.py
++++ b/tools/testing/kunit/kunit_parser.py
+@@ -789,8 +789,11 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
+ 
+ 	# Check for there being no tests
+ 	if parent_test and len(subtests) == 0:
+-		test.status = TestStatus.NO_TESTS
+-		test.add_error('0 tests run!')
++		# Don't override a bad status if this test had one reported.
++		# Assumption: no subtests means CRASHED is from Test.__init__()
++		if test.status in (TestStatus.TEST_CRASHED, TestStatus.SUCCESS):
++			test.status = TestStatus.NO_TESTS
++			test.add_error('0 tests run!')
+ 
+ 	# Add statuses to TestCounts attribute in Test object
+ 	bubble_up_test_results(test)
+diff --git a/tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log b/tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log
+index dd873c9811086..4f81876ee6f18 100644
+--- a/tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log
++++ b/tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log
+@@ -3,5 +3,5 @@ TAP version 14
+   # Subtest: suite
+   1..1
+     # Subtest: case
+-  ok 1 - case # SKIP
++  ok 1 - case
+ ok 1 - suite
+diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
+index 2319ec87f53d6..bd2ac8b3bf1f5 100644
+--- a/tools/testing/selftests/Makefile
++++ b/tools/testing/selftests/Makefile
+@@ -9,6 +9,7 @@ TARGETS += clone3
+ TARGETS += core
+ TARGETS += cpufreq
+ TARGETS += cpu-hotplug
++TARGETS += damon
+ TARGETS += drivers/dma-buf
+ TARGETS += efivarfs
+ TARGETS += exec
+diff --git a/tools/testing/selftests/arm64/bti/Makefile b/tools/testing/selftests/arm64/bti/Makefile
+index 73e013c082a65..dafa1c2aa5c47 100644
+--- a/tools/testing/selftests/arm64/bti/Makefile
++++ b/tools/testing/selftests/arm64/bti/Makefile
+@@ -39,7 +39,7 @@ BTI_OBJS =                                      \
+ 	teststubs-bti.o                         \
+ 	trampoline-bti.o
+ gen/btitest: $(BTI_OBJS)
+-	$(CC) $(CFLAGS_BTI) $(CFLAGS_COMMON) -nostdlib -o $@ $^
++	$(CC) $(CFLAGS_BTI) $(CFLAGS_COMMON) -nostdlib -static -o $@ $^
+ 
+ NOBTI_OBJS =                                    \
+ 	test-nobti.o                         \
+@@ -50,7 +50,7 @@ NOBTI_OBJS =                                    \
+ 	teststubs-nobti.o                       \
+ 	trampoline-nobti.o
+ gen/nobtitest: $(NOBTI_OBJS)
+-	$(CC) $(CFLAGS_BTI) $(CFLAGS_COMMON) -nostdlib -o $@ $^
++	$(CC) $(CFLAGS_BTI) $(CFLAGS_COMMON) -nostdlib -static -o $@ $^
+ 
+ # Including KSFT lib.mk here will also mangle the TEST_GEN_PROGS list
+ # to account for any OUTPUT target-dirs optionally provided by
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 3820608faf57f..6e2383701ce0b 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -75,7 +75,7 @@ TEST_PROGS := test_kmod.sh \
+ 	test_xsk.sh
+ 
+ TEST_PROGS_EXTENDED := with_addr.sh \
+-	with_tunnels.sh \
++	with_tunnels.sh ima_setup.sh \
+ 	test_xdp_vlan.sh test_bpftool.py
+ 
+ # Compile but not part of 'make run_tests'
+@@ -415,11 +415,11 @@ $(TRUNNER_BPF_SKELS): %.skel.h: %.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
+ 
+ $(TRUNNER_BPF_LSKELS): %.lskel.h: %.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
+ 	$$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
+-	$(Q)$$(BPFTOOL) gen object $$(<:.o=.linked1.o) $$<
+-	$(Q)$$(BPFTOOL) gen object $$(<:.o=.linked2.o) $$(<:.o=.linked1.o)
+-	$(Q)$$(BPFTOOL) gen object $$(<:.o=.linked3.o) $$(<:.o=.linked2.o)
+-	$(Q)diff $$(<:.o=.linked2.o) $$(<:.o=.linked3.o)
+-	$(Q)$$(BPFTOOL) gen skeleton -L $$(<:.o=.linked3.o) name $$(notdir $$(<:.o=_lskel)) > $$@
++	$(Q)$$(BPFTOOL) gen object $$(<:.o=.llinked1.o) $$<
++	$(Q)$$(BPFTOOL) gen object $$(<:.o=.llinked2.o) $$(<:.o=.llinked1.o)
++	$(Q)$$(BPFTOOL) gen object $$(<:.o=.llinked3.o) $$(<:.o=.llinked2.o)
++	$(Q)diff $$(<:.o=.llinked2.o) $$(<:.o=.llinked3.o)
++	$(Q)$$(BPFTOOL) gen skeleton -L $$(<:.o=.llinked3.o) name $$(notdir $$(<:.o=_lskel)) > $$@
+ 
+ $(TRUNNER_BPF_SKELS_LINKED): $(TRUNNER_BPF_OBJS) $(BPFTOOL) | $(TRUNNER_OUTPUT)
+ 	$$(call msg,LINK-BPF,$(TRUNNER_BINARY),$$(@:.skel.h=.o))
+diff --git a/tools/testing/selftests/bpf/prog_tests/trampoline_count.c b/tools/testing/selftests/bpf/prog_tests/trampoline_count.c
+index 9c795ee52b7bf..b0acbda6dbf5e 100644
+--- a/tools/testing/selftests/bpf/prog_tests/trampoline_count.c
++++ b/tools/testing/selftests/bpf/prog_tests/trampoline_count.c
+@@ -1,126 +1,94 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ #define _GNU_SOURCE
+-#include <sched.h>
+-#include <sys/prctl.h>
+ #include <test_progs.h>
+ 
+ #define MAX_TRAMP_PROGS 38
+ 
+ struct inst {
+ 	struct bpf_object *obj;
+-	struct bpf_link   *link_fentry;
+-	struct bpf_link   *link_fexit;
++	struct bpf_link   *link;
+ };
+ 
+-static int test_task_rename(void)
+-{
+-	int fd, duration = 0, err;
+-	char buf[] = "test_overhead";
+-
+-	fd = open("/proc/self/comm", O_WRONLY|O_TRUNC);
+-	if (CHECK(fd < 0, "open /proc", "err %d", errno))
+-		return -1;
+-	err = write(fd, buf, sizeof(buf));
+-	if (err < 0) {
+-		CHECK(err < 0, "task rename", "err %d", errno);
+-		close(fd);
+-		return -1;
+-	}
+-	close(fd);
+-	return 0;
+-}
+-
+-static struct bpf_link *load(struct bpf_object *obj, const char *name)
++static struct bpf_program *load_prog(char *file, char *name, struct inst *inst)
+ {
++	struct bpf_object *obj;
+ 	struct bpf_program *prog;
+-	int duration = 0;
++	int err;
++
++	obj = bpf_object__open_file(file, NULL);
++	if (!ASSERT_OK_PTR(obj, "obj_open_file"))
++		return NULL;
++
++	inst->obj = obj;
++
++	err = bpf_object__load(obj);
++	if (!ASSERT_OK(err, "obj_load"))
++		return NULL;
+ 
+ 	prog = bpf_object__find_program_by_name(obj, name);
+-	if (CHECK(!prog, "find_probe", "prog '%s' not found\n", name))
+-		return ERR_PTR(-EINVAL);
+-	return bpf_program__attach_trace(prog);
++	if (!ASSERT_OK_PTR(prog, "obj_find_prog"))
++		return NULL;
++
++	return prog;
+ }
+ 
+ /* TODO: use different target function to run in concurrent mode */
+ void serial_test_trampoline_count(void)
+ {
+-	const char *fentry_name = "prog1";
+-	const char *fexit_name = "prog2";
+-	const char *object = "test_trampoline_count.o";
+-	struct inst inst[MAX_TRAMP_PROGS] = {};
+-	int err, i = 0, duration = 0;
+-	struct bpf_object *obj;
++	char *file = "test_trampoline_count.o";
++	char *const progs[] = { "fentry_test", "fmod_ret_test", "fexit_test" };
++	struct inst inst[MAX_TRAMP_PROGS + 1] = {};
++	struct bpf_program *prog;
+ 	struct bpf_link *link;
+-	char comm[16] = {};
++	int prog_fd, err, i;
++	LIBBPF_OPTS(bpf_test_run_opts, opts);
+ 
+ 	/* attach 'allowed' trampoline programs */
+ 	for (i = 0; i < MAX_TRAMP_PROGS; i++) {
+-		obj = bpf_object__open_file(object, NULL);
+-		if (!ASSERT_OK_PTR(obj, "obj_open_file")) {
+-			obj = NULL;
++		prog = load_prog(file, progs[i % ARRAY_SIZE(progs)], &inst[i]);
++		if (!prog)
+ 			goto cleanup;
+-		}
+ 
+-		err = bpf_object__load(obj);
+-		if (CHECK(err, "obj_load", "err %d\n", err))
++		link = bpf_program__attach(prog);
++		if (!ASSERT_OK_PTR(link, "attach_prog"))
+ 			goto cleanup;
+-		inst[i].obj = obj;
+-		obj = NULL;
+-
+-		if (rand() % 2) {
+-			link = load(inst[i].obj, fentry_name);
+-			if (!ASSERT_OK_PTR(link, "attach_prog")) {
+-				link = NULL;
+-				goto cleanup;
+-			}
+-			inst[i].link_fentry = link;
+-		} else {
+-			link = load(inst[i].obj, fexit_name);
+-			if (!ASSERT_OK_PTR(link, "attach_prog")) {
+-				link = NULL;
+-				goto cleanup;
+-			}
+-			inst[i].link_fexit = link;
+-		}
++
++		inst[i].link = link;
+ 	}
+ 
+ 	/* and try 1 extra.. */
+-	obj = bpf_object__open_file(object, NULL);
+-	if (!ASSERT_OK_PTR(obj, "obj_open_file")) {
+-		obj = NULL;
++	prog = load_prog(file, "fmod_ret_test", &inst[i]);
++	if (!prog)
+ 		goto cleanup;
+-	}
+-
+-	err = bpf_object__load(obj);
+-	if (CHECK(err, "obj_load", "err %d\n", err))
+-		goto cleanup_extra;
+ 
+ 	/* ..that needs to fail */
+-	link = load(obj, fentry_name);
+-	err = libbpf_get_error(link);
+-	if (!ASSERT_ERR_PTR(link, "cannot attach over the limit")) {
+-		bpf_link__destroy(link);
+-		goto cleanup_extra;
++	link = bpf_program__attach(prog);
++	if (!ASSERT_ERR_PTR(link, "attach_prog")) {
++		inst[i].link = link;
++		goto cleanup;
+ 	}
+ 
+ 	/* with E2BIG error */
+-	ASSERT_EQ(err, -E2BIG, "proper error check");
+-	ASSERT_EQ(link, NULL, "ptr_is_null");
++	if (!ASSERT_EQ(libbpf_get_error(link), -E2BIG, "E2BIG"))
++		goto cleanup;
++	if (!ASSERT_EQ(link, NULL, "ptr_is_null"))
++		goto cleanup;
+ 
+ 	/* and finally execute the probe */
+-	if (CHECK_FAIL(prctl(PR_GET_NAME, comm, 0L, 0L, 0L)))
+-		goto cleanup_extra;
+-	CHECK_FAIL(test_task_rename());
+-	CHECK_FAIL(prctl(PR_SET_NAME, comm, 0L, 0L, 0L));
++	prog_fd = bpf_program__fd(prog);
++	if (!ASSERT_GE(prog_fd, 0, "bpf_program__fd"))
++		goto cleanup;
++
++	err = bpf_prog_test_run_opts(prog_fd, &opts);
++	if (!ASSERT_OK(err, "bpf_prog_test_run_opts"))
++		goto cleanup;
++
++	ASSERT_EQ(opts.retval & 0xffff, 4, "bpf_modify_return_test.result");
++	ASSERT_EQ(opts.retval >> 16, 1, "bpf_modify_return_test.side_effect");
+ 
+-cleanup_extra:
+-	bpf_object__close(obj);
+ cleanup:
+-	if (i >= MAX_TRAMP_PROGS)
+-		i = MAX_TRAMP_PROGS - 1;
+ 	for (; i >= 0; i--) {
+-		bpf_link__destroy(inst[i].link_fentry);
+-		bpf_link__destroy(inst[i].link_fexit);
++		bpf_link__destroy(inst[i].link);
+ 		bpf_object__close(inst[i].obj);
+ 	}
+ }
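
The rewrite above works because bpf_prog_test_run_opts() can invoke the traced
kernel function (bpf_modify_return_test) directly, so the test no longer needs
to rename a task to trigger its probes. A condensed sketch of that libbpf call
sequence, with error handling trimmed and the object path taken from the test
itself:

	#include <bpf/libbpf.h>
	#include <stdio.h>

	int main(void)
	{
		struct bpf_object *obj;
		struct bpf_program *prog;
		struct bpf_link *link;
		LIBBPF_OPTS(bpf_test_run_opts, opts);

		obj = bpf_object__open_file("test_trampoline_count.o", NULL);
		if (!obj || bpf_object__load(obj))
			return 1;

		prog = bpf_object__find_program_by_name(obj, "fmod_ret_test");
		link = bpf_program__attach(prog);

		/* Runs the target kernel function once and reports its retval. */
		if (!bpf_prog_test_run_opts(bpf_program__fd(prog), &opts))
			printf("retval=0x%x\n", opts.retval);

		bpf_link__destroy(link);
		bpf_object__close(obj);
		return 0;
	}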
+diff --git a/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c b/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
+index 1c7105fcae3c4..4ee4748133fec 100644
+--- a/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
++++ b/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
+@@ -94,7 +94,7 @@ typedef void (* (*signal_t)(int, void (*)(int)))(int);
+ 
+ typedef char * (*fn_ptr_arr1_t[10])(int **);
+ 
+-typedef char * (* const (* const fn_ptr_arr2_t[5])())(char * (*)(int));
++typedef char * (* (* const fn_ptr_arr2_t[5])())(char * (*)(int));
+ 
+ struct struct_w_typedefs {
+ 	int_t a;
+diff --git a/tools/testing/selftests/bpf/progs/profiler.inc.h b/tools/testing/selftests/bpf/progs/profiler.inc.h
+index 4896fdf816f73..92331053dba3b 100644
+--- a/tools/testing/selftests/bpf/progs/profiler.inc.h
++++ b/tools/testing/selftests/bpf/progs/profiler.inc.h
+@@ -826,8 +826,9 @@ out:
+ 
+ SEC("kprobe/vfs_link")
+ int BPF_KPROBE(kprobe__vfs_link,
+-	       struct dentry* old_dentry, struct inode* dir,
+-	       struct dentry* new_dentry, struct inode** delegated_inode)
++	       struct dentry* old_dentry, struct user_namespace *mnt_userns,
++	       struct inode* dir, struct dentry* new_dentry,
++	       struct inode** delegated_inode)
+ {
+ 	struct bpf_func_stats_ctx stats_ctx;
+ 	bpf_stats_enter(&stats_ctx, profiler_bpf_vfs_link);
+diff --git a/tools/testing/selftests/bpf/progs/test_trampoline_count.c b/tools/testing/selftests/bpf/progs/test_trampoline_count.c
+index f030e469d05b5..7765720da7d58 100644
+--- a/tools/testing/selftests/bpf/progs/test_trampoline_count.c
++++ b/tools/testing/selftests/bpf/progs/test_trampoline_count.c
+@@ -1,20 +1,22 @@
+ // SPDX-License-Identifier: GPL-2.0
+-#include <stdbool.h>
+-#include <stddef.h>
+ #include <linux/bpf.h>
+ #include <bpf/bpf_helpers.h>
+ #include <bpf/bpf_tracing.h>
+ 
+-struct task_struct;
++SEC("fentry/bpf_modify_return_test")
++int BPF_PROG(fentry_test, int a, int *b)
++{
++	return 0;
++}
+ 
+-SEC("fentry/__set_task_comm")
+-int BPF_PROG(prog1, struct task_struct *tsk, const char *buf, bool exec)
++SEC("fmod_ret/bpf_modify_return_test")
++int BPF_PROG(fmod_ret_test, int a, int *b, int ret)
+ {
+ 	return 0;
+ }
+ 
+-SEC("fexit/__set_task_comm")
+-int BPF_PROG(prog2, struct task_struct *tsk, const char *buf, bool exec)
++SEC("fexit/bpf_modify_return_test")
++int BPF_PROG(fexit_test, int a, int *b, int ret)
+ {
+ 	return 0;
+ }
+diff --git a/tools/testing/selftests/bpf/test_bpftool_synctypes.py b/tools/testing/selftests/bpf/test_bpftool_synctypes.py
+index 6bf21e47882af..c0e7acd698edf 100755
+--- a/tools/testing/selftests/bpf/test_bpftool_synctypes.py
++++ b/tools/testing/selftests/bpf/test_bpftool_synctypes.py
+@@ -180,7 +180,7 @@ class FileExtractor(object):
+         @enum_name: name of the enum to parse
+         """
+         start_marker = re.compile(f'enum {enum_name} {{\n')
+-        pattern = re.compile('^\s*(BPF_\w+),?$')
++        pattern = re.compile('^\s*(BPF_\w+),?(\s+/\*.*\*/)?$')
+         end_marker = re.compile('^};')
+         parser = BlockParser(self.reader)
+         parser.search_block(start_marker)
+diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c
+index 3d6217e3aff7a..9c4be2cdb21a0 100644
+--- a/tools/testing/selftests/bpf/trace_helpers.c
++++ b/tools/testing/selftests/bpf/trace_helpers.c
+@@ -25,15 +25,12 @@ static int ksym_cmp(const void *p1, const void *p2)
+ 
+ int load_kallsyms(void)
+ {
+-	FILE *f = fopen("/proc/kallsyms", "r");
++	FILE *f;
+ 	char func[256], buf[256];
+ 	char symbol;
+ 	void *addr;
+ 	int i = 0;
+ 
+-	if (!f)
+-		return -ENOENT;
+-
+ 	/*
+ 	 * This is called/used from multiple places,
+ 	 * load symbols just once.
+@@ -41,6 +38,10 @@ int load_kallsyms(void)
+ 	if (sym_cnt)
+ 		return 0;
+ 
++	f = fopen("/proc/kallsyms", "r");
++	if (!f)
++		return -ENOENT;
++
+ 	while (fgets(buf, sizeof(buf), f)) {
+ 		if (sscanf(buf, "%p %c %s", &addr, &symbol, func) != 3)
+ 			break;
+diff --git a/tools/testing/selftests/cgroup/test_stress.sh b/tools/testing/selftests/cgroup/test_stress.sh
+index 15d9d58963941..3c9c4554d5f6a 100755
+--- a/tools/testing/selftests/cgroup/test_stress.sh
++++ b/tools/testing/selftests/cgroup/test_stress.sh
+@@ -1,4 +1,4 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ 
+-./with_stress.sh -s subsys -s fork ./test_core
++./with_stress.sh -s subsys -s fork ${OUTPUT:-.}/test_core
+diff --git a/tools/testing/selftests/landlock/base_test.c b/tools/testing/selftests/landlock/base_test.c
+index ca40abe9daa86..35f64832b869c 100644
+--- a/tools/testing/selftests/landlock/base_test.c
++++ b/tools/testing/selftests/landlock/base_test.c
+@@ -18,10 +18,11 @@
+ #include "common.h"
+ 
+ #ifndef O_PATH
+-#define O_PATH		010000000
++#define O_PATH 010000000
+ #endif
+ 
+-TEST(inconsistent_attr) {
++TEST(inconsistent_attr)
++{
+ 	const long page_size = sysconf(_SC_PAGESIZE);
+ 	char *const buf = malloc(page_size + 1);
+ 	struct landlock_ruleset_attr *const ruleset_attr = (void *)buf;
+@@ -34,20 +35,26 @@ TEST(inconsistent_attr) {
+ 	ASSERT_EQ(EINVAL, errno);
+ 	ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr, 1, 0));
+ 	ASSERT_EQ(EINVAL, errno);
++	ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr, 7, 0));
++	ASSERT_EQ(EINVAL, errno);
+ 
+ 	ASSERT_EQ(-1, landlock_create_ruleset(NULL, 1, 0));
+ 	/* The size is less than sizeof(struct landlock_attr_enforce). */
+ 	ASSERT_EQ(EFAULT, errno);
+ 
+-	ASSERT_EQ(-1, landlock_create_ruleset(NULL,
+-				sizeof(struct landlock_ruleset_attr), 0));
++	ASSERT_EQ(-1, landlock_create_ruleset(
++			      NULL, sizeof(struct landlock_ruleset_attr), 0));
+ 	ASSERT_EQ(EFAULT, errno);
+ 
+ 	ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr, page_size + 1, 0));
+ 	ASSERT_EQ(E2BIG, errno);
+ 
+-	ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr,
+-				sizeof(struct landlock_ruleset_attr), 0));
++	/* Checks minimal valid attribute size. */
++	ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr, 8, 0));
++	ASSERT_EQ(ENOMSG, errno);
++	ASSERT_EQ(-1, landlock_create_ruleset(
++			      ruleset_attr,
++			      sizeof(struct landlock_ruleset_attr), 0));
+ 	ASSERT_EQ(ENOMSG, errno);
+ 	ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr, page_size, 0));
+ 	ASSERT_EQ(ENOMSG, errno);
+@@ -63,38 +70,44 @@ TEST(inconsistent_attr) {
+ 	free(buf);
+ }
+ 
+-TEST(abi_version) {
++TEST(abi_version)
++{
+ 	const struct landlock_ruleset_attr ruleset_attr = {
+ 		.handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE,
+ 	};
+ 	ASSERT_EQ(1, landlock_create_ruleset(NULL, 0,
+-				LANDLOCK_CREATE_RULESET_VERSION));
++					     LANDLOCK_CREATE_RULESET_VERSION));
+ 
+ 	ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, 0,
+-				LANDLOCK_CREATE_RULESET_VERSION));
++					      LANDLOCK_CREATE_RULESET_VERSION));
+ 	ASSERT_EQ(EINVAL, errno);
+ 
+ 	ASSERT_EQ(-1, landlock_create_ruleset(NULL, sizeof(ruleset_attr),
+-				LANDLOCK_CREATE_RULESET_VERSION));
++					      LANDLOCK_CREATE_RULESET_VERSION));
+ 	ASSERT_EQ(EINVAL, errno);
+ 
+-	ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr,
+-				sizeof(ruleset_attr),
+-				LANDLOCK_CREATE_RULESET_VERSION));
++	ASSERT_EQ(-1,
++		  landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr),
++					  LANDLOCK_CREATE_RULESET_VERSION));
+ 	ASSERT_EQ(EINVAL, errno);
+ 
+ 	ASSERT_EQ(-1, landlock_create_ruleset(NULL, 0,
+-				LANDLOCK_CREATE_RULESET_VERSION | 1 << 31));
++					      LANDLOCK_CREATE_RULESET_VERSION |
++						      1 << 31));
+ 	ASSERT_EQ(EINVAL, errno);
+ }
+ 
+-TEST(inval_create_ruleset_flags) {
++/* Tests ordering of syscall argument checks. */
++TEST(create_ruleset_checks_ordering)
++{
+ 	const int last_flag = LANDLOCK_CREATE_RULESET_VERSION;
+ 	const int invalid_flag = last_flag << 1;
++	int ruleset_fd;
+ 	const struct landlock_ruleset_attr ruleset_attr = {
+ 		.handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE,
+ 	};
+ 
++	/* Checks priority for invalid flags. */
+ 	ASSERT_EQ(-1, landlock_create_ruleset(NULL, 0, invalid_flag));
+ 	ASSERT_EQ(EINVAL, errno);
+ 
+@@ -102,44 +115,121 @@ TEST(inval_create_ruleset_flags) {
+ 	ASSERT_EQ(EINVAL, errno);
+ 
+ 	ASSERT_EQ(-1, landlock_create_ruleset(NULL, sizeof(ruleset_attr),
+-				invalid_flag));
++					      invalid_flag));
++	ASSERT_EQ(EINVAL, errno);
++
++	ASSERT_EQ(-1,
++		  landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr),
++					  invalid_flag));
+ 	ASSERT_EQ(EINVAL, errno);
+ 
+-	ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr,
+-				sizeof(ruleset_attr), invalid_flag));
++	/* Checks too big ruleset_attr size. */
++	ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, -1, 0));
++	ASSERT_EQ(E2BIG, errno);
++
++	/* Checks too small ruleset_attr size. */
++	ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, 0, 0));
++	ASSERT_EQ(EINVAL, errno);
++	ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, 1, 0));
+ 	ASSERT_EQ(EINVAL, errno);
++
++	/* Checks valid call. */
++	ruleset_fd =
++		landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
++	ASSERT_LE(0, ruleset_fd);
++	ASSERT_EQ(0, close(ruleset_fd));
+ }
+ 
+-TEST(empty_path_beneath_attr) {
++/* Tests ordering of syscall argument checks. */
++TEST(add_rule_checks_ordering)
++{
+ 	const struct landlock_ruleset_attr ruleset_attr = {
+ 		.handled_access_fs = LANDLOCK_ACCESS_FS_EXECUTE,
+ 	};
+-	const int ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+-			sizeof(ruleset_attr), 0);
++	struct landlock_path_beneath_attr path_beneath_attr = {
++		.allowed_access = LANDLOCK_ACCESS_FS_EXECUTE,
++		.parent_fd = -1,
++	};
++	const int ruleset_fd =
++		landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ 
+ 	ASSERT_LE(0, ruleset_fd);
+ 
+-	/* Similar to struct landlock_path_beneath_attr.parent_fd = 0 */
++	/* Checks invalid flags. */
++	ASSERT_EQ(-1, landlock_add_rule(-1, 0, NULL, 1));
++	ASSERT_EQ(EINVAL, errno);
++
++	/* Checks invalid ruleset FD. */
++	ASSERT_EQ(-1, landlock_add_rule(-1, 0, NULL, 0));
++	ASSERT_EQ(EBADF, errno);
++
++	/* Checks invalid rule type. */
++	ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, 0, NULL, 0));
++	ASSERT_EQ(EINVAL, errno);
++
++	/* Checks invalid rule attr. */
+ 	ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+-				NULL, 0));
++					NULL, 0));
+ 	ASSERT_EQ(EFAULT, errno);
++
++	/* Checks invalid path_beneath.parent_fd. */
++	ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
++					&path_beneath_attr, 0));
++	ASSERT_EQ(EBADF, errno);
++
++	/* Checks valid call. */
++	path_beneath_attr.parent_fd =
++		open("/tmp", O_PATH | O_NOFOLLOW | O_DIRECTORY | O_CLOEXEC);
++	ASSERT_LE(0, path_beneath_attr.parent_fd);
++	ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
++				       &path_beneath_attr, 0));
++	ASSERT_EQ(0, close(path_beneath_attr.parent_fd));
+ 	ASSERT_EQ(0, close(ruleset_fd));
+ }
+ 
+-TEST(inval_fd_enforce) {
++/* Tests ordering of syscall argument and permission checks. */
++TEST(restrict_self_checks_ordering)
++{
++	const struct landlock_ruleset_attr ruleset_attr = {
++		.handled_access_fs = LANDLOCK_ACCESS_FS_EXECUTE,
++	};
++	struct landlock_path_beneath_attr path_beneath_attr = {
++		.allowed_access = LANDLOCK_ACCESS_FS_EXECUTE,
++		.parent_fd = -1,
++	};
++	const int ruleset_fd =
++		landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
++
++	ASSERT_LE(0, ruleset_fd);
++	path_beneath_attr.parent_fd =
++		open("/tmp", O_PATH | O_NOFOLLOW | O_DIRECTORY | O_CLOEXEC);
++	ASSERT_LE(0, path_beneath_attr.parent_fd);
++	ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
++				       &path_beneath_attr, 0));
++	ASSERT_EQ(0, close(path_beneath_attr.parent_fd));
++
++	/* Checks unprivileged enforcement without no_new_privs. */
++	drop_caps(_metadata);
++	ASSERT_EQ(-1, landlock_restrict_self(-1, -1));
++	ASSERT_EQ(EPERM, errno);
++	ASSERT_EQ(-1, landlock_restrict_self(-1, 0));
++	ASSERT_EQ(EPERM, errno);
++	ASSERT_EQ(-1, landlock_restrict_self(ruleset_fd, 0));
++	ASSERT_EQ(EPERM, errno);
++
+ 	ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
+ 
++	/* Checks invalid flags. */
++	ASSERT_EQ(-1, landlock_restrict_self(-1, -1));
++	ASSERT_EQ(EINVAL, errno);
++
++	/* Checks invalid ruleset FD. */
+ 	ASSERT_EQ(-1, landlock_restrict_self(-1, 0));
+ 	ASSERT_EQ(EBADF, errno);
+-}
+-
+-TEST(unpriv_enforce_without_no_new_privs) {
+-	int err;
+ 
+-	drop_caps(_metadata);
+-	err = landlock_restrict_self(-1, 0);
+-	ASSERT_EQ(EPERM, errno);
+-	ASSERT_EQ(err, -1);
++	/* Checks valid call. */
++	ASSERT_EQ(0, landlock_restrict_self(ruleset_fd, 0));
++	ASSERT_EQ(0, close(ruleset_fd));
+ }
+ 
+ TEST(ruleset_fd_io)
+@@ -151,8 +241,8 @@ TEST(ruleset_fd_io)
+ 	char buf;
+ 
+ 	drop_caps(_metadata);
+-	ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+-			sizeof(ruleset_attr), 0);
++	ruleset_fd =
++		landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ 	ASSERT_LE(0, ruleset_fd);
+ 
+ 	ASSERT_EQ(-1, write(ruleset_fd, ".", 1));
+@@ -197,14 +287,15 @@ TEST(ruleset_fd_transfer)
+ 	drop_caps(_metadata);
+ 
+ 	/* Creates a test ruleset with a simple rule. */
+-	ruleset_fd_tx = landlock_create_ruleset(&ruleset_attr,
+-			sizeof(ruleset_attr), 0);
++	ruleset_fd_tx =
++		landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ 	ASSERT_LE(0, ruleset_fd_tx);
+-	path_beneath_attr.parent_fd = open("/tmp", O_PATH | O_NOFOLLOW |
+-			O_DIRECTORY | O_CLOEXEC);
++	path_beneath_attr.parent_fd =
++		open("/tmp", O_PATH | O_NOFOLLOW | O_DIRECTORY | O_CLOEXEC);
+ 	ASSERT_LE(0, path_beneath_attr.parent_fd);
+-	ASSERT_EQ(0, landlock_add_rule(ruleset_fd_tx, LANDLOCK_RULE_PATH_BENEATH,
+-				&path_beneath_attr, 0));
++	ASSERT_EQ(0,
++		  landlock_add_rule(ruleset_fd_tx, LANDLOCK_RULE_PATH_BENEATH,
++				    &path_beneath_attr, 0));
+ 	ASSERT_EQ(0, close(path_beneath_attr.parent_fd));
+ 
+ 	cmsg = CMSG_FIRSTHDR(&msg);
+@@ -215,7 +306,8 @@ TEST(ruleset_fd_transfer)
+ 	memcpy(CMSG_DATA(cmsg), &ruleset_fd_tx, sizeof(ruleset_fd_tx));
+ 
+ 	/* Sends the ruleset FD over a socketpair and then close it. */
+-	ASSERT_EQ(0, socketpair(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0, socket_fds));
++	ASSERT_EQ(0, socketpair(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0,
++				socket_fds));
+ 	ASSERT_EQ(sizeof(data_tx), sendmsg(socket_fds[0], &msg, 0));
+ 	ASSERT_EQ(0, close(socket_fds[0]));
+ 	ASSERT_EQ(0, close(ruleset_fd_tx));
+@@ -226,7 +318,8 @@ TEST(ruleset_fd_transfer)
+ 		int ruleset_fd_rx;
+ 
+ 		*(char *)msg.msg_iov->iov_base = '\0';
+-		ASSERT_EQ(sizeof(data_tx), recvmsg(socket_fds[1], &msg, MSG_CMSG_CLOEXEC));
++		ASSERT_EQ(sizeof(data_tx),
++			  recvmsg(socket_fds[1], &msg, MSG_CMSG_CLOEXEC));
+ 		ASSERT_EQ('.', *(char *)msg.msg_iov->iov_base);
+ 		ASSERT_EQ(0, close(socket_fds[1]));
+ 		cmsg = CMSG_FIRSTHDR(&msg);
+diff --git a/tools/testing/selftests/landlock/common.h b/tools/testing/selftests/landlock/common.h
+index 183b7e8e1b957..7ba18eb237838 100644
+--- a/tools/testing/selftests/landlock/common.h
++++ b/tools/testing/selftests/landlock/common.h
+@@ -25,6 +25,7 @@
+  * this to be possible, we must not call abort() but instead exit smoothly
+  * (hence the step print).
+  */
++/* clang-format off */
+ #define TEST_F_FORK(fixture_name, test_name) \
+ 	static void fixture_name##_##test_name##_child( \
+ 		struct __test_metadata *_metadata, \
+@@ -71,11 +72,12 @@
+ 		FIXTURE_DATA(fixture_name) __attribute__((unused)) *self, \
+ 		const FIXTURE_VARIANT(fixture_name) \
+ 			__attribute__((unused)) *variant)
++/* clang-format on */
+ 
+ #ifndef landlock_create_ruleset
+-static inline int landlock_create_ruleset(
+-		const struct landlock_ruleset_attr *const attr,
+-		const size_t size, const __u32 flags)
++static inline int
++landlock_create_ruleset(const struct landlock_ruleset_attr *const attr,
++			const size_t size, const __u32 flags)
+ {
+ 	return syscall(__NR_landlock_create_ruleset, attr, size, flags);
+ }
+@@ -83,17 +85,18 @@ static inline int landlock_create_ruleset(
+ 
+ #ifndef landlock_add_rule
+ static inline int landlock_add_rule(const int ruleset_fd,
+-		const enum landlock_rule_type rule_type,
+-		const void *const rule_attr, const __u32 flags)
++				    const enum landlock_rule_type rule_type,
++				    const void *const rule_attr,
++				    const __u32 flags)
+ {
+-	return syscall(__NR_landlock_add_rule, ruleset_fd, rule_type,
+-			rule_attr, flags);
++	return syscall(__NR_landlock_add_rule, ruleset_fd, rule_type, rule_attr,
++		       flags);
+ }
+ #endif
+ 
+ #ifndef landlock_restrict_self
+ static inline int landlock_restrict_self(const int ruleset_fd,
+-		const __u32 flags)
++					 const __u32 flags)
+ {
+ 	return syscall(__NR_landlock_restrict_self, ruleset_fd, flags);
+ }
+@@ -111,69 +114,76 @@ static void _init_caps(struct __test_metadata *const _metadata, bool drop_all)
+ 	};
+ 
+ 	cap_p = cap_get_proc();
+-	EXPECT_NE(NULL, cap_p) {
++	EXPECT_NE(NULL, cap_p)
++	{
+ 		TH_LOG("Failed to cap_get_proc: %s", strerror(errno));
+ 	}
+-	EXPECT_NE(-1, cap_clear(cap_p)) {
++	EXPECT_NE(-1, cap_clear(cap_p))
++	{
+ 		TH_LOG("Failed to cap_clear: %s", strerror(errno));
+ 	}
+ 	if (!drop_all) {
+ 		EXPECT_NE(-1, cap_set_flag(cap_p, CAP_PERMITTED,
+-					ARRAY_SIZE(caps), caps, CAP_SET)) {
++					   ARRAY_SIZE(caps), caps, CAP_SET))
++		{
+ 			TH_LOG("Failed to cap_set_flag: %s", strerror(errno));
+ 		}
+ 	}
+-	EXPECT_NE(-1, cap_set_proc(cap_p)) {
++	EXPECT_NE(-1, cap_set_proc(cap_p))
++	{
+ 		TH_LOG("Failed to cap_set_proc: %s", strerror(errno));
+ 	}
+-	EXPECT_NE(-1, cap_free(cap_p)) {
++	EXPECT_NE(-1, cap_free(cap_p))
++	{
+ 		TH_LOG("Failed to cap_free: %s", strerror(errno));
+ 	}
+ }
+ 
+ /* We cannot put such helpers in a library because of kselftest_harness.h. */
+-__attribute__((__unused__))
+-static void disable_caps(struct __test_metadata *const _metadata)
++__attribute__((__unused__)) static void
++disable_caps(struct __test_metadata *const _metadata)
+ {
+ 	_init_caps(_metadata, false);
+ }
+ 
+-__attribute__((__unused__))
+-static void drop_caps(struct __test_metadata *const _metadata)
++__attribute__((__unused__)) static void
++drop_caps(struct __test_metadata *const _metadata)
+ {
+ 	_init_caps(_metadata, true);
+ }
+ 
+ static void _effective_cap(struct __test_metadata *const _metadata,
+-		const cap_value_t caps, const cap_flag_value_t value)
++			   const cap_value_t caps, const cap_flag_value_t value)
+ {
+ 	cap_t cap_p;
+ 
+ 	cap_p = cap_get_proc();
+-	EXPECT_NE(NULL, cap_p) {
++	EXPECT_NE(NULL, cap_p)
++	{
+ 		TH_LOG("Failed to cap_get_proc: %s", strerror(errno));
+ 	}
+-	EXPECT_NE(-1, cap_set_flag(cap_p, CAP_EFFECTIVE, 1, &caps, value)) {
++	EXPECT_NE(-1, cap_set_flag(cap_p, CAP_EFFECTIVE, 1, &caps, value))
++	{
+ 		TH_LOG("Failed to cap_set_flag: %s", strerror(errno));
+ 	}
+-	EXPECT_NE(-1, cap_set_proc(cap_p)) {
++	EXPECT_NE(-1, cap_set_proc(cap_p))
++	{
+ 		TH_LOG("Failed to cap_set_proc: %s", strerror(errno));
+ 	}
+-	EXPECT_NE(-1, cap_free(cap_p)) {
++	EXPECT_NE(-1, cap_free(cap_p))
++	{
+ 		TH_LOG("Failed to cap_free: %s", strerror(errno));
+ 	}
+ }
+ 
+-__attribute__((__unused__))
+-static void set_cap(struct __test_metadata *const _metadata,
+-		const cap_value_t caps)
++__attribute__((__unused__)) static void
++set_cap(struct __test_metadata *const _metadata, const cap_value_t caps)
+ {
+ 	_effective_cap(_metadata, caps, CAP_SET);
+ }
+ 
+-__attribute__((__unused__))
+-static void clear_cap(struct __test_metadata *const _metadata,
+-		const cap_value_t caps)
++__attribute__((__unused__)) static void
++clear_cap(struct __test_metadata *const _metadata, const cap_value_t caps)
+ {
+ 	_effective_cap(_metadata, caps, CAP_CLEAR);
+ }
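
Outside the kselftest harness, the three wrappers above are the whole Landlock
userspace API. A minimal sketch of the canonical call sequence, assuming
kernel and libc headers recent enough to define the landlock syscall numbers
(the read-only /usr policy is only an example):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <linux/landlock.h>
	#include <sys/prctl.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		struct landlock_ruleset_attr ruleset_attr = {
			.handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE |
					     LANDLOCK_ACCESS_FS_READ_DIR,
		};
		struct landlock_path_beneath_attr path_beneath = {
			.allowed_access = LANDLOCK_ACCESS_FS_READ_FILE |
					  LANDLOCK_ACCESS_FS_READ_DIR,
		};
		int ruleset_fd;

		ruleset_fd = syscall(__NR_landlock_create_ruleset, &ruleset_attr,
				     sizeof(ruleset_attr), 0);
		if (ruleset_fd < 0)
			return 1;

		path_beneath.parent_fd = open("/usr", O_PATH | O_CLOEXEC);
		if (path_beneath.parent_fd < 0)
			return 1;
		if (syscall(__NR_landlock_add_rule, ruleset_fd,
			    LANDLOCK_RULE_PATH_BENEATH, &path_beneath, 0))
			return 1;
		close(path_beneath.parent_fd);

		/* Required before enforcing a ruleset without CAP_SYS_ADMIN. */
		if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
			return 1;
		if (syscall(__NR_landlock_restrict_self, ruleset_fd, 0))
			return 1;
		close(ruleset_fd);

		/* From here on, this process can only read below /usr. */
		return 0;
	}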
+diff --git a/tools/testing/selftests/landlock/fs_test.c b/tools/testing/selftests/landlock/fs_test.c
+index 10c9a1e4ebd9b..a4fdcda62bdee 100644
+--- a/tools/testing/selftests/landlock/fs_test.c
++++ b/tools/testing/selftests/landlock/fs_test.c
+@@ -22,8 +22,21 @@
+ 
+ #include "common.h"
+ 
+-#define TMP_DIR		"tmp"
+-#define BINARY_PATH	"./true"
++#ifndef renameat2
++int renameat2(int olddirfd, const char *oldpath, int newdirfd,
++	      const char *newpath, unsigned int flags)
++{
++	return syscall(__NR_renameat2, olddirfd, oldpath, newdirfd, newpath,
++		       flags);
++}
++#endif
++
++#ifndef RENAME_EXCHANGE
++#define RENAME_EXCHANGE (1 << 1)
++#endif
++
++#define TMP_DIR "tmp"
++#define BINARY_PATH "./true"
+ 
+ /* Paths (sibling number and depth) */
+ static const char dir_s1d1[] = TMP_DIR "/s1d1";
+@@ -75,7 +88,7 @@ static const char dir_s3d3[] = TMP_DIR "/s3d1/s3d2/s3d3";
+  */
+ 
+ static void mkdir_parents(struct __test_metadata *const _metadata,
+-		const char *const path)
++			  const char *const path)
+ {
+ 	char *walker;
+ 	const char *parent;
+@@ -90,9 +103,10 @@ static void mkdir_parents(struct __test_metadata *const _metadata,
+ 			continue;
+ 		walker[i] = '\0';
+ 		err = mkdir(parent, 0700);
+-		ASSERT_FALSE(err && errno != EEXIST) {
+-			TH_LOG("Failed to create directory \"%s\": %s",
+-					parent, strerror(errno));
++		ASSERT_FALSE(err && errno != EEXIST)
++		{
++			TH_LOG("Failed to create directory \"%s\": %s", parent,
++			       strerror(errno));
+ 		}
+ 		walker[i] = '/';
+ 	}
+@@ -100,22 +114,24 @@ static void mkdir_parents(struct __test_metadata *const _metadata,
+ }
+ 
+ static void create_directory(struct __test_metadata *const _metadata,
+-		const char *const path)
++			     const char *const path)
+ {
+ 	mkdir_parents(_metadata, path);
+-	ASSERT_EQ(0, mkdir(path, 0700)) {
++	ASSERT_EQ(0, mkdir(path, 0700))
++	{
+ 		TH_LOG("Failed to create directory \"%s\": %s", path,
+-				strerror(errno));
++		       strerror(errno));
+ 	}
+ }
+ 
+ static void create_file(struct __test_metadata *const _metadata,
+-		const char *const path)
++			const char *const path)
+ {
+ 	mkdir_parents(_metadata, path);
+-	ASSERT_EQ(0, mknod(path, S_IFREG | 0700, 0)) {
++	ASSERT_EQ(0, mknod(path, S_IFREG | 0700, 0))
++	{
+ 		TH_LOG("Failed to create file \"%s\": %s", path,
+-				strerror(errno));
++		       strerror(errno));
+ 	}
+ }
+ 
+@@ -221,8 +237,9 @@ static void remove_layout1(struct __test_metadata *const _metadata)
+ 	EXPECT_EQ(0, remove_path(dir_s3d2));
+ }
+ 
+-FIXTURE(layout1) {
+-};
++/* clang-format off */
++FIXTURE(layout1) {};
++/* clang-format on */
+ 
+ FIXTURE_SETUP(layout1)
+ {
+@@ -242,7 +259,8 @@ FIXTURE_TEARDOWN(layout1)
+  * This helper enables to use the ASSERT_* macros and print the line number
+  * pointing to the test caller.
+  */
+-static int test_open_rel(const int dirfd, const char *const path, const int flags)
++static int test_open_rel(const int dirfd, const char *const path,
++			 const int flags)
+ {
+ 	int fd;
+ 
+@@ -291,23 +309,23 @@ TEST_F_FORK(layout1, inval)
+ {
+ 	struct landlock_path_beneath_attr path_beneath = {
+ 		.allowed_access = LANDLOCK_ACCESS_FS_READ_FILE |
+-			LANDLOCK_ACCESS_FS_WRITE_FILE,
++				  LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 		.parent_fd = -1,
+ 	};
+ 	struct landlock_ruleset_attr ruleset_attr = {
+ 		.handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE |
+-			LANDLOCK_ACCESS_FS_WRITE_FILE,
++				     LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 	};
+ 	int ruleset_fd;
+ 
+-	path_beneath.parent_fd = open(dir_s1d2, O_PATH | O_DIRECTORY |
+-			O_CLOEXEC);
++	path_beneath.parent_fd =
++		open(dir_s1d2, O_PATH | O_DIRECTORY | O_CLOEXEC);
+ 	ASSERT_LE(0, path_beneath.parent_fd);
+ 
+ 	ruleset_fd = open(dir_s1d1, O_PATH | O_DIRECTORY | O_CLOEXEC);
+ 	ASSERT_LE(0, ruleset_fd);
+ 	ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+-				&path_beneath, 0));
++					&path_beneath, 0));
+ 	/* Returns EBADF because ruleset_fd is not a landlock-ruleset FD. */
+ 	ASSERT_EQ(EBADF, errno);
+ 	ASSERT_EQ(0, close(ruleset_fd));
+@@ -315,55 +333,55 @@ TEST_F_FORK(layout1, inval)
+ 	ruleset_fd = open(dir_s1d1, O_DIRECTORY | O_CLOEXEC);
+ 	ASSERT_LE(0, ruleset_fd);
+ 	ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+-				&path_beneath, 0));
++					&path_beneath, 0));
+ 	/* Returns EBADFD because ruleset_fd is not a valid ruleset. */
+ 	ASSERT_EQ(EBADFD, errno);
+ 	ASSERT_EQ(0, close(ruleset_fd));
+ 
+ 	/* Gets a real ruleset. */
+-	ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+-			sizeof(ruleset_attr), 0);
++	ruleset_fd =
++		landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ 	ASSERT_LE(0, ruleset_fd);
+ 	ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+-				&path_beneath, 0));
++				       &path_beneath, 0));
+ 	ASSERT_EQ(0, close(path_beneath.parent_fd));
+ 
+ 	/* Tests without O_PATH. */
+ 	path_beneath.parent_fd = open(dir_s1d2, O_DIRECTORY | O_CLOEXEC);
+ 	ASSERT_LE(0, path_beneath.parent_fd);
+ 	ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+-				&path_beneath, 0));
++				       &path_beneath, 0));
+ 	ASSERT_EQ(0, close(path_beneath.parent_fd));
+ 
+ 	/* Tests with a ruleset FD. */
+ 	path_beneath.parent_fd = ruleset_fd;
+ 	ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+-				&path_beneath, 0));
++					&path_beneath, 0));
+ 	ASSERT_EQ(EBADFD, errno);
+ 
+ 	/* Checks unhandled allowed_access. */
+-	path_beneath.parent_fd = open(dir_s1d2, O_PATH | O_DIRECTORY |
+-			O_CLOEXEC);
++	path_beneath.parent_fd =
++		open(dir_s1d2, O_PATH | O_DIRECTORY | O_CLOEXEC);
+ 	ASSERT_LE(0, path_beneath.parent_fd);
+ 
+ 	/* Test with legitimate values. */
+ 	path_beneath.allowed_access |= LANDLOCK_ACCESS_FS_EXECUTE;
+ 	ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+-				&path_beneath, 0));
++					&path_beneath, 0));
+ 	ASSERT_EQ(EINVAL, errno);
+ 	path_beneath.allowed_access &= ~LANDLOCK_ACCESS_FS_EXECUTE;
+ 
+ 	/* Test with unknown (64-bits) value. */
+ 	path_beneath.allowed_access |= (1ULL << 60);
+ 	ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+-				&path_beneath, 0));
++					&path_beneath, 0));
+ 	ASSERT_EQ(EINVAL, errno);
+ 	path_beneath.allowed_access &= ~(1ULL << 60);
+ 
+ 	/* Test with no access. */
+ 	path_beneath.allowed_access = 0;
+ 	ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+-				&path_beneath, 0));
++					&path_beneath, 0));
+ 	ASSERT_EQ(ENOMSG, errno);
+ 	path_beneath.allowed_access &= ~(1ULL << 60);
+ 
+@@ -376,6 +394,8 @@ TEST_F_FORK(layout1, inval)
+ 	ASSERT_EQ(0, close(ruleset_fd));
+ }
+ 
++/* clang-format off */
++
+ #define ACCESS_FILE ( \
+ 	LANDLOCK_ACCESS_FS_EXECUTE | \
+ 	LANDLOCK_ACCESS_FS_WRITE_FILE | \
+@@ -396,53 +416,87 @@ TEST_F_FORK(layout1, inval)
+ 	LANDLOCK_ACCESS_FS_MAKE_BLOCK | \
+ 	ACCESS_LAST)
+ 
+-TEST_F_FORK(layout1, file_access_rights)
++/* clang-format on */
++
++TEST_F_FORK(layout1, file_and_dir_access_rights)
+ {
+ 	__u64 access;
+ 	int err;
+-	struct landlock_path_beneath_attr path_beneath = {};
++	struct landlock_path_beneath_attr path_beneath_file = {},
++					  path_beneath_dir = {};
+ 	struct landlock_ruleset_attr ruleset_attr = {
+ 		.handled_access_fs = ACCESS_ALL,
+ 	};
+-	const int ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+-			sizeof(ruleset_attr), 0);
++	const int ruleset_fd =
++		landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ 
+ 	ASSERT_LE(0, ruleset_fd);
+ 
+ 	/* Tests access rights for files. */
+-	path_beneath.parent_fd = open(file1_s1d2, O_PATH | O_CLOEXEC);
+-	ASSERT_LE(0, path_beneath.parent_fd);
++	path_beneath_file.parent_fd = open(file1_s1d2, O_PATH | O_CLOEXEC);
++	ASSERT_LE(0, path_beneath_file.parent_fd);
++
++	/* Tests access rights for directories. */
++	path_beneath_dir.parent_fd =
++		open(dir_s1d2, O_PATH | O_DIRECTORY | O_CLOEXEC);
++	ASSERT_LE(0, path_beneath_dir.parent_fd);
++
+ 	for (access = 1; access <= ACCESS_LAST; access <<= 1) {
+-		path_beneath.allowed_access = access;
++		path_beneath_dir.allowed_access = access;
++		ASSERT_EQ(0, landlock_add_rule(ruleset_fd,
++					       LANDLOCK_RULE_PATH_BENEATH,
++					       &path_beneath_dir, 0));
++
++		path_beneath_file.allowed_access = access;
+ 		err = landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+-				&path_beneath, 0);
+-		if ((access | ACCESS_FILE) == ACCESS_FILE) {
++					&path_beneath_file, 0);
++		if (access & ACCESS_FILE) {
+ 			ASSERT_EQ(0, err);
+ 		} else {
+ 			ASSERT_EQ(-1, err);
+ 			ASSERT_EQ(EINVAL, errno);
+ 		}
+ 	}
+-	ASSERT_EQ(0, close(path_beneath.parent_fd));
++	ASSERT_EQ(0, close(path_beneath_file.parent_fd));
++	ASSERT_EQ(0, close(path_beneath_dir.parent_fd));
++	ASSERT_EQ(0, close(ruleset_fd));
++}
++
++TEST_F_FORK(layout1, unknown_access_rights)
++{
++	__u64 access_mask;
++
++	for (access_mask = 1ULL << 63; access_mask != ACCESS_LAST;
++	     access_mask >>= 1) {
++		struct landlock_ruleset_attr ruleset_attr = {
++			.handled_access_fs = access_mask,
++		};
++
++		ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr,
++						      sizeof(ruleset_attr), 0));
++		ASSERT_EQ(EINVAL, errno);
++	}
+ }
+ 
+ static void add_path_beneath(struct __test_metadata *const _metadata,
+-		const int ruleset_fd, const __u64 allowed_access,
+-		const char *const path)
++			     const int ruleset_fd, const __u64 allowed_access,
++			     const char *const path)
+ {
+ 	struct landlock_path_beneath_attr path_beneath = {
+ 		.allowed_access = allowed_access,
+ 	};
+ 
+ 	path_beneath.parent_fd = open(path, O_PATH | O_CLOEXEC);
+-	ASSERT_LE(0, path_beneath.parent_fd) {
++	ASSERT_LE(0, path_beneath.parent_fd)
++	{
+ 		TH_LOG("Failed to open directory \"%s\": %s", path,
+-				strerror(errno));
++		       strerror(errno));
+ 	}
+ 	ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+-				&path_beneath, 0)) {
++				       &path_beneath, 0))
++	{
+ 		TH_LOG("Failed to update the ruleset with \"%s\": %s", path,
+-				strerror(errno));
++		       strerror(errno));
+ 	}
+ 	ASSERT_EQ(0, close(path_beneath.parent_fd));
+ }
+@@ -452,6 +506,8 @@ struct rule {
+ 	__u64 access;
+ };
+ 
++/* clang-format off */
++
+ #define ACCESS_RO ( \
+ 	LANDLOCK_ACCESS_FS_READ_FILE | \
+ 	LANDLOCK_ACCESS_FS_READ_DIR)
+@@ -460,39 +516,46 @@ struct rule {
+ 	ACCESS_RO | \
+ 	LANDLOCK_ACCESS_FS_WRITE_FILE)
+ 
++/* clang-format on */
++
+ static int create_ruleset(struct __test_metadata *const _metadata,
+-		const __u64 handled_access_fs, const struct rule rules[])
++			  const __u64 handled_access_fs,
++			  const struct rule rules[])
+ {
+ 	int ruleset_fd, i;
+ 	struct landlock_ruleset_attr ruleset_attr = {
+ 		.handled_access_fs = handled_access_fs,
+ 	};
+ 
+-	ASSERT_NE(NULL, rules) {
++	ASSERT_NE(NULL, rules)
++	{
+ 		TH_LOG("No rule list");
+ 	}
+-	ASSERT_NE(NULL, rules[0].path) {
++	ASSERT_NE(NULL, rules[0].path)
++	{
+ 		TH_LOG("Empty rule list");
+ 	}
+ 
+-	ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+-			sizeof(ruleset_attr), 0);
+-	ASSERT_LE(0, ruleset_fd) {
++	ruleset_fd =
++		landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
++	ASSERT_LE(0, ruleset_fd)
++	{
+ 		TH_LOG("Failed to create a ruleset: %s", strerror(errno));
+ 	}
+ 
+ 	for (i = 0; rules[i].path; i++) {
+ 		add_path_beneath(_metadata, ruleset_fd, rules[i].access,
+-				rules[i].path);
++				 rules[i].path);
+ 	}
+ 	return ruleset_fd;
+ }
+ 
+ static void enforce_ruleset(struct __test_metadata *const _metadata,
+-		const int ruleset_fd)
++			    const int ruleset_fd)
+ {
+ 	ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
+-	ASSERT_EQ(0, landlock_restrict_self(ruleset_fd, 0)) {
++	ASSERT_EQ(0, landlock_restrict_self(ruleset_fd, 0))
++	{
+ 		TH_LOG("Failed to enforce ruleset: %s", strerror(errno));
+ 	}
+ }
+@@ -503,13 +566,14 @@ TEST_F_FORK(layout1, proc_nsfs)
+ 		{
+ 			.path = "/dev/null",
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE |
+-				LANDLOCK_ACCESS_FS_WRITE_FILE,
++				  LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	struct landlock_path_beneath_attr path_beneath;
+-	const int ruleset_fd = create_ruleset(_metadata, rules[0].access |
+-			LANDLOCK_ACCESS_FS_READ_DIR, rules);
++	const int ruleset_fd = create_ruleset(
++		_metadata, rules[0].access | LANDLOCK_ACCESS_FS_READ_DIR,
++		rules);
+ 
+ 	ASSERT_LE(0, ruleset_fd);
+ 	ASSERT_EQ(0, test_open("/proc/self/ns/mnt", O_RDONLY));
+@@ -536,22 +600,23 @@ TEST_F_FORK(layout1, proc_nsfs)
+ 	 * references to a ruleset.
+ 	 */
+ 	path_beneath.allowed_access = LANDLOCK_ACCESS_FS_READ_FILE |
+-		LANDLOCK_ACCESS_FS_WRITE_FILE,
++				      LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 	path_beneath.parent_fd = open("/proc/self/ns/mnt", O_PATH | O_CLOEXEC);
+ 	ASSERT_LE(0, path_beneath.parent_fd);
+ 	ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
+-				&path_beneath, 0));
++					&path_beneath, 0));
+ 	ASSERT_EQ(EBADFD, errno);
+ 	ASSERT_EQ(0, close(path_beneath.parent_fd));
+ }
+ 
+-TEST_F_FORK(layout1, unpriv) {
++TEST_F_FORK(layout1, unpriv)
++{
+ 	const struct rule rules[] = {
+ 		{
+ 			.path = dir_s1d2,
+ 			.access = ACCESS_RO,
+ 		},
+-		{}
++		{},
+ 	};
+ 	int ruleset_fd;
+ 
+@@ -577,9 +642,9 @@ TEST_F_FORK(layout1, effective_access)
+ 		{
+ 			.path = file1_s2d2,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE |
+-				LANDLOCK_ACCESS_FS_WRITE_FILE,
++				  LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+ 	char buf;
+@@ -589,17 +654,23 @@ TEST_F_FORK(layout1, effective_access)
+ 	enforce_ruleset(_metadata, ruleset_fd);
+ 	ASSERT_EQ(0, close(ruleset_fd));
+ 
+-	/* Tests on a directory. */
++	/* Tests on a directory (with or without O_PATH). */
+ 	ASSERT_EQ(EACCES, test_open("/", O_RDONLY));
++	ASSERT_EQ(0, test_open("/", O_RDONLY | O_PATH));
+ 	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY));
++	ASSERT_EQ(0, test_open(dir_s1d1, O_RDONLY | O_PATH));
+ 	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
++	ASSERT_EQ(0, test_open(file1_s1d1, O_RDONLY | O_PATH));
++
+ 	ASSERT_EQ(0, test_open(dir_s1d2, O_RDONLY));
+ 	ASSERT_EQ(0, test_open(file1_s1d2, O_RDONLY));
+ 	ASSERT_EQ(0, test_open(dir_s1d3, O_RDONLY));
+ 	ASSERT_EQ(0, test_open(file1_s1d3, O_RDONLY));
+ 
+-	/* Tests on a file. */
++	/* Tests on a file (with or without O_PATH). */
+ 	ASSERT_EQ(EACCES, test_open(dir_s2d2, O_RDONLY));
++	ASSERT_EQ(0, test_open(dir_s2d2, O_RDONLY | O_PATH));
++
+ 	ASSERT_EQ(0, test_open(file1_s2d2, O_RDONLY));
+ 
+ 	/* Checks effective read and write actions. */
+@@ -626,7 +697,7 @@ TEST_F_FORK(layout1, unhandled_access)
+ 			.path = dir_s1d2,
+ 			.access = ACCESS_RO,
+ 		},
+-		{}
++		{},
+ 	};
+ 	/* Here, we only handle read accesses, not write accesses. */
+ 	const int ruleset_fd = create_ruleset(_metadata, ACCESS_RO, rules);
+@@ -653,14 +724,14 @@ TEST_F_FORK(layout1, ruleset_overlap)
+ 		{
+ 			.path = dir_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE |
+-				LANDLOCK_ACCESS_FS_WRITE_FILE,
++				  LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 		},
+ 		{
+ 			.path = dir_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE |
+-				LANDLOCK_ACCESS_FS_READ_DIR,
++				  LANDLOCK_ACCESS_FS_READ_DIR,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+ 
+@@ -687,6 +758,113 @@ TEST_F_FORK(layout1, ruleset_overlap)
+ 	ASSERT_EQ(0, test_open(dir_s1d3, O_RDONLY | O_DIRECTORY));
+ }
+ 
++TEST_F_FORK(layout1, layer_rule_unions)
++{
++	const struct rule layer1[] = {
++		{
++			.path = dir_s1d2,
++			.access = LANDLOCK_ACCESS_FS_READ_FILE,
++		},
++		/* dir_s1d3 should allow READ_FILE and WRITE_FILE (O_RDWR). */
++		{
++			.path = dir_s1d3,
++			.access = LANDLOCK_ACCESS_FS_WRITE_FILE,
++		},
++		{},
++	};
++	const struct rule layer2[] = {
++		/* Doesn't change anything from layer1. */
++		{
++			.path = dir_s1d2,
++			.access = LANDLOCK_ACCESS_FS_READ_FILE |
++				  LANDLOCK_ACCESS_FS_WRITE_FILE,
++		},
++		{},
++	};
++	const struct rule layer3[] = {
++		/* Only allows write (but not read) to dir_s1d3. */
++		{
++			.path = dir_s1d2,
++			.access = LANDLOCK_ACCESS_FS_WRITE_FILE,
++		},
++		{},
++	};
++	int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer1);
++
++	ASSERT_LE(0, ruleset_fd);
++	enforce_ruleset(_metadata, ruleset_fd);
++	ASSERT_EQ(0, close(ruleset_fd));
++
++	/* Checks s1d1 hierarchy with layer1. */
++	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
++	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_WRONLY));
++	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDWR));
++	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++	/* Checks s1d2 hierarchy with layer1. */
++	ASSERT_EQ(0, test_open(file1_s1d2, O_RDONLY));
++	ASSERT_EQ(EACCES, test_open(file1_s1d2, O_WRONLY));
++	ASSERT_EQ(EACCES, test_open(file1_s1d2, O_RDWR));
++	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++	/* Checks s1d3 hierarchy with layer1. */
++	ASSERT_EQ(0, test_open(file1_s1d3, O_RDONLY));
++	ASSERT_EQ(0, test_open(file1_s1d3, O_WRONLY));
++	/* dir_s1d3 should allow READ_FILE and WRITE_FILE (O_RDWR). */
++	ASSERT_EQ(0, test_open(file1_s1d3, O_RDWR));
++	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++	/* Doesn't change anything from layer1. */
++	ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer2);
++	ASSERT_LE(0, ruleset_fd);
++	enforce_ruleset(_metadata, ruleset_fd);
++	ASSERT_EQ(0, close(ruleset_fd));
++
++	/* Checks s1d1 hierarchy with layer2. */
++	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
++	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_WRONLY));
++	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDWR));
++	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++	/* Checks s1d2 hierarchy with layer2. */
++	ASSERT_EQ(0, test_open(file1_s1d2, O_RDONLY));
++	ASSERT_EQ(EACCES, test_open(file1_s1d2, O_WRONLY));
++	ASSERT_EQ(EACCES, test_open(file1_s1d2, O_RDWR));
++	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++	/* Checks s1d3 hierarchy with layer2. */
++	ASSERT_EQ(0, test_open(file1_s1d3, O_RDONLY));
++	ASSERT_EQ(0, test_open(file1_s1d3, O_WRONLY));
++	/* dir_s1d3 should allow READ_FILE and WRITE_FILE (O_RDWR). */
++	ASSERT_EQ(0, test_open(file1_s1d3, O_RDWR));
++	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++	/* Only allows write (but not read) to dir_s1d3. */
++	ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer3);
++	ASSERT_LE(0, ruleset_fd);
++	enforce_ruleset(_metadata, ruleset_fd);
++	ASSERT_EQ(0, close(ruleset_fd));
++
++	/* Checks s1d1 hierarchy with layer3. */
++	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
++	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_WRONLY));
++	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDWR));
++	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++	/* Checks s1d2 hierarchy with layer3. */
++	ASSERT_EQ(EACCES, test_open(file1_s1d2, O_RDONLY));
++	ASSERT_EQ(EACCES, test_open(file1_s1d2, O_WRONLY));
++	ASSERT_EQ(EACCES, test_open(file1_s1d2, O_RDWR));
++	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++
++	/* Checks s1d3 hierarchy with layer3. */
++	ASSERT_EQ(EACCES, test_open(file1_s1d3, O_RDONLY));
++	ASSERT_EQ(0, test_open(file1_s1d3, O_WRONLY));
++	/* dir_s1d3 should now deny READ_FILE and WRITE_FILE (O_RDWR). */
++	ASSERT_EQ(EACCES, test_open(file1_s1d3, O_RDWR));
++	ASSERT_EQ(EACCES, test_open(dir_s1d1, O_RDONLY | O_DIRECTORY));
++}
++
+ TEST_F_FORK(layout1, non_overlapping_accesses)
+ {
+ 	const struct rule layer1[] = {
+@@ -694,22 +872,22 @@ TEST_F_FORK(layout1, non_overlapping_accesses)
+ 			.path = dir_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_MAKE_REG,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const struct rule layer2[] = {
+ 		{
+ 			.path = dir_s1d3,
+ 			.access = LANDLOCK_ACCESS_FS_REMOVE_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	int ruleset_fd;
+ 
+ 	ASSERT_EQ(0, unlink(file1_s1d1));
+ 	ASSERT_EQ(0, unlink(file1_s1d2));
+ 
+-	ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_MAKE_REG,
+-			layer1);
++	ruleset_fd =
++		create_ruleset(_metadata, LANDLOCK_ACCESS_FS_MAKE_REG, layer1);
+ 	ASSERT_LE(0, ruleset_fd);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+ 	ASSERT_EQ(0, close(ruleset_fd));
+@@ -720,7 +898,7 @@ TEST_F_FORK(layout1, non_overlapping_accesses)
+ 	ASSERT_EQ(0, unlink(file1_s1d2));
+ 
+ 	ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_REMOVE_FILE,
+-			layer2);
++				    layer2);
+ 	ASSERT_LE(0, ruleset_fd);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+ 	ASSERT_EQ(0, close(ruleset_fd));
+@@ -758,7 +936,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ 			.path = file1_s1d3,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	/* First rule with write restrictions. */
+ 	const struct rule layer2_read_write[] = {
+@@ -766,14 +944,14 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ 		{
+ 			.path = dir_s1d3,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE |
+-				LANDLOCK_ACCESS_FS_WRITE_FILE,
++				  LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 		},
+ 		/* ...but also denies read access via its grandparent directory. */
+ 		{
+ 			.path = dir_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const struct rule layer3_read[] = {
+ 		/* Allows read access via its great-grandparent directory. */
+@@ -781,7 +959,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ 			.path = dir_s1d1,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const struct rule layer4_read_write[] = {
+ 		/*
+@@ -792,7 +970,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ 			.path = dir_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const struct rule layer5_read[] = {
+ 		/*
+@@ -803,7 +981,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ 			.path = dir_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const struct rule layer6_execute[] = {
+ 		/*
+@@ -814,7 +992,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ 			.path = dir_s2d1,
+ 			.access = LANDLOCK_ACCESS_FS_EXECUTE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const struct rule layer7_read_write[] = {
+ 		/*
+@@ -825,12 +1003,12 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ 			.path = dir_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	int ruleset_fd;
+ 
+ 	ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE,
+-			layer1_read);
++				    layer1_read);
+ 	ASSERT_LE(0, ruleset_fd);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+ 	ASSERT_EQ(0, close(ruleset_fd));
+@@ -840,8 +1018,10 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ 	ASSERT_EQ(EACCES, test_open(file2_s1d3, O_RDONLY));
+ 	ASSERT_EQ(0, test_open(file2_s1d3, O_WRONLY));
+ 
+-	ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE |
+-			LANDLOCK_ACCESS_FS_WRITE_FILE, layer2_read_write);
++	ruleset_fd = create_ruleset(_metadata,
++				    LANDLOCK_ACCESS_FS_READ_FILE |
++					    LANDLOCK_ACCESS_FS_WRITE_FILE,
++				    layer2_read_write);
+ 	ASSERT_LE(0, ruleset_fd);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+ 	ASSERT_EQ(0, close(ruleset_fd));
+@@ -852,7 +1032,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ 	ASSERT_EQ(0, test_open(file2_s1d3, O_WRONLY));
+ 
+ 	ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE,
+-			layer3_read);
++				    layer3_read);
+ 	ASSERT_LE(0, ruleset_fd);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+ 	ASSERT_EQ(0, close(ruleset_fd));
+@@ -863,8 +1043,10 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ 	ASSERT_EQ(0, test_open(file2_s1d3, O_WRONLY));
+ 
+ 	/* This time, denies write access for the file hierarchy. */
+-	ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE |
+-			LANDLOCK_ACCESS_FS_WRITE_FILE, layer4_read_write);
++	ruleset_fd = create_ruleset(_metadata,
++				    LANDLOCK_ACCESS_FS_READ_FILE |
++					    LANDLOCK_ACCESS_FS_WRITE_FILE,
++				    layer4_read_write);
+ 	ASSERT_LE(0, ruleset_fd);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+ 	ASSERT_EQ(0, close(ruleset_fd));
+@@ -879,7 +1061,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ 	ASSERT_EQ(EACCES, test_open(file2_s1d3, O_WRONLY));
+ 
+ 	ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE,
+-			layer5_read);
++				    layer5_read);
+ 	ASSERT_LE(0, ruleset_fd);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+ 	ASSERT_EQ(0, close(ruleset_fd));
+@@ -891,7 +1073,7 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ 	ASSERT_EQ(EACCES, test_open(file2_s1d3, O_RDONLY));
+ 
+ 	ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_EXECUTE,
+-			layer6_execute);
++				    layer6_execute);
+ 	ASSERT_LE(0, ruleset_fd);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+ 	ASSERT_EQ(0, close(ruleset_fd));
+@@ -902,8 +1084,10 @@ TEST_F_FORK(layout1, interleaved_masked_accesses)
+ 	ASSERT_EQ(EACCES, test_open(file2_s1d3, O_WRONLY));
+ 	ASSERT_EQ(EACCES, test_open(file2_s1d3, O_RDONLY));
+ 
+-	ruleset_fd = create_ruleset(_metadata, LANDLOCK_ACCESS_FS_READ_FILE |
+-			LANDLOCK_ACCESS_FS_WRITE_FILE, layer7_read_write);
++	ruleset_fd = create_ruleset(_metadata,
++				    LANDLOCK_ACCESS_FS_READ_FILE |
++					    LANDLOCK_ACCESS_FS_WRITE_FILE,
++				    layer7_read_write);
+ 	ASSERT_LE(0, ruleset_fd);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+ 	ASSERT_EQ(0, close(ruleset_fd));
+@@ -921,9 +1105,9 @@ TEST_F_FORK(layout1, inherit_subset)
+ 		{
+ 			.path = dir_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE |
+-				LANDLOCK_ACCESS_FS_READ_DIR,
++				  LANDLOCK_ACCESS_FS_READ_DIR,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+ 
+@@ -949,7 +1133,7 @@ TEST_F_FORK(layout1, inherit_subset)
+ 	 * ANDed with the previous ones.
+ 	 */
+ 	add_path_beneath(_metadata, ruleset_fd, LANDLOCK_ACCESS_FS_WRITE_FILE,
+-			dir_s1d2);
++			 dir_s1d2);
+ 	/*
+ 	 * According to ruleset_fd, dir_s1d2 should now have the
+ 	 * LANDLOCK_ACCESS_FS_READ_FILE and LANDLOCK_ACCESS_FS_WRITE_FILE
+@@ -1004,7 +1188,7 @@ TEST_F_FORK(layout1, inherit_subset)
+ 	 * that there was no rule tied to it before.
+ 	 */
+ 	add_path_beneath(_metadata, ruleset_fd, LANDLOCK_ACCESS_FS_WRITE_FILE,
+-			dir_s1d3);
++			 dir_s1d3);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+ 	ASSERT_EQ(0, close(ruleset_fd));
+ 
+@@ -1039,7 +1223,7 @@ TEST_F_FORK(layout1, inherit_superset)
+ 			.path = dir_s1d3,
+ 			.access = ACCESS_RO,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+ 
+@@ -1054,8 +1238,10 @@ TEST_F_FORK(layout1, inherit_superset)
+ 	ASSERT_EQ(0, test_open(file1_s1d3, O_RDONLY));
+ 
+ 	/* Now dir_s1d2, parent of dir_s1d3, gets a new rule tied to it. */
+-	add_path_beneath(_metadata, ruleset_fd, LANDLOCK_ACCESS_FS_READ_FILE |
+-			LANDLOCK_ACCESS_FS_READ_DIR, dir_s1d2);
++	add_path_beneath(_metadata, ruleset_fd,
++			 LANDLOCK_ACCESS_FS_READ_FILE |
++				 LANDLOCK_ACCESS_FS_READ_DIR,
++			 dir_s1d2);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+ 	ASSERT_EQ(0, close(ruleset_fd));
+ 
+@@ -1075,12 +1261,12 @@ TEST_F_FORK(layout1, max_layers)
+ 			.path = dir_s1d2,
+ 			.access = ACCESS_RO,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+ 
+ 	ASSERT_LE(0, ruleset_fd);
+-	for (i = 0; i < 64; i++)
++	for (i = 0; i < 16; i++)
+ 		enforce_ruleset(_metadata, ruleset_fd);
+ 
+ 	for (i = 0; i < 2; i++) {
+@@ -1097,15 +1283,15 @@ TEST_F_FORK(layout1, empty_or_same_ruleset)
+ 	int ruleset_fd;
+ 
+ 	/* Tests empty handled_access_fs. */
+-	ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+-			sizeof(ruleset_attr), 0);
++	ruleset_fd =
++		landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ 	ASSERT_LE(-1, ruleset_fd);
+ 	ASSERT_EQ(ENOMSG, errno);
+ 
+ 	/* Enforces policy which deny read access to all files. */
+ 	ruleset_attr.handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE;
+-	ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+-			sizeof(ruleset_attr), 0);
++	ruleset_fd =
++		landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ 	ASSERT_LE(0, ruleset_fd);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+ 	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
+@@ -1113,8 +1299,8 @@ TEST_F_FORK(layout1, empty_or_same_ruleset)
+ 
+ 	/* Nests a policy which deny read access to all directories. */
+ 	ruleset_attr.handled_access_fs = LANDLOCK_ACCESS_FS_READ_DIR;
+-	ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+-			sizeof(ruleset_attr), 0);
++	ruleset_fd =
++		landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
+ 	ASSERT_LE(0, ruleset_fd);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+ 	ASSERT_EQ(EACCES, test_open(file1_s1d1, O_RDONLY));
+@@ -1137,7 +1323,7 @@ TEST_F_FORK(layout1, rule_on_mountpoint)
+ 			.path = dir_s3d2,
+ 			.access = ACCESS_RO,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+ 
+@@ -1166,7 +1352,7 @@ TEST_F_FORK(layout1, rule_over_mountpoint)
+ 			.path = dir_s3d1,
+ 			.access = ACCESS_RO,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+ 
+@@ -1194,7 +1380,7 @@ TEST_F_FORK(layout1, rule_over_root_allow_then_deny)
+ 			.path = "/",
+ 			.access = ACCESS_RO,
+ 		},
+-		{}
++		{},
+ 	};
+ 	int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+ 
+@@ -1224,7 +1410,7 @@ TEST_F_FORK(layout1, rule_over_root_deny)
+ 			.path = "/",
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+ 
+@@ -1244,12 +1430,13 @@ TEST_F_FORK(layout1, rule_inside_mount_ns)
+ 			.path = "s3d3",
+ 			.access = ACCESS_RO,
+ 		},
+-		{}
++		{},
+ 	};
+ 	int ruleset_fd;
+ 
+ 	set_cap(_metadata, CAP_SYS_ADMIN);
+-	ASSERT_EQ(0, syscall(SYS_pivot_root, dir_s3d2, dir_s3d3)) {
++	ASSERT_EQ(0, syscall(__NR_pivot_root, dir_s3d2, dir_s3d3))
++	{
+ 		TH_LOG("Failed to pivot root: %s", strerror(errno));
+ 	};
+ 	ASSERT_EQ(0, chdir("/"));
+@@ -1271,7 +1458,7 @@ TEST_F_FORK(layout1, mount_and_pivot)
+ 			.path = dir_s3d2,
+ 			.access = ACCESS_RO,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+ 
+@@ -1282,7 +1469,7 @@ TEST_F_FORK(layout1, mount_and_pivot)
+ 	set_cap(_metadata, CAP_SYS_ADMIN);
+ 	ASSERT_EQ(-1, mount(NULL, dir_s3d2, NULL, MS_RDONLY, NULL));
+ 	ASSERT_EQ(EPERM, errno);
+-	ASSERT_EQ(-1, syscall(SYS_pivot_root, dir_s3d2, dir_s3d3));
++	ASSERT_EQ(-1, syscall(__NR_pivot_root, dir_s3d2, dir_s3d3));
+ 	ASSERT_EQ(EPERM, errno);
+ 	clear_cap(_metadata, CAP_SYS_ADMIN);
+ }
+@@ -1294,28 +1481,29 @@ TEST_F_FORK(layout1, move_mount)
+ 			.path = dir_s3d2,
+ 			.access = ACCESS_RO,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+ 
+ 	ASSERT_LE(0, ruleset_fd);
+ 
+ 	set_cap(_metadata, CAP_SYS_ADMIN);
+-	ASSERT_EQ(0, syscall(SYS_move_mount, AT_FDCWD, dir_s3d2, AT_FDCWD,
+-				dir_s1d2, 0)) {
++	ASSERT_EQ(0, syscall(__NR_move_mount, AT_FDCWD, dir_s3d2, AT_FDCWD,
++			     dir_s1d2, 0))
++	{
+ 		TH_LOG("Failed to move mount: %s", strerror(errno));
+ 	}
+ 
+-	ASSERT_EQ(0, syscall(SYS_move_mount, AT_FDCWD, dir_s1d2, AT_FDCWD,
+-				dir_s3d2, 0));
++	ASSERT_EQ(0, syscall(__NR_move_mount, AT_FDCWD, dir_s1d2, AT_FDCWD,
++			     dir_s3d2, 0));
+ 	clear_cap(_metadata, CAP_SYS_ADMIN);
+ 
+ 	enforce_ruleset(_metadata, ruleset_fd);
+ 	ASSERT_EQ(0, close(ruleset_fd));
+ 
+ 	set_cap(_metadata, CAP_SYS_ADMIN);
+-	ASSERT_EQ(-1, syscall(SYS_move_mount, AT_FDCWD, dir_s3d2, AT_FDCWD,
+-				dir_s1d2, 0));
++	ASSERT_EQ(-1, syscall(__NR_move_mount, AT_FDCWD, dir_s3d2, AT_FDCWD,
++			      dir_s1d2, 0));
+ 	ASSERT_EQ(EPERM, errno);
+ 	clear_cap(_metadata, CAP_SYS_ADMIN);
+ }
+@@ -1335,7 +1523,7 @@ TEST_F_FORK(layout1, release_inodes)
+ 			.path = dir_s3d3,
+ 			.access = ACCESS_RO,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const int ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules);
+ 
+@@ -1362,7 +1550,7 @@ enum relative_access {
+ };
+ 
+ static void test_relative_path(struct __test_metadata *const _metadata,
+-		const enum relative_access rel)
++			       const enum relative_access rel)
+ {
+ 	/*
+ 	 * Common layer to check that chroot doesn't ignore it (i.e. a chroot
+@@ -1373,7 +1561,7 @@ static void test_relative_path(struct __test_metadata *const _metadata,
+ 			.path = TMP_DIR,
+ 			.access = ACCESS_RO,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const struct rule layer2_subs[] = {
+ 		{
+@@ -1384,7 +1572,7 @@ static void test_relative_path(struct __test_metadata *const _metadata,
+ 			.path = dir_s2d2,
+ 			.access = ACCESS_RO,
+ 		},
+-		{}
++		{},
+ 	};
+ 	int dirfd, ruleset_fd;
+ 
+@@ -1425,14 +1613,16 @@ static void test_relative_path(struct __test_metadata *const _metadata,
+ 		break;
+ 	case REL_CHROOT_ONLY:
+ 		/* Do chroot into dir_s1d2 (relative to dir_s2d2). */
+-		ASSERT_EQ(0, chroot("../../s1d1/s1d2")) {
++		ASSERT_EQ(0, chroot("../../s1d1/s1d2"))
++		{
+ 			TH_LOG("Failed to chroot: %s", strerror(errno));
+ 		}
+ 		dirfd = AT_FDCWD;
+ 		break;
+ 	case REL_CHROOT_CHDIR:
+ 		/* Do chroot into dir_s1d2. */
+-		ASSERT_EQ(0, chroot(".")) {
++		ASSERT_EQ(0, chroot("."))
++		{
+ 			TH_LOG("Failed to chroot: %s", strerror(errno));
+ 		}
+ 		dirfd = AT_FDCWD;
+@@ -1440,7 +1630,7 @@ static void test_relative_path(struct __test_metadata *const _metadata,
+ 	}
+ 
+ 	ASSERT_EQ((rel == REL_CHROOT_CHDIR) ? 0 : EACCES,
+-			test_open_rel(dirfd, "..", O_RDONLY));
++		  test_open_rel(dirfd, "..", O_RDONLY));
+ 	ASSERT_EQ(0, test_open_rel(dirfd, ".", O_RDONLY));
+ 
+ 	if (rel == REL_CHROOT_ONLY) {
+@@ -1462,11 +1652,13 @@ static void test_relative_path(struct __test_metadata *const _metadata,
+ 	if (rel != REL_CHROOT_CHDIR) {
+ 		ASSERT_EQ(EACCES, test_open_rel(dirfd, "../../s1d1", O_RDONLY));
+ 		ASSERT_EQ(0, test_open_rel(dirfd, "../../s1d1/s1d2", O_RDONLY));
+-		ASSERT_EQ(0, test_open_rel(dirfd, "../../s1d1/s1d2/s1d3", O_RDONLY));
++		ASSERT_EQ(0, test_open_rel(dirfd, "../../s1d1/s1d2/s1d3",
++					   O_RDONLY));
+ 
+ 		ASSERT_EQ(EACCES, test_open_rel(dirfd, "../../s2d1", O_RDONLY));
+ 		ASSERT_EQ(0, test_open_rel(dirfd, "../../s2d1/s2d2", O_RDONLY));
+-		ASSERT_EQ(0, test_open_rel(dirfd, "../../s2d1/s2d2/s2d3", O_RDONLY));
++		ASSERT_EQ(0, test_open_rel(dirfd, "../../s2d1/s2d2/s2d3",
++					   O_RDONLY));
+ 	}
+ 
+ 	if (rel == REL_OPEN)
+@@ -1495,40 +1687,42 @@ TEST_F_FORK(layout1, relative_chroot_chdir)
+ }
+ 
+ static void copy_binary(struct __test_metadata *const _metadata,
+-		const char *const dst_path)
++			const char *const dst_path)
+ {
+ 	int dst_fd, src_fd;
+ 	struct stat statbuf;
+ 
+ 	dst_fd = open(dst_path, O_WRONLY | O_TRUNC | O_CLOEXEC);
+-	ASSERT_LE(0, dst_fd) {
+-		TH_LOG("Failed to open \"%s\": %s", dst_path,
+-				strerror(errno));
++	ASSERT_LE(0, dst_fd)
++	{
++		TH_LOG("Failed to open \"%s\": %s", dst_path, strerror(errno));
+ 	}
+ 	src_fd = open(BINARY_PATH, O_RDONLY | O_CLOEXEC);
+-	ASSERT_LE(0, src_fd) {
++	ASSERT_LE(0, src_fd)
++	{
+ 		TH_LOG("Failed to open \"" BINARY_PATH "\": %s",
+-				strerror(errno));
++		       strerror(errno));
+ 	}
+ 	ASSERT_EQ(0, fstat(src_fd, &statbuf));
+-	ASSERT_EQ(statbuf.st_size, sendfile(dst_fd, src_fd, 0,
+-				statbuf.st_size));
++	ASSERT_EQ(statbuf.st_size,
++		  sendfile(dst_fd, src_fd, 0, statbuf.st_size));
+ 	ASSERT_EQ(0, close(src_fd));
+ 	ASSERT_EQ(0, close(dst_fd));
+ }
+ 
+-static void test_execute(struct __test_metadata *const _metadata,
+-		const int err, const char *const path)
++static void test_execute(struct __test_metadata *const _metadata, const int err,
++			 const char *const path)
+ {
+ 	int status;
+-	char *const argv[] = {(char *)path, NULL};
++	char *const argv[] = { (char *)path, NULL };
+ 	const pid_t child = fork();
+ 
+ 	ASSERT_LE(0, child);
+ 	if (child == 0) {
+-		ASSERT_EQ(err ? -1 : 0, execve(path, argv, NULL)) {
++		ASSERT_EQ(err ? -1 : 0, execve(path, argv, NULL))
++		{
+ 			TH_LOG("Failed to execute \"%s\": %s", path,
+-					strerror(errno));
++			       strerror(errno));
+ 		};
+ 		ASSERT_EQ(err, errno);
+ 		_exit(_metadata->passed ? 2 : 1);
+@@ -1536,9 +1730,10 @@ static void test_execute(struct __test_metadata *const _metadata,
+ 	}
+ 	ASSERT_EQ(child, waitpid(child, &status, 0));
+ 	ASSERT_EQ(1, WIFEXITED(status));
+-	ASSERT_EQ(err ? 2 : 0, WEXITSTATUS(status)) {
++	ASSERT_EQ(err ? 2 : 0, WEXITSTATUS(status))
++	{
+ 		TH_LOG("Unexpected return code for \"%s\": %s", path,
+-				strerror(errno));
++		       strerror(errno));
+ 	};
+ }
+ 
+@@ -1549,10 +1744,10 @@ TEST_F_FORK(layout1, execute)
+ 			.path = dir_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_EXECUTE,
+ 		},
+-		{}
++		{},
+ 	};
+-	const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+-			rules);
++	const int ruleset_fd =
++		create_ruleset(_metadata, rules[0].access, rules);
+ 
+ 	ASSERT_LE(0, ruleset_fd);
+ 	copy_binary(_metadata, file1_s1d1);
+@@ -1577,15 +1772,21 @@ TEST_F_FORK(layout1, execute)
+ 
+ TEST_F_FORK(layout1, link)
+ {
+-	const struct rule rules[] = {
++	const struct rule layer1[] = {
+ 		{
+ 			.path = dir_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_MAKE_REG,
+ 		},
+-		{}
++		{},
++	};
++	const struct rule layer2[] = {
++		{
++			.path = dir_s1d3,
++			.access = LANDLOCK_ACCESS_FS_REMOVE_FILE,
++		},
++		{},
+ 	};
+-	const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+-			rules);
++	int ruleset_fd = create_ruleset(_metadata, layer1[0].access, layer1);
+ 
+ 	ASSERT_LE(0, ruleset_fd);
+ 
+@@ -1598,14 +1799,30 @@ TEST_F_FORK(layout1, link)
+ 
+ 	ASSERT_EQ(-1, link(file2_s1d1, file1_s1d1));
+ 	ASSERT_EQ(EACCES, errno);
++
+ 	/* Denies linking because of reparenting. */
+ 	ASSERT_EQ(-1, link(file1_s2d1, file1_s1d2));
+ 	ASSERT_EQ(EXDEV, errno);
+ 	ASSERT_EQ(-1, link(file2_s1d2, file1_s1d3));
+ 	ASSERT_EQ(EXDEV, errno);
++	ASSERT_EQ(-1, link(file2_s1d3, file1_s1d2));
++	ASSERT_EQ(EXDEV, errno);
+ 
+ 	ASSERT_EQ(0, link(file2_s1d2, file1_s1d2));
+ 	ASSERT_EQ(0, link(file2_s1d3, file1_s1d3));
++
++	/* Prepares for next unlinks. */
++	ASSERT_EQ(0, unlink(file2_s1d2));
++	ASSERT_EQ(0, unlink(file2_s1d3));
++
++	ruleset_fd = create_ruleset(_metadata, layer2[0].access, layer2);
++	ASSERT_LE(0, ruleset_fd);
++	enforce_ruleset(_metadata, ruleset_fd);
++	ASSERT_EQ(0, close(ruleset_fd));
++
++	/* Checks that linking doesn't require the ability to delete a file. */
++	ASSERT_EQ(0, link(file1_s1d2, file2_s1d2));
++	ASSERT_EQ(0, link(file1_s1d3, file2_s1d3));
+ }
+ 
+ TEST_F_FORK(layout1, rename_file)
+@@ -1619,14 +1836,13 @@ TEST_F_FORK(layout1, rename_file)
+ 			.path = dir_s2d2,
+ 			.access = LANDLOCK_ACCESS_FS_REMOVE_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+-	const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+-			rules);
++	const int ruleset_fd =
++		create_ruleset(_metadata, rules[0].access, rules);
+ 
+ 	ASSERT_LE(0, ruleset_fd);
+ 
+-	ASSERT_EQ(0, unlink(file1_s1d1));
+ 	ASSERT_EQ(0, unlink(file1_s1d2));
+ 
+ 	enforce_ruleset(_metadata, ruleset_fd);
+@@ -1662,9 +1878,15 @@ TEST_F_FORK(layout1, rename_file)
+ 	ASSERT_EQ(-1, renameat2(AT_FDCWD, dir_s2d2, AT_FDCWD, file1_s2d1,
+ 				RENAME_EXCHANGE));
+ 	ASSERT_EQ(EACCES, errno);
++	/* Checks that file1_s2d1 cannot be removed (instead of ENOTDIR). */
++	ASSERT_EQ(-1, rename(dir_s2d2, file1_s2d1));
++	ASSERT_EQ(EACCES, errno);
+ 	ASSERT_EQ(-1, renameat2(AT_FDCWD, file1_s2d1, AT_FDCWD, dir_s2d2,
+ 				RENAME_EXCHANGE));
+ 	ASSERT_EQ(EACCES, errno);
++	/* Checks that file1_s1d1 cannot be removed (instead of EISDIR). */
++	ASSERT_EQ(-1, rename(file1_s1d1, dir_s1d2));
++	ASSERT_EQ(EACCES, errno);
+ 
+ 	/* Renames files with different parents. */
+ 	ASSERT_EQ(-1, rename(file1_s2d2, file1_s1d2));
+@@ -1675,14 +1897,14 @@ TEST_F_FORK(layout1, rename_file)
+ 
+ 	/* Exchanges and renames files with same parent. */
+ 	ASSERT_EQ(0, renameat2(AT_FDCWD, file2_s2d3, AT_FDCWD, file1_s2d3,
+-				RENAME_EXCHANGE));
++			       RENAME_EXCHANGE));
+ 	ASSERT_EQ(0, rename(file2_s2d3, file1_s2d3));
+ 
+ 	/* Exchanges files and directories with same parent, twice. */
+ 	ASSERT_EQ(0, renameat2(AT_FDCWD, file1_s2d2, AT_FDCWD, dir_s2d3,
+-				RENAME_EXCHANGE));
++			       RENAME_EXCHANGE));
+ 	ASSERT_EQ(0, renameat2(AT_FDCWD, file1_s2d2, AT_FDCWD, dir_s2d3,
+-				RENAME_EXCHANGE));
++			       RENAME_EXCHANGE));
+ }
+ 
+ TEST_F_FORK(layout1, rename_dir)
+@@ -1696,10 +1918,10 @@ TEST_F_FORK(layout1, rename_dir)
+ 			.path = dir_s2d1,
+ 			.access = LANDLOCK_ACCESS_FS_REMOVE_DIR,
+ 		},
+-		{}
++		{},
+ 	};
+-	const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+-			rules);
++	const int ruleset_fd =
++		create_ruleset(_metadata, rules[0].access, rules);
+ 
+ 	ASSERT_LE(0, ruleset_fd);
+ 
+@@ -1727,16 +1949,22 @@ TEST_F_FORK(layout1, rename_dir)
+ 	ASSERT_EQ(-1, renameat2(AT_FDCWD, dir_s1d1, AT_FDCWD, dir_s2d1,
+ 				RENAME_EXCHANGE));
+ 	ASSERT_EQ(EACCES, errno);
++	/* Checks that dir_s1d2 cannot be removed (instead of ENOTDIR). */
++	ASSERT_EQ(-1, rename(dir_s1d2, file1_s1d1));
++	ASSERT_EQ(EACCES, errno);
+ 	ASSERT_EQ(-1, renameat2(AT_FDCWD, file1_s1d1, AT_FDCWD, dir_s1d2,
+ 				RENAME_EXCHANGE));
+ 	ASSERT_EQ(EACCES, errno);
++	/* Checks that dir_s1d2 cannot be removed (instead of EISDIR). */
++	ASSERT_EQ(-1, rename(file1_s1d1, dir_s1d2));
++	ASSERT_EQ(EACCES, errno);
+ 
+ 	/*
+ 	 * Exchanges and renames directory to the same parent, which allows
+ 	 * directory removal.
+ 	 */
+ 	ASSERT_EQ(0, renameat2(AT_FDCWD, dir_s1d3, AT_FDCWD, file1_s1d2,
+-				RENAME_EXCHANGE));
++			       RENAME_EXCHANGE));
+ 	ASSERT_EQ(0, unlink(dir_s1d3));
+ 	ASSERT_EQ(0, mkdir(dir_s1d3, 0700));
+ 	ASSERT_EQ(0, rename(file1_s1d2, dir_s1d3));
+@@ -1750,10 +1978,10 @@ TEST_F_FORK(layout1, remove_dir)
+ 			.path = dir_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_REMOVE_DIR,
+ 		},
+-		{}
++		{},
+ 	};
+-	const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+-			rules);
++	const int ruleset_fd =
++		create_ruleset(_metadata, rules[0].access, rules);
+ 
+ 	ASSERT_LE(0, ruleset_fd);
+ 
+@@ -1787,10 +2015,10 @@ TEST_F_FORK(layout1, remove_file)
+ 			.path = dir_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_REMOVE_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+-	const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+-			rules);
++	const int ruleset_fd =
++		create_ruleset(_metadata, rules[0].access, rules);
+ 
+ 	ASSERT_LE(0, ruleset_fd);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+@@ -1805,14 +2033,15 @@ TEST_F_FORK(layout1, remove_file)
+ }
+ 
+ static void test_make_file(struct __test_metadata *const _metadata,
+-		const __u64 access, const mode_t mode, const dev_t dev)
++			   const __u64 access, const mode_t mode,
++			   const dev_t dev)
+ {
+ 	const struct rule rules[] = {
+ 		{
+ 			.path = dir_s1d2,
+ 			.access = access,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const int ruleset_fd = create_ruleset(_metadata, access, rules);
+ 
+@@ -1820,9 +2049,10 @@ static void test_make_file(struct __test_metadata *const _metadata,
+ 
+ 	ASSERT_EQ(0, unlink(file1_s1d1));
+ 	ASSERT_EQ(0, unlink(file2_s1d1));
+-	ASSERT_EQ(0, mknod(file2_s1d1, mode | 0400, dev)) {
+-		TH_LOG("Failed to make file \"%s\": %s",
+-				file2_s1d1, strerror(errno));
++	ASSERT_EQ(0, mknod(file2_s1d1, mode | 0400, dev))
++	{
++		TH_LOG("Failed to make file \"%s\": %s", file2_s1d1,
++		       strerror(errno));
+ 	};
+ 
+ 	ASSERT_EQ(0, unlink(file1_s1d2));
+@@ -1841,9 +2071,10 @@ static void test_make_file(struct __test_metadata *const _metadata,
+ 	ASSERT_EQ(-1, rename(file2_s1d1, file1_s1d1));
+ 	ASSERT_EQ(EACCES, errno);
+ 
+-	ASSERT_EQ(0, mknod(file1_s1d2, mode | 0400, dev)) {
+-		TH_LOG("Failed to make file \"%s\": %s",
+-				file1_s1d2, strerror(errno));
++	ASSERT_EQ(0, mknod(file1_s1d2, mode | 0400, dev))
++	{
++		TH_LOG("Failed to make file \"%s\": %s", file1_s1d2,
++		       strerror(errno));
+ 	};
+ 	ASSERT_EQ(0, link(file1_s1d2, file2_s1d2));
+ 	ASSERT_EQ(0, unlink(file2_s1d2));
+@@ -1860,7 +2091,7 @@ TEST_F_FORK(layout1, make_char)
+ 	/* Creates a /dev/null device. */
+ 	set_cap(_metadata, CAP_MKNOD);
+ 	test_make_file(_metadata, LANDLOCK_ACCESS_FS_MAKE_CHAR, S_IFCHR,
+-			makedev(1, 3));
++		       makedev(1, 3));
+ }
+ 
+ TEST_F_FORK(layout1, make_block)
+@@ -1868,7 +2099,7 @@ TEST_F_FORK(layout1, make_block)
+ 	/* Creates a /dev/loop0 device. */
+ 	set_cap(_metadata, CAP_MKNOD);
+ 	test_make_file(_metadata, LANDLOCK_ACCESS_FS_MAKE_BLOCK, S_IFBLK,
+-			makedev(7, 0));
++		       makedev(7, 0));
+ }
+ 
+ TEST_F_FORK(layout1, make_reg_1)
+@@ -1898,10 +2129,10 @@ TEST_F_FORK(layout1, make_sym)
+ 			.path = dir_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_MAKE_SYM,
+ 		},
+-		{}
++		{},
+ 	};
+-	const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+-			rules);
++	const int ruleset_fd =
++		create_ruleset(_metadata, rules[0].access, rules);
+ 
+ 	ASSERT_LE(0, ruleset_fd);
+ 
+@@ -1943,10 +2174,10 @@ TEST_F_FORK(layout1, make_dir)
+ 			.path = dir_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_MAKE_DIR,
+ 		},
+-		{}
++		{},
+ 	};
+-	const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+-			rules);
++	const int ruleset_fd =
++		create_ruleset(_metadata, rules[0].access, rules);
+ 
+ 	ASSERT_LE(0, ruleset_fd);
+ 
+@@ -1965,12 +2196,12 @@ TEST_F_FORK(layout1, make_dir)
+ }
+ 
+ static int open_proc_fd(struct __test_metadata *const _metadata, const int fd,
+-		const int open_flags)
++			const int open_flags)
+ {
+ 	static const char path_template[] = "/proc/self/fd/%d";
+ 	char procfd_path[sizeof(path_template) + 10];
+-	const int procfd_path_size = snprintf(procfd_path, sizeof(procfd_path),
+-			path_template, fd);
++	const int procfd_path_size =
++		snprintf(procfd_path, sizeof(procfd_path), path_template, fd);
+ 
+ 	ASSERT_LT(procfd_path_size, sizeof(procfd_path));
+ 	return open(procfd_path, open_flags);
+@@ -1983,12 +2214,13 @@ TEST_F_FORK(layout1, proc_unlinked_file)
+ 			.path = file1_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	int reg_fd, proc_fd;
+-	const int ruleset_fd = create_ruleset(_metadata,
+-			LANDLOCK_ACCESS_FS_READ_FILE |
+-			LANDLOCK_ACCESS_FS_WRITE_FILE, rules);
++	const int ruleset_fd = create_ruleset(
++		_metadata,
++		LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_WRITE_FILE,
++		rules);
+ 
+ 	ASSERT_LE(0, ruleset_fd);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+@@ -2005,9 +2237,10 @@ TEST_F_FORK(layout1, proc_unlinked_file)
+ 	ASSERT_EQ(0, close(proc_fd));
+ 
+ 	proc_fd = open_proc_fd(_metadata, reg_fd, O_RDWR | O_CLOEXEC);
+-	ASSERT_EQ(-1, proc_fd) {
+-		TH_LOG("Successfully opened /proc/self/fd/%d: %s",
+-				reg_fd, strerror(errno));
++	ASSERT_EQ(-1, proc_fd)
++	{
++		TH_LOG("Successfully opened /proc/self/fd/%d: %s", reg_fd,
++		       strerror(errno));
+ 	}
+ 	ASSERT_EQ(EACCES, errno);
+ 
+@@ -2023,13 +2256,13 @@ TEST_F_FORK(layout1, proc_pipe)
+ 		{
+ 			.path = dir_s1d2,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE |
+-				LANDLOCK_ACCESS_FS_WRITE_FILE,
++				  LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	/* Limits read and write access to files tied to the filesystem. */
+-	const int ruleset_fd = create_ruleset(_metadata, rules[0].access,
+-			rules);
++	const int ruleset_fd =
++		create_ruleset(_metadata, rules[0].access, rules);
+ 
+ 	ASSERT_LE(0, ruleset_fd);
+ 	enforce_ruleset(_metadata, ruleset_fd);
+@@ -2041,7 +2274,8 @@ TEST_F_FORK(layout1, proc_pipe)
+ 
+ 	/* Checks access to pipes through FD. */
+ 	ASSERT_EQ(0, pipe2(pipe_fds, O_CLOEXEC));
+-	ASSERT_EQ(1, write(pipe_fds[1], ".", 1)) {
++	ASSERT_EQ(1, write(pipe_fds[1], ".", 1))
++	{
+ 		TH_LOG("Failed to write in pipe: %s", strerror(errno));
+ 	}
+ 	ASSERT_EQ(1, read(pipe_fds[0], &buf, 1));
+@@ -2050,9 +2284,10 @@ TEST_F_FORK(layout1, proc_pipe)
+ 	/* Checks write access to pipe through /proc/self/fd . */
+ 	proc_fd = open_proc_fd(_metadata, pipe_fds[1], O_WRONLY | O_CLOEXEC);
+ 	ASSERT_LE(0, proc_fd);
+-	ASSERT_EQ(1, write(proc_fd, ".", 1)) {
++	ASSERT_EQ(1, write(proc_fd, ".", 1))
++	{
+ 		TH_LOG("Failed to write through /proc/self/fd/%d: %s",
+-				pipe_fds[1], strerror(errno));
++		       pipe_fds[1], strerror(errno));
+ 	}
+ 	ASSERT_EQ(0, close(proc_fd));
+ 
+@@ -2060,9 +2295,10 @@ TEST_F_FORK(layout1, proc_pipe)
+ 	proc_fd = open_proc_fd(_metadata, pipe_fds[0], O_RDONLY | O_CLOEXEC);
+ 	ASSERT_LE(0, proc_fd);
+ 	buf = '\0';
+-	ASSERT_EQ(1, read(proc_fd, &buf, 1)) {
++	ASSERT_EQ(1, read(proc_fd, &buf, 1))
++	{
+ 		TH_LOG("Failed to read through /proc/self/fd/%d: %s",
+-				pipe_fds[1], strerror(errno));
++		       pipe_fds[1], strerror(errno));
+ 	}
+ 	ASSERT_EQ(0, close(proc_fd));
+ 
+@@ -2070,8 +2306,9 @@ TEST_F_FORK(layout1, proc_pipe)
+ 	ASSERT_EQ(0, close(pipe_fds[1]));
+ }
+ 
+-FIXTURE(layout1_bind) {
+-};
++/* clang-format off */
++FIXTURE(layout1_bind) {};
++/* clang-format on */
+ 
+ FIXTURE_SETUP(layout1_bind)
+ {
+@@ -2161,7 +2398,7 @@ TEST_F_FORK(layout1_bind, same_content_same_file)
+ 			.path = dir_s2d1,
+ 			.access = ACCESS_RW,
+ 		},
+-		{}
++		{},
+ 	};
+ 	/*
+ 	 * Sets access rights on the same bind-mounted directories.  The result
+@@ -2177,7 +2414,7 @@ TEST_F_FORK(layout1_bind, same_content_same_file)
+ 			.path = dir_s2d2,
+ 			.access = ACCESS_RW,
+ 		},
+-		{}
++		{},
+ 	};
+ 	/* Only allow read-access to the s1d3 hierarchies. */
+ 	const struct rule layer3_source[] = {
+@@ -2185,7 +2422,7 @@ TEST_F_FORK(layout1_bind, same_content_same_file)
+ 			.path = dir_s1d3,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	/* Removes all access rights. */
+ 	const struct rule layer4_destination[] = {
+@@ -2193,7 +2430,7 @@ TEST_F_FORK(layout1_bind, same_content_same_file)
+ 			.path = bind_file1_s1d3,
+ 			.access = LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	int ruleset_fd;
+ 
+@@ -2282,8 +2519,8 @@ TEST_F_FORK(layout1_bind, same_content_same_file)
+ 	ASSERT_EQ(EACCES, test_open(bind_file1_s1d3, O_WRONLY));
+ }
+ 
+-#define LOWER_BASE	TMP_DIR "/lower"
+-#define LOWER_DATA	LOWER_BASE "/data"
++#define LOWER_BASE TMP_DIR "/lower"
++#define LOWER_DATA LOWER_BASE "/data"
+ static const char lower_fl1[] = LOWER_DATA "/fl1";
+ static const char lower_dl1[] = LOWER_DATA "/dl1";
+ static const char lower_dl1_fl2[] = LOWER_DATA "/dl1/fl2";
+@@ -2295,23 +2532,23 @@ static const char lower_do1_fl3[] = LOWER_DATA "/do1/fl3";
+ static const char (*lower_base_files[])[] = {
+ 	&lower_fl1,
+ 	&lower_fo1,
+-	NULL
++	NULL,
+ };
+ static const char (*lower_base_directories[])[] = {
+ 	&lower_dl1,
+ 	&lower_do1,
+-	NULL
++	NULL,
+ };
+ static const char (*lower_sub_files[])[] = {
+ 	&lower_dl1_fl2,
+ 	&lower_do1_fo2,
+ 	&lower_do1_fl3,
+-	NULL
++	NULL,
+ };
+ 
+-#define UPPER_BASE	TMP_DIR "/upper"
+-#define UPPER_DATA	UPPER_BASE "/data"
+-#define UPPER_WORK	UPPER_BASE "/work"
++#define UPPER_BASE TMP_DIR "/upper"
++#define UPPER_DATA UPPER_BASE "/data"
++#define UPPER_WORK UPPER_BASE "/work"
+ static const char upper_fu1[] = UPPER_DATA "/fu1";
+ static const char upper_du1[] = UPPER_DATA "/du1";
+ static const char upper_du1_fu2[] = UPPER_DATA "/du1/fu2";
+@@ -2323,22 +2560,22 @@ static const char upper_do1_fu3[] = UPPER_DATA "/do1/fu3";
+ static const char (*upper_base_files[])[] = {
+ 	&upper_fu1,
+ 	&upper_fo1,
+-	NULL
++	NULL,
+ };
+ static const char (*upper_base_directories[])[] = {
+ 	&upper_du1,
+ 	&upper_do1,
+-	NULL
++	NULL,
+ };
+ static const char (*upper_sub_files[])[] = {
+ 	&upper_du1_fu2,
+ 	&upper_do1_fo2,
+ 	&upper_do1_fu3,
+-	NULL
++	NULL,
+ };
+ 
+-#define MERGE_BASE	TMP_DIR "/merge"
+-#define MERGE_DATA	MERGE_BASE "/data"
++#define MERGE_BASE TMP_DIR "/merge"
++#define MERGE_DATA MERGE_BASE "/data"
+ static const char merge_fl1[] = MERGE_DATA "/fl1";
+ static const char merge_dl1[] = MERGE_DATA "/dl1";
+ static const char merge_dl1_fl2[] = MERGE_DATA "/dl1/fl2";
+@@ -2355,21 +2592,17 @@ static const char (*merge_base_files[])[] = {
+ 	&merge_fl1,
+ 	&merge_fu1,
+ 	&merge_fo1,
+-	NULL
++	NULL,
+ };
+ static const char (*merge_base_directories[])[] = {
+ 	&merge_dl1,
+ 	&merge_du1,
+ 	&merge_do1,
+-	NULL
++	NULL,
+ };
+ static const char (*merge_sub_files[])[] = {
+-	&merge_dl1_fl2,
+-	&merge_du1_fu2,
+-	&merge_do1_fo2,
+-	&merge_do1_fl3,
+-	&merge_do1_fu3,
+-	NULL
++	&merge_dl1_fl2, &merge_du1_fu2, &merge_do1_fo2,
++	&merge_do1_fl3, &merge_do1_fu3, NULL,
+ };
+ 
+ /*
+@@ -2411,8 +2644,9 @@ static const char (*merge_sub_files[])[] = {
+  *         └── work
+  */
+ 
+-FIXTURE(layout2_overlay) {
+-};
++/* clang-format off */
++FIXTURE(layout2_overlay) {};
++/* clang-format on */
+ 
+ FIXTURE_SETUP(layout2_overlay)
+ {
+@@ -2444,9 +2678,8 @@ FIXTURE_SETUP(layout2_overlay)
+ 	set_cap(_metadata, CAP_SYS_ADMIN);
+ 	set_cap(_metadata, CAP_DAC_OVERRIDE);
+ 	ASSERT_EQ(0, mount("overlay", MERGE_DATA, "overlay", 0,
+-				"lowerdir=" LOWER_DATA
+-				",upperdir=" UPPER_DATA
+-				",workdir=" UPPER_WORK));
++			   "lowerdir=" LOWER_DATA ",upperdir=" UPPER_DATA
++			   ",workdir=" UPPER_WORK));
+ 	clear_cap(_metadata, CAP_DAC_OVERRIDE);
+ 	clear_cap(_metadata, CAP_SYS_ADMIN);
+ }
+@@ -2513,9 +2746,9 @@ TEST_F_FORK(layout2_overlay, no_restriction)
+ 	ASSERT_EQ(0, test_open(merge_do1_fu3, O_RDONLY));
+ }
+ 
+-#define for_each_path(path_list, path_entry, i)			\
+-	for (i = 0, path_entry = *path_list[i]; path_list[i];	\
+-			path_entry = *path_list[++i])
++#define for_each_path(path_list, path_entry, i)               \
++	for (i = 0, path_entry = *path_list[i]; path_list[i]; \
++	     path_entry = *path_list[++i])
+ 
+ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ {
+@@ -2533,7 +2766,7 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ 			.path = MERGE_BASE,
+ 			.access = ACCESS_RW,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const struct rule layer2_data[] = {
+ 		{
+@@ -2548,7 +2781,7 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ 			.path = MERGE_DATA,
+ 			.access = ACCESS_RW,
+ 		},
+-		{}
++		{},
+ 	};
+ 	/* Sets access right on directories inside both layers. */
+ 	const struct rule layer3_subdirs[] = {
+@@ -2580,7 +2813,7 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ 			.path = merge_do1,
+ 			.access = ACCESS_RW,
+ 		},
+-		{}
++		{},
+ 	};
+ 	/* Tighten access rights to the files. */
+ 	const struct rule layer4_files[] = {
+@@ -2611,37 +2844,37 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ 		{
+ 			.path = merge_dl1_fl2,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE |
+-				LANDLOCK_ACCESS_FS_WRITE_FILE,
++				  LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 		},
+ 		{
+ 			.path = merge_du1_fu2,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE |
+-				LANDLOCK_ACCESS_FS_WRITE_FILE,
++				  LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 		},
+ 		{
+ 			.path = merge_do1_fo2,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE |
+-				LANDLOCK_ACCESS_FS_WRITE_FILE,
++				  LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 		},
+ 		{
+ 			.path = merge_do1_fl3,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE |
+-				LANDLOCK_ACCESS_FS_WRITE_FILE,
++				  LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 		},
+ 		{
+ 			.path = merge_do1_fu3,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE |
+-				LANDLOCK_ACCESS_FS_WRITE_FILE,
++				  LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	const struct rule layer5_merge_only[] = {
+ 		{
+ 			.path = MERGE_DATA,
+ 			.access = LANDLOCK_ACCESS_FS_READ_FILE |
+-				LANDLOCK_ACCESS_FS_WRITE_FILE,
++				  LANDLOCK_ACCESS_FS_WRITE_FILE,
+ 		},
+-		{}
++		{},
+ 	};
+ 	int ruleset_fd;
+ 	size_t i;
+@@ -2659,7 +2892,8 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ 		ASSERT_EQ(EACCES, test_open(path_entry, O_WRONLY));
+ 	}
+ 	for_each_path(lower_base_directories, path_entry, i) {
+-		ASSERT_EQ(EACCES, test_open(path_entry, O_RDONLY | O_DIRECTORY));
++		ASSERT_EQ(EACCES,
++			  test_open(path_entry, O_RDONLY | O_DIRECTORY));
+ 	}
+ 	for_each_path(lower_sub_files, path_entry, i) {
+ 		ASSERT_EQ(0, test_open(path_entry, O_RDONLY));
+@@ -2671,7 +2905,8 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ 		ASSERT_EQ(EACCES, test_open(path_entry, O_WRONLY));
+ 	}
+ 	for_each_path(upper_base_directories, path_entry, i) {
+-		ASSERT_EQ(EACCES, test_open(path_entry, O_RDONLY | O_DIRECTORY));
++		ASSERT_EQ(EACCES,
++			  test_open(path_entry, O_RDONLY | O_DIRECTORY));
+ 	}
+ 	for_each_path(upper_sub_files, path_entry, i) {
+ 		ASSERT_EQ(0, test_open(path_entry, O_RDONLY));
+@@ -2756,7 +2991,8 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ 		ASSERT_EQ(EACCES, test_open(path_entry, O_RDWR));
+ 	}
+ 	for_each_path(merge_base_directories, path_entry, i) {
+-		ASSERT_EQ(EACCES, test_open(path_entry, O_RDONLY | O_DIRECTORY));
++		ASSERT_EQ(EACCES,
++			  test_open(path_entry, O_RDONLY | O_DIRECTORY));
+ 	}
+ 	for_each_path(merge_sub_files, path_entry, i) {
+ 		ASSERT_EQ(0, test_open(path_entry, O_RDWR));
+@@ -2781,7 +3017,8 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
+ 		ASSERT_EQ(EACCES, test_open(path_entry, O_RDWR));
+ 	}
+ 	for_each_path(merge_base_directories, path_entry, i) {
+-		ASSERT_EQ(EACCES, test_open(path_entry, O_RDONLY | O_DIRECTORY));
++		ASSERT_EQ(EACCES,
++			  test_open(path_entry, O_RDONLY | O_DIRECTORY));
+ 	}
+ 	for_each_path(merge_sub_files, path_entry, i) {
+ 		ASSERT_EQ(0, test_open(path_entry, O_RDWR));
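The layering semantics exercised by the fs_test.c changes above (layer_rule_unions, inherit_subset, max_layers) boil down to one rule: each enforced ruleset can only further restrict the previous ones, so the effective rights are the intersection of all layers. The following is a minimal standalone sketch of that behavior, not part of the patch; it assumes Linux >= 5.13 UAPI headers, and the add_layer() helper is invented for illustration.

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/landlock.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Enforces one Landlock layer allowing only "allowed" below "path". */
static int add_layer(__u64 handled, __u64 allowed, const char *path)
{
	struct landlock_ruleset_attr attr = { .handled_access_fs = handled };
	struct landlock_path_beneath_attr beneath = { .allowed_access = allowed };
	int ruleset_fd, err;

	ruleset_fd = syscall(__NR_landlock_create_ruleset, &attr, sizeof(attr), 0);
	if (ruleset_fd < 0)
		return -1;
	beneath.parent_fd = open(path, O_PATH | O_CLOEXEC);
	err = syscall(__NR_landlock_add_rule, ruleset_fd,
		      LANDLOCK_RULE_PATH_BENEATH, &beneath, 0);
	close(beneath.parent_fd);
	if (!err)
		err = syscall(__NR_landlock_restrict_self, ruleset_fd, 0);
	close(ruleset_fd);
	return err;
}

int main(void)
{
	prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
	/* Layer 1 only grants read access below /tmp. */
	add_layer(LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_WRITE_FILE,
		  LANDLOCK_ACCESS_FS_READ_FILE, "/tmp");
	/* Layer 2 grants read and write, but cannot widen layer 1. */
	add_layer(LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_WRITE_FILE,
		  LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_WRITE_FILE,
		  "/tmp");
	/* Writing below /tmp now fails with EACCES despite layer 2. */
	printf("open(O_WRONLY) = %d\n", open("/tmp/f", O_WRONLY | O_CREAT, 0600));
	return 0;
}

This is also why the max_layers test above only stacks 16 rulesets: the number of layers per domain is bounded, while the rights they grant can only shrink.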
+diff --git a/tools/testing/selftests/landlock/ptrace_test.c b/tools/testing/selftests/landlock/ptrace_test.c
+index 15fbef9cc8496..c28ef98ff3ac1 100644
+--- a/tools/testing/selftests/landlock/ptrace_test.c
++++ b/tools/testing/selftests/landlock/ptrace_test.c
+@@ -26,9 +26,10 @@ static void create_domain(struct __test_metadata *const _metadata)
+ 		.handled_access_fs = LANDLOCK_ACCESS_FS_MAKE_BLOCK,
+ 	};
+ 
+-	ruleset_fd = landlock_create_ruleset(&ruleset_attr,
+-			sizeof(ruleset_attr), 0);
+-	EXPECT_LE(0, ruleset_fd) {
++	ruleset_fd =
++		landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
++	EXPECT_LE(0, ruleset_fd)
++	{
+ 		TH_LOG("Failed to create a ruleset: %s", strerror(errno));
+ 	}
+ 	EXPECT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
+@@ -43,7 +44,7 @@ static int test_ptrace_read(const pid_t pid)
+ 	int procenv_path_size, fd;
+ 
+ 	procenv_path_size = snprintf(procenv_path, sizeof(procenv_path),
+-			path_template, pid);
++				     path_template, pid);
+ 	if (procenv_path_size >= sizeof(procenv_path))
+ 		return E2BIG;
+ 
+@@ -59,9 +60,12 @@ static int test_ptrace_read(const pid_t pid)
+ 	return 0;
+ }
+ 
+-FIXTURE(hierarchy) { };
++/* clang-format off */
++FIXTURE(hierarchy) {};
++/* clang-format on */
+ 
+-FIXTURE_VARIANT(hierarchy) {
++FIXTURE_VARIANT(hierarchy)
++{
+ 	const bool domain_both;
+ 	const bool domain_parent;
+ 	const bool domain_child;
+@@ -83,7 +87,9 @@ FIXTURE_VARIANT(hierarchy) {
+  *       \              P2 -> P1 : allow
+  *        'P2
+  */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, allow_without_domain) {
++	/* clang-format on */
+ 	.domain_both = false,
+ 	.domain_parent = false,
+ 	.domain_child = false,
+@@ -98,7 +104,9 @@ FIXTURE_VARIANT_ADD(hierarchy, allow_without_domain) {
+  *        |  P2  |
+  *        '------'
+  */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, allow_with_one_domain) {
++	/* clang-format on */
+ 	.domain_both = false,
+ 	.domain_parent = false,
+ 	.domain_child = true,
+@@ -112,7 +120,9 @@ FIXTURE_VARIANT_ADD(hierarchy, allow_with_one_domain) {
+  *            '
+  *            P2
+  */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, deny_with_parent_domain) {
++	/* clang-format on */
+ 	.domain_both = false,
+ 	.domain_parent = true,
+ 	.domain_child = false,
+@@ -127,7 +137,9 @@ FIXTURE_VARIANT_ADD(hierarchy, deny_with_parent_domain) {
+  *         |  P2  |
+  *         '------'
+  */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, deny_with_sibling_domain) {
++	/* clang-format on */
+ 	.domain_both = false,
+ 	.domain_parent = true,
+ 	.domain_child = true,
+@@ -142,7 +154,9 @@ FIXTURE_VARIANT_ADD(hierarchy, deny_with_sibling_domain) {
+  * |         P2  |
+  * '-------------'
+  */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, allow_sibling_domain) {
++	/* clang-format on */
+ 	.domain_both = true,
+ 	.domain_parent = false,
+ 	.domain_child = false,
+@@ -158,7 +172,9 @@ FIXTURE_VARIANT_ADD(hierarchy, allow_sibling_domain) {
+  * |        '------' |
+  * '-----------------'
+  */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, allow_with_nested_domain) {
++	/* clang-format on */
+ 	.domain_both = true,
+ 	.domain_parent = false,
+ 	.domain_child = true,
+@@ -174,7 +190,9 @@ FIXTURE_VARIANT_ADD(hierarchy, allow_with_nested_domain) {
+  * |             P2  |
+  * '-----------------'
+  */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, deny_with_nested_and_parent_domain) {
++	/* clang-format on */
+ 	.domain_both = true,
+ 	.domain_parent = true,
+ 	.domain_child = false,
+@@ -192,17 +210,21 @@ FIXTURE_VARIANT_ADD(hierarchy, deny_with_nested_and_parent_domain) {
+  * |        '------' |
+  * '-----------------'
+  */
++/* clang-format off */
+ FIXTURE_VARIANT_ADD(hierarchy, deny_with_forked_domain) {
++	/* clang-format on */
+ 	.domain_both = true,
+ 	.domain_parent = true,
+ 	.domain_child = true,
+ };
+ 
+ FIXTURE_SETUP(hierarchy)
+-{ }
++{
++}
+ 
+ FIXTURE_TEARDOWN(hierarchy)
+-{ }
++{
++}
+ 
+ /* Test PTRACE_TRACEME and PTRACE_ATTACH for parent and child. */
+ TEST_F(hierarchy, trace)
+@@ -330,7 +352,7 @@ TEST_F(hierarchy, trace)
+ 	ASSERT_EQ(1, write(pipe_parent[1], ".", 1));
+ 	ASSERT_EQ(child, waitpid(child, &status, 0));
+ 	if (WIFSIGNALED(status) || !WIFEXITED(status) ||
+-			WEXITSTATUS(status) != EXIT_SUCCESS)
++	    WEXITSTATUS(status) != EXIT_SUCCESS)
+ 		_metadata->passed = 0;
+ }
+ 
+diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
+index 51e5cf22632f7..56ccbeae0638d 100644
+--- a/tools/testing/selftests/resctrl/fill_buf.c
++++ b/tools/testing/selftests/resctrl/fill_buf.c
+@@ -121,8 +121,10 @@ static int fill_cache_read(unsigned char *start_ptr, unsigned char *end_ptr,
+ 
+ 	/* Consume read result so that reading memory is not optimized out. */
+ 	fp = fopen("/dev/null", "w");
+-	if (!fp)
++	if (!fp) {
+ 		perror("Unable to write to /dev/null");
++		return -1;
++	}
+ 	fprintf(fp, "Sum: %d ", ret);
+ 	fclose(fp);
+ 
+diff --git a/tools/tracing/rtla/Makefile b/tools/tracing/rtla/Makefile
+index 11fb417abb42f..523f0a8c38c23 100644
+--- a/tools/tracing/rtla/Makefile
++++ b/tools/tracing/rtla/Makefile
+@@ -23,6 +23,7 @@ $(call allow-override,LD_SO_CONF_PATH,/etc/ld.so.conf.d/)
+ $(call allow-override,LDCONFIG,ldconfig)
+ 
+ INSTALL	=	install
++MKDIR	=	mkdir
+ FOPTS	:=	-flto=auto -ffat-lto-objects -fexceptions -fstack-protector-strong \
+ 		-fasynchronous-unwind-tables -fstack-clash-protection
+ WOPTS	:= 	-Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -Wno-maybe-uninitialized
+@@ -31,7 +32,7 @@ TRACEFS_HEADERS	:= $$($(PKG_CONFIG) --cflags libtracefs)
+ 
+ CFLAGS	:=	-O -g -DVERSION=\"$(VERSION)\" $(FOPTS) $(MOPTS) $(WOPTS) $(TRACEFS_HEADERS)
+ LDFLAGS	:=	-ggdb
+-LIBS	:=	$$($(PKG_CONFIG) --libs libtracefs) -lprocps
++LIBS	:=	$$($(PKG_CONFIG) --libs libtracefs)
+ 
+ SRC	:=	$(wildcard src/*.c)
+ HDR	:=	$(wildcard src/*.h)
+@@ -68,7 +69,7 @@ static: $(OBJ)
+ 
+ .PHONY: install
+ install: doc_install
+-	$(INSTALL) -d -m 755 $(DESTDIR)$(BINDIR)
++	$(MKDIR) -p $(DESTDIR)$(BINDIR)
+ 	$(INSTALL) rtla -m 755 $(DESTDIR)$(BINDIR)
+ 	$(STRIP) $(DESTDIR)$(BINDIR)/rtla
+ 	@test ! -f $(DESTDIR)$(BINDIR)/osnoise || rm $(DESTDIR)$(BINDIR)/osnoise
+diff --git a/tools/tracing/rtla/README.txt b/tools/tracing/rtla/README.txt
+index 6c88446f7e74a..4af3fd40f1716 100644
+--- a/tools/tracing/rtla/README.txt
++++ b/tools/tracing/rtla/README.txt
+@@ -1,19 +1,16 @@
+ RTLA: Real-Time Linux Analysis tools
+ 
+-The rtla is a meta-tool that includes a set of commands that
+-aims to analyze the real-time properties of Linux. But, instead of
+-testing Linux as a black box, rtla leverages kernel tracing
+-capabilities to provide precise information about the properties
+-and root causes of unexpected results.
++The rtla meta-tool includes a set of commands that aims to analyze
++the real-time properties of Linux. Instead of testing Linux as a black box,
++rtla leverages kernel tracing capabilities to provide precise information
++about the properties and root causes of unexpected results.
+ 
+ Installing RTLA
+ 
+-RTLA depends on some libraries and tools. More precisely, it depends on the
+-following libraries:
++RTLA depends on the following libraries and tools:
+ 
+  - libtracefs
+  - libtraceevent
+- - procps
+ 
+ It also depends on python3-docutils to compile man pages.
+ 
+diff --git a/tools/tracing/rtla/src/osnoise_hist.c b/tools/tracing/rtla/src/osnoise_hist.c
+index b4380d45cacd4..5d7ea479ac89f 100644
+--- a/tools/tracing/rtla/src/osnoise_hist.c
++++ b/tools/tracing/rtla/src/osnoise_hist.c
+@@ -809,7 +809,7 @@ int osnoise_hist_main(int argc, char *argv[])
+ 		retval = set_comm_sched_attr("osnoise/", &params->sched_param);
+ 		if (retval) {
+ 			err_msg("Failed to set sched parameters\n");
+-			goto out_hist;
++			goto out_free;
+ 		}
+ 	}
+ 
+@@ -819,7 +819,7 @@ int osnoise_hist_main(int argc, char *argv[])
+ 		record = osnoise_init_trace_tool("osnoise");
+ 		if (!record) {
+ 			err_msg("Failed to enable the trace instance\n");
+-			goto out_hist;
++			goto out_free;
+ 		}
+ 
+ 		if (params->events) {
+@@ -869,6 +869,7 @@ int osnoise_hist_main(int argc, char *argv[])
+ out_hist:
+ 	trace_events_destroy(&record->trace, params->events);
+ 	params->events = NULL;
++out_free:
+ 	osnoise_free_histogram(tool->data);
+ out_destroy:
+ 	osnoise_destroy_tool(record);
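This hunk, and the matching ones in osnoise_top.c, timerlat_hist.c and timerlat_top.c below, all fix the same early-exit bug: failing before record was created used to jump to a label that calls trace_events_destroy(&record->trace, ...) on a NULL pointer. The new out_free label restores the usual staged-cleanup idiom, sketched here with invented stub helpers so it compiles standalone.

#include <stdio.h>
#include <stdlib.h>

struct tool { int dummy; };

static struct tool *tool_create(void) { return calloc(1, sizeof(struct tool)); }
static int tool_configure(struct tool *t) { (void)t; return 0; }
static struct tool *record_create(void) { return calloc(1, sizeof(struct tool)); }
static int record_start(struct tool *r) { (void)r; return 0; }
static void record_destroy(struct tool *r) { free(r); }
static void tool_free(struct tool *t) { free(t); }

static int run(void)
{
	struct tool *record = NULL;
	struct tool *tool = tool_create();
	int ret = -1;

	if (!tool)
		goto out;
	if (tool_configure(tool))
		goto out_free;		/* no record yet: skip its teardown */
	record = record_create();
	if (!record)
		goto out_free;
	if (record_start(record))
		goto out_record;
	ret = 0;
out_record:
	record_destroy(record);		/* only reached once record is valid */
out_free:
	tool_free(tool);
out:
	return ret;
}

int main(void)
{
	printf("run() = %d\n", run());
	return 0;
}

Each label unwinds exactly the state that exists at the point of the matching goto, which is the invariant the original out_hist/out_top jumps violated.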
+diff --git a/tools/tracing/rtla/src/osnoise_top.c b/tools/tracing/rtla/src/osnoise_top.c
+index 72c2fd6ce005d..76479bfb29224 100644
+--- a/tools/tracing/rtla/src/osnoise_top.c
++++ b/tools/tracing/rtla/src/osnoise_top.c
+@@ -572,7 +572,7 @@ int osnoise_top_main(int argc, char **argv)
+ 	retval = osnoise_top_apply_config(tool, params);
+ 	if (retval) {
+ 		err_msg("Could not apply config\n");
+-		goto out_top;
++		goto out_free;
+ 	}
+ 
+ 	trace = &tool->trace;
+@@ -580,14 +580,14 @@ int osnoise_top_main(int argc, char **argv)
+ 	retval = enable_osnoise(trace);
+ 	if (retval) {
+ 		err_msg("Failed to enable osnoise tracer\n");
+-		goto out_top;
++		goto out_free;
+ 	}
+ 
+ 	if (params->set_sched) {
+ 		retval = set_comm_sched_attr("osnoise/", &params->sched_param);
+ 		if (retval) {
+ 			err_msg("Failed to set sched parameters\n");
+-			goto out_top;
++			goto out_free;
+ 		}
+ 	}
+ 
+@@ -597,7 +597,7 @@ int osnoise_top_main(int argc, char **argv)
+ 		record = osnoise_init_trace_tool("osnoise");
+ 		if (!record) {
+ 			err_msg("Failed to enable the trace instance\n");
+-			goto out_top;
++			goto out_free;
+ 		}
+ 
+ 		if (params->events) {
+@@ -649,6 +649,7 @@ int osnoise_top_main(int argc, char **argv)
+ out_top:
+ 	trace_events_destroy(&record->trace, params->events);
+ 	params->events = NULL;
++out_free:
+ 	osnoise_free_top(tool->data);
+ 	osnoise_destroy_tool(record);
+ 	osnoise_destroy_tool(tool);
+diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c
+index dc908126c610d..f3ec628f5e519 100644
+--- a/tools/tracing/rtla/src/timerlat_hist.c
++++ b/tools/tracing/rtla/src/timerlat_hist.c
+@@ -821,7 +821,7 @@ int timerlat_hist_main(int argc, char *argv[])
+ 	retval = timerlat_hist_apply_config(tool, params);
+ 	if (retval) {
+ 		err_msg("Could not apply config\n");
+-		goto out_hist;
++		goto out_free;
+ 	}
+ 
+ 	trace = &tool->trace;
+@@ -829,14 +829,14 @@ int timerlat_hist_main(int argc, char *argv[])
+ 	retval = enable_timerlat(trace);
+ 	if (retval) {
+ 		err_msg("Failed to enable timerlat tracer\n");
+-		goto out_hist;
++		goto out_free;
+ 	}
+ 
+ 	if (params->set_sched) {
+ 		retval = set_comm_sched_attr("timerlat/", &params->sched_param);
+ 		if (retval) {
+ 			err_msg("Failed to set sched parameters\n");
+-			goto out_hist;
++			goto out_free;
+ 		}
+ 	}
+ 
+@@ -844,7 +844,7 @@ int timerlat_hist_main(int argc, char *argv[])
+ 		dma_latency_fd = set_cpu_dma_latency(params->dma_latency);
+ 		if (dma_latency_fd < 0) {
+ 			err_msg("Could not set /dev/cpu_dma_latency.\n");
+-			goto out_hist;
++			goto out_free;
+ 		}
+ 	}
+ 
+@@ -854,7 +854,7 @@ int timerlat_hist_main(int argc, char *argv[])
+ 		record = osnoise_init_trace_tool("timerlat");
+ 		if (!record) {
+ 			err_msg("Failed to enable the trace instance\n");
+-			goto out_hist;
++			goto out_free;
+ 		}
+ 
+ 		if (params->events) {
+@@ -904,6 +904,7 @@ out_hist:
+ 		close(dma_latency_fd);
+ 	trace_events_destroy(&record->trace, params->events);
+ 	params->events = NULL;
++out_free:
+ 	timerlat_free_histogram(tool->data);
+ 	osnoise_destroy_tool(record);
+ 	osnoise_destroy_tool(tool);
+diff --git a/tools/tracing/rtla/src/timerlat_top.c b/tools/tracing/rtla/src/timerlat_top.c
+index 1f754c3df53f2..35452a1d45e9f 100644
+--- a/tools/tracing/rtla/src/timerlat_top.c
++++ b/tools/tracing/rtla/src/timerlat_top.c
+@@ -612,7 +612,7 @@ int timerlat_top_main(int argc, char *argv[])
+ 	retval = timerlat_top_apply_config(top, params);
+ 	if (retval) {
+ 		err_msg("Could not apply config\n");
+-		goto out_top;
++		goto out_free;
+ 	}
+ 
+ 	trace = &top->trace;
+@@ -620,14 +620,14 @@ int timerlat_top_main(int argc, char *argv[])
+ 	retval = enable_timerlat(trace);
+ 	if (retval) {
+ 		err_msg("Failed to enable timerlat tracer\n");
+-		goto out_top;
++		goto out_free;
+ 	}
+ 
+ 	if (params->set_sched) {
+ 		retval = set_comm_sched_attr("timerlat/", &params->sched_param);
+ 		if (retval) {
+ 			err_msg("Failed to set sched parameters\n");
+-			goto out_top;
++			goto out_free;
+ 		}
+ 	}
+ 
+@@ -635,7 +635,7 @@ int timerlat_top_main(int argc, char *argv[])
+ 		dma_latency_fd = set_cpu_dma_latency(params->dma_latency);
+ 		if (dma_latency_fd < 0) {
+ 			err_msg("Could not set /dev/cpu_dma_latency.\n");
+-			goto out_top;
++			goto out_free;
+ 		}
+ 	}
+ 
+@@ -645,7 +645,7 @@ int timerlat_top_main(int argc, char *argv[])
+ 		record = osnoise_init_trace_tool("timerlat");
+ 		if (!record) {
+ 			err_msg("Failed to enable the trace instance\n");
+-			goto out_top;
++			goto out_free;
+ 		}
+ 
+ 		if (params->events) {
+@@ -699,6 +699,7 @@ out_top:
+ 		close(dma_latency_fd);
+ 	trace_events_destroy(&record->trace, params->events);
+ 	params->events = NULL;
++out_free:
+ 	timerlat_free_top(top->data);
+ 	osnoise_destroy_tool(record);
+ 	osnoise_destroy_tool(top);
+diff --git a/tools/tracing/rtla/src/utils.c b/tools/tracing/rtla/src/utils.c
+index da2b590edaede..5352167a1e751 100644
+--- a/tools/tracing/rtla/src/utils.c
++++ b/tools/tracing/rtla/src/utils.c
+@@ -3,7 +3,7 @@
+  * Copyright (C) 2021 Red Hat Inc, Daniel Bristot de Oliveira <bristot@kernel.org>
+  */
+ 
+-#include <proc/readproc.h>
++#include <dirent.h>
+ #include <stdarg.h>
+ #include <stdlib.h>
+ #include <string.h>
+@@ -255,50 +255,114 @@ int __set_sched_attr(int pid, struct sched_attr *attr)
+ 
+ 	retval = sched_setattr(pid, attr, flags);
+ 	if (retval < 0) {
+-		err_msg("boost_with_deadline failed to boost pid %d: %s\n",
++		err_msg("Failed to set sched attributes to the pid %d: %s\n",
+ 			pid, strerror(errno));
+ 		return 1;
+ 	}
+ 
+ 	return 0;
+ }
++
++/*
++ * procfs_is_workload_pid - check if a procfs entry contains a comm_prefix* comm
++ *
++ * Check if the procfs entry is a directory of a process, and then check if the
++ * process has a comm with the prefix set in char *comm_prefix. As the
++ * current users of this function only check for kernel threads, there is no
++ * need to check the threads of the process.
++ *
++ * Return: True if the proc_entry contains a comm file with comm_prefix*.
++ * Otherwise returns false.
++ */
++static int procfs_is_workload_pid(const char *comm_prefix, struct dirent *proc_entry)
++{
++	char buffer[MAX_PATH];
++	int comm_fd, retval;
++	char *t_name;
++
++	if (proc_entry->d_type != DT_DIR)
++		return 0;
++
++	if (*proc_entry->d_name == '.')
++		return 0;
++
++	/* check if the string is a pid */
++	for (t_name = proc_entry->d_name; t_name; t_name++) {
++		if (!isdigit(*t_name))
++			break;
++	}
++
++	if (*t_name != '\0')
++		return 0;
++
++	snprintf(buffer, MAX_PATH, "/proc/%s/comm", proc_entry->d_name);
++	comm_fd = open(buffer, O_RDONLY);
++	if (comm_fd < 0)
++		return 0;
++
++	memset(buffer, 0, MAX_PATH);
++	retval = read(comm_fd, buffer, MAX_PATH);
++
++	close(comm_fd);
++
++	if (retval <= 0)
++		return 0;
++
++	retval = strncmp(comm_prefix, buffer, strlen(comm_prefix));
++	if (retval)
++		return 0;
++
++	/* comm already have \n */
++	debug_msg("Found workload pid:%s comm:%s", proc_entry->d_name, buffer);
++
++	return 1;
++}
++
+ /*
+- * set_comm_sched_attr - set sched params to threads starting with char *comm
++ * set_comm_sched_attr - set sched params to threads starting with char *comm_prefix
+  *
+- * This function uses procps to list the currently running threads and then
+- * set the sched_attr *attr to the threads that start with char *comm. It is
++ * This function uses procfs to list the currently running threads and then set the
++ * sched_attr *attr to the threads that start with char *comm_prefix. It is
+  * mainly used to set the priority to the kernel threads created by the
+  * tracers.
+  */
+-int set_comm_sched_attr(const char *comm, struct sched_attr *attr)
++int set_comm_sched_attr(const char *comm_prefix, struct sched_attr *attr)
+ {
+-	int flags = PROC_FILLCOM | PROC_FILLSTAT;
+-	PROCTAB *ptp;
+-	proc_t task;
++	struct dirent *proc_entry;
++	DIR *procfs;
+ 	int retval;
+ 
+-	ptp = openproc(flags);
+-	if (!ptp) {
+-		err_msg("error openproc()\n");
+-		return -ENOENT;
++	if (strlen(comm_prefix) >= MAX_PATH) {
++		err_msg("Command prefix is too long: %d < strlen(%s)\n",
++			MAX_PATH, comm_prefix);
++		return 1;
+ 	}
+ 
+-	memset(&task, 0, sizeof(task));
++	procfs = opendir("/proc");
++	if (!procfs) {
++		err_msg("Could not open procfs\n");
++		return 1;
++	}
+ 
+-	while (readproc(ptp, &task)) {
+-		retval = strncmp(comm, task.cmd, strlen(comm));
+-		if (retval)
++	while ((proc_entry = readdir(procfs))) {
++
++		retval = procfs_is_workload_pid(comm_prefix, proc_entry);
++		if (!retval)
+ 			continue;
+-		retval = __set_sched_attr(task.tid, attr);
+-		if (retval)
++
++		/* procfs_is_workload_pid confirmed it is a pid */
++		retval = __set_sched_attr(atoi(proc_entry->d_name), attr);
++		if (retval) {
++			err_msg("Error setting sched attributes for pid:%s\n", proc_entry->d_name);
+ 			goto out_err;
+-	}
++		}
+ 
+-	closeproc(ptp);
++		debug_msg("Set sched attributes for pid:%s\n", proc_entry->d_name);
++	}
+ 	return 0;
+ 
+ out_err:
+-	closeproc(ptp);
++	closedir(procfs);
+ 	return 1;
+ }
+ 
+diff --git a/tools/tracing/rtla/src/utils.h b/tools/tracing/rtla/src/utils.h
+index fa08e374870ac..5571afd3b5498 100644
+--- a/tools/tracing/rtla/src/utils.h
++++ b/tools/tracing/rtla/src/utils.h
+@@ -6,6 +6,7 @@
+  * '18446744073709551615\0'
+  */
+ #define BUFF_U64_STR_SIZE	24
++#define MAX_PATH		1024
+ 
+ #define container_of(ptr, type, member)({			\
+ 	const typeof(((type *)0)->member) *__mptr = (ptr);	\
+@@ -53,5 +54,5 @@ struct sched_attr {
+ };
+ 
+ int parse_prio(char *arg, struct sched_attr *sched_param);
+-int set_comm_sched_attr(const char *comm, struct sched_attr *attr);
++int set_comm_sched_attr(const char *comm_prefix, struct sched_attr *attr);
+ int set_cpu_dma_latency(int32_t latency);
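
The utils.c change above drops the procps-ng dependency in favor of walking
/proc directly: every non-empty, all-digit directory name under /proc is a
pid, and that pid's comm file holds the thread name. Below is a minimal
standalone sketch of the same technique, assuming a Linux procfs;
find_comm_pids() and the "timerlat/" prefix are illustrative choices, not
rtla symbols.

#include <ctype.h>
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Print every pid whose /proc/<pid>/comm starts with the given prefix. */
static void find_comm_pids(const char *prefix)
{
	char path[256], comm[256];
	struct dirent *entry;
	ssize_t n;
	DIR *procfs;
	int fd;

	procfs = opendir("/proc");
	if (!procfs)
		return;

	while ((entry = readdir(procfs))) {
		const char *p = entry->d_name;

		/* only non-empty, all-digit names are pids; this also skips . and .. */
		while (isdigit((unsigned char)*p))
			p++;
		if (*p != '\0' || p == entry->d_name)
			continue;

		snprintf(path, sizeof(path), "/proc/%s/comm", entry->d_name);
		fd = open(path, O_RDONLY);
		if (fd < 0)
			continue;	/* the task may already be gone */

		n = read(fd, comm, sizeof(comm) - 1);
		close(fd);
		if (n <= 0)
			continue;
		comm[n] = '\0';		/* comm already has a trailing '\n' */

		if (!strncmp(prefix, comm, strlen(prefix)))
			printf("pid %s -> %s", entry->d_name, comm);
	}
	closedir(procfs);
}

int main(void)
{
	find_comm_pids("timerlat/");
	return 0;
}

Racing against exiting tasks is expected here: a pid directory can vanish
between readdir() and open(), which is why open/read failures are skipped
silently rather than treated as errors.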


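The utils.h hunk above sits next to rtla's own struct sched_attr definition,
which exists because glibc has historically provided no sched_setattr()
wrapper: the call has to go through syscall(2). The sketch below shows that
pattern under the same assumption; do_sched_setattr() is an illustrative
name, and switching to SCHED_FIFO requires CAP_SYS_NICE (typically root).

#define _GNU_SOURCE
#include <sched.h>		/* SCHED_FIFO */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>	/* SYS_sched_setattr */
#include <unistd.h>

/* Local copy of the uapi layout, mirroring what rtla does, since older
 * glibc defines neither struct sched_attr nor a syscall wrapper. */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;	/* for SCHED_FIFO / SCHED_RR */
	uint64_t sched_runtime;		/* for SCHED_DEADLINE, in ns */
	uint64_t sched_deadline;
	uint64_t sched_period;
};

static int do_sched_setattr(pid_t pid, const struct sched_attr *attr,
			    unsigned int flags)
{
	return syscall(SYS_sched_setattr, pid, attr, flags);
}

int main(void)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_policy = SCHED_FIFO;
	attr.sched_priority = 1;	/* lowest FIFO priority */

	/* pid 0 means the calling thread; needs CAP_SYS_NICE */
	if (do_sched_setattr(0, &attr, 0)) {
		perror("sched_setattr");
		return 1;
	}
	printf("now running under SCHED_FIFO\n");
	return 0;
}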