public inbox for gentoo-commits@lists.gentoo.org
From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:6.5 commit in: /
Date: Fri,  6 Oct 2023 12:36:27 +0000 (UTC)
Message-ID: <1696595772.30b3090c6bab3a7fb130eb08ccddb446aea6aeed.mpagano@gentoo>

commit:     30b3090c6bab3a7fb130eb08ccddb446aea6aeed
Author:     Mike Pagano <mpagano@gentoo.org>
AuthorDate: Fri Oct  6 12:36:12 2023 +0000
Commit:     Mike Pagano <mpagano@gentoo.org>
CommitDate: Fri Oct  6 12:36:12 2023 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=30b3090c

Linux patch 6.5.6

Signed-off-by: Mike Pagano <mpagano@gentoo.org>

 0000_README            |     4 +
 1005_linux-6.5.6.patch | 12866 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 12870 insertions(+)

diff --git a/0000_README b/0000_README
index 46cf8e96..ffd65d42 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-6.5.5.patch
 From:   https://www.kernel.org
 Desc:   Linux 6.5.5
 
+Patch:  1005_linux-6.5.6.patch
+From:   https://www.kernel.org
+Desc:   Linux 6.5.6
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1005_linux-6.5.6.patch b/1005_linux-6.5.6.patch
new file mode 100644
index 00000000..1cf3da64
--- /dev/null
+++ b/1005_linux-6.5.6.patch
@@ -0,0 +1,12866 @@
+diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
+index fabaad3fd9c21..5f7189dc98537 100644
+--- a/Documentation/admin-guide/cgroup-v1/memory.rst
++++ b/Documentation/admin-guide/cgroup-v1/memory.rst
+@@ -92,8 +92,13 @@ Brief summary of control files.
+  memory.oom_control		     set/show oom controls.
+  memory.numa_stat		     show the number of memory usage per numa
+ 				     node
+- memory.kmem.limit_in_bytes          This knob is deprecated and writing to
+-                                     it will return -ENOTSUPP.
++ memory.kmem.limit_in_bytes          Deprecated knob to set and read the kernel
++                                     memory hard limit. Kernel hard limit is not
++                                     supported since 5.16. Writing any value to
++                                     this file has no effect, same as if the
++                                     nokmem kernel parameter was specified.
++                                     Kernel memory is still charged and reported
++                                     by memory.kmem.usage_in_bytes.
+  memory.kmem.usage_in_bytes          show current kernel memory allocation
+  memory.kmem.failcnt                 show the number of kernel memory usage
+ 				     hits limits
+diff --git a/Documentation/sound/designs/midi-2.0.rst b/Documentation/sound/designs/midi-2.0.rst
+index 27d0d3dea1b0a..d91fdad524f1f 100644
+--- a/Documentation/sound/designs/midi-2.0.rst
++++ b/Documentation/sound/designs/midi-2.0.rst
+@@ -74,8 +74,8 @@ topology based on those information.  When the device is older and
+ doesn't respond to the new UMP inquiries, the driver falls back and
+ builds the topology based on Group Terminal Block (GTB) information
+ from the USB descriptor.  Some device might be screwed up by the
+-unexpected UMP command; in such a case, pass `midi2_probe=0` option to
+-snd-usb-audio driver for skipping the UMP v1.1 inquiries.
++unexpected UMP command; in such a case, pass the `midi2_ump_probe=0`
++option to the snd-usb-audio driver to skip the UMP v1.1 inquiries.
+ 
+ When the MIDI 2.0 device is probed, the kernel creates a rawmidi
+ device for each UMP Endpoint of the device.  Its device name is
+diff --git a/Makefile b/Makefile
+index 7545d2b0e7b71..81f14b15592f0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 5
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Hurr durr I'ma ninja sloth
+ 
+diff --git a/arch/arm/boot/dts/ti/omap/motorola-mapphone-common.dtsi b/arch/arm/boot/dts/ti/omap/motorola-mapphone-common.dtsi
+index 091ba310053eb..d69f0f4b4990d 100644
+--- a/arch/arm/boot/dts/ti/omap/motorola-mapphone-common.dtsi
++++ b/arch/arm/boot/dts/ti/omap/motorola-mapphone-common.dtsi
+@@ -614,12 +614,12 @@
+ /* Configure pwm clock source for timers 8 & 9 */
+ &timer8 {
+ 	assigned-clocks = <&abe_clkctrl OMAP4_TIMER8_CLKCTRL 24>;
+-	assigned-clock-parents = <&sys_clkin_ck>;
++	assigned-clock-parents = <&sys_32k_ck>;
+ };
+ 
+ &timer9 {
+ 	assigned-clocks = <&l4_per_clkctrl OMAP4_TIMER9_CLKCTRL 24>;
+-	assigned-clock-parents = <&sys_clkin_ck>;
++	assigned-clock-parents = <&sys_32k_ck>;
+ };
+ 
+ /*
+diff --git a/arch/arm/boot/dts/ti/omap/omap3-cpu-thermal.dtsi b/arch/arm/boot/dts/ti/omap/omap3-cpu-thermal.dtsi
+index 0da759f8e2c2d..7dd2340bc5e45 100644
+--- a/arch/arm/boot/dts/ti/omap/omap3-cpu-thermal.dtsi
++++ b/arch/arm/boot/dts/ti/omap/omap3-cpu-thermal.dtsi
+@@ -12,8 +12,7 @@ cpu_thermal: cpu-thermal {
+ 	polling-delay = <1000>; /* milliseconds */
+ 	coefficients = <0 20000>;
+ 
+-			/* sensor       ID */
+-	thermal-sensors = <&bandgap     0>;
++	thermal-sensors = <&bandgap>;
+ 
+ 	cpu_trips: trips {
+ 		cpu_alert0: cpu_alert {
+diff --git a/arch/arm/boot/dts/ti/omap/omap4-cpu-thermal.dtsi b/arch/arm/boot/dts/ti/omap/omap4-cpu-thermal.dtsi
+index 801b4f10350c1..d484ec1e4fd86 100644
+--- a/arch/arm/boot/dts/ti/omap/omap4-cpu-thermal.dtsi
++++ b/arch/arm/boot/dts/ti/omap/omap4-cpu-thermal.dtsi
+@@ -12,7 +12,10 @@ cpu_thermal: cpu_thermal {
+ 	polling-delay-passive = <250>; /* milliseconds */
+ 	polling-delay = <1000>; /* milliseconds */
+ 
+-			/* sensor       ID */
++	/*
++	 * See 44xx files for single sensor addressing; omap5 and dra7 also
++	 * need the sensor ID for addressing.
++	 */
+ 	thermal-sensors = <&bandgap     0>;
+ 
+ 	cpu_trips: trips {
+diff --git a/arch/arm/boot/dts/ti/omap/omap443x.dtsi b/arch/arm/boot/dts/ti/omap/omap443x.dtsi
+index 238aceb799f89..2104170fe2cd7 100644
+--- a/arch/arm/boot/dts/ti/omap/omap443x.dtsi
++++ b/arch/arm/boot/dts/ti/omap/omap443x.dtsi
+@@ -69,6 +69,7 @@
+ };
+ 
+ &cpu_thermal {
++	thermal-sensors = <&bandgap>;
+ 	coefficients = <0 20000>;
+ };
+ 
+diff --git a/arch/arm/boot/dts/ti/omap/omap4460.dtsi b/arch/arm/boot/dts/ti/omap/omap4460.dtsi
+index 1b27a862ae810..a6764750d4476 100644
+--- a/arch/arm/boot/dts/ti/omap/omap4460.dtsi
++++ b/arch/arm/boot/dts/ti/omap/omap4460.dtsi
+@@ -79,6 +79,7 @@
+ };
+ 
+ &cpu_thermal {
++	thermal-sensors = <&bandgap>;
+ 	coefficients = <348 (-9301)>;
+ };
+ 
+diff --git a/arch/arm64/boot/dts/freescale/Makefile b/arch/arm64/boot/dts/freescale/Makefile
+index a750be13ace89..cf32922c97619 100644
+--- a/arch/arm64/boot/dts/freescale/Makefile
++++ b/arch/arm64/boot/dts/freescale/Makefile
+@@ -66,6 +66,7 @@ dtb-$(CONFIG_ARCH_MXC) += imx8mm-mx8menlo.dtb
+ dtb-$(CONFIG_ARCH_MXC) += imx8mm-nitrogen-r2.dtb
+ dtb-$(CONFIG_ARCH_MXC) += imx8mm-phg.dtb
+ dtb-$(CONFIG_ARCH_MXC) += imx8mm-phyboard-polis-rdk.dtb
++dtb-$(CONFIG_ARCH_MXC) += imx8mm-prt8mm.dtb
+ dtb-$(CONFIG_ARCH_MXC) += imx8mm-tqma8mqml-mba8mx.dtb
+ dtb-$(CONFIG_ARCH_MXC) += imx8mm-var-som-symphony.dtb
+ dtb-$(CONFIG_ARCH_MXC) += imx8mm-venice-gw71xx-0x.dtb
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-evk.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-evk.dtsi
+index df8e808ac4739..6752c30274369 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-evk.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-evk.dtsi
+@@ -26,7 +26,7 @@
+ 
+ 		port {
+ 			hdmi_connector_in: endpoint {
+-				remote-endpoint = <&adv7533_out>;
++				remote-endpoint = <&adv7535_out>;
+ 			};
+ 		};
+ 	};
+@@ -72,6 +72,13 @@
+ 		enable-active-high;
+ 	};
+ 
++	reg_vddext_3v3: regulator-vddext-3v3 {
++		compatible = "regulator-fixed";
++		regulator-name = "VDDEXT_3V3";
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++	};
++
+ 	backlight: backlight {
+ 		compatible = "pwm-backlight";
+ 		pwms = <&pwm1 0 5000000 0>;
+@@ -317,15 +324,16 @@
+ 
+ 	hdmi@3d {
+ 		compatible = "adi,adv7535";
+-		reg = <0x3d>, <0x3c>, <0x3e>, <0x3f>;
+-		reg-names = "main", "cec", "edid", "packet";
++		reg = <0x3d>;
++		interrupt-parent = <&gpio1>;
++		interrupts = <9 IRQ_TYPE_EDGE_FALLING>;
+ 		adi,dsi-lanes = <4>;
+-
+-		adi,input-depth = <8>;
+-		adi,input-colorspace = "rgb";
+-		adi,input-clock = "1x";
+-		adi,input-style = <1>;
+-		adi,input-justification = "evenly";
++		avdd-supply = <&buck5_reg>;
++		dvdd-supply = <&buck5_reg>;
++		pvdd-supply = <&buck5_reg>;
++		a2vdd-supply = <&buck5_reg>;
++		v3p3-supply = <&reg_vddext_3v3>;
++		v1p2-supply = <&buck5_reg>;
+ 
+ 		ports {
+ 			#address-cells = <1>;
+@@ -334,7 +342,7 @@
+ 			port@0 {
+ 				reg = <0>;
+ 
+-				adv7533_in: endpoint {
++				adv7535_in: endpoint {
+ 					remote-endpoint = <&dsi_out>;
+ 				};
+ 			};
+@@ -342,7 +350,7 @@
+ 			port@1 {
+ 				reg = <1>;
+ 
+-				adv7533_out: endpoint {
++				adv7535_out: endpoint {
+ 					remote-endpoint = <&hdmi_connector_in>;
+ 				};
+ 			};
+@@ -408,7 +416,7 @@
+ 			reg = <1>;
+ 
+ 			dsi_out: endpoint {
+-				remote-endpoint = <&adv7533_in>;
++				remote-endpoint = <&adv7535_in>;
+ 				data-lanes = <1 2 3 4>;
+ 			};
+ 		};
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-beacon-kit.dts b/arch/arm64/boot/dts/freescale/imx8mp-beacon-kit.dts
+index 06e91297fb163..acd265d8b58ed 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-beacon-kit.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mp-beacon-kit.dts
+@@ -381,9 +381,10 @@
+ &sai3 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_sai3>;
+-	assigned-clocks = <&clk IMX8MP_CLK_SAI3>;
++	assigned-clocks = <&clk IMX8MP_CLK_SAI3>,
++			  <&clk IMX8MP_AUDIO_PLL2>;
+ 	assigned-clock-parents = <&clk IMX8MP_AUDIO_PLL2_OUT>;
+-	assigned-clock-rates = <12288000>;
++	assigned-clock-rates = <12288000>, <361267200>;
+ 	fsl,sai-mclk-direction-output;
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+index cc406bb338feb..587265395a9b4 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+@@ -794,6 +794,12 @@
+ 						reg = <IMX8MP_POWER_DOMAIN_AUDIOMIX>;
+ 						clocks = <&clk IMX8MP_CLK_AUDIO_ROOT>,
+ 							 <&clk IMX8MP_CLK_AUDIO_AXI>;
++						assigned-clocks = <&clk IMX8MP_CLK_AUDIO_AHB>,
++								  <&clk IMX8MP_CLK_AUDIO_AXI_SRC>;
++						assigned-clock-parents =  <&clk IMX8MP_SYS_PLL1_800M>,
++									  <&clk IMX8MP_SYS_PLL1_800M>;
++						assigned-clock-rates = <400000000>,
++								       <600000000>;
+ 					};
+ 
+ 					pgc_gpu2d: power-domain@6 {
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+index d6b464cb61d6f..f546f6f57c1e5 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
++++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+@@ -101,6 +101,14 @@
+ 		};
+ 	};
+ 
++	reserved-memory {
++		/* Cont splash region set up by the bootloader */
++		cont_splash_mem: framebuffer@9d400000 {
++			reg = <0x0 0x9d400000 0x0 0x2400000>;
++			no-map;
++		};
++	};
++
+ 	lt9611_1v8: lt9611-vdd18-regulator {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "LT9611_1V8";
+@@ -506,6 +514,7 @@
+ };
+ 
+ &mdss {
++	memory-region = <&cont_splash_mem>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index d8bae57af16d5..02adc6ceb8316 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -1145,7 +1145,6 @@ CONFIG_COMMON_CLK_S2MPS11=y
+ CONFIG_COMMON_CLK_PWM=y
+ CONFIG_COMMON_CLK_RS9_PCIE=y
+ CONFIG_COMMON_CLK_VC5=y
+-CONFIG_COMMON_CLK_NPCM8XX=y
+ CONFIG_COMMON_CLK_BD718XX=m
+ CONFIG_CLK_RASPBERRYPI=m
+ CONFIG_CLK_IMX8MM=y
+diff --git a/arch/loongarch/include/asm/addrspace.h b/arch/loongarch/include/asm/addrspace.h
+index 5c9c03bdf9156..b24437e28c6ed 100644
+--- a/arch/loongarch/include/asm/addrspace.h
++++ b/arch/loongarch/include/asm/addrspace.h
+@@ -19,7 +19,7 @@
+  */
+ #ifndef __ASSEMBLY__
+ #ifndef PHYS_OFFSET
+-#define PHYS_OFFSET	_AC(0, UL)
++#define PHYS_OFFSET	_UL(0)
+ #endif
+ extern unsigned long vm_map_base;
+ #endif /* __ASSEMBLY__ */
+@@ -43,7 +43,7 @@ extern unsigned long vm_map_base;
+  * Memory above this physical address will be considered highmem.
+  */
+ #ifndef HIGHMEM_START
+-#define HIGHMEM_START		(_AC(1, UL) << _AC(DMW_PABITS, UL))
++#define HIGHMEM_START		(_UL(1) << _UL(DMW_PABITS))
+ #endif
+ 
+ #define TO_PHYS(x)		(		((x) & TO_PHYS_MASK))
+@@ -65,16 +65,16 @@ extern unsigned long vm_map_base;
+ #define _ATYPE_
+ #define _ATYPE32_
+ #define _ATYPE64_
+-#define _CONST64_(x)	x
+ #else
+ #define _ATYPE_		__PTRDIFF_TYPE__
+ #define _ATYPE32_	int
+ #define _ATYPE64_	__s64
++#endif
++
+ #ifdef CONFIG_64BIT
+-#define _CONST64_(x)	x ## UL
++#define _CONST64_(x)	_UL(x)
+ #else
+-#define _CONST64_(x)	x ## ULL
+-#endif
++#define _CONST64_(x)	_ULL(x)
+ #endif
+ 
+ /*
+diff --git a/arch/loongarch/include/asm/elf.h b/arch/loongarch/include/asm/elf.h
+index 7af0cebf28d73..b9a4ab54285c1 100644
+--- a/arch/loongarch/include/asm/elf.h
++++ b/arch/loongarch/include/asm/elf.h
+@@ -111,6 +111,15 @@
+ #define R_LARCH_TLS_GD_HI20			98
+ #define R_LARCH_32_PCREL			99
+ #define R_LARCH_RELAX				100
++#define R_LARCH_DELETE				101
++#define R_LARCH_ALIGN				102
++#define R_LARCH_PCREL20_S2			103
++#define R_LARCH_CFA				104
++#define R_LARCH_ADD6				105
++#define R_LARCH_SUB6				106
++#define R_LARCH_ADD_ULEB128			107
++#define R_LARCH_SUB_ULEB128			108
++#define R_LARCH_64_PCREL			109
+ 
+ #ifndef ELF_ARCH
+ 
+diff --git a/arch/loongarch/kernel/mem.c b/arch/loongarch/kernel/mem.c
+index 4a4107a6a9651..aed901c57fb43 100644
+--- a/arch/loongarch/kernel/mem.c
++++ b/arch/loongarch/kernel/mem.c
+@@ -50,7 +50,6 @@ void __init memblock_init(void)
+ 	}
+ 
+ 	memblock_set_current_limit(PFN_PHYS(max_low_pfn));
+-	memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0);
+ 
+ 	/* Reserve the first 2MB */
+ 	memblock_reserve(PHYS_OFFSET, 0x200000);
+@@ -58,4 +57,7 @@ void __init memblock_init(void)
+ 	/* Reserve the kernel text/data/bss */
+ 	memblock_reserve(__pa_symbol(&_text),
+ 			 __pa_symbol(&_end) - __pa_symbol(&_text));
++
++	memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0);
++	memblock_set_node(0, PHYS_ADDR_MAX, &memblock.reserved, 0);
+ }
+diff --git a/arch/loongarch/kernel/module.c b/arch/loongarch/kernel/module.c
+index b8b86088b2dd2..b13b2858fe392 100644
+--- a/arch/loongarch/kernel/module.c
++++ b/arch/loongarch/kernel/module.c
+@@ -367,6 +367,24 @@ static int apply_r_larch_got_pc(struct module *mod,
+ 	return apply_r_larch_pcala(mod, location, got, rela_stack, rela_stack_top, type);
+ }
+ 
++static int apply_r_larch_32_pcrel(struct module *mod, u32 *location, Elf_Addr v,
++				  s64 *rela_stack, size_t *rela_stack_top, unsigned int type)
++{
++	ptrdiff_t offset = (void *)v - (void *)location;
++
++	*(u32 *)location = offset;
++	return 0;
++}
++
++static int apply_r_larch_64_pcrel(struct module *mod, u32 *location, Elf_Addr v,
++				  s64 *rela_stack, size_t *rela_stack_top, unsigned int type)
++{
++	ptrdiff_t offset = (void *)v - (void *)location;
++
++	*(u64 *)location = offset;
++	return 0;
++}
++
+ /*
+  * reloc_handlers_rela() - Apply a particular relocation to a module
+  * @mod: the module to apply the reloc to
+@@ -382,7 +400,7 @@ typedef int (*reloc_rela_handler)(struct module *mod, u32 *location, Elf_Addr v,
+ 
+ /* The handlers for known reloc types */
+ static reloc_rela_handler reloc_rela_handlers[] = {
+-	[R_LARCH_NONE ... R_LARCH_RELAX]		     = apply_r_larch_error,
++	[R_LARCH_NONE ... R_LARCH_64_PCREL]		     = apply_r_larch_error,
+ 
+ 	[R_LARCH_NONE]					     = apply_r_larch_none,
+ 	[R_LARCH_32]					     = apply_r_larch_32,
+@@ -396,6 +414,8 @@ static reloc_rela_handler reloc_rela_handlers[] = {
+ 	[R_LARCH_SOP_POP_32_S_10_5 ... R_LARCH_SOP_POP_32_U] = apply_r_larch_sop_imm_field,
+ 	[R_LARCH_ADD32 ... R_LARCH_SUB64]		     = apply_r_larch_add_sub,
+ 	[R_LARCH_PCALA_HI20...R_LARCH_PCALA64_HI12]	     = apply_r_larch_pcala,
++	[R_LARCH_32_PCREL]				     = apply_r_larch_32_pcrel,
++	[R_LARCH_64_PCREL]				     = apply_r_larch_64_pcrel,
+ };
+ 
+ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
+diff --git a/arch/loongarch/kernel/numa.c b/arch/loongarch/kernel/numa.c
+index 708665895b47d..c75faaa205b8a 100644
+--- a/arch/loongarch/kernel/numa.c
++++ b/arch/loongarch/kernel/numa.c
+@@ -468,7 +468,7 @@ void __init paging_init(void)
+ 
+ void __init mem_init(void)
+ {
+-	high_memory = (void *) __va(get_num_physpages() << PAGE_SHIFT);
++	high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
+ 	memblock_free_all();
+ 	setup_zero_pages();	/* This comes from node 0 */
+ }
+diff --git a/arch/loongarch/kernel/vmlinux.lds.S b/arch/loongarch/kernel/vmlinux.lds.S
+index b1686afcf8766..bb2ec86f37a8e 100644
+--- a/arch/loongarch/kernel/vmlinux.lds.S
++++ b/arch/loongarch/kernel/vmlinux.lds.S
+@@ -53,33 +53,6 @@ SECTIONS
+ 	. = ALIGN(PECOFF_SEGMENT_ALIGN);
+ 	_etext = .;
+ 
+-	/*
+-	 * struct alt_inst entries. From the header (alternative.h):
+-	 * "Alternative instructions for different CPU types or capabilities"
+-	 * Think locking instructions on spinlocks.
+-	 */
+-	. = ALIGN(4);
+-	.altinstructions : AT(ADDR(.altinstructions) - LOAD_OFFSET) {
+-		__alt_instructions = .;
+-		*(.altinstructions)
+-		__alt_instructions_end = .;
+-	}
+-
+-#ifdef CONFIG_RELOCATABLE
+-	. = ALIGN(8);
+-	.la_abs : AT(ADDR(.la_abs) - LOAD_OFFSET) {
+-		__la_abs_begin = .;
+-		*(.la_abs)
+-		__la_abs_end = .;
+-	}
+-#endif
+-
+-	.got : ALIGN(16) { *(.got) }
+-	.plt : ALIGN(16) { *(.plt) }
+-	.got.plt : ALIGN(16) { *(.got.plt) }
+-
+-	.data.rel : { *(.data.rel*) }
+-
+ 	. = ALIGN(PECOFF_SEGMENT_ALIGN);
+ 	__init_begin = .;
+ 	__inittext_begin = .;
+@@ -94,6 +67,18 @@ SECTIONS
+ 
+ 	__initdata_begin = .;
+ 
++	/*
++	 * struct alt_inst entries. From the header (alternative.h):
++	 * "Alternative instructions for different CPU types or capabilities"
++	 * Think locking instructions on spinlocks.
++	 */
++	. = ALIGN(4);
++	.altinstructions : AT(ADDR(.altinstructions) - LOAD_OFFSET) {
++		__alt_instructions = .;
++		*(.altinstructions)
++		__alt_instructions_end = .;
++	}
++
+ 	INIT_DATA_SECTION(16)
+ 	.exit.data : {
+ 		EXIT_DATA
+@@ -113,6 +98,11 @@ SECTIONS
+ 
+ 	_sdata = .;
+ 	RO_DATA(4096)
++
++	.got : ALIGN(16) { *(.got) }
++	.plt : ALIGN(16) { *(.plt) }
++	.got.plt : ALIGN(16) { *(.got.plt) }
++
+ 	RW_DATA(1 << CONFIG_L1_CACHE_SHIFT, PAGE_SIZE, THREAD_SIZE)
+ 
+ 	.rela.dyn : ALIGN(8) {
+@@ -121,6 +111,17 @@ SECTIONS
+ 		__rela_dyn_end = .;
+ 	}
+ 
++	.data.rel : { *(.data.rel*) }
++
++#ifdef CONFIG_RELOCATABLE
++	. = ALIGN(8);
++	.la_abs : AT(ADDR(.la_abs) - LOAD_OFFSET) {
++		__la_abs_begin = .;
++		*(.la_abs)
++		__la_abs_end = .;
++	}
++#endif
++
+ 	.sdata : {
+ 		*(.sdata)
+ 	}
+diff --git a/arch/mips/alchemy/devboards/db1000.c b/arch/mips/alchemy/devboards/db1000.c
+index 012da042d0a4f..7b9f91db227f2 100644
+--- a/arch/mips/alchemy/devboards/db1000.c
++++ b/arch/mips/alchemy/devboards/db1000.c
+@@ -164,6 +164,7 @@ static struct platform_device db1x00_audio_dev = {
+ 
+ /******************************************************************************/
+ 
++#ifdef CONFIG_MMC_AU1X
+ static irqreturn_t db1100_mmc_cd(int irq, void *ptr)
+ {
+ 	mmc_detect_change(ptr, msecs_to_jiffies(500));
+@@ -369,6 +370,7 @@ static struct platform_device db1100_mmc1_dev = {
+ 	.num_resources	= ARRAY_SIZE(au1100_mmc1_res),
+ 	.resource	= au1100_mmc1_res,
+ };
++#endif /* CONFIG_MMC_AU1X */
+ 
+ /******************************************************************************/
+ 
+@@ -440,8 +442,10 @@ static struct platform_device *db1x00_devs[] = {
+ 
+ static struct platform_device *db1100_devs[] = {
+ 	&au1100_lcd_device,
++#ifdef CONFIG_MMC_AU1X
+ 	&db1100_mmc0_dev,
+ 	&db1100_mmc1_dev,
++#endif
+ };
+ 
+ int __init db1000_dev_setup(void)
+diff --git a/arch/mips/alchemy/devboards/db1200.c b/arch/mips/alchemy/devboards/db1200.c
+index 76080c71a2a7b..f521874ebb07b 100644
+--- a/arch/mips/alchemy/devboards/db1200.c
++++ b/arch/mips/alchemy/devboards/db1200.c
+@@ -326,6 +326,7 @@ static struct platform_device db1200_ide_dev = {
+ 
+ /**********************************************************************/
+ 
++#ifdef CONFIG_MMC_AU1X
+ /* SD carddetects:  they're supposed to be edge-triggered, but ack
+  * doesn't seem to work (CPLD Rev 2).  Instead, the screaming one
+  * is disabled and its counterpart enabled.  The 200ms timeout is
+@@ -584,6 +585,7 @@ static struct platform_device pb1200_mmc1_dev = {
+ 	.num_resources	= ARRAY_SIZE(au1200_mmc1_res),
+ 	.resource	= au1200_mmc1_res,
+ };
++#endif /* CONFIG_MMC_AU1X */
+ 
+ /**********************************************************************/
+ 
+@@ -751,7 +753,9 @@ static struct platform_device db1200_audiodma_dev = {
+ static struct platform_device *db1200_devs[] __initdata = {
+ 	NULL,		/* PSC0, selected by S6.8 */
+ 	&db1200_ide_dev,
++#ifdef CONFIG_MMC_AU1X
+ 	&db1200_mmc0_dev,
++#endif
+ 	&au1200_lcd_dev,
+ 	&db1200_eth_dev,
+ 	&db1200_nand_dev,
+@@ -762,7 +766,9 @@ static struct platform_device *db1200_devs[] __initdata = {
+ };
+ 
+ static struct platform_device *pb1200_devs[] __initdata = {
++#ifdef CONFIG_MMC_AU1X
+ 	&pb1200_mmc1_dev,
++#endif
+ };
+ 
+ /* Some peripheral base addresses differ on the PB1200 */
+diff --git a/arch/mips/alchemy/devboards/db1300.c b/arch/mips/alchemy/devboards/db1300.c
+index ff61901329c62..d377e043b49f8 100644
+--- a/arch/mips/alchemy/devboards/db1300.c
++++ b/arch/mips/alchemy/devboards/db1300.c
+@@ -450,6 +450,7 @@ static struct platform_device db1300_ide_dev = {
+ 
+ /**********************************************************************/
+ 
++#ifdef CONFIG_MMC_AU1X
+ static irqreturn_t db1300_mmc_cd(int irq, void *ptr)
+ {
+ 	disable_irq_nosync(irq);
+@@ -632,6 +633,7 @@ static struct platform_device db1300_sd0_dev = {
+ 	.resource	= au1300_sd0_res,
+ 	.num_resources	= ARRAY_SIZE(au1300_sd0_res),
+ };
++#endif /* CONFIG_MMC_AU1X */
+ 
+ /**********************************************************************/
+ 
+@@ -767,8 +769,10 @@ static struct platform_device *db1300_dev[] __initdata = {
+ 	&db1300_5waysw_dev,
+ 	&db1300_nand_dev,
+ 	&db1300_ide_dev,
++#ifdef CONFIG_MMC_AU1X
+ 	&db1300_sd0_dev,
+ 	&db1300_sd1_dev,
++#endif
+ 	&db1300_lcd_dev,
+ 	&db1300_ac97_dev,
+ 	&db1300_i2s_dev,
+diff --git a/arch/parisc/include/asm/ropes.h b/arch/parisc/include/asm/ropes.h
+index 8e51c775c80a6..c46ad399a74f2 100644
+--- a/arch/parisc/include/asm/ropes.h
++++ b/arch/parisc/include/asm/ropes.h
+@@ -29,7 +29,7 @@
+ struct ioc {
+ 	void __iomem	*ioc_hpa;	/* I/O MMU base address */
+ 	char		*res_map;	/* resource map, bit == pdir entry */
+-	u64		*pdir_base;	/* physical base address */
++	__le64		*pdir_base;	/* physical base address */
+ 	unsigned long	ibase;		/* pdir IOV Space base - shared w/lba_pci */
+ 	unsigned long	imask;		/* pdir IOV Space mask - shared w/lba_pci */
+ #ifdef ZX1_SUPPORT
+@@ -86,6 +86,9 @@ struct sba_device {
+ 	struct ioc		ioc[MAX_IOC];
+ };
+ 
++/* list of SBA's in system, see drivers/parisc/sba_iommu.c */
++extern struct sba_device *sba_list;
++
+ #define ASTRO_RUNWAY_PORT	0x582
+ #define IKE_MERCED_PORT		0x803
+ #define REO_MERCED_PORT		0x804
+@@ -110,7 +113,7 @@ static inline int IS_PLUTO(struct parisc_device *d) {
+ 
+ #define SBA_PDIR_VALID_BIT	0x8000000000000000ULL
+ 
+-#define SBA_AGPGART_COOKIE	0x0000badbadc0ffeeULL
++#define SBA_AGPGART_COOKIE	(__force __le64) 0x0000badbadc0ffeeULL
+ 
+ #define SBA_FUNC_ID	0x0000	/* function id */
+ #define SBA_FCLASS	0x0008	/* function class, bist, header, rev... */
+diff --git a/arch/parisc/kernel/drivers.c b/arch/parisc/kernel/drivers.c
+index 8f4b77648491a..ed8b759480614 100644
+--- a/arch/parisc/kernel/drivers.c
++++ b/arch/parisc/kernel/drivers.c
+@@ -925,9 +925,9 @@ static __init void qemu_header(void)
+ 	pr_info("#define PARISC_MODEL \"%s\"\n\n",
+ 			boot_cpu_data.pdc.sys_model_name);
+ 
++	#define p ((unsigned long *)&boot_cpu_data.pdc.model)
+ 	pr_info("#define PARISC_PDC_MODEL 0x%lx, 0x%lx, 0x%lx, "
+ 		"0x%lx, 0x%lx, 0x%lx, 0x%lx, 0x%lx, 0x%lx\n\n",
+-	#define p ((unsigned long *)&boot_cpu_data.pdc.model)
+ 		p[0], p[1], p[2], p[3], p[4], p[5], p[6], p[7], p[8]);
+ 	#undef p
+ 
+diff --git a/arch/parisc/kernel/irq.c b/arch/parisc/kernel/irq.c
+index 12c4d4104ade4..2f81bfd4f15e1 100644
+--- a/arch/parisc/kernel/irq.c
++++ b/arch/parisc/kernel/irq.c
+@@ -365,7 +365,7 @@ union irq_stack_union {
+ 	volatile unsigned int lock[1];
+ };
+ 
+-DEFINE_PER_CPU(union irq_stack_union, irq_stack_union) = {
++static DEFINE_PER_CPU(union irq_stack_union, irq_stack_union) = {
+ 		.slock = { 1,1,1,1 },
+ 	};
+ #endif
+diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
+index e1b4e70c8fd0f..f432db3faa5b0 100644
+--- a/arch/powerpc/kernel/hw_breakpoint.c
++++ b/arch/powerpc/kernel/hw_breakpoint.c
+@@ -505,11 +505,13 @@ void thread_change_pc(struct task_struct *tsk, struct pt_regs *regs)
+ 	struct arch_hw_breakpoint *info;
+ 	int i;
+ 
++	preempt_disable();
++
+ 	for (i = 0; i < nr_wp_slots(); i++) {
+ 		if (unlikely(tsk->thread.last_hit_ubp[i]))
+ 			goto reset;
+ 	}
+-	return;
++	goto out;
+ 
+ reset:
+ 	regs_set_return_msr(regs, regs->msr & ~MSR_SE);
+@@ -518,6 +520,9 @@ reset:
+ 		__set_breakpoint(i, info);
+ 		tsk->thread.last_hit_ubp[i] = NULL;
+ 	}
++
++out:
++	preempt_enable();
+ }
+ 
+ static bool is_larx_stcx_instr(int type)
+@@ -632,6 +637,11 @@ static void handle_p10dd1_spurious_exception(struct arch_hw_breakpoint **info,
+ 	}
+ }
+ 
++/*
++ * Handle a DABR or DAWR exception.
++ *
++ * Called in atomic context.
++ */
+ int hw_breakpoint_handler(struct die_args *args)
+ {
+ 	bool err = false;
+@@ -758,6 +768,8 @@ NOKPROBE_SYMBOL(hw_breakpoint_handler);
+ 
+ /*
+  * Handle single-step exceptions following a DABR hit.
++ *
++ * Called in atomic context.
+  */
+ static int single_step_dabr_instruction(struct die_args *args)
+ {
+@@ -815,6 +827,8 @@ NOKPROBE_SYMBOL(single_step_dabr_instruction);
+ 
+ /*
+  * Handle debug exception notifications.
++ *
++ * Called in atomic context.
+  */
+ int hw_breakpoint_exceptions_notify(
+ 		struct notifier_block *unused, unsigned long val, void *data)
+diff --git a/arch/powerpc/kernel/hw_breakpoint_constraints.c b/arch/powerpc/kernel/hw_breakpoint_constraints.c
+index a74623025f3ab..9e51801c49152 100644
+--- a/arch/powerpc/kernel/hw_breakpoint_constraints.c
++++ b/arch/powerpc/kernel/hw_breakpoint_constraints.c
+@@ -131,8 +131,13 @@ void wp_get_instr_detail(struct pt_regs *regs, ppc_inst_t *instr,
+ 			 int *type, int *size, unsigned long *ea)
+ {
+ 	struct instruction_op op;
++	int err;
+ 
+-	if (__get_user_instr(*instr, (void __user *)regs->nip))
++	pagefault_disable();
++	err = __get_user_instr(*instr, (void __user *)regs->nip);
++	pagefault_enable();
++
++	if (err)
+ 		return;
+ 
+ 	analyse_instr(&op, regs, *instr);
+diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
+index b15f15dcacb5c..e6a958a5da276 100644
+--- a/arch/powerpc/kernel/stacktrace.c
++++ b/arch/powerpc/kernel/stacktrace.c
+@@ -73,29 +73,12 @@ int __no_sanitize_address arch_stack_walk_reliable(stack_trace_consume_fn consum
+ 	bool firstframe;
+ 
+ 	stack_end = stack_page + THREAD_SIZE;
+-	if (!is_idle_task(task)) {
+-		/*
+-		 * For user tasks, this is the SP value loaded on
+-		 * kernel entry, see "PACAKSAVE(r13)" in _switch() and
+-		 * system_call_common().
+-		 *
+-		 * Likewise for non-swapper kernel threads,
+-		 * this also happens to be the top of the stack
+-		 * as setup by copy_thread().
+-		 *
+-		 * Note that stack backlinks are not properly setup by
+-		 * copy_thread() and thus, a forked task() will have
+-		 * an unreliable stack trace until it's been
+-		 * _switch()'ed to for the first time.
+-		 */
+-		stack_end -= STACK_USER_INT_FRAME_SIZE;
+-	} else {
+-		/*
+-		 * idle tasks have a custom stack layout,
+-		 * c.f. cpu_idle_thread_init().
+-		 */
++
++	// See copy_thread() for details.
++	if (task->flags & PF_KTHREAD)
+ 		stack_end -= STACK_FRAME_MIN_SIZE;
+-	}
++	else
++		stack_end -= STACK_USER_INT_FRAME_SIZE;
+ 
+ 	if (task == current)
+ 		sp = current_stack_frame();
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index 7ef147e2a20d7..109b93874df92 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -1512,23 +1512,11 @@ static void do_program_check(struct pt_regs *regs)
+ 			return;
+ 		}
+ 
+-		if (cpu_has_feature(CPU_FTR_DEXCR_NPHIE) && user_mode(regs)) {
+-			ppc_inst_t insn;
+-
+-			if (get_user_instr(insn, (void __user *)regs->nip)) {
+-				_exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip);
+-				return;
+-			}
+-
+-			if (ppc_inst_primary_opcode(insn) == 31 &&
+-			    get_xop(ppc_inst_val(insn)) == OP_31_XOP_HASHCHK) {
+-				_exception(SIGILL, regs, ILL_ILLOPN, regs->nip);
+-				return;
+-			}
++		/* User mode considers other cases after enabling IRQs */
++		if (!user_mode(regs)) {
++			_exception(SIGTRAP, regs, TRAP_BRKPT, regs->nip);
++			return;
+ 		}
+-
+-		_exception(SIGTRAP, regs, TRAP_BRKPT, regs->nip);
+-		return;
+ 	}
+ #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+ 	if (reason & REASON_TM) {
+@@ -1561,16 +1549,44 @@ static void do_program_check(struct pt_regs *regs)
+ 
+ 	/*
+ 	 * If we took the program check in the kernel skip down to sending a
+-	 * SIGILL. The subsequent cases all relate to emulating instructions
+-	 * which we should only do for userspace. We also do not want to enable
+-	 * interrupts for kernel faults because that might lead to further
+-	 * faults, and loose the context of the original exception.
++	 * SIGILL. The subsequent cases all relate to user space, such as
++	 * emulating instructions which we should only do for user space. We
++	 * also do not want to enable interrupts for kernel faults because that
++	 * might lead to further faults, and lose the context of the original
++	 * exception.
+ 	 */
+ 	if (!user_mode(regs))
+ 		goto sigill;
+ 
+ 	interrupt_cond_local_irq_enable(regs);
+ 
++	/*
++	 * (reason & REASON_TRAP) is mostly handled before enabling IRQs,
++	 * except get_user_instr() can sleep so we cannot reliably inspect the
++	 * current instruction in that context. Now that we know we are
++	 * handling a user space trap and can sleep, we can check if the trap
++	 * was a hashchk failure.
++	 */
++	if (reason & REASON_TRAP) {
++		if (cpu_has_feature(CPU_FTR_DEXCR_NPHIE)) {
++			ppc_inst_t insn;
++
++			if (get_user_instr(insn, (void __user *)regs->nip)) {
++				_exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip);
++				return;
++			}
++
++			if (ppc_inst_primary_opcode(insn) == 31 &&
++			    get_xop(ppc_inst_val(insn)) == OP_31_XOP_HASHCHK) {
++				_exception(SIGILL, regs, ILL_ILLOPN, regs->nip);
++				return;
++			}
++		}
++
++		_exception(SIGTRAP, regs, TRAP_BRKPT, regs->nip);
++		return;
++	}
++
+ 	/* (reason & REASON_ILLEGAL) would be the obvious thing here,
+ 	 * but there seems to be a hardware bug on the 405GP (RevD)
+ 	 * that means ESR is sometimes set incorrectly - either to
+diff --git a/arch/powerpc/perf/hv-24x7.c b/arch/powerpc/perf/hv-24x7.c
+index 317175791d23c..3449be7c0d51f 100644
+--- a/arch/powerpc/perf/hv-24x7.c
++++ b/arch/powerpc/perf/hv-24x7.c
+@@ -1418,7 +1418,7 @@ static int h_24x7_event_init(struct perf_event *event)
+ 	}
+ 
+ 	domain = event_get_domain(event);
+-	if (domain >= HV_PERF_DOMAIN_MAX) {
++	if (domain == 0 || domain >= HV_PERF_DOMAIN_MAX) {
+ 		pr_devel("invalid domain %d\n", domain);
+ 		return -EINVAL;
+ 	}
+diff --git a/arch/riscv/include/asm/errata_list.h b/arch/riscv/include/asm/errata_list.h
+index fb1a810f3d8ce..feab334dd8329 100644
+--- a/arch/riscv/include/asm/errata_list.h
++++ b/arch/riscv/include/asm/errata_list.h
+@@ -100,7 +100,7 @@ asm volatile(ALTERNATIVE(						\
+  * | 31 - 25 | 24 - 20 | 19 - 15 | 14 - 12 | 11 - 7 | 6 - 0 |
+  *   0000001    01001      rs1       000      00000  0001011
+  * dcache.cva rs1 (clean, virtual address)
+- *   0000001    00100      rs1       000      00000  0001011
++ *   0000001    00101      rs1       000      00000  0001011
+  *
+  * dcache.cipa rs1 (clean then invalidate, physical address)
+  * | 31 - 25 | 24 - 20 | 19 - 15 | 14 - 12 | 11 - 7 | 6 - 0 |
+@@ -113,7 +113,7 @@ asm volatile(ALTERNATIVE(						\
+  *   0000000    11001     00000      000      00000  0001011
+  */
+ #define THEAD_inval_A0	".long 0x0265000b"
+-#define THEAD_clean_A0	".long 0x0245000b"
++#define THEAD_clean_A0	".long 0x0255000b"
+ #define THEAD_flush_A0	".long 0x0275000b"
+ #define THEAD_SYNC_S	".long 0x0190000b"
+ 
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index e36261b4ea14f..68ce4f786dcd1 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1955,6 +1955,7 @@ config EFI
+ 	select UCS2_STRING
+ 	select EFI_RUNTIME_WRAPPERS
+ 	select ARCH_USE_MEMREMAP_PROT
++	select EFI_RUNTIME_MAP if KEXEC_CORE
+ 	help
+ 	  This enables the kernel to use EFI runtime services that are
+ 	  available (such as the EFI variable services).
+@@ -2030,7 +2031,6 @@ config EFI_MAX_FAKE_MEM
+ config EFI_RUNTIME_MAP
+ 	bool "Export EFI runtime maps to sysfs" if EXPERT
+ 	depends on EFI
+-	default KEXEC_CORE
+ 	help
+ 	  Export EFI runtime memory regions to /sys/firmware/efi/runtime-map.
+ 	  That memory map is required by the 2nd kernel to set up EFI virtual
+diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
+index 5b77bbc28f969..819046974b997 100644
+--- a/arch/x86/include/asm/kexec.h
++++ b/arch/x86/include/asm/kexec.h
+@@ -205,8 +205,6 @@ int arch_kimage_file_post_load_cleanup(struct kimage *image);
+ #endif
+ #endif
+ 
+-typedef void crash_vmclear_fn(void);
+-extern crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss;
+ extern void kdump_nmi_shootdown_cpus(void);
+ 
+ #endif /* __ASSEMBLY__ */
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 3bc146dfd38da..f72b30d2238a6 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1400,7 +1400,6 @@ struct kvm_arch {
+ 	 * the thread holds the MMU lock in write mode.
+ 	 */
+ 	spinlock_t tdp_mmu_pages_lock;
+-	struct workqueue_struct *tdp_mmu_zap_wq;
+ #endif /* CONFIG_X86_64 */
+ 
+ 	/*
+@@ -1814,7 +1813,7 @@ void kvm_mmu_vendor_module_exit(void);
+ 
+ void kvm_mmu_destroy(struct kvm_vcpu *vcpu);
+ int kvm_mmu_create(struct kvm_vcpu *vcpu);
+-int kvm_mmu_init_vm(struct kvm *kvm);
++void kvm_mmu_init_vm(struct kvm *kvm);
+ void kvm_mmu_uninit_vm(struct kvm *kvm);
+ 
+ void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h
+index 5ff49fd67732e..571fe4d2d2328 100644
+--- a/arch/x86/include/asm/linkage.h
++++ b/arch/x86/include/asm/linkage.h
+@@ -105,6 +105,13 @@
+ 	CFI_POST_PADDING					\
+ 	SYM_FUNC_END(__cfi_##name)
+ 
++/* UML needs to be able to override memcpy() and friends for KASAN. */
++#ifdef CONFIG_UML
++# define SYM_FUNC_ALIAS_MEMFUNC	SYM_FUNC_ALIAS_WEAK
++#else
++# define SYM_FUNC_ALIAS_MEMFUNC	SYM_FUNC_ALIAS
++#endif
++
+ /* SYM_TYPED_FUNC_START -- use for indirectly called globals, w/ CFI type */
+ #define SYM_TYPED_FUNC_START(name)				\
+ 	SYM_TYPED_START(name, SYM_L_GLOBAL, SYM_F_ALIGN)	\
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index fd750247ca891..9e26294e415c8 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -676,12 +676,10 @@ extern u16 get_llc_id(unsigned int cpu);
+ #ifdef CONFIG_CPU_SUP_AMD
+ extern u32 amd_get_nodes_per_socket(void);
+ extern u32 amd_get_highest_perf(void);
+-extern bool cpu_has_ibpb_brtype_microcode(void);
+ extern void amd_clear_divider(void);
+ #else
+ static inline u32 amd_get_nodes_per_socket(void)	{ return 0; }
+ static inline u32 amd_get_highest_perf(void)		{ return 0; }
+-static inline bool cpu_has_ibpb_brtype_microcode(void)	{ return false; }
+ static inline void amd_clear_divider(void)		{ }
+ #endif
+ 
+diff --git a/arch/x86/include/asm/reboot.h b/arch/x86/include/asm/reboot.h
+index 9177b4354c3f5..dc201724a6433 100644
+--- a/arch/x86/include/asm/reboot.h
++++ b/arch/x86/include/asm/reboot.h
+@@ -25,6 +25,8 @@ void __noreturn machine_real_restart(unsigned int type);
+ #define MRR_BIOS	0
+ #define MRR_APM		1
+ 
++typedef void crash_vmclear_fn(void);
++extern crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss;
+ void cpu_emergency_disable_virtualization(void);
+ 
+ typedef void (*nmi_shootdown_cb)(int, struct pt_regs*);
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 7eca6a8abbb1c..28e77c5d6484a 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -766,6 +766,15 @@ static void early_init_amd(struct cpuinfo_x86 *c)
+ 
+ 	if (cpu_has(c, X86_FEATURE_TOPOEXT))
+ 		smp_num_siblings = ((cpuid_ebx(0x8000001e) >> 8) & 0xff) + 1;
++
++	if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && !cpu_has(c, X86_FEATURE_IBPB_BRTYPE)) {
++		if (c->x86 == 0x17 && boot_cpu_has(X86_FEATURE_AMD_IBPB))
++			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
++		else if (c->x86 >= 0x19 && !wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
++			setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
++			setup_force_cpu_cap(X86_FEATURE_SBPB);
++		}
++	}
+ }
+ 
+ static void init_amd_k8(struct cpuinfo_x86 *c)
+@@ -1301,25 +1310,6 @@ void amd_check_microcode(void)
+ 	on_each_cpu(zenbleed_check_cpu, NULL, 1);
+ }
+ 
+-bool cpu_has_ibpb_brtype_microcode(void)
+-{
+-	switch (boot_cpu_data.x86) {
+-	/* Zen1/2 IBPB flushes branch type predictions too. */
+-	case 0x17:
+-		return boot_cpu_has(X86_FEATURE_AMD_IBPB);
+-	case 0x19:
+-		/* Poke the MSR bit on Zen3/4 to check its presence. */
+-		if (!wrmsrl_safe(MSR_IA32_PRED_CMD, PRED_CMD_SBPB)) {
+-			setup_force_cpu_cap(X86_FEATURE_SBPB);
+-			return true;
+-		} else {
+-			return false;
+-		}
+-	default:
+-		return false;
+-	}
+-}
+-
+ /*
+  * Issue a DIV 0/1 insn to clear any division data from previous DIV
+  * operations.
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index f081d26616ac1..10499bcd4e396 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -2404,26 +2404,15 @@ early_param("spec_rstack_overflow", srso_parse_cmdline);
+ 
+ static void __init srso_select_mitigation(void)
+ {
+-	bool has_microcode;
++	bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
+ 
+ 	if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
+ 		goto pred_cmd;
+ 
+-	/*
+-	 * The first check is for the kernel running as a guest in order
+-	 * for guests to verify whether IBPB is a viable mitigation.
+-	 */
+-	has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) || cpu_has_ibpb_brtype_microcode();
+ 	if (!has_microcode) {
+ 		pr_warn("IBPB-extending microcode not applied!\n");
+ 		pr_warn(SRSO_NOTICE);
+ 	} else {
+-		/*
+-		 * Enable the synthetic (even if in a real CPUID leaf)
+-		 * flags for guests.
+-		 */
+-		setup_force_cpu_cap(X86_FEATURE_IBPB_BRTYPE);
+-
+ 		/*
+ 		 * Zen1/2 with SMT off aren't vulnerable after the right
+ 		 * IBPB microcode has been applied.
+@@ -2444,7 +2433,7 @@ static void __init srso_select_mitigation(void)
+ 
+ 	switch (srso_cmd) {
+ 	case SRSO_CMD_OFF:
+-		return;
++		goto pred_cmd;
+ 
+ 	case SRSO_CMD_MICROCODE:
+ 		if (has_microcode) {
+@@ -2717,7 +2706,7 @@ static ssize_t srso_show_state(char *buf)
+ 
+ 	return sysfs_emit(buf, "%s%s\n",
+ 			  srso_strings[srso_mitigation],
+-			  (cpu_has_ibpb_brtype_microcode() ? "" : ", no microcode"));
++			  boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
+ }
+ 
+ static ssize_t gds_show_state(char *buf)
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 00f043a094fcd..6acfe9037a8b6 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1288,7 +1288,7 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_AMD(0x15, RETBLEED),
+ 	VULNBL_AMD(0x16, RETBLEED),
+ 	VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO),
+-	VULNBL_HYGON(0x18, RETBLEED | SMT_RSB),
++	VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO),
+ 	VULNBL_AMD(0x19, SRSO),
+ 	{}
+ };
+diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
+index 91fa70e510041..279148e724596 100644
+--- a/arch/x86/kernel/cpu/sgx/encl.c
++++ b/arch/x86/kernel/cpu/sgx/encl.c
+@@ -235,6 +235,21 @@ static struct sgx_epc_page *sgx_encl_eldu(struct sgx_encl_page *encl_page,
+ 	return epc_page;
+ }
+ 
++/*
++ * Ensure the SECS page is not swapped out.  Must be called with encl->lock
++ * held to protect the enclave states, including SECS, and to ensure the
++ * SECS page is not swapped out again while being used.
++ */
++static struct sgx_epc_page *sgx_encl_load_secs(struct sgx_encl *encl)
++{
++	struct sgx_epc_page *epc_page = encl->secs.epc_page;
++
++	if (!epc_page)
++		epc_page = sgx_encl_eldu(&encl->secs, NULL);
++
++	return epc_page;
++}
++
+ static struct sgx_encl_page *__sgx_encl_load_page(struct sgx_encl *encl,
+ 						  struct sgx_encl_page *entry)
+ {
+@@ -248,11 +263,9 @@ static struct sgx_encl_page *__sgx_encl_load_page(struct sgx_encl *encl,
+ 		return entry;
+ 	}
+ 
+-	if (!(encl->secs.epc_page)) {
+-		epc_page = sgx_encl_eldu(&encl->secs, NULL);
+-		if (IS_ERR(epc_page))
+-			return ERR_CAST(epc_page);
+-	}
++	epc_page = sgx_encl_load_secs(encl);
++	if (IS_ERR(epc_page))
++		return ERR_CAST(epc_page);
+ 
+ 	epc_page = sgx_encl_eldu(entry, encl->secs.epc_page);
+ 	if (IS_ERR(epc_page))
+@@ -339,6 +352,13 @@ static vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
+ 
+ 	mutex_lock(&encl->lock);
+ 
++	epc_page = sgx_encl_load_secs(encl);
++	if (IS_ERR(epc_page)) {
++		if (PTR_ERR(epc_page) == -EBUSY)
++			vmret = VM_FAULT_NOPAGE;
++		goto err_out_unlock;
++	}
++
+ 	epc_page = sgx_alloc_epc_page(encl_page, false);
+ 	if (IS_ERR(epc_page)) {
+ 		if (PTR_ERR(epc_page) == -EBUSY)
+diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
+index cdd92ab43cda4..54cd959cb3160 100644
+--- a/arch/x86/kernel/crash.c
++++ b/arch/x86/kernel/crash.c
+@@ -48,38 +48,12 @@ struct crash_memmap_data {
+ 	unsigned int type;
+ };
+ 
+-/*
+- * This is used to VMCLEAR all VMCSs loaded on the
+- * processor. And when loading kvm_intel module, the
+- * callback function pointer will be assigned.
+- *
+- * protected by rcu.
+- */
+-crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss = NULL;
+-EXPORT_SYMBOL_GPL(crash_vmclear_loaded_vmcss);
+-
+-static inline void cpu_crash_vmclear_loaded_vmcss(void)
+-{
+-	crash_vmclear_fn *do_vmclear_operation = NULL;
+-
+-	rcu_read_lock();
+-	do_vmclear_operation = rcu_dereference(crash_vmclear_loaded_vmcss);
+-	if (do_vmclear_operation)
+-		do_vmclear_operation();
+-	rcu_read_unlock();
+-}
+-
+ #if defined(CONFIG_SMP) && defined(CONFIG_X86_LOCAL_APIC)
+ 
+ static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
+ {
+ 	crash_save_cpu(regs, cpu);
+ 
+-	/*
+-	 * VMCLEAR VMCSs loaded on all cpus if needed.
+-	 */
+-	cpu_crash_vmclear_loaded_vmcss();
+-
+ 	/*
+ 	 * Disable Intel PT to stop its logging
+ 	 */
+@@ -133,11 +107,6 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
+ 
+ 	crash_smp_send_stop();
+ 
+-	/*
+-	 * VMCLEAR VMCSs loaded on this cpu if needed.
+-	 */
+-	cpu_crash_vmclear_loaded_vmcss();
+-
+ 	cpu_emergency_disable_virtualization();
+ 
+ 	/*
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index 3adbe97015c13..3fa4c6717a1db 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -787,6 +787,26 @@ void machine_crash_shutdown(struct pt_regs *regs)
+ }
+ #endif
+ 
++/*
++ * This is used to VMCLEAR all VMCSs loaded on the processor, and the
++ * callback function pointer is assigned when the kvm_intel module is
++ * loaded.
++ *
++ * Protected by RCU.
++ */
++crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss;
++EXPORT_SYMBOL_GPL(crash_vmclear_loaded_vmcss);
++
++static inline void cpu_crash_vmclear_loaded_vmcss(void)
++{
++	crash_vmclear_fn *do_vmclear_operation = NULL;
++
++	rcu_read_lock();
++	do_vmclear_operation = rcu_dereference(crash_vmclear_loaded_vmcss);
++	if (do_vmclear_operation)
++		do_vmclear_operation();
++	rcu_read_unlock();
++}
+ 
+ /* This is the CPU performing the emergency shutdown work. */
+ int crashing_cpu = -1;
+@@ -798,6 +818,8 @@ int crashing_cpu = -1;
+  */
+ void cpu_emergency_disable_virtualization(void)
+ {
++	cpu_crash_vmclear_loaded_vmcss();
++
+ 	cpu_emergency_vmxoff();
+ 	cpu_emergency_svm_disable();
+ }
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index fd975a4a52006..aa0df37c1fe72 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -359,15 +359,11 @@ static void __init add_early_ima_buffer(u64 phys_addr)
+ #if defined(CONFIG_HAVE_IMA_KEXEC) && !defined(CONFIG_OF_FLATTREE)
+ int __init ima_free_kexec_buffer(void)
+ {
+-	int rc;
+-
+ 	if (!ima_kexec_buffer_size)
+ 		return -ENOENT;
+ 
+-	rc = memblock_phys_free(ima_kexec_buffer_phys,
+-				ima_kexec_buffer_size);
+-	if (rc)
+-		return rc;
++	memblock_free_late(ima_kexec_buffer_phys,
++			   ima_kexec_buffer_size);
+ 
+ 	ima_kexec_buffer_phys = 0;
+ 	ima_kexec_buffer_size = 0;
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index ec169f5c7dce2..ec85e84d66ac3 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -6206,21 +6206,17 @@ static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
+ 	kvm_mmu_zap_all_fast(kvm);
+ }
+ 
+-int kvm_mmu_init_vm(struct kvm *kvm)
++void kvm_mmu_init_vm(struct kvm *kvm)
+ {
+ 	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
+-	int r;
+ 
+ 	INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
+ 	INIT_LIST_HEAD(&kvm->arch.zapped_obsolete_pages);
+ 	INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);
+ 	spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
+ 
+-	if (tdp_mmu_enabled) {
+-		r = kvm_mmu_init_tdp_mmu(kvm);
+-		if (r < 0)
+-			return r;
+-	}
++	if (tdp_mmu_enabled)
++		kvm_mmu_init_tdp_mmu(kvm);
+ 
+ 	node->track_write = kvm_mmu_pte_write;
+ 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
+@@ -6233,8 +6229,6 @@ int kvm_mmu_init_vm(struct kvm *kvm)
+ 
+ 	kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
+ 	kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
+-
+-	return 0;
+ }
+ 
+ static void mmu_free_vm_memory_caches(struct kvm *kvm)
+@@ -6294,7 +6288,6 @@ static bool kvm_rmap_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_e
+ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
+ {
+ 	bool flush;
+-	int i;
+ 
+ 	if (WARN_ON_ONCE(gfn_end <= gfn_start))
+ 		return;
+@@ -6305,11 +6298,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
+ 
+ 	flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end);
+ 
+-	if (tdp_mmu_enabled) {
+-		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
+-			flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start,
+-						      gfn_end, true, flush);
+-	}
++	if (tdp_mmu_enabled)
++		flush = kvm_tdp_mmu_zap_leafs(kvm, gfn_start, gfn_end, flush);
+ 
+ 	if (flush)
+ 		kvm_flush_remote_tlbs_range(kvm, gfn_start, gfn_end - gfn_start);
+diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
+index d39af5639ce97..198e5b5f5ab06 100644
+--- a/arch/x86/kvm/mmu/mmu_internal.h
++++ b/arch/x86/kvm/mmu/mmu_internal.h
+@@ -56,7 +56,12 @@ struct kvm_mmu_page {
+ 
+ 	bool tdp_mmu_page;
+ 	bool unsync;
+-	u8 mmu_valid_gen;
++	union {
++		u8 mmu_valid_gen;
++
++		/* Only accessed under slots_lock.  */
++		bool tdp_mmu_scheduled_root_to_zap;
++	};
+ 
+ 	 /*
+ 	  * The shadow page can't be replaced by an equivalent huge page
+@@ -98,13 +103,7 @@ struct kvm_mmu_page {
+ 		struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
+ 		tdp_ptep_t ptep;
+ 	};
+-	union {
+-		DECLARE_BITMAP(unsync_child_bitmap, 512);
+-		struct {
+-			struct work_struct tdp_mmu_async_work;
+-			void *tdp_mmu_async_data;
+-		};
+-	};
++	DECLARE_BITMAP(unsync_child_bitmap, 512);
+ 
+ 	/*
+ 	 * Tracks shadow pages that, if zapped, would allow KVM to create an NX
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index 512163d52194b..a423078350fda 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -12,18 +12,10 @@
+ #include <trace/events/kvm.h>
+ 
+ /* Initializes the TDP MMU for the VM, if enabled. */
+-int kvm_mmu_init_tdp_mmu(struct kvm *kvm)
++void kvm_mmu_init_tdp_mmu(struct kvm *kvm)
+ {
+-	struct workqueue_struct *wq;
+-
+-	wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0);
+-	if (!wq)
+-		return -ENOMEM;
+-
+ 	INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots);
+ 	spin_lock_init(&kvm->arch.tdp_mmu_pages_lock);
+-	kvm->arch.tdp_mmu_zap_wq = wq;
+-	return 1;
+ }
+ 
+ /* Arbitrarily returns true so that this may be used in if statements. */
+@@ -46,20 +38,15 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
+ 	 * ultimately frees all roots.
+ 	 */
+ 	kvm_tdp_mmu_invalidate_all_roots(kvm);
+-
+-	/*
+-	 * Destroying a workqueue also first flushes the workqueue, i.e. no
+-	 * need to invoke kvm_tdp_mmu_zap_invalidated_roots().
+-	 */
+-	destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
++	kvm_tdp_mmu_zap_invalidated_roots(kvm);
+ 
+ 	WARN_ON(atomic64_read(&kvm->arch.tdp_mmu_pages));
+ 	WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots));
+ 
+ 	/*
+ 	 * Ensure that all the outstanding RCU callbacks to free shadow pages
+-	 * can run before the VM is torn down.  Work items on tdp_mmu_zap_wq
+-	 * can call kvm_tdp_mmu_put_root and create new callbacks.
++	 * can run before the VM is torn down.  Putting the last reference to
++	 * zapped roots will create new callbacks.
+ 	 */
+ 	rcu_barrier();
+ }
+@@ -86,46 +73,6 @@ static void tdp_mmu_free_sp_rcu_callback(struct rcu_head *head)
+ 	tdp_mmu_free_sp(sp);
+ }
+ 
+-static void tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
+-			     bool shared);
+-
+-static void tdp_mmu_zap_root_work(struct work_struct *work)
+-{
+-	struct kvm_mmu_page *root = container_of(work, struct kvm_mmu_page,
+-						 tdp_mmu_async_work);
+-	struct kvm *kvm = root->tdp_mmu_async_data;
+-
+-	read_lock(&kvm->mmu_lock);
+-
+-	/*
+-	 * A TLB flush is not necessary as KVM performs a local TLB flush when
+-	 * allocating a new root (see kvm_mmu_load()), and when migrating vCPU
+-	 * to a different pCPU.  Note, the local TLB flush on reuse also
+-	 * invalidates any paging-structure-cache entries, i.e. TLB entries for
+-	 * intermediate paging structures, that may be zapped, as such entries
+-	 * are associated with the ASID on both VMX and SVM.
+-	 */
+-	tdp_mmu_zap_root(kvm, root, true);
+-
+-	/*
+-	 * Drop the refcount using kvm_tdp_mmu_put_root() to test its logic for
+-	 * avoiding an infinite loop.  By design, the root is reachable while
+-	 * it's being asynchronously zapped, thus a different task can put its
+-	 * last reference, i.e. flowing through kvm_tdp_mmu_put_root() for an
+-	 * asynchronously zapped root is unavoidable.
+-	 */
+-	kvm_tdp_mmu_put_root(kvm, root, true);
+-
+-	read_unlock(&kvm->mmu_lock);
+-}
+-
+-static void tdp_mmu_schedule_zap_root(struct kvm *kvm, struct kvm_mmu_page *root)
+-{
+-	root->tdp_mmu_async_data = kvm;
+-	INIT_WORK(&root->tdp_mmu_async_work, tdp_mmu_zap_root_work);
+-	queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work);
+-}
+-
+ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
+ 			  bool shared)
+ {
+@@ -211,8 +158,12 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
+ #define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared)	\
+ 	__for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, true)
+ 
+-#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id)			\
+-	__for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, false, false)
++#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _shared)			\
++	for (_root = tdp_mmu_next_root(_kvm, NULL, _shared, false);		\
++	     _root;								\
++	     _root = tdp_mmu_next_root(_kvm, _root, _shared, false))		\
++		if (!kvm_lockdep_assert_mmu_lock_held(_kvm, _shared)) {		\
++		} else
+ 
+ /*
+  * Iterate over all TDP MMU roots.  Requires that mmu_lock be held for write,
+@@ -292,7 +243,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
+ 	 * by a memslot update or by the destruction of the VM.  Initialize the
+ 	 * refcount to two; one reference for the vCPU, and one reference for
+ 	 * the TDP MMU itself, which is held until the root is invalidated and
+-	 * is ultimately put by tdp_mmu_zap_root_work().
++	 * is ultimately put by kvm_tdp_mmu_zap_invalidated_roots().
+ 	 */
+ 	refcount_set(&root->tdp_mmu_root_count, 2);
+ 
+@@ -877,13 +828,12 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
+  * true if a TLB flush is needed before releasing the MMU lock, i.e. if one or
+  * more SPTEs were zapped since the MMU lock was last acquired.
+  */
+-bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
+-			   bool can_yield, bool flush)
++bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush)
+ {
+ 	struct kvm_mmu_page *root;
+ 
+-	for_each_tdp_mmu_root_yield_safe(kvm, root, as_id)
+-		flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, flush);
++	for_each_tdp_mmu_root_yield_safe(kvm, root, false)
++		flush = tdp_mmu_zap_leafs(kvm, root, start, end, true, flush);
+ 
+ 	return flush;
+ }
+@@ -891,7 +841,6 @@ bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end,
+ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
+ {
+ 	struct kvm_mmu_page *root;
+-	int i;
+ 
+ 	/*
+ 	 * Zap all roots, including invalid roots, as all SPTEs must be dropped
+@@ -905,10 +854,8 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
+ 	 * is being destroyed or the userspace VMM has exited.  In both cases,
+ 	 * KVM_RUN is unreachable, i.e. no vCPUs will ever service the request.
+ 	 */
+-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+-		for_each_tdp_mmu_root_yield_safe(kvm, root, i)
+-			tdp_mmu_zap_root(kvm, root, false);
+-	}
++	for_each_tdp_mmu_root_yield_safe(kvm, root, false)
++		tdp_mmu_zap_root(kvm, root, false);
+ }
+ 
+ /*
+@@ -917,18 +864,47 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
+  */
+ void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
+ {
+-	flush_workqueue(kvm->arch.tdp_mmu_zap_wq);
++	struct kvm_mmu_page *root;
++
++	read_lock(&kvm->mmu_lock);
++
++	for_each_tdp_mmu_root_yield_safe(kvm, root, true) {
++		if (!root->tdp_mmu_scheduled_root_to_zap)
++			continue;
++
++		root->tdp_mmu_scheduled_root_to_zap = false;
++		KVM_BUG_ON(!root->role.invalid, kvm);
++
++		/*
++		 * A TLB flush is not necessary as KVM performs a local TLB
++		 * flush when allocating a new root (see kvm_mmu_load()), and
++		 * when migrating a vCPU to a different pCPU.  Note, the local
++		 * TLB flush on reuse also invalidates paging-structure-cache
++		 * entries, i.e. TLB entries for intermediate paging structures,
++		 * that may be zapped, as such entries are associated with the
++		 * ASID on both VMX and SVM.
++		 */
++		tdp_mmu_zap_root(kvm, root, true);
++
++		/*
++		 * The reference needs to be put *after* zapping the root, as
++		 * the root must be reachable by mmu_notifiers while it's being
++		 * zapped.
++		 */
++		kvm_tdp_mmu_put_root(kvm, root, true);
++	}
++
++	read_unlock(&kvm->mmu_lock);
+ }
+ 
+ /*
+  * Mark each TDP MMU root as invalid to prevent vCPUs from reusing a root that
+  * is about to be zapped, e.g. in response to a memslots update.  The actual
+- * zapping is performed asynchronously.  Using a separate workqueue makes it
+- * easy to ensure that the destruction is performed before the "fast zap"
+- * completes, without keeping a separate list of invalidated roots; the list is
+- * effectively the list of work items in the workqueue.
++ * zapping is done separately so that it happens with mmu_lock held for read,
++ * whereas invalidating roots must be done with mmu_lock held for write (unless
++ * the VM is being destroyed).
+  *
+- * Note, the asynchronous worker is gifted the TDP MMU's reference.
++ * Note, kvm_tdp_mmu_zap_invalidated_roots() is gifted the TDP MMU's reference.
+  * See kvm_tdp_mmu_get_vcpu_root_hpa().
+  */
+ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
+@@ -953,19 +929,20 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
+ 	/*
+ 	 * As above, mmu_lock isn't held when destroying the VM!  There can't
+ 	 * be other references to @kvm, i.e. nothing else can invalidate roots
+-	 * or be consuming roots, but walking the list of roots does need to be
+-	 * guarded against roots being deleted by the asynchronous zap worker.
++	 * or get/put references to roots.
+ 	 */
+-	rcu_read_lock();
+-
+-	list_for_each_entry_rcu(root, &kvm->arch.tdp_mmu_roots, link) {
++	list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
++		/*
++		 * Note, invalid roots can outlive a memslot update!  Invalid
++		 * roots must be *zapped* before the memslot update completes,
++		 * but a different task can acquire a reference and keep the
++		 * root alive after it's been zapped.
++		 */
+ 		if (!root->role.invalid) {
++			root->tdp_mmu_scheduled_root_to_zap = true;
+ 			root->role.invalid = true;
+-			tdp_mmu_schedule_zap_root(kvm, root);
+ 		}
+ 	}
+-
+-	rcu_read_unlock();
+ }
+ 
+ /*
+@@ -1146,8 +1123,13 @@ retry:
+ bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range,
+ 				 bool flush)
+ {
+-	return kvm_tdp_mmu_zap_leafs(kvm, range->slot->as_id, range->start,
+-				     range->end, range->may_block, flush);
++	struct kvm_mmu_page *root;
++
++	__for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false, false)
++		flush = tdp_mmu_zap_leafs(kvm, root, range->start, range->end,
++					  range->may_block, flush);
++
++	return flush;
+ }
+ 
+ typedef bool (*tdp_handler_t)(struct kvm *kvm, struct tdp_iter *iter,
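
Taken together, the tdp_mmu.c hunks above replace the per-root zap workqueue with a two-phase scheme: invalidation merely flags roots (with mmu_lock held for write, or with the VM already unreachable), and the deferred zap later walks the root list with mmu_lock held for read. A minimal sketch of that pattern, with hypothetical zap_root()/put_root() helpers standing in for the KVM internals:

struct root {
	struct list_head link;
	bool invalid;
	bool scheduled_zap;
};

/* Phase 1: cheap; runs with the lock held for write (or with the VM
 * unreachable) and only flags roots, gifting a reference to the
 * eventual zapper. */
static void invalidate_all_roots(struct list_head *roots)
{
	struct root *r;

	list_for_each_entry(r, roots, link) {
		if (!r->invalid) {
			r->scheduled_zap = true;
			r->invalid = true;
		}
	}
}

/* Phase 2: expensive; runs with the lock held for read so vCPUs and
 * mmu_notifiers can still make progress during the teardown. */
static void zap_invalidated_roots(rwlock_t *lock, struct list_head *roots)
{
	struct root *r, *tmp;

	read_lock(lock);
	list_for_each_entry_safe(r, tmp, roots, link) {
		if (!r->scheduled_zap)
			continue;
		r->scheduled_zap = false;
		zap_root(r);	/* heavy work, may yield */
		put_root(r);	/* drop the reference gifted in phase 1 */
	}
	read_unlock(lock);
}
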
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
+index 0a63b1afabd3c..733a3aef3a96e 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.h
++++ b/arch/x86/kvm/mmu/tdp_mmu.h
+@@ -7,7 +7,7 @@
+ 
+ #include "spte.h"
+ 
+-int kvm_mmu_init_tdp_mmu(struct kvm *kvm);
++void kvm_mmu_init_tdp_mmu(struct kvm *kvm);
+ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);
+ 
+ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
+@@ -20,8 +20,7 @@ __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
+ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
+ 			  bool shared);
+ 
+-bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start,
+-				 gfn_t end, bool can_yield, bool flush);
++bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush);
+ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp);
+ void kvm_tdp_mmu_zap_all(struct kvm *kvm);
+ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm);
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index cefb67a8c668c..ed1d9de522f4e 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -2945,6 +2945,32 @@ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in)
+ 				    count, in);
+ }
+ 
++static void sev_es_vcpu_after_set_cpuid(struct vcpu_svm *svm)
++{
++	struct kvm_vcpu *vcpu = &svm->vcpu;
++
++	if (boot_cpu_has(X86_FEATURE_V_TSC_AUX)) {
++		bool v_tsc_aux = guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) ||
++				 guest_cpuid_has(vcpu, X86_FEATURE_RDPID);
++
++		set_msr_interception(vcpu, svm->msrpm, MSR_TSC_AUX, v_tsc_aux, v_tsc_aux);
++	}
++}
++
++void sev_vcpu_after_set_cpuid(struct vcpu_svm *svm)
++{
++	struct kvm_vcpu *vcpu = &svm->vcpu;
++	struct kvm_cpuid_entry2 *best;
++
++	/* For sev guests, the memory encryption bit is not reserved in CR3.  */
++	best = kvm_find_cpuid_entry(vcpu, 0x8000001F);
++	if (best)
++		vcpu->arch.reserved_gpa_bits &= ~(1UL << (best->ebx & 0x3f));
++
++	if (sev_es_guest(svm->vcpu.kvm))
++		sev_es_vcpu_after_set_cpuid(svm);
++}
++
+ static void sev_es_init_vmcb(struct vcpu_svm *svm)
+ {
+ 	struct kvm_vcpu *vcpu = &svm->vcpu;
+@@ -2991,14 +3017,6 @@ static void sev_es_init_vmcb(struct vcpu_svm *svm)
+ 	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 1, 1);
+ 	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 1, 1);
+ 	set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1);
+-
+-	if (boot_cpu_has(X86_FEATURE_V_TSC_AUX) &&
+-	    (guest_cpuid_has(&svm->vcpu, X86_FEATURE_RDTSCP) ||
+-	     guest_cpuid_has(&svm->vcpu, X86_FEATURE_RDPID))) {
+-		set_msr_interception(vcpu, svm->msrpm, MSR_TSC_AUX, 1, 1);
+-		if (guest_cpuid_has(&svm->vcpu, X86_FEATURE_RDTSCP))
+-			svm_clr_intercept(svm, INTERCEPT_RDTSCP);
+-	}
+ }
+ 
+ void sev_init_vmcb(struct vcpu_svm *svm)
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index e3acccc126166..e3d92670c1115 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -4217,7 +4217,6 @@ static bool svm_has_emulated_msr(struct kvm *kvm, u32 index)
+ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_svm *svm = to_svm(vcpu);
+-	struct kvm_cpuid_entry2 *best;
+ 
+ 	vcpu->arch.xsaves_enabled = guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) &&
+ 				    boot_cpu_has(X86_FEATURE_XSAVE) &&
+@@ -4252,12 +4251,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
+ 		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_FLUSH_CMD, 0,
+ 				     !!guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D));
+ 
+-	/* For sev guests, the memory encryption bit is not reserved in CR3.  */
+-	if (sev_guest(vcpu->kvm)) {
+-		best = kvm_find_cpuid_entry(vcpu, 0x8000001F);
+-		if (best)
+-			vcpu->arch.reserved_gpa_bits &= ~(1UL << (best->ebx & 0x3f));
+-	}
++	if (sev_guest(vcpu->kvm))
++		sev_vcpu_after_set_cpuid(svm);
+ 
+ 	init_vmcb_after_set_cpuid(vcpu);
+ }
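
The sev.c/svm.c movement above hinges on one well-defined CPUID fact: EBX[5:0] of leaf 0x8000001F reports the page-table bit position of the SEV encryption bit (the C-bit). That bit carries key state rather than address bits, so it must be cleared from the reserved-GPA mask. A sketch of the computation:

/* EBX[5:0] of CPUID 0x8000001F: PTE bit position (0..63) of the
 * SEV C-bit. */
static u64 sev_cbit_mask(u32 ebx)
{
	return 1ULL << (ebx & 0x3f);
}

/* vcpu->arch.reserved_gpa_bits &= ~sev_cbit_mask(best->ebx); */
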
+diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
+index 8239c8de45acf..a96d80465b83b 100644
+--- a/arch/x86/kvm/svm/svm.h
++++ b/arch/x86/kvm/svm/svm.h
+@@ -733,6 +733,7 @@ void __init sev_hardware_setup(void);
+ void sev_hardware_unsetup(void);
+ int sev_cpu_init(struct svm_cpu_data *sd);
+ void sev_init_vmcb(struct vcpu_svm *svm);
++void sev_vcpu_after_set_cpuid(struct vcpu_svm *svm);
+ void sev_free_vcpu(struct kvm_vcpu *vcpu);
+ int sev_handle_vmgexit(struct kvm_vcpu *vcpu);
+ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index f2fb67a9dc050..bc6f0fea48b43 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -41,7 +41,7 @@
+ #include <asm/idtentry.h>
+ #include <asm/io.h>
+ #include <asm/irq_remapping.h>
+-#include <asm/kexec.h>
++#include <asm/reboot.h>
+ #include <asm/perf_event.h>
+ #include <asm/mmu_context.h>
+ #include <asm/mshyperv.h>
+@@ -754,7 +754,6 @@ static int vmx_set_guest_uret_msr(struct vcpu_vmx *vmx,
+ 	return ret;
+ }
+ 
+-#ifdef CONFIG_KEXEC_CORE
+ static void crash_vmclear_local_loaded_vmcss(void)
+ {
+ 	int cpu = raw_smp_processor_id();
+@@ -764,7 +763,6 @@ static void crash_vmclear_local_loaded_vmcss(void)
+ 			    loaded_vmcss_on_cpu_link)
+ 		vmcs_clear(v->vmcs);
+ }
+-#endif /* CONFIG_KEXEC_CORE */
+ 
+ static void __loaded_vmcs_clear(void *arg)
+ {
+@@ -8623,10 +8621,9 @@ static void __vmx_exit(void)
+ {
+ 	allow_smaller_maxphyaddr = false;
+ 
+-#ifdef CONFIG_KEXEC_CORE
+ 	RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
+ 	synchronize_rcu();
+-#endif
++
+ 	vmx_cleanup_l1d_flush();
+ }
+ 
+@@ -8675,10 +8672,9 @@ static int __init vmx_init(void)
+ 		pi_init_cpu(cpu);
+ 	}
+ 
+-#ifdef CONFIG_KEXEC_CORE
+ 	rcu_assign_pointer(crash_vmclear_loaded_vmcss,
+ 			   crash_vmclear_local_loaded_vmcss);
+-#endif
++
+ 	vmx_check_vmcs12_offsets();
+ 
+ 	/*
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index c381770bcbf13..e24bbc8d1fc19 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -12302,9 +12302,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+ 	if (ret)
+ 		goto out;
+ 
+-	ret = kvm_mmu_init_vm(kvm);
+-	if (ret)
+-		goto out_page_track;
++	kvm_mmu_init_vm(kvm);
+ 
+ 	ret = static_call(kvm_x86_vm_init)(kvm);
+ 	if (ret)
+@@ -12349,7 +12347,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+ 
+ out_uninit_mmu:
+ 	kvm_mmu_uninit_vm(kvm);
+-out_page_track:
+ 	kvm_page_track_cleanup(kvm);
+ out:
+ 	return ret;
+diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
+index 8f95fb267caa7..76697df8dfd5b 100644
+--- a/arch/x86/lib/memcpy_64.S
++++ b/arch/x86/lib/memcpy_64.S
+@@ -40,7 +40,7 @@ SYM_TYPED_FUNC_START(__memcpy)
+ SYM_FUNC_END(__memcpy)
+ EXPORT_SYMBOL(__memcpy)
+ 
+-SYM_FUNC_ALIAS(memcpy, __memcpy)
++SYM_FUNC_ALIAS_MEMFUNC(memcpy, __memcpy)
+ EXPORT_SYMBOL(memcpy)
+ 
+ SYM_FUNC_START_LOCAL(memcpy_orig)
+diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
+index 0559b206fb110..ccdf3a597045e 100644
+--- a/arch/x86/lib/memmove_64.S
++++ b/arch/x86/lib/memmove_64.S
+@@ -212,5 +212,5 @@ SYM_FUNC_START(__memmove)
+ SYM_FUNC_END(__memmove)
+ EXPORT_SYMBOL(__memmove)
+ 
+-SYM_FUNC_ALIAS(memmove, __memmove)
++SYM_FUNC_ALIAS_MEMFUNC(memmove, __memmove)
+ EXPORT_SYMBOL(memmove)
+diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
+index 7c59a704c4584..3d818b849ec64 100644
+--- a/arch/x86/lib/memset_64.S
++++ b/arch/x86/lib/memset_64.S
+@@ -40,7 +40,7 @@ SYM_FUNC_START(__memset)
+ SYM_FUNC_END(__memset)
+ EXPORT_SYMBOL(__memset)
+ 
+-SYM_FUNC_ALIAS(memset, __memset)
++SYM_FUNC_ALIAS_MEMFUNC(memset, __memset)
+ EXPORT_SYMBOL(memset)
+ 
+ SYM_FUNC_START_LOCAL(memset_orig)
+diff --git a/arch/xtensa/boot/Makefile b/arch/xtensa/boot/Makefile
+index a65b7a9ebff28..d8b0fadf429a9 100644
+--- a/arch/xtensa/boot/Makefile
++++ b/arch/xtensa/boot/Makefile
+@@ -9,8 +9,7 @@
+ 
+ 
+ # KBUILD_CFLAGS used when building rest of boot (takes effect recursively)
+-KBUILD_CFLAGS	+= -fno-builtin -Iarch/$(ARCH)/boot/include
+-HOSTFLAGS	+= -Iarch/$(ARCH)/boot/include
++KBUILD_CFLAGS	+= -fno-builtin
+ 
+ subdir-y	:= lib
+ targets		+= vmlinux.bin vmlinux.bin.gz
+diff --git a/arch/xtensa/boot/lib/zmem.c b/arch/xtensa/boot/lib/zmem.c
+index e3ecd743c5153..b89189355122a 100644
+--- a/arch/xtensa/boot/lib/zmem.c
++++ b/arch/xtensa/boot/lib/zmem.c
+@@ -4,13 +4,14 @@
+ /* bits taken from ppc */
+ 
+ extern void *avail_ram, *end_avail;
++void gunzip(void *dst, int dstlen, unsigned char *src, int *lenp);
+ 
+-void exit (void)
++static void exit(void)
+ {
+   for (;;);
+ }
+ 
+-void *zalloc(unsigned size)
++static void *zalloc(unsigned int size)
+ {
+         void *p = avail_ram;
+ 
+diff --git a/arch/xtensa/include/asm/core.h b/arch/xtensa/include/asm/core.h
+index 3f5ffae89b580..6f02f6f21890f 100644
+--- a/arch/xtensa/include/asm/core.h
++++ b/arch/xtensa/include/asm/core.h
+@@ -6,6 +6,10 @@
+ 
+ #include <variant/core.h>
+ 
++#ifndef XCHAL_HAVE_DIV32
++#define XCHAL_HAVE_DIV32 0
++#endif
++
+ #ifndef XCHAL_HAVE_EXCLUSIVE
+ #define XCHAL_HAVE_EXCLUSIVE 0
+ #endif
+diff --git a/arch/xtensa/lib/umulsidi3.S b/arch/xtensa/lib/umulsidi3.S
+index 8c7a94a0c5d07..5da501b578136 100644
+--- a/arch/xtensa/lib/umulsidi3.S
++++ b/arch/xtensa/lib/umulsidi3.S
+@@ -3,7 +3,9 @@
+ #include <asm/asmmacro.h>
+ #include <asm/core.h>
+ 
+-#if !XCHAL_HAVE_MUL16 && !XCHAL_HAVE_MUL32 && !XCHAL_HAVE_MAC16
++#if XCHAL_HAVE_MUL16 || XCHAL_HAVE_MUL32 || XCHAL_HAVE_MAC16
++#define XCHAL_NO_MUL 0
++#else
+ #define XCHAL_NO_MUL 1
+ #endif
+ 
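
Both xtensa hunks above apply the same convention: a core-variant feature macro is forced to an explicit 0-or-1 value instead of being left undefined on variants that predate it, so later #if (or assembler .if) tests see a defined value rather than an undefined symbol. The shape of the idiom, with hypothetical feature names:

#ifndef HAVE_FEATURE		/* variant header may predate the macro */
#define HAVE_FEATURE 0
#endif

#if HAVE_FAST_A || HAVE_FAST_B
#define NO_FALLBACK 0		/* explicitly 0, not merely undefined */
#else
#define NO_FALLBACK 1
#endif
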
+diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c
+index 85c82cd42188a..e89f27f2bb18d 100644
+--- a/arch/xtensa/platforms/iss/network.c
++++ b/arch/xtensa/platforms/iss/network.c
+@@ -201,7 +201,7 @@ static int tuntap_write(struct iss_net_private *lp, struct sk_buff **skb)
+ 	return simc_write(lp->tp.info.tuntap.fd, (*skb)->data, (*skb)->len);
+ }
+ 
+-unsigned short tuntap_protocol(struct sk_buff *skb)
++static unsigned short tuntap_protocol(struct sk_buff *skb)
+ {
+ 	return eth_type_trans(skb, skb->dev);
+ }
+@@ -441,7 +441,7 @@ static int iss_net_change_mtu(struct net_device *dev, int new_mtu)
+ 	return -EINVAL;
+ }
+ 
+-void iss_net_user_timer_expire(struct timer_list *unused)
++static void iss_net_user_timer_expire(struct timer_list *unused)
+ {
+ }
+ 
+diff --git a/crypto/sm2.c b/crypto/sm2.c
+index 285b3cb7c0bc7..5ab120d74c592 100644
+--- a/crypto/sm2.c
++++ b/crypto/sm2.c
+@@ -278,10 +278,14 @@ int sm2_compute_z_digest(struct shash_desc *desc,
+ 	if (!ec)
+ 		return -ENOMEM;
+ 
+-	err = __sm2_set_pub_key(ec, key, keylen);
++	err = sm2_ec_ctx_init(ec);
+ 	if (err)
+ 		goto out_free_ec;
+ 
++	err = __sm2_set_pub_key(ec, key, keylen);
++	if (err)
++		goto out_deinit_ec;
++
+ 	bits_len = SM2_DEFAULT_USERID_LEN * 8;
+ 	entl[0] = bits_len >> 8;
+ 	entl[1] = bits_len & 0xff;
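
The sm2 fix above is an initialize-before-use bug: the EC context must be set up before a public key can be loaded into it, and the added unwind label keeps teardown symmetric with how far setup got. The general shape, with hypothetical ctx helpers:

static int use_ctx(const void *key, size_t keylen)
{
	struct ctx *ctx;
	int err;

	ctx = alloc_ctx();
	if (!ctx)
		return -ENOMEM;

	err = ctx_init(ctx);		/* must precede set_pub_key() */
	if (err)
		goto out_free;

	err = set_pub_key(ctx, key, keylen);
	if (err)
		goto out_deinit;

	err = do_work(ctx);

out_deinit:
	ctx_deinit(ctx);		/* undoes ctx_init() only */
out_free:
	free_ctx(ctx);
	return err;
}
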
+diff --git a/drivers/accel/ivpu/ivpu_fw.c b/drivers/accel/ivpu/ivpu_fw.c
+index f58951a0d81b1..93c69aaa6218d 100644
+--- a/drivers/accel/ivpu/ivpu_fw.c
++++ b/drivers/accel/ivpu/ivpu_fw.c
+@@ -195,7 +195,8 @@ static int ivpu_fw_mem_init(struct ivpu_device *vdev)
+ 	if (ret)
+ 		return ret;
+ 
+-	fw->mem = ivpu_bo_alloc_internal(vdev, fw->runtime_addr, fw->runtime_size, DRM_IVPU_BO_WC);
++	fw->mem = ivpu_bo_alloc_internal(vdev, fw->runtime_addr, fw->runtime_size,
++					 DRM_IVPU_BO_CACHED | DRM_IVPU_BO_NOSNOOP);
+ 	if (!fw->mem) {
+ 		ivpu_err(vdev, "Failed to allocate firmware runtime memory\n");
+ 		return -ENOMEM;
+@@ -272,7 +273,7 @@ int ivpu_fw_load(struct ivpu_device *vdev)
+ 		memset(start, 0, size);
+ 	}
+ 
+-	wmb(); /* Flush WC buffers after writing fw->mem */
++	clflush_cache_range(fw->mem->kvaddr, fw->mem->base.size);
+ 
+ 	return 0;
+ }
+@@ -374,6 +375,7 @@ void ivpu_fw_boot_params_setup(struct ivpu_device *vdev, struct vpu_boot_params
+ 	if (!ivpu_fw_is_cold_boot(vdev)) {
+ 		boot_params->save_restore_ret_address = 0;
+ 		vdev->pm->is_warmboot = true;
++		clflush_cache_range(vdev->fw->mem->kvaddr, SZ_4K);
+ 		return;
+ 	}
+ 
+@@ -428,7 +430,7 @@ void ivpu_fw_boot_params_setup(struct ivpu_device *vdev, struct vpu_boot_params
+ 	boot_params->punit_telemetry_sram_size = ivpu_hw_reg_telemetry_size_get(vdev);
+ 	boot_params->vpu_telemetry_enable = ivpu_hw_reg_telemetry_enable_get(vdev);
+ 
+-	wmb(); /* Flush WC buffers after writing bootparams */
++	clflush_cache_range(vdev->fw->mem->kvaddr, SZ_4K);
+ 
+ 	ivpu_fw_boot_params_print(vdev, boot_params);
+ }
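
The ivpu_fw.c changes move the firmware buffer from write-combined to CPU-cached memory with device snooping disabled (see the DRM_IVPU_BO_NOSNOOP hunk below), which changes the coherency contract: wmb() only drains WC buffers, whereas cached writes destined for a non-snooping device must be written back explicitly. A sketch of the publish step under that assumption, with a hypothetical helper name:

static void publish_to_device(void *bo_kvaddr, const void *image, size_t size)
{
	memcpy(bo_kvaddr, image, size);		/* lands in the CPU cache */
	clflush_cache_range(bo_kvaddr, size);	/* write the lines back before
						 * the non-snooping device reads */
}
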
+diff --git a/drivers/accel/ivpu/ivpu_gem.h b/drivers/accel/ivpu/ivpu_gem.h
+index 6b0ceda5f2537..f4130586ff1b2 100644
+--- a/drivers/accel/ivpu/ivpu_gem.h
++++ b/drivers/accel/ivpu/ivpu_gem.h
+@@ -8,6 +8,8 @@
+ #include <drm/drm_gem.h>
+ #include <drm/drm_mm.h>
+ 
++#define DRM_IVPU_BO_NOSNOOP       0x10000000
++
+ struct dma_buf;
+ struct ivpu_bo_ops;
+ struct ivpu_file_priv;
+@@ -83,6 +85,9 @@ static inline u32 ivpu_bo_cache_mode(struct ivpu_bo *bo)
+ 
+ static inline bool ivpu_bo_is_snooped(struct ivpu_bo *bo)
+ {
++	if (bo->flags & DRM_IVPU_BO_NOSNOOP)
++		return false;
++
+ 	return ivpu_bo_cache_mode(bo) == DRM_IVPU_BO_CACHED;
+ }
+ 
+diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c
+index fa0af59e39ab6..295c0d7b50398 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.c
++++ b/drivers/accel/ivpu/ivpu_ipc.c
+@@ -209,10 +209,10 @@ int ivpu_ipc_receive(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
+ 	struct ivpu_ipc_rx_msg *rx_msg;
+ 	int wait_ret, ret = 0;
+ 
+-	wait_ret = wait_event_interruptible_timeout(cons->rx_msg_wq,
+-						    (IS_KTHREAD() && kthread_should_stop()) ||
+-						    !list_empty(&cons->rx_msg_list),
+-						    msecs_to_jiffies(timeout_ms));
++	wait_ret = wait_event_timeout(cons->rx_msg_wq,
++				      (IS_KTHREAD() && kthread_should_stop()) ||
++				      !list_empty(&cons->rx_msg_list),
++				      msecs_to_jiffies(timeout_ms));
+ 
+ 	if (IS_KTHREAD() && kthread_should_stop())
+ 		return -EINTR;
+@@ -220,9 +220,6 @@ int ivpu_ipc_receive(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
+ 	if (wait_ret == 0)
+ 		return -ETIMEDOUT;
+ 
+-	if (wait_ret < 0)
+-		return -ERESTARTSYS;
+-
+ 	spin_lock_irq(&cons->rx_msg_lock);
+ 	rx_msg = list_first_entry_or_null(&cons->rx_msg_list, struct ivpu_ipc_rx_msg, link);
+ 	if (!rx_msg) {
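
The ivpu_ipc.c hunks switch from an interruptible to an uninterruptible timed wait, which also shrinks the return-value handling: wait_event_timeout() returns 0 on timeout or the remaining jiffies (>= 1) on success, and can never return a negative signal indication, so the -ERESTARTSYS branch removed above became dead code. Sketch (condition_is_true() is a stand-in):

static int receive_with_timeout(wait_queue_head_t *wq, unsigned int timeout_ms)
{
	long left = wait_event_timeout(*wq, condition_is_true(),
				       msecs_to_jiffies(timeout_ms));

	if (left == 0)
		return -ETIMEDOUT;
	/* left >= 1: the condition became true with @left jiffies to spare */
	return 0;
}
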
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 07204d4829684..305f590c54a8c 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -855,7 +855,7 @@ static size_t sizeof_idt(struct acpi_nfit_interleave *idt)
+ {
+ 	if (idt->header.length < sizeof(*idt))
+ 		return 0;
+-	return sizeof(*idt) + sizeof(u32) * (idt->line_count - 1);
++	return sizeof(*idt) + sizeof(u32) * idt->line_count;
+ }
+ 
+ static bool add_idt(struct acpi_nfit_desc *acpi_desc,
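
The sizeof_idt() fix above is the usual casualty of converting a trailing one-element array into a C99 flexible array member (which, as this hunk assumes, the ACPICA headers did for line_offset): sizeof(*idt) no longer includes a first element, so the historical "- 1" now under-counts by one u32. The overflow-safe way to spell the corrected computation:

#include <linux/overflow.h>

static size_t sizeof_idt_like(struct acpi_nfit_interleave *idt)
{
	/* sizeof(*idt) + sizeof(u32) * idt->line_count, with the
	 * multiply/add checked for overflow */
	return struct_size(idt, line_offset, idt->line_count);
}
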
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 79d02eb4e4797..76bf185a73c65 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -5204,17 +5204,19 @@ static void ata_port_request_pm(struct ata_port *ap, pm_message_t mesg,
+ 	struct ata_link *link;
+ 	unsigned long flags;
+ 
+-	/* Previous resume operation might still be in
+-	 * progress.  Wait for PM_PENDING to clear.
++	spin_lock_irqsave(ap->lock, flags);
++
++	/*
++	 * A previous PM operation might still be in progress. Wait for
++	 * ATA_PFLAG_PM_PENDING to clear.
+ 	 */
+ 	if (ap->pflags & ATA_PFLAG_PM_PENDING) {
++		spin_unlock_irqrestore(ap->lock, flags);
+ 		ata_port_wait_eh(ap);
+-		WARN_ON(ap->pflags & ATA_PFLAG_PM_PENDING);
++		spin_lock_irqsave(ap->lock, flags);
+ 	}
+ 
+-	/* request PM ops to EH */
+-	spin_lock_irqsave(ap->lock, flags);
+-
++	/* Request PM operation to EH */
+ 	ap->pm_mesg = mesg;
+ 	ap->pflags |= ATA_PFLAG_PM_PENDING;
+ 	ata_for_each_link(link, ap, HOST_FIRST) {
+@@ -5226,10 +5228,8 @@ static void ata_port_request_pm(struct ata_port *ap, pm_message_t mesg,
+ 
+ 	spin_unlock_irqrestore(ap->lock, flags);
+ 
+-	if (!async) {
++	if (!async)
+ 		ata_port_wait_eh(ap);
+-		WARN_ON(ap->pflags & ATA_PFLAG_PM_PENDING);
+-	}
+ }
+ 
+ /*
+@@ -5396,7 +5396,7 @@ EXPORT_SYMBOL_GPL(ata_host_resume);
+ #endif
+ 
+ const struct device_type ata_port_type = {
+-	.name = "ata_port",
++	.name = ATA_PORT_TYPE_NAME,
+ #ifdef CONFIG_PM
+ 	.pm = &ata_port_pm_ops,
+ #endif
+@@ -6130,11 +6130,30 @@ static void ata_port_detach(struct ata_port *ap)
+ 	if (!ap->ops->error_handler)
+ 		goto skip_eh;
+ 
+-	/* tell EH we're leaving & flush EH */
++	/* Wait for any ongoing EH */
++	ata_port_wait_eh(ap);
++
++	mutex_lock(&ap->scsi_scan_mutex);
+ 	spin_lock_irqsave(ap->lock, flags);
++
++	/* Remove scsi devices */
++	ata_for_each_link(link, ap, HOST_FIRST) {
++		ata_for_each_dev(dev, link, ALL) {
++			if (dev->sdev) {
++				spin_unlock_irqrestore(ap->lock, flags);
++				scsi_remove_device(dev->sdev);
++				spin_lock_irqsave(ap->lock, flags);
++				dev->sdev = NULL;
++			}
++		}
++	}
++
++	/* Tell EH to disable all devices */
+ 	ap->pflags |= ATA_PFLAG_UNLOADING;
+ 	ata_port_schedule_eh(ap);
++
+ 	spin_unlock_irqrestore(ap->lock, flags);
++	mutex_unlock(&ap->scsi_scan_mutex);
+ 
+ 	/* wait till EH commits suicide */
+ 	ata_port_wait_eh(ap);
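
The ata_port_request_pm() rework above is the classic test-and-claim-under-lock pattern: ATA_PFLAG_PM_PENDING is now both checked and set with ap->lock held, and the lock is dropped around the sleeping wait, since a spinlock must never be held across a sleep. The generic shape (a while loop is shown for robustness; the hunk itself relies on ata_port_wait_eh() leaving the flag clear):

static void claim_pm_pending(struct ata_port *ap)
{
	unsigned long flags;

	spin_lock_irqsave(ap->lock, flags);
	while (ap->pflags & ATA_PFLAG_PM_PENDING) {
		spin_unlock_irqrestore(ap->lock, flags);
		ata_port_wait_eh(ap);			/* sleeps; lock dropped */
		spin_lock_irqsave(ap->lock, flags);	/* re-check after retake */
	}
	ap->pflags |= ATA_PFLAG_PM_PENDING;		/* claim while locked */
	spin_unlock_irqrestore(ap->lock, flags);
}
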
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 35e03679b0bfe..960ef5c6f2c10 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -2822,23 +2822,13 @@ int ata_eh_reset(struct ata_link *link, int classify,
+ 		}
+ 	}
+ 
+-	/*
+-	 * Some controllers can't be frozen very well and may set spurious
+-	 * error conditions during reset.  Clear accumulated error
+-	 * information and re-thaw the port if frozen.  As reset is the
+-	 * final recovery action and we cross check link onlineness against
+-	 * device classification later, no hotplug event is lost by this.
+-	 */
++	/* clear cached SError */
+ 	spin_lock_irqsave(link->ap->lock, flags);
+-	memset(&link->eh_info, 0, sizeof(link->eh_info));
++	link->eh_info.serror = 0;
+ 	if (slave)
+-		memset(&slave->eh_info, 0, sizeof(link->eh_info));
+-	ap->pflags &= ~ATA_PFLAG_EH_PENDING;
++		slave->eh_info.serror = 0;
+ 	spin_unlock_irqrestore(link->ap->lock, flags);
+ 
+-	if (ata_port_is_frozen(ap))
+-		ata_eh_thaw_port(ap);
+-
+ 	/*
+ 	 * Make sure onlineness and classification result correspond.
+ 	 * Hotplug could have happened during reset and some
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index c6ece32de8e31..702812285d8f0 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -1106,7 +1106,8 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev)
+ 		 * will be woken up by ata_port_pm_resume() with a port reset
+ 		 * and device revalidation.
+ 		 */
+-		sdev->manage_start_stop = 1;
++		sdev->manage_system_start_stop = true;
++		sdev->manage_runtime_start_stop = true;
+ 		sdev->no_start_on_resume = 1;
+ 	}
+ 
+@@ -1139,6 +1140,42 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev)
+ 	return 0;
+ }
+ 
++/**
++ *	ata_scsi_slave_alloc - Early setup of SCSI device
++ *	@sdev: SCSI device to examine
++ *
++ *	This is called from scsi_alloc_sdev() when the scsi device
++ *	associated with an ATA device is scanned on a port.
++ *
++ *	LOCKING:
++ *	Defined by SCSI layer.  We don't really care.
++ */
++
++int ata_scsi_slave_alloc(struct scsi_device *sdev)
++{
++	struct ata_port *ap = ata_shost_to_port(sdev->host);
++	struct device_link *link;
++
++	ata_scsi_sdev_config(sdev);
++
++	/*
++	 * Create a link from the ata_port device to the scsi device to ensure
++	 * that PM does suspend/resume in the correct order: the scsi device is
++	 * the consumer (child) and the ata port the supplier (parent).
++	 */
++	link = device_link_add(&sdev->sdev_gendev, &ap->tdev,
++			       DL_FLAG_STATELESS |
++			       DL_FLAG_PM_RUNTIME | DL_FLAG_RPM_ACTIVE);
++	if (!link) {
++		ata_port_err(ap, "Failed to create link to scsi device %s\n",
++			     dev_name(&sdev->sdev_gendev));
++		return -ENODEV;
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(ata_scsi_slave_alloc);
++
+ /**
+  *	ata_scsi_slave_config - Set SCSI device attributes
+  *	@sdev: SCSI device to examine
+@@ -1155,14 +1192,11 @@ int ata_scsi_slave_config(struct scsi_device *sdev)
+ {
+ 	struct ata_port *ap = ata_shost_to_port(sdev->host);
+ 	struct ata_device *dev = __ata_scsi_find_dev(ap, sdev);
+-	int rc = 0;
+-
+-	ata_scsi_sdev_config(sdev);
+ 
+ 	if (dev)
+-		rc = ata_scsi_dev_config(sdev, dev);
++		return ata_scsi_dev_config(sdev, dev);
+ 
+-	return rc;
++	return 0;
+ }
+ EXPORT_SYMBOL_GPL(ata_scsi_slave_config);
+ 
+@@ -1189,6 +1223,8 @@ void ata_scsi_slave_destroy(struct scsi_device *sdev)
+ 	if (!ap->ops->error_handler)
+ 		return;
+ 
++	device_link_remove(&sdev->sdev_gendev, &ap->tdev);
++
+ 	spin_lock_irqsave(ap->lock, flags);
+ 	dev = __ata_scsi_find_dev(ap, sdev);
+ 	if (dev && dev->sdev) {
+@@ -1892,6 +1928,9 @@ static unsigned int ata_scsiop_inq_std(struct ata_scsi_args *args, u8 *rbuf)
+ 		hdr[2] = 0x7; /* claim SPC-5 version compatibility */
+ 	}
+ 
++	if (args->dev->flags & ATA_DFLAG_CDL)
++		hdr[2] = 0xd; /* claim SPC-6 version compatibility */
++
+ 	memcpy(rbuf, hdr, sizeof(hdr));
+ 	memcpy(&rbuf[8], "ATA     ", 8);
+ 	ata_id_string(args->id, &rbuf[16], ATA_ID_PROD, 16);
+@@ -4448,7 +4487,7 @@ void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd)
+ 		break;
+ 
+ 	case MAINTENANCE_IN:
+-		if (scsicmd[1] == MI_REPORT_SUPPORTED_OPERATION_CODES)
++		if ((scsicmd[1] & 0x1f) == MI_REPORT_SUPPORTED_OPERATION_CODES)
+ 			ata_scsi_rbuf_fill(&args, ata_scsiop_maint_in);
+ 		else
+ 			ata_scsi_set_invalid_field(dev, cmd, 1, 0xff);
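
The MAINTENANCE IN tweak above matters because, per SPC, only bits 4:0 of CDB byte 1 carry the SERVICE ACTION; bits 7:5 are reserved. Comparing the whole byte rejects otherwise-valid CDBs that happen to set reserved bits. Sketch, with hypothetical handlers:

static void dispatch_maintenance_in(const u8 *cdb)
{
	u8 service_action = cdb[1] & 0x1f;	/* mask off reserved bits 7:5 */

	if (service_action == MI_REPORT_SUPPORTED_OPERATION_CODES)
		handle_rsoc(cdb);
	else
		reject_invalid_field(cdb);
}
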
+diff --git a/drivers/ata/libata-transport.c b/drivers/ata/libata-transport.c
+index e4fb9d1b9b398..3e49a877500e1 100644
+--- a/drivers/ata/libata-transport.c
++++ b/drivers/ata/libata-transport.c
+@@ -266,6 +266,10 @@ void ata_tport_delete(struct ata_port *ap)
+ 	put_device(dev);
+ }
+ 
++static const struct device_type ata_port_sas_type = {
++	.name = ATA_PORT_TYPE_NAME,
++};
++
+ /** ata_tport_add - initialize a transport ATA port structure
+  *
+  * @parent:	parent device
+@@ -283,7 +287,10 @@ int ata_tport_add(struct device *parent,
+ 	struct device *dev = &ap->tdev;
+ 
+ 	device_initialize(dev);
+-	dev->type = &ata_port_type;
++	if (ap->flags & ATA_FLAG_SAS_HOST)
++		dev->type = &ata_port_sas_type;
++	else
++		dev->type = &ata_port_type;
+ 
+ 	dev->parent = parent;
+ 	ata_host_get(ap->host);
+diff --git a/drivers/ata/libata.h b/drivers/ata/libata.h
+index cf993885d2b25..76d0a5937b66a 100644
+--- a/drivers/ata/libata.h
++++ b/drivers/ata/libata.h
+@@ -30,6 +30,8 @@ enum {
+ 	ATA_DNXFER_QUIET	= (1 << 31),
+ };
+ 
++#define ATA_PORT_TYPE_NAME	"ata_port"
++
+ extern atomic_t ata_print_id;
+ extern int atapi_passthru16;
+ extern int libata_fua;
+diff --git a/drivers/ata/sata_mv.c b/drivers/ata/sata_mv.c
+index d404e631d1527..68e660e4cc410 100644
+--- a/drivers/ata/sata_mv.c
++++ b/drivers/ata/sata_mv.c
+@@ -1255,8 +1255,8 @@ static void mv_dump_mem(struct device *dev, void __iomem *start, unsigned bytes)
+ 
+ 	for (b = 0; b < bytes; ) {
+ 		for (w = 0, o = 0; b < bytes && w < 4; w++) {
+-			o += snprintf(linebuf + o, sizeof(linebuf) - o,
+-				      "%08x ", readl(start + b));
++			o += scnprintf(linebuf + o, sizeof(linebuf) - o,
++				       "%08x ", readl(start + b));
+ 			b += sizeof(u32);
+ 		}
+ 		dev_dbg(dev, "%s: %p: %s\n",
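
The sata_mv change is the standard snprintf-accumulation fix: snprintf() returns the length that would have been written had the buffer been large enough, so summing its return value can push the offset past the buffer on a later iteration; scnprintf() returns what was actually stored and keeps the running offset in bounds. Sketch:

static void dump_words(struct device *dev, const u32 *val, int n)
{
	char buf[64];
	size_t off = 0;
	int i;

	for (i = 0; i < n; i++)
		off += scnprintf(buf + off, sizeof(buf) - off,
				 "%08x ", val[i]);
	/* off can reach sizeof(buf) - 1 but never exceed it */
	dev_dbg(dev, "%s\n", buf);
}
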
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index 2328cc05be36e..58d3c4e647d78 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -632,9 +632,8 @@ void rbd_warn(struct rbd_device *rbd_dev, const char *fmt, ...)
+ static void rbd_dev_remove_parent(struct rbd_device *rbd_dev);
+ 
+ static int rbd_dev_refresh(struct rbd_device *rbd_dev);
+-static int rbd_dev_v2_header_onetime(struct rbd_device *rbd_dev);
+-static int rbd_dev_header_info(struct rbd_device *rbd_dev);
+-static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev);
++static int rbd_dev_v2_header_onetime(struct rbd_device *rbd_dev,
++				     struct rbd_image_header *header);
+ static const char *rbd_dev_v2_snap_name(struct rbd_device *rbd_dev,
+ 					u64 snap_id);
+ static int _rbd_dev_v2_snap_size(struct rbd_device *rbd_dev, u64 snap_id,
+@@ -995,15 +994,24 @@ static void rbd_init_layout(struct rbd_device *rbd_dev)
+ 	RCU_INIT_POINTER(rbd_dev->layout.pool_ns, NULL);
+ }
+ 
++static void rbd_image_header_cleanup(struct rbd_image_header *header)
++{
++	kfree(header->object_prefix);
++	ceph_put_snap_context(header->snapc);
++	kfree(header->snap_sizes);
++	kfree(header->snap_names);
++
++	memset(header, 0, sizeof(*header));
++}
++
+ /*
+  * Fill an rbd image header with information from the given format 1
+  * on-disk header.
+  */
+-static int rbd_header_from_disk(struct rbd_device *rbd_dev,
+-				 struct rbd_image_header_ondisk *ondisk)
++static int rbd_header_from_disk(struct rbd_image_header *header,
++				struct rbd_image_header_ondisk *ondisk,
++				bool first_time)
+ {
+-	struct rbd_image_header *header = &rbd_dev->header;
+-	bool first_time = header->object_prefix == NULL;
+ 	struct ceph_snap_context *snapc;
+ 	char *object_prefix = NULL;
+ 	char *snap_names = NULL;
+@@ -1070,11 +1078,6 @@ static int rbd_header_from_disk(struct rbd_device *rbd_dev,
+ 	if (first_time) {
+ 		header->object_prefix = object_prefix;
+ 		header->obj_order = ondisk->options.order;
+-		rbd_init_layout(rbd_dev);
+-	} else {
+-		ceph_put_snap_context(header->snapc);
+-		kfree(header->snap_names);
+-		kfree(header->snap_sizes);
+ 	}
+ 
+ 	/* The remaining fields always get updated (when we refresh) */
+@@ -4859,7 +4862,9 @@ out_req:
+  * return, the rbd_dev->header field will contain up-to-date
+  * information about the image.
+  */
+-static int rbd_dev_v1_header_info(struct rbd_device *rbd_dev)
++static int rbd_dev_v1_header_info(struct rbd_device *rbd_dev,
++				  struct rbd_image_header *header,
++				  bool first_time)
+ {
+ 	struct rbd_image_header_ondisk *ondisk = NULL;
+ 	u32 snap_count = 0;
+@@ -4907,7 +4912,7 @@ static int rbd_dev_v1_header_info(struct rbd_device *rbd_dev)
+ 		snap_count = le32_to_cpu(ondisk->snap_count);
+ 	} while (snap_count != want_count);
+ 
+-	ret = rbd_header_from_disk(rbd_dev, ondisk);
++	ret = rbd_header_from_disk(header, ondisk, first_time);
+ out:
+ 	kfree(ondisk);
+ 
+@@ -4931,39 +4936,6 @@ static void rbd_dev_update_size(struct rbd_device *rbd_dev)
+ 	}
+ }
+ 
+-static int rbd_dev_refresh(struct rbd_device *rbd_dev)
+-{
+-	u64 mapping_size;
+-	int ret;
+-
+-	down_write(&rbd_dev->header_rwsem);
+-	mapping_size = rbd_dev->mapping.size;
+-
+-	ret = rbd_dev_header_info(rbd_dev);
+-	if (ret)
+-		goto out;
+-
+-	/*
+-	 * If there is a parent, see if it has disappeared due to the
+-	 * mapped image getting flattened.
+-	 */
+-	if (rbd_dev->parent) {
+-		ret = rbd_dev_v2_parent_info(rbd_dev);
+-		if (ret)
+-			goto out;
+-	}
+-
+-	rbd_assert(!rbd_is_snap(rbd_dev));
+-	rbd_dev->mapping.size = rbd_dev->header.image_size;
+-
+-out:
+-	up_write(&rbd_dev->header_rwsem);
+-	if (!ret && mapping_size != rbd_dev->mapping.size)
+-		rbd_dev_update_size(rbd_dev);
+-
+-	return ret;
+-}
+-
+ static const struct blk_mq_ops rbd_mq_ops = {
+ 	.queue_rq	= rbd_queue_rq,
+ };
+@@ -5503,17 +5475,12 @@ static int _rbd_dev_v2_snap_size(struct rbd_device *rbd_dev, u64 snap_id,
+ 	return 0;
+ }
+ 
+-static int rbd_dev_v2_image_size(struct rbd_device *rbd_dev)
+-{
+-	return _rbd_dev_v2_snap_size(rbd_dev, CEPH_NOSNAP,
+-					&rbd_dev->header.obj_order,
+-					&rbd_dev->header.image_size);
+-}
+-
+-static int rbd_dev_v2_object_prefix(struct rbd_device *rbd_dev)
++static int rbd_dev_v2_object_prefix(struct rbd_device *rbd_dev,
++				    char **pobject_prefix)
+ {
+ 	size_t size;
+ 	void *reply_buf;
++	char *object_prefix;
+ 	int ret;
+ 	void *p;
+ 
+@@ -5531,16 +5498,16 @@ static int rbd_dev_v2_object_prefix(struct rbd_device *rbd_dev)
+ 		goto out;
+ 
+ 	p = reply_buf;
+-	rbd_dev->header.object_prefix = ceph_extract_encoded_string(&p,
+-						p + ret, NULL, GFP_NOIO);
++	object_prefix = ceph_extract_encoded_string(&p, p + ret, NULL,
++						    GFP_NOIO);
++	if (IS_ERR(object_prefix)) {
++		ret = PTR_ERR(object_prefix);
++		goto out;
++	}
+ 	ret = 0;
+ 
+-	if (IS_ERR(rbd_dev->header.object_prefix)) {
+-		ret = PTR_ERR(rbd_dev->header.object_prefix);
+-		rbd_dev->header.object_prefix = NULL;
+-	} else {
+-		dout("  object_prefix = %s\n", rbd_dev->header.object_prefix);
+-	}
++	*pobject_prefix = object_prefix;
++	dout("  object_prefix = %s\n", object_prefix);
+ out:
+ 	kfree(reply_buf);
+ 
+@@ -5591,13 +5558,6 @@ static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id,
+ 	return 0;
+ }
+ 
+-static int rbd_dev_v2_features(struct rbd_device *rbd_dev)
+-{
+-	return _rbd_dev_v2_snap_features(rbd_dev, CEPH_NOSNAP,
+-					 rbd_is_ro(rbd_dev),
+-					 &rbd_dev->header.features);
+-}
+-
+ /*
+  * These are generic image flags, but since they are used only for
+  * object map, store them in rbd_dev->object_map_flags.
+@@ -5634,6 +5594,14 @@ struct parent_image_info {
+ 	u64		overlap;
+ };
+ 
++static void rbd_parent_info_cleanup(struct parent_image_info *pii)
++{
++	kfree(pii->pool_ns);
++	kfree(pii->image_id);
++
++	memset(pii, 0, sizeof(*pii));
++}
++
+ /*
+  * The caller is responsible for @pii.
+  */
+@@ -5703,6 +5671,9 @@ static int __get_parent_info(struct rbd_device *rbd_dev,
+ 	if (pii->has_overlap)
+ 		ceph_decode_64_safe(&p, end, pii->overlap, e_inval);
+ 
++	dout("%s pool_id %llu pool_ns %s image_id %s snap_id %llu has_overlap %d overlap %llu\n",
++	     __func__, pii->pool_id, pii->pool_ns, pii->image_id, pii->snap_id,
++	     pii->has_overlap, pii->overlap);
+ 	return 0;
+ 
+ e_inval:
+@@ -5741,14 +5712,17 @@ static int __get_parent_info_legacy(struct rbd_device *rbd_dev,
+ 	pii->has_overlap = true;
+ 	ceph_decode_64_safe(&p, end, pii->overlap, e_inval);
+ 
++	dout("%s pool_id %llu pool_ns %s image_id %s snap_id %llu has_overlap %d overlap %llu\n",
++	     __func__, pii->pool_id, pii->pool_ns, pii->image_id, pii->snap_id,
++	     pii->has_overlap, pii->overlap);
+ 	return 0;
+ 
+ e_inval:
+ 	return -EINVAL;
+ }
+ 
+-static int get_parent_info(struct rbd_device *rbd_dev,
+-			   struct parent_image_info *pii)
++static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev,
++				  struct parent_image_info *pii)
+ {
+ 	struct page *req_page, *reply_page;
+ 	void *p;
+@@ -5776,7 +5750,7 @@ static int get_parent_info(struct rbd_device *rbd_dev,
+ 	return ret;
+ }
+ 
+-static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev)
++static int rbd_dev_setup_parent(struct rbd_device *rbd_dev)
+ {
+ 	struct rbd_spec *parent_spec;
+ 	struct parent_image_info pii = { 0 };
+@@ -5786,37 +5760,12 @@ static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev)
+ 	if (!parent_spec)
+ 		return -ENOMEM;
+ 
+-	ret = get_parent_info(rbd_dev, &pii);
++	ret = rbd_dev_v2_parent_info(rbd_dev, &pii);
+ 	if (ret)
+ 		goto out_err;
+ 
+-	dout("%s pool_id %llu pool_ns %s image_id %s snap_id %llu has_overlap %d overlap %llu\n",
+-	     __func__, pii.pool_id, pii.pool_ns, pii.image_id, pii.snap_id,
+-	     pii.has_overlap, pii.overlap);
+-
+-	if (pii.pool_id == CEPH_NOPOOL || !pii.has_overlap) {
+-		/*
+-		 * Either the parent never existed, or we have
+-		 * record of it but the image got flattened so it no
+-		 * longer has a parent.  When the parent of a
+-		 * layered image disappears we immediately set the
+-		 * overlap to 0.  The effect of this is that all new
+-		 * requests will be treated as if the image had no
+-		 * parent.
+-		 *
+-		 * If !pii.has_overlap, the parent image spec is not
+-		 * applicable.  It's there to avoid duplication in each
+-		 * snapshot record.
+-		 */
+-		if (rbd_dev->parent_overlap) {
+-			rbd_dev->parent_overlap = 0;
+-			rbd_dev_parent_put(rbd_dev);
+-			pr_info("%s: clone image has been flattened\n",
+-				rbd_dev->disk->disk_name);
+-		}
+-
++	if (pii.pool_id == CEPH_NOPOOL || !pii.has_overlap)
+ 		goto out;	/* No parent?  No problem. */
+-	}
+ 
+ 	/* The ceph file layout needs to fit pool id in 32 bits */
+ 
+@@ -5828,58 +5777,46 @@ static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev)
+ 	}
+ 
+ 	/*
+-	 * The parent won't change (except when the clone is
+-	 * flattened, already handled that).  So we only need to
+-	 * record the parent spec we have not already done so.
++	 * The parent won't change except when the clone is flattened,
++	 * so we only need to record the parent image spec once.
+ 	 */
+-	if (!rbd_dev->parent_spec) {
+-		parent_spec->pool_id = pii.pool_id;
+-		if (pii.pool_ns && *pii.pool_ns) {
+-			parent_spec->pool_ns = pii.pool_ns;
+-			pii.pool_ns = NULL;
+-		}
+-		parent_spec->image_id = pii.image_id;
+-		pii.image_id = NULL;
+-		parent_spec->snap_id = pii.snap_id;
+-
+-		rbd_dev->parent_spec = parent_spec;
+-		parent_spec = NULL;	/* rbd_dev now owns this */
++	parent_spec->pool_id = pii.pool_id;
++	if (pii.pool_ns && *pii.pool_ns) {
++		parent_spec->pool_ns = pii.pool_ns;
++		pii.pool_ns = NULL;
+ 	}
++	parent_spec->image_id = pii.image_id;
++	pii.image_id = NULL;
++	parent_spec->snap_id = pii.snap_id;
++
++	rbd_assert(!rbd_dev->parent_spec);
++	rbd_dev->parent_spec = parent_spec;
++	parent_spec = NULL;	/* rbd_dev now owns this */
+ 
+ 	/*
+-	 * We always update the parent overlap.  If it's zero we issue
+-	 * a warning, as we will proceed as if there was no parent.
++	 * Record the parent overlap.  If it's zero, issue a warning as
++	 * we will proceed as if there is no parent.
+ 	 */
+-	if (!pii.overlap) {
+-		if (parent_spec) {
+-			/* refresh, careful to warn just once */
+-			if (rbd_dev->parent_overlap)
+-				rbd_warn(rbd_dev,
+-				    "clone now standalone (overlap became 0)");
+-		} else {
+-			/* initial probe */
+-			rbd_warn(rbd_dev, "clone is standalone (overlap 0)");
+-		}
+-	}
++	if (!pii.overlap)
++		rbd_warn(rbd_dev, "clone is standalone (overlap 0)");
+ 	rbd_dev->parent_overlap = pii.overlap;
+ 
+ out:
+ 	ret = 0;
+ out_err:
+-	kfree(pii.pool_ns);
+-	kfree(pii.image_id);
++	rbd_parent_info_cleanup(&pii);
+ 	rbd_spec_put(parent_spec);
+ 	return ret;
+ }
+ 
+-static int rbd_dev_v2_striping_info(struct rbd_device *rbd_dev)
++static int rbd_dev_v2_striping_info(struct rbd_device *rbd_dev,
++				    u64 *stripe_unit, u64 *stripe_count)
+ {
+ 	struct {
+ 		__le64 stripe_unit;
+ 		__le64 stripe_count;
+ 	} __attribute__ ((packed)) striping_info_buf = { 0 };
+ 	size_t size = sizeof (striping_info_buf);
+-	void *p;
+ 	int ret;
+ 
+ 	ret = rbd_obj_method_sync(rbd_dev, &rbd_dev->header_oid,
+@@ -5891,27 +5828,33 @@ static int rbd_dev_v2_striping_info(struct rbd_device *rbd_dev)
+ 	if (ret < size)
+ 		return -ERANGE;
+ 
+-	p = &striping_info_buf;
+-	rbd_dev->header.stripe_unit = ceph_decode_64(&p);
+-	rbd_dev->header.stripe_count = ceph_decode_64(&p);
++	*stripe_unit = le64_to_cpu(striping_info_buf.stripe_unit);
++	*stripe_count = le64_to_cpu(striping_info_buf.stripe_count);
++	dout("  stripe_unit = %llu stripe_count = %llu\n", *stripe_unit,
++	     *stripe_count);
++
+ 	return 0;
+ }
+ 
+-static int rbd_dev_v2_data_pool(struct rbd_device *rbd_dev)
++static int rbd_dev_v2_data_pool(struct rbd_device *rbd_dev, s64 *data_pool_id)
+ {
+-	__le64 data_pool_id;
++	__le64 data_pool_buf;
+ 	int ret;
+ 
+ 	ret = rbd_obj_method_sync(rbd_dev, &rbd_dev->header_oid,
+ 				  &rbd_dev->header_oloc, "get_data_pool",
+-				  NULL, 0, &data_pool_id, sizeof(data_pool_id));
++				  NULL, 0, &data_pool_buf,
++				  sizeof(data_pool_buf));
++	dout("%s: rbd_obj_method_sync returned %d\n", __func__, ret);
+ 	if (ret < 0)
+ 		return ret;
+-	if (ret < sizeof(data_pool_id))
++	if (ret < sizeof(data_pool_buf))
+ 		return -EBADMSG;
+ 
+-	rbd_dev->header.data_pool_id = le64_to_cpu(data_pool_id);
+-	WARN_ON(rbd_dev->header.data_pool_id == CEPH_NOPOOL);
++	*data_pool_id = le64_to_cpu(data_pool_buf);
++	dout("  data_pool_id = %lld\n", *data_pool_id);
++	WARN_ON(*data_pool_id == CEPH_NOPOOL);
++
+ 	return 0;
+ }
+ 
+@@ -6103,7 +6046,8 @@ out_err:
+ 	return ret;
+ }
+ 
+-static int rbd_dev_v2_snap_context(struct rbd_device *rbd_dev)
++static int rbd_dev_v2_snap_context(struct rbd_device *rbd_dev,
++				   struct ceph_snap_context **psnapc)
+ {
+ 	size_t size;
+ 	int ret;
+@@ -6164,9 +6108,7 @@ static int rbd_dev_v2_snap_context(struct rbd_device *rbd_dev)
+ 	for (i = 0; i < snap_count; i++)
+ 		snapc->snaps[i] = ceph_decode_64(&p);
+ 
+-	ceph_put_snap_context(rbd_dev->header.snapc);
+-	rbd_dev->header.snapc = snapc;
+-
++	*psnapc = snapc;
+ 	dout("  snap context seq = %llu, snap_count = %u\n",
+ 		(unsigned long long)seq, (unsigned int)snap_count);
+ out:
+@@ -6215,38 +6157,42 @@ out:
+ 	return snap_name;
+ }
+ 
+-static int rbd_dev_v2_header_info(struct rbd_device *rbd_dev)
++static int rbd_dev_v2_header_info(struct rbd_device *rbd_dev,
++				  struct rbd_image_header *header,
++				  bool first_time)
+ {
+-	bool first_time = rbd_dev->header.object_prefix == NULL;
+ 	int ret;
+ 
+-	ret = rbd_dev_v2_image_size(rbd_dev);
++	ret = _rbd_dev_v2_snap_size(rbd_dev, CEPH_NOSNAP,
++				    first_time ? &header->obj_order : NULL,
++				    &header->image_size);
+ 	if (ret)
+ 		return ret;
+ 
+ 	if (first_time) {
+-		ret = rbd_dev_v2_header_onetime(rbd_dev);
++		ret = rbd_dev_v2_header_onetime(rbd_dev, header);
+ 		if (ret)
+ 			return ret;
+ 	}
+ 
+-	ret = rbd_dev_v2_snap_context(rbd_dev);
+-	if (ret && first_time) {
+-		kfree(rbd_dev->header.object_prefix);
+-		rbd_dev->header.object_prefix = NULL;
+-	}
++	ret = rbd_dev_v2_snap_context(rbd_dev, &header->snapc);
++	if (ret)
++		return ret;
+ 
+-	return ret;
++	return 0;
+ }
+ 
+-static int rbd_dev_header_info(struct rbd_device *rbd_dev)
++static int rbd_dev_header_info(struct rbd_device *rbd_dev,
++			       struct rbd_image_header *header,
++			       bool first_time)
+ {
+ 	rbd_assert(rbd_image_format_valid(rbd_dev->image_format));
++	rbd_assert(!header->object_prefix && !header->snapc);
+ 
+ 	if (rbd_dev->image_format == 1)
+-		return rbd_dev_v1_header_info(rbd_dev);
++		return rbd_dev_v1_header_info(rbd_dev, header, first_time);
+ 
+-	return rbd_dev_v2_header_info(rbd_dev);
++	return rbd_dev_v2_header_info(rbd_dev, header, first_time);
+ }
+ 
+ /*
+@@ -6734,60 +6680,49 @@ out:
+  */
+ static void rbd_dev_unprobe(struct rbd_device *rbd_dev)
+ {
+-	struct rbd_image_header	*header;
+-
+ 	rbd_dev_parent_put(rbd_dev);
+ 	rbd_object_map_free(rbd_dev);
+ 	rbd_dev_mapping_clear(rbd_dev);
+ 
+ 	/* Free dynamic fields from the header, then zero it out */
+ 
+-	header = &rbd_dev->header;
+-	ceph_put_snap_context(header->snapc);
+-	kfree(header->snap_sizes);
+-	kfree(header->snap_names);
+-	kfree(header->object_prefix);
+-	memset(header, 0, sizeof (*header));
++	rbd_image_header_cleanup(&rbd_dev->header);
+ }
+ 
+-static int rbd_dev_v2_header_onetime(struct rbd_device *rbd_dev)
++static int rbd_dev_v2_header_onetime(struct rbd_device *rbd_dev,
++				     struct rbd_image_header *header)
+ {
+ 	int ret;
+ 
+-	ret = rbd_dev_v2_object_prefix(rbd_dev);
++	ret = rbd_dev_v2_object_prefix(rbd_dev, &header->object_prefix);
+ 	if (ret)
+-		goto out_err;
++		return ret;
+ 
+ 	/*
+ 	 * Get the and check features for the image.  Currently the
+ 	 * features are assumed to never change.
+ 	 */
+-	ret = rbd_dev_v2_features(rbd_dev);
++	ret = _rbd_dev_v2_snap_features(rbd_dev, CEPH_NOSNAP,
++					rbd_is_ro(rbd_dev), &header->features);
+ 	if (ret)
+-		goto out_err;
++		return ret;
+ 
+ 	/* If the image supports fancy striping, get its parameters */
+ 
+-	if (rbd_dev->header.features & RBD_FEATURE_STRIPINGV2) {
+-		ret = rbd_dev_v2_striping_info(rbd_dev);
+-		if (ret < 0)
+-			goto out_err;
++	if (header->features & RBD_FEATURE_STRIPINGV2) {
++		ret = rbd_dev_v2_striping_info(rbd_dev, &header->stripe_unit,
++					       &header->stripe_count);
++		if (ret)
++			return ret;
+ 	}
+ 
+-	if (rbd_dev->header.features & RBD_FEATURE_DATA_POOL) {
+-		ret = rbd_dev_v2_data_pool(rbd_dev);
++	if (header->features & RBD_FEATURE_DATA_POOL) {
++		ret = rbd_dev_v2_data_pool(rbd_dev, &header->data_pool_id);
+ 		if (ret)
+-			goto out_err;
++			return ret;
+ 	}
+ 
+-	rbd_init_layout(rbd_dev);
+ 	return 0;
+-
+-out_err:
+-	rbd_dev->header.features = 0;
+-	kfree(rbd_dev->header.object_prefix);
+-	rbd_dev->header.object_prefix = NULL;
+-	return ret;
+ }
+ 
+ /*
+@@ -6982,13 +6917,15 @@ static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth)
+ 	if (!depth)
+ 		down_write(&rbd_dev->header_rwsem);
+ 
+-	ret = rbd_dev_header_info(rbd_dev);
++	ret = rbd_dev_header_info(rbd_dev, &rbd_dev->header, true);
+ 	if (ret) {
+ 		if (ret == -ENOENT && !need_watch)
+ 			rbd_print_dne(rbd_dev, false);
+ 		goto err_out_probe;
+ 	}
+ 
++	rbd_init_layout(rbd_dev);
++
+ 	/*
+ 	 * If this image is the one being mapped, we have pool name and
+ 	 * id, image name and id, and snap name - need to fill snap id.
+@@ -7017,7 +6954,7 @@ static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth)
+ 	}
+ 
+ 	if (rbd_dev->header.features & RBD_FEATURE_LAYERING) {
+-		ret = rbd_dev_v2_parent_info(rbd_dev);
++		ret = rbd_dev_setup_parent(rbd_dev);
+ 		if (ret)
+ 			goto err_out_probe;
+ 	}
+@@ -7043,6 +6980,107 @@ err_out_format:
+ 	return ret;
+ }
+ 
++static void rbd_dev_update_header(struct rbd_device *rbd_dev,
++				  struct rbd_image_header *header)
++{
++	rbd_assert(rbd_image_format_valid(rbd_dev->image_format));
++	rbd_assert(rbd_dev->header.object_prefix); /* !first_time */
++
++	if (rbd_dev->header.image_size != header->image_size) {
++		rbd_dev->header.image_size = header->image_size;
++
++		if (!rbd_is_snap(rbd_dev)) {
++			rbd_dev->mapping.size = header->image_size;
++			rbd_dev_update_size(rbd_dev);
++		}
++	}
++
++	ceph_put_snap_context(rbd_dev->header.snapc);
++	rbd_dev->header.snapc = header->snapc;
++	header->snapc = NULL;
++
++	if (rbd_dev->image_format == 1) {
++		kfree(rbd_dev->header.snap_names);
++		rbd_dev->header.snap_names = header->snap_names;
++		header->snap_names = NULL;
++
++		kfree(rbd_dev->header.snap_sizes);
++		rbd_dev->header.snap_sizes = header->snap_sizes;
++		header->snap_sizes = NULL;
++	}
++}
++
++static void rbd_dev_update_parent(struct rbd_device *rbd_dev,
++				  struct parent_image_info *pii)
++{
++	if (pii->pool_id == CEPH_NOPOOL || !pii->has_overlap) {
++		/*
++		 * Either the parent never existed, or we have
++		 * record of it but the image got flattened so it no
++		 * longer has a parent.  When the parent of a
++		 * layered image disappears we immediately set the
++		 * overlap to 0.  The effect of this is that all new
++		 * requests will be treated as if the image had no
++		 * parent.
++		 *
++		 * If !pii.has_overlap, the parent image spec is not
++		 * applicable.  It's there to avoid duplication in each
++		 * snapshot record.
++		 */
++		if (rbd_dev->parent_overlap) {
++			rbd_dev->parent_overlap = 0;
++			rbd_dev_parent_put(rbd_dev);
++			pr_info("%s: clone has been flattened\n",
++				rbd_dev->disk->disk_name);
++		}
++	} else {
++		rbd_assert(rbd_dev->parent_spec);
++
++		/*
++		 * Update the parent overlap.  If it became zero, issue
++		 * a warning as we will proceed as if there is no parent.
++		 */
++		if (!pii->overlap && rbd_dev->parent_overlap)
++			rbd_warn(rbd_dev,
++				 "clone has become standalone (overlap 0)");
++		rbd_dev->parent_overlap = pii->overlap;
++	}
++}
++
++static int rbd_dev_refresh(struct rbd_device *rbd_dev)
++{
++	struct rbd_image_header	header = { 0 };
++	struct parent_image_info pii = { 0 };
++	int ret;
++
++	dout("%s rbd_dev %p\n", __func__, rbd_dev);
++
++	ret = rbd_dev_header_info(rbd_dev, &header, false);
++	if (ret)
++		goto out;
++
++	/*
++	 * If there is a parent, see if it has disappeared due to the
++	 * mapped image getting flattened.
++	 */
++	if (rbd_dev->parent) {
++		ret = rbd_dev_v2_parent_info(rbd_dev, &pii);
++		if (ret)
++			goto out;
++	}
++
++	down_write(&rbd_dev->header_rwsem);
++	rbd_dev_update_header(rbd_dev, &header);
++	if (rbd_dev->parent)
++		rbd_dev_update_parent(rbd_dev, &pii);
++	up_write(&rbd_dev->header_rwsem);
++
++out:
++	rbd_parent_info_cleanup(&pii);
++	rbd_image_header_cleanup(&header);
++	return ret;
++}
++
+ static ssize_t do_rbd_add(const char *buf, size_t count)
+ {
+ 	struct rbd_device *rbd_dev = NULL;
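
The rbd refactor above converges on a fetch-then-swap refresh: every slow OSD round-trip fills a local header (and parent info) with no locks held, and only the handover happens under header_rwsem, with displaced state freed afterwards. A simplified sketch with a hypothetical fetch_header() (the real code moves fields selectively rather than swapping wholesale):

static int refresh_sketch(struct rbd_device *rbd_dev)
{
	struct rbd_image_header new = { 0 };
	int ret;

	ret = fetch_header(rbd_dev, &new);	/* slow I/O, no locks held */
	if (!ret) {
		down_write(&rbd_dev->header_rwsem);
		swap(rbd_dev->header, new);	/* old contents land in @new */
		up_write(&rbd_dev->header_rwsem);
	}
	rbd_image_header_cleanup(&new);		/* frees old or partial state */
	return ret;
}
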
+diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c
+index 9766dbf607f97..27c5bae85adc2 100644
+--- a/drivers/bus/ti-sysc.c
++++ b/drivers/bus/ti-sysc.c
+@@ -38,6 +38,7 @@ enum sysc_soc {
+ 	SOC_2420,
+ 	SOC_2430,
+ 	SOC_3430,
++	SOC_AM35,
+ 	SOC_3630,
+ 	SOC_4430,
+ 	SOC_4460,
+@@ -1096,6 +1097,11 @@ static int sysc_enable_module(struct device *dev)
+ 	if (ddata->cfg.quirks & (SYSC_QUIRK_SWSUP_SIDLE |
+ 				 SYSC_QUIRK_SWSUP_SIDLE_ACT)) {
+ 		best_mode = SYSC_IDLE_NO;
++
++		/* Clear WAKEUP */
++		if (regbits->enwkup_shift >= 0 &&
++		    ddata->cfg.sysc_val & BIT(regbits->enwkup_shift))
++			reg &= ~BIT(regbits->enwkup_shift);
+ 	} else {
+ 		best_mode = fls(ddata->cfg.sidlemodes) - 1;
+ 		if (best_mode > SYSC_IDLE_MASK) {
+@@ -1223,6 +1229,13 @@ set_sidle:
+ 		}
+ 	}
+ 
++	if (ddata->cfg.quirks & SYSC_QUIRK_SWSUP_SIDLE_ACT) {
++		/* Set WAKEUP */
++		if (regbits->enwkup_shift >= 0 &&
++		    ddata->cfg.sysc_val & BIT(regbits->enwkup_shift))
++			reg |= BIT(regbits->enwkup_shift);
++	}
++
+ 	reg &= ~(SYSC_IDLE_MASK << regbits->sidle_shift);
+ 	reg |= best_mode << regbits->sidle_shift;
+ 	if (regbits->autoidle_shift >= 0 &&
+@@ -1517,16 +1530,16 @@ struct sysc_revision_quirk {
+ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
+ 	/* These drivers need to be fixed to not use pm_runtime_irq_safe() */
+ 	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000046, 0xffffffff,
+-		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
++		   SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
+ 	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000052, 0xffffffff,
+-		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
++		   SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
+ 	/* Uarts on omap4 and later */
+ 	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x50411e03, 0xffff00ff,
+-		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
++		   SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
+ 	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x47422e03, 0xffffffff,
+-		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
++		   SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
+ 	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x47424e03, 0xffffffff,
+-		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
++		   SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
+ 
+ 	/* Quirks that need to be set based on the module address */
+ 	SYSC_QUIRK("mcpdm", 0x40132000, 0, 0x10, -ENODEV, 0x50000800, 0xffffffff,
+@@ -1861,7 +1874,7 @@ static void sysc_pre_reset_quirk_dss(struct sysc *ddata)
+ 		dev_warn(ddata->dev, "%s: timed out %08x !+ %08x\n",
+ 			 __func__, val, irq_mask);
+ 
+-	if (sysc_soc->soc == SOC_3430) {
++	if (sysc_soc->soc == SOC_3430 || sysc_soc->soc == SOC_AM35) {
+ 		/* Clear DSS_SDI_CONTROL */
+ 		sysc_write(ddata, 0x44, 0);
+ 
+@@ -2149,8 +2162,7 @@ static int sysc_reset(struct sysc *ddata)
+ 	}
+ 
+ 	if (ddata->cfg.srst_udelay)
+-		usleep_range(ddata->cfg.srst_udelay,
+-			     ddata->cfg.srst_udelay * 2);
++		fsleep(ddata->cfg.srst_udelay);
+ 
+ 	if (ddata->post_reset_quirk)
+ 		ddata->post_reset_quirk(ddata);
+@@ -3024,6 +3036,7 @@ static void ti_sysc_idle(struct work_struct *work)
+ static const struct soc_device_attribute sysc_soc_match[] = {
+ 	SOC_FLAG("OMAP242*", SOC_2420),
+ 	SOC_FLAG("OMAP243*", SOC_2430),
++	SOC_FLAG("AM35*", SOC_AM35),
+ 	SOC_FLAG("OMAP3[45]*", SOC_3430),
+ 	SOC_FLAG("OMAP3[67]*", SOC_3630),
+ 	SOC_FLAG("OMAP443*", SOC_4430),
+@@ -3228,7 +3241,7 @@ static int sysc_check_active_timer(struct sysc *ddata)
+ 	 * can be dropped if we stop supporting old beagleboard revisions
+ 	 * A to B4 at some point.
+ 	 */
+-	if (sysc_soc->soc == SOC_3430)
++	if (sysc_soc->soc == SOC_3430 || sysc_soc->soc == SOC_AM35)
+ 		error = -ENXIO;
+ 	else
+ 		error = -EBUSY;
+diff --git a/drivers/char/agp/parisc-agp.c b/drivers/char/agp/parisc-agp.c
+index 514f9f287a781..c6f181702b9a7 100644
+--- a/drivers/char/agp/parisc-agp.c
++++ b/drivers/char/agp/parisc-agp.c
+@@ -394,8 +394,6 @@ find_quicksilver(struct device *dev, void *data)
+ static int __init
+ parisc_agp_init(void)
+ {
+-	extern struct sba_device *sba_list;
+-
+ 	int err = -1;
+ 	struct parisc_device *sba = NULL, *lba = NULL;
+ 	struct lba_device *lbadev = NULL;
+diff --git a/drivers/clk/clk-si521xx.c b/drivers/clk/clk-si521xx.c
+index 4eaf1b53f06bd..ef4ba467e747b 100644
+--- a/drivers/clk/clk-si521xx.c
++++ b/drivers/clk/clk-si521xx.c
+@@ -96,7 +96,7 @@ static int si521xx_regmap_i2c_write(void *context, unsigned int reg,
+ 				    unsigned int val)
+ {
+ 	struct i2c_client *i2c = context;
+-	const u8 data[3] = { reg, 1, val };
++	const u8 data[2] = { reg, val };
+ 	const int count = ARRAY_SIZE(data);
+ 	int ret;
+ 
+@@ -146,7 +146,7 @@ static int si521xx_regmap_i2c_read(void *context, unsigned int reg,
+ static const struct regmap_config si521xx_regmap_config = {
+ 	.reg_bits = 8,
+ 	.val_bits = 8,
+-	.cache_type = REGCACHE_NONE,
++	.cache_type = REGCACHE_FLAT,
+ 	.max_register = SI521XX_REG_DA,
+ 	.rd_table = &si521xx_readable_table,
+ 	.wr_table = &si521xx_writeable_table,
+@@ -281,9 +281,10 @@ static int si521xx_probe(struct i2c_client *client)
+ {
+ 	const u16 chip_info = (u16)(uintptr_t)device_get_match_data(&client->dev);
+ 	const struct clk_parent_data clk_parent_data = { .index = 0 };
+-	struct si521xx *si;
++	const u8 data[3] = { SI521XX_REG_BC, 1, 1 };
+ 	unsigned char name[6] = "DIFF0";
+ 	struct clk_init_data init = {};
++	struct si521xx *si;
+ 	int i, ret;
+ 
+ 	if (!chip_info)
+@@ -308,7 +309,7 @@ static int si521xx_probe(struct i2c_client *client)
+ 				     "Failed to allocate register map\n");
+ 
+ 	/* Always read back 1 Byte via I2C */
+-	ret = regmap_write(si->regmap, SI521XX_REG_BC, 1);
++	ret = i2c_master_send(client, data, ARRAY_SIZE(data));
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/clk/sprd/ums512-clk.c b/drivers/clk/sprd/ums512-clk.c
+index fc25bdd85e4ea..f43bb10bd5ae2 100644
+--- a/drivers/clk/sprd/ums512-clk.c
++++ b/drivers/clk/sprd/ums512-clk.c
+@@ -800,7 +800,7 @@ static SPRD_MUX_CLK_DATA(uart1_clk, "uart1-clk", uart_parents,
+ 			 0x250, 0, 3, UMS512_MUX_FLAG);
+ 
+ static const struct clk_parent_data thm_parents[] = {
+-	{ .fw_name = "ext-32m" },
++	{ .fw_name = "ext-32k" },
+ 	{ .hw = &clk_250k.hw  },
+ };
+ static SPRD_MUX_CLK_DATA(thm0_clk, "thm0-clk", thm_parents,
+diff --git a/drivers/clk/tegra/clk-bpmp.c b/drivers/clk/tegra/clk-bpmp.c
+index a9f3fb448de62..7bfba0afd7783 100644
+--- a/drivers/clk/tegra/clk-bpmp.c
++++ b/drivers/clk/tegra/clk-bpmp.c
+@@ -159,7 +159,7 @@ static unsigned long tegra_bpmp_clk_recalc_rate(struct clk_hw *hw,
+ 
+ 	err = tegra_bpmp_clk_transfer(clk->bpmp, &msg);
+ 	if (err < 0)
+-		return err;
++		return 0;
+ 
+ 	return response.rate;
+ }
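
The clk-bpmp fix comes down to the .recalc_rate prototype: it returns unsigned long, so "return err" with a negative errno is silently converted into an enormous bogus rate. Returning 0 is the conventional "rate unknown" answer for clk hooks. Sketch, with a hypothetical query helper:

static unsigned long foo_recalc_rate(struct clk_hw *hw,
				     unsigned long parent_rate)
{
	long rate = query_hw_rate(hw);	/* may be -errno */

	return rate < 0 ? 0 : (unsigned long)rate;
}
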
+diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
+index ca60bb8114f22..4df4f614f490e 100644
+--- a/drivers/cxl/core/mbox.c
++++ b/drivers/cxl/core/mbox.c
+@@ -715,24 +715,25 @@ static void cxl_walk_cel(struct cxl_memdev_state *mds, size_t size, u8 *cel)
+ 	for (i = 0; i < cel_entries; i++) {
+ 		u16 opcode = le16_to_cpu(cel_entry[i].opcode);
+ 		struct cxl_mem_command *cmd = cxl_mem_find_command(opcode);
++		int enabled = 0;
+ 
+-		if (!cmd && (!cxl_is_poison_command(opcode) ||
+-			     !cxl_is_security_command(opcode))) {
+-			dev_dbg(dev,
+-				"Opcode 0x%04x unsupported by driver\n", opcode);
+-			continue;
+-		}
+-
+-		if (cmd)
++		if (cmd) {
+ 			set_bit(cmd->info.id, mds->enabled_cmds);
++			enabled++;
++		}
+ 
+-		if (cxl_is_poison_command(opcode))
++		if (cxl_is_poison_command(opcode)) {
+ 			cxl_set_poison_cmd_enabled(&mds->poison, opcode);
++			enabled++;
++		}
+ 
+-		if (cxl_is_security_command(opcode))
++		if (cxl_is_security_command(opcode)) {
+ 			cxl_set_security_cmd_enabled(&mds->security, opcode);
++			enabled++;
++		}
+ 
+-		dev_dbg(dev, "Opcode 0x%04x enabled\n", opcode);
++		dev_dbg(dev, "Opcode 0x%04x %s\n", opcode,
++			enabled ? "enabled" : "unsupported by driver");
+ 	}
+ }
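
The cxl_walk_cel() rework above fixes a boolean tautology. No opcode is both a poison and a security command, so the old guard "!cmd && (!is_poison || !is_security)" reduced to "!cmd" (by De Morgan, the disjunction is false only when both predicates are true), wrongly skipping poison and security opcodes that lack a generic command entry. Counting the consumers that accepted the opcode sidesteps the predicate entirely. The old test, isolated:

/* With is_poison and is_security mutually exclusive, the second
 * operand is always true, so this is just !has_cmd. */
static bool old_skip_test(bool has_cmd, bool is_poison, bool is_security)
{
	return !has_cmd && (!is_poison || !is_security);
}
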
+ 
+diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
+index 724be8448eb4a..7ca01a834e188 100644
+--- a/drivers/cxl/core/port.c
++++ b/drivers/cxl/core/port.c
+@@ -1,5 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /* Copyright(c) 2020 Intel Corporation. All rights reserved. */
++#include <linux/platform_device.h>
+ #include <linux/memregion.h>
+ #include <linux/workqueue.h>
+ #include <linux/debugfs.h>
+@@ -706,16 +707,20 @@ static int cxl_setup_comp_regs(struct device *dev, struct cxl_register_map *map,
+ 	return cxl_setup_regs(map);
+ }
+ 
+-static inline int cxl_port_setup_regs(struct cxl_port *port,
+-				      resource_size_t component_reg_phys)
++static int cxl_port_setup_regs(struct cxl_port *port,
++			resource_size_t component_reg_phys)
+ {
++	if (dev_is_platform(port->uport_dev))
++		return 0;
+ 	return cxl_setup_comp_regs(&port->dev, &port->comp_map,
+ 				   component_reg_phys);
+ }
+ 
+-static inline int cxl_dport_setup_regs(struct cxl_dport *dport,
+-				       resource_size_t component_reg_phys)
++static int cxl_dport_setup_regs(struct cxl_dport *dport,
++				resource_size_t component_reg_phys)
+ {
++	if (dev_is_platform(dport->dport_dev))
++		return 0;
+ 	return cxl_setup_comp_regs(dport->dport_dev, &dport->comp_map,
+ 				   component_reg_phys);
+ }
+diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
+index e115ba382e044..b4c6a749406f1 100644
+--- a/drivers/cxl/core/region.c
++++ b/drivers/cxl/core/region.c
+@@ -717,13 +717,35 @@ static int match_free_decoder(struct device *dev, void *data)
+ 	return 0;
+ }
+ 
++static int match_auto_decoder(struct device *dev, void *data)
++{
++	struct cxl_region_params *p = data;
++	struct cxl_decoder *cxld;
++	struct range *r;
++
++	if (!is_switch_decoder(dev))
++		return 0;
++
++	cxld = to_cxl_decoder(dev);
++	r = &cxld->hpa_range;
++
++	if (p->res && p->res->start == r->start && p->res->end == r->end)
++		return 1;
++
++	return 0;
++}
++
+ static struct cxl_decoder *cxl_region_find_decoder(struct cxl_port *port,
+ 						   struct cxl_region *cxlr)
+ {
+ 	struct device *dev;
+ 	int id = 0;
+ 
+-	dev = device_find_child(&port->dev, &id, match_free_decoder);
++	if (test_bit(CXL_REGION_F_AUTO, &cxlr->flags))
++		dev = device_find_child(&port->dev, &cxlr->params,
++					match_auto_decoder);
++	else
++		dev = device_find_child(&port->dev, &id, match_free_decoder);
+ 	if (!dev)
+ 		return NULL;
+ 	/*
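
match_auto_decoder() above plugs into the device_find_child() contract: the match callback returns nonzero to claim a child, iteration stops at the first claim, and the chosen device is returned with a reference the caller must drop with put_device(). Usage sketch with a hypothetical match criterion:

static int match_by_cookie(struct device *dev, void *data)
{
	return dev_get_drvdata(dev) == data;	/* nonzero == match */
}

static void use_child(struct device *parent, void *cookie)
{
	struct device *child;

	child = device_find_child(parent, cookie, match_by_cookie);
	if (!child)
		return;
	/* ... use child ... */
	put_device(child);	/* drop the reference taken on our behalf */
}
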
+diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
+index 1cb1494c28fe8..2323169b6e5fe 100644
+--- a/drivers/cxl/pci.c
++++ b/drivers/cxl/pci.c
+@@ -541,9 +541,9 @@ static int cxl_pci_ras_unmask(struct pci_dev *pdev)
+ 		return 0;
+ 	}
+ 
+-	/* BIOS has CXL error control */
+-	if (!host_bridge->native_cxl_error)
+-		return -ENXIO;
++	/* BIOS has PCIe AER error control */
++	if (!host_bridge->native_aer)
++		return 0;
+ 
+ 	rc = pcie_capability_read_word(pdev, PCI_EXP_DEVCTL, &cap);
+ 	if (rc)
+diff --git a/drivers/firewire/sbp2.c b/drivers/firewire/sbp2.c
+index 26db5b8dfc1ef..749868b9e80d6 100644
+--- a/drivers/firewire/sbp2.c
++++ b/drivers/firewire/sbp2.c
+@@ -81,7 +81,8 @@ MODULE_PARM_DESC(exclusive_login, "Exclusive login to sbp2 device "
+  *
+  * - power condition
+  *   Set the power condition field in the START STOP UNIT commands sent by
+- *   sd_mod on suspend, resume, and shutdown (if manage_start_stop is on).
++ *   sd_mod on suspend, resume, and shutdown (if manage_system_start_stop or
++ *   manage_runtime_start_stop is on).
+  *   Some disks need this to spin down or to resume properly.
+  *
+  * - override internal blacklist
+@@ -1517,8 +1518,10 @@ static int sbp2_scsi_slave_configure(struct scsi_device *sdev)
+ 
+ 	sdev->use_10_for_rw = 1;
+ 
+-	if (sbp2_param_exclusive_login)
+-		sdev->manage_start_stop = 1;
++	if (sbp2_param_exclusive_login) {
++		sdev->manage_system_start_stop = true;
++		sdev->manage_runtime_start_stop = true;
++	}
+ 
+ 	if (sdev->type == TYPE_ROM)
+ 		sdev->use_10_for_ms = 1;
+diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
+index 2109cd178ff70..121f4fc903cd5 100644
+--- a/drivers/firmware/arm_ffa/driver.c
++++ b/drivers/firmware/arm_ffa/driver.c
+@@ -397,6 +397,19 @@ static u32 ffa_get_num_pages_sg(struct scatterlist *sg)
+ 	return num_pages;
+ }
+ 
++static u8 ffa_memory_attributes_get(u32 func_id)
++{
++	/*
++	 * For the memory lend or donate operation, if the receiver is a PE or
++	 * a proxy endpoint, the owner/sender must not specify the attributes
++	 */
++	if (func_id == FFA_FN_NATIVE(MEM_LEND) ||
++	    func_id == FFA_MEM_LEND)
++		return 0;
++
++	return FFA_MEM_NORMAL | FFA_MEM_WRITE_BACK | FFA_MEM_INNER_SHAREABLE;
++}
++
+ static int
+ ffa_setup_and_transmit(u32 func_id, void *buffer, u32 max_fragsize,
+ 		       struct ffa_mem_ops_args *args)
+@@ -413,8 +426,7 @@ ffa_setup_and_transmit(u32 func_id, void *buffer, u32 max_fragsize,
+ 	mem_region->tag = args->tag;
+ 	mem_region->flags = args->flags;
+ 	mem_region->sender_id = drv_info->vm_id;
+-	mem_region->attributes = FFA_MEM_NORMAL | FFA_MEM_WRITE_BACK |
+-				 FFA_MEM_INNER_SHAREABLE;
++	mem_region->attributes = ffa_memory_attributes_get(func_id);
+ 	ep_mem_access = &mem_region->ep_mem_access[0];
+ 
+ 	for (idx = 0; idx < args->nattrs; idx++, ep_mem_access++) {
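
The arm_ffa hunk above factors the attribute choice into
ffa_memory_attributes_get(): for lend (and donate), the owner must leave the
attributes field zeroed, while share keeps the normal write-back
inner-shareable encoding. A standalone model of that rule, with stand-in
constant values (the real ones live in include/linux/arm_ffa.h):

#include <stdio.h>
#include <stdint.h>

#define FFA_MEM_LEND            0x84000071u  /* stand-in value */
#define FFA_FN64_MEM_LEND       0xC4000071u  /* stand-in for FFA_FN_NATIVE(MEM_LEND) */
#define FFA_MEM_SHARE           0x84000073u  /* stand-in value */
#define FFA_MEM_NORMAL          (2u << 4)    /* illustrative bit layout */
#define FFA_MEM_WRITE_BACK      (3u << 2)
#define FFA_MEM_INNER_SHAREABLE (3u << 0)

static uint8_t memory_attributes_get(uint32_t func_id)
{
	/* For lend/donate the owner must not specify the attributes. */
	if (func_id == FFA_MEM_LEND || func_id == FFA_FN64_MEM_LEND)
		return 0;
	return FFA_MEM_NORMAL | FFA_MEM_WRITE_BACK | FFA_MEM_INNER_SHAREABLE;
}

int main(void)
{
	printf("lend:  %#x\n", memory_attributes_get(FFA_MEM_LEND));
	printf("share: %#x\n", memory_attributes_get(FFA_MEM_SHARE));
	return 0;
}
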
+diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c
+index ecf5c4de851b7..431bda9165c3d 100644
+--- a/drivers/firmware/arm_scmi/perf.c
++++ b/drivers/firmware/arm_scmi/perf.c
+@@ -139,7 +139,7 @@ struct perf_dom_info {
+ 
+ struct scmi_perf_info {
+ 	u32 version;
+-	int num_domains;
++	u16 num_domains;
+ 	enum scmi_power_scale power_scale;
+ 	u64 stats_addr;
+ 	u32 stats_size;
+@@ -356,11 +356,26 @@ static int scmi_perf_mb_limits_set(const struct scmi_protocol_handle *ph,
+ 	return ret;
+ }
+ 
++static inline struct perf_dom_info *
++scmi_perf_domain_lookup(const struct scmi_protocol_handle *ph, u32 domain)
++{
++	struct scmi_perf_info *pi = ph->get_priv(ph);
++
++	if (domain >= pi->num_domains)
++		return ERR_PTR(-EINVAL);
++
++	return pi->dom_info + domain;
++}
++
+ static int scmi_perf_limits_set(const struct scmi_protocol_handle *ph,
+ 				u32 domain, u32 max_perf, u32 min_perf)
+ {
+ 	struct scmi_perf_info *pi = ph->get_priv(ph);
+-	struct perf_dom_info *dom = pi->dom_info + domain;
++	struct perf_dom_info *dom;
++
++	dom = scmi_perf_domain_lookup(ph, domain);
++	if (IS_ERR(dom))
++		return PTR_ERR(dom);
+ 
+ 	if (PROTOCOL_REV_MAJOR(pi->version) >= 0x3 && !max_perf && !min_perf)
+ 		return -EINVAL;
+@@ -408,8 +423,11 @@ static int scmi_perf_mb_limits_get(const struct scmi_protocol_handle *ph,
+ static int scmi_perf_limits_get(const struct scmi_protocol_handle *ph,
+ 				u32 domain, u32 *max_perf, u32 *min_perf)
+ {
+-	struct scmi_perf_info *pi = ph->get_priv(ph);
+-	struct perf_dom_info *dom = pi->dom_info + domain;
++	struct perf_dom_info *dom;
++
++	dom = scmi_perf_domain_lookup(ph, domain);
++	if (IS_ERR(dom))
++		return PTR_ERR(dom);
+ 
+ 	if (dom->fc_info && dom->fc_info[PERF_FC_LIMIT].get_addr) {
+ 		struct scmi_fc_info *fci = &dom->fc_info[PERF_FC_LIMIT];
+@@ -449,8 +467,11 @@ static int scmi_perf_mb_level_set(const struct scmi_protocol_handle *ph,
+ static int scmi_perf_level_set(const struct scmi_protocol_handle *ph,
+ 			       u32 domain, u32 level, bool poll)
+ {
+-	struct scmi_perf_info *pi = ph->get_priv(ph);
+-	struct perf_dom_info *dom = pi->dom_info + domain;
++	struct perf_dom_info *dom;
++
++	dom = scmi_perf_domain_lookup(ph, domain);
++	if (IS_ERR(dom))
++		return PTR_ERR(dom);
+ 
+ 	if (dom->fc_info && dom->fc_info[PERF_FC_LEVEL].set_addr) {
+ 		struct scmi_fc_info *fci = &dom->fc_info[PERF_FC_LEVEL];
+@@ -490,8 +511,11 @@ static int scmi_perf_mb_level_get(const struct scmi_protocol_handle *ph,
+ static int scmi_perf_level_get(const struct scmi_protocol_handle *ph,
+ 			       u32 domain, u32 *level, bool poll)
+ {
+-	struct scmi_perf_info *pi = ph->get_priv(ph);
+-	struct perf_dom_info *dom = pi->dom_info + domain;
++	struct perf_dom_info *dom;
++
++	dom = scmi_perf_domain_lookup(ph, domain);
++	if (IS_ERR(dom))
++		return PTR_ERR(dom);
+ 
+ 	if (dom->fc_info && dom->fc_info[PERF_FC_LEVEL].get_addr) {
+ 		*level = ioread32(dom->fc_info[PERF_FC_LEVEL].get_addr);
+@@ -574,13 +598,14 @@ static int scmi_dvfs_device_opps_add(const struct scmi_protocol_handle *ph,
+ 	unsigned long freq;
+ 	struct scmi_opp *opp;
+ 	struct perf_dom_info *dom;
+-	struct scmi_perf_info *pi = ph->get_priv(ph);
+ 
+ 	domain = scmi_dev_domain_id(dev);
+ 	if (domain < 0)
+-		return domain;
++		return -EINVAL;
+ 
+-	dom = pi->dom_info + domain;
++	dom = scmi_perf_domain_lookup(ph, domain);
++	if (IS_ERR(dom))
++		return PTR_ERR(dom);
+ 
+ 	for (opp = dom->opp, idx = 0; idx < dom->opp_count; idx++, opp++) {
+ 		freq = opp->perf * dom->mult_factor;
+@@ -603,14 +628,17 @@ static int
+ scmi_dvfs_transition_latency_get(const struct scmi_protocol_handle *ph,
+ 				 struct device *dev)
+ {
++	int domain;
+ 	struct perf_dom_info *dom;
+-	struct scmi_perf_info *pi = ph->get_priv(ph);
+-	int domain = scmi_dev_domain_id(dev);
+ 
++	domain = scmi_dev_domain_id(dev);
+ 	if (domain < 0)
+-		return domain;
++		return -EINVAL;
++
++	dom = scmi_perf_domain_lookup(ph, domain);
++	if (IS_ERR(dom))
++		return PTR_ERR(dom);
+ 
+-	dom = pi->dom_info + domain;
+ 	/* uS to nS */
+ 	return dom->opp[dom->opp_count - 1].trans_latency_us * 1000;
+ }
+@@ -618,8 +646,11 @@ scmi_dvfs_transition_latency_get(const struct scmi_protocol_handle *ph,
+ static int scmi_dvfs_freq_set(const struct scmi_protocol_handle *ph, u32 domain,
+ 			      unsigned long freq, bool poll)
+ {
+-	struct scmi_perf_info *pi = ph->get_priv(ph);
+-	struct perf_dom_info *dom = pi->dom_info + domain;
++	struct perf_dom_info *dom;
++
++	dom = scmi_perf_domain_lookup(ph, domain);
++	if (IS_ERR(dom))
++		return PTR_ERR(dom);
+ 
+ 	return scmi_perf_level_set(ph, domain, freq / dom->mult_factor, poll);
+ }
+@@ -630,11 +661,14 @@ static int scmi_dvfs_freq_get(const struct scmi_protocol_handle *ph, u32 domain,
+ 	int ret;
+ 	u32 level;
+ 	struct scmi_perf_info *pi = ph->get_priv(ph);
+-	struct perf_dom_info *dom = pi->dom_info + domain;
+ 
+ 	ret = scmi_perf_level_get(ph, domain, &level, poll);
+-	if (!ret)
++	if (!ret) {
++		struct perf_dom_info *dom = pi->dom_info + domain;
++
++		/* Note that the domain is validated implicitly by scmi_perf_level_get() */
+ 		*freq = level * dom->mult_factor;
++	}
+ 
+ 	return ret;
+ }
+@@ -643,15 +677,14 @@ static int scmi_dvfs_est_power_get(const struct scmi_protocol_handle *ph,
+ 				   u32 domain, unsigned long *freq,
+ 				   unsigned long *power)
+ {
+-	struct scmi_perf_info *pi = ph->get_priv(ph);
+ 	struct perf_dom_info *dom;
+ 	unsigned long opp_freq;
+ 	int idx, ret = -EINVAL;
+ 	struct scmi_opp *opp;
+ 
+-	dom = pi->dom_info + domain;
+-	if (!dom)
+-		return -EIO;
++	dom = scmi_perf_domain_lookup(ph, domain);
++	if (IS_ERR(dom))
++		return PTR_ERR(dom);
+ 
+ 	for (opp = dom->opp, idx = 0; idx < dom->opp_count; idx++, opp++) {
+ 		opp_freq = opp->perf * dom->mult_factor;
+@@ -670,10 +703,16 @@ static int scmi_dvfs_est_power_get(const struct scmi_protocol_handle *ph,
+ static bool scmi_fast_switch_possible(const struct scmi_protocol_handle *ph,
+ 				      struct device *dev)
+ {
++	int domain;
+ 	struct perf_dom_info *dom;
+-	struct scmi_perf_info *pi = ph->get_priv(ph);
+ 
+-	dom = pi->dom_info + scmi_dev_domain_id(dev);
++	domain = scmi_dev_domain_id(dev);
++	if (domain < 0)
++		return false;
++
++	dom = scmi_perf_domain_lookup(ph, domain);
++	if (IS_ERR(dom))
++		return false;
+ 
+ 	return dom->fc_info && dom->fc_info[PERF_FC_LEVEL].set_addr;
+ }
+@@ -819,6 +858,8 @@ static int scmi_perf_protocol_init(const struct scmi_protocol_handle *ph)
+ 	if (!pinfo)
+ 		return -ENOMEM;
+ 
++	pinfo->version = version;
++
+ 	ret = scmi_perf_attributes_get(ph, pinfo);
+ 	if (ret)
+ 		return ret;
+@@ -838,8 +879,6 @@ static int scmi_perf_protocol_init(const struct scmi_protocol_handle *ph)
+ 			scmi_perf_domain_init_fc(ph, domain, &dom->fc_info);
+ 	}
+ 
+-	pinfo->version = version;
+-
+ 	return ph->set_priv(ph, pinfo);
+ }
+ 
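
The recurring pattern in the perf.c hunks above is a bounds-checked lookup
that returns ERR_PTR(-EINVAL) instead of silently indexing past pi->dom_info.
A userspace sketch of that idiom; ERR_PTR/IS_ERR/PTR_ERR are redefined locally
only so the example runs outside the kernel:

#include <stdio.h>
#include <errno.h>
#include <stdint.h>

#define MAX_ERRNO 4095
#define ERR_PTR(err)  ((void *)(intptr_t)(err))
#define PTR_ERR(ptr)  ((int)(intptr_t)(ptr))
#define IS_ERR(ptr)   ((uintptr_t)(void *)(ptr) >= (uintptr_t)-MAX_ERRNO)

struct perf_dom_info { unsigned int mult_factor; };

static struct perf_dom_info doms[4] = { {1}, {2}, {4}, {8} };
static unsigned int num_domains = 4;

static struct perf_dom_info *domain_lookup(uint32_t domain)
{
	/* Reject out-of-range ids up front, as the patch does. */
	if (domain >= num_domains)
		return ERR_PTR(-EINVAL);
	return &doms[domain];
}

int main(void)
{
	struct perf_dom_info *dom = domain_lookup(7);

	if (IS_ERR(dom))
		printf("lookup failed: %d\n", PTR_ERR(dom)); /* -22 (-EINVAL) */
	return 0;
}
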
+diff --git a/drivers/firmware/cirrus/cs_dsp.c b/drivers/firmware/cirrus/cs_dsp.c
+index 49b70c70dc696..79d4254d1f9bc 100644
+--- a/drivers/firmware/cirrus/cs_dsp.c
++++ b/drivers/firmware/cirrus/cs_dsp.c
+@@ -1863,15 +1863,15 @@ static int cs_dsp_adsp2_setup_algs(struct cs_dsp *dsp)
+ 		return PTR_ERR(adsp2_alg);
+ 
+ 	for (i = 0; i < n_algs; i++) {
+-		cs_dsp_info(dsp,
+-			    "%d: ID %x v%d.%d.%d XM@%x YM@%x ZM@%x\n",
+-			    i, be32_to_cpu(adsp2_alg[i].alg.id),
+-			    (be32_to_cpu(adsp2_alg[i].alg.ver) & 0xff0000) >> 16,
+-			    (be32_to_cpu(adsp2_alg[i].alg.ver) & 0xff00) >> 8,
+-			    be32_to_cpu(adsp2_alg[i].alg.ver) & 0xff,
+-			    be32_to_cpu(adsp2_alg[i].xm),
+-			    be32_to_cpu(adsp2_alg[i].ym),
+-			    be32_to_cpu(adsp2_alg[i].zm));
++		cs_dsp_dbg(dsp,
++			   "%d: ID %x v%d.%d.%d XM@%x YM@%x ZM@%x\n",
++			   i, be32_to_cpu(adsp2_alg[i].alg.id),
++			   (be32_to_cpu(adsp2_alg[i].alg.ver) & 0xff0000) >> 16,
++			   (be32_to_cpu(adsp2_alg[i].alg.ver) & 0xff00) >> 8,
++			   be32_to_cpu(adsp2_alg[i].alg.ver) & 0xff,
++			   be32_to_cpu(adsp2_alg[i].xm),
++			   be32_to_cpu(adsp2_alg[i].ym),
++			   be32_to_cpu(adsp2_alg[i].zm));
+ 
+ 		alg_region = cs_dsp_create_region(dsp, WMFW_ADSP2_XM,
+ 						  adsp2_alg[i].alg.id,
+@@ -1996,14 +1996,14 @@ static int cs_dsp_halo_setup_algs(struct cs_dsp *dsp)
+ 		return PTR_ERR(halo_alg);
+ 
+ 	for (i = 0; i < n_algs; i++) {
+-		cs_dsp_info(dsp,
+-			    "%d: ID %x v%d.%d.%d XM@%x YM@%x\n",
+-			    i, be32_to_cpu(halo_alg[i].alg.id),
+-			    (be32_to_cpu(halo_alg[i].alg.ver) & 0xff0000) >> 16,
+-			    (be32_to_cpu(halo_alg[i].alg.ver) & 0xff00) >> 8,
+-			    be32_to_cpu(halo_alg[i].alg.ver) & 0xff,
+-			    be32_to_cpu(halo_alg[i].xm_base),
+-			    be32_to_cpu(halo_alg[i].ym_base));
++		cs_dsp_dbg(dsp,
++			   "%d: ID %x v%d.%d.%d XM@%x YM@%x\n",
++			   i, be32_to_cpu(halo_alg[i].alg.id),
++			   (be32_to_cpu(halo_alg[i].alg.ver) & 0xff0000) >> 16,
++			   (be32_to_cpu(halo_alg[i].alg.ver) & 0xff00) >> 8,
++			   be32_to_cpu(halo_alg[i].alg.ver) & 0xff,
++			   be32_to_cpu(halo_alg[i].xm_base),
++			   be32_to_cpu(halo_alg[i].ym_base));
+ 
+ 		ret = cs_dsp_halo_create_regions(dsp, halo_alg[i].alg.id,
+ 						 halo_alg[i].alg.ver,
+diff --git a/drivers/firmware/imx/imx-dsp.c b/drivers/firmware/imx/imx-dsp.c
+index a6c06d7476c32..1f410809d3ee4 100644
+--- a/drivers/firmware/imx/imx-dsp.c
++++ b/drivers/firmware/imx/imx-dsp.c
+@@ -115,6 +115,7 @@ static int imx_dsp_setup_channels(struct imx_dsp_ipc *dsp_ipc)
+ 		dsp_chan->idx = i % 2;
+ 		dsp_chan->ch = mbox_request_channel_byname(cl, chan_name);
+ 		if (IS_ERR(dsp_chan->ch)) {
++			kfree(dsp_chan->name);
+ 			ret = PTR_ERR(dsp_chan->ch);
+ 			if (ret != -EPROBE_DEFER)
+ 				dev_err(dev, "Failed to request mbox chan %s ret %d\n",
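
The imx-dsp hunk above plugs a one-line leak: the channel name was allocated
just before mbox_request_channel_byname(), so the failure path must free it.
Schematically (strdup stands in for the kernel's kasprintf):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void *request_channel(const char *name) { (void)name; return NULL; }

int main(void)
{
	char *name = strdup("txdb0");   /* allocated per channel */
	void *ch;

	if (!name)
		return 1;

	ch = request_channel(name);
	if (!ch) {
		free(name);             /* the fix: release on the error path */
		fprintf(stderr, "request failed\n");
		return 1;
	}
	return 0;
}
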
+diff --git a/drivers/gpio/gpio-pmic-eic-sprd.c b/drivers/gpio/gpio-pmic-eic-sprd.c
+index c3e4d90f6b183..36f6cfc224c2d 100644
+--- a/drivers/gpio/gpio-pmic-eic-sprd.c
++++ b/drivers/gpio/gpio-pmic-eic-sprd.c
+@@ -352,6 +352,7 @@ static int sprd_pmic_eic_probe(struct platform_device *pdev)
+ 	pmic_eic->chip.set_config = sprd_pmic_eic_set_config;
+ 	pmic_eic->chip.set = sprd_pmic_eic_set;
+ 	pmic_eic->chip.get = sprd_pmic_eic_get;
++	pmic_eic->chip.can_sleep = true;
+ 
+ 	irq = &pmic_eic->chip.irq;
+ 	gpio_irq_chip_set_chip(irq, &pmic_eic_irq_chip);
+diff --git a/drivers/gpio/gpio-tb10x.c b/drivers/gpio/gpio-tb10x.c
+index 78f8790168ae1..f96d260a4a19d 100644
+--- a/drivers/gpio/gpio-tb10x.c
++++ b/drivers/gpio/gpio-tb10x.c
+@@ -195,7 +195,7 @@ static int tb10x_gpio_probe(struct platform_device *pdev)
+ 				handle_edge_irq, IRQ_NOREQUEST, IRQ_NOPROBE,
+ 				IRQ_GC_INIT_MASK_CACHE);
+ 		if (ret)
+-			return ret;
++			goto err_remove_domain;
+ 
+ 		gc = tb10x_gpio->domain->gc->gc[0];
+ 		gc->reg_base                         = tb10x_gpio->base;
+@@ -209,6 +209,10 @@ static int tb10x_gpio_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	return 0;
++
++err_remove_domain:
++	irq_domain_remove(tb10x_gpio->domain);
++	return ret;
+ }
+ 
+ static int tb10x_gpio_remove(struct platform_device *pdev)
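
The tb10x hunk above is the standard probe() unwind pattern: once the IRQ
domain exists, any later failure has to jump to a label that removes it,
releasing resources in reverse order of acquisition. A schematic sketch, not
tied to the GPIO APIs:

#include <stdio.h>

static int acquire_a(void) { return 0; }           /* e.g. irq_domain_add_linear() */
static int acquire_b(void) { return -1; }          /* e.g. irq_alloc_domain_generic_chips() */
static void release_a(void) { puts("release a"); } /* e.g. irq_domain_remove() */

static int probe(void)
{
	int ret;

	ret = acquire_a();
	if (ret)
		return ret;         /* nothing to undo yet */

	ret = acquire_b();
	if (ret)
		goto err_release_a; /* undo in reverse order of acquisition */

	return 0;

err_release_a:
	release_a();
	return ret;
}

int main(void)
{
	printf("probe: %d\n", probe());
	return 0;
}
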
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index b4fcad0e62f7e..a7c8beff1647c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -492,7 +492,7 @@ void amdgpu_amdkfd_get_cu_info(struct amdgpu_device *adev, struct kfd_cu_info *c
+ 	cu_info->cu_active_number = acu_info.number;
+ 	cu_info->cu_ao_mask = acu_info.ao_cu_mask;
+ 	memcpy(&cu_info->cu_bitmap[0], &acu_info.bitmap[0],
+-	       sizeof(acu_info.bitmap));
++	       sizeof(cu_info->cu_bitmap));
+ 	cu_info->num_shader_engines = adev->gfx.config.max_shader_engines;
+ 	cu_info->num_shader_arrays_per_engine = adev->gfx.config.max_sh_per_se;
+ 	cu_info->num_cu_per_sh = adev->gfx.config.max_cu_per_sh;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
+index a4ff515ce8966..59ba03d387fcc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
+@@ -43,6 +43,7 @@
+ #define AMDGPU_GFX_LBPW_DISABLED_MODE		0x00000008L
+ 
+ #define AMDGPU_MAX_GC_INSTANCES		8
++#define KGD_MAX_QUEUES			128
+ 
+ #define AMDGPU_MAX_GFX_QUEUES KGD_MAX_QUEUES
+ #define AMDGPU_MAX_COMPUTE_QUEUES KGD_MAX_QUEUES
+@@ -254,7 +255,7 @@ struct amdgpu_cu_info {
+ 	uint32_t number;
+ 	uint32_t ao_cu_mask;
+ 	uint32_t ao_cu_bitmap[4][4];
+-	uint32_t bitmap[4][4];
++	uint32_t bitmap[AMDGPU_MAX_GC_INSTANCES][4][4];
+ };
+ 
+ struct amdgpu_gfx_ras {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index d4ca19ba5a289..b9fc7e2db5e59 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -839,7 +839,7 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ 		memcpy(&dev_info->cu_ao_bitmap[0], &adev->gfx.cu_info.ao_cu_bitmap[0],
+ 		       sizeof(adev->gfx.cu_info.ao_cu_bitmap));
+ 		memcpy(&dev_info->cu_bitmap[0], &adev->gfx.cu_info.bitmap[0],
+-		       sizeof(adev->gfx.cu_info.bitmap));
++		       sizeof(dev_info->cu_bitmap));
+ 		dev_info->vram_type = adev->gmc.vram_type;
+ 		dev_info->vram_bit_width = adev->gmc.vram_width;
+ 		dev_info->vce_harvest_config = adev->vce.harvest_config;
+@@ -940,12 +940,17 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
+ 			struct atom_context *atom_context;
+ 
+ 			atom_context = adev->mode_info.atom_context;
+-			memcpy(vbios_info.name, atom_context->name, sizeof(atom_context->name));
+-			memcpy(vbios_info.vbios_pn, atom_context->vbios_pn, sizeof(atom_context->vbios_pn));
+-			vbios_info.version = atom_context->version;
+-			memcpy(vbios_info.vbios_ver_str, atom_context->vbios_ver_str,
+-						sizeof(atom_context->vbios_ver_str));
+-			memcpy(vbios_info.date, atom_context->date, sizeof(atom_context->date));
++			if (atom_context) {
++				memcpy(vbios_info.name, atom_context->name,
++				       sizeof(atom_context->name));
++				memcpy(vbios_info.vbios_pn, atom_context->vbios_pn,
++				       sizeof(atom_context->vbios_pn));
++				vbios_info.version = atom_context->version;
++				memcpy(vbios_info.vbios_ver_str, atom_context->vbios_ver_str,
++				       sizeof(atom_context->vbios_ver_str));
++				memcpy(vbios_info.date, atom_context->date,
++				       sizeof(atom_context->date));
++			}
+ 
+ 			return copy_to_user(out, &vbios_info,
+ 						min((size_t)size, sizeof(vbios_info))) ? -EFAULT : 0;
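
Both amdgpu hunks above resize memcpy() by the destination rather than the
source. That matters because cu_info.bitmap grows later in this same patch
(from [4][4] to [AMDGPU_MAX_GC_INSTANCES][4][4]); sizing by the source would
then overrun the ABI-fixed destination arrays. Illustrative only:

#include <stdio.h>
#include <string.h>

struct grown_src { unsigned int bitmap[8][4][4]; };   /* source grew */
struct fixed_dst { unsigned int cu_bitmap[4][4]; };   /* ABI-fixed dest */

int main(void)
{
	struct grown_src src = {0};
	struct fixed_dst dst;

	/* BAD:  memcpy(&dst.cu_bitmap, &src.bitmap, sizeof(src.bitmap));
	 *       would copy 512 bytes into a 64-byte field. */
	memcpy(&dst.cu_bitmap, &src.bitmap, sizeof(dst.cu_bitmap));

	printf("copied %zu bytes, not %zu\n",
	       sizeof(dst.cu_bitmap), sizeof(src.bitmap));
	return 0;
}
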
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+index 8aaa427f8c0f6..7d5019a884024 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+@@ -1061,7 +1061,8 @@ int amdgpu_ras_query_error_status(struct amdgpu_device *adev,
+ 	info->ce_count = obj->err_data.ce_count;
+ 
+ 	if (err_data.ce_count) {
+-		if (adev->smuio.funcs &&
++		if (!adev->aid_mask &&
++		    adev->smuio.funcs &&
+ 		    adev->smuio.funcs->get_socket_id &&
+ 		    adev->smuio.funcs->get_die_id) {
+ 			dev_info(adev->dev, "socket: %d, die: %d "
+@@ -1081,7 +1082,8 @@ int amdgpu_ras_query_error_status(struct amdgpu_device *adev,
+ 		}
+ 	}
+ 	if (err_data.ue_count) {
+-		if (adev->smuio.funcs &&
++		if (!adev->aid_mask &&
++		    adev->smuio.funcs &&
+ 		    adev->smuio.funcs->get_socket_id &&
+ 		    adev->smuio.funcs->get_die_id) {
+ 			dev_info(adev->dev, "socket: %d, die: %d "
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.h
+index b22d4fb2a8470..d3186b570b82e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring_mux.h
+@@ -56,6 +56,15 @@ enum amdgpu_ring_mux_offset_type {
+ 	AMDGPU_MUX_OFFSET_TYPE_CE,
+ };
+ 
++enum ib_complete_status {
++	/* IB not started/reset value, default value. */
++	IB_COMPLETION_STATUS_DEFAULT = 0,
++	/* IB preempted, started but not completed. */
++	IB_COMPLETION_STATUS_PREEMPTED = 1,
++	/* IB completed. */
++	IB_COMPLETION_STATUS_COMPLETED = 2,
++};
++
+ struct amdgpu_ring_mux {
+ 	struct amdgpu_ring      *real_ring;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 44af8022b89fa..f743bf2c92877 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -9448,7 +9448,7 @@ static int gfx_v10_0_get_cu_info(struct amdgpu_device *adev,
+ 				gfx_v10_0_set_user_wgp_inactive_bitmap_per_sh(
+ 					adev, disable_masks[i * 2 + j]);
+ 			bitmap = gfx_v10_0_get_cu_active_bitmap_per_sh(adev);
+-			cu_info->bitmap[i][j] = bitmap;
++			cu_info->bitmap[0][i][j] = bitmap;
+ 
+ 			for (k = 0; k < adev->gfx.config.max_cu_per_sh; k++) {
+ 				if (bitmap & mask) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index 0451533ddde41..a82cba884c48f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -6394,7 +6394,7 @@ static int gfx_v11_0_get_cu_info(struct amdgpu_device *adev,
+ 			 *    SE6: {SH0,SH1} --> {bitmap[2][2], bitmap[2][3]}
+ 			 *    SE7: {SH0,SH1} --> {bitmap[3][2], bitmap[3][3]}
+ 			 */
+-			cu_info->bitmap[i % 4][j + (i / 4) * 2] = bitmap;
++			cu_info->bitmap[0][i % 4][j + (i / 4) * 2] = bitmap;
+ 
+ 			for (k = 0; k < adev->gfx.config.max_cu_per_sh; k++) {
+ 				if (bitmap & mask)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
+index da6caff78c22b..34f9211b26793 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
+@@ -3577,7 +3577,7 @@ static void gfx_v6_0_get_cu_info(struct amdgpu_device *adev)
+ 				gfx_v6_0_set_user_cu_inactive_bitmap(
+ 					adev, disable_masks[i * 2 + j]);
+ 			bitmap = gfx_v6_0_get_cu_enabled(adev);
+-			cu_info->bitmap[i][j] = bitmap;
++			cu_info->bitmap[0][i][j] = bitmap;
+ 
+ 			for (k = 0; k < adev->gfx.config.max_cu_per_sh; k++) {
+ 				if (bitmap & mask) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
+index 8c174c11eaee0..6feae2548e8ee 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
+@@ -5122,7 +5122,7 @@ static void gfx_v7_0_get_cu_info(struct amdgpu_device *adev)
+ 				gfx_v7_0_set_user_cu_inactive_bitmap(
+ 					adev, disable_masks[i * 2 + j]);
+ 			bitmap = gfx_v7_0_get_cu_active_bitmap(adev);
+-			cu_info->bitmap[i][j] = bitmap;
++			cu_info->bitmap[0][i][j] = bitmap;
+ 
+ 			for (k = 0; k < adev->gfx.config.max_cu_per_sh; k ++) {
+ 				if (bitmap & mask) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+index 51c1745c83697..885ebd703260f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+@@ -7121,7 +7121,7 @@ static void gfx_v8_0_get_cu_info(struct amdgpu_device *adev)
+ 				gfx_v8_0_set_user_cu_inactive_bitmap(
+ 					adev, disable_masks[i * 2 + j]);
+ 			bitmap = gfx_v8_0_get_cu_active_bitmap(adev);
+-			cu_info->bitmap[i][j] = bitmap;
++			cu_info->bitmap[0][i][j] = bitmap;
+ 
+ 			for (k = 0; k < adev->gfx.config.max_cu_per_sh; k ++) {
+ 				if (bitmap & mask) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 65577eca58f1c..602d74023b0b9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -1497,7 +1497,7 @@ static void gfx_v9_0_init_always_on_cu_mask(struct amdgpu_device *adev)
+ 			amdgpu_gfx_select_se_sh(adev, i, j, 0xffffffff, 0);
+ 
+ 			for (k = 0; k < adev->gfx.config.max_cu_per_sh; k ++) {
+-				if (cu_info->bitmap[i][j] & mask) {
++				if (cu_info->bitmap[0][i][j] & mask) {
+ 					if (counter == pg_always_on_cu_num)
+ 						WREG32_SOC15(GC, 0, mmRLC_PG_ALWAYS_ON_CU_MASK, cu_bitmap);
+ 					if (counter < always_on_cu_num)
+@@ -5230,6 +5230,9 @@ static void gfx_v9_0_ring_patch_de_meta(struct amdgpu_ring *ring,
+ 		de_payload_cpu_addr = adev->virt.csa_cpu_addr + payload_offset;
+ 	}
+ 
++	((struct v9_de_ib_state *)de_payload_cpu_addr)->ib_completion_status =
++		IB_COMPLETION_STATUS_PREEMPTED;
++
+ 	if (offset + (payload_size >> 2) <= ring->buf_mask + 1) {
+ 		memcpy((void *)&ring->ring[offset], de_payload_cpu_addr, payload_size);
+ 	} else {
+@@ -7234,7 +7237,7 @@ static int gfx_v9_0_get_cu_info(struct amdgpu_device *adev,
+ 			 *    SE6,SH0 --> bitmap[2][1]
+ 			 *    SE7,SH0 --> bitmap[3][1]
+ 			 */
+-			cu_info->bitmap[i % 4][j + i / 4] = bitmap;
++			cu_info->bitmap[0][i % 4][j + i / 4] = bitmap;
+ 
+ 			for (k = 0; k < adev->gfx.config.max_cu_per_sh; k ++) {
+ 				if (bitmap & mask) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+index 4f883b94f98ef..84a74a6c6b2de 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+@@ -4228,7 +4228,7 @@ static void gfx_v9_4_3_set_gds_init(struct amdgpu_device *adev)
+ }
+ 
+ static void gfx_v9_4_3_set_user_cu_inactive_bitmap(struct amdgpu_device *adev,
+-						 u32 bitmap)
++						 u32 bitmap, int xcc_id)
+ {
+ 	u32 data;
+ 
+@@ -4238,15 +4238,15 @@ static void gfx_v9_4_3_set_user_cu_inactive_bitmap(struct amdgpu_device *adev,
+ 	data = bitmap << GC_USER_SHADER_ARRAY_CONFIG__INACTIVE_CUS__SHIFT;
+ 	data &= GC_USER_SHADER_ARRAY_CONFIG__INACTIVE_CUS_MASK;
+ 
+-	WREG32_SOC15(GC, GET_INST(GC, 0), regGC_USER_SHADER_ARRAY_CONFIG, data);
++	WREG32_SOC15(GC, GET_INST(GC, xcc_id), regGC_USER_SHADER_ARRAY_CONFIG, data);
+ }
+ 
+-static u32 gfx_v9_4_3_get_cu_active_bitmap(struct amdgpu_device *adev)
++static u32 gfx_v9_4_3_get_cu_active_bitmap(struct amdgpu_device *adev, int xcc_id)
+ {
+ 	u32 data, mask;
+ 
+-	data = RREG32_SOC15(GC, GET_INST(GC, 0), regCC_GC_SHADER_ARRAY_CONFIG);
+-	data |= RREG32_SOC15(GC, GET_INST(GC, 0), regGC_USER_SHADER_ARRAY_CONFIG);
++	data = RREG32_SOC15(GC, GET_INST(GC, xcc_id), regCC_GC_SHADER_ARRAY_CONFIG);
++	data |= RREG32_SOC15(GC, GET_INST(GC, xcc_id), regGC_USER_SHADER_ARRAY_CONFIG);
+ 
+ 	data &= CC_GC_SHADER_ARRAY_CONFIG__INACTIVE_CUS_MASK;
+ 	data >>= CC_GC_SHADER_ARRAY_CONFIG__INACTIVE_CUS__SHIFT;
+@@ -4259,7 +4259,7 @@ static u32 gfx_v9_4_3_get_cu_active_bitmap(struct amdgpu_device *adev)
+ static int gfx_v9_4_3_get_cu_info(struct amdgpu_device *adev,
+ 				 struct amdgpu_cu_info *cu_info)
+ {
+-	int i, j, k, counter, active_cu_number = 0;
++	int i, j, k, counter, xcc_id, active_cu_number = 0;
+ 	u32 mask, bitmap, ao_bitmap, ao_cu_mask = 0;
+ 	unsigned disable_masks[4 * 4];
+ 
+@@ -4278,46 +4278,38 @@ static int gfx_v9_4_3_get_cu_info(struct amdgpu_device *adev,
+ 				    adev->gfx.config.max_sh_per_se);
+ 
+ 	mutex_lock(&adev->grbm_idx_mutex);
+-	for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
+-		for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
+-			mask = 1;
+-			ao_bitmap = 0;
+-			counter = 0;
+-			gfx_v9_4_3_xcc_select_se_sh(adev, i, j, 0xffffffff, 0);
+-			gfx_v9_4_3_set_user_cu_inactive_bitmap(
+-				adev, disable_masks[i * adev->gfx.config.max_sh_per_se + j]);
+-			bitmap = gfx_v9_4_3_get_cu_active_bitmap(adev);
+-
+-			/*
+-			 * The bitmap(and ao_cu_bitmap) in cu_info structure is
+-			 * 4x4 size array, and it's usually suitable for Vega
+-			 * ASICs which has 4*2 SE/SH layout.
+-			 * But for Arcturus, SE/SH layout is changed to 8*1.
+-			 * To mostly reduce the impact, we make it compatible
+-			 * with current bitmap array as below:
+-			 *    SE4,SH0 --> bitmap[0][1]
+-			 *    SE5,SH0 --> bitmap[1][1]
+-			 *    SE6,SH0 --> bitmap[2][1]
+-			 *    SE7,SH0 --> bitmap[3][1]
+-			 */
+-			cu_info->bitmap[i % 4][j + i / 4] = bitmap;
+-
+-			for (k = 0; k < adev->gfx.config.max_cu_per_sh; k++) {
+-				if (bitmap & mask) {
+-					if (counter < adev->gfx.config.max_cu_per_sh)
+-						ao_bitmap |= mask;
+-					counter++;
++	for (xcc_id = 0; xcc_id < NUM_XCC(adev->gfx.xcc_mask); xcc_id++) {
++		for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
++			for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
++				mask = 1;
++				ao_bitmap = 0;
++				counter = 0;
++				gfx_v9_4_3_xcc_select_se_sh(adev, i, j, 0xffffffff, xcc_id);
++				gfx_v9_4_3_set_user_cu_inactive_bitmap(
++					adev,
++					disable_masks[i * adev->gfx.config.max_sh_per_se + j],
++					xcc_id);
++				bitmap = gfx_v9_4_3_get_cu_active_bitmap(adev, xcc_id);
++
++				cu_info->bitmap[xcc_id][i][j] = bitmap;
++
++				for (k = 0; k < adev->gfx.config.max_cu_per_sh; k++) {
++					if (bitmap & mask) {
++						if (counter < adev->gfx.config.max_cu_per_sh)
++							ao_bitmap |= mask;
++						counter++;
++					}
++					mask <<= 1;
+ 				}
+-				mask <<= 1;
++				active_cu_number += counter;
++				if (i < 2 && j < 2)
++					ao_cu_mask |= (ao_bitmap << (i * 16 + j * 8));
++				cu_info->ao_cu_bitmap[i][j] = ao_bitmap;
+ 			}
+-			active_cu_number += counter;
+-			if (i < 2 && j < 2)
+-				ao_cu_mask |= (ao_bitmap << (i * 16 + j * 8));
+-			cu_info->ao_cu_bitmap[i % 4][j + i / 4] = ao_bitmap;
+ 		}
++		gfx_v9_4_3_xcc_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff,
++					    xcc_id);
+ 	}
+-	gfx_v9_4_3_xcc_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff,
+-				    0);
+ 	mutex_unlock(&adev->grbm_idx_mutex);
+ 
+ 	cu_info->number = active_cu_number;
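
The gfx_v9_4_3 rework above adds an outer XCC loop and indexes
cu_info->bitmap[xcc][i][j] per instance. A toy version of the resulting CU
count, with __builtin_popcount standing in for the kernel's hweight32():

#include <stdio.h>

#define NUM_XCC 2
#define NUM_SE  4
#define NUM_SH  1

int main(void)
{
	unsigned int bitmap[NUM_XCC][NUM_SE][NUM_SH];
	unsigned int active = 0;

	for (int xcc = 0; xcc < NUM_XCC; xcc++)
		for (int se = 0; se < NUM_SE; se++)
			for (int sh = 0; sh < NUM_SH; sh++)
				bitmap[xcc][se][sh] = 0xff; /* pretend 8 CUs active */

	/* Walk every XCC instance, as the patched loop does. */
	for (int xcc = 0; xcc < NUM_XCC; xcc++)
		for (int se = 0; se < NUM_SE; se++)
			for (int sh = 0; sh < NUM_SH; sh++)
				active += __builtin_popcount(bitmap[xcc][se][sh]);

	printf("active CUs: %u\n", active); /* 2 * 4 * 1 * 8 = 64 */
	return 0;
}
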
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v4_3.c b/drivers/gpu/drm/amd/amdgpu/nbio_v4_3.c
+index d5ed9e0e1a5f1..e5b5b0f4940f4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v4_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v4_3.c
+@@ -345,6 +345,9 @@ static void nbio_v4_3_init_registers(struct amdgpu_device *adev)
+ 		data &= ~RCC_DEV0_EPF2_STRAP2__STRAP_NO_SOFT_RESET_DEV0_F2_MASK;
+ 		WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF2_STRAP2, data);
+ 	}
++	if (amdgpu_sriov_vf(adev))
++		adev->rmmio_remap.reg_offset = SOC15_REG_OFFSET(NBIO, 0,
++			regBIF_BX_DEV0_EPF0_VF0_HDP_MEM_COHERENCY_FLUSH_CNTL) << 2;
+ }
+ 
+ static u32 nbio_v4_3_get_rom_offset(struct amdgpu_device *adev)
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc21.c b/drivers/gpu/drm/amd/amdgpu/soc21.c
+index e5e5d68a4d702..1a5ffbf884891 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc21.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc21.c
+@@ -786,7 +786,7 @@ static int soc21_common_hw_init(void *handle)
+ 	 * for the purpose of expose those registers
+ 	 * to process space
+ 	 */
+-	if (adev->nbio.funcs->remap_hdp_registers)
++	if (adev->nbio.funcs->remap_hdp_registers && !amdgpu_sriov_vf(adev))
+ 		adev->nbio.funcs->remap_hdp_registers(adev);
+ 	/* enable the doorbell aperture */
+ 	adev->nbio.funcs->enable_doorbell_aperture(adev, true);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+index f5a6f562e2a80..11b9837292536 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+@@ -2154,7 +2154,8 @@ static int kfd_create_vcrat_image_gpu(void *pcrat_image,
+ 
+ 	amdgpu_amdkfd_get_cu_info(kdev->adev, &cu_info);
+ 	cu->num_simd_per_cu = cu_info.simd_per_cu;
+-	cu->num_simd_cores = cu_info.simd_per_cu * cu_info.cu_active_number;
++	cu->num_simd_cores = cu_info.simd_per_cu *
++			(cu_info.cu_active_number / kdev->kfd->num_nodes);
+ 	cu->max_waves_simd = cu_info.max_waves_per_simd;
+ 
+ 	cu->wave_front_size = cu_info.wave_front_size;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.h b/drivers/gpu/drm/amd/amdkfd/kfd_crat.h
+index fc719389b5d65..4684711aa695a 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.h
+@@ -79,6 +79,10 @@ struct crat_header {
+ #define CRAT_SUBTYPE_IOLINK_AFFINITY		5
+ #define CRAT_SUBTYPE_MAX			6
+ 
++/*
++ * Do not change the value of CRAT_SIBLINGMAP_SIZE from 32
++ * as it breaks the ABI.
++ */
+ #define CRAT_SIBLINGMAP_SIZE	32
+ 
+ /*
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index 01192f5abe462..a61334592c87a 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -216,7 +216,7 @@ static int add_queue_mes(struct device_queue_manager *dqm, struct queue *q,
+ 
+ 	if (q->wptr_bo) {
+ 		wptr_addr_off = (uint64_t)q->properties.write_ptr & (PAGE_SIZE - 1);
+-		queue_input.wptr_mc_addr = ((uint64_t)q->wptr_bo->tbo.resource->start << PAGE_SHIFT) + wptr_addr_off;
++		queue_input.wptr_mc_addr = amdgpu_bo_gpu_offset(q->wptr_bo) + wptr_addr_off;
+ 	}
+ 
+ 	queue_input.is_kfd_process = 1;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
+index 863cf060af484..254f343f967a3 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
+@@ -97,18 +97,22 @@ void free_mqd_hiq_sdma(struct mqd_manager *mm, void *mqd,
+ 
+ void mqd_symmetrically_map_cu_mask(struct mqd_manager *mm,
+ 		const uint32_t *cu_mask, uint32_t cu_mask_count,
+-		uint32_t *se_mask)
++		uint32_t *se_mask, uint32_t inst)
+ {
+ 	struct kfd_cu_info cu_info;
+ 	uint32_t cu_per_sh[KFD_MAX_NUM_SE][KFD_MAX_NUM_SH_PER_SE] = {0};
+ 	bool wgp_mode_req = KFD_GC_VERSION(mm->dev) >= IP_VERSION(10, 0, 0);
+ 	uint32_t en_mask = wgp_mode_req ? 0x3 : 0x1;
+-	int i, se, sh, cu, cu_bitmap_sh_mul, inc = wgp_mode_req ? 2 : 1;
++	int i, se, sh, cu, cu_bitmap_sh_mul, cu_inc = wgp_mode_req ? 2 : 1;
++	uint32_t cu_active_per_node;
++	int inc = cu_inc * NUM_XCC(mm->dev->xcc_mask);
++	int xcc_inst = inst + ffs(mm->dev->xcc_mask) - 1;
+ 
+ 	amdgpu_amdkfd_get_cu_info(mm->dev->adev, &cu_info);
+ 
+-	if (cu_mask_count > cu_info.cu_active_number)
+-		cu_mask_count = cu_info.cu_active_number;
++	cu_active_per_node = cu_info.cu_active_number / mm->dev->kfd->num_nodes;
++	if (cu_mask_count > cu_active_per_node)
++		cu_mask_count = cu_active_per_node;
+ 
+ 	/* Exceeding these bounds corrupts the stack and indicates a coding error.
+ 	 * Returning with no CU's enabled will hang the queue, which should be
+@@ -141,7 +145,8 @@ void mqd_symmetrically_map_cu_mask(struct mqd_manager *mm,
+ 	for (se = 0; se < cu_info.num_shader_engines; se++)
+ 		for (sh = 0; sh < cu_info.num_shader_arrays_per_engine; sh++)
+ 			cu_per_sh[se][sh] = hweight32(
+-				cu_info.cu_bitmap[se % 4][sh + (se / 4) * cu_bitmap_sh_mul]);
++				cu_info.cu_bitmap[xcc_inst][se % 4][sh + (se / 4) *
++				cu_bitmap_sh_mul]);
+ 
+ 	/* Symmetrically map cu_mask to all SEs & SHs:
+ 	 * se_mask programs up to 2 SH in the upper and lower 16 bits.
+@@ -164,20 +169,33 @@ void mqd_symmetrically_map_cu_mask(struct mqd_manager *mm,
+ 	 * cu_mask[0] bit8 -> se_mask[0] bit1 (SE0,SH0,CU1)
+ 	 * ...
+ 	 *
++	 * For GFX 9.4.3, the following code only looks at a
++	 * subset of the cu_mask corresponding to the inst parameter.
++	 * If we have n XCCs under one GPU node
++	 * cu_mask[0] bit0 -> XCC0 se_mask[0] bit0 (XCC0,SE0,SH0,CU0)
++	 * cu_mask[0] bit1 -> XCC1 se_mask[0] bit0 (XCC1,SE0,SH0,CU0)
++	 * ..
++	 * cu_mask[0] bitn -> XCCn se_mask[0] bit0 (XCCn,SE0,SH0,CU0)
++	 * cu_mask[0] bit n+1 -> XCC0 se_mask[1] bit0 (XCC0,SE1,SH0,CU0)
++	 *
++	 * For example, if there are 6 XCCs under 1 KFD node, this code
++	 * running for each inst will look at the bits as:
++	 * inst, inst + 6, inst + 12...
++	 *
+ 	 * First ensure all CUs are disabled, then enable user specified CUs.
+ 	 */
+ 	for (i = 0; i < cu_info.num_shader_engines; i++)
+ 		se_mask[i] = 0;
+ 
+-	i = 0;
+-	for (cu = 0; cu < 16; cu += inc) {
++	i = inst;
++	for (cu = 0; cu < 16; cu += cu_inc) {
+ 		for (sh = 0; sh < cu_info.num_shader_arrays_per_engine; sh++) {
+ 			for (se = 0; se < cu_info.num_shader_engines; se++) {
+ 				if (cu_per_sh[se][sh] > cu) {
+ 					if (cu_mask[i / 32] & (en_mask << (i % 32)))
+ 						se_mask[se] |= en_mask << (cu + sh * 16);
+ 					i += inc;
+-					if (i == cu_mask_count)
++					if (i >= cu_mask_count)
+ 						return;
+ 				}
+ 			}
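
The comment in the hunk above says that with n XCCs each instance samples
every n-th bit of the user cu_mask, e.g. inst, inst + 6, inst + 12 for six
XCCs. This small program prints exactly which bit indices each instance would
visit (assuming cu_inc = 1, i.e. no WGP pairing):

#include <stdio.h>

int main(void)
{
	const int num_xcc = 6, cu_mask_count = 24;
	const int cu_inc = 1;            /* 2 when WGPs are paired (GFX10+) */
	const int inc = cu_inc * num_xcc;

	for (int inst = 0; inst < num_xcc; inst++) {
		printf("inst %d:", inst);
		for (int i = inst; i < cu_mask_count; i += inc)
			printf(" %d", i);
		printf("\n");            /* inst 0: 0 6 12 18, inst 1: 1 7 13 19, ... */
	}
	return 0;
}
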
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h
+index 23158db7da035..57bf5e513f4d1 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h
+@@ -138,7 +138,7 @@ void free_mqd_hiq_sdma(struct mqd_manager *mm, void *mqd,
+ 
+ void mqd_symmetrically_map_cu_mask(struct mqd_manager *mm,
+ 		const uint32_t *cu_mask, uint32_t cu_mask_count,
+-		uint32_t *se_mask);
++		uint32_t *se_mask, uint32_t inst);
+ 
+ int kfd_hiq_load_mqd_kiq(struct mqd_manager *mm, void *mqd,
+ 		uint32_t pipe_id, uint32_t queue_id,
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c
+index 65c9f01a1f86c..faa01ee0d1655 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c
+@@ -52,7 +52,7 @@ static void update_cu_mask(struct mqd_manager *mm, void *mqd,
+ 		return;
+ 
+ 	mqd_symmetrically_map_cu_mask(mm,
+-		minfo->cu_mask.ptr, minfo->cu_mask.count, se_mask);
++		minfo->cu_mask.ptr, minfo->cu_mask.count, se_mask, 0);
+ 
+ 	m = get_mqd(mqd);
+ 	m->compute_static_thread_mgmt_se0 = se_mask[0];
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
+index 94c0fc2e57b7f..0fcb176601295 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
+@@ -52,7 +52,7 @@ static void update_cu_mask(struct mqd_manager *mm, void *mqd,
+ 		return;
+ 
+ 	mqd_symmetrically_map_cu_mask(mm,
+-		minfo->cu_mask.ptr, minfo->cu_mask.count, se_mask);
++		minfo->cu_mask.ptr, minfo->cu_mask.count, se_mask, 0);
+ 
+ 	m = get_mqd(mqd);
+ 	m->compute_static_thread_mgmt_se0 = se_mask[0];
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
+index 23b30783dce31..352757f2d3202 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
+@@ -71,7 +71,7 @@ static void update_cu_mask(struct mqd_manager *mm, void *mqd,
+ 	}
+ 
+ 	mqd_symmetrically_map_cu_mask(mm,
+-		minfo->cu_mask.ptr, minfo->cu_mask.count, se_mask);
++		minfo->cu_mask.ptr, minfo->cu_mask.count, se_mask, 0);
+ 
+ 	m->compute_static_thread_mgmt_se0 = se_mask[0];
+ 	m->compute_static_thread_mgmt_se1 = se_mask[1];
+@@ -321,6 +321,43 @@ static int get_wave_state(struct mqd_manager *mm, void *mqd,
+ 	return 0;
+ }
+ 
++static void checkpoint_mqd(struct mqd_manager *mm, void *mqd, void *mqd_dst, void *ctl_stack_dst)
++{
++	struct v11_compute_mqd *m;
++
++	m = get_mqd(mqd);
++
++	memcpy(mqd_dst, m, sizeof(struct v11_compute_mqd));
++}
++
++static void restore_mqd(struct mqd_manager *mm, void **mqd,
++			struct kfd_mem_obj *mqd_mem_obj, uint64_t *gart_addr,
++			struct queue_properties *qp,
++			const void *mqd_src,
++			const void *ctl_stack_src, const u32 ctl_stack_size)
++{
++	uint64_t addr;
++	struct v11_compute_mqd *m;
++
++	m = (struct v11_compute_mqd *) mqd_mem_obj->cpu_ptr;
++	addr = mqd_mem_obj->gpu_addr;
++
++	memcpy(m, mqd_src, sizeof(*m));
++
++	*mqd = m;
++	if (gart_addr)
++		*gart_addr = addr;
++
++	m->cp_hqd_pq_doorbell_control =
++		qp->doorbell_off <<
++			CP_HQD_PQ_DOORBELL_CONTROL__DOORBELL_OFFSET__SHIFT;
++	pr_debug("cp_hqd_pq_doorbell_control 0x%x\n",
++			m->cp_hqd_pq_doorbell_control);
++
++	qp->is_active = 0;
++}
++
++
+ static void init_mqd_hiq(struct mqd_manager *mm, void **mqd,
+ 			struct kfd_mem_obj *mqd_mem_obj, uint64_t *gart_addr,
+ 			struct queue_properties *q)
+@@ -438,6 +475,8 @@ struct mqd_manager *mqd_manager_init_v11(enum KFD_MQD_TYPE type,
+ 		mqd->mqd_size = sizeof(struct v11_compute_mqd);
+ 		mqd->get_wave_state = get_wave_state;
+ 		mqd->mqd_stride = kfd_mqd_stride;
++		mqd->checkpoint_mqd = checkpoint_mqd;
++		mqd->restore_mqd = restore_mqd;
+ #if defined(CONFIG_DEBUG_FS)
+ 		mqd->debugfs_show_mqd = debugfs_show_mqd;
+ #endif
+@@ -482,6 +521,8 @@ struct mqd_manager *mqd_manager_init_v11(enum KFD_MQD_TYPE type,
+ 		mqd->update_mqd = update_mqd_sdma;
+ 		mqd->destroy_mqd = kfd_destroy_mqd_sdma;
+ 		mqd->is_occupied = kfd_is_occupied_sdma;
++		mqd->checkpoint_mqd = checkpoint_mqd;
++		mqd->restore_mqd = restore_mqd;
+ 		mqd->mqd_size = sizeof(struct v11_sdma_mqd);
+ 		mqd->mqd_stride = kfd_mqd_stride;
+ #if defined(CONFIG_DEBUG_FS)
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+index 601bb9f68048c..a76ae27c8a919 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+@@ -60,7 +60,7 @@ static inline struct v9_sdma_mqd *get_sdma_mqd(void *mqd)
+ }
+ 
+ static void update_cu_mask(struct mqd_manager *mm, void *mqd,
+-			struct mqd_update_info *minfo)
++			struct mqd_update_info *minfo, uint32_t inst)
+ {
+ 	struct v9_mqd *m;
+ 	uint32_t se_mask[KFD_MAX_NUM_SE] = {0};
+@@ -69,27 +69,36 @@ static void update_cu_mask(struct mqd_manager *mm, void *mqd,
+ 		return;
+ 
+ 	mqd_symmetrically_map_cu_mask(mm,
+-		minfo->cu_mask.ptr, minfo->cu_mask.count, se_mask);
++		minfo->cu_mask.ptr, minfo->cu_mask.count, se_mask, inst);
+ 
+ 	m = get_mqd(mqd);
++
+ 	m->compute_static_thread_mgmt_se0 = se_mask[0];
+ 	m->compute_static_thread_mgmt_se1 = se_mask[1];
+ 	m->compute_static_thread_mgmt_se2 = se_mask[2];
+ 	m->compute_static_thread_mgmt_se3 = se_mask[3];
+-	m->compute_static_thread_mgmt_se4 = se_mask[4];
+-	m->compute_static_thread_mgmt_se5 = se_mask[5];
+-	m->compute_static_thread_mgmt_se6 = se_mask[6];
+-	m->compute_static_thread_mgmt_se7 = se_mask[7];
+-
+-	pr_debug("update cu mask to %#x %#x %#x %#x %#x %#x %#x %#x\n",
+-		m->compute_static_thread_mgmt_se0,
+-		m->compute_static_thread_mgmt_se1,
+-		m->compute_static_thread_mgmt_se2,
+-		m->compute_static_thread_mgmt_se3,
+-		m->compute_static_thread_mgmt_se4,
+-		m->compute_static_thread_mgmt_se5,
+-		m->compute_static_thread_mgmt_se6,
+-		m->compute_static_thread_mgmt_se7);
++	if (KFD_GC_VERSION(mm->dev) != IP_VERSION(9, 4, 3)) {
++		m->compute_static_thread_mgmt_se4 = se_mask[4];
++		m->compute_static_thread_mgmt_se5 = se_mask[5];
++		m->compute_static_thread_mgmt_se6 = se_mask[6];
++		m->compute_static_thread_mgmt_se7 = se_mask[7];
++
++		pr_debug("update cu mask to %#x %#x %#x %#x %#x %#x %#x %#x\n",
++			m->compute_static_thread_mgmt_se0,
++			m->compute_static_thread_mgmt_se1,
++			m->compute_static_thread_mgmt_se2,
++			m->compute_static_thread_mgmt_se3,
++			m->compute_static_thread_mgmt_se4,
++			m->compute_static_thread_mgmt_se5,
++			m->compute_static_thread_mgmt_se6,
++			m->compute_static_thread_mgmt_se7);
++	} else {
++		pr_debug("inst: %u, update cu mask to %#x %#x %#x %#x\n",
++			inst, m->compute_static_thread_mgmt_se0,
++			m->compute_static_thread_mgmt_se1,
++			m->compute_static_thread_mgmt_se2,
++			m->compute_static_thread_mgmt_se3);
++	}
+ }
+ 
+ static void set_priority(struct v9_mqd *m, struct queue_properties *q)
+@@ -290,7 +299,8 @@ static void update_mqd(struct mqd_manager *mm, void *mqd,
+ 	if (mm->dev->kfd->cwsr_enabled && q->ctx_save_restore_area_address)
+ 		m->cp_hqd_ctx_save_control = 0;
+ 
+-	update_cu_mask(mm, mqd, minfo);
++	if (KFD_GC_VERSION(mm->dev) != IP_VERSION(9, 4, 3))
++		update_cu_mask(mm, mqd, minfo, 0);
+ 	set_priority(m, q);
+ 
+ 	q->is_active = QUEUE_IS_ACTIVE(*q);
+@@ -654,6 +664,8 @@ static void update_mqd_v9_4_3(struct mqd_manager *mm, void *mqd,
+ 		m = get_mqd(mqd + size * xcc);
+ 		update_mqd(mm, m, q, minfo);
+ 
++		update_cu_mask(mm, mqd, minfo, xcc);
++
+ 		if (q->format == KFD_QUEUE_FORMAT_AQL) {
+ 			switch (xcc) {
+ 			case 0:
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_vi.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_vi.c
+index d1e962da51dd3..2551a7529b5e0 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_vi.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_vi.c
+@@ -55,7 +55,7 @@ static void update_cu_mask(struct mqd_manager *mm, void *mqd,
+ 		return;
+ 
+ 	mqd_symmetrically_map_cu_mask(mm,
+-		minfo->cu_mask.ptr, minfo->cu_mask.count, se_mask);
++		minfo->cu_mask.ptr, minfo->cu_mask.count, se_mask, 0);
+ 
+ 	m = get_mqd(mqd);
+ 	m->compute_static_thread_mgmt_se0 = se_mask[0];
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index 4a17bb7c7b27d..5582191022106 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -450,8 +450,7 @@ static ssize_t node_show(struct kobject *kobj, struct attribute *attr,
+ 	sysfs_show_32bit_prop(buffer, offs, "cpu_cores_count",
+ 			      dev->node_props.cpu_cores_count);
+ 	sysfs_show_32bit_prop(buffer, offs, "simd_count",
+-			      dev->gpu ? (dev->node_props.simd_count *
+-					  NUM_XCC(dev->gpu->xcc_mask)) : 0);
++			      dev->gpu ? dev->node_props.simd_count : 0);
+ 	sysfs_show_32bit_prop(buffer, offs, "mem_banks_count",
+ 			      dev->node_props.mem_banks_count);
+ 	sysfs_show_32bit_prop(buffer, offs, "caches_count",
+@@ -1651,14 +1650,17 @@ static int fill_in_l1_pcache(struct kfd_cache_properties **props_ext,
+ static int fill_in_l2_l3_pcache(struct kfd_cache_properties **props_ext,
+ 				struct kfd_gpu_cache_info *pcache_info,
+ 				struct kfd_cu_info *cu_info,
+-				int cache_type, unsigned int cu_processor_id)
++				int cache_type, unsigned int cu_processor_id,
++				struct kfd_node *knode)
+ {
+ 	unsigned int cu_sibling_map_mask;
+ 	int first_active_cu;
+-	int i, j, k;
++	int i, j, k, xcc, start, end;
+ 	struct kfd_cache_properties *pcache = NULL;
+ 
+-	cu_sibling_map_mask = cu_info->cu_bitmap[0][0];
++	start = ffs(knode->xcc_mask) - 1;
++	end = start + NUM_XCC(knode->xcc_mask);
++	cu_sibling_map_mask = cu_info->cu_bitmap[start][0][0];
+ 	cu_sibling_map_mask &=
+ 		((1 << pcache_info[cache_type].num_cu_shared) - 1);
+ 	first_active_cu = ffs(cu_sibling_map_mask);
+@@ -1693,16 +1695,18 @@ static int fill_in_l2_l3_pcache(struct kfd_cache_properties **props_ext,
+ 		cu_sibling_map_mask = cu_sibling_map_mask >> (first_active_cu - 1);
+ 		k = 0;
+ 
+-		for (i = 0; i < cu_info->num_shader_engines; i++) {
+-			for (j = 0; j < cu_info->num_shader_arrays_per_engine; j++) {
+-				pcache->sibling_map[k] = (uint8_t)(cu_sibling_map_mask & 0xFF);
+-				pcache->sibling_map[k+1] = (uint8_t)((cu_sibling_map_mask >> 8) & 0xFF);
+-				pcache->sibling_map[k+2] = (uint8_t)((cu_sibling_map_mask >> 16) & 0xFF);
+-				pcache->sibling_map[k+3] = (uint8_t)((cu_sibling_map_mask >> 24) & 0xFF);
+-				k += 4;
+-
+-				cu_sibling_map_mask = cu_info->cu_bitmap[i % 4][j + i / 4];
+-				cu_sibling_map_mask &= ((1 << pcache_info[cache_type].num_cu_shared) - 1);
++		for (xcc = start; xcc < end; xcc++) {
++			for (i = 0; i < cu_info->num_shader_engines; i++) {
++				for (j = 0; j < cu_info->num_shader_arrays_per_engine; j++) {
++					pcache->sibling_map[k] = (uint8_t)(cu_sibling_map_mask & 0xFF);
++					pcache->sibling_map[k+1] = (uint8_t)((cu_sibling_map_mask >> 8) & 0xFF);
++					pcache->sibling_map[k+2] = (uint8_t)((cu_sibling_map_mask >> 16) & 0xFF);
++					pcache->sibling_map[k+3] = (uint8_t)((cu_sibling_map_mask >> 24) & 0xFF);
++					k += 4;
++
++					cu_sibling_map_mask = cu_info->cu_bitmap[xcc][i % 4][j + i / 4];
++					cu_sibling_map_mask &= ((1 << pcache_info[cache_type].num_cu_shared) - 1);
++				}
+ 			}
+ 		}
+ 		pcache->sibling_map_size = k;
+@@ -1720,7 +1724,7 @@ static int fill_in_l2_l3_pcache(struct kfd_cache_properties **props_ext,
+ static void kfd_fill_cache_non_crat_info(struct kfd_topology_device *dev, struct kfd_node *kdev)
+ {
+ 	struct kfd_gpu_cache_info *pcache_info = NULL;
+-	int i, j, k;
++	int i, j, k, xcc, start, end;
+ 	int ct = 0;
+ 	unsigned int cu_processor_id;
+ 	int ret;
+@@ -1754,37 +1758,42 @@ static void kfd_fill_cache_non_crat_info(struct kfd_topology_device *dev, struct
+ 	 *			then it will consider only one CU from
+ 	 *			the shared unit
+ 	 */
++	start = ffs(kdev->xcc_mask) - 1;
++	end = start + NUM_XCC(kdev->xcc_mask);
++
+ 	for (ct = 0; ct < num_of_cache_types; ct++) {
+ 		cu_processor_id = gpu_processor_id;
+ 		if (pcache_info[ct].cache_level == 1) {
+-			for (i = 0; i < pcu_info->num_shader_engines; i++) {
+-				for (j = 0; j < pcu_info->num_shader_arrays_per_engine; j++) {
+-					for (k = 0; k < pcu_info->num_cu_per_sh; k += pcache_info[ct].num_cu_shared) {
++			for (xcc = start; xcc < end; xcc++) {
++				for (i = 0; i < pcu_info->num_shader_engines; i++) {
++					for (j = 0; j < pcu_info->num_shader_arrays_per_engine; j++) {
++						for (k = 0; k < pcu_info->num_cu_per_sh; k += pcache_info[ct].num_cu_shared) {
+ 
+-						ret = fill_in_l1_pcache(&props_ext, pcache_info, pcu_info,
+-										pcu_info->cu_bitmap[i % 4][j + i / 4], ct,
++							ret = fill_in_l1_pcache(&props_ext, pcache_info, pcu_info,
++										pcu_info->cu_bitmap[xcc][i % 4][j + i / 4], ct,
+ 										cu_processor_id, k);
+ 
+-						if (ret < 0)
+-							break;
++							if (ret < 0)
++								break;
+ 
+-						if (!ret) {
+-							num_of_entries++;
+-							list_add_tail(&props_ext->list, &dev->cache_props);
+-						}
++							if (!ret) {
++								num_of_entries++;
++								list_add_tail(&props_ext->list, &dev->cache_props);
++							}
+ 
+-						/* Move to next CU block */
+-						num_cu_shared = ((k + pcache_info[ct].num_cu_shared) <=
+-							pcu_info->num_cu_per_sh) ?
+-							pcache_info[ct].num_cu_shared :
+-							(pcu_info->num_cu_per_sh - k);
+-						cu_processor_id += num_cu_shared;
++							/* Move to next CU block */
++							num_cu_shared = ((k + pcache_info[ct].num_cu_shared) <=
++								pcu_info->num_cu_per_sh) ?
++								pcache_info[ct].num_cu_shared :
++								(pcu_info->num_cu_per_sh - k);
++							cu_processor_id += num_cu_shared;
++						}
+ 					}
+ 				}
+ 			}
+ 		} else {
+ 			ret = fill_in_l2_l3_pcache(&props_ext, pcache_info,
+-								pcu_info, ct, cu_processor_id);
++					pcu_info, ct, cu_processor_id, kdev);
+ 
+ 			if (ret < 0)
+ 				break;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.h b/drivers/gpu/drm/amd/amdkfd/kfd_topology.h
+index cba2cd5ed9d19..46927263e014d 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.h
+@@ -86,7 +86,7 @@ struct kfd_mem_properties {
+ 	struct attribute	attr;
+ };
+ 
+-#define CACHE_SIBLINGMAP_SIZE 64
++#define CACHE_SIBLINGMAP_SIZE 128
+ 
+ struct kfd_cache_properties {
+ 	struct list_head	list;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index e0d556cf919f7..c9959bd8147db 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -6062,8 +6062,6 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 
+ 	if (recalculate_timing)
+ 		drm_mode_set_crtcinfo(&saved_mode, 0);
+-	else if (!old_stream)
+-		drm_mode_set_crtcinfo(&mode, 0);
+ 
+ 	/*
+ 	 * If scaling is enabled and refresh rate didn't change
+@@ -6625,6 +6623,8 @@ enum drm_mode_status amdgpu_dm_connector_mode_valid(struct drm_connector *connec
+ 		goto fail;
+ 	}
+ 
++	drm_mode_set_crtcinfo(mode, 0);
++
+ 	stream = create_validate_stream_for_sink(aconnector, mode,
+ 						 to_dm_connector_state(connector->state),
+ 						 NULL);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index 6966420dfbac3..15fa19ee748cf 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -964,7 +964,9 @@ void dce110_edp_backlight_control(
+ 		return;
+ 	}
+ 
+-	if (link->panel_cntl) {
++	if (link->panel_cntl && !(link->dpcd_sink_ext_caps.bits.oled ||
++		link->dpcd_sink_ext_caps.bits.hdr_aux_backlight_control == 1 ||
++		link->dpcd_sink_ext_caps.bits.sdr_aux_backlight_control == 1)) {
+ 		bool is_backlight_on = link->panel_cntl->funcs->is_panel_backlight_on(link->panel_cntl);
+ 
+ 		if ((enable && is_backlight_on) || (!enable && !is_backlight_on)) {
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index 7c344132a0072..62a077adcdbfa 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -1054,9 +1054,9 @@ void dcn20_blank_pixel_data(
+ 	enum controller_dp_color_space test_pattern_color_space = CONTROLLER_DP_COLOR_SPACE_UDEFINED;
+ 	struct pipe_ctx *odm_pipe;
+ 	int odm_cnt = 1;
+-
+-	int width = stream->timing.h_addressable + stream->timing.h_border_left + stream->timing.h_border_right;
+-	int height = stream->timing.v_addressable + stream->timing.v_border_bottom + stream->timing.v_border_top;
++	int h_active = stream->timing.h_addressable + stream->timing.h_border_left + stream->timing.h_border_right;
++	int v_active = stream->timing.v_addressable + stream->timing.v_border_bottom + stream->timing.v_border_top;
++	int odm_slice_width, last_odm_slice_width, offset = 0;
+ 
+ 	if (stream->link->test_pattern_enabled)
+ 		return;
+@@ -1066,8 +1066,8 @@ void dcn20_blank_pixel_data(
+ 
+ 	for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe)
+ 		odm_cnt++;
+-
+-	width = width / odm_cnt;
++	odm_slice_width = h_active / odm_cnt;
++	last_odm_slice_width = h_active - odm_slice_width * (odm_cnt - 1);
+ 
+ 	if (blank) {
+ 		dc->hwss.set_abm_immediate_disable(pipe_ctx);
+@@ -1080,29 +1080,32 @@ void dcn20_blank_pixel_data(
+ 		test_pattern = CONTROLLER_DP_TEST_PATTERN_VIDEOMODE;
+ 	}
+ 
+-	dc->hwss.set_disp_pattern_generator(dc,
+-			pipe_ctx,
+-			test_pattern,
+-			test_pattern_color_space,
+-			stream->timing.display_color_depth,
+-			&black_color,
+-			width,
+-			height,
+-			0);
++	odm_pipe = pipe_ctx;
+ 
+-	for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
++	while (odm_pipe->next_odm_pipe) {
+ 		dc->hwss.set_disp_pattern_generator(dc,
+ 				odm_pipe,
+-				dc->debug.visual_confirm != VISUAL_CONFIRM_DISABLE && blank ?
+-						CONTROLLER_DP_TEST_PATTERN_COLORRAMP : test_pattern,
++				test_pattern,
+ 				test_pattern_color_space,
+ 				stream->timing.display_color_depth,
+ 				&black_color,
+-				width,
+-				height,
+-				0);
++				odm_slice_width,
++				v_active,
++				offset);
++		offset += odm_slice_width;
++		odm_pipe = odm_pipe->next_odm_pipe;
+ 	}
+ 
++	dc->hwss.set_disp_pattern_generator(dc,
++			odm_pipe,
++			test_pattern,
++			test_pattern_color_space,
++			stream->timing.display_color_depth,
++			&black_color,
++			last_odm_slice_width,
++			v_active,
++			offset);
++
+ 	if (!blank && dc->debug.enable_single_display_2to1_odm_policy) {
+ 		/* when exiting dynamic ODM need to reinit DPG state for unused pipes */
+ 		struct pipe_ctx *old_odm_pipe = dc->current_state->res_ctx.pipe_ctx[pipe_ctx->pipe_idx].next_odm_pipe;
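
The dcn20 hunk above stops giving every ODM pipe the same width / odm_cnt and
instead gives each slice h_active / odm_cnt columns at an accumulated offset,
with the last slice absorbing the remainder (the link_dp_cts.c hunk further
down makes the same change). Checking that the widths tile h_active exactly:

#include <stdio.h>

int main(void)
{
	int h_active = 4096, odm_cnt = 3;
	int slice = h_active / odm_cnt;               /* 1365 */
	int last  = h_active - slice * (odm_cnt - 1); /* 1366 */
	int offset = 0;

	for (int i = 0; i < odm_cnt - 1; i++) {
		printf("pipe %d: width %d at offset %d\n", i, slice, offset);
		offset += slice;
	}
	printf("pipe %d: width %d at offset %d\n", odm_cnt - 1, last, offset);
	/* widths sum to h_active: 1365 + 1365 + 1366 = 4096 */
	return 0;
}
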
+diff --git a/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c b/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c
+index db9f1baa27e5e..9fd68a11fad23 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c
++++ b/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c
+@@ -428,15 +428,24 @@ static void set_crtc_test_pattern(struct dc_link *link,
+ 		stream->timing.display_color_depth;
+ 	struct bit_depth_reduction_params params;
+ 	struct output_pixel_processor *opp = pipe_ctx->stream_res.opp;
+-	int width = pipe_ctx->stream->timing.h_addressable +
++	struct pipe_ctx *odm_pipe;
++	int odm_cnt = 1;
++	int h_active = pipe_ctx->stream->timing.h_addressable +
+ 		pipe_ctx->stream->timing.h_border_left +
+ 		pipe_ctx->stream->timing.h_border_right;
+-	int height = pipe_ctx->stream->timing.v_addressable +
++	int v_active = pipe_ctx->stream->timing.v_addressable +
+ 		pipe_ctx->stream->timing.v_border_bottom +
+ 		pipe_ctx->stream->timing.v_border_top;
++	int odm_slice_width, last_odm_slice_width, offset = 0;
+ 
+ 	memset(&params, 0, sizeof(params));
+ 
++	for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe)
++		odm_cnt++;
++
++	odm_slice_width = h_active / odm_cnt;
++	last_odm_slice_width = h_active - odm_slice_width * (odm_cnt - 1);
++
+ 	switch (test_pattern) {
+ 	case DP_TEST_PATTERN_COLOR_SQUARES:
+ 		controller_test_pattern =
+@@ -473,16 +482,13 @@ static void set_crtc_test_pattern(struct dc_link *link,
+ 	{
+ 		/* disable bit depth reduction */
+ 		pipe_ctx->stream->bit_depth_params = params;
+-		opp->funcs->opp_program_bit_depth_reduction(opp, &params);
+-		if (pipe_ctx->stream_res.tg->funcs->set_test_pattern)
++		if (pipe_ctx->stream_res.tg->funcs->set_test_pattern) {
++			opp->funcs->opp_program_bit_depth_reduction(opp, &params);
+ 			pipe_ctx->stream_res.tg->funcs->set_test_pattern(pipe_ctx->stream_res.tg,
+ 				controller_test_pattern, color_depth);
+-		else if (link->dc->hwss.set_disp_pattern_generator) {
+-			struct pipe_ctx *odm_pipe;
++		} else if (link->dc->hwss.set_disp_pattern_generator) {
+ 			enum controller_dp_color_space controller_color_space;
+-			int opp_cnt = 1;
+-			int offset = 0;
+-			int dpg_width = width;
++			struct output_pixel_processor *odm_opp;
+ 
+ 			switch (test_pattern_color_space) {
+ 			case DP_TEST_PATTERN_COLOR_SPACE_RGB:
+@@ -502,24 +508,9 @@ static void set_crtc_test_pattern(struct dc_link *link,
+ 				break;
+ 			}
+ 
+-			for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe)
+-				opp_cnt++;
+-			dpg_width = width / opp_cnt;
+-			offset = dpg_width;
+-
+-			link->dc->hwss.set_disp_pattern_generator(link->dc,
+-					pipe_ctx,
+-					controller_test_pattern,
+-					controller_color_space,
+-					color_depth,
+-					NULL,
+-					dpg_width,
+-					height,
+-					0);
+-
+-			for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
+-				struct output_pixel_processor *odm_opp = odm_pipe->stream_res.opp;
+-
++			odm_pipe = pipe_ctx;
++			while (odm_pipe->next_odm_pipe) {
++				odm_opp = odm_pipe->stream_res.opp;
+ 				odm_opp->funcs->opp_program_bit_depth_reduction(odm_opp, &params);
+ 				link->dc->hwss.set_disp_pattern_generator(link->dc,
+ 						odm_pipe,
+@@ -527,11 +518,23 @@ static void set_crtc_test_pattern(struct dc_link *link,
+ 						controller_color_space,
+ 						color_depth,
+ 						NULL,
+-						dpg_width,
+-						height,
++						odm_slice_width,
++						v_active,
+ 						offset);
+-				offset += offset;
++				offset += odm_slice_width;
++				odm_pipe = odm_pipe->next_odm_pipe;
+ 			}
++			odm_opp = odm_pipe->stream_res.opp;
++			odm_opp->funcs->opp_program_bit_depth_reduction(odm_opp, &params);
++			link->dc->hwss.set_disp_pattern_generator(link->dc,
++					odm_pipe,
++					controller_test_pattern,
++					controller_color_space,
++					color_depth,
++					NULL,
++					last_odm_slice_width,
++					v_active,
++					offset);
+ 		}
+ 	}
+ 	break;
+@@ -540,23 +543,17 @@ static void set_crtc_test_pattern(struct dc_link *link,
+ 		/* restore bitdepth reduction */
+ 		resource_build_bit_depth_reduction_params(pipe_ctx->stream, &params);
+ 		pipe_ctx->stream->bit_depth_params = params;
+-		opp->funcs->opp_program_bit_depth_reduction(opp, &params);
+-		if (pipe_ctx->stream_res.tg->funcs->set_test_pattern)
++		if (pipe_ctx->stream_res.tg->funcs->set_test_pattern) {
++			opp->funcs->opp_program_bit_depth_reduction(opp, &params);
+ 			pipe_ctx->stream_res.tg->funcs->set_test_pattern(pipe_ctx->stream_res.tg,
+-				CONTROLLER_DP_TEST_PATTERN_VIDEOMODE,
+-				color_depth);
+-		else if (link->dc->hwss.set_disp_pattern_generator) {
+-			struct pipe_ctx *odm_pipe;
+-			int opp_cnt = 1;
+-			int dpg_width;
+-
+-			for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe)
+-				opp_cnt++;
+-
+-			dpg_width = width / opp_cnt;
+-			for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) {
+-				struct output_pixel_processor *odm_opp = odm_pipe->stream_res.opp;
++					CONTROLLER_DP_TEST_PATTERN_VIDEOMODE,
++					color_depth);
++		} else if (link->dc->hwss.set_disp_pattern_generator) {
++			struct output_pixel_processor *odm_opp;
+ 
++			odm_pipe = pipe_ctx;
++			while (odm_pipe->next_odm_pipe) {
++				odm_opp = odm_pipe->stream_res.opp;
+ 				odm_opp->funcs->opp_program_bit_depth_reduction(odm_opp, &params);
+ 				link->dc->hwss.set_disp_pattern_generator(link->dc,
+ 						odm_pipe,
+@@ -564,19 +561,23 @@ static void set_crtc_test_pattern(struct dc_link *link,
+ 						CONTROLLER_DP_COLOR_SPACE_UDEFINED,
+ 						color_depth,
+ 						NULL,
+-						dpg_width,
+-						height,
+-						0);
++						odm_slice_width,
++						v_active,
++						offset);
++				offset += odm_slice_width;
++				odm_pipe = odm_pipe->next_odm_pipe;
+ 			}
++			odm_opp = odm_pipe->stream_res.opp;
++			odm_opp->funcs->opp_program_bit_depth_reduction(odm_opp, &params);
+ 			link->dc->hwss.set_disp_pattern_generator(link->dc,
+-					pipe_ctx,
++					odm_pipe,
+ 					CONTROLLER_DP_TEST_PATTERN_VIDEOMODE,
+ 					CONTROLLER_DP_COLOR_SPACE_UDEFINED,
+ 					color_depth,
+ 					NULL,
+-					dpg_width,
+-					height,
+-					0);
++					last_odm_slice_width,
++					v_active,
++					offset);
+ 		}
+ 	}
+ 	break;
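[Editor's note] The slice math introduced at the top of this hunk gives every ODM pipe an equal share of h_active and lets the last pipe absorb the remainder, so the slices always cover the full width. A quick illustrative sketch (not part of the patch; the h_active value is made up):

/* Editor's sketch of the ODM slice-width math used above: every pipe
 * gets h_active / odm_cnt pixels, and the last pipe takes whatever is
 * left so the slices always sum to h_active.
 */
#include <stdio.h>

int main(void)
{
	int h_active = 4000;            /* hypothetical timing */

	for (int odm_cnt = 1; odm_cnt <= 4; odm_cnt++) {
		int slice = h_active / odm_cnt;
		int last = h_active - slice * (odm_cnt - 1);

		printf("odm_cnt=%d: slice=%d last=%d sum=%d\n",
		       odm_cnt, slice, last,
		       slice * (odm_cnt - 1) + last);
	}
	return 0;
}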
+diff --git a/drivers/gpu/drm/amd/include/kgd_kfd_interface.h b/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
+index d0df3381539f0..74cc545085a02 100644
+--- a/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
++++ b/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
+@@ -31,12 +31,12 @@
+ #include <linux/types.h>
+ #include <linux/bitmap.h>
+ #include <linux/dma-fence.h>
++#include "amdgpu_irq.h"
++#include "amdgpu_gfx.h"
+ 
+ struct pci_dev;
+ struct amdgpu_device;
+ 
+-#define KGD_MAX_QUEUES 128
+-
+ struct kfd_dev;
+ struct kgd_mem;
+ 
+@@ -68,7 +68,7 @@ struct kfd_cu_info {
+ 	uint32_t wave_front_size;
+ 	uint32_t max_scratch_slots_per_cu;
+ 	uint32_t lds_size;
+-	uint32_t cu_bitmap[4][4];
++	uint32_t cu_bitmap[AMDGPU_MAX_GC_INSTANCES][4][4];
+ };
+ 
+ /* For getting GPU local memory information from KGD */
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+index 8f1633c3fb935..73a4a4eb29e08 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+@@ -100,6 +100,7 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
+ 	st->nents = 0;
+ 	for (i = 0; i < page_count; i++) {
+ 		struct folio *folio;
++		unsigned long nr_pages;
+ 		const unsigned int shrink[] = {
+ 			I915_SHRINK_BOUND | I915_SHRINK_UNBOUND,
+ 			0,
+@@ -150,6 +151,8 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
+ 			}
+ 		} while (1);
+ 
++		nr_pages = min_t(unsigned long,
++				folio_nr_pages(folio), page_count - i);
+ 		if (!i ||
+ 		    sg->length >= max_segment ||
+ 		    folio_pfn(folio) != next_pfn) {
+@@ -157,13 +160,13 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
+ 				sg = sg_next(sg);
+ 
+ 			st->nents++;
+-			sg_set_folio(sg, folio, folio_size(folio), 0);
++			sg_set_folio(sg, folio, nr_pages * PAGE_SIZE, 0);
+ 		} else {
+ 			/* XXX: could overflow? */
+-			sg->length += folio_size(folio);
++			sg->length += nr_pages * PAGE_SIZE;
+ 		}
+-		next_pfn = folio_pfn(folio) + folio_nr_pages(folio);
+-		i += folio_nr_pages(folio) - 1;
++		next_pfn = folio_pfn(folio) + nr_pages;
++		i += nr_pages - 1;
+ 
+ 		/* Check that the i965g/gm workaround works. */
+ 		GEM_BUG_ON(gfp & __GFP_DMA32 && next_pfn >= 0x00100000UL);
+diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+index 0aff5bb13c538..0e81ea6191c64 100644
+--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
++++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+@@ -558,7 +558,6 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id,
+ 		DRIVER_CAPS(i915)->has_logical_contexts = true;
+ 
+ 	ewma__engine_latency_init(&engine->latency);
+-	seqcount_init(&engine->stats.execlists.lock);
+ 
+ 	ATOMIC_INIT_NOTIFIER_HEAD(&engine->context_status_notifier);
+ 
+diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+index 2ebd937f3b4cb..082c973370824 100644
+--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
++++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+@@ -3550,6 +3550,8 @@ int intel_execlists_submission_setup(struct intel_engine_cs *engine)
+ 	logical_ring_default_vfuncs(engine);
+ 	logical_ring_default_irqs(engine);
+ 
++	seqcount_init(&engine->stats.execlists.lock);
++
+ 	if (engine->flags & I915_ENGINE_HAS_RCS_REG_STATE)
+ 		rcs_submission_override(engine);
+ 
+diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c
+index dd0ed941441aa..da21f2786b5d7 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ggtt.c
++++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c
+@@ -511,20 +511,31 @@ void intel_ggtt_unbind_vma(struct i915_address_space *vm,
+ 	vm->clear_range(vm, vma_res->start, vma_res->vma_size);
+ }
+ 
++/*
++ * Reserve the top of the GuC address space for firmware images. Addresses
++ * beyond GUC_GGTT_TOP in the GuC address space are inaccessible by GuC,
++ * which makes for a suitable range to hold GuC/HuC firmware images if the
++ * size of the GGTT is 4G. However, on a 32-bit platform the size of the GGTT
++ * is limited to 2G, which is less than GUC_GGTT_TOP, but we reserve a chunk
++ * of the same size anyway, which is far more than needed, to keep the logic
++ * in uc_fw_ggtt_offset() simple.
++ */
++#define GUC_TOP_RESERVE_SIZE (SZ_4G - GUC_GGTT_TOP)
++
+ static int ggtt_reserve_guc_top(struct i915_ggtt *ggtt)
+ {
+-	u64 size;
++	u64 offset;
+ 	int ret;
+ 
+ 	if (!intel_uc_uses_guc(&ggtt->vm.gt->uc))
+ 		return 0;
+ 
+-	GEM_BUG_ON(ggtt->vm.total <= GUC_GGTT_TOP);
+-	size = ggtt->vm.total - GUC_GGTT_TOP;
++	GEM_BUG_ON(ggtt->vm.total <= GUC_TOP_RESERVE_SIZE);
++	offset = ggtt->vm.total - GUC_TOP_RESERVE_SIZE;
+ 
+-	ret = i915_gem_gtt_reserve(&ggtt->vm, NULL, &ggtt->uc_fw, size,
+-				   GUC_GGTT_TOP, I915_COLOR_UNEVICTABLE,
+-				   PIN_NOEVICT);
++	ret = i915_gem_gtt_reserve(&ggtt->vm, NULL, &ggtt->uc_fw,
++				   GUC_TOP_RESERVE_SIZE, offset,
++				   I915_COLOR_UNEVICTABLE, PIN_NOEVICT);
+ 	if (ret)
+ 		drm_dbg(&ggtt->vm.i915->drm,
+ 			"Failed to reserve top of GGTT for GuC\n");
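[Editor's note] The reserve size above is now a fixed SZ_4G - GUC_GGTT_TOP chunk placed at the top of whatever size the GGTT actually has. Assuming GUC_GGTT_TOP is 0xFEE00000 (its value in the i915 headers; verify against your tree), that chunk is 18 MiB. An illustrative sketch of the arithmetic:

/* Editor's sketch of the new reservation math. GUC_GGTT_TOP is assumed
 * to be 0xFEE00000 here. The reserve size is fixed, and the chunk sits
 * at the top of the GGTT regardless of whether the GGTT is 2G or 4G.
 */
#include <stdio.h>
#include <stdint.h>

#define SZ_4G        (1ULL << 32)
#define GUC_GGTT_TOP 0xFEE00000ULL               /* assumed value */
#define GUC_TOP_RESERVE_SIZE (SZ_4G - GUC_GGTT_TOP)

int main(void)
{
	uint64_t ggtt_sizes[] = { 1ULL << 31, 1ULL << 32 }; /* 2G and 4G */

	printf("reserve size: 0x%llx (%llu MiB)\n",
	       (unsigned long long)GUC_TOP_RESERVE_SIZE,
	       (unsigned long long)(GUC_TOP_RESERVE_SIZE >> 20));
	for (int i = 0; i < 2; i++)
		printf("ggtt total 0x%llx -> reserve at offset 0x%llx\n",
		       (unsigned long long)ggtt_sizes[i],
		       (unsigned long long)(ggtt_sizes[i] - GUC_TOP_RESERVE_SIZE));
	return 0;
}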
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+index b5b7f2fe8c78e..dc7b40e06e38a 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+@@ -1432,6 +1432,36 @@ static void guc_timestamp_ping(struct work_struct *wrk)
+ 	unsigned long index;
+ 	int srcu, ret;
+ 
++	/*
++	 * Ideally the busyness worker should take a gt pm wakeref because the
++	 * worker only needs to be active while gt is awake. However, the
++	 * gt_park path cancels the worker synchronously and this complicates
++	 * the flow if the worker is also running at the same time. The cancel
++	 * waits for the worker and when the worker releases the wakeref, that
++	 * would call gt_park and would lead to a deadlock.
++	 *
++	 * The resolution is to take the global pm wakeref if runtime pm is
++	 * already active. If not, we don't need to update the busyness stats as
++	 * the stats would already be updated when the gt was parked.
++	 *
++	 * Note:
++	 * - We do not requeue the worker if we cannot take a reference to runtime
++	 *   pm since intel_guc_busyness_unpark would requeue the worker in the
++	 *   resume path.
++	 *
++	 * - If the gt was parked for longer than the time taken for the GT
++	 *   timestamp to roll over, we ignore those rollovers since we don't
++	 *   care about tracking the exact GT time. We only care about
++	 *   rollovers when the gt is active and running workloads.
++	 *
++	 * - There is a window of time between gt_park and runtime suspend,
++	 *   where the worker may run. This is acceptable since the worker will
++	 *   not find any new data to update busyness.
++	 */
++	wakeref = intel_runtime_pm_get_if_active(&gt->i915->runtime_pm);
++	if (!wakeref)
++		return;
++
+ 	/*
+ 	 * Synchronize with gt reset to make sure the worker does not
+ 	 * corrupt the engine/guc stats. NB: can't actually block waiting
+@@ -1440,10 +1470,9 @@ static void guc_timestamp_ping(struct work_struct *wrk)
+ 	 */
+ 	ret = intel_gt_reset_trylock(gt, &srcu);
+ 	if (ret)
+-		return;
++		goto err_trylock;
+ 
+-	with_intel_runtime_pm(&gt->i915->runtime_pm, wakeref)
+-		__update_guc_busyness_stats(guc);
++	__update_guc_busyness_stats(guc);
+ 
+ 	/* adjust context stats for overflow */
+ 	xa_for_each(&guc->context_lookup, index, ce)
+@@ -1452,6 +1481,9 @@ static void guc_timestamp_ping(struct work_struct *wrk)
+ 	intel_gt_reset_unlock(gt, srcu);
+ 
+ 	guc_enable_busyness_worker(guc);
++
++err_trylock:
++	intel_runtime_pm_put(&gt->i915->runtime_pm, wakeref);
+ }
+ 
+ static int guc_action_enable_usage_stats(struct intel_guc *guc)
+diff --git a/drivers/gpu/drm/meson/meson_encoder_hdmi.c b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+index 53231bfdf7e24..b14e6e507c61b 100644
+--- a/drivers/gpu/drm/meson/meson_encoder_hdmi.c
++++ b/drivers/gpu/drm/meson/meson_encoder_hdmi.c
+@@ -332,6 +332,8 @@ static void meson_encoder_hdmi_hpd_notify(struct drm_bridge *bridge,
+ 			return;
+ 
+ 		cec_notifier_set_phys_addr_from_edid(encoder_hdmi->cec_notifier, edid);
++
++		kfree(edid);
+ 	} else
+ 		cec_notifier_phys_addr_invalidate(encoder_hdmi->cec_notifier);
+ }
+diff --git a/drivers/gpu/drm/tests/drm_mm_test.c b/drivers/gpu/drm/tests/drm_mm_test.c
+index 186b28dc70380..05d5e7af6d250 100644
+--- a/drivers/gpu/drm/tests/drm_mm_test.c
++++ b/drivers/gpu/drm/tests/drm_mm_test.c
+@@ -939,7 +939,7 @@ static void drm_test_mm_insert_range(struct kunit *test)
+ 		KUNIT_ASSERT_FALSE(test, __drm_test_mm_insert_range(test, count, size, 0, max - 1));
+ 		KUNIT_ASSERT_FALSE(test, __drm_test_mm_insert_range(test, count, size, 0, max / 2));
+ 		KUNIT_ASSERT_FALSE(test, __drm_test_mm_insert_range(test, count, size,
+-								    max / 2, max / 2));
++								    max / 2, max));
+ 		KUNIT_ASSERT_FALSE(test, __drm_test_mm_insert_range(test, count, size,
+ 								    max / 4 + 1, 3 * max / 4 - 1));
+ 
+diff --git a/drivers/gpu/drm/virtio/virtgpu_submit.c b/drivers/gpu/drm/virtio/virtgpu_submit.c
+index 1d010c66910d8..aa61e7993e21b 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_submit.c
++++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
+@@ -147,7 +147,6 @@ static void virtio_gpu_complete_submit(struct virtio_gpu_submit *submit)
+ 	submit->buf = NULL;
+ 	submit->buflist = NULL;
+ 	submit->sync_file = NULL;
+-	submit->out_fence = NULL;
+ 	submit->out_fence_fd = -1;
+ }
+ 
+diff --git a/drivers/i2c/busses/i2c-designware-common.c b/drivers/i2c/busses/i2c-designware-common.c
+index cdd8c67d91298..affcfb243f0f5 100644
+--- a/drivers/i2c/busses/i2c-designware-common.c
++++ b/drivers/i2c/busses/i2c-designware-common.c
+@@ -441,8 +441,25 @@ err_release_lock:
+ 
+ void __i2c_dw_disable(struct dw_i2c_dev *dev)
+ {
++	unsigned int raw_intr_stats;
++	unsigned int enable;
+ 	int timeout = 100;
++	bool abort_needed;
+ 	unsigned int status;
++	int ret;
++
++	regmap_read(dev->map, DW_IC_RAW_INTR_STAT, &raw_intr_stats);
++	regmap_read(dev->map, DW_IC_ENABLE, &enable);
++
++	abort_needed = raw_intr_stats & DW_IC_INTR_MST_ON_HOLD;
++	if (abort_needed) {
++		regmap_write(dev->map, DW_IC_ENABLE, enable | DW_IC_ENABLE_ABORT);
++		ret = regmap_read_poll_timeout(dev->map, DW_IC_ENABLE, enable,
++					       !(enable & DW_IC_ENABLE_ABORT), 10,
++					       100);
++		if (ret)
++			dev_err(dev->dev, "timeout while trying to abort current transfer\n");
++	}
+ 
+ 	do {
+ 		__i2c_dw_disable_nowait(dev);
+diff --git a/drivers/i2c/busses/i2c-designware-core.h b/drivers/i2c/busses/i2c-designware-core.h
+index cf4f684f53566..a7f6f3eafad7d 100644
+--- a/drivers/i2c/busses/i2c-designware-core.h
++++ b/drivers/i2c/busses/i2c-designware-core.h
+@@ -98,6 +98,7 @@
+ #define DW_IC_INTR_START_DET			BIT(10)
+ #define DW_IC_INTR_GEN_CALL			BIT(11)
+ #define DW_IC_INTR_RESTART_DET			BIT(12)
++#define DW_IC_INTR_MST_ON_HOLD			BIT(13)
+ 
+ #define DW_IC_INTR_DEFAULT_MASK			(DW_IC_INTR_RX_FULL | \
+ 						 DW_IC_INTR_TX_ABRT | \
+@@ -108,6 +109,8 @@
+ 						 DW_IC_INTR_RX_UNDER | \
+ 						 DW_IC_INTR_RD_REQ)
+ 
++#define DW_IC_ENABLE_ABORT			BIT(1)
++
+ #define DW_IC_STATUS_ACTIVITY			BIT(0)
+ #define DW_IC_STATUS_TFE			BIT(2)
+ #define DW_IC_STATUS_RFNE			BIT(3)
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 943b8e6d026da..2a3215ac01b3a 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -1754,6 +1754,7 @@ static int i801_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 		"SMBus I801 adapter at %04lx", priv->smba);
+ 	err = i2c_add_adapter(&priv->adapter);
+ 	if (err) {
++		platform_device_unregister(priv->tco_pdev);
+ 		i801_acpi_remove(priv);
+ 		return err;
+ 	}
+diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c
+index 53b65ffb6a647..bf9dbab52d228 100644
+--- a/drivers/i2c/busses/i2c-npcm7xx.c
++++ b/drivers/i2c/busses/i2c-npcm7xx.c
+@@ -695,6 +695,7 @@ static void npcm_i2c_callback(struct npcm_i2c *bus,
+ {
+ 	struct i2c_msg *msgs;
+ 	int msgs_num;
++	bool do_complete = false;
+ 
+ 	msgs = bus->msgs;
+ 	msgs_num = bus->msgs_num;
+@@ -723,23 +724,17 @@ static void npcm_i2c_callback(struct npcm_i2c *bus,
+ 				 msgs[1].flags & I2C_M_RD)
+ 				msgs[1].len = info;
+ 		}
+-		if (completion_done(&bus->cmd_complete) == false)
+-			complete(&bus->cmd_complete);
+-	break;
+-
++		do_complete = true;
++		break;
+ 	case I2C_NACK_IND:
+ 		/* MASTER transmit got a NACK before tx all bytes */
+ 		bus->cmd_err = -ENXIO;
+-		if (bus->master_or_slave == I2C_MASTER)
+-			complete(&bus->cmd_complete);
+-
++		do_complete = true;
+ 		break;
+ 	case I2C_BUS_ERR_IND:
+ 		/* Bus error */
+ 		bus->cmd_err = -EAGAIN;
+-		if (bus->master_or_slave == I2C_MASTER)
+-			complete(&bus->cmd_complete);
+-
++		do_complete = true;
+ 		break;
+ 	case I2C_WAKE_UP_IND:
+ 		/* I2C wake up */
+@@ -753,6 +748,8 @@ static void npcm_i2c_callback(struct npcm_i2c *bus,
+ 	if (bus->slave)
+ 		bus->master_or_slave = I2C_SLAVE;
+ #endif
++	if (do_complete)
++		complete(&bus->cmd_complete);
+ }
+ 
+ static u8 npcm_i2c_fifo_usage(struct npcm_i2c *bus)
+diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c
+index b3bb97762c859..71391b590adae 100644
+--- a/drivers/i2c/busses/i2c-xiic.c
++++ b/drivers/i2c/busses/i2c-xiic.c
+@@ -710,7 +710,7 @@ static irqreturn_t xiic_process(int irq, void *dev_id)
+ 		 * reset the IP instead of just flush fifos
+ 		 */
+ 		ret = xiic_reinit(i2c);
+-		if (!ret)
++		if (ret < 0)
+ 			dev_dbg(i2c->adap.dev.parent, "reinit failed\n");
+ 
+ 		if (i2c->rx_msg) {
+diff --git a/drivers/i2c/muxes/i2c-demux-pinctrl.c b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+index a3a122fae71e0..22f2280eab7f7 100644
+--- a/drivers/i2c/muxes/i2c-demux-pinctrl.c
++++ b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+@@ -243,6 +243,10 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev)
+ 
+ 		props[i].name = devm_kstrdup(&pdev->dev, "status", GFP_KERNEL);
+ 		props[i].value = devm_kstrdup(&pdev->dev, "ok", GFP_KERNEL);
++		if (!props[i].name || !props[i].value) {
++			err = -ENOMEM;
++			goto err_rollback;
++		}
+ 		props[i].length = 3;
+ 
+ 		of_changeset_init(&priv->chan[i].chgset);
+diff --git a/drivers/i2c/muxes/i2c-mux-gpio.c b/drivers/i2c/muxes/i2c-mux-gpio.c
+index 5d5cbe0130cdf..5ca03bd34c8d1 100644
+--- a/drivers/i2c/muxes/i2c-mux-gpio.c
++++ b/drivers/i2c/muxes/i2c-mux-gpio.c
+@@ -105,8 +105,10 @@ static int i2c_mux_gpio_probe_fw(struct gpiomux *mux,
+ 
+ 		} else if (is_acpi_node(child)) {
+ 			rc = acpi_get_local_address(ACPI_HANDLE_FWNODE(child), values + i);
+-			if (rc)
++			if (rc) {
++				fwnode_handle_put(child);
+ 				return dev_err_probe(dev, rc, "Cannot get address\n");
++			}
+ 		}
+ 
+ 		i++;
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+index a5a63b1c947eb..98d3ba7f94873 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+@@ -186,6 +186,15 @@ static void arm_smmu_free_shared_cd(struct arm_smmu_ctx_desc *cd)
+ 	}
+ }
+ 
++/*
++ * Cloned from the MAX_TLBI_OPS in arch/arm64/include/asm/tlbflush.h, this
++ * is used as a threshold above which the per-page TLBI commands issued
++ * to the command queue are replaced with a single address-space TLBI
++ * command, since an SMMU without the range invalidation feature would
++ * otherwise process so many per-page commands that it soft locks up.
++ */
++#define CMDQ_MAX_TLBI_OPS		(1 << (PAGE_SHIFT - 3))
++
+ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
+ 					 struct mm_struct *mm,
+ 					 unsigned long start, unsigned long end)
+@@ -200,10 +209,22 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
+ 	 * range. So do a simple translation here by calculating size correctly.
+ 	 */
+ 	size = end - start;
++	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_RANGE_INV)) {
++		if (size >= CMDQ_MAX_TLBI_OPS * PAGE_SIZE)
++			size = 0;
++	}
++
++	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM)) {
++		if (!size)
++			arm_smmu_tlb_inv_asid(smmu_domain->smmu,
++					      smmu_mn->cd->asid);
++		else
++			arm_smmu_tlb_inv_range_asid(start, size,
++						    smmu_mn->cd->asid,
++						    PAGE_SIZE, false,
++						    smmu_domain);
++	}
+ 
+-	if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM))
+-		arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid,
+-					    PAGE_SIZE, false, smmu_domain);
+ 	arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, start, size);
+ }
+ 
+diff --git a/drivers/media/common/videobuf2/frame_vector.c b/drivers/media/common/videobuf2/frame_vector.c
+index 0f430ddc1f670..fd87747be9b17 100644
+--- a/drivers/media/common/videobuf2/frame_vector.c
++++ b/drivers/media/common/videobuf2/frame_vector.c
+@@ -31,6 +31,10 @@
+  * different type underlying the specified range of virtual addresses.
+  * When the function isn't able to map a single page, it returns error.
+  *
++ * Note that get_vaddr_frames() cannot follow VM_IO mappings. It used
++ * to be able to do that, but that could (racily) return non-refcounted
++ * pfns.
++ *
+  * This function takes care of grabbing mmap_lock as necessary.
+  */
+ int get_vaddr_frames(unsigned long start, unsigned int nr_frames, bool write,
+@@ -59,8 +63,6 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames, bool write,
+ 	if (likely(ret > 0))
+ 		return ret;
+ 
+-	/* This used to (racily) return non-refcounted pfns. Let people know */
+-	WARN_ONCE(1, "get_vaddr_frames() cannot follow VM_IO mapping");
+ 	vec->nr_frames = 0;
+ 	return ret ? ret : -EFAULT;
+ }
+diff --git a/drivers/media/platform/marvell/Kconfig b/drivers/media/platform/marvell/Kconfig
+index ec1a16734a280..d6499ffe30e8b 100644
+--- a/drivers/media/platform/marvell/Kconfig
++++ b/drivers/media/platform/marvell/Kconfig
+@@ -7,7 +7,7 @@ config VIDEO_CAFE_CCIC
+ 	depends on V4L_PLATFORM_DRIVERS
+ 	depends on PCI && I2C && VIDEO_DEV
+ 	depends on COMMON_CLK
+-	select VIDEO_OV7670
++	select VIDEO_OV7670 if MEDIA_SUBDRV_AUTOSELECT && VIDEO_CAMERA_SENSOR
+ 	select VIDEOBUF2_VMALLOC
+ 	select VIDEOBUF2_DMA_CONTIG
+ 	select VIDEOBUF2_DMA_SG
+@@ -22,7 +22,7 @@ config VIDEO_MMP_CAMERA
+ 	depends on I2C && VIDEO_DEV
+ 	depends on ARCH_MMP || COMPILE_TEST
+ 	depends on COMMON_CLK
+-	select VIDEO_OV7670
++	select VIDEO_OV7670 if MEDIA_SUBDRV_AUTOSELECT && VIDEO_CAMERA_SENSOR
+ 	select I2C_GPIO
+ 	select VIDEOBUF2_VMALLOC
+ 	select VIDEOBUF2_DMA_CONTIG
+diff --git a/drivers/media/platform/via/Kconfig b/drivers/media/platform/via/Kconfig
+index 8926eb0803b27..6e603c0382487 100644
+--- a/drivers/media/platform/via/Kconfig
++++ b/drivers/media/platform/via/Kconfig
+@@ -7,7 +7,7 @@ config VIDEO_VIA_CAMERA
+ 	depends on V4L_PLATFORM_DRIVERS
+ 	depends on FB_VIA && VIDEO_DEV
+ 	select VIDEOBUF2_DMA_SG
+-	select VIDEO_OV7670
++	select VIDEO_OV7670 if VIDEO_CAMERA_SENSOR
+ 	help
+ 	   Driver support for the integrated camera controller in VIA
+ 	   Chrome9 chipsets.  Currently only tested on OLPC xo-1.5 systems
+diff --git a/drivers/media/usb/em28xx/Kconfig b/drivers/media/usb/em28xx/Kconfig
+index b3c472b8c5a96..cb61fd6cc6c61 100644
+--- a/drivers/media/usb/em28xx/Kconfig
++++ b/drivers/media/usb/em28xx/Kconfig
+@@ -12,8 +12,8 @@ config VIDEO_EM28XX_V4L2
+ 	select VIDEO_SAA711X if MEDIA_SUBDRV_AUTOSELECT
+ 	select VIDEO_TVP5150 if MEDIA_SUBDRV_AUTOSELECT
+ 	select VIDEO_MSP3400 if MEDIA_SUBDRV_AUTOSELECT
+-	select VIDEO_MT9V011 if MEDIA_SUBDRV_AUTOSELECT && MEDIA_CAMERA_SUPPORT
+-	select VIDEO_OV2640 if MEDIA_SUBDRV_AUTOSELECT && MEDIA_CAMERA_SUPPORT
++	select VIDEO_MT9V011 if MEDIA_SUBDRV_AUTOSELECT && VIDEO_CAMERA_SENSOR
++	select VIDEO_OV2640 if MEDIA_SUBDRV_AUTOSELECT && VIDEO_CAMERA_SENSOR
+ 	help
+ 	  This is a video4linux driver for Empia 28xx based TV cards.
+ 
+diff --git a/drivers/media/usb/go7007/Kconfig b/drivers/media/usb/go7007/Kconfig
+index 4ff79940ad8d4..b2a15d9fb1f33 100644
+--- a/drivers/media/usb/go7007/Kconfig
++++ b/drivers/media/usb/go7007/Kconfig
+@@ -12,8 +12,8 @@ config VIDEO_GO7007
+ 	select VIDEO_TW2804 if MEDIA_SUBDRV_AUTOSELECT
+ 	select VIDEO_TW9903 if MEDIA_SUBDRV_AUTOSELECT
+ 	select VIDEO_TW9906 if MEDIA_SUBDRV_AUTOSELECT
+-	select VIDEO_OV7640 if MEDIA_SUBDRV_AUTOSELECT && MEDIA_CAMERA_SUPPORT
+ 	select VIDEO_UDA1342 if MEDIA_SUBDRV_AUTOSELECT
++	select VIDEO_OV7640 if MEDIA_SUBDRV_AUTOSELECT && VIDEO_CAMERA_SENSOR
+ 	help
+ 	  This is a video4linux driver for the WIS GO7007 MPEG
+ 	  encoder chip.
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index 5e9d3da862dd8..e59a463c27618 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -1402,6 +1402,9 @@ int uvc_query_v4l2_menu(struct uvc_video_chain *chain,
+ 	query_menu->id = id;
+ 	query_menu->index = index;
+ 
++	if (index >= BITS_PER_TYPE(mapping->menu_mask))
++		return -EINVAL;
++
+ 	ret = mutex_lock_interruptible(&chain->ctrl_mutex);
+ 	if (ret < 0)
+ 		return -ERESTARTSYS;
+diff --git a/drivers/misc/cardreader/rts5227.c b/drivers/misc/cardreader/rts5227.c
+index 3dae5e3a16976..cd512284bfb39 100644
+--- a/drivers/misc/cardreader/rts5227.c
++++ b/drivers/misc/cardreader/rts5227.c
+@@ -83,63 +83,20 @@ static void rts5227_fetch_vendor_settings(struct rtsx_pcr *pcr)
+ 
+ static void rts5227_init_from_cfg(struct rtsx_pcr *pcr)
+ {
+-	struct pci_dev *pdev = pcr->pci;
+-	int l1ss;
+-	u32 lval;
+ 	struct rtsx_cr_option *option = &pcr->option;
+ 
+-	l1ss = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_L1SS);
+-	if (!l1ss)
+-		return;
+-
+-	pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL1, &lval);
+-
+ 	if (CHK_PCI_PID(pcr, 0x522A)) {
+-		if (0 == (lval & 0x0F))
+-			rtsx_pci_enable_oobs_polling(pcr);
+-		else
++		if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN
++				| PM_L1_1_EN | PM_L1_2_EN))
+ 			rtsx_pci_disable_oobs_polling(pcr);
++		else
++			rtsx_pci_enable_oobs_polling(pcr);
+ 	}
+ 
+-	if (lval & PCI_L1SS_CTL1_ASPM_L1_1)
+-		rtsx_set_dev_flag(pcr, ASPM_L1_1_EN);
+-	else
+-		rtsx_clear_dev_flag(pcr, ASPM_L1_1_EN);
+-
+-	if (lval & PCI_L1SS_CTL1_ASPM_L1_2)
+-		rtsx_set_dev_flag(pcr, ASPM_L1_2_EN);
+-	else
+-		rtsx_clear_dev_flag(pcr, ASPM_L1_2_EN);
+-
+-	if (lval & PCI_L1SS_CTL1_PCIPM_L1_1)
+-		rtsx_set_dev_flag(pcr, PM_L1_1_EN);
+-	else
+-		rtsx_clear_dev_flag(pcr, PM_L1_1_EN);
+-
+-	if (lval & PCI_L1SS_CTL1_PCIPM_L1_2)
+-		rtsx_set_dev_flag(pcr, PM_L1_2_EN);
+-	else
+-		rtsx_clear_dev_flag(pcr, PM_L1_2_EN);
+-
+ 	if (option->ltr_en) {
+-		u16 val;
+-
+-		pcie_capability_read_word(pcr->pci, PCI_EXP_DEVCTL2, &val);
+-		if (val & PCI_EXP_DEVCTL2_LTR_EN) {
+-			option->ltr_enabled = true;
+-			option->ltr_active = true;
++		if (option->ltr_enabled)
+ 			rtsx_set_ltr_latency(pcr, option->ltr_active_latency);
+-		} else {
+-			option->ltr_enabled = false;
+-		}
+ 	}
+-
+-	if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN
+-				| PM_L1_1_EN | PM_L1_2_EN))
+-		option->force_clkreq_0 = false;
+-	else
+-		option->force_clkreq_0 = true;
+-
+ }
+ 
+ static int rts5227_extra_init_hw(struct rtsx_pcr *pcr)
+@@ -195,7 +152,7 @@ static int rts5227_extra_init_hw(struct rtsx_pcr *pcr)
+ 		}
+ 	}
+ 
+-	if (option->force_clkreq_0 && pcr->aspm_mode == ASPM_MODE_CFG)
++	if (option->force_clkreq_0)
+ 		rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, PETXCFG,
+ 				FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW);
+ 	else
+diff --git a/drivers/misc/cardreader/rts5228.c b/drivers/misc/cardreader/rts5228.c
+index f4ab09439da70..0c7f10bcf6f12 100644
+--- a/drivers/misc/cardreader/rts5228.c
++++ b/drivers/misc/cardreader/rts5228.c
+@@ -386,59 +386,25 @@ static void rts5228_process_ocp(struct rtsx_pcr *pcr)
+ 
+ static void rts5228_init_from_cfg(struct rtsx_pcr *pcr)
+ {
+-	struct pci_dev *pdev = pcr->pci;
+-	int l1ss;
+-	u32 lval;
+ 	struct rtsx_cr_option *option = &pcr->option;
+ 
+-	l1ss = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_L1SS);
+-	if (!l1ss)
+-		return;
+-
+-	pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL1, &lval);
+-
+-	if (0 == (lval & 0x0F))
+-		rtsx_pci_enable_oobs_polling(pcr);
+-	else
++	if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN
++				| PM_L1_1_EN | PM_L1_2_EN))
+ 		rtsx_pci_disable_oobs_polling(pcr);
+-
+-	if (lval & PCI_L1SS_CTL1_ASPM_L1_1)
+-		rtsx_set_dev_flag(pcr, ASPM_L1_1_EN);
+-	else
+-		rtsx_clear_dev_flag(pcr, ASPM_L1_1_EN);
+-
+-	if (lval & PCI_L1SS_CTL1_ASPM_L1_2)
+-		rtsx_set_dev_flag(pcr, ASPM_L1_2_EN);
+-	else
+-		rtsx_clear_dev_flag(pcr, ASPM_L1_2_EN);
+-
+-	if (lval & PCI_L1SS_CTL1_PCIPM_L1_1)
+-		rtsx_set_dev_flag(pcr, PM_L1_1_EN);
+ 	else
+-		rtsx_clear_dev_flag(pcr, PM_L1_1_EN);
+-
+-	if (lval & PCI_L1SS_CTL1_PCIPM_L1_2)
+-		rtsx_set_dev_flag(pcr, PM_L1_2_EN);
+-	else
+-		rtsx_clear_dev_flag(pcr, PM_L1_2_EN);
++		rtsx_pci_enable_oobs_polling(pcr);
+ 
+ 	rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, 0xFF, 0);
+-	if (option->ltr_en) {
+-		u16 val;
+ 
+-		pcie_capability_read_word(pcr->pci, PCI_EXP_DEVCTL2, &val);
+-		if (val & PCI_EXP_DEVCTL2_LTR_EN) {
+-			option->ltr_enabled = true;
+-			option->ltr_active = true;
++	if (option->ltr_en) {
++		if (option->ltr_enabled)
+ 			rtsx_set_ltr_latency(pcr, option->ltr_active_latency);
+-		} else {
+-			option->ltr_enabled = false;
+-		}
+ 	}
+ }
+ 
+ static int rts5228_extra_init_hw(struct rtsx_pcr *pcr)
+ {
++	struct rtsx_cr_option *option = &pcr->option;
+ 
+ 	rtsx_pci_write_register(pcr, RTS5228_AUTOLOAD_CFG1,
+ 			CD_RESUME_EN_MASK, CD_RESUME_EN_MASK);
+@@ -469,6 +435,17 @@ static int rts5228_extra_init_hw(struct rtsx_pcr *pcr)
+ 	else
+ 		rtsx_pci_write_register(pcr, PETXCFG, 0x30, 0x00);
+ 
++	/*
++	 * If u_force_clkreq_0 is enabled, CLKREQ# PIN will be forced
++	 * to drive low, and we forcibly request clock.
++	 */
++	if (option->force_clkreq_0)
++		rtsx_pci_write_register(pcr, PETXCFG,
++				 FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW);
++	else
++		rtsx_pci_write_register(pcr, PETXCFG,
++				 FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_HIGH);
++
+ 	rtsx_pci_write_register(pcr, PWD_SUSPEND_EN, 0xFF, 0xFB);
+ 
+ 	if (pcr->rtd3_en) {
+diff --git a/drivers/misc/cardreader/rts5249.c b/drivers/misc/cardreader/rts5249.c
+index 47ab72a43256b..6c81040e18bef 100644
+--- a/drivers/misc/cardreader/rts5249.c
++++ b/drivers/misc/cardreader/rts5249.c
+@@ -86,64 +86,22 @@ static void rtsx_base_fetch_vendor_settings(struct rtsx_pcr *pcr)
+ 
+ static void rts5249_init_from_cfg(struct rtsx_pcr *pcr)
+ {
+-	struct pci_dev *pdev = pcr->pci;
+-	int l1ss;
+ 	struct rtsx_cr_option *option = &(pcr->option);
+-	u32 lval;
+-
+-	l1ss = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_L1SS);
+-	if (!l1ss)
+-		return;
+-
+-	pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL1, &lval);
+ 
+ 	if (CHK_PCI_PID(pcr, PID_524A) || CHK_PCI_PID(pcr, PID_525A)) {
+-		if (0 == (lval & 0x0F))
+-			rtsx_pci_enable_oobs_polling(pcr);
+-		else
++		if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN
++				| PM_L1_1_EN | PM_L1_2_EN))
+ 			rtsx_pci_disable_oobs_polling(pcr);
++		else
++			rtsx_pci_enable_oobs_polling(pcr);
+ 	}
+ 
+-
+-	if (lval & PCI_L1SS_CTL1_ASPM_L1_1)
+-		rtsx_set_dev_flag(pcr, ASPM_L1_1_EN);
+-
+-	if (lval & PCI_L1SS_CTL1_ASPM_L1_2)
+-		rtsx_set_dev_flag(pcr, ASPM_L1_2_EN);
+-
+-	if (lval & PCI_L1SS_CTL1_PCIPM_L1_1)
+-		rtsx_set_dev_flag(pcr, PM_L1_1_EN);
+-
+-	if (lval & PCI_L1SS_CTL1_PCIPM_L1_2)
+-		rtsx_set_dev_flag(pcr, PM_L1_2_EN);
+-
+ 	if (option->ltr_en) {
+-		u16 val;
+-
+-		pcie_capability_read_word(pdev, PCI_EXP_DEVCTL2, &val);
+-		if (val & PCI_EXP_DEVCTL2_LTR_EN) {
+-			option->ltr_enabled = true;
+-			option->ltr_active = true;
++		if (option->ltr_enabled)
+ 			rtsx_set_ltr_latency(pcr, option->ltr_active_latency);
+-		} else {
+-			option->ltr_enabled = false;
+-		}
+ 	}
+ }
+ 
+-static int rts5249_init_from_hw(struct rtsx_pcr *pcr)
+-{
+-	struct rtsx_cr_option *option = &(pcr->option);
+-
+-	if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN
+-				| PM_L1_1_EN | PM_L1_2_EN))
+-		option->force_clkreq_0 = false;
+-	else
+-		option->force_clkreq_0 = true;
+-
+-	return 0;
+-}
+-
+ static void rts52xa_force_power_down(struct rtsx_pcr *pcr, u8 pm_state, bool runtime)
+ {
+ 	/* Set relink_time to 0 */
+@@ -276,7 +234,6 @@ static int rts5249_extra_init_hw(struct rtsx_pcr *pcr)
+ 	struct rtsx_cr_option *option = &(pcr->option);
+ 
+ 	rts5249_init_from_cfg(pcr);
+-	rts5249_init_from_hw(pcr);
+ 
+ 	rtsx_pci_init_cmd(pcr);
+ 
+@@ -327,11 +284,12 @@ static int rts5249_extra_init_hw(struct rtsx_pcr *pcr)
+ 		}
+ 	}
+ 
++
+ 	/*
+ 	 * If u_force_clkreq_0 is enabled, CLKREQ# PIN will be forced
+ 	 * to drive low, and we forcibly request clock.
+ 	 */
+-	if (option->force_clkreq_0 && pcr->aspm_mode == ASPM_MODE_CFG)
++	if (option->force_clkreq_0)
+ 		rtsx_pci_write_register(pcr, PETXCFG,
+ 			FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW);
+ 	else
+diff --git a/drivers/misc/cardreader/rts5260.c b/drivers/misc/cardreader/rts5260.c
+index 79b18f6f73a8a..d2d3a6ccb8f7d 100644
+--- a/drivers/misc/cardreader/rts5260.c
++++ b/drivers/misc/cardreader/rts5260.c
+@@ -480,47 +480,19 @@ static void rts5260_pwr_saving_setting(struct rtsx_pcr *pcr)
+ 
+ static void rts5260_init_from_cfg(struct rtsx_pcr *pcr)
+ {
+-	struct pci_dev *pdev = pcr->pci;
+-	int l1ss;
+ 	struct rtsx_cr_option *option = &pcr->option;
+-	u32 lval;
+-
+-	l1ss = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_L1SS);
+-	if (!l1ss)
+-		return;
+-
+-	pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL1, &lval);
+-
+-	if (lval & PCI_L1SS_CTL1_ASPM_L1_1)
+-		rtsx_set_dev_flag(pcr, ASPM_L1_1_EN);
+-
+-	if (lval & PCI_L1SS_CTL1_ASPM_L1_2)
+-		rtsx_set_dev_flag(pcr, ASPM_L1_2_EN);
+-
+-	if (lval & PCI_L1SS_CTL1_PCIPM_L1_1)
+-		rtsx_set_dev_flag(pcr, PM_L1_1_EN);
+-
+-	if (lval & PCI_L1SS_CTL1_PCIPM_L1_2)
+-		rtsx_set_dev_flag(pcr, PM_L1_2_EN);
+ 
+ 	rts5260_pwr_saving_setting(pcr);
+ 
+ 	if (option->ltr_en) {
+-		u16 val;
+-
+-		pcie_capability_read_word(pdev, PCI_EXP_DEVCTL2, &val);
+-		if (val & PCI_EXP_DEVCTL2_LTR_EN) {
+-			option->ltr_enabled = true;
+-			option->ltr_active = true;
++		if (option->ltr_enabled)
+ 			rtsx_set_ltr_latency(pcr, option->ltr_active_latency);
+-		} else {
+-			option->ltr_enabled = false;
+-		}
+ 	}
+ }
+ 
+ static int rts5260_extra_init_hw(struct rtsx_pcr *pcr)
+ {
++	struct rtsx_cr_option *option = &pcr->option;
+ 
+ 	/* Set mcu_cnt to 7 to ensure data can be sampled properly */
+ 	rtsx_pci_write_register(pcr, 0xFC03, 0x7F, 0x07);
+@@ -539,6 +511,17 @@ static int rts5260_extra_init_hw(struct rtsx_pcr *pcr)
+ 
+ 	rts5260_init_hw(pcr);
+ 
++	/*
++	 * If u_force_clkreq_0 is enabled, CLKREQ# PIN will be forced
++	 * to drive low, and we forcibly request clock.
++	 */
++	if (option->force_clkreq_0)
++		rtsx_pci_write_register(pcr, PETXCFG,
++				 FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW);
++	else
++		rtsx_pci_write_register(pcr, PETXCFG,
++				 FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_HIGH);
++
+ 	rtsx_pci_write_register(pcr, pcr->reg_pm_ctrl3, 0x10, 0x00);
+ 
+ 	return 0;
+diff --git a/drivers/misc/cardreader/rts5261.c b/drivers/misc/cardreader/rts5261.c
+index 94af6bf8a25a6..67252512a1329 100644
+--- a/drivers/misc/cardreader/rts5261.c
++++ b/drivers/misc/cardreader/rts5261.c
+@@ -454,54 +454,17 @@ static void rts5261_init_from_hw(struct rtsx_pcr *pcr)
+ 
+ static void rts5261_init_from_cfg(struct rtsx_pcr *pcr)
+ {
+-	struct pci_dev *pdev = pcr->pci;
+-	int l1ss;
+-	u32 lval;
+ 	struct rtsx_cr_option *option = &pcr->option;
+ 
+-	l1ss = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_L1SS);
+-	if (!l1ss)
+-		return;
+-
+-	pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL1, &lval);
+-
+-	if (lval & PCI_L1SS_CTL1_ASPM_L1_1)
+-		rtsx_set_dev_flag(pcr, ASPM_L1_1_EN);
+-	else
+-		rtsx_clear_dev_flag(pcr, ASPM_L1_1_EN);
+-
+-	if (lval & PCI_L1SS_CTL1_ASPM_L1_2)
+-		rtsx_set_dev_flag(pcr, ASPM_L1_2_EN);
+-	else
+-		rtsx_clear_dev_flag(pcr, ASPM_L1_2_EN);
+-
+-	if (lval & PCI_L1SS_CTL1_PCIPM_L1_1)
+-		rtsx_set_dev_flag(pcr, PM_L1_1_EN);
+-	else
+-		rtsx_clear_dev_flag(pcr, PM_L1_1_EN);
+-
+-	if (lval & PCI_L1SS_CTL1_PCIPM_L1_2)
+-		rtsx_set_dev_flag(pcr, PM_L1_2_EN);
+-	else
+-		rtsx_clear_dev_flag(pcr, PM_L1_2_EN);
+-
+-	rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, 0xFF, 0);
+ 	if (option->ltr_en) {
+-		u16 val;
+-
+-		pcie_capability_read_word(pdev, PCI_EXP_DEVCTL2, &val);
+-		if (val & PCI_EXP_DEVCTL2_LTR_EN) {
+-			option->ltr_enabled = true;
+-			option->ltr_active = true;
++		if (option->ltr_enabled)
+ 			rtsx_set_ltr_latency(pcr, option->ltr_active_latency);
+-		} else {
+-			option->ltr_enabled = false;
+-		}
+ 	}
+ }
+ 
+ static int rts5261_extra_init_hw(struct rtsx_pcr *pcr)
+ {
++	struct rtsx_cr_option *option = &pcr->option;
+ 	u32 val;
+ 
+ 	rtsx_pci_write_register(pcr, RTS5261_AUTOLOAD_CFG1,
+@@ -547,6 +510,17 @@ static int rts5261_extra_init_hw(struct rtsx_pcr *pcr)
+ 	else
+ 		rtsx_pci_write_register(pcr, PETXCFG, 0x30, 0x00);
+ 
++	/*
++	 * If u_force_clkreq_0 is enabled, CLKREQ# PIN will be forced
++	 * to drive low, and we forcibly request clock.
++	 */
++	if (option->force_clkreq_0)
++		rtsx_pci_write_register(pcr, PETXCFG,
++				 FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW);
++	else
++		rtsx_pci_write_register(pcr, PETXCFG,
++				 FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_HIGH);
++
+ 	rtsx_pci_write_register(pcr, PWD_SUSPEND_EN, 0xFF, 0xFB);
+ 
+ 	if (pcr->rtd3_en) {
+diff --git a/drivers/misc/cardreader/rtsx_pcr.c b/drivers/misc/cardreader/rtsx_pcr.c
+index a3f4b52bb159f..a30751ad37330 100644
+--- a/drivers/misc/cardreader/rtsx_pcr.c
++++ b/drivers/misc/cardreader/rtsx_pcr.c
+@@ -1326,11 +1326,8 @@ static int rtsx_pci_init_hw(struct rtsx_pcr *pcr)
+ 			return err;
+ 	}
+ 
+-	if (pcr->aspm_mode == ASPM_MODE_REG) {
++	if (pcr->aspm_mode == ASPM_MODE_REG)
+ 		rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, 0x30, 0x30);
+-		rtsx_pci_write_register(pcr, PETXCFG,
+-				FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_HIGH);
+-	}
+ 
+ 	/* No CD interrupt if probing driver with card inserted.
+ 	 * So we need to initialize pcr->card_exist here.
+@@ -1345,7 +1342,9 @@ static int rtsx_pci_init_hw(struct rtsx_pcr *pcr)
+ 
+ static int rtsx_pci_init_chip(struct rtsx_pcr *pcr)
+ {
+-	int err;
++	struct rtsx_cr_option *option = &(pcr->option);
++	int err, l1ss;
++	u32 lval;
+ 	u16 cfg_val;
+ 	u8 val;
+ 
+@@ -1430,6 +1429,48 @@ static int rtsx_pci_init_chip(struct rtsx_pcr *pcr)
+ 			pcr->aspm_enabled = true;
+ 	}
+ 
++	l1ss = pci_find_ext_capability(pcr->pci, PCI_EXT_CAP_ID_L1SS);
++	if (l1ss) {
++		pci_read_config_dword(pcr->pci, l1ss + PCI_L1SS_CTL1, &lval);
++
++		if (lval & PCI_L1SS_CTL1_ASPM_L1_1)
++			rtsx_set_dev_flag(pcr, ASPM_L1_1_EN);
++		else
++			rtsx_clear_dev_flag(pcr, ASPM_L1_1_EN);
++
++		if (lval & PCI_L1SS_CTL1_ASPM_L1_2)
++			rtsx_set_dev_flag(pcr, ASPM_L1_2_EN);
++		else
++			rtsx_clear_dev_flag(pcr, ASPM_L1_2_EN);
++
++		if (lval & PCI_L1SS_CTL1_PCIPM_L1_1)
++			rtsx_set_dev_flag(pcr, PM_L1_1_EN);
++		else
++			rtsx_clear_dev_flag(pcr, PM_L1_1_EN);
++
++		if (lval & PCI_L1SS_CTL1_PCIPM_L1_2)
++			rtsx_set_dev_flag(pcr, PM_L1_2_EN);
++		else
++			rtsx_clear_dev_flag(pcr, PM_L1_2_EN);
++
++		pcie_capability_read_word(pcr->pci, PCI_EXP_DEVCTL2, &cfg_val);
++		if (cfg_val & PCI_EXP_DEVCTL2_LTR_EN) {
++			option->ltr_enabled = true;
++			option->ltr_active = true;
++		} else {
++			option->ltr_enabled = false;
++		}
++
++		if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN
++				| PM_L1_1_EN | PM_L1_2_EN))
++			option->force_clkreq_0 = false;
++		else
++			option->force_clkreq_0 = true;
++	} else {
++		option->ltr_enabled = false;
++		option->force_clkreq_0 = true;
++	}
++
+ 	if (pcr->ops->fetch_vendor_settings)
+ 		pcr->ops->fetch_vendor_settings(pcr);
+ 
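[Editor's note] This hunk centralizes the L1SS decode that the rts52xx files above used to duplicate: the four PCI_L1SS_CTL1 enable bits map to driver flags, and force_clkreq_0 is derived from whether any of them is set. An illustrative sketch of the decode, using the bit values from include/uapi/linux/pci_regs.h (verify against your headers):

/* Editor's sketch of the L1SS decode now done once in
 * rtsx_pci_init_chip(). Bit values assumed from pci_regs.h:
 * PCIPM_L1_2=0x1, PCIPM_L1_1=0x2, ASPM_L1_2=0x4, ASPM_L1_1=0x8.
 */
#include <stdio.h>
#include <stdint.h>

#define PCI_L1SS_CTL1_PCIPM_L1_2 0x00000001
#define PCI_L1SS_CTL1_PCIPM_L1_1 0x00000002
#define PCI_L1SS_CTL1_ASPM_L1_2  0x00000004
#define PCI_L1SS_CTL1_ASPM_L1_1  0x00000008

int main(void)
{
	uint32_t lval = PCI_L1SS_CTL1_ASPM_L1_1 | PCI_L1SS_CTL1_ASPM_L1_2;
	int any_l1ss = (lval & 0x0F) != 0;

	/* matches the driver: substates enabled -> no forced CLKREQ,
	 * OOBS polling disabled; no substates -> the opposite
	 */
	printf("L1SS substates %s -> force_clkreq_0=%d, OOBS polling %s\n",
	       any_l1ss ? "enabled" : "disabled",
	       !any_l1ss,
	       any_l1ss ? "disabled" : "enabled");
	return 0;
}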
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index d19593fae2265..dcfda0e8e1b78 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -1833,6 +1833,9 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
+ 	return work_done;
+ 
+ error:
++	if (xdp_flags & ENA_XDP_REDIRECT)
++		xdp_do_flush();
++
+ 	adapter = netdev_priv(rx_ring->netdev);
+ 
+ 	if (rc == -ENOSPC) {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 1eb490c48c52e..3325e7021745f 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -2626,6 +2626,7 @@ static int bnxt_poll_nitroa0(struct napi_struct *napi, int budget)
+ 	struct rx_cmp_ext *rxcmp1;
+ 	u32 cp_cons, tmp_raw_cons;
+ 	u32 raw_cons = cpr->cp_raw_cons;
++	bool flush_xdp = false;
+ 	u32 rx_pkts = 0;
+ 	u8 event = 0;
+ 
+@@ -2660,6 +2661,8 @@ static int bnxt_poll_nitroa0(struct napi_struct *napi, int budget)
+ 				rx_pkts++;
+ 			else if (rc == -EBUSY)	/* partial completion */
+ 				break;
++			if (event & BNXT_REDIRECT_EVENT)
++				flush_xdp = true;
+ 		} else if (unlikely(TX_CMP_TYPE(txcmp) ==
+ 				    CMPL_BASE_TYPE_HWRM_DONE)) {
+ 			bnxt_hwrm_handler(bp, txcmp);
+@@ -2679,6 +2682,8 @@ static int bnxt_poll_nitroa0(struct napi_struct *napi, int budget)
+ 
+ 	if (event & BNXT_AGG_EVENT)
+ 		bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod);
++	if (flush_xdp)
++		xdp_do_flush();
+ 
+ 	if (!bnxt_has_work(bp, cpr) && rx_pkts < budget) {
+ 		napi_complete_done(napi, rx_pkts);
+diff --git a/drivers/net/ethernet/engleder/tsnep_ethtool.c b/drivers/net/ethernet/engleder/tsnep_ethtool.c
+index 716815dad7d21..65ec1abc94421 100644
+--- a/drivers/net/ethernet/engleder/tsnep_ethtool.c
++++ b/drivers/net/ethernet/engleder/tsnep_ethtool.c
+@@ -300,10 +300,8 @@ static void tsnep_ethtool_get_channels(struct net_device *netdev,
+ {
+ 	struct tsnep_adapter *adapter = netdev_priv(netdev);
+ 
+-	ch->max_rx = adapter->num_rx_queues;
+-	ch->max_tx = adapter->num_tx_queues;
+-	ch->rx_count = adapter->num_rx_queues;
+-	ch->tx_count = adapter->num_tx_queues;
++	ch->max_combined = adapter->num_queues;
++	ch->combined_count = adapter->num_queues;
+ }
+ 
+ static int tsnep_ethtool_get_ts_info(struct net_device *netdev,
+diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c
+index 84751bb303a68..479156576bc8a 100644
+--- a/drivers/net/ethernet/engleder/tsnep_main.c
++++ b/drivers/net/ethernet/engleder/tsnep_main.c
+@@ -86,8 +86,11 @@ static irqreturn_t tsnep_irq(int irq, void *arg)
+ 
+ 	/* handle TX/RX queue 0 interrupt */
+ 	if ((active & adapter->queue[0].irq_mask) != 0) {
+-		tsnep_disable_irq(adapter, adapter->queue[0].irq_mask);
+-		napi_schedule(&adapter->queue[0].napi);
++		if (napi_schedule_prep(&adapter->queue[0].napi)) {
++			tsnep_disable_irq(adapter, adapter->queue[0].irq_mask);
++			/* schedule after masking to avoid races */
++			__napi_schedule(&adapter->queue[0].napi);
++		}
+ 	}
+ 
+ 	return IRQ_HANDLED;
+@@ -98,8 +101,11 @@ static irqreturn_t tsnep_irq_txrx(int irq, void *arg)
+ 	struct tsnep_queue *queue = arg;
+ 
+ 	/* handle TX/RX queue interrupt */
+-	tsnep_disable_irq(queue->adapter, queue->irq_mask);
+-	napi_schedule(&queue->napi);
++	if (napi_schedule_prep(&queue->napi)) {
++		tsnep_disable_irq(queue->adapter, queue->irq_mask);
++		/* schedule after masking to avoid races */
++		__napi_schedule(&queue->napi);
++	}
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -1727,6 +1733,10 @@ static int tsnep_poll(struct napi_struct *napi, int budget)
+ 	if (queue->tx)
+ 		complete = tsnep_tx_poll(queue->tx, budget);
+ 
++	/* handle case where we are called by netpoll with a budget of 0 */
++	if (unlikely(budget <= 0))
++		return budget;
++
+ 	if (queue->rx) {
+ 		done = queue->rx->xsk_pool ?
+ 		       tsnep_rx_poll_zc(queue->rx, napi, budget) :
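[Editor's note] The interrupt handlers above now call napi_schedule_prep() before masking the IRQ, so the IRQ is only masked by the one caller that actually wins the right to schedule NAPI. A rough userspace analogue of that test-and-set gating (illustrative only; C11 atomics stand in for the NAPI state bits):

/* Editor's analogue of the napi_schedule_prep()/__napi_schedule() split:
 * an atomic test-and-set picks a single winner, and only the winner
 * masks the IRQ before scheduling, so a racing handler cannot mask the
 * IRQ without anyone being scheduled to unmask it again.
 */
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag napi_sched = ATOMIC_FLAG_INIT;

static int napi_schedule_prep_like(void)
{
	/* nonzero only for the caller that sets the flag first */
	return !atomic_flag_test_and_set(&napi_sched);
}

static void irq_handler(int cpu)
{
	if (napi_schedule_prep_like()) {
		printf("cpu%d: masks IRQ, schedules NAPI\n", cpu);
		/* the poll loop would clear the flag and unmask when done */
	} else {
		printf("cpu%d: NAPI already scheduled, leaves IRQ alone\n", cpu);
	}
}

int main(void)
{
	irq_handler(0);
	irq_handler(1);
	return 0;
}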
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 613d0a779cef2..71a2ec03f2b38 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -3352,6 +3352,15 @@ static void hns3_set_default_feature(struct net_device *netdev)
+ 		  NETIF_F_HW_TC);
+ 
+ 	netdev->hw_enc_features |= netdev->vlan_features | NETIF_F_TSO_MANGLEID;
++
++	/* The device_version V3 hardware can't offload the checksum for IP in
++	 * GRE packets, but can do it for NvGRE. So disable the checksum and
++	 * GSO offload for GRE by default.
++	 */
++	if (ae_dev->dev_version > HNAE3_DEVICE_VERSION_V2) {
++		netdev->features &= ~NETIF_F_GSO_GRE;
++		netdev->features &= ~NETIF_F_GSO_GRE_CSUM;
++	}
+ }
+ 
+ static int hns3_alloc_buffer(struct hns3_enet_ring *ring,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index ce6b658a930cc..ed6cf59853bf6 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -3564,9 +3564,14 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
+ static void hclge_clear_event_cause(struct hclge_dev *hdev, u32 event_type,
+ 				    u32 regclr)
+ {
++#define HCLGE_IMP_RESET_DELAY		5
++
+ 	switch (event_type) {
+ 	case HCLGE_VECTOR0_EVENT_PTP:
+ 	case HCLGE_VECTOR0_EVENT_RST:
++		if (regclr == BIT(HCLGE_VECTOR0_IMPRESET_INT_B))
++			mdelay(HCLGE_IMP_RESET_DELAY);
++
+ 		hclge_write_dev(&hdev->hw, HCLGE_MISC_RESET_STS_REG, regclr);
+ 		break;
+ 	case HCLGE_VECTOR0_EVENT_MBX:
+@@ -7348,6 +7353,12 @@ static int hclge_del_cls_flower(struct hnae3_handle *handle,
+ 	ret = hclge_fd_tcam_config(hdev, HCLGE_FD_STAGE_1, true, rule->location,
+ 				   NULL, false);
+ 	if (ret) {
++		/* If the tcam config fails, set the rule state to TO_DEL,
++		 * so the rule will be deleted when the periodic
++		 * task is scheduled.
++		 */
++		hclge_update_fd_list(hdev, HCLGE_FD_TO_DEL, rule->location, NULL);
++		set_bit(HCLGE_STATE_FD_TBL_CHANGED, &hdev->state);
+ 		spin_unlock_bh(&hdev->fd_rule_lock);
+ 		return ret;
+ 	}
+@@ -8824,7 +8835,7 @@ static void hclge_update_overflow_flags(struct hclge_vport *vport,
+ 	if (mac_type == HCLGE_MAC_ADDR_UC) {
+ 		if (is_all_added)
+ 			vport->overflow_promisc_flags &= ~HNAE3_OVERFLOW_UPE;
+-		else
++		else if (hclge_is_umv_space_full(vport, true))
+ 			vport->overflow_promisc_flags |= HNAE3_OVERFLOW_UPE;
+ 	} else {
+ 		if (is_all_added)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 7a2f9233d6954..a4d68fb216fb9 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -1855,7 +1855,8 @@ static void hclgevf_periodic_service_task(struct hclgevf_dev *hdev)
+ 	unsigned long delta = round_jiffies_relative(HZ);
+ 	struct hnae3_handle *handle = &hdev->nic;
+ 
+-	if (test_bit(HCLGEVF_STATE_RST_FAIL, &hdev->state))
++	if (test_bit(HCLGEVF_STATE_RST_FAIL, &hdev->state) ||
++	    test_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state))
+ 		return;
+ 
+ 	if (time_is_after_jiffies(hdev->last_serv_processed + HZ)) {
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_port.c b/drivers/net/ethernet/huawei/hinic/hinic_port.c
+index 9406237c461e0..f81a43d2cdfcd 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_port.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_port.c
+@@ -456,9 +456,6 @@ int hinic_set_vlan_fliter(struct hinic_dev *nic_dev, u32 en)
+ 	u16 out_size = sizeof(vlan_filter);
+ 	int err;
+ 
+-	if (!hwdev)
+-		return -EINVAL;
+-
+ 	vlan_filter.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
+ 	vlan_filter.enable = en;
+ 
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index be59ba3774e15..c1e1e8912350b 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -4464,9 +4464,7 @@ int i40e_ndo_set_vf_port_vlan(struct net_device *netdev, int vf_id,
+ 		goto error_pvid;
+ 
+ 	i40e_vlan_stripping_enable(vsi);
+-	i40e_vc_reset_vf(vf, true);
+-	/* During reset the VF got a new VSI, so refresh a pointer. */
+-	vsi = pf->vsi[vf->lan_vsi_idx];
++
+ 	/* Locked once because multiple functions below iterate list */
+ 	spin_lock_bh(&vsi->mac_filter_hash_lock);
+ 
+@@ -4552,6 +4550,10 @@ int i40e_ndo_set_vf_port_vlan(struct net_device *netdev, int vf_id,
+ 	 */
+ 	vf->port_vlan_id = le16_to_cpu(vsi->info.pvid);
+ 
++	i40e_vc_reset_vf(vf, true);
++	/* During reset the VF got a new VSI, so refresh a pointer. */
++	vsi = pf->vsi[vf->lan_vsi_idx];
++
+ 	ret = i40e_config_vf_promiscuous_mode(vf, vsi->id, allmulti, alluni);
+ 	if (ret) {
+ 		dev_err(&pf->pdev->dev, "Unable to config vf promiscuous mode\n");
+diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
+index 8cbdebc5b6989..4d4508e04b1d2 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf.h
++++ b/drivers/net/ethernet/intel/iavf/iavf.h
+@@ -521,7 +521,7 @@ void iavf_down(struct iavf_adapter *adapter);
+ int iavf_process_config(struct iavf_adapter *adapter);
+ int iavf_parse_vf_resource_msg(struct iavf_adapter *adapter);
+ void iavf_schedule_reset(struct iavf_adapter *adapter, u64 flags);
+-void iavf_schedule_request_stats(struct iavf_adapter *adapter);
++void iavf_schedule_aq_request(struct iavf_adapter *adapter, u64 flags);
+ void iavf_schedule_finish_config(struct iavf_adapter *adapter);
+ void iavf_reset(struct iavf_adapter *adapter);
+ void iavf_set_ethtool_ops(struct net_device *netdev);
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+index a34303ad057d0..90397293525f7 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+@@ -362,7 +362,7 @@ static void iavf_get_ethtool_stats(struct net_device *netdev,
+ 	unsigned int i;
+ 
+ 	/* Explicitly request stats refresh */
+-	iavf_schedule_request_stats(adapter);
++	iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_REQUEST_STATS);
+ 
+ 	iavf_add_ethtool_stats(&data, adapter, iavf_gstrings_stats);
+ 
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index 9610ca770349e..8ea5c0825c3c4 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -314,15 +314,13 @@ void iavf_schedule_reset(struct iavf_adapter *adapter, u64 flags)
+ }
+ 
+ /**
+- * iavf_schedule_request_stats - Set the flags and schedule statistics request
++ * iavf_schedule_aq_request - Set the flags and schedule aq request
+  * @adapter: board private structure
+- *
+- * Sets IAVF_FLAG_AQ_REQUEST_STATS flag so iavf_watchdog_task() will explicitly
+- * request and refresh ethtool stats
++ * @flags: requested aq flags
+  **/
+-void iavf_schedule_request_stats(struct iavf_adapter *adapter)
++void iavf_schedule_aq_request(struct iavf_adapter *adapter, u64 flags)
+ {
+-	adapter->aq_required |= IAVF_FLAG_AQ_REQUEST_STATS;
++	adapter->aq_required |= flags;
+ 	mod_delayed_work(adapter->wq, &adapter->watchdog_task, 0);
+ }
+ 
+@@ -823,7 +821,7 @@ iavf_vlan_filter *iavf_add_vlan(struct iavf_adapter *adapter,
+ 		list_add_tail(&f->list, &adapter->vlan_filter_list);
+ 		f->state = IAVF_VLAN_ADD;
+ 		adapter->num_vlan_filters++;
+-		adapter->aq_required |= IAVF_FLAG_AQ_ADD_VLAN_FILTER;
++		iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_ADD_VLAN_FILTER);
+ 	}
+ 
+ clearout:
+@@ -845,7 +843,7 @@ static void iavf_del_vlan(struct iavf_adapter *adapter, struct iavf_vlan vlan)
+ 	f = iavf_find_vlan(adapter, vlan);
+ 	if (f) {
+ 		f->state = IAVF_VLAN_REMOVE;
+-		adapter->aq_required |= IAVF_FLAG_AQ_DEL_VLAN_FILTER;
++		iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_DEL_VLAN_FILTER);
+ 	}
+ 
+ 	spin_unlock_bh(&adapter->mac_vlan_list_lock);
+@@ -1421,7 +1419,8 @@ void iavf_down(struct iavf_adapter *adapter)
+ 	iavf_clear_fdir_filters(adapter);
+ 	iavf_clear_adv_rss_conf(adapter);
+ 
+-	if (!(adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)) {
++	if (!(adapter->flags & IAVF_FLAG_PF_COMMS_FAILED) &&
++	    !(test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section))) {
+ 		/* cancel any current operation */
+ 		adapter->current_op = VIRTCHNL_OP_UNKNOWN;
+ 		/* Schedule operations to close down the HW. Don't wait
+diff --git a/drivers/net/ethernet/intel/igc/igc_ethtool.c b/drivers/net/ethernet/intel/igc/igc_ethtool.c
+index 93bce729be76a..7ab6dd58e4001 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ethtool.c
++++ b/drivers/net/ethernet/intel/igc/igc_ethtool.c
+@@ -868,6 +868,18 @@ static void igc_ethtool_get_stats(struct net_device *netdev,
+ 	spin_unlock(&adapter->stats64_lock);
+ }
+ 
++static int igc_ethtool_get_previous_rx_coalesce(struct igc_adapter *adapter)
++{
++	return (adapter->rx_itr_setting <= 3) ?
++		adapter->rx_itr_setting : adapter->rx_itr_setting >> 2;
++}
++
++static int igc_ethtool_get_previous_tx_coalesce(struct igc_adapter *adapter)
++{
++	return (adapter->tx_itr_setting <= 3) ?
++		adapter->tx_itr_setting : adapter->tx_itr_setting >> 2;
++}
++
+ static int igc_ethtool_get_coalesce(struct net_device *netdev,
+ 				    struct ethtool_coalesce *ec,
+ 				    struct kernel_ethtool_coalesce *kernel_coal,
+@@ -875,17 +887,8 @@ static int igc_ethtool_get_coalesce(struct net_device *netdev,
+ {
+ 	struct igc_adapter *adapter = netdev_priv(netdev);
+ 
+-	if (adapter->rx_itr_setting <= 3)
+-		ec->rx_coalesce_usecs = adapter->rx_itr_setting;
+-	else
+-		ec->rx_coalesce_usecs = adapter->rx_itr_setting >> 2;
+-
+-	if (!(adapter->flags & IGC_FLAG_QUEUE_PAIRS)) {
+-		if (adapter->tx_itr_setting <= 3)
+-			ec->tx_coalesce_usecs = adapter->tx_itr_setting;
+-		else
+-			ec->tx_coalesce_usecs = adapter->tx_itr_setting >> 2;
+-	}
++	ec->rx_coalesce_usecs = igc_ethtool_get_previous_rx_coalesce(adapter);
++	ec->tx_coalesce_usecs = igc_ethtool_get_previous_tx_coalesce(adapter);
+ 
+ 	return 0;
+ }
+@@ -910,8 +913,12 @@ static int igc_ethtool_set_coalesce(struct net_device *netdev,
+ 	    ec->tx_coalesce_usecs == 2)
+ 		return -EINVAL;
+ 
+-	if ((adapter->flags & IGC_FLAG_QUEUE_PAIRS) && ec->tx_coalesce_usecs)
++	if ((adapter->flags & IGC_FLAG_QUEUE_PAIRS) &&
++	    ec->tx_coalesce_usecs != igc_ethtool_get_previous_tx_coalesce(adapter)) {
++		NL_SET_ERR_MSG_MOD(extack,
++				   "Queue Pair mode enabled, both Rx and Tx coalescing controlled by rx-usecs");
+ 		return -EINVAL;
++	}
+ 
+ 	/* If ITR is disabled, disable DMAC */
+ 	if (ec->rx_coalesce_usecs == 0) {
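[Editor's note] The two new helpers fold the ITR-to-microseconds conversion into one place: settings 0-3 are special control values reported as-is, while larger settings store the interval as usecs << 2 (as I read igc's set_coalesce path), hence the shift back. An illustrative sketch:

/* Editor's sketch of the conversion shared by the two helpers above.
 * The assumption that values above 3 hold usecs << 2 should be checked
 * against igc_ethtool_set_coalesce(); the inputs here are made up.
 */
#include <stdio.h>

static int itr_setting_to_usecs(unsigned int itr_setting)
{
	return (itr_setting <= 3) ? itr_setting : itr_setting >> 2;
}

int main(void)
{
	unsigned int settings[] = { 0, 1, 3, 200, 400 }; /* 200 -> 50 usecs */

	for (int i = 0; i < 5; i++)
		printf("itr_setting=%u -> %d usecs\n",
		       settings[i], itr_setting_to_usecs(settings[i]));
	return 0;
}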
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 6f557e843e495..4e23b821c39ba 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -6433,7 +6433,7 @@ static int igc_xdp_xmit(struct net_device *dev, int num_frames,
+ 	struct igc_ring *ring;
+ 	int i, drops;
+ 
+-	if (unlikely(test_bit(__IGC_DOWN, &adapter->state)))
++	if (unlikely(!netif_carrier_ok(dev)))
+ 		return -ENETDOWN;
+ 
+ 	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+index 4424de2ffd70c..dbc518ff82768 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+@@ -734,13 +734,13 @@ static netdev_tx_t octep_start_xmit(struct sk_buff *skb,
+ dma_map_sg_err:
+ 	if (si > 0) {
+ 		dma_unmap_single(iq->dev, sglist[0].dma_ptr[0],
+-				 sglist[0].len[0], DMA_TO_DEVICE);
+-		sglist[0].len[0] = 0;
++				 sglist[0].len[3], DMA_TO_DEVICE);
++		sglist[0].len[3] = 0;
+ 	}
+ 	while (si > 1) {
+ 		dma_unmap_page(iq->dev, sglist[si >> 2].dma_ptr[si & 3],
+-			       sglist[si >> 2].len[si & 3], DMA_TO_DEVICE);
+-		sglist[si >> 2].len[si & 3] = 0;
++			       sglist[si >> 2].len[3 - (si & 3)], DMA_TO_DEVICE);
++		sglist[si >> 2].len[3 - (si & 3)] = 0;
+ 		si--;
+ 	}
+ 	tx_buffer->gather = 0;
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c
+index 5a520d37bea02..d0adb82d65c31 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c
+@@ -69,12 +69,12 @@ int octep_iq_process_completions(struct octep_iq *iq, u16 budget)
+ 		compl_sg++;
+ 
+ 		dma_unmap_single(iq->dev, tx_buffer->sglist[0].dma_ptr[0],
+-				 tx_buffer->sglist[0].len[0], DMA_TO_DEVICE);
++				 tx_buffer->sglist[0].len[3], DMA_TO_DEVICE);
+ 
+ 		i = 1; /* entry 0 is main skb, unmapped above */
+ 		while (frags--) {
+ 			dma_unmap_page(iq->dev, tx_buffer->sglist[i >> 2].dma_ptr[i & 3],
+-				       tx_buffer->sglist[i >> 2].len[i & 3], DMA_TO_DEVICE);
++				       tx_buffer->sglist[i >> 2].len[3 - (i & 3)], DMA_TO_DEVICE);
+ 			i++;
+ 		}
+ 
+@@ -131,13 +131,13 @@ static void octep_iq_free_pending(struct octep_iq *iq)
+ 
+ 		dma_unmap_single(iq->dev,
+ 				 tx_buffer->sglist[0].dma_ptr[0],
+-				 tx_buffer->sglist[0].len[0],
++				 tx_buffer->sglist[0].len[3],
+ 				 DMA_TO_DEVICE);
+ 
+ 		i = 1; /* entry 0 is main skb, unmapped above */
+ 		while (frags--) {
+ 			dma_unmap_page(iq->dev, tx_buffer->sglist[i >> 2].dma_ptr[i & 3],
+-				       tx_buffer->sglist[i >> 2].len[i & 3], DMA_TO_DEVICE);
++				       tx_buffer->sglist[i >> 2].len[3 - (i & 3)], DMA_TO_DEVICE);
+ 			i++;
+ 		}
+ 
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h
+index 2ef57980eb47b..21e75ff9f5e71 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h
+@@ -17,7 +17,21 @@
+ #define TX_BUFTYPE_NET_SG        2
+ #define NUM_TX_BUFTYPES          3
+ 
+-/* Hardware format for Scatter/Gather list */
++/* Hardware format for Scatter/Gather list
++ *
++ * 63      48|47     32|31     16|15       0
++ * -----------------------------------------
++ * |  Len 0  |  Len 1  |  Len 2  |  Len 3  |
++ * -----------------------------------------
++ * |                Ptr 0                  |
++ * -----------------------------------------
++ * |                Ptr 1                  |
++ * -----------------------------------------
++ * |                Ptr 2                  |
++ * -----------------------------------------
++ * |                Ptr 3                  |
++ * -----------------------------------------
++ */
+ struct octep_tx_sglist_desc {
+ 	u16 len[4];
+ 	dma_addr_t dma_ptr[4];
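[Editor's note] Per the layout comment added above, entry 0's length occupies bits 63:48 of the first 64-bit word, which on a little-endian host is len[3] of the u16 array; this is why the unmap paths earlier in the patch switch from len[i & 3] to len[3 - (i & 3)] while dma_ptr[] stays directly indexed. An illustrative sketch of the mapping:

/* Editor's sketch of the index mapping behind the len[3 - (i & 3)]
 * change: lengths are packed high-to-low within each 64-bit word, so
 * on a little-endian host entry 0's u16 is len[3], entry 1's is
 * len[2], and so on.
 */
#include <stdio.h>

int main(void)
{
	for (unsigned int i = 0; i < 8; i++)
		printf("sg entry %u -> sglist[%u].len[%u], sglist[%u].dma_ptr[%u]\n",
		       i, i >> 2, 3 - (i & 3), i >> 2, i & 3);
	return 0;
}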
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+index e77d438489557..53b2a4ef52985 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
+@@ -29,7 +29,8 @@
+ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
+ 				     struct bpf_prog *prog,
+ 				     struct nix_cqe_rx_s *cqe,
+-				     struct otx2_cq_queue *cq);
++				     struct otx2_cq_queue *cq,
++				     bool *need_xdp_flush);
+ 
+ static int otx2_nix_cq_op_status(struct otx2_nic *pfvf,
+ 				 struct otx2_cq_queue *cq)
+@@ -337,7 +338,7 @@ static bool otx2_check_rcv_errors(struct otx2_nic *pfvf,
+ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf,
+ 				 struct napi_struct *napi,
+ 				 struct otx2_cq_queue *cq,
+-				 struct nix_cqe_rx_s *cqe)
++				 struct nix_cqe_rx_s *cqe, bool *need_xdp_flush)
+ {
+ 	struct nix_rx_parse_s *parse = &cqe->parse;
+ 	struct nix_rx_sg_s *sg = &cqe->sg;
+@@ -353,7 +354,7 @@ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf,
+ 	}
+ 
+ 	if (pfvf->xdp_prog)
+-		if (otx2_xdp_rcv_pkt_handler(pfvf, pfvf->xdp_prog, cqe, cq))
++		if (otx2_xdp_rcv_pkt_handler(pfvf, pfvf->xdp_prog, cqe, cq, need_xdp_flush))
+ 			return;
+ 
+ 	skb = napi_get_frags(napi);
+@@ -388,6 +389,7 @@ static int otx2_rx_napi_handler(struct otx2_nic *pfvf,
+ 				struct napi_struct *napi,
+ 				struct otx2_cq_queue *cq, int budget)
+ {
++	bool need_xdp_flush = false;
+ 	struct nix_cqe_rx_s *cqe;
+ 	int processed_cqe = 0;
+ 
+@@ -409,13 +411,15 @@ process_cqe:
+ 		cq->cq_head++;
+ 		cq->cq_head &= (cq->cqe_cnt - 1);
+ 
+-		otx2_rcv_pkt_handler(pfvf, napi, cq, cqe);
++		otx2_rcv_pkt_handler(pfvf, napi, cq, cqe, &need_xdp_flush);
+ 
+ 		cqe->hdr.cqe_type = NIX_XQE_TYPE_INVALID;
+ 		cqe->sg.seg_addr = 0x00;
+ 		processed_cqe++;
+ 		cq->pend_cqe--;
+ 	}
++	if (need_xdp_flush)
++		xdp_do_flush();
+ 
+ 	/* Free CQEs to HW */
+ 	otx2_write64(pfvf, NIX_LF_CQ_OP_DOOR,
+@@ -1354,7 +1358,8 @@ bool otx2_xdp_sq_append_pkt(struct otx2_nic *pfvf, u64 iova, int len, u16 qidx)
+ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
+ 				     struct bpf_prog *prog,
+ 				     struct nix_cqe_rx_s *cqe,
+-				     struct otx2_cq_queue *cq)
++				     struct otx2_cq_queue *cq,
++				     bool *need_xdp_flush)
+ {
+ 	unsigned char *hard_start, *data;
+ 	int qidx = cq->cq_idx;
+@@ -1391,8 +1396,10 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
+ 
+ 		otx2_dma_unmap_page(pfvf, iova, pfvf->rbsize,
+ 				    DMA_FROM_DEVICE);
+-		if (!err)
++		if (!err) {
++			*need_xdp_flush = true;
+ 			return true;
++		}
+ 		put_page(page);
+ 		break;
+ 	default:
+diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
+index c07f25e791c76..fe4e166de8a04 100644
+--- a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
++++ b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
+@@ -243,10 +243,9 @@ static void vcap_test_api_init(struct vcap_admin *admin)
+ }
+ 
+ /* Helper function to create a rule of a specific size */
+-static struct vcap_rule *
+-test_vcap_xn_rule_creator(struct kunit *test, int cid, enum vcap_user user,
+-			  u16 priority,
+-			  int id, int size, int expected_addr)
++static void test_vcap_xn_rule_creator(struct kunit *test, int cid,
++				      enum vcap_user user, u16 priority,
++				      int id, int size, int expected_addr)
+ {
+ 	struct vcap_rule *rule;
+ 	struct vcap_rule_internal *ri;
+@@ -311,7 +310,7 @@ test_vcap_xn_rule_creator(struct kunit *test, int cid, enum vcap_user user,
+ 	ret = vcap_add_rule(rule);
+ 	KUNIT_EXPECT_EQ(test, 0, ret);
+ 	KUNIT_EXPECT_EQ(test, expected_addr, ri->addr);
+-	return rule;
++	vcap_free_rule(rule);
+ }
+ 
+ /* Prepare testing rule deletion */
+@@ -995,6 +994,16 @@ static void vcap_api_encode_rule_actionset_test(struct kunit *test)
+ 	KUNIT_EXPECT_EQ(test, (u32)0x00000000, actwords[11]);
+ }
+ 
++static void vcap_free_ckf(struct vcap_rule *rule)
++{
++	struct vcap_client_keyfield *ckf, *next_ckf;
++
++	list_for_each_entry_safe(ckf, next_ckf, &rule->keyfields, ctrl.list) {
++		list_del(&ckf->ctrl.list);
++		kfree(ckf);
++	}
++}
++
+ static void vcap_api_rule_add_keyvalue_test(struct kunit *test)
+ {
+ 	struct vcap_admin admin = {
+@@ -1027,6 +1036,7 @@ static void vcap_api_rule_add_keyvalue_test(struct kunit *test)
+ 	KUNIT_EXPECT_EQ(test, VCAP_FIELD_BIT, kf->ctrl.type);
+ 	KUNIT_EXPECT_EQ(test, 0x0, kf->data.u1.value);
+ 	KUNIT_EXPECT_EQ(test, 0x1, kf->data.u1.mask);
++	vcap_free_ckf(rule);
+ 
+ 	INIT_LIST_HEAD(&rule->keyfields);
+ 	ret = vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS, VCAP_BIT_1);
+@@ -1039,6 +1049,7 @@ static void vcap_api_rule_add_keyvalue_test(struct kunit *test)
+ 	KUNIT_EXPECT_EQ(test, VCAP_FIELD_BIT, kf->ctrl.type);
+ 	KUNIT_EXPECT_EQ(test, 0x1, kf->data.u1.value);
+ 	KUNIT_EXPECT_EQ(test, 0x1, kf->data.u1.mask);
++	vcap_free_ckf(rule);
+ 
+ 	INIT_LIST_HEAD(&rule->keyfields);
+ 	ret = vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS,
+@@ -1052,6 +1063,7 @@ static void vcap_api_rule_add_keyvalue_test(struct kunit *test)
+ 	KUNIT_EXPECT_EQ(test, VCAP_FIELD_BIT, kf->ctrl.type);
+ 	KUNIT_EXPECT_EQ(test, 0x0, kf->data.u1.value);
+ 	KUNIT_EXPECT_EQ(test, 0x0, kf->data.u1.mask);
++	vcap_free_ckf(rule);
+ 
+ 	INIT_LIST_HEAD(&rule->keyfields);
+ 	ret = vcap_rule_add_key_u32(rule, VCAP_KF_TYPE, 0x98765432, 0xff00ffab);
+@@ -1064,6 +1076,7 @@ static void vcap_api_rule_add_keyvalue_test(struct kunit *test)
+ 	KUNIT_EXPECT_EQ(test, VCAP_FIELD_U32, kf->ctrl.type);
+ 	KUNIT_EXPECT_EQ(test, 0x98765432, kf->data.u32.value);
+ 	KUNIT_EXPECT_EQ(test, 0xff00ffab, kf->data.u32.mask);
++	vcap_free_ckf(rule);
+ 
+ 	INIT_LIST_HEAD(&rule->keyfields);
+ 	ret = vcap_rule_add_key_u128(rule, VCAP_KF_L3_IP6_SIP, &dip);
+@@ -1078,6 +1091,18 @@ static void vcap_api_rule_add_keyvalue_test(struct kunit *test)
+ 		KUNIT_EXPECT_EQ(test, dip.value[idx], kf->data.u128.value[idx]);
+ 	for (idx = 0; idx < ARRAY_SIZE(dip.mask); ++idx)
+ 		KUNIT_EXPECT_EQ(test, dip.mask[idx], kf->data.u128.mask[idx]);
++	vcap_free_ckf(rule);
++}
++
++static void vcap_free_caf(struct vcap_rule *rule)
++{
++	struct vcap_client_actionfield *caf, *next_caf;
++
++	list_for_each_entry_safe(caf, next_caf,
++				 &rule->actionfields, ctrl.list) {
++		list_del(&caf->ctrl.list);
++		kfree(caf);
++	}
+ }
+ 
+ static void vcap_api_rule_add_actionvalue_test(struct kunit *test)
+@@ -1105,6 +1130,7 @@ static void vcap_api_rule_add_actionvalue_test(struct kunit *test)
+ 	KUNIT_EXPECT_EQ(test, VCAP_AF_POLICE_ENA, af->ctrl.action);
+ 	KUNIT_EXPECT_EQ(test, VCAP_FIELD_BIT, af->ctrl.type);
+ 	KUNIT_EXPECT_EQ(test, 0x0, af->data.u1.value);
++	vcap_free_caf(rule);
+ 
+ 	INIT_LIST_HEAD(&rule->actionfields);
+ 	ret = vcap_rule_add_action_bit(rule, VCAP_AF_POLICE_ENA, VCAP_BIT_1);
+@@ -1116,6 +1142,7 @@ static void vcap_api_rule_add_actionvalue_test(struct kunit *test)
+ 	KUNIT_EXPECT_EQ(test, VCAP_AF_POLICE_ENA, af->ctrl.action);
+ 	KUNIT_EXPECT_EQ(test, VCAP_FIELD_BIT, af->ctrl.type);
+ 	KUNIT_EXPECT_EQ(test, 0x1, af->data.u1.value);
++	vcap_free_caf(rule);
+ 
+ 	INIT_LIST_HEAD(&rule->actionfields);
+ 	ret = vcap_rule_add_action_bit(rule, VCAP_AF_POLICE_ENA, VCAP_BIT_ANY);
+@@ -1127,6 +1154,7 @@ static void vcap_api_rule_add_actionvalue_test(struct kunit *test)
+ 	KUNIT_EXPECT_EQ(test, VCAP_AF_POLICE_ENA, af->ctrl.action);
+ 	KUNIT_EXPECT_EQ(test, VCAP_FIELD_BIT, af->ctrl.type);
+ 	KUNIT_EXPECT_EQ(test, 0x0, af->data.u1.value);
++	vcap_free_caf(rule);
+ 
+ 	INIT_LIST_HEAD(&rule->actionfields);
+ 	ret = vcap_rule_add_action_u32(rule, VCAP_AF_TYPE, 0x98765432);
+@@ -1138,6 +1166,7 @@ static void vcap_api_rule_add_actionvalue_test(struct kunit *test)
+ 	KUNIT_EXPECT_EQ(test, VCAP_AF_TYPE, af->ctrl.action);
+ 	KUNIT_EXPECT_EQ(test, VCAP_FIELD_U32, af->ctrl.type);
+ 	KUNIT_EXPECT_EQ(test, 0x98765432, af->data.u32.value);
++	vcap_free_caf(rule);
+ 
+ 	INIT_LIST_HEAD(&rule->actionfields);
+ 	ret = vcap_rule_add_action_u32(rule, VCAP_AF_MASK_MODE, 0xaabbccdd);
+@@ -1149,6 +1178,7 @@ static void vcap_api_rule_add_actionvalue_test(struct kunit *test)
+ 	KUNIT_EXPECT_EQ(test, VCAP_AF_MASK_MODE, af->ctrl.action);
+ 	KUNIT_EXPECT_EQ(test, VCAP_FIELD_U32, af->ctrl.type);
+ 	KUNIT_EXPECT_EQ(test, 0xaabbccdd, af->data.u32.value);
++	vcap_free_caf(rule);
+ }
+ 
+ static void vcap_api_rule_find_keyset_basic_test(struct kunit *test)
+@@ -1408,6 +1438,10 @@ static void vcap_api_encode_rule_test(struct kunit *test)
+ 	ret = list_empty(&is2_admin.rules);
+ 	KUNIT_EXPECT_EQ(test, false, ret);
+ 	KUNIT_EXPECT_EQ(test, 0, ret);
++
++	vcap_enable_lookups(&test_vctrl, &test_netdev, 0, 0,
++			    rule->cookie, false);
++
+ 	vcap_free_rule(rule);
+ 
+ 	/* Check that the rule has been freed: tricky to access since this
+@@ -1418,6 +1452,8 @@ static void vcap_api_encode_rule_test(struct kunit *test)
+ 	KUNIT_EXPECT_EQ(test, true, ret);
+ 	ret = list_empty(&rule->actionfields);
+ 	KUNIT_EXPECT_EQ(test, true, ret);
++
++	vcap_del_rule(&test_vctrl, &test_netdev, id);
+ }
+ 
+ static void vcap_api_set_rule_counter_test(struct kunit *test)
+@@ -1561,6 +1597,11 @@ static void vcap_api_rule_insert_in_order_test(struct kunit *test)
+ 	test_vcap_xn_rule_creator(test, 10000, VCAP_USER_QOS, 20, 400, 6, 774);
+ 	test_vcap_xn_rule_creator(test, 10000, VCAP_USER_QOS, 30, 300, 3, 771);
+ 	test_vcap_xn_rule_creator(test, 10000, VCAP_USER_QOS, 40, 200, 2, 768);
++
++	vcap_del_rule(&test_vctrl, &test_netdev, 200);
++	vcap_del_rule(&test_vctrl, &test_netdev, 300);
++	vcap_del_rule(&test_vctrl, &test_netdev, 400);
++	vcap_del_rule(&test_vctrl, &test_netdev, 500);
+ }
+ 
+ static void vcap_api_rule_insert_reverse_order_test(struct kunit *test)
+@@ -1619,6 +1660,11 @@ static void vcap_api_rule_insert_reverse_order_test(struct kunit *test)
+ 		++idx;
+ 	}
+ 	KUNIT_EXPECT_EQ(test, 768, admin.last_used_addr);
++
++	vcap_del_rule(&test_vctrl, &test_netdev, 500);
++	vcap_del_rule(&test_vctrl, &test_netdev, 400);
++	vcap_del_rule(&test_vctrl, &test_netdev, 300);
++	vcap_del_rule(&test_vctrl, &test_netdev, 200);
+ }
+ 
+ static void vcap_api_rule_remove_at_end_test(struct kunit *test)
+@@ -1819,6 +1865,9 @@ static void vcap_api_rule_remove_in_front_test(struct kunit *test)
+ 	KUNIT_EXPECT_EQ(test, 786, test_init_start);
+ 	KUNIT_EXPECT_EQ(test, 8, test_init_count);
+ 	KUNIT_EXPECT_EQ(test, 794, admin.last_used_addr);
++
++	vcap_del_rule(&test_vctrl, &test_netdev, 200);
++	vcap_del_rule(&test_vctrl, &test_netdev, 300);
+ }
+ 
+ static struct kunit_case vcap_api_rule_remove_test_cases[] = {
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+index 0bea208bfba2f..43ce0aac6a94c 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
++++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+@@ -187,6 +187,7 @@ typedef void (*ionic_desc_cb)(struct ionic_queue *q,
+ 			      struct ionic_desc_info *desc_info,
+ 			      struct ionic_cq_info *cq_info, void *cb_arg);
+ 
++#define IONIC_MAX_BUF_LEN			((u16)-1)
+ #define IONIC_PAGE_SIZE				PAGE_SIZE
+ #define IONIC_PAGE_SPLIT_SZ			(PAGE_SIZE / 2)
+ #define IONIC_PAGE_GFP_MASK			(GFP_ATOMIC | __GFP_NOWARN |\
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+index 26798fc635dbd..44466e8c5d77b 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+@@ -207,7 +207,8 @@ static struct sk_buff *ionic_rx_frags(struct ionic_queue *q,
+ 			return NULL;
+ 		}
+ 
+-		frag_len = min_t(u16, len, IONIC_PAGE_SIZE - buf_info->page_offset);
++		frag_len = min_t(u16, len, min_t(u32, IONIC_MAX_BUF_LEN,
++						 IONIC_PAGE_SIZE - buf_info->page_offset));
+ 		len -= frag_len;
+ 
+ 		dma_sync_single_for_cpu(dev,
+@@ -452,7 +453,8 @@ void ionic_rx_fill(struct ionic_queue *q)
+ 
+ 		/* fill main descriptor - buf[0] */
+ 		desc->addr = cpu_to_le64(buf_info->dma_addr + buf_info->page_offset);
+-		frag_len = min_t(u16, len, IONIC_PAGE_SIZE - buf_info->page_offset);
++		frag_len = min_t(u16, len, min_t(u32, IONIC_MAX_BUF_LEN,
++						 IONIC_PAGE_SIZE - buf_info->page_offset));
+ 		desc->len = cpu_to_le16(frag_len);
+ 		remain_len -= frag_len;
+ 		buf_info++;
+@@ -471,7 +473,9 @@ void ionic_rx_fill(struct ionic_queue *q)
+ 			}
+ 
+ 			sg_elem->addr = cpu_to_le64(buf_info->dma_addr + buf_info->page_offset);
+-			frag_len = min_t(u16, remain_len, IONIC_PAGE_SIZE - buf_info->page_offset);
++			frag_len = min_t(u16, remain_len, min_t(u32, IONIC_MAX_BUF_LEN,
++								IONIC_PAGE_SIZE -
++								buf_info->page_offset));
+ 			sg_elem->len = cpu_to_le16(frag_len);
+ 			remain_len -= frag_len;
+ 			buf_info++;
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 382756c3fb837..1b0fc84b4d0cd 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -2127,7 +2127,12 @@ static const struct ethtool_ops team_ethtool_ops = {
+ static void team_setup_by_port(struct net_device *dev,
+ 			       struct net_device *port_dev)
+ {
+-	dev->header_ops	= port_dev->header_ops;
++	struct team *team = netdev_priv(dev);
++
++	if (port_dev->type == ARPHRD_ETHER)
++		dev->header_ops	= team->header_ops_cache;
++	else
++		dev->header_ops	= port_dev->header_ops;
+ 	dev->type = port_dev->type;
+ 	dev->hard_header_len = port_dev->hard_header_len;
+ 	dev->needed_headroom = port_dev->needed_headroom;
+@@ -2174,8 +2179,11 @@ static int team_dev_type_check_change(struct net_device *dev,
+ 
+ static void team_setup(struct net_device *dev)
+ {
++	struct team *team = netdev_priv(dev);
++
+ 	ether_setup(dev);
+ 	dev->max_mtu = ETH_MAX_MTU;
++	team->header_ops_cache = dev->header_ops;
+ 
+ 	dev->netdev_ops = &team_netdev_ops;
+ 	dev->ethtool_ops = &team_ethtool_ops;
+diff --git a/drivers/net/thunderbolt/main.c b/drivers/net/thunderbolt/main.c
+index 0c1e8970ee589..0a53ec293d040 100644
+--- a/drivers/net/thunderbolt/main.c
++++ b/drivers/net/thunderbolt/main.c
+@@ -1049,12 +1049,11 @@ static bool tbnet_xmit_csum_and_map(struct tbnet *net, struct sk_buff *skb,
+ 		*tucso = ~csum_tcpudp_magic(ip_hdr(skb)->saddr,
+ 					    ip_hdr(skb)->daddr, 0,
+ 					    ip_hdr(skb)->protocol, 0);
+-	} else if (skb_is_gso_v6(skb)) {
++	} else if (skb_is_gso(skb) && skb_is_gso_v6(skb)) {
+ 		tucso = dest + ((void *)&(tcp_hdr(skb)->check) - data);
+ 		*tucso = ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+ 					  &ipv6_hdr(skb)->daddr, 0,
+ 					  IPPROTO_TCP, 0);
+-		return false;
+ 	} else if (protocol == htons(ETH_P_IPV6)) {
+ 		tucso = dest + skb_checksum_start_offset(skb) + skb->csum_offset;
+ 		*tucso = ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
+diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
+index c9a9373733c01..4b2db14472e6c 100644
+--- a/drivers/net/vxlan/vxlan_core.c
++++ b/drivers/net/vxlan/vxlan_core.c
+@@ -4296,6 +4296,10 @@ static size_t vxlan_get_size(const struct net_device *dev)
+ 		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_REMCSUM_TX */
+ 		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_REMCSUM_RX */
+ 		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_LOCALBYPASS */
++		nla_total_size(0) + /* IFLA_VXLAN_GBP */
++		nla_total_size(0) + /* IFLA_VXLAN_GPE */
++		nla_total_size(0) + /* IFLA_VXLAN_REMCSUM_NOPARTIAL */
++		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_VNIFILTER */
+ 		0;
+ }
+ 
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 1cd2bf82319a9..a15b37750d6e9 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -1924,7 +1924,7 @@ char *nvme_fc_io_getuuid(struct nvmefc_fcp_req *req)
+ 	struct nvme_fc_fcp_op *op = fcp_req_to_fcp_op(req);
+ 	struct request *rq = op->rq;
+ 
+-	if (!IS_ENABLED(CONFIG_BLK_CGROUP_FC_APPID) || !rq->bio)
++	if (!IS_ENABLED(CONFIG_BLK_CGROUP_FC_APPID) || !rq || !rq->bio)
+ 		return NULL;
+ 	return blkcg_get_fc_appid(rq->bio);
+ }
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 2f57da12d9836..347cb5daebc3c 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2916,9 +2916,6 @@ static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,
+ 	struct nvme_dev *dev;
+ 	int ret = -ENOMEM;
+ 
+-	if (node == NUMA_NO_NODE)
+-		set_dev_node(&pdev->dev, first_memory_node);
+-
+ 	dev = kzalloc_node(sizeof(*dev), GFP_KERNEL, node);
+ 	if (!dev)
+ 		return ERR_PTR(-ENOMEM);
+diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
+index 10e846286f4ef..623707fc6ff1c 100644
+--- a/drivers/parisc/ccio-dma.c
++++ b/drivers/parisc/ccio-dma.c
+@@ -222,7 +222,7 @@ struct ioa_registers {
+ struct ioc {
+ 	struct ioa_registers __iomem *ioc_regs;  /* I/O MMU base address */
+ 	u8  *res_map;	                /* resource map, bit == pdir entry */
+-	u64 *pdir_base;	                /* physical base address */
++	__le64 *pdir_base;		/* physical base address */
+ 	u32 pdir_size;			/* bytes, function of IOV Space size */
+ 	u32 res_hint;			/* next available IOVP -
+ 					   circular search */
+@@ -347,7 +347,7 @@ ccio_alloc_range(struct ioc *ioc, struct device *dev, size_t size)
+ 	BUG_ON(pages_needed == 0);
+ 	BUG_ON((pages_needed * IOVP_SIZE) > DMA_CHUNK_SIZE);
+ 
+-	DBG_RES("%s() size: %d pages_needed %d\n",
++	DBG_RES("%s() size: %zu pages_needed %d\n",
+ 			__func__, size, pages_needed);
+ 
+ 	/*
+@@ -435,7 +435,7 @@ ccio_free_range(struct ioc *ioc, dma_addr_t iova, unsigned long pages_mapped)
+ 	BUG_ON((pages_mapped * IOVP_SIZE) > DMA_CHUNK_SIZE);
+ 	BUG_ON(pages_mapped > BITS_PER_LONG);
+ 
+-	DBG_RES("%s():  res_idx: %d pages_mapped %d\n", 
++	DBG_RES("%s():  res_idx: %d pages_mapped %lu\n",
+ 		__func__, res_idx, pages_mapped);
+ 
+ #ifdef CCIO_COLLECT_STATS
+@@ -551,7 +551,7 @@ static u32 hint_lookup[] = {
+  * index are bits 12:19 of the value returned by LCI.
+  */ 
+ static void
+-ccio_io_pdir_entry(u64 *pdir_ptr, space_t sid, unsigned long vba,
++ccio_io_pdir_entry(__le64 *pdir_ptr, space_t sid, unsigned long vba,
+ 		   unsigned long hints)
+ {
+ 	register unsigned long pa;
+@@ -727,7 +727,7 @@ ccio_map_single(struct device *dev, void *addr, size_t size,
+ 	unsigned long flags;
+ 	dma_addr_t iovp;
+ 	dma_addr_t offset;
+-	u64 *pdir_start;
++	__le64 *pdir_start;
+ 	unsigned long hint = hint_lookup[(int)direction];
+ 
+ 	BUG_ON(!dev);
+@@ -754,8 +754,8 @@ ccio_map_single(struct device *dev, void *addr, size_t size,
+ 
+ 	pdir_start = &(ioc->pdir_base[idx]);
+ 
+-	DBG_RUN("%s() 0x%p -> 0x%lx size: %0x%x\n",
+-		__func__, addr, (long)iovp | offset, size);
++	DBG_RUN("%s() %px -> %#lx size: %zu\n",
++		__func__, addr, (long)(iovp | offset), size);
+ 
+ 	/* If not cacheline aligned, force SAFE_DMA on the whole mess */
+ 	if((size % L1_CACHE_BYTES) || ((unsigned long)addr % L1_CACHE_BYTES))
+@@ -813,7 +813,7 @@ ccio_unmap_page(struct device *dev, dma_addr_t iova, size_t size,
+ 		return;
+ 	}
+ 
+-	DBG_RUN("%s() iovp 0x%lx/%x\n",
++	DBG_RUN("%s() iovp %#lx/%zx\n",
+ 		__func__, (long)iova, size);
+ 
+ 	iova ^= offset;        /* clear offset bits */
+@@ -1291,7 +1291,7 @@ ccio_ioc_init(struct ioc *ioc)
+ 			iova_space_size>>20,
+ 			iov_order + PAGE_SHIFT);
+ 
+-	ioc->pdir_base = (u64 *)__get_free_pages(GFP_KERNEL, 
++	ioc->pdir_base = (__le64 *)__get_free_pages(GFP_KERNEL,
+ 						 get_order(ioc->pdir_size));
+ 	if(NULL == ioc->pdir_base) {
+ 		panic("%s() could not allocate I/O Page Table\n", __func__);
+diff --git a/drivers/parisc/iommu-helpers.h b/drivers/parisc/iommu-helpers.h
+index 0905be256de08..c43f1a212a5c8 100644
+--- a/drivers/parisc/iommu-helpers.h
++++ b/drivers/parisc/iommu-helpers.h
+@@ -14,13 +14,13 @@
+ static inline unsigned int
+ iommu_fill_pdir(struct ioc *ioc, struct scatterlist *startsg, int nents, 
+ 		unsigned long hint,
+-		void (*iommu_io_pdir_entry)(u64 *, space_t, unsigned long,
++		void (*iommu_io_pdir_entry)(__le64 *, space_t, unsigned long,
+ 					    unsigned long))
+ {
+ 	struct scatterlist *dma_sg = startsg;	/* pointer to current DMA */
+ 	unsigned int n_mappings = 0;
+ 	unsigned long dma_offset = 0, dma_len = 0;
+-	u64 *pdirp = NULL;
++	__le64 *pdirp = NULL;
+ 
+ 	/* Horrible hack.  For efficiency's sake, dma_sg starts one 
+ 	 * entry below the true start (it is immediately incremented
+@@ -31,8 +31,8 @@ iommu_fill_pdir(struct ioc *ioc, struct scatterlist *startsg, int nents,
+ 		unsigned long vaddr;
+ 		long size;
+ 
+-		DBG_RUN_SG(" %d : %08lx/%05x %p/%05x\n", nents,
+-			   (unsigned long)sg_dma_address(startsg), cnt,
++		DBG_RUN_SG(" %d : %08lx %p/%05x\n", nents,
++			   (unsigned long)sg_dma_address(startsg),
+ 			   sg_virt(startsg), startsg->length
+ 		);
+ 
+diff --git a/drivers/parisc/iosapic.c b/drivers/parisc/iosapic.c
+index bcc1dae007803..890c3c0f3d140 100644
+--- a/drivers/parisc/iosapic.c
++++ b/drivers/parisc/iosapic.c
+@@ -202,9 +202,9 @@ static inline void iosapic_write(void __iomem *iosapic, unsigned int reg, u32 va
+ 
+ static DEFINE_SPINLOCK(iosapic_lock);
+ 
+-static inline void iosapic_eoi(void __iomem *addr, unsigned int data)
++static inline void iosapic_eoi(__le32 __iomem *addr, __le32 data)
+ {
+-	__raw_writel(data, addr);
++	__raw_writel((__force u32)data, addr);
+ }
+ 
+ /*
+diff --git a/drivers/parisc/iosapic_private.h b/drivers/parisc/iosapic_private.h
+index 73ecc657ad954..bd8ff40162b4b 100644
+--- a/drivers/parisc/iosapic_private.h
++++ b/drivers/parisc/iosapic_private.h
+@@ -118,8 +118,8 @@ struct iosapic_irt {
+ struct vector_info {
+ 	struct iosapic_info *iosapic;	/* I/O SAPIC this vector is on */
+ 	struct irt_entry *irte;		/* IRT entry */
+-	u32 __iomem *eoi_addr;		/* precalculate EOI reg address */
+-	u32	eoi_data;		/* IA64: ?       PA: swapped txn_data */
++	__le32 __iomem *eoi_addr;	/* precalculate EOI reg address */
++	__le32	eoi_data;		/* IA64: ?       PA: swapped txn_data */
+ 	int	txn_irq;		/* virtual IRQ number for processor */
+ 	ulong	txn_addr;		/* IA64: id_eid  PA: partial HPA */
+ 	u32	txn_data;		/* CPU interrupt bit */
+diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c
+index b8e91cbb60567..780ea219cd8d4 100644
+--- a/drivers/parisc/sba_iommu.c
++++ b/drivers/parisc/sba_iommu.c
+@@ -202,7 +202,7 @@ static void
+ sba_dump_pdir_entry(struct ioc *ioc, char *msg, uint pide)
+ {
+ 	/* start printing from lowest pde in rval */
+-	u64 *ptr = &(ioc->pdir_base[pide & (~0U * BITS_PER_LONG)]);
++	__le64 *ptr = &(ioc->pdir_base[pide & (~0U * BITS_PER_LONG)]);
+ 	unsigned long *rptr = (unsigned long *) &(ioc->res_map[(pide >>3) & ~(sizeof(unsigned long) - 1)]);
+ 	uint rcnt;
+ 
+@@ -569,7 +569,7 @@ typedef unsigned long space_t;
+  */
+ 
+ static void
+-sba_io_pdir_entry(u64 *pdir_ptr, space_t sid, unsigned long vba,
++sba_io_pdir_entry(__le64 *pdir_ptr, space_t sid, unsigned long vba,
+ 		  unsigned long hint)
+ {
+ 	u64 pa; /* physical address */
+@@ -613,7 +613,7 @@ static void
+ sba_mark_invalid(struct ioc *ioc, dma_addr_t iova, size_t byte_cnt)
+ {
+ 	u32 iovp = (u32) SBA_IOVP(ioc,iova);
+-	u64 *pdir_ptr = &ioc->pdir_base[PDIR_INDEX(iovp)];
++	__le64 *pdir_ptr = &ioc->pdir_base[PDIR_INDEX(iovp)];
+ 
+ #ifdef ASSERT_PDIR_SANITY
+ 	/* Assert first pdir entry is set.
+@@ -714,7 +714,7 @@ sba_map_single(struct device *dev, void *addr, size_t size,
+ 	unsigned long flags; 
+ 	dma_addr_t iovp;
+ 	dma_addr_t offset;
+-	u64 *pdir_start;
++	__le64 *pdir_start;
+ 	int pide;
+ 
+ 	ioc = GET_IOC(dev);
+@@ -1432,7 +1432,7 @@ sba_ioc_init(struct parisc_device *sba, struct ioc *ioc, int ioc_num)
+ 
+ 	ioc->pdir_size = pdir_size = (iova_space_size/IOVP_SIZE) * sizeof(u64);
+ 
+-	DBG_INIT("%s() hpa 0x%lx mem %ldMB IOV %dMB (%d bits)\n",
++	DBG_INIT("%s() hpa %px mem %ldMB IOV %dMB (%d bits)\n",
+ 			__func__,
+ 			ioc->ioc_hpa,
+ 			(unsigned long) totalram_pages() >> (20 - PAGE_SHIFT),
+@@ -1469,7 +1469,7 @@ sba_ioc_init(struct parisc_device *sba, struct ioc *ioc, int ioc_num)
+ 	ioc->iovp_mask = ~(iova_space_mask + PAGE_SIZE - 1);
+ #endif
+ 
+-	DBG_INIT("%s() IOV base 0x%lx mask 0x%0lx\n",
++	DBG_INIT("%s() IOV base %#lx mask %#0lx\n",
+ 		__func__, ioc->ibase, ioc->imask);
+ 
+ 	/*
+@@ -1581,7 +1581,7 @@ printk("sba_hw_init(): mem_boot 0x%x 0x%x 0x%x 0x%x\n", PAGE0->mem_boot.hpa,
+ 
+ 	if (!IS_PLUTO(sba_dev->dev)) {
+ 		ioc_ctl = READ_REG(sba_dev->sba_hpa+IOC_CTRL);
+-		DBG_INIT("%s() hpa 0x%lx ioc_ctl 0x%Lx ->",
++		DBG_INIT("%s() hpa %px ioc_ctl 0x%Lx ->",
+ 			__func__, sba_dev->sba_hpa, ioc_ctl);
+ 		ioc_ctl &= ~(IOC_CTRL_RM | IOC_CTRL_NC | IOC_CTRL_CE);
+ 		ioc_ctl |= IOC_CTRL_DD | IOC_CTRL_D4 | IOC_CTRL_TC;
+@@ -1666,14 +1666,14 @@ printk("sba_hw_init(): mem_boot 0x%x 0x%x 0x%x 0x%x\n", PAGE0->mem_boot.hpa,
+ 		/* flush out the last writes */
+ 		READ_REG(sba_dev->ioc[i].ioc_hpa + ROPE7_CTL);
+ 
+-		DBG_INIT("	ioc[%d] ROPE_CFG 0x%Lx  ROPE_DBG 0x%Lx\n",
++		DBG_INIT("	ioc[%d] ROPE_CFG %#lx  ROPE_DBG %lx\n",
+ 				i,
+-				READ_REG(sba_dev->ioc[i].ioc_hpa + 0x40),
+-				READ_REG(sba_dev->ioc[i].ioc_hpa + 0x50)
++				(unsigned long) READ_REG(sba_dev->ioc[i].ioc_hpa + 0x40),
++				(unsigned long) READ_REG(sba_dev->ioc[i].ioc_hpa + 0x50)
+ 			);
+-		DBG_INIT("	STATUS_CONTROL 0x%Lx  FLUSH_CTRL 0x%Lx\n",
+-				READ_REG(sba_dev->ioc[i].ioc_hpa + 0x108),
+-				READ_REG(sba_dev->ioc[i].ioc_hpa + 0x400)
++		DBG_INIT("	STATUS_CONTROL %#lx  FLUSH_CTRL %#lx\n",
++				(unsigned long) READ_REG(sba_dev->ioc[i].ioc_hpa + 0x108),
++				(unsigned long) READ_REG(sba_dev->ioc[i].ioc_hpa + 0x400)
+ 			);
+ 
+ 		if (IS_PLUTO(sba_dev->dev)) {
+@@ -1737,7 +1737,7 @@ sba_common_init(struct sba_device *sba_dev)
+ #ifdef ASSERT_PDIR_SANITY
+ 		/* Mark first bit busy - ie no IOVA 0 */
+ 		sba_dev->ioc[i].res_map[0] = 0x80;
+-		sba_dev->ioc[i].pdir_base[0] = 0xeeffc0addbba0080ULL;
++		sba_dev->ioc[i].pdir_base[0] = (__force __le64) 0xeeffc0addbba0080ULL;
+ #endif
+ 
+ 		/* Third (and last) part of PIRANHA BUG */
+diff --git a/drivers/platform/mellanox/Kconfig b/drivers/platform/mellanox/Kconfig
+index 30b50920b278c..f7dfa0e785fd6 100644
+--- a/drivers/platform/mellanox/Kconfig
++++ b/drivers/platform/mellanox/Kconfig
+@@ -60,6 +60,7 @@ config MLXBF_BOOTCTL
+ 	tristate "Mellanox BlueField Firmware Boot Control driver"
+ 	depends on ARM64
+ 	depends on ACPI
++	depends on NET
+ 	help
+ 	  The Mellanox BlueField firmware implements functionality to
+ 	  request swapping the primary and alternate eMMC boot partition,
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index fdf7da06af306..d85d895fee894 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -478,6 +478,15 @@ static const struct dmi_system_id asus_quirks[] = {
+ 		},
+ 		.driver_data = &quirk_asus_tablet_mode,
+ 	},
++	{
++		.callback = dmi_matched,
++		.ident = "ASUS ROG FLOW X16",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "GV601V"),
++		},
++		.driver_data = &quirk_asus_tablet_mode,
++	},
+ 	{
+ 		.callback = dmi_matched,
+ 		.ident = "ASUS VivoBook E410MA",
+diff --git a/drivers/platform/x86/intel_scu_ipc.c b/drivers/platform/x86/intel_scu_ipc.c
+index 6851d10d65825..a68df41334035 100644
+--- a/drivers/platform/x86/intel_scu_ipc.c
++++ b/drivers/platform/x86/intel_scu_ipc.c
+@@ -19,6 +19,7 @@
+ #include <linux/init.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
+ 
+@@ -231,19 +232,15 @@ static inline u32 ipc_data_readl(struct intel_scu_ipc_dev *scu, u32 offset)
+ /* Wait till scu status is busy */
+ static inline int busy_loop(struct intel_scu_ipc_dev *scu)
+ {
+-	unsigned long end = jiffies + IPC_TIMEOUT;
+-
+-	do {
+-		u32 status;
+-
+-		status = ipc_read_status(scu);
+-		if (!(status & IPC_STATUS_BUSY))
+-			return (status & IPC_STATUS_ERR) ? -EIO : 0;
++	u8 status;
++	int err;
+ 
+-		usleep_range(50, 100);
+-	} while (time_before(jiffies, end));
++	err = readx_poll_timeout(ipc_read_status, scu, status, !(status & IPC_STATUS_BUSY),
++				 100, jiffies_to_usecs(IPC_TIMEOUT));
++	if (err)
++		return err;
+ 
+-	return -ETIMEDOUT;
++	return (status & IPC_STATUS_ERR) ? -EIO : 0;
+ }
+ 
+ /* Wait till ipc ioc interrupt is received or timeout in 10 HZ */
+@@ -251,10 +248,12 @@ static inline int ipc_wait_for_interrupt(struct intel_scu_ipc_dev *scu)
+ {
+ 	int status;
+ 
+-	if (!wait_for_completion_timeout(&scu->cmd_complete, IPC_TIMEOUT))
+-		return -ETIMEDOUT;
++	wait_for_completion_timeout(&scu->cmd_complete, IPC_TIMEOUT);
+ 
+ 	status = ipc_read_status(scu);
++	if (status & IPC_STATUS_BUSY)
++		return -ETIMEDOUT;
++
+ 	if (status & IPC_STATUS_ERR)
+ 		return -EIO;
+ 
+@@ -266,6 +265,24 @@ static int intel_scu_ipc_check_status(struct intel_scu_ipc_dev *scu)
+ 	return scu->irq > 0 ? ipc_wait_for_interrupt(scu) : busy_loop(scu);
+ }
+ 
++static struct intel_scu_ipc_dev *intel_scu_ipc_get(struct intel_scu_ipc_dev *scu)
++{
++	u8 status;
++
++	if (!scu)
++		scu = ipcdev;
++	if (!scu)
++		return ERR_PTR(-ENODEV);
++
++	status = ipc_read_status(scu);
++	if (status & IPC_STATUS_BUSY) {
++		dev_dbg(&scu->dev, "device is busy\n");
++		return ERR_PTR(-EBUSY);
++	}
++
++	return scu;
++}
++
+ /* Read/Write power control(PMIC in Langwell, MSIC in PenWell) registers */
+ static int pwr_reg_rdwr(struct intel_scu_ipc_dev *scu, u16 *addr, u8 *data,
+ 			u32 count, u32 op, u32 id)
+@@ -279,11 +296,10 @@ static int pwr_reg_rdwr(struct intel_scu_ipc_dev *scu, u16 *addr, u8 *data,
+ 	memset(cbuf, 0, sizeof(cbuf));
+ 
+ 	mutex_lock(&ipclock);
+-	if (!scu)
+-		scu = ipcdev;
+-	if (!scu) {
++	scu = intel_scu_ipc_get(scu);
++	if (IS_ERR(scu)) {
+ 		mutex_unlock(&ipclock);
+-		return -ENODEV;
++		return PTR_ERR(scu);
+ 	}
+ 
+ 	for (nc = 0; nc < count; nc++, offset += 2) {
+@@ -438,13 +454,12 @@ int intel_scu_ipc_dev_simple_command(struct intel_scu_ipc_dev *scu, int cmd,
+ 	int err;
+ 
+ 	mutex_lock(&ipclock);
+-	if (!scu)
+-		scu = ipcdev;
+-	if (!scu) {
++	scu = intel_scu_ipc_get(scu);
++	if (IS_ERR(scu)) {
+ 		mutex_unlock(&ipclock);
+-		return -ENODEV;
++		return PTR_ERR(scu);
+ 	}
+-	scu = ipcdev;
++
+ 	cmdval = sub << 12 | cmd;
+ 	ipc_command(scu, cmdval);
+ 	err = intel_scu_ipc_check_status(scu);
+@@ -484,11 +499,10 @@ int intel_scu_ipc_dev_command_with_size(struct intel_scu_ipc_dev *scu, int cmd,
+ 		return -EINVAL;
+ 
+ 	mutex_lock(&ipclock);
+-	if (!scu)
+-		scu = ipcdev;
+-	if (!scu) {
++	scu = intel_scu_ipc_get(scu);
++	if (IS_ERR(scu)) {
+ 		mutex_unlock(&ipclock);
+-		return -ENODEV;
++		return PTR_ERR(scu);
+ 	}
+ 
+ 	memcpy(inbuf, in, inlen);
+diff --git a/drivers/power/supply/ab8500_btemp.c b/drivers/power/supply/ab8500_btemp.c
+index 6f83e99d2eb72..ce36d6ca34226 100644
+--- a/drivers/power/supply/ab8500_btemp.c
++++ b/drivers/power/supply/ab8500_btemp.c
+@@ -115,7 +115,6 @@ struct ab8500_btemp {
+ static enum power_supply_property ab8500_btemp_props[] = {
+ 	POWER_SUPPLY_PROP_PRESENT,
+ 	POWER_SUPPLY_PROP_ONLINE,
+-	POWER_SUPPLY_PROP_TECHNOLOGY,
+ 	POWER_SUPPLY_PROP_TEMP,
+ };
+ 
+@@ -532,12 +531,6 @@ static int ab8500_btemp_get_property(struct power_supply *psy,
+ 		else
+ 			val->intval = 1;
+ 		break;
+-	case POWER_SUPPLY_PROP_TECHNOLOGY:
+-		if (di->bm->bi)
+-			val->intval = di->bm->bi->technology;
+-		else
+-			val->intval = POWER_SUPPLY_TECHNOLOGY_UNKNOWN;
+-		break;
+ 	case POWER_SUPPLY_PROP_TEMP:
+ 		val->intval = ab8500_btemp_get_temp(di);
+ 		break;
+@@ -662,7 +655,7 @@ static char *supply_interface[] = {
+ 
+ static const struct power_supply_desc ab8500_btemp_desc = {
+ 	.name			= "ab8500_btemp",
+-	.type			= POWER_SUPPLY_TYPE_BATTERY,
++	.type			= POWER_SUPPLY_TYPE_UNKNOWN,
+ 	.properties		= ab8500_btemp_props,
+ 	.num_properties		= ARRAY_SIZE(ab8500_btemp_props),
+ 	.get_property		= ab8500_btemp_get_property,
+diff --git a/drivers/power/supply/ab8500_chargalg.c b/drivers/power/supply/ab8500_chargalg.c
+index ea4ad61d4c7e2..2205ea0834a61 100644
+--- a/drivers/power/supply/ab8500_chargalg.c
++++ b/drivers/power/supply/ab8500_chargalg.c
+@@ -1720,7 +1720,7 @@ static char *supply_interface[] = {
+ 
+ static const struct power_supply_desc ab8500_chargalg_desc = {
+ 	.name			= "ab8500_chargalg",
+-	.type			= POWER_SUPPLY_TYPE_BATTERY,
++	.type			= POWER_SUPPLY_TYPE_UNKNOWN,
+ 	.properties		= ab8500_chargalg_props,
+ 	.num_properties		= ARRAY_SIZE(ab8500_chargalg_props),
+ 	.get_property		= ab8500_chargalg_get_property,
+diff --git a/drivers/power/supply/mt6370-charger.c b/drivers/power/supply/mt6370-charger.c
+index f27dae5043f5b..a9641bd3d8cf8 100644
+--- a/drivers/power/supply/mt6370-charger.c
++++ b/drivers/power/supply/mt6370-charger.c
+@@ -324,7 +324,7 @@ static int mt6370_chg_toggle_cfo(struct mt6370_priv *priv)
+ 
+ 	if (fl_strobe) {
+ 		dev_err(priv->dev, "Flash led is still in strobe mode\n");
+-		return ret;
++		return -EINVAL;
+ 	}
+ 
+ 	/* cfo off */
+diff --git a/drivers/power/supply/power_supply_sysfs.c b/drivers/power/supply/power_supply_sysfs.c
+index 06e5b6b0e255c..d483a81560ab0 100644
+--- a/drivers/power/supply/power_supply_sysfs.c
++++ b/drivers/power/supply/power_supply_sysfs.c
+@@ -482,6 +482,13 @@ int power_supply_uevent(const struct device *dev, struct kobj_uevent_env *env)
+ 	if (ret)
+ 		return ret;
+ 
++	/*
++	 * Kernel generates KOBJ_REMOVE uevent in device removal path, after
++	 * resources have been freed. Exit early to avoid use-after-free.
++	 */
++	if (psy->removing)
++		return 0;
++
+ 	prop_buf = (char *)get_zeroed_page(GFP_KERNEL);
+ 	if (!prop_buf)
+ 		return -ENOMEM;
+diff --git a/drivers/power/supply/rk817_charger.c b/drivers/power/supply/rk817_charger.c
+index 8328bcea1a299..f64daf5a41d93 100644
+--- a/drivers/power/supply/rk817_charger.c
++++ b/drivers/power/supply/rk817_charger.c
+@@ -1045,6 +1045,13 @@ static void rk817_charging_monitor(struct work_struct *work)
+ 	queue_delayed_work(system_wq, &charger->work, msecs_to_jiffies(8000));
+ }
+ 
++static void rk817_cleanup_node(void *data)
++{
++	struct device_node *node = data;
++
++	of_node_put(node);
++}
++
+ static int rk817_charger_probe(struct platform_device *pdev)
+ {
+ 	struct rk808 *rk808 = dev_get_drvdata(pdev->dev.parent);
+@@ -1061,11 +1068,13 @@ static int rk817_charger_probe(struct platform_device *pdev)
+ 	if (!node)
+ 		return -ENODEV;
+ 
++	ret = devm_add_action_or_reset(&pdev->dev, rk817_cleanup_node, node);
++	if (ret)
++		return ret;
++
+ 	charger = devm_kzalloc(&pdev->dev, sizeof(*charger), GFP_KERNEL);
+-	if (!charger) {
+-		of_node_put(node);
++	if (!charger)
+ 		return -ENOMEM;
+-	}
+ 
+ 	charger->rk808 = rk808;
+ 
+@@ -1211,3 +1220,4 @@ MODULE_DESCRIPTION("Battery power supply driver for RK817 PMIC");
+ MODULE_AUTHOR("Maya Matuszczyk <maccraft123mc@gmail.com>");
+ MODULE_AUTHOR("Chris Morgan <macromorgan@hotmail.com>");
+ MODULE_LICENSE("GPL");
++MODULE_ALIAS("platform:rk817-charger");
+diff --git a/drivers/power/supply/rt9467-charger.c b/drivers/power/supply/rt9467-charger.c
+index 683adb18253dd..fdfdc83ab0458 100644
+--- a/drivers/power/supply/rt9467-charger.c
++++ b/drivers/power/supply/rt9467-charger.c
+@@ -598,8 +598,8 @@ static int rt9467_run_aicl(struct rt9467_chg_data *data)
+ 
+ 	reinit_completion(&data->aicl_done);
+ 	ret = wait_for_completion_timeout(&data->aicl_done, msecs_to_jiffies(3500));
+-	if (ret)
+-		return ret;
++	if (ret == 0)
++		return -ETIMEDOUT;
+ 
+ 	ret = rt9467_get_value_from_ranges(data, F_IAICR, RT9467_RANGE_IAICR, &aicr_get);
+ 	if (ret) {
+diff --git a/drivers/power/supply/ucs1002_power.c b/drivers/power/supply/ucs1002_power.c
+index 954feba6600b8..7970843a4f480 100644
+--- a/drivers/power/supply/ucs1002_power.c
++++ b/drivers/power/supply/ucs1002_power.c
+@@ -384,7 +384,8 @@ static int ucs1002_get_property(struct power_supply *psy,
+ 	case POWER_SUPPLY_PROP_USB_TYPE:
+ 		return ucs1002_get_usb_type(info, val);
+ 	case POWER_SUPPLY_PROP_HEALTH:
+-		return val->intval = info->health;
++		val->intval = info->health;
++		return 0;
+ 	case POWER_SUPPLY_PROP_PRESENT:
+ 		val->intval = info->present;
+ 		return 0;
+diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
+index 9ab8555180a3a..8e14cea15f980 100644
+--- a/drivers/scsi/iscsi_tcp.c
++++ b/drivers/scsi/iscsi_tcp.c
+@@ -724,6 +724,10 @@ iscsi_sw_tcp_conn_bind(struct iscsi_cls_session *cls_session,
+ 		return -EEXIST;
+ 	}
+ 
++	err = -EINVAL;
++	if (!sk_is_tcp(sock->sk))
++		goto free_socket;
++
+ 	err = iscsi_conn_bind(cls_session, cls_conn, is_leading);
+ 	if (err)
+ 		goto free_socket;
+diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
+index 73cd25f30ca58..00f22058ccf4e 100644
+--- a/drivers/scsi/pm8001/pm8001_hwi.c
++++ b/drivers/scsi/pm8001/pm8001_hwi.c
+@@ -4180,7 +4180,7 @@ pm8001_chip_phy_start_req(struct pm8001_hba_info *pm8001_ha, u8 phy_id)
+ 	payload.sas_identify.dev_type = SAS_END_DEVICE;
+ 	payload.sas_identify.initiator_bits = SAS_PROTOCOL_ALL;
+ 	memcpy(payload.sas_identify.sas_addr,
+-		pm8001_ha->sas_addr, SAS_ADDR_SIZE);
++		&pm8001_ha->phy[phy_id].dev_sas_addr, SAS_ADDR_SIZE);
+ 	payload.sas_identify.phy_id = phy_id;
+ 
+ 	return pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload,
+diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
+index 39a12ee94a72f..e543bc36c84df 100644
+--- a/drivers/scsi/pm8001/pm80xx_hwi.c
++++ b/drivers/scsi/pm8001/pm80xx_hwi.c
+@@ -3671,10 +3671,12 @@ static int mpi_set_controller_config_resp(struct pm8001_hba_info *pm8001_ha,
+ 			(struct set_ctrl_cfg_resp *)(piomb + 4);
+ 	u32 status = le32_to_cpu(pPayload->status);
+ 	u32 err_qlfr_pgcd = le32_to_cpu(pPayload->err_qlfr_pgcd);
++	u32 tag = le32_to_cpu(pPayload->tag);
+ 
+ 	pm8001_dbg(pm8001_ha, MSG,
+ 		   "SET CONTROLLER RESP: status 0x%x qlfr_pgcd 0x%x\n",
+ 		   status, err_qlfr_pgcd);
++	pm8001_tag_free(pm8001_ha, tag);
+ 
+ 	return 0;
+ }
+@@ -4676,7 +4678,7 @@ pm80xx_chip_phy_start_req(struct pm8001_hba_info *pm8001_ha, u8 phy_id)
+ 	payload.sas_identify.dev_type = SAS_END_DEVICE;
+ 	payload.sas_identify.initiator_bits = SAS_PROTOCOL_ALL;
+ 	memcpy(payload.sas_identify.sas_addr,
+-	  &pm8001_ha->sas_addr, SAS_ADDR_SIZE);
++		&pm8001_ha->phy[phy_id].dev_sas_addr, SAS_ADDR_SIZE);
+ 	payload.sas_identify.phy_id = phy_id;
+ 
+ 	return pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload,
+diff --git a/drivers/scsi/qedf/qedf_io.c b/drivers/scsi/qedf/qedf_io.c
+index 4750ec5789a80..10fe3383855c0 100644
+--- a/drivers/scsi/qedf/qedf_io.c
++++ b/drivers/scsi/qedf/qedf_io.c
+@@ -1904,6 +1904,7 @@ int qedf_initiate_abts(struct qedf_ioreq *io_req, bool return_scsi_cmd_on_abts)
+ 		goto drop_rdata_kref;
+ 	}
+ 
++	spin_lock_irqsave(&fcport->rport_lock, flags);
+ 	if (!test_bit(QEDF_CMD_OUTSTANDING, &io_req->flags) ||
+ 	    test_bit(QEDF_CMD_IN_CLEANUP, &io_req->flags) ||
+ 	    test_bit(QEDF_CMD_IN_ABORT, &io_req->flags)) {
+@@ -1911,17 +1912,20 @@ int qedf_initiate_abts(struct qedf_ioreq *io_req, bool return_scsi_cmd_on_abts)
+ 			 "io_req xid=0x%x sc_cmd=%p already in cleanup or abort processing or already completed.\n",
+ 			 io_req->xid, io_req->sc_cmd);
+ 		rc = 1;
++		spin_unlock_irqrestore(&fcport->rport_lock, flags);
+ 		goto drop_rdata_kref;
+ 	}
+ 
++	/* Set the command type to abort */
++	io_req->cmd_type = QEDF_ABTS;
++	spin_unlock_irqrestore(&fcport->rport_lock, flags);
++
+ 	kref_get(&io_req->refcount);
+ 
+ 	xid = io_req->xid;
+ 	qedf->control_requests++;
+ 	qedf->packet_aborts++;
+ 
+-	/* Set the command type to abort */
+-	io_req->cmd_type = QEDF_ABTS;
+ 	io_req->return_scsi_cmd_on_abts = return_scsi_cmd_on_abts;
+ 
+ 	set_bit(QEDF_CMD_IN_ABORT, &io_req->flags);
+@@ -2210,7 +2214,9 @@ process_els:
+ 		  refcount, fcport, fcport->rdata->ids.port_id);
+ 
+ 	/* Cleanup cmds re-use the same TID as the original I/O */
++	spin_lock_irqsave(&fcport->rport_lock, flags);
+ 	io_req->cmd_type = QEDF_CLEANUP;
++	spin_unlock_irqrestore(&fcport->rport_lock, flags);
+ 	io_req->return_scsi_cmd_on_abts = return_scsi_cmd_on_abts;
+ 
+ 	init_completion(&io_req->cleanup_done);
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 7825765c936cd..91f3f1d7098eb 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -2805,6 +2805,8 @@ void qedf_process_cqe(struct qedf_ctx *qedf, struct fcoe_cqe *cqe)
+ 	struct qedf_ioreq *io_req;
+ 	struct qedf_rport *fcport;
+ 	u32 comp_type;
++	u8 io_comp_type;
++	unsigned long flags;
+ 
+ 	comp_type = (cqe->cqe_data >> FCOE_CQE_CQE_TYPE_SHIFT) &
+ 	    FCOE_CQE_CQE_TYPE_MASK;
+@@ -2838,11 +2840,14 @@ void qedf_process_cqe(struct qedf_ctx *qedf, struct fcoe_cqe *cqe)
+ 		return;
+ 	}
+ 
++	spin_lock_irqsave(&fcport->rport_lock, flags);
++	io_comp_type = io_req->cmd_type;
++	spin_unlock_irqrestore(&fcport->rport_lock, flags);
+ 
+ 	switch (comp_type) {
+ 	case FCOE_GOOD_COMPLETION_CQE_TYPE:
+ 		atomic_inc(&fcport->free_sqes);
+-		switch (io_req->cmd_type) {
++		switch (io_comp_type) {
+ 		case QEDF_SCSI_CMD:
+ 			qedf_scsi_completion(qedf, cqe, io_req);
+ 			break;
+diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
+index d0911bc28663a..89367c4bf0ef5 100644
+--- a/drivers/scsi/scsi.c
++++ b/drivers/scsi/scsi.c
+@@ -613,6 +613,17 @@ void scsi_cdl_check(struct scsi_device *sdev)
+ 	bool cdl_supported;
+ 	unsigned char *buf;
+ 
++	/*
++	 * Support for CDL was defined in SPC-5. Ignore devices reporting an
++	 * lower SPC version. This also avoids problems with old drives choking
++	 * on MAINTENANCE_IN / MI_REPORT_SUPPORTED_OPERATION_CODES with a
++	 * service action specified, as done in scsi_cdl_check_cmd().
++	 */
++	if (sdev->scsi_level < SCSI_SPC_5) {
++		sdev->cdl_supported = 0;
++		return;
++	}
++
+ 	buf = kmalloc(SCSI_CDL_CHECK_BUF_LEN, GFP_KERNEL);
+ 	if (!buf) {
+ 		sdev->cdl_supported = 0;
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index aa13feb17c626..97669657a9976 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -822,7 +822,7 @@ static int scsi_probe_lun(struct scsi_device *sdev, unsigned char *inq_result,
+ 	 * device is attached at LUN 0 (SCSI_SCAN_TARGET_PRESENT) so
+ 	 * non-zero LUNs can be scanned.
+ 	 */
+-	sdev->scsi_level = inq_result[2] & 0x07;
++	sdev->scsi_level = inq_result[2] & 0x0f;
+ 	if (sdev->scsi_level >= 2 ||
+ 	    (sdev->scsi_level == 1 && (inq_result[3] & 0x0f) == 1))
+ 		sdev->scsi_level++;
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 3c668cfb146d3..d6535cbb4e05e 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -213,18 +213,32 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
+ }
+ 
+ static ssize_t
+-manage_start_stop_show(struct device *dev, struct device_attribute *attr,
+-		       char *buf)
++manage_start_stop_show(struct device *dev,
++		       struct device_attribute *attr, char *buf)
+ {
+ 	struct scsi_disk *sdkp = to_scsi_disk(dev);
+ 	struct scsi_device *sdp = sdkp->device;
+ 
+-	return sprintf(buf, "%u\n", sdp->manage_start_stop);
++	return sysfs_emit(buf, "%u\n",
++			  sdp->manage_system_start_stop &&
++			  sdp->manage_runtime_start_stop);
+ }
++static DEVICE_ATTR_RO(manage_start_stop);
+ 
+ static ssize_t
+-manage_start_stop_store(struct device *dev, struct device_attribute *attr,
+-			const char *buf, size_t count)
++manage_system_start_stop_show(struct device *dev,
++			      struct device_attribute *attr, char *buf)
++{
++	struct scsi_disk *sdkp = to_scsi_disk(dev);
++	struct scsi_device *sdp = sdkp->device;
++
++	return sysfs_emit(buf, "%u\n", sdp->manage_system_start_stop);
++}
++
++static ssize_t
++manage_system_start_stop_store(struct device *dev,
++			       struct device_attribute *attr,
++			       const char *buf, size_t count)
+ {
+ 	struct scsi_disk *sdkp = to_scsi_disk(dev);
+ 	struct scsi_device *sdp = sdkp->device;
+@@ -236,11 +250,42 @@ manage_start_stop_store(struct device *dev, struct device_attribute *attr,
+ 	if (kstrtobool(buf, &v))
+ 		return -EINVAL;
+ 
+-	sdp->manage_start_stop = v;
++	sdp->manage_system_start_stop = v;
+ 
+ 	return count;
+ }
+-static DEVICE_ATTR_RW(manage_start_stop);
++static DEVICE_ATTR_RW(manage_system_start_stop);
++
++static ssize_t
++manage_runtime_start_stop_show(struct device *dev,
++			       struct device_attribute *attr, char *buf)
++{
++	struct scsi_disk *sdkp = to_scsi_disk(dev);
++	struct scsi_device *sdp = sdkp->device;
++
++	return sysfs_emit(buf, "%u\n", sdp->manage_runtime_start_stop);
++}
++
++static ssize_t
++manage_runtime_start_stop_store(struct device *dev,
++				struct device_attribute *attr,
++				const char *buf, size_t count)
++{
++	struct scsi_disk *sdkp = to_scsi_disk(dev);
++	struct scsi_device *sdp = sdkp->device;
++	bool v;
++
++	if (!capable(CAP_SYS_ADMIN))
++		return -EACCES;
++
++	if (kstrtobool(buf, &v))
++		return -EINVAL;
++
++	sdp->manage_runtime_start_stop = v;
++
++	return count;
++}
++static DEVICE_ATTR_RW(manage_runtime_start_stop);
+ 
+ static ssize_t
+ allow_restart_show(struct device *dev, struct device_attribute *attr, char *buf)
+@@ -572,6 +617,8 @@ static struct attribute *sd_disk_attrs[] = {
+ 	&dev_attr_FUA.attr,
+ 	&dev_attr_allow_restart.attr,
+ 	&dev_attr_manage_start_stop.attr,
++	&dev_attr_manage_system_start_stop.attr,
++	&dev_attr_manage_runtime_start_stop.attr,
+ 	&dev_attr_protection_type.attr,
+ 	&dev_attr_protection_mode.attr,
+ 	&dev_attr_app_tag_own.attr,
+@@ -3733,7 +3780,8 @@ static int sd_remove(struct device *dev)
+ 
+ 	device_del(&sdkp->disk_dev);
+ 	del_gendisk(sdkp->disk);
+-	sd_shutdown(dev);
++	if (!sdkp->suspended)
++		sd_shutdown(dev);
+ 
+ 	put_disk(sdkp->disk);
+ 	return 0;
+@@ -3810,13 +3858,20 @@ static void sd_shutdown(struct device *dev)
+ 		sd_sync_cache(sdkp, NULL);
+ 	}
+ 
+-	if (system_state != SYSTEM_RESTART && sdkp->device->manage_start_stop) {
++	if (system_state != SYSTEM_RESTART &&
++	    sdkp->device->manage_system_start_stop) {
+ 		sd_printk(KERN_NOTICE, sdkp, "Stopping disk\n");
+ 		sd_start_stop_device(sdkp, 0);
+ 	}
+ }
+ 
+-static int sd_suspend_common(struct device *dev, bool ignore_stop_errors)
++static inline bool sd_do_start_stop(struct scsi_device *sdev, bool runtime)
++{
++	return (sdev->manage_system_start_stop && !runtime) ||
++		(sdev->manage_runtime_start_stop && runtime);
++}
++
++static int sd_suspend_common(struct device *dev, bool runtime)
+ {
+ 	struct scsi_disk *sdkp = dev_get_drvdata(dev);
+ 	struct scsi_sense_hdr sshdr;
+@@ -3848,15 +3903,18 @@ static int sd_suspend_common(struct device *dev, bool ignore_stop_errors)
+ 		}
+ 	}
+ 
+-	if (sdkp->device->manage_start_stop) {
++	if (sd_do_start_stop(sdkp->device, runtime)) {
+ 		if (!sdkp->device->silence_suspend)
+ 			sd_printk(KERN_NOTICE, sdkp, "Stopping disk\n");
+ 		/* an error is not worth aborting a system sleep */
+ 		ret = sd_start_stop_device(sdkp, 0);
+-		if (ignore_stop_errors)
++		if (!runtime)
+ 			ret = 0;
+ 	}
+ 
++	if (!ret)
++		sdkp->suspended = true;
++
+ 	return ret;
+ }
+ 
+@@ -3865,15 +3923,15 @@ static int sd_suspend_system(struct device *dev)
+ 	if (pm_runtime_suspended(dev))
+ 		return 0;
+ 
+-	return sd_suspend_common(dev, true);
++	return sd_suspend_common(dev, false);
+ }
+ 
+ static int sd_suspend_runtime(struct device *dev)
+ {
+-	return sd_suspend_common(dev, false);
++	return sd_suspend_common(dev, true);
+ }
+ 
+-static int sd_resume(struct device *dev)
++static int sd_resume(struct device *dev, bool runtime)
+ {
+ 	struct scsi_disk *sdkp = dev_get_drvdata(dev);
+ 	int ret = 0;
+@@ -3881,16 +3939,21 @@ static int sd_resume(struct device *dev)
+ 	if (!sdkp)	/* E.g.: runtime resume at the start of sd_probe() */
+ 		return 0;
+ 
+-	if (!sdkp->device->manage_start_stop)
++	if (!sd_do_start_stop(sdkp->device, runtime)) {
++		sdkp->suspended = false;
+ 		return 0;
++	}
+ 
+ 	if (!sdkp->device->no_start_on_resume) {
+ 		sd_printk(KERN_NOTICE, sdkp, "Starting disk\n");
+ 		ret = sd_start_stop_device(sdkp, 1);
+ 	}
+ 
+-	if (!ret)
++	if (!ret) {
+ 		opal_unlock_from_suspend(sdkp->opal_dev);
++		sdkp->suspended = false;
++	}
++
+ 	return ret;
+ }
+ 
+@@ -3899,7 +3962,7 @@ static int sd_resume_system(struct device *dev)
+ 	if (pm_runtime_suspended(dev))
+ 		return 0;
+ 
+-	return sd_resume(dev);
++	return sd_resume(dev, false);
+ }
+ 
+ static int sd_resume_runtime(struct device *dev)
+@@ -3926,7 +3989,7 @@ static int sd_resume_runtime(struct device *dev)
+ 				  "Failed to clear sense data\n");
+ 	}
+ 
+-	return sd_resume(dev);
++	return sd_resume(dev, true);
+ }
+ 
+ /**
+diff --git a/drivers/scsi/sd.h b/drivers/scsi/sd.h
+index 5eea762f84d18..409dda5350d10 100644
+--- a/drivers/scsi/sd.h
++++ b/drivers/scsi/sd.h
+@@ -131,6 +131,7 @@ struct scsi_disk {
+ 	u8		provisioning_mode;
+ 	u8		zeroing_mode;
+ 	u8		nr_actuators;		/* Number of actuators */
++	bool		suspended;	/* Disk is suspended (stopped) */
+ 	unsigned	ATO : 1;	/* state of disk ATO bit */
+ 	unsigned	cache_override : 1; /* temp override of WCE,RCD */
+ 	unsigned	WCE : 1;	/* state of disk WCE bit */
+diff --git a/drivers/soc/imx/soc-imx8m.c b/drivers/soc/imx/soc-imx8m.c
+index 1dcd243df5677..ec87d9d878f30 100644
+--- a/drivers/soc/imx/soc-imx8m.c
++++ b/drivers/soc/imx/soc-imx8m.c
+@@ -100,6 +100,7 @@ static void __init imx8mm_soc_uid(void)
+ {
+ 	void __iomem *ocotp_base;
+ 	struct device_node *np;
++	struct clk *clk;
+ 	u32 offset = of_machine_is_compatible("fsl,imx8mp") ?
+ 		     IMX8MP_OCOTP_UID_OFFSET : 0;
+ 
+@@ -109,11 +110,20 @@ static void __init imx8mm_soc_uid(void)
+ 
+ 	ocotp_base = of_iomap(np, 0);
+ 	WARN_ON(!ocotp_base);
++	clk = of_clk_get_by_name(np, NULL);
++	if (IS_ERR(clk)) {
++		WARN_ON(IS_ERR(clk));
++		return;
++	}
++
++	clk_prepare_enable(clk);
+ 
+ 	soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH + offset);
+ 	soc_uid <<= 32;
+ 	soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW + offset);
+ 
++	clk_disable_unprepare(clk);
++	clk_put(clk);
+ 	iounmap(ocotp_base);
+ 	of_node_put(np);
+ }
+diff --git a/drivers/spi/spi-gxp.c b/drivers/spi/spi-gxp.c
+index 684d63f402f34..aba08d06c251c 100644
+--- a/drivers/spi/spi-gxp.c
++++ b/drivers/spi/spi-gxp.c
+@@ -195,7 +195,7 @@ static ssize_t gxp_spi_write(struct gxp_spi_chip *chip, const struct spi_mem_op
+ 		return ret;
+ 	}
+ 
+-	return write_len;
++	return 0;
+ }
+ 
+ static int do_gxp_exec_mem_op(struct spi_mem *mem, const struct spi_mem_op *op)
+diff --git a/drivers/spi/spi-intel-pci.c b/drivers/spi/spi-intel-pci.c
+index a7381e774b953..57d767a68e7b2 100644
+--- a/drivers/spi/spi-intel-pci.c
++++ b/drivers/spi/spi-intel-pci.c
+@@ -72,6 +72,7 @@ static const struct pci_device_id intel_spi_pci_ids[] = {
+ 	{ PCI_VDEVICE(INTEL, 0x4da4), (unsigned long)&bxt_info },
+ 	{ PCI_VDEVICE(INTEL, 0x51a4), (unsigned long)&cnl_info },
+ 	{ PCI_VDEVICE(INTEL, 0x54a4), (unsigned long)&cnl_info },
++	{ PCI_VDEVICE(INTEL, 0x5794), (unsigned long)&cnl_info },
+ 	{ PCI_VDEVICE(INTEL, 0x7a24), (unsigned long)&cnl_info },
+ 	{ PCI_VDEVICE(INTEL, 0x7aa4), (unsigned long)&cnl_info },
+ 	{ PCI_VDEVICE(INTEL, 0x7e23), (unsigned long)&cnl_info },
+diff --git a/drivers/spi/spi-nxp-fspi.c b/drivers/spi/spi-nxp-fspi.c
+index 5440176557875..8e44de084bbe3 100644
+--- a/drivers/spi/spi-nxp-fspi.c
++++ b/drivers/spi/spi-nxp-fspi.c
+@@ -1085,6 +1085,13 @@ static int nxp_fspi_default_setup(struct nxp_fspi *f)
+ 	fspi_writel(f, FSPI_AHBCR_PREF_EN | FSPI_AHBCR_RDADDROPT,
+ 		 base + FSPI_AHBCR);
+ 
++	/* Reset the FLSHxCR1 registers. */
++	reg = FSPI_FLSHXCR1_TCSH(0x3) | FSPI_FLSHXCR1_TCSS(0x3);
++	fspi_writel(f, reg, base + FSPI_FLSHA1CR1);
++	fspi_writel(f, reg, base + FSPI_FLSHA2CR1);
++	fspi_writel(f, reg, base + FSPI_FLSHB1CR1);
++	fspi_writel(f, reg, base + FSPI_FLSHB2CR1);
++
+ 	/* AHB Read - Set lut sequence ID for all CS. */
+ 	fspi_writel(f, SEQID_LUT, base + FSPI_FLSHA1CR2);
+ 	fspi_writel(f, SEQID_LUT, base + FSPI_FLSHA2CR2);
+diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
+index 7ddf9db776b06..4737a36e5d4e9 100644
+--- a/drivers/spi/spi-stm32.c
++++ b/drivers/spi/spi-stm32.c
+@@ -275,6 +275,7 @@ struct stm32_spi_cfg {
+  * @fifo_size: size of the embedded fifo in bytes
+  * @cur_midi: master inter-data idleness in ns
+  * @cur_speed: speed configured in Hz
++ * @cur_half_period: time of a half bit in us
+  * @cur_bpw: number of bits in a single SPI data frame
+  * @cur_fthlv: fifo threshold level (data frames in a single data packet)
+  * @cur_comm: SPI communication mode
+@@ -302,6 +303,7 @@ struct stm32_spi {
+ 
+ 	unsigned int cur_midi;
+ 	unsigned int cur_speed;
++	unsigned int cur_half_period;
+ 	unsigned int cur_bpw;
+ 	unsigned int cur_fthlv;
+ 	unsigned int cur_comm;
+@@ -466,6 +468,8 @@ static int stm32_spi_prepare_mbr(struct stm32_spi *spi, u32 speed_hz,
+ 
+ 	spi->cur_speed = spi->clk_rate / (1 << mbrdiv);
+ 
++	spi->cur_half_period = DIV_ROUND_CLOSEST(USEC_PER_SEC, 2 * spi->cur_speed);
++
+ 	return mbrdiv - 1;
+ }
+ 
+@@ -707,6 +711,10 @@ static void stm32h7_spi_disable(struct stm32_spi *spi)
+ 		return;
+ 	}
+ 
++	/* Add a delay to make sure that transmission is ended. */
++	if (spi->cur_half_period)
++		udelay(spi->cur_half_period);
++
+ 	if (spi->cur_usedma && spi->dma_tx)
+ 		dmaengine_terminate_async(spi->dma_tx);
+ 	if (spi->cur_usedma && spi->dma_rx)
+diff --git a/drivers/spi/spi-sun6i.c b/drivers/spi/spi-sun6i.c
+index cec2747235abf..26abd26dc3652 100644
+--- a/drivers/spi/spi-sun6i.c
++++ b/drivers/spi/spi-sun6i.c
+@@ -106,6 +106,7 @@ struct sun6i_spi {
+ 	struct reset_control	*rstc;
+ 
+ 	struct completion	done;
++	struct completion	dma_rx_done;
+ 
+ 	const u8		*tx_buf;
+ 	u8			*rx_buf;
+@@ -200,6 +201,13 @@ static size_t sun6i_spi_max_transfer_size(struct spi_device *spi)
+ 	return SUN6I_MAX_XFER_SIZE - 1;
+ }
+ 
++static void sun6i_spi_dma_rx_cb(void *param)
++{
++	struct sun6i_spi *sspi = param;
++
++	complete(&sspi->dma_rx_done);
++}
++
+ static int sun6i_spi_prepare_dma(struct sun6i_spi *sspi,
+ 				 struct spi_transfer *tfr)
+ {
+@@ -211,7 +219,7 @@ static int sun6i_spi_prepare_dma(struct sun6i_spi *sspi,
+ 		struct dma_slave_config rxconf = {
+ 			.direction = DMA_DEV_TO_MEM,
+ 			.src_addr = sspi->dma_addr_rx,
+-			.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
++			.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE,
+ 			.src_maxburst = 8,
+ 		};
+ 
+@@ -224,6 +232,8 @@ static int sun6i_spi_prepare_dma(struct sun6i_spi *sspi,
+ 						 DMA_PREP_INTERRUPT);
+ 		if (!rxdesc)
+ 			return -EINVAL;
++		rxdesc->callback_param = sspi;
++		rxdesc->callback = sun6i_spi_dma_rx_cb;
+ 	}
+ 
+ 	txdesc = NULL;
+@@ -279,6 +289,7 @@ static int sun6i_spi_transfer_one(struct spi_master *master,
+ 		return -EINVAL;
+ 
+ 	reinit_completion(&sspi->done);
++	reinit_completion(&sspi->dma_rx_done);
+ 	sspi->tx_buf = tfr->tx_buf;
+ 	sspi->rx_buf = tfr->rx_buf;
+ 	sspi->len = tfr->len;
+@@ -479,6 +490,22 @@ static int sun6i_spi_transfer_one(struct spi_master *master,
+ 	start = jiffies;
+ 	timeout = wait_for_completion_timeout(&sspi->done,
+ 					      msecs_to_jiffies(tx_time));
++
++	if (!use_dma) {
++		sun6i_spi_drain_fifo(sspi);
++	} else {
++		if (timeout && rx_len) {
++			/*
++			 * Even though RX on the peripheral side has finished
++			 * RX DMA might still be in flight
++			 */
++			timeout = wait_for_completion_timeout(&sspi->dma_rx_done,
++							      timeout);
++			if (!timeout)
++				dev_warn(&master->dev, "RX DMA timeout\n");
++		}
++	}
++
+ 	end = jiffies;
+ 	if (!timeout) {
+ 		dev_warn(&master->dev,
+@@ -506,7 +533,6 @@ static irqreturn_t sun6i_spi_handler(int irq, void *dev_id)
+ 	/* Transfer complete */
+ 	if (status & SUN6I_INT_CTL_TC) {
+ 		sun6i_spi_write(sspi, SUN6I_INT_STA_REG, SUN6I_INT_CTL_TC);
+-		sun6i_spi_drain_fifo(sspi);
+ 		complete(&sspi->done);
+ 		return IRQ_HANDLED;
+ 	}
+@@ -665,6 +691,7 @@ static int sun6i_spi_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	init_completion(&sspi->done);
++	init_completion(&sspi->dma_rx_done);
+ 
+ 	sspi->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+ 	if (IS_ERR(sspi->rstc)) {
+diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
+index fb2ca9b90eabf..c309dedfd6020 100644
+--- a/drivers/spi/spi-zynqmp-gqspi.c
++++ b/drivers/spi/spi-zynqmp-gqspi.c
+@@ -1342,9 +1342,9 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ clk_dis_all:
+-	pm_runtime_put_sync(&pdev->dev);
+-	pm_runtime_set_suspended(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
++	pm_runtime_set_suspended(&pdev->dev);
+ 	clk_disable_unprepare(xqspi->refclk);
+ clk_dis_pclk:
+ 	clk_disable_unprepare(xqspi->pclk);
+@@ -1368,11 +1368,15 @@ static void zynqmp_qspi_remove(struct platform_device *pdev)
+ {
+ 	struct zynqmp_qspi *xqspi = platform_get_drvdata(pdev);
+ 
++	pm_runtime_get_sync(&pdev->dev);
++
+ 	zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, 0x0);
++
++	pm_runtime_disable(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
++	pm_runtime_set_suspended(&pdev->dev);
+ 	clk_disable_unprepare(xqspi->refclk);
+ 	clk_disable_unprepare(xqspi->pclk);
+-	pm_runtime_set_suspended(&pdev->dev);
+-	pm_runtime_disable(&pdev->dev);
+ }
+ 
+ MODULE_DEVICE_TABLE(of, zynqmp_qspi_of_match);
+diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
+index 22272f9c5934a..e615f735f4c03 100644
+--- a/drivers/thermal/thermal_of.c
++++ b/drivers/thermal/thermal_of.c
+@@ -38,8 +38,10 @@ static int of_find_trip_id(struct device_node *np, struct device_node *trip)
+ 	 */
+ 	for_each_child_of_node(trips, t) {
+ 
+-		if (t == trip)
++		if (t == trip) {
++			of_node_put(t);
+ 			goto out;
++		}
+ 		i++;
+ 	}
+ 
+@@ -402,8 +404,10 @@ static int thermal_of_for_each_cooling_maps(struct thermal_zone_device *tz,
+ 
+ 	for_each_child_of_node(cm_np, child) {
+ 		ret = thermal_of_for_each_cooling_device(tz_np, child, tz, cdev, action);
+-		if (ret)
++		if (ret) {
++			of_node_put(child);
+ 			break;
++		}
+ 	}
+ 
+ 	of_node_put(cm_np);
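
Both thermal_of hunks plug the same leak: for_each_child_of_node() holds a reference on the child it hands out and releases it only when the loop advances, so any break or goto out of the loop must drop the current node's reference by hand. A generic sketch of the pattern, assuming a caller-supplied match callback:

#include <linux/errno.h>
#include <linux/of.h>

static int demo_find_child_index(struct device_node *parent,
				 bool (*match)(const struct device_node *))
{
	struct device_node *child;
	int i = 0;

	for_each_child_of_node(parent, child) {
		if (match(child)) {
			of_node_put(child); /* drop the iterator's ref */
			return i;
		}
		i++;
	}

	return -ENODEV;
}
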
+diff --git a/drivers/thermal/thermal_sysfs.c b/drivers/thermal/thermal_sysfs.c
+index 6c20c9f90a05a..4e6a97db894e9 100644
+--- a/drivers/thermal/thermal_sysfs.c
++++ b/drivers/thermal/thermal_sysfs.c
+@@ -185,9 +185,6 @@ trip_point_hyst_store(struct device *dev, struct device_attribute *attr,
+ 	if (sscanf(attr->attr.name, "trip_point_%d_hyst", &trip_id) != 1)
+ 		return -EINVAL;
+ 
+-	if (kstrtoint(buf, 10, &trip.hysteresis))
+-		return -EINVAL;
+-
+ 	mutex_lock(&tz->lock);
+ 
+ 	if (!device_is_registered(dev)) {
+@@ -198,7 +195,11 @@ trip_point_hyst_store(struct device *dev, struct device_attribute *attr,
+ 	ret = __thermal_zone_get_trip(tz, trip_id, &trip);
+ 	if (ret)
+ 		goto unlock;
+-	
++
++	ret = kstrtoint(buf, 10, &trip.hysteresis);
++	if (ret)
++		goto unlock;
++
+ 	ret = thermal_zone_set_trip(tz, trip_id, &trip);
+ unlock:
+ 	mutex_unlock(&tz->lock);
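
The hysteresis hunk is about ordering rather than locking: the old code parsed the user input into trip.hysteresis first, and the later __thermal_zone_get_trip() call then refilled the whole struct, silently discarding the parsed value. The safe shape is fetch, then overlay, then commit; a condensed sketch with a demo trip type:

#include <linux/kernel.h>

struct demo_trip {
	int temperature;
	int hysteresis;
};

static int demo_store_hyst(struct demo_trip *cur, const char *buf)
{
	struct demo_trip trip = *cur;	/* fetch the current state first */
	int ret;

	ret = kstrtoint(buf, 10, &trip.hysteresis); /* then overlay */
	if (ret)
		return ret;

	*cur = trip;			/* commit */
	return 0;
}
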
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index 739f522cb893c..5574b4b61a25c 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -3071,10 +3071,8 @@ static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc)
+ 		gsm->has_devices = false;
+ 	}
+ 	for (i = NUM_DLCI - 1; i >= 0; i--)
+-		if (gsm->dlci[i]) {
++		if (gsm->dlci[i])
+ 			gsm_dlci_release(gsm->dlci[i]);
+-			gsm->dlci[i] = NULL;
+-		}
+ 	mutex_unlock(&gsm->mutex);
+ 	/* Now wipe the queues */
+ 	tty_ldisc_flush(gsm->tty);
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 483bb552cdc40..c4da580bcb444 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -1929,7 +1929,10 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
+ 		skip_rx = true;
+ 
+ 	if (status & (UART_LSR_DR | UART_LSR_BI) && !skip_rx) {
+-		if (irqd_is_wakeup_set(irq_get_irq_data(port->irq)))
++		struct irq_data *d;
++
++		d = irq_get_irq_data(port->irq);
++		if (d && irqd_is_wakeup_set(d))
+ 			pm_wakeup_event(tport->tty->dev, 0);
+ 		if (!up->dma || handle_rx_dma(up, iir))
+ 			status = serial8250_rx_chars(up, status);
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 9615a076735bd..80c48eb6bf85c 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -22,6 +22,7 @@
+ #include <linux/module.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/sched/clock.h>
++#include <linux/iopoll.h>
+ #include <scsi/scsi_cmnd.h>
+ #include <scsi/scsi_dbg.h>
+ #include <scsi/scsi_driver.h>
+@@ -2324,7 +2325,11 @@ static inline int ufshcd_hba_capabilities(struct ufs_hba *hba)
+  */
+ static inline bool ufshcd_ready_for_uic_cmd(struct ufs_hba *hba)
+ {
+-	return ufshcd_readl(hba, REG_CONTROLLER_STATUS) & UIC_COMMAND_READY;
++	u32 val;
++	int ret = read_poll_timeout(ufshcd_readl, val, val & UIC_COMMAND_READY,
++				    500, UIC_CMD_TIMEOUT * 1000, false, hba,
++				    REG_CONTROLLER_STATUS);
++	return ret == 0;
+ }
+ 
+ /**
+@@ -2416,7 +2421,6 @@ __ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd,
+ 		      bool completion)
+ {
+ 	lockdep_assert_held(&hba->uic_cmd_mutex);
+-	lockdep_assert_held(hba->host->host_lock);
+ 
+ 	if (!ufshcd_ready_for_uic_cmd(hba)) {
+ 		dev_err(hba->dev,
+@@ -2443,7 +2447,6 @@ __ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd,
+ int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd)
+ {
+ 	int ret;
+-	unsigned long flags;
+ 
+ 	if (hba->quirks & UFSHCD_QUIRK_BROKEN_UIC_CMD)
+ 		return 0;
+@@ -2452,9 +2455,7 @@ int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd)
+ 	mutex_lock(&hba->uic_cmd_mutex);
+ 	ufshcd_add_delay_before_dme_cmd(hba);
+ 
+-	spin_lock_irqsave(hba->host->host_lock, flags);
+ 	ret = __ufshcd_send_uic_cmd(hba, uic_cmd, true);
+-	spin_unlock_irqrestore(hba->host->host_lock, flags);
+ 	if (!ret)
+ 		ret = ufshcd_wait_for_uic_cmd(hba, uic_cmd);
+ 
+@@ -4166,8 +4167,8 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd)
+ 		wmb();
+ 		reenable_intr = true;
+ 	}
+-	ret = __ufshcd_send_uic_cmd(hba, cmd, false);
+ 	spin_unlock_irqrestore(hba->host->host_lock, flags);
++	ret = __ufshcd_send_uic_cmd(hba, cmd, false);
+ 	if (ret) {
+ 		dev_err(hba->dev,
+ 			"pwr ctrl cmd 0x%x with mode 0x%x uic error %d\n",
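
The ufshcd hunk turns a one-shot register test into a bounded poll. read_poll_timeout(op, val, cond, sleep_us, timeout_us, sleep_before_read, args...) from <linux/iopoll.h> repeatedly evaluates val = op(args...) until cond becomes true or timeout_us expires, returning 0 on success and -ETIMEDOUT otherwise. A minimal usage sketch against a made-up status register:

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/iopoll.h>

#define DEMO_STATUS_READY	BIT(3)	/* hypothetical ready bit */

static int demo_wait_ready(void __iomem *status_reg)
{
	u32 val;

	/* Poll every 500us, give up after 100ms. */
	return read_poll_timeout(readl, val, val & DEMO_STATUS_READY,
				 500, 100 * 1000, false, status_reg);
}
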
+diff --git a/drivers/vfio/mdev/mdev_sysfs.c b/drivers/vfio/mdev/mdev_sysfs.c
+index e4490639d3833..9d2738e10c0b9 100644
+--- a/drivers/vfio/mdev/mdev_sysfs.c
++++ b/drivers/vfio/mdev/mdev_sysfs.c
+@@ -233,7 +233,8 @@ int parent_create_sysfs_files(struct mdev_parent *parent)
+ out_err:
+ 	while (--i >= 0)
+ 		mdev_type_remove(parent->types[i]);
+-	return 0;
++	kset_unregister(parent->mdev_types_kset);
++	return ret;
+ }
+ 
+ static ssize_t remove_store(struct device *dev, struct device_attribute *attr,
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index 6df9bd09454a2..80c999a67779f 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -2003,7 +2003,7 @@ config FB_COBALT
+ 
+ config FB_SH7760
+ 	bool "SH7760/SH7763/SH7720/SH7721 LCDC support"
+-	depends on FB && (CPU_SUBTYPE_SH7760 || CPU_SUBTYPE_SH7763 \
++	depends on FB=y && (CPU_SUBTYPE_SH7760 || CPU_SUBTYPE_SH7763 \
+ 		|| CPU_SUBTYPE_SH7720 || CPU_SUBTYPE_SH7721)
+ 	select FB_CFB_FILLRECT
+ 	select FB_CFB_COPYAREA
+diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
+index 1c6c5832af86d..124544fb27b1d 100644
+--- a/fs/binfmt_elf_fdpic.c
++++ b/fs/binfmt_elf_fdpic.c
+@@ -345,10 +345,9 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm)
+ 	/* there's now no turning back... the old userspace image is dead,
+ 	 * defunct, deceased, etc.
+ 	 */
++	SET_PERSONALITY(exec_params.hdr);
+ 	if (elf_check_fdpic(&exec_params.hdr))
+-		set_personality(PER_LINUX_FDPIC);
+-	else
+-		set_personality(PER_LINUX);
++		current->personality |= PER_LINUX_FDPIC;
+ 	if (elf_read_implies_exec(&exec_params.hdr, executable_stack))
+ 		current->personality |= READ_IMPLIES_EXEC;
+ 
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index 0f147240ce9b8..142e0a0f6a9fe 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -412,6 +412,7 @@ static void finish_one_item(struct btrfs_delayed_root *delayed_root)
+ 
+ static void __btrfs_remove_delayed_item(struct btrfs_delayed_item *delayed_item)
+ {
++	struct btrfs_delayed_node *delayed_node = delayed_item->delayed_node;
+ 	struct rb_root_cached *root;
+ 	struct btrfs_delayed_root *delayed_root;
+ 
+@@ -419,18 +420,21 @@ static void __btrfs_remove_delayed_item(struct btrfs_delayed_item *delayed_item)
+ 	if (RB_EMPTY_NODE(&delayed_item->rb_node))
+ 		return;
+ 
+-	delayed_root = delayed_item->delayed_node->root->fs_info->delayed_root;
++	/* If it's in an rbtree, then we need to have the delayed node locked. */
++	lockdep_assert_held(&delayed_node->mutex);
++
++	delayed_root = delayed_node->root->fs_info->delayed_root;
+ 
+ 	BUG_ON(!delayed_root);
+ 
+ 	if (delayed_item->type == BTRFS_DELAYED_INSERTION_ITEM)
+-		root = &delayed_item->delayed_node->ins_root;
++		root = &delayed_node->ins_root;
+ 	else
+-		root = &delayed_item->delayed_node->del_root;
++		root = &delayed_node->del_root;
+ 
+ 	rb_erase_cached(&delayed_item->rb_node, root);
+ 	RB_CLEAR_NODE(&delayed_item->rb_node);
+-	delayed_item->delayed_node->count--;
++	delayed_node->count--;
+ 
+ 	finish_one_item(delayed_root);
+ }
+@@ -1426,7 +1430,29 @@ void btrfs_balance_delayed_items(struct btrfs_fs_info *fs_info)
+ 	btrfs_wq_run_delayed_node(delayed_root, fs_info, BTRFS_DELAYED_BATCH);
+ }
+ 
+-/* Will return 0 or -ENOMEM */
++static void btrfs_release_dir_index_item_space(struct btrfs_trans_handle *trans)
++{
++	struct btrfs_fs_info *fs_info = trans->fs_info;
++	const u64 bytes = btrfs_calc_insert_metadata_size(fs_info, 1);
++
++	if (test_bit(BTRFS_FS_LOG_RECOVERING, &fs_info->flags))
++		return;
++
++	/*
++	 * Adding the new dir index item does not require touching another
++	 * leaf, so we can release 1 unit of metadata that was previously
++	 * reserved when starting the transaction. This applies only to
++	 * the case where we had a transaction start and excludes the
++	 * transaction join case (when replaying log trees).
++	 */
++	trace_btrfs_space_reservation(fs_info, "transaction",
++				      trans->transid, bytes, 0);
++	btrfs_block_rsv_release(fs_info, trans->block_rsv, bytes, NULL);
++	ASSERT(trans->bytes_reserved >= bytes);
++	trans->bytes_reserved -= bytes;
++}
++
++/* Will return 0, -ENOMEM or -EEXIST (index number collision, unexpected). */
+ int btrfs_insert_delayed_dir_index(struct btrfs_trans_handle *trans,
+ 				   const char *name, int name_len,
+ 				   struct btrfs_inode *dir,
+@@ -1468,6 +1494,27 @@ int btrfs_insert_delayed_dir_index(struct btrfs_trans_handle *trans,
+ 
+ 	mutex_lock(&delayed_node->mutex);
+ 
++	/*
++	 * First attempt to insert the delayed item. This is to make the error
++	 * handling path simpler in case we fail (-EEXIST). There's no risk of
++	 * any other task coming in and running the delayed item before we do
++	 * the metadata space reservation below, because we are holding the
++	 * delayed node's mutex and that mutex must also be locked before the
++	 * node's delayed items can be run.
++	 */
++	ret = __btrfs_add_delayed_item(delayed_node, delayed_item);
++	if (unlikely(ret)) {
++		btrfs_err(trans->fs_info,
++"error adding delayed dir index item, name: %.*s, index: %llu, root: %llu, dir: %llu, dir->index_cnt: %llu, delayed_node->index_cnt: %llu, error: %d",
++			  name_len, name, index, btrfs_root_id(delayed_node->root),
++			  delayed_node->inode_id, dir->index_cnt,
++			  delayed_node->index_cnt, ret);
++		btrfs_release_delayed_item(delayed_item);
++		btrfs_release_dir_index_item_space(trans);
++		mutex_unlock(&delayed_node->mutex);
++		goto release_node;
++	}
++
+ 	if (delayed_node->index_item_leaves == 0 ||
+ 	    delayed_node->curr_index_batch_size + data_len > leaf_data_size) {
+ 		delayed_node->curr_index_batch_size = data_len;
+@@ -1485,36 +1532,14 @@ int btrfs_insert_delayed_dir_index(struct btrfs_trans_handle *trans,
+ 		 * impossible.
+ 		 */
+ 		if (WARN_ON(ret)) {
+-			mutex_unlock(&delayed_node->mutex);
+ 			btrfs_release_delayed_item(delayed_item);
++			mutex_unlock(&delayed_node->mutex);
+ 			goto release_node;
+ 		}
+ 
+ 		delayed_node->index_item_leaves++;
+-	} else if (!test_bit(BTRFS_FS_LOG_RECOVERING, &fs_info->flags)) {
+-		const u64 bytes = btrfs_calc_insert_metadata_size(fs_info, 1);
+-
+-		/*
+-		 * Adding the new dir index item does not require touching another
+-		 * leaf, so we can release 1 unit of metadata that was previously
+-		 * reserved when starting the transaction. This applies only to
+-		 * the case where we had a transaction start and excludes the
+-		 * transaction join case (when replaying log trees).
+-		 */
+-		trace_btrfs_space_reservation(fs_info, "transaction",
+-					      trans->transid, bytes, 0);
+-		btrfs_block_rsv_release(fs_info, trans->block_rsv, bytes, NULL);
+-		ASSERT(trans->bytes_reserved >= bytes);
+-		trans->bytes_reserved -= bytes;
+-	}
+-
+-	ret = __btrfs_add_delayed_item(delayed_node, delayed_item);
+-	if (unlikely(ret)) {
+-		btrfs_err(trans->fs_info,
+-			  "err add delayed dir index item(name: %.*s) into the insertion tree of the delayed node(root id: %llu, inode id: %llu, errno: %d)",
+-			  name_len, name, delayed_node->root->root_key.objectid,
+-			  delayed_node->inode_id, ret);
+-		BUG();
++	} else {
++		btrfs_release_dir_index_item_space(trans);
+ 	}
+ 	mutex_unlock(&delayed_node->mutex);
+ 
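
The larger delayed-inode rework is a reordering for error-handling's sake: attempt the fallible tree insert first, while holding the node's mutex, and only then adjust the space accounting, so the -EEXIST path has a single, simple undo. The generic shape, with hypothetical demo_* helpers:

#include <linux/mutex.h>

struct demo_node {
	struct mutex mutex;
	/* ... tree roots, counters ... */
};

/* Assumed helpers: insert may fail with -EEXIST, account cannot fail. */
int demo_tree_insert(struct demo_node *node, void *item);
void demo_account_space(struct demo_node *node);

static int demo_add_item(struct demo_node *node, void *item)
{
	int ret;

	mutex_lock(&node->mutex);
	ret = demo_tree_insert(node, item);	/* fallible step first */
	if (ret) {
		mutex_unlock(&node->mutex);
		return ret;			/* nothing else to undo */
	}
	demo_account_space(node);		/* infallible bookkeeping */
	mutex_unlock(&node->mutex);

	return 0;
}
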
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 2ebc982e8eccb..7cc0ed7532793 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -4083,8 +4083,14 @@ void read_extent_buffer(const struct extent_buffer *eb, void *dstv,
+ 	char *dst = (char *)dstv;
+ 	unsigned long i = get_eb_page_index(start);
+ 
+-	if (check_eb_range(eb, start, len))
++	if (check_eb_range(eb, start, len)) {
++		/*
++		 * Invalid range hit; reset the memory so callers won't get
++		 * some random garbage for their uninitialized memory.
++		 */
++		memset(dstv, 0, len);
+ 		return;
++	}
+ 
+ 	offset = get_eb_offset_in_page(eb, start);
+ 
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index fd03e689a6bed..eae9175f2c29b 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1466,8 +1466,13 @@ static ssize_t btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
+ 	if (iocb->ki_flags & IOCB_NOWAIT)
+ 		ilock_flags |= BTRFS_ILOCK_TRY;
+ 
+-	/* If the write DIO is within EOF, use a shared lock */
+-	if (iocb->ki_pos + iov_iter_count(from) <= i_size_read(inode))
++	/*
++	 * If the write DIO is within EOF, and the security bits will likely
++	 * not be dropped by file_remove_privs() called from btrfs_write_check(),
++	 * use a shared lock. Both conditions need to be rechecked after the
++	 * lock is acquired.
++	 */
++	if (iocb->ki_pos + iov_iter_count(from) <= i_size_read(inode) && IS_NOSEC(inode))
+ 		ilock_flags |= BTRFS_ILOCK_SHARED;
+ 
+ relock:
+@@ -1475,6 +1480,13 @@ relock:
+ 	if (err < 0)
+ 		return err;
+ 
++	/* Shared lock cannot be used with security bits set. */
++	if ((ilock_flags & BTRFS_ILOCK_SHARED) && !IS_NOSEC(inode)) {
++		btrfs_inode_unlock(BTRFS_I(inode), ilock_flags);
++		ilock_flags &= ~BTRFS_ILOCK_SHARED;
++		goto relock;
++	}
++
+ 	err = generic_write_checks(iocb, from);
+ 	if (err <= 0) {
+ 		btrfs_inode_unlock(BTRFS_I(inode), ilock_flags);
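
The DIO locking change is an optimistic-lock pattern: take the cheaper shared lock when a precondition (here IS_NOSEC(), i.e. no security bits to strip) suggests it suffices, then recheck under the lock and retry exclusively if the precondition no longer holds. A sketch using the plain inode lock API rather than btrfs' wrappers:

#include <linux/fs.h>

static bool demo_lock_for_dio(struct inode *inode)
{
	bool shared = IS_NOSEC(inode);	/* optimistic precondition */

relock:
	if (shared)
		inode_lock_shared(inode);
	else
		inode_lock(inode);

	/* Recheck now that we hold the lock; fall back if it changed. */
	if (shared && !IS_NOSEC(inode)) {
		inode_unlock_shared(inode);
		shared = false;
		goto relock;
	}

	return shared;	/* caller unlocks accordingly */
}
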
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index db2b33a822fcd..d5c112f6091b1 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -5931,20 +5931,24 @@ out:
+ 
+ static int btrfs_get_dir_last_index(struct btrfs_inode *dir, u64 *index)
+ {
+-	if (dir->index_cnt == (u64)-1) {
+-		int ret;
++	int ret = 0;
+ 
++	btrfs_inode_lock(dir, 0);
++	if (dir->index_cnt == (u64)-1) {
+ 		ret = btrfs_inode_delayed_dir_index_count(dir);
+ 		if (ret) {
+ 			ret = btrfs_set_inode_index_count(dir);
+ 			if (ret)
+-				return ret;
++				goto out;
+ 		}
+ 	}
+ 
+-	*index = dir->index_cnt;
++	/* index_cnt is the index number of the next new entry, so decrement it. */
++	*index = dir->index_cnt - 1;
++out:
++	btrfs_inode_unlock(dir, 0);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ /*
+@@ -5979,6 +5983,19 @@ static int btrfs_opendir(struct inode *inode, struct file *file)
+ 	return 0;
+ }
+ 
++static loff_t btrfs_dir_llseek(struct file *file, loff_t offset, int whence)
++{
++	struct btrfs_file_private *private = file->private_data;
++	int ret;
++
++	ret = btrfs_get_dir_last_index(BTRFS_I(file_inode(file)),
++				       &private->last_index);
++	if (ret)
++		return ret;
++
++	return generic_file_llseek(file, offset, whence);
++}
++
+ struct dir_entry {
+ 	u64 ino;
+ 	u64 offset;
+@@ -11059,7 +11076,7 @@ static const struct inode_operations btrfs_dir_inode_operations = {
+ };
+ 
+ static const struct file_operations btrfs_dir_file_operations = {
+-	.llseek		= generic_file_llseek,
++	.llseek		= btrfs_dir_llseek,
+ 	.read		= generic_read_dir,
+ 	.iterate_shared	= btrfs_real_readdir,
+ 	.open		= btrfs_opendir,
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index f1dd172d8d5bd..f285c26c05655 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -2111,7 +2111,7 @@ static int btrfs_statfs(struct dentry *dentry, struct kstatfs *buf)
+ 	 * calculated f_bavail.
+ 	 */
+ 	if (!mixed && block_rsv->space_info->full &&
+-	    total_free_meta - thresh < block_rsv->size)
++	    (total_free_meta < thresh || total_free_meta - thresh < block_rsv->size))
+ 		buf->f_bavail = 0;
+ 
+ 	buf->f_type = BTRFS_SUPER_MAGIC;
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index e2bb0d0072da5..c268bd07e7ddd 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -4105,6 +4105,9 @@ void ceph_handle_caps(struct ceph_mds_session *session,
+ 
+ 	dout("handle_caps from mds%d\n", session->s_mds);
+ 
++	if (!ceph_inc_mds_stopping_blocker(mdsc, session))
++		return;
++
+ 	/* decode */
+ 	end = msg->front.iov_base + msg->front.iov_len;
+ 	if (msg->front.iov_len < sizeof(*h))
+@@ -4201,7 +4204,6 @@ void ceph_handle_caps(struct ceph_mds_session *session,
+ 	     vino.snap, inode);
+ 
+ 	mutex_lock(&session->s_mutex);
+-	inc_session_sequence(session);
+ 	dout(" mds%d seq %lld cap seq %u\n", session->s_mds, session->s_seq,
+ 	     (unsigned)seq);
+ 
+@@ -4309,6 +4311,8 @@ done:
+ done_unlocked:
+ 	iput(inode);
+ out:
++	ceph_dec_mds_stopping_blocker(mdsc);
++
+ 	ceph_put_string(extra_info.pool_ns);
+ 
+ 	/* Defer closing the sessions after s_mutex lock being released */
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 5fb367b1d4b06..4b0ba067e9c93 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -4550,6 +4550,9 @@ static void handle_lease(struct ceph_mds_client *mdsc,
+ 
+ 	dout("handle_lease from mds%d\n", mds);
+ 
++	if (!ceph_inc_mds_stopping_blocker(mdsc, session))
++		return;
++
+ 	/* decode */
+ 	if (msg->front.iov_len < sizeof(*h) + sizeof(u32))
+ 		goto bad;
+@@ -4568,8 +4571,6 @@ static void handle_lease(struct ceph_mds_client *mdsc,
+ 	     dname.len, dname.name);
+ 
+ 	mutex_lock(&session->s_mutex);
+-	inc_session_sequence(session);
+-
+ 	if (!inode) {
+ 		dout("handle_lease no inode %llx\n", vino.ino);
+ 		goto release;
+@@ -4631,9 +4632,13 @@ release:
+ out:
+ 	mutex_unlock(&session->s_mutex);
+ 	iput(inode);
++
++	ceph_dec_mds_stopping_blocker(mdsc);
+ 	return;
+ 
+ bad:
++	ceph_dec_mds_stopping_blocker(mdsc);
++
+ 	pr_err("corrupt lease message\n");
+ 	ceph_msg_dump(msg);
+ }
+@@ -4829,6 +4834,9 @@ int ceph_mdsc_init(struct ceph_fs_client *fsc)
+ 	}
+ 
+ 	init_completion(&mdsc->safe_umount_waiters);
++	spin_lock_init(&mdsc->stopping_lock);
++	atomic_set(&mdsc->stopping_blockers, 0);
++	init_completion(&mdsc->stopping_waiter);
+ 	init_waitqueue_head(&mdsc->session_close_wq);
+ 	INIT_LIST_HEAD(&mdsc->waiting_for_map);
+ 	mdsc->quotarealms_inodes = RB_ROOT;
+diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
+index 86d2965e68a1f..cff7392809032 100644
+--- a/fs/ceph/mds_client.h
++++ b/fs/ceph/mds_client.h
+@@ -381,8 +381,9 @@ struct cap_wait {
+ };
+ 
+ enum {
+-       CEPH_MDSC_STOPPING_BEGIN = 1,
+-       CEPH_MDSC_STOPPING_FLUSHED = 2,
++	CEPH_MDSC_STOPPING_BEGIN = 1,
++	CEPH_MDSC_STOPPING_FLUSHING = 2,
++	CEPH_MDSC_STOPPING_FLUSHED = 3,
+ };
+ 
+ /*
+@@ -401,7 +402,11 @@ struct ceph_mds_client {
+ 	struct ceph_mds_session **sessions;    /* NULL for mds if no session */
+ 	atomic_t		num_sessions;
+ 	int                     max_sessions;  /* len of sessions array */
+-	int                     stopping;      /* true if shutting down */
++
++	spinlock_t              stopping_lock;  /* protect the stopping state */
++	int                     stopping;      /* the stage of shutting down */
++	atomic_t                stopping_blockers;
++	struct completion	stopping_waiter;
+ 
+ 	atomic64_t		quotarealms_count; /* # realms with quota */
+ 	/*
+diff --git a/fs/ceph/quota.c b/fs/ceph/quota.c
+index 64592adfe48fb..f7fcf7f08ec64 100644
+--- a/fs/ceph/quota.c
++++ b/fs/ceph/quota.c
+@@ -47,25 +47,23 @@ void ceph_handle_quota(struct ceph_mds_client *mdsc,
+ 	struct inode *inode;
+ 	struct ceph_inode_info *ci;
+ 
++	if (!ceph_inc_mds_stopping_blocker(mdsc, session))
++		return;
++
+ 	if (msg->front.iov_len < sizeof(*h)) {
+ 		pr_err("%s corrupt message mds%d len %d\n", __func__,
+ 		       session->s_mds, (int)msg->front.iov_len);
+ 		ceph_msg_dump(msg);
+-		return;
++		goto out;
+ 	}
+ 
+-	/* increment msg sequence number */
+-	mutex_lock(&session->s_mutex);
+-	inc_session_sequence(session);
+-	mutex_unlock(&session->s_mutex);
+-
+ 	/* lookup inode */
+ 	vino.ino = le64_to_cpu(h->ino);
+ 	vino.snap = CEPH_NOSNAP;
+ 	inode = ceph_find_inode(sb, vino);
+ 	if (!inode) {
+ 		pr_warn("Failed to find inode %llu\n", vino.ino);
+-		return;
++		goto out;
+ 	}
+ 	ci = ceph_inode(inode);
+ 
+@@ -78,6 +76,8 @@ void ceph_handle_quota(struct ceph_mds_client *mdsc,
+ 	spin_unlock(&ci->i_ceph_lock);
+ 
+ 	iput(inode);
++out:
++	ceph_dec_mds_stopping_blocker(mdsc);
+ }
+ 
+ static struct ceph_quotarealm_inode *
+diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
+index 343d738448dcd..7ddc6bad77ef3 100644
+--- a/fs/ceph/snap.c
++++ b/fs/ceph/snap.c
+@@ -1015,6 +1015,9 @@ void ceph_handle_snap(struct ceph_mds_client *mdsc,
+ 	int locked_rwsem = 0;
+ 	bool close_sessions = false;
+ 
++	if (!ceph_inc_mds_stopping_blocker(mdsc, session))
++		return;
++
+ 	/* decode */
+ 	if (msg->front.iov_len < sizeof(*h))
+ 		goto bad;
+@@ -1030,10 +1033,6 @@ void ceph_handle_snap(struct ceph_mds_client *mdsc,
+ 	dout("%s from mds%d op %s split %llx tracelen %d\n", __func__,
+ 	     mds, ceph_snap_op_name(op), split, trace_len);
+ 
+-	mutex_lock(&session->s_mutex);
+-	inc_session_sequence(session);
+-	mutex_unlock(&session->s_mutex);
+-
+ 	down_write(&mdsc->snap_rwsem);
+ 	locked_rwsem = 1;
+ 
+@@ -1151,6 +1150,7 @@ skip_inode:
+ 	up_write(&mdsc->snap_rwsem);
+ 
+ 	flush_snaps(mdsc);
++	ceph_dec_mds_stopping_blocker(mdsc);
+ 	return;
+ 
+ bad:
+@@ -1160,6 +1160,8 @@ out:
+ 	if (locked_rwsem)
+ 		up_write(&mdsc->snap_rwsem);
+ 
++	ceph_dec_mds_stopping_blocker(mdsc);
++
+ 	if (close_sessions)
+ 		ceph_mdsc_close_sessions(mdsc);
+ 	return;
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index a5f52013314d6..281b493fdac8e 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -1365,25 +1365,90 @@ nomem:
+ 	return -ENOMEM;
+ }
+ 
++/*
++ * Return true if it successfully increases the blocker counter,
++ * or false if the mdsc is in stopping and flushed state.
++ */
++static bool __inc_stopping_blocker(struct ceph_mds_client *mdsc)
++{
++	spin_lock(&mdsc->stopping_lock);
++	if (mdsc->stopping >= CEPH_MDSC_STOPPING_FLUSHING) {
++		spin_unlock(&mdsc->stopping_lock);
++		return false;
++	}
++	atomic_inc(&mdsc->stopping_blockers);
++	spin_unlock(&mdsc->stopping_lock);
++	return true;
++}
++
++static void __dec_stopping_blocker(struct ceph_mds_client *mdsc)
++{
++	spin_lock(&mdsc->stopping_lock);
++	if (!atomic_dec_return(&mdsc->stopping_blockers) &&
++	    mdsc->stopping >= CEPH_MDSC_STOPPING_FLUSHING)
++		complete_all(&mdsc->stopping_waiter);
++	spin_unlock(&mdsc->stopping_lock);
++}
++
++/* For metadata IO requests */
++bool ceph_inc_mds_stopping_blocker(struct ceph_mds_client *mdsc,
++				   struct ceph_mds_session *session)
++{
++	mutex_lock(&session->s_mutex);
++	inc_session_sequence(session);
++	mutex_unlock(&session->s_mutex);
++
++	return __inc_stopping_blocker(mdsc);
++}
++
++void ceph_dec_mds_stopping_blocker(struct ceph_mds_client *mdsc)
++{
++	__dec_stopping_blocker(mdsc);
++}
++
+ static void ceph_kill_sb(struct super_block *s)
+ {
+ 	struct ceph_fs_client *fsc = ceph_sb_to_client(s);
++	struct ceph_mds_client *mdsc = fsc->mdsc;
++	bool wait;
+ 
+ 	dout("kill_sb %p\n", s);
+ 
+-	ceph_mdsc_pre_umount(fsc->mdsc);
++	ceph_mdsc_pre_umount(mdsc);
+ 	flush_fs_workqueues(fsc);
+ 
+ 	/*
+ 	 * Though the kill_anon_super() will finally trigger the
+-	 * sync_filesystem() anyway, we still need to do it here
+-	 * and then bump the stage of shutdown to stop the work
+-	 * queue as earlier as possible.
++	 * sync_filesystem() anyway, we still need to do it here and
++	 * then bump the stage of shutdown. This allows us to drop any
++	 * further messages from the MDSs, which would only bump the
++	 * inodes' i_count reference counters and no longer make any
++	 * sense.
++	 *
++	 * Without this, evicting the inodes may fail in
++	 * kill_anon_super(), which triggers a warning when destroying
++	 * the fscrypt keyring and then possibly a further crash in the
++	 * ceph module when iput() tries to evict the inodes later.
+ 	 */
+ 	sync_filesystem(s);
+ 
+-	fsc->mdsc->stopping = CEPH_MDSC_STOPPING_FLUSHED;
++	spin_lock(&mdsc->stopping_lock);
++	mdsc->stopping = CEPH_MDSC_STOPPING_FLUSHING;
++	wait = !!atomic_read(&mdsc->stopping_blockers);
++	spin_unlock(&mdsc->stopping_lock);
++
++	if (wait && atomic_read(&mdsc->stopping_blockers)) {
++		long timeleft = wait_for_completion_killable_timeout(
++					&mdsc->stopping_waiter,
++					fsc->client->options->mount_timeout);
++		if (!timeleft) /* timed out */
++			pr_warn("umount timed out, %ld\n", timeleft);
++		else if (timeleft < 0) /* killed */
++			pr_warn("umount was killed, %ld\n", timeleft);
++	}
+ 
++	mdsc->stopping = CEPH_MDSC_STOPPING_FLUSHED;
+ 	kill_anon_super(s);
+ 
+ 	fsc->client->extra_mon_dispatch = NULL;
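
The new ceph helpers are a small drain primitive: message handlers enter by bumping a blocker count (refused once shutdown has begun), and umount flips the state under the same lock, then waits for the count to reach zero. Condensed into generic form:

#include <linux/atomic.h>
#include <linux/completion.h>
#include <linux/spinlock.h>

struct demo_drain {
	spinlock_t lock;
	bool stopping;
	atomic_t blockers;
	struct completion drained;
};

static bool demo_enter(struct demo_drain *d)
{
	bool ok;

	spin_lock(&d->lock);
	ok = !d->stopping;
	if (ok)
		atomic_inc(&d->blockers);
	spin_unlock(&d->lock);

	return ok;	/* caller bails out if false */
}

static void demo_exit(struct demo_drain *d)
{
	spin_lock(&d->lock);
	if (!atomic_dec_return(&d->blockers) && d->stopping)
		complete_all(&d->drained);
	spin_unlock(&d->lock);
}

static void demo_stop_and_drain(struct demo_drain *d)
{
	bool wait;

	spin_lock(&d->lock);
	d->stopping = true;
	wait = !!atomic_read(&d->blockers);
	spin_unlock(&d->lock);

	if (wait)
		wait_for_completion(&d->drained);
}

The real code uses a killable, timed wait instead, so a wedged MDS cannot hang umount forever.
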
+diff --git a/fs/ceph/super.h b/fs/ceph/super.h
+index 3bfddf34d488b..e6c1edf9e12b0 100644
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -1375,4 +1375,7 @@ extern bool ceph_quota_update_statfs(struct ceph_fs_client *fsc,
+ 				     struct kstatfs *buf);
+ extern void ceph_cleanup_quotarealms_inodes(struct ceph_mds_client *mdsc);
+ 
++bool ceph_inc_mds_stopping_blocker(struct ceph_mds_client *mdsc,
++			       struct ceph_mds_session *session);
++void ceph_dec_mds_stopping_blocker(struct ceph_mds_client *mdsc);
+ #endif /* _FS_CEPH_SUPER_H */
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index bd7557d8dec41..3711be697a0a5 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -16,6 +16,7 @@
+ #include <linux/slab.h>
+ #include <linux/nospec.h>
+ #include <linux/backing-dev.h>
++#include <linux/freezer.h>
+ #include <trace/events/ext4.h>
+ 
+ /*
+@@ -6920,6 +6921,21 @@ __acquires(bitlock)
+ 	return ret;
+ }
+ 
++static ext4_grpblk_t ext4_last_grp_cluster(struct super_block *sb,
++					   ext4_group_t grp)
++{
++	if (grp < ext4_get_groups_count(sb))
++		return EXT4_CLUSTERS_PER_GROUP(sb) - 1;
++	return (ext4_blocks_count(EXT4_SB(sb)->s_es) -
++		ext4_group_first_block_no(sb, grp) - 1) >>
++					EXT4_CLUSTER_BITS(sb);
++}
++
++static bool ext4_trim_interrupted(void)
++{
++	return fatal_signal_pending(current) || freezing(current);
++}
++
+ static int ext4_try_to_trim_range(struct super_block *sb,
+ 		struct ext4_buddy *e4b, ext4_grpblk_t start,
+ 		ext4_grpblk_t max, ext4_grpblk_t minblocks)
+@@ -6927,11 +6943,13 @@ __acquires(ext4_group_lock_ptr(sb, e4b->bd_group))
+ __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
+ {
+ 	ext4_grpblk_t next, count, free_count;
++	bool set_trimmed = false;
+ 	void *bitmap;
+ 
+ 	bitmap = e4b->bd_bitmap;
+-	start = (e4b->bd_info->bb_first_free > start) ?
+-		e4b->bd_info->bb_first_free : start;
++	if (start == 0 && max >= ext4_last_grp_cluster(sb, e4b->bd_group))
++		set_trimmed = true;
++	start = max(e4b->bd_info->bb_first_free, start);
+ 	count = 0;
+ 	free_count = 0;
+ 
+@@ -6945,16 +6963,14 @@ __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
+ 			int ret = ext4_trim_extent(sb, start, next - start, e4b);
+ 
+ 			if (ret && ret != -EOPNOTSUPP)
+-				break;
++				return count;
+ 			count += next - start;
+ 		}
+ 		free_count += next - start;
+ 		start = next + 1;
+ 
+-		if (fatal_signal_pending(current)) {
+-			count = -ERESTARTSYS;
+-			break;
+-		}
++		if (ext4_trim_interrupted())
++			return count;
+ 
+ 		if (need_resched()) {
+ 			ext4_unlock_group(sb, e4b->bd_group);
+@@ -6966,6 +6982,9 @@ __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
+ 			break;
+ 	}
+ 
++	if (set_trimmed)
++		EXT4_MB_GRP_SET_TRIMMED(e4b->bd_info);
++
+ 	return count;
+ }
+ 
+@@ -6976,7 +6995,6 @@ __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
+  * @start:		first group block to examine
+  * @max:		last group block to examine
+  * @minblocks:		minimum extent block count
+- * @set_trimmed:	set the trimmed flag if at least one block is trimmed
+  *
+  * ext4_trim_all_free walks through group's block bitmap searching for free
+  * extents. When the free extent is found, mark it as used in group buddy
+@@ -6986,7 +7004,7 @@ __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
+ static ext4_grpblk_t
+ ext4_trim_all_free(struct super_block *sb, ext4_group_t group,
+ 		   ext4_grpblk_t start, ext4_grpblk_t max,
+-		   ext4_grpblk_t minblocks, bool set_trimmed)
++		   ext4_grpblk_t minblocks)
+ {
+ 	struct ext4_buddy e4b;
+ 	int ret;
+@@ -7003,13 +7021,10 @@ ext4_trim_all_free(struct super_block *sb, ext4_group_t group,
+ 	ext4_lock_group(sb, group);
+ 
+ 	if (!EXT4_MB_GRP_WAS_TRIMMED(e4b.bd_info) ||
+-	    minblocks < EXT4_SB(sb)->s_last_trim_minblks) {
++	    minblocks < EXT4_SB(sb)->s_last_trim_minblks)
+ 		ret = ext4_try_to_trim_range(sb, &e4b, start, max, minblocks);
+-		if (ret >= 0 && set_trimmed)
+-			EXT4_MB_GRP_SET_TRIMMED(e4b.bd_info);
+-	} else {
++	else
+ 		ret = 0;
+-	}
+ 
+ 	ext4_unlock_group(sb, group);
+ 	ext4_mb_unload_buddy(&e4b);
+@@ -7042,7 +7057,6 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 	ext4_fsblk_t first_data_blk =
+ 			le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block);
+ 	ext4_fsblk_t max_blks = ext4_blocks_count(EXT4_SB(sb)->s_es);
+-	bool whole_group, eof = false;
+ 	int ret = 0;
+ 
+ 	start = range->start >> sb->s_blocksize_bits;
+@@ -7061,10 +7075,8 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 		if (minlen > EXT4_CLUSTERS_PER_GROUP(sb))
+ 			goto out;
+ 	}
+-	if (end >= max_blks - 1) {
++	if (end >= max_blks - 1)
+ 		end = max_blks - 1;
+-		eof = true;
+-	}
+ 	if (end <= first_data_blk)
+ 		goto out;
+ 	if (start < first_data_blk)
+@@ -7078,9 +7090,10 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 
+ 	/* end now represents the last cluster to discard in this group */
+ 	end = EXT4_CLUSTERS_PER_GROUP(sb) - 1;
+-	whole_group = true;
+ 
+ 	for (group = first_group; group <= last_group; group++) {
++		if (ext4_trim_interrupted())
++			break;
+ 		grp = ext4_get_group_info(sb, group);
+ 		if (!grp)
+ 			continue;
+@@ -7097,13 +7110,11 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range)
+ 		 * change it for the last group, note that last_cluster is
+ 		 * already computed earlier by ext4_get_group_no_and_offset()
+ 		 */
+-		if (group == last_group) {
++		if (group == last_group)
+ 			end = last_cluster;
+-			whole_group = eof ? true : end == EXT4_CLUSTERS_PER_GROUP(sb) - 1;
+-		}
+ 		if (grp->bb_free >= minlen) {
+ 			cnt = ext4_trim_all_free(sb, group, first_cluster,
+-						 end, minlen, whole_group);
++						 end, minlen);
+ 			if (cnt < 0) {
+ 				ret = cnt;
+ 				break;
+@@ -7148,8 +7159,7 @@ ext4_mballoc_query_range(
+ 
+ 	ext4_lock_group(sb, group);
+ 
+-	start = (e4b.bd_info->bb_first_free > start) ?
+-		e4b.bd_info->bb_first_free : start;
++	start = max(e4b.bd_info->bb_first_free, start);
+ 	if (end >= EXT4_CLUSTERS_PER_GROUP(sb))
+ 		end = EXT4_CLUSTERS_PER_GROUP(sb) - 1;
+ 
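
ext4_trim_interrupted() folds two abort conditions into one predicate, so a long FITRIM run stops both on a fatal signal and when the freezer wants the task parked (otherwise suspend can stall behind a trim of a huge device). The loop shape, with a demo per-group work step:

#include <linux/freezer.h>
#include <linux/sched/signal.h>

static bool demo_interrupted(void)
{
	return fatal_signal_pending(current) || freezing(current);
}

static long demo_trim_groups(long ngroups)
{
	long group, trimmed = 0;

	for (group = 0; group < ngroups; group++) {
		if (demo_interrupted())
			break;		/* report partial progress */
		/* ... trim one group here ... */
		trimmed++;
	}

	return trimmed;
}
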
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 1438e7465e306..59c1aed0b9b90 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -2017,7 +2017,9 @@ static long gfs2_scan_glock_lru(int nr)
+ 		if (!test_bit(GLF_LOCK, &gl->gl_flags)) {
+ 			if (!spin_trylock(&gl->gl_lockref.lock))
+ 				continue;
+-			if (!gl->gl_lockref.count) {
++			if (gl->gl_lockref.count <= 1 &&
++			    (gl->gl_state == LM_ST_UNLOCKED ||
++			     demote_ok(gl))) {
+ 				list_move(&gl->gl_lru, &dispose);
+ 				atomic_dec(&lru_count);
+ 				freed++;
+diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
+index 54319328b16b5..0a3b069386ec9 100644
+--- a/fs/gfs2/glops.c
++++ b/fs/gfs2/glops.c
+@@ -567,15 +567,16 @@ static void freeze_go_callback(struct gfs2_glock *gl, bool remote)
+ 	struct super_block *sb = sdp->sd_vfs;
+ 
+ 	if (!remote ||
+-	    gl->gl_state != LM_ST_SHARED ||
++	    (gl->gl_state != LM_ST_SHARED &&
++	     gl->gl_state != LM_ST_UNLOCKED) ||
+ 	    gl->gl_demote_state != LM_ST_UNLOCKED)
+ 		return;
+ 
+ 	/*
+ 	 * Try to get an active super block reference to prevent racing with
+-	 * unmount (see trylock_super()).  But note that unmount isn't the only
+-	 * place where a write lock on s_umount is taken, and we can fail here
+-	 * because of things like remount as well.
++	 * unmount (see super_trylock_shared()).  But note that unmount isn't
++	 * the only place where a write lock on s_umount is taken, and we can
++	 * fail here because of things like remount as well.
+ 	 */
+ 	if (down_read_trylock(&sb->s_umount)) {
+ 		atomic_inc(&sb->s_active);
+diff --git a/fs/libfs.c b/fs/libfs.c
+index 5b851315eeed0..712c57828c0e4 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -1646,6 +1646,7 @@ ssize_t direct_write_fallback(struct kiocb *iocb, struct iov_iter *iter,
+ 		 * We don't know how much we wrote, so just return the number of
+ 		 * bytes which were direct-written
+ 		 */
++		iocb->ki_pos -= buffered_written;
+ 		if (direct_written)
+ 			return direct_written;
+ 		return err;
+diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
+index 3404707ddbe73..2cd3ccf4c4399 100644
+--- a/fs/netfs/buffered_read.c
++++ b/fs/netfs/buffered_read.c
+@@ -47,12 +47,14 @@ void netfs_rreq_unlock_folios(struct netfs_io_request *rreq)
+ 	xas_for_each(&xas, folio, last_page) {
+ 		loff_t pg_end;
+ 		bool pg_failed = false;
++		bool folio_started;
+ 
+ 		if (xas_retry(&xas, folio))
+ 			continue;
+ 
+ 		pg_end = folio_pos(folio) + folio_size(folio) - 1;
+ 
++		folio_started = false;
+ 		for (;;) {
+ 			loff_t sreq_end;
+ 
+@@ -60,8 +62,10 @@ void netfs_rreq_unlock_folios(struct netfs_io_request *rreq)
+ 				pg_failed = true;
+ 				break;
+ 			}
+-			if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags))
++			if (!folio_started && test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) {
+ 				folio_start_fscache(folio);
++				folio_started = true;
++			}
+ 			pg_failed |= subreq_failed;
+ 			sreq_end = subreq->start + subreq->len - 1;
+ 			if (pg_end < sreq_end)
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 47d892a1d363d..f6c74f4246917 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -93,12 +93,10 @@ nfs_direct_handle_truncated(struct nfs_direct_req *dreq,
+ 		dreq->max_count = dreq_len;
+ 		if (dreq->count > dreq_len)
+ 			dreq->count = dreq_len;
+-
+-		if (test_bit(NFS_IOHDR_ERROR, &hdr->flags))
+-			dreq->error = hdr->error;
+-		else /* Clear outstanding error if this is EOF */
+-			dreq->error = 0;
+ 	}
++
++	if (test_bit(NFS_IOHDR_ERROR, &hdr->flags) && !dreq->error)
++		dreq->error = hdr->error;
+ }
+ 
+ static void
+@@ -120,6 +118,18 @@ nfs_direct_count_bytes(struct nfs_direct_req *dreq,
+ 		dreq->count = dreq_len;
+ }
+ 
++static void nfs_direct_truncate_request(struct nfs_direct_req *dreq,
++					struct nfs_page *req)
++{
++	loff_t offs = req_offset(req);
++	size_t req_start = (size_t)(offs - dreq->io_start);
++
++	if (req_start < dreq->max_count)
++		dreq->max_count = req_start;
++	if (req_start < dreq->count)
++		dreq->count = req_start;
++}
++
+ /**
+  * nfs_swap_rw - NFS address space operation for swap I/O
+  * @iocb: target I/O control block
+@@ -488,7 +498,9 @@ static void nfs_direct_add_page_head(struct list_head *list,
+ 	kref_get(&head->wb_kref);
+ }
+ 
+-static void nfs_direct_join_group(struct list_head *list, struct inode *inode)
++static void nfs_direct_join_group(struct list_head *list,
++				  struct nfs_commit_info *cinfo,
++				  struct inode *inode)
+ {
+ 	struct nfs_page *req, *subreq;
+ 
+@@ -510,7 +522,7 @@ static void nfs_direct_join_group(struct list_head *list, struct inode *inode)
+ 				nfs_release_request(subreq);
+ 			}
+ 		} while ((subreq = subreq->wb_this_page) != req);
+-		nfs_join_page_group(req, inode);
++		nfs_join_page_group(req, cinfo, inode);
+ 	}
+ }
+ 
+@@ -528,20 +540,15 @@ nfs_direct_write_scan_commit_list(struct inode *inode,
+ static void nfs_direct_write_reschedule(struct nfs_direct_req *dreq)
+ {
+ 	struct nfs_pageio_descriptor desc;
+-	struct nfs_page *req, *tmp;
++	struct nfs_page *req;
+ 	LIST_HEAD(reqs);
+ 	struct nfs_commit_info cinfo;
+-	LIST_HEAD(failed);
+ 
+ 	nfs_init_cinfo_from_dreq(&cinfo, dreq);
+ 	nfs_direct_write_scan_commit_list(dreq->inode, &reqs, &cinfo);
+ 
+-	nfs_direct_join_group(&reqs, dreq->inode);
++	nfs_direct_join_group(&reqs, &cinfo, dreq->inode);
+ 
+-	dreq->count = 0;
+-	dreq->max_count = 0;
+-	list_for_each_entry(req, &reqs, wb_list)
+-		dreq->max_count += req->wb_bytes;
+ 	nfs_clear_pnfs_ds_commit_verifiers(&dreq->ds_cinfo);
+ 	get_dreq(dreq);
+ 
+@@ -549,27 +556,40 @@ static void nfs_direct_write_reschedule(struct nfs_direct_req *dreq)
+ 			      &nfs_direct_write_completion_ops);
+ 	desc.pg_dreq = dreq;
+ 
+-	list_for_each_entry_safe(req, tmp, &reqs, wb_list) {
++	while (!list_empty(&reqs)) {
++		req = nfs_list_entry(reqs.next);
+ 		/* Bump the transmission count */
+ 		req->wb_nio++;
+ 		if (!nfs_pageio_add_request(&desc, req)) {
+-			nfs_list_move_request(req, &failed);
+-			spin_lock(&cinfo.inode->i_lock);
+-			dreq->flags = 0;
+-			if (desc.pg_error < 0)
++			spin_lock(&dreq->lock);
++			if (dreq->error < 0) {
++				desc.pg_error = dreq->error;
++			} else if (desc.pg_error != -EAGAIN) {
++				dreq->flags = 0;
++				if (!desc.pg_error)
++					desc.pg_error = -EIO;
+ 				dreq->error = desc.pg_error;
+-			else
+-				dreq->error = -EIO;
+-			spin_unlock(&cinfo.inode->i_lock);
++			} else
++				dreq->flags = NFS_ODIRECT_RESCHED_WRITES;
++			spin_unlock(&dreq->lock);
++			break;
+ 		}
+ 		nfs_release_request(req);
+ 	}
+ 	nfs_pageio_complete(&desc);
+ 
+-	while (!list_empty(&failed)) {
+-		req = nfs_list_entry(failed.next);
++	while (!list_empty(&reqs)) {
++		req = nfs_list_entry(reqs.next);
+ 		nfs_list_remove_request(req);
+ 		nfs_unlock_and_release_request(req);
++		if (desc.pg_error == -EAGAIN) {
++			nfs_mark_request_commit(req, NULL, &cinfo, 0);
++		} else {
++			spin_lock(&dreq->lock);
++			nfs_direct_truncate_request(dreq, req);
++			spin_unlock(&dreq->lock);
++			nfs_release_request(req);
++		}
+ 	}
+ 
+ 	if (put_dreq(dreq))
+@@ -589,8 +609,6 @@ static void nfs_direct_commit_complete(struct nfs_commit_data *data)
+ 	if (status < 0) {
+ 		/* Errors in commit are fatal */
+ 		dreq->error = status;
+-		dreq->max_count = 0;
+-		dreq->count = 0;
+ 		dreq->flags = NFS_ODIRECT_DONE;
+ 	} else {
+ 		status = dreq->error;
+@@ -601,7 +619,12 @@ static void nfs_direct_commit_complete(struct nfs_commit_data *data)
+ 	while (!list_empty(&data->pages)) {
+ 		req = nfs_list_entry(data->pages.next);
+ 		nfs_list_remove_request(req);
+-		if (status >= 0 && !nfs_write_match_verf(verf, req)) {
++		if (status < 0) {
++			spin_lock(&dreq->lock);
++			nfs_direct_truncate_request(dreq, req);
++			spin_unlock(&dreq->lock);
++			nfs_release_request(req);
++		} else if (!nfs_write_match_verf(verf, req)) {
+ 			dreq->flags = NFS_ODIRECT_RESCHED_WRITES;
+ 			/*
+ 			 * Despite the reboot, the write was successful,
+@@ -609,7 +632,7 @@ static void nfs_direct_commit_complete(struct nfs_commit_data *data)
+ 			 */
+ 			req->wb_nio = 0;
+ 			nfs_mark_request_commit(req, NULL, &cinfo, 0);
+-		} else /* Error or match */
++		} else
+ 			nfs_release_request(req);
+ 		nfs_unlock_and_release_request(req);
+ 	}
+@@ -662,6 +685,7 @@ static void nfs_direct_write_clear_reqs(struct nfs_direct_req *dreq)
+ 	while (!list_empty(&reqs)) {
+ 		req = nfs_list_entry(reqs.next);
+ 		nfs_list_remove_request(req);
++		nfs_direct_truncate_request(dreq, req);
+ 		nfs_release_request(req);
+ 		nfs_unlock_and_release_request(req);
+ 	}
+@@ -711,7 +735,8 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
+ 	}
+ 
+ 	nfs_direct_count_bytes(dreq, hdr);
+-	if (test_bit(NFS_IOHDR_UNSTABLE_WRITES, &hdr->flags)) {
++	if (test_bit(NFS_IOHDR_UNSTABLE_WRITES, &hdr->flags) &&
++	    !test_bit(NFS_IOHDR_ERROR, &hdr->flags)) {
+ 		if (!dreq->flags)
+ 			dreq->flags = NFS_ODIRECT_DO_COMMIT;
+ 		flags = dreq->flags;
+@@ -755,18 +780,23 @@ static void nfs_write_sync_pgio_error(struct list_head *head, int error)
+ static void nfs_direct_write_reschedule_io(struct nfs_pgio_header *hdr)
+ {
+ 	struct nfs_direct_req *dreq = hdr->dreq;
++	struct nfs_page *req;
++	struct nfs_commit_info cinfo;
+ 
+ 	trace_nfs_direct_write_reschedule_io(dreq);
+ 
++	nfs_init_cinfo_from_dreq(&cinfo, dreq);
+ 	spin_lock(&dreq->lock);
+-	if (dreq->error == 0) {
++	if (dreq->error == 0)
+ 		dreq->flags = NFS_ODIRECT_RESCHED_WRITES;
+-		/* fake unstable write to let common nfs resend pages */
+-		hdr->verf.committed = NFS_UNSTABLE;
+-		hdr->good_bytes = hdr->args.offset + hdr->args.count -
+-			hdr->io_start;
+-	}
++	set_bit(NFS_IOHDR_REDO, &hdr->flags);
+ 	spin_unlock(&dreq->lock);
++	while (!list_empty(&hdr->pages)) {
++		req = nfs_list_entry(hdr->pages.next);
++		nfs_list_remove_request(req);
++		nfs_unlock_request(req);
++		nfs_mark_request_commit(req, NULL, &cinfo, 0);
++	}
+ }
+ 
+ static const struct nfs_pgio_completion_ops nfs_direct_write_completion_ops = {
+@@ -794,9 +824,11 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+ {
+ 	struct nfs_pageio_descriptor desc;
+ 	struct inode *inode = dreq->inode;
++	struct nfs_commit_info cinfo;
+ 	ssize_t result = 0;
+ 	size_t requested_bytes = 0;
+ 	size_t wsize = max_t(size_t, NFS_SERVER(inode)->wsize, PAGE_SIZE);
++	bool defer = false;
+ 
+ 	trace_nfs_direct_write_schedule_iovec(dreq);
+ 
+@@ -837,17 +869,37 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
+ 				break;
+ 			}
+ 
+-			nfs_lock_request(req);
+-			if (!nfs_pageio_add_request(&desc, req)) {
+-				result = desc.pg_error;
+-				nfs_unlock_and_release_request(req);
+-				break;
+-			}
+ 			pgbase = 0;
+ 			bytes -= req_len;
+ 			requested_bytes += req_len;
+ 			pos += req_len;
+ 			dreq->bytes_left -= req_len;
++
++			if (defer) {
++				nfs_mark_request_commit(req, NULL, &cinfo, 0);
++				continue;
++			}
++
++			nfs_lock_request(req);
++			if (nfs_pageio_add_request(&desc, req))
++				continue;
++
++			/* Exit on hard errors */
++			if (desc.pg_error < 0 && desc.pg_error != -EAGAIN) {
++				result = desc.pg_error;
++				nfs_unlock_and_release_request(req);
++				break;
++			}
++
++			/* If the error is soft, defer remaining requests */
++			nfs_init_cinfo_from_dreq(&cinfo, dreq);
++			spin_lock(&dreq->lock);
++			dreq->flags = NFS_ODIRECT_RESCHED_WRITES;
++			spin_unlock(&dreq->lock);
++			nfs_unlock_request(req);
++			nfs_mark_request_commit(req, NULL, &cinfo, 0);
++			desc.pg_error = 0;
++			defer = true;
+ 		}
+ 		nfs_direct_release_pages(pagevec, npages);
+ 		kvfree(pagevec);
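
nfs_direct_truncate_request() clamps the request's running byte counts to the offset of a failed page, so a hard mid-write error reports only the bytes that completed before it. The arithmetic in isolation, with a demo request type:

#include <linux/types.h>

struct demo_dreq {
	loff_t io_start;	/* file offset where the I/O began */
	size_t count;		/* bytes completed so far */
	size_t max_count;	/* upper bound on the final count */
};

static void demo_truncate(struct demo_dreq *dreq, loff_t req_offset)
{
	size_t req_start = (size_t)(req_offset - dreq->io_start);

	if (req_start < dreq->max_count)
		dreq->max_count = req_start;
	if (req_start < dreq->count)
		dreq->count = req_start;
}
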
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 7deb3cd76abe4..a1dc338649062 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -1235,6 +1235,7 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg,
+ 		case -EPFNOSUPPORT:
+ 		case -EPROTONOSUPPORT:
+ 		case -EOPNOTSUPP:
++		case -EINVAL:
+ 		case -ECONNREFUSED:
+ 		case -ECONNRESET:
+ 		case -EHOSTDOWN:
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index d9114a754db73..11e3a285594c2 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -232,6 +232,8 @@ struct nfs_client *nfs4_alloc_client(const struct nfs_client_initdata *cl_init)
+ 	__set_bit(NFS_CS_DISCRTRY, &clp->cl_flags);
+ 	__set_bit(NFS_CS_NO_RETRANS_TIMEOUT, &clp->cl_flags);
+ 
++	if (test_bit(NFS_CS_DS, &cl_init->init_flags))
++		__set_bit(NFS_CS_DS, &clp->cl_flags);
+ 	/*
+ 	 * Set up the connection to the server before we add it to the
+ 	 * global list.
+@@ -415,6 +417,8 @@ static void nfs4_add_trunk(struct nfs_client *clp, struct nfs_client *old)
+ 		.net = old->cl_net,
+ 		.servername = old->cl_hostname,
+ 	};
++	int max_connect = test_bit(NFS_CS_PNFS, &clp->cl_flags) ?
++		clp->cl_max_connect : old->cl_max_connect;
+ 
+ 	if (clp->cl_proto != old->cl_proto)
+ 		return;
+@@ -428,7 +432,7 @@ static void nfs4_add_trunk(struct nfs_client *clp, struct nfs_client *old)
+ 	xprt_args.addrlen = clp_salen;
+ 
+ 	rpc_clnt_add_xprt(old->cl_rpcclient, &xprt_args,
+-			  rpc_clnt_test_and_add_xprt, NULL);
++			  rpc_clnt_test_and_add_xprt, &max_connect);
+ }
+ 
+ /**
+@@ -1007,6 +1011,9 @@ struct nfs_client *nfs4_set_ds_client(struct nfs_server *mds_srv,
+ 	if (mds_srv->flags & NFS_MOUNT_NORESVPORT)
+ 		__set_bit(NFS_CS_NORESVPORT, &cl_init.init_flags);
+ 
++	__set_bit(NFS_CS_DS, &cl_init.init_flags);
++	__set_bit(NFS_CS_PNFS, &cl_init.init_flags);
++	cl_init.max_connect = NFS_MAX_TRANSPORTS;
+ 	/*
+ 	 * Set an authflavor equal to the MDS value. Use the MDS nfs_client
+ 	 * cl_ipaddr so as to use the same EXCHANGE_ID co_ownerid as the MDS
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 3c24c3c99e8ac..51029e4b60f56 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -2703,8 +2703,12 @@ static int _nfs4_proc_open(struct nfs4_opendata *data,
+ 			return status;
+ 	}
+ 	if (!(o_res->f_attr->valid & NFS_ATTR_FATTR)) {
++		struct nfs_fh *fh = &o_res->fh;
++
+ 		nfs4_sequence_free_slot(&o_res->seq_res);
+-		nfs4_proc_getattr(server, &o_res->fh, o_res->f_attr, NULL);
++		if (o_arg->claim == NFS4_OPEN_CLAIM_FH)
++			fh = NFS_FH(d_inode(data->dentry));
++		nfs4_proc_getattr(server, fh, o_res->f_attr, NULL);
+ 	}
+ 	return 0;
+ }
+@@ -8787,6 +8791,8 @@ nfs4_run_exchange_id(struct nfs_client *clp, const struct cred *cred,
+ #ifdef CONFIG_NFS_V4_1_MIGRATION
+ 	calldata->args.flags |= EXCHGID4_FLAG_SUPP_MOVED_MIGR;
+ #endif
++	if (test_bit(NFS_CS_DS, &clp->cl_flags))
++		calldata->args.flags |= EXCHGID4_FLAG_USE_PNFS_DS;
+ 	msg.rpc_argp = &calldata->args;
+ 	msg.rpc_resp = &calldata->res;
+ 	task_setup_data.callback_data = calldata;
+@@ -8864,6 +8870,8 @@ static int _nfs4_proc_exchange_id(struct nfs_client *clp, const struct cred *cre
+ 	/* Save the EXCHANGE_ID verifier session trunk tests */
+ 	memcpy(clp->cl_confirm.data, argp->verifier.data,
+ 	       sizeof(clp->cl_confirm.data));
++	if (resp->flags & EXCHGID4_FLAG_USE_PNFS_DS)
++		set_bit(NFS_CS_DS, &clp->cl_flags);
+ out:
+ 	trace_nfs4_exchange_id(clp, status);
+ 	rpc_put_task(task);
+@@ -10614,7 +10622,9 @@ static void nfs4_disable_swap(struct inode *inode)
+ 	 */
+ 	struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
+ 
+-	nfs4_schedule_state_manager(clp);
++	set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
++	clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
++	wake_up_var(&clp->cl_state);
+ }
+ 
+ static const struct inode_operations nfs4_dir_inode_operations = {
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index e079987af4a3e..597ae4535fe33 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -1209,16 +1209,26 @@ void nfs4_schedule_state_manager(struct nfs_client *clp)
+ {
+ 	struct task_struct *task;
+ 	char buf[INET6_ADDRSTRLEN + sizeof("-manager") + 1];
++	struct rpc_clnt *clnt = clp->cl_rpcclient;
++	bool swapon = false;
+ 
+-	if (clp->cl_rpcclient->cl_shutdown)
++	if (clnt->cl_shutdown)
+ 		return;
+ 
+ 	set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
+-	if (test_and_set_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state) != 0) {
+-		wake_up_var(&clp->cl_state);
+-		return;
++
++	if (atomic_read(&clnt->cl_swapper)) {
++		swapon = !test_and_set_bit(NFS4CLNT_MANAGER_AVAILABLE,
++					   &clp->cl_state);
++		if (!swapon) {
++			wake_up_var(&clp->cl_state);
++			return;
++		}
+ 	}
+-	set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state);
++
++	if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
++		return;
++
+ 	__module_get(THIS_MODULE);
+ 	refcount_inc(&clp->cl_count);
+ 
+@@ -1235,8 +1245,9 @@ void nfs4_schedule_state_manager(struct nfs_client *clp)
+ 			__func__, PTR_ERR(task));
+ 		if (!nfs_client_init_is_complete(clp))
+ 			nfs_mark_client_ready(clp, PTR_ERR(task));
++		if (swapon)
++			clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
+ 		nfs4_clear_state_manager_bit(clp);
+-		clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
+ 		nfs_put_client(clp);
+ 		module_put(THIS_MODULE);
+ 	}
+@@ -2741,22 +2752,25 @@ static int nfs4_run_state_manager(void *ptr)
+ 
+ 	allow_signal(SIGKILL);
+ again:
+-	set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state);
+ 	nfs4_state_manager(clp);
+-	if (atomic_read(&cl->cl_swapper)) {
++
++	if (test_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state) &&
++	    !test_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state)) {
+ 		wait_var_event_interruptible(&clp->cl_state,
+ 					     test_bit(NFS4CLNT_RUN_MANAGER,
+ 						      &clp->cl_state));
+-		if (atomic_read(&cl->cl_swapper) &&
+-		    test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state))
++		if (!atomic_read(&cl->cl_swapper))
++			clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
++		if (refcount_read(&clp->cl_count) > 1 && !signalled() &&
++		    !test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state))
+ 			goto again;
+ 		/* Either no longer a swapper, or were signalled */
++		clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
+ 	}
+-	clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
+ 
+ 	if (refcount_read(&clp->cl_count) > 1 && !signalled() &&
+ 	    test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state) &&
+-	    !test_and_set_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state))
++	    !test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state))
+ 		goto again;
+ 
+ 	nfs_put_client(clp);
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index f4cca8f00c0c2..8c1ee1a1a28f1 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -59,7 +59,8 @@ static const struct nfs_pgio_completion_ops nfs_async_write_completion_ops;
+ static const struct nfs_commit_completion_ops nfs_commit_completion_ops;
+ static const struct nfs_rw_ops nfs_rw_write_ops;
+ static void nfs_inode_remove_request(struct nfs_page *req);
+-static void nfs_clear_request_commit(struct nfs_page *req);
++static void nfs_clear_request_commit(struct nfs_commit_info *cinfo,
++				     struct nfs_page *req);
+ static void nfs_init_cinfo_from_inode(struct nfs_commit_info *cinfo,
+ 				      struct inode *inode);
+ static struct nfs_page *
+@@ -502,8 +503,8 @@ nfs_destroy_unlinked_subrequests(struct nfs_page *destroy_list,
+  * the (former) group.  All subrequests are removed from any write or commit
+  * lists, unlinked from the group and destroyed.
+  */
+-void
+-nfs_join_page_group(struct nfs_page *head, struct inode *inode)
++void nfs_join_page_group(struct nfs_page *head, struct nfs_commit_info *cinfo,
++			 struct inode *inode)
+ {
+ 	struct nfs_page *subreq;
+ 	struct nfs_page *destroy_list = NULL;
+@@ -533,7 +534,7 @@ nfs_join_page_group(struct nfs_page *head, struct inode *inode)
+ 	 * Commit list removal accounting is done after locks are dropped */
+ 	subreq = head;
+ 	do {
+-		nfs_clear_request_commit(subreq);
++		nfs_clear_request_commit(cinfo, subreq);
+ 		subreq = subreq->wb_this_page;
+ 	} while (subreq != head);
+ 
+@@ -566,8 +567,10 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio)
+ {
+ 	struct inode *inode = folio_file_mapping(folio)->host;
+ 	struct nfs_page *head;
++	struct nfs_commit_info cinfo;
+ 	int ret;
+ 
++	nfs_init_cinfo_from_inode(&cinfo, inode);
+ 	/*
+ 	 * A reference is taken only on the head request which acts as a
+ 	 * reference to the whole page group - the group will not be destroyed
+@@ -584,7 +587,7 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio)
+ 		return ERR_PTR(ret);
+ 	}
+ 
+-	nfs_join_page_group(head, inode);
++	nfs_join_page_group(head, &cinfo, inode);
+ 
+ 	return head;
+ }
+@@ -955,18 +958,16 @@ static void nfs_folio_clear_commit(struct folio *folio)
+ }
+ 
+ /* Called holding the request lock on @req */
+-static void
+-nfs_clear_request_commit(struct nfs_page *req)
++static void nfs_clear_request_commit(struct nfs_commit_info *cinfo,
++				     struct nfs_page *req)
+ {
+ 	if (test_bit(PG_CLEAN, &req->wb_flags)) {
+ 		struct nfs_open_context *ctx = nfs_req_openctx(req);
+ 		struct inode *inode = d_inode(ctx->dentry);
+-		struct nfs_commit_info cinfo;
+ 
+-		nfs_init_cinfo_from_inode(&cinfo, inode);
+ 		mutex_lock(&NFS_I(inode)->commit_mutex);
+-		if (!pnfs_clear_request_commit(req, &cinfo)) {
+-			nfs_request_remove_commit_list(req, &cinfo);
++		if (!pnfs_clear_request_commit(req, cinfo)) {
++			nfs_request_remove_commit_list(req, cinfo);
+ 		}
+ 		mutex_unlock(&NFS_I(inode)->commit_mutex);
+ 		nfs_folio_clear_commit(nfs_page_to_folio(req));
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index be72628b13376..d2588f4ac42be 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -4105,6 +4105,7 @@ static __be32 nfsd4_encode_readv(struct nfsd4_compoundres *resp,
+ 				 struct file *file, unsigned long maxcount)
+ {
+ 	struct xdr_stream *xdr = resp->xdr;
++	unsigned int base = xdr->buf->page_len & ~PAGE_MASK;
+ 	unsigned int starting_len = xdr->buf->len;
+ 	__be32 zero = xdr_zero;
+ 	__be32 nfserr;
+@@ -4113,8 +4114,7 @@ static __be32 nfsd4_encode_readv(struct nfsd4_compoundres *resp,
+ 		return nfserr_resource;
+ 
+ 	nfserr = nfsd_iter_read(resp->rqstp, read->rd_fhp, file,
+-				read->rd_offset, &maxcount,
+-				xdr->buf->page_len & ~PAGE_MASK,
++				read->rd_offset, &maxcount, base,
+ 				&read->rd_eof);
+ 	read->rd_length = maxcount;
+ 	if (nfserr)
+diff --git a/fs/nilfs2/gcinode.c b/fs/nilfs2/gcinode.c
+index 48fe71d309cb4..8beb2730929d4 100644
+--- a/fs/nilfs2/gcinode.c
++++ b/fs/nilfs2/gcinode.c
+@@ -73,10 +73,8 @@ int nilfs_gccache_submit_read_data(struct inode *inode, sector_t blkoff,
+ 		struct the_nilfs *nilfs = inode->i_sb->s_fs_info;
+ 
+ 		err = nilfs_dat_translate(nilfs->ns_dat, vbn, &pbn);
+-		if (unlikely(err)) { /* -EIO, -ENOMEM, -ENOENT */
+-			brelse(bh);
++		if (unlikely(err)) /* -EIO, -ENOMEM, -ENOENT */
+ 			goto failed;
+-		}
+ 	}
+ 
+ 	lock_buffer(bh);
+@@ -102,6 +100,8 @@ int nilfs_gccache_submit_read_data(struct inode *inode, sector_t blkoff,
+  failed:
+ 	unlock_page(bh->b_page);
+ 	put_page(bh->b_page);
++	if (unlikely(err))
++		brelse(bh);
+ 	return err;
+ }
+ 
+diff --git a/fs/proc/internal.h b/fs/proc/internal.h
+index 9dda7e54b2d0d..9a8f32f21ff56 100644
+--- a/fs/proc/internal.h
++++ b/fs/proc/internal.h
+@@ -289,9 +289,7 @@ struct proc_maps_private {
+ 	struct inode *inode;
+ 	struct task_struct *task;
+ 	struct mm_struct *mm;
+-#ifdef CONFIG_MMU
+ 	struct vma_iterator iter;
+-#endif
+ #ifdef CONFIG_NUMA
+ 	struct mempolicy *task_mempolicy;
+ #endif
+diff --git a/fs/proc/task_nommu.c b/fs/proc/task_nommu.c
+index 2c8b622659814..d3e19080df4af 100644
+--- a/fs/proc/task_nommu.c
++++ b/fs/proc/task_nommu.c
+@@ -188,15 +188,28 @@ static int show_map(struct seq_file *m, void *_p)
+ 	return nommu_vma_show(m, _p);
+ }
+ 
+-static void *m_start(struct seq_file *m, loff_t *pos)
++static struct vm_area_struct *proc_get_vma(struct proc_maps_private *priv,
++						loff_t *ppos)
++{
++	struct vm_area_struct *vma = vma_next(&priv->iter);
++
++	if (vma) {
++		*ppos = vma->vm_start;
++	} else {
++		*ppos = -1UL;
++	}
++
++	return vma;
++}
++
++static void *m_start(struct seq_file *m, loff_t *ppos)
+ {
+ 	struct proc_maps_private *priv = m->private;
++	unsigned long last_addr = *ppos;
+ 	struct mm_struct *mm;
+-	struct vm_area_struct *vma;
+-	unsigned long addr = *pos;
+ 
+-	/* See m_next(). Zero at the start or after lseek. */
+-	if (addr == -1UL)
++	/* See proc_get_vma(). Zero at the start or after lseek. */
++	if (last_addr == -1UL)
+ 		return NULL;
+ 
+ 	/* pin the task and mm whilst we play with them */
+@@ -205,44 +218,41 @@ static void *m_start(struct seq_file *m, loff_t *pos)
+ 		return ERR_PTR(-ESRCH);
+ 
+ 	mm = priv->mm;
+-	if (!mm || !mmget_not_zero(mm))
++	if (!mm || !mmget_not_zero(mm)) {
++		put_task_struct(priv->task);
++		priv->task = NULL;
+ 		return NULL;
++	}
+ 
+ 	if (mmap_read_lock_killable(mm)) {
+ 		mmput(mm);
++		put_task_struct(priv->task);
++		priv->task = NULL;
+ 		return ERR_PTR(-EINTR);
+ 	}
+ 
+-	/* start the next element from addr */
+-	vma = find_vma(mm, addr);
+-	if (vma)
+-		return vma;
++	vma_iter_init(&priv->iter, mm, last_addr);
+ 
+-	mmap_read_unlock(mm);
+-	mmput(mm);
+-	return NULL;
++	return proc_get_vma(priv, ppos);
+ }
+ 
+-static void m_stop(struct seq_file *m, void *_vml)
++static void m_stop(struct seq_file *m, void *v)
+ {
+ 	struct proc_maps_private *priv = m->private;
++	struct mm_struct *mm = priv->mm;
+ 
+-	if (!IS_ERR_OR_NULL(_vml)) {
+-		mmap_read_unlock(priv->mm);
+-		mmput(priv->mm);
+-	}
+-	if (priv->task) {
+-		put_task_struct(priv->task);
+-		priv->task = NULL;
+-	}
++	if (!priv->task)
++		return;
++
++	mmap_read_unlock(mm);
++	mmput(mm);
++	put_task_struct(priv->task);
++	priv->task = NULL;
+ }
+ 
+-static void *m_next(struct seq_file *m, void *_p, loff_t *pos)
++static void *m_next(struct seq_file *m, void *_p, loff_t *ppos)
+ {
+-	struct vm_area_struct *vma = _p;
+-
+-	*pos = vma->vm_end;
+-	return find_vma(vma->vm_mm, vma->vm_end);
++	return proc_get_vma(m->private, ppos);
+ }
+ 
+ static const struct seq_operations proc_pid_maps_ops = {
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 051f15b9d6078..35782a6bede0b 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -1776,6 +1776,7 @@ static inline bool is_retryable_error(int error)
+ #define   MID_RETRY_NEEDED      8 /* session closed while this request out */
+ #define   MID_RESPONSE_MALFORMED 0x10
+ #define   MID_SHUTDOWN		 0x20
+#define   MID_RESPONSE_READY 0x40 /* ready for another process to handle the rsp */
+ 
+ /* Flags */
+ #define   MID_WAIT_CANCELLED	 1 /* Cancelled while waiting for response */
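
The new MID_RESPONSE_READY state splits "the demultiplex thread has filled in the response" (MID_RESPONSE_RECEIVED) from "the issuer may now consume or free it", so a waiter can no longer race with the reader still touching the buffer. A toy model of the two-phase completion (hypothetical reduction, not the driver's actual code):

#include <stdio.h>

enum mid_state {
	MID_REQUEST_SUBMITTED,
	MID_RESPONSE_RECEIVED,	/* reader has filled in the response */
	MID_RESPONSE_READY,	/* safe for the issuer to consume/free */
};

/* The reader publishes READY only as its very last touch. */
static void reader_complete(enum mid_state *st)
{
	if (*st == MID_RESPONSE_RECEIVED)
		*st = MID_RESPONSE_READY;
}

/* The issuer must wait out both intermediate states. */
static int issuer_may_consume(enum mid_state st)
{
	return st == MID_RESPONSE_READY;
}

int main(void)
{
	enum mid_state st = MID_RESPONSE_RECEIVED;

	printf("consume before READY? %d\n", issuer_may_consume(st));
	reader_complete(&st);
	printf("consume after READY?  %d\n", issuer_may_consume(st));
	return 0;
}

That is why every wait condition in the transport.c hunks below now treats RECEIVED like SUBMITTED: both mean "hands off".
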
+diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
+index 67e16c2ac90e6..f12203c49b802 100644
+--- a/fs/smb/client/fs_context.c
++++ b/fs/smb/client/fs_context.c
+@@ -1532,6 +1532,7 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ 
+  cifs_parse_mount_err:
+ 	kfree_sensitive(ctx->password);
++	ctx->password = NULL;
+ 	return -EINVAL;
+ }
+ 
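
The one-liner above is the classic dangling-pointer fix: kfree_sensitive() on the error path left ctx->password pointing at freed memory, and a later teardown of the same context could free it again. The idiom in plain C (userspace sketch):

#include <stdlib.h>
#include <string.h>

struct ctx {
	char *password;
};

/* Free and clear, so a second cleanup pass is a harmless no-op. */
static void drop_password(struct ctx *c)
{
	free(c->password);
	c->password = NULL;
}

int main(void)
{
	struct ctx c = { .password = strdup("secret") };

	drop_password(&c);	/* error path */
	drop_password(&c);	/* later teardown: free(NULL) is safe */
	return 0;
}
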
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index c3eeae07e1390..cb85d7977b1e3 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -2610,7 +2610,7 @@ int cifs_fiemap(struct inode *inode, struct fiemap_extent_info *fei, u64 start,
+ 	}
+ 
+ 	cifsFileInfo_put(cfile);
+-	return -ENOTSUPP;
++	return -EOPNOTSUPP;
+ }
+ 
+ int cifs_truncate_page(struct address_space *mapping, loff_t from)
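
Here and in the smb2ops.c hunks below, ENOTSUPP (524) is a kernel-internal value with no userspace definition, so returning it from a syscall surfaces as "Unknown error 524"; EOPNOTSUPP is the errno userspace actually understands. Quick demonstration (524 is hard-coded because <errno.h> deliberately does not define ENOTSUPP):

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	printf("EOPNOTSUPP (%d): %s\n", EOPNOTSUPP, strerror(EOPNOTSUPP));
	printf("ENOTSUPP   (524): %s\n", strerror(524));
	return 0;
}
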
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index dd6a423dc6e11..a5cba71c30aed 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -297,7 +297,7 @@ smb2_adjust_credits(struct TCP_Server_Info *server,
+ 		cifs_server_dbg(VFS, "request has less credits (%d) than required (%d)",
+ 				credits->value, new_val);
+ 
+-		return -ENOTSUPP;
++		return -EOPNOTSUPP;
+ 	}
+ 
+ 	spin_lock(&server->req_lock);
+@@ -1159,7 +1159,7 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
+ 			/* Use a fudge factor of 256 bytes in case we collide
+ 			 * with a different set_EAs command.
+ 			 */
+-			if(CIFSMaxBufSize - MAX_SMB2_CREATE_RESPONSE_SIZE -
++			if (CIFSMaxBufSize - MAX_SMB2_CREATE_RESPONSE_SIZE -
+ 			   MAX_SMB2_CLOSE_RESPONSE_SIZE - 256 <
+ 			   used_len + ea_name_len + ea_value_len + 1) {
+ 				rc = -ENOSPC;
+@@ -4716,7 +4716,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
+ 
+ 	if (shdr->Command != SMB2_READ) {
+ 		cifs_server_dbg(VFS, "only big read responses are supported\n");
+-		return -ENOTSUPP;
++		return -EOPNOTSUPP;
+ 	}
+ 
+ 	if (server->ops->is_session_expired &&
+diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c
+index f280502a2aee8..2b9a2ed45a652 100644
+--- a/fs/smb/client/transport.c
++++ b/fs/smb/client/transport.c
+@@ -35,6 +35,8 @@
+ void
+ cifs_wake_up_task(struct mid_q_entry *mid)
+ {
++	if (mid->mid_state == MID_RESPONSE_RECEIVED)
++		mid->mid_state = MID_RESPONSE_READY;
+ 	wake_up_process(mid->callback_data);
+ }
+ 
+@@ -87,7 +89,8 @@ static void __release_mid(struct kref *refcount)
+ 	struct TCP_Server_Info *server = midEntry->server;
+ 
+ 	if (midEntry->resp_buf && (midEntry->mid_flags & MID_WAIT_CANCELLED) &&
+-	    midEntry->mid_state == MID_RESPONSE_RECEIVED &&
++	    (midEntry->mid_state == MID_RESPONSE_RECEIVED ||
++	     midEntry->mid_state == MID_RESPONSE_READY) &&
+ 	    server->ops->handle_cancelled_mid)
+ 		server->ops->handle_cancelled_mid(midEntry, server);
+ 
+@@ -732,7 +735,8 @@ wait_for_response(struct TCP_Server_Info *server, struct mid_q_entry *midQ)
+ 	int error;
+ 
+ 	error = wait_event_state(server->response_q,
+-				 midQ->mid_state != MID_REQUEST_SUBMITTED,
++				 midQ->mid_state != MID_REQUEST_SUBMITTED &&
++				 midQ->mid_state != MID_RESPONSE_RECEIVED,
+ 				 (TASK_KILLABLE|TASK_FREEZABLE_UNSAFE));
+ 	if (error < 0)
+ 		return -ERESTARTSYS;
+@@ -885,7 +889,7 @@ cifs_sync_mid_result(struct mid_q_entry *mid, struct TCP_Server_Info *server)
+ 
+ 	spin_lock(&server->mid_lock);
+ 	switch (mid->mid_state) {
+-	case MID_RESPONSE_RECEIVED:
++	case MID_RESPONSE_READY:
+ 		spin_unlock(&server->mid_lock);
+ 		return rc;
+ 	case MID_RETRY_NEEDED:
+@@ -984,6 +988,9 @@ cifs_compound_callback(struct mid_q_entry *mid)
+ 	credits.instance = server->reconnect_instance;
+ 
+ 	add_credits(server, &credits, mid->optype);
++
++	if (mid->mid_state == MID_RESPONSE_RECEIVED)
++		mid->mid_state = MID_RESPONSE_READY;
+ }
+ 
+ static void
+@@ -1204,7 +1211,8 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ 			send_cancel(server, &rqst[i], midQ[i]);
+ 			spin_lock(&server->mid_lock);
+ 			midQ[i]->mid_flags |= MID_WAIT_CANCELLED;
+-			if (midQ[i]->mid_state == MID_REQUEST_SUBMITTED) {
++			if (midQ[i]->mid_state == MID_REQUEST_SUBMITTED ||
++			    midQ[i]->mid_state == MID_RESPONSE_RECEIVED) {
+ 				midQ[i]->callback = cifs_cancelled_callback;
+ 				cancelled_mid[i] = true;
+ 				credits[i].value = 0;
+@@ -1225,7 +1233,7 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ 		}
+ 
+ 		if (!midQ[i]->resp_buf ||
+-		    midQ[i]->mid_state != MID_RESPONSE_RECEIVED) {
++		    midQ[i]->mid_state != MID_RESPONSE_READY) {
+ 			rc = -EIO;
+ 			cifs_dbg(FYI, "Bad MID state?\n");
+ 			goto out;
+@@ -1412,7 +1420,8 @@ SendReceive(const unsigned int xid, struct cifs_ses *ses,
+ 	if (rc != 0) {
+ 		send_cancel(server, &rqst, midQ);
+ 		spin_lock(&server->mid_lock);
+-		if (midQ->mid_state == MID_REQUEST_SUBMITTED) {
++		if (midQ->mid_state == MID_REQUEST_SUBMITTED ||
++		    midQ->mid_state == MID_RESPONSE_RECEIVED) {
+ 			/* no longer considered to be "in-flight" */
+ 			midQ->callback = release_mid;
+ 			spin_unlock(&server->mid_lock);
+@@ -1429,7 +1438,7 @@ SendReceive(const unsigned int xid, struct cifs_ses *ses,
+ 	}
+ 
+ 	if (!midQ->resp_buf || !out_buf ||
+-	    midQ->mid_state != MID_RESPONSE_RECEIVED) {
++	    midQ->mid_state != MID_RESPONSE_READY) {
+ 		rc = -EIO;
+ 		cifs_server_dbg(VFS, "Bad MID state?\n");
+ 		goto out;
+@@ -1553,14 +1562,16 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ 	/* Wait for a reply - allow signals to interrupt. */
+ 	rc = wait_event_interruptible(server->response_q,
+-		(!(midQ->mid_state == MID_REQUEST_SUBMITTED)) ||
++		(!(midQ->mid_state == MID_REQUEST_SUBMITTED ||
++		   midQ->mid_state == MID_RESPONSE_RECEIVED)) ||
+ 		((server->tcpStatus != CifsGood) &&
+ 		 (server->tcpStatus != CifsNew)));
+ 
+ 	/* Were we interrupted by a signal ? */
+ 	spin_lock(&server->srv_lock);
+ 	if ((rc == -ERESTARTSYS) &&
+-		(midQ->mid_state == MID_REQUEST_SUBMITTED) &&
++		(midQ->mid_state == MID_REQUEST_SUBMITTED ||
++		 midQ->mid_state == MID_RESPONSE_RECEIVED) &&
+ 		((server->tcpStatus == CifsGood) ||
+ 		 (server->tcpStatus == CifsNew))) {
+ 		spin_unlock(&server->srv_lock);
+@@ -1591,7 +1602,8 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifs_tcon *tcon,
+ 		if (rc) {
+ 			send_cancel(server, &rqst, midQ);
+ 			spin_lock(&server->mid_lock);
+-			if (midQ->mid_state == MID_REQUEST_SUBMITTED) {
++			if (midQ->mid_state == MID_REQUEST_SUBMITTED ||
++			    midQ->mid_state == MID_RESPONSE_RECEIVED) {
+ 				/* no longer considered to be "in-flight" */
+ 				midQ->callback = release_mid;
+ 				spin_unlock(&server->mid_lock);
+@@ -1611,7 +1623,7 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifs_tcon *tcon,
+ 		return rc;
+ 
+ 	/* rcvd frame is ok */
+-	if (out_buf == NULL || midQ->mid_state != MID_RESPONSE_RECEIVED) {
++	if (out_buf == NULL || midQ->mid_state != MID_RESPONSE_READY) {
+ 		rc = -EIO;
+ 		cifs_tcon_dbg(VFS, "Bad MID state?\n");
+ 		goto out;
+diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
+index 18f5744dfb5d8..b83ef19da13de 100644
+--- a/include/linux/atomic/atomic-arch-fallback.h
++++ b/include/linux/atomic/atomic-arch-fallback.h
+@@ -459,8 +459,6 @@ raw_atomic_read_acquire(const atomic_t *v)
+ {
+ #if defined(arch_atomic_read_acquire)
+ 	return arch_atomic_read_acquire(v);
+-#elif defined(arch_atomic_read)
+-	return arch_atomic_read(v);
+ #else
+ 	int ret;
+ 
+@@ -508,8 +506,6 @@ raw_atomic_set_release(atomic_t *v, int i)
+ {
+ #if defined(arch_atomic_set_release)
+ 	arch_atomic_set_release(v, i);
+-#elif defined(arch_atomic_set)
+-	arch_atomic_set(v, i);
+ #else
+ 	if (__native_word(atomic_t)) {
+ 		smp_store_release(&(v)->counter, i);
+@@ -2575,8 +2571,6 @@ raw_atomic64_read_acquire(const atomic64_t *v)
+ {
+ #if defined(arch_atomic64_read_acquire)
+ 	return arch_atomic64_read_acquire(v);
+-#elif defined(arch_atomic64_read)
+-	return arch_atomic64_read(v);
+ #else
+ 	s64 ret;
+ 
+@@ -2624,8 +2618,6 @@ raw_atomic64_set_release(atomic64_t *v, s64 i)
+ {
+ #if defined(arch_atomic64_set_release)
+ 	arch_atomic64_set_release(v, i);
+-#elif defined(arch_atomic64_set)
+-	arch_atomic64_set(v, i);
+ #else
+ 	if (__native_word(atomic64_t)) {
+ 		smp_store_release(&(v)->counter, i);
+@@ -4657,4 +4649,4 @@ raw_atomic64_dec_if_positive(atomic64_t *v)
+ }
+ 
+ #endif /* _LINUX_ATOMIC_FALLBACK_H */
+-// 202b45c7db600ce36198eb1f1fc2c2d5268ace2d
++// 2fdd6702823fa842f9cea57a002e6e4476ae780c
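
The dropped `#elif defined(arch_atomic_read)` / `arch_atomic_set` branches were the bug: a plain relaxed read or write is not a substitute for acquire/release ordering, so the generated fallback must always take the ordered path (smp_load_acquire()/smp_store_release(), or a barrier for non-native words). The ordering the fallback has to provide, in C11 terms (userspace analogy):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int data;
static atomic_int flag;

/* Release: the write to 'data' is visible before 'flag' flips. */
static void producer(void)
{
	atomic_store_explicit(&data, 42, memory_order_relaxed);
	atomic_store_explicit(&flag, 1, memory_order_release);
}

/* Acquire: observing flag==1 guarantees we also observe data==42. */
static int consumer(void)
{
	if (atomic_load_explicit(&flag, memory_order_acquire))
		return atomic_load_explicit(&data, memory_order_relaxed);
	return -1;
}

int main(void)
{
	producer();
	printf("%d\n", consumer());
	return 0;
}
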
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index 28e2e0ce2ed07..477d91b926b35 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -425,7 +425,7 @@ static inline void bpf_long_memcpy(void *dst, const void *src, u32 size)
+ 
+ 	size /= sizeof(long);
+ 	while (size--)
+-		*ldst++ = *lsrc++;
++		data_race(*ldst++ = *lsrc++);
+ }
+ 
+ /* copy everything but bpf_spin_lock, bpf_timer, and kptrs. There could be one of each. */
+diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
+index 00950cc03bff2..b247784427d6f 100644
+--- a/include/linux/btf_ids.h
++++ b/include/linux/btf_ids.h
+@@ -49,7 +49,7 @@ word							\
+ 	____BTF_ID(symbol, word)
+ 
+ #define __ID(prefix) \
+-	__PASTE(prefix, __COUNTER__)
++	__PASTE(__PASTE(prefix, __COUNTER__), __LINE__)
+ 
+ /*
+  * The BTF_ID defines unique symbol for each ID pointing
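
__COUNTER__ restarts from zero in every translation unit, so two objects can mint the same generated symbol and collide when their sections are merged at link time; pasting __LINE__ on top makes such collisions far less likely. The token-pasting mechanics in isolation (__COUNTER__ is a GCC/Clang extension; the names in the comments are illustrative):

#include <stdio.h>

#define PASTE_(a, b) a##b
#define PASTE(a, b)  PASTE_(a, b)
/* One unique identifier per expansion: prefix + __COUNTER__ + __LINE__. */
#define UNIQUE_ID(prefix) PASTE(PASTE(prefix, __COUNTER__), __LINE__)

int UNIQUE_ID(tmp_);	/* expands to e.g. tmp_08 (counter 0, line 8) */
int UNIQUE_ID(tmp_);	/* expands to e.g. tmp_19 (counter 1, line 9) */

int main(void)
{
	printf("two distinct globals were declared\n");
	return 0;
}
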
+diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
+index 00efa35c350f6..28566624f008f 100644
+--- a/include/linux/compiler_attributes.h
++++ b/include/linux/compiler_attributes.h
+@@ -94,6 +94,19 @@
+ # define __copy(symbol)
+ #endif
+ 
++/*
++ * Optional: only supported since gcc >= 14
++ * Optional: only supported since clang >= 18
++ *
++ *   gcc: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108896
++ * clang: https://reviews.llvm.org/D148381
++ */
++#if __has_attribute(__counted_by__)
++# define __counted_by(member)		__attribute__((__counted_by__(member)))
++#else
++# define __counted_by(member)
++#endif
++
+ /*
+  * Optional: not supported by gcc
+  * Optional: only supported since clang >= 14.0
+@@ -129,19 +142,6 @@
+ # define __designated_init
+ #endif
+ 
+-/*
+- * Optional: only supported since gcc >= 14
+- * Optional: only supported since clang >= 17
+- *
+- *   gcc: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108896
+- * clang: https://reviews.llvm.org/D148381
+- */
+-#if __has_attribute(__element_count__)
+-# define __counted_by(member)		__attribute__((__element_count__(#member)))
+-#else
+-# define __counted_by(member)
+-#endif
+-
+ /*
+  * Optional: only supported since clang >= 14.0
+  *
+diff --git a/include/linux/if_team.h b/include/linux/if_team.h
+index 8de6b6e678295..34bcba5a70677 100644
+--- a/include/linux/if_team.h
++++ b/include/linux/if_team.h
+@@ -189,6 +189,8 @@ struct team {
+ 	struct net_device *dev; /* associated netdevice */
+ 	struct team_pcpu_stats __percpu *pcpu_stats;
+ 
++	const struct header_ops *header_ops_cache;
++
+ 	struct mutex lock; /* used for overall locking, e.g. port lists write */
+ 
+ 	/*
+diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
+index a92bce40b04b3..4a1dc88ddbff9 100644
+--- a/include/linux/interrupt.h
++++ b/include/linux/interrupt.h
+@@ -569,8 +569,12 @@ enum
+  * 	2) rcu_report_dead() reports the final quiescent states.
+  *
+  * _ IRQ_POLL: irq_poll_cpu_dead() migrates the queue
++ *
++ * _ (HR)TIMER_SOFTIRQ: (hr)timers_dead_cpu() migrates the queue
+  */
+-#define SOFTIRQ_HOTPLUG_SAFE_MASK (BIT(RCU_SOFTIRQ) | BIT(IRQ_POLL_SOFTIRQ))
++#define SOFTIRQ_HOTPLUG_SAFE_MASK (BIT(TIMER_SOFTIRQ) | BIT(IRQ_POLL_SOFTIRQ) |\
++				   BIT(HRTIMER_SOFTIRQ) | BIT(RCU_SOFTIRQ))
++
+ 
+ /* map softirq index to softirq name. update 'softirq_to_name' in
+  * kernel/softirq.c when adding a new softirq.
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index f5bb4415c5e2d..19ddc6c804008 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -259,7 +259,7 @@ enum {
+ 	 * advised to wait only for the following duration before
+ 	 * doing SRST.
+ 	 */
+-	ATA_TMOUT_PMP_SRST_WAIT	= 5000,
++	ATA_TMOUT_PMP_SRST_WAIT	= 10000,
+ 
+ 	/* When the LPM policy is set to ATA_LPM_MAX_POWER, there might
+ 	 * be a spurious PHY event, so ignore the first PHY event that
+@@ -1155,6 +1155,7 @@ extern int ata_std_bios_param(struct scsi_device *sdev,
+ 			      struct block_device *bdev,
+ 			      sector_t capacity, int geom[]);
+ extern void ata_scsi_unlock_native_capacity(struct scsi_device *sdev);
++extern int ata_scsi_slave_alloc(struct scsi_device *sdev);
+ extern int ata_scsi_slave_config(struct scsi_device *sdev);
+ extern void ata_scsi_slave_destroy(struct scsi_device *sdev);
+ extern int ata_scsi_change_queue_depth(struct scsi_device *sdev,
+@@ -1408,6 +1409,7 @@ extern const struct attribute_group *ata_common_sdev_groups[];
+ 	.this_id		= ATA_SHT_THIS_ID,		\
+ 	.emulated		= ATA_SHT_EMULATED,		\
+ 	.proc_name		= drv_name,			\
++	.slave_alloc		= ata_scsi_slave_alloc,		\
+ 	.slave_destroy		= ata_scsi_slave_destroy,	\
+ 	.bios_param		= ata_std_bios_param,		\
+ 	.unlock_native_capacity	= ata_scsi_unlock_native_capacity,\
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index dbf26bc89dd46..6d2a771debbad 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -919,7 +919,7 @@ unsigned long mem_cgroup_get_zone_lru_size(struct lruvec *lruvec,
+ 	return READ_ONCE(mz->lru_zone_size[zone_idx][lru]);
+ }
+ 
+-void mem_cgroup_handle_over_high(void);
++void mem_cgroup_handle_over_high(gfp_t gfp_mask);
+ 
+ unsigned long mem_cgroup_get_max(struct mem_cgroup *memcg);
+ 
+@@ -1460,7 +1460,7 @@ static inline void mem_cgroup_unlock_pages(void)
+ 	rcu_read_unlock();
+ }
+ 
+-static inline void mem_cgroup_handle_over_high(void)
++static inline void mem_cgroup_handle_over_high(gfp_t gfp_mask)
+ {
+ }
+ 
+diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
+index 20eeba8b009df..cd628c4b011e5 100644
+--- a/include/linux/nfs_fs_sb.h
++++ b/include/linux/nfs_fs_sb.h
+@@ -48,6 +48,7 @@ struct nfs_client {
+ #define NFS_CS_NOPING		6		/* - don't ping on connect */
+ #define NFS_CS_DS		7		/* - Server is a DS */
+ #define NFS_CS_REUSEPORT	8		/* - reuse src port on reconnect */
++#define NFS_CS_PNFS		9		/* - Server used for pnfs */
+ 	struct sockaddr_storage	cl_addr;	/* server identifier */
+ 	size_t			cl_addrlen;
+ 	char *			cl_hostname;	/* hostname of server */
+diff --git a/include/linux/nfs_page.h b/include/linux/nfs_page.h
+index aa9f4c6ebe261..1c315f854ea80 100644
+--- a/include/linux/nfs_page.h
++++ b/include/linux/nfs_page.h
+@@ -157,7 +157,9 @@ extern	void nfs_unlock_request(struct nfs_page *req);
+ extern	void nfs_unlock_and_release_request(struct nfs_page *);
+ extern	struct nfs_page *nfs_page_group_lock_head(struct nfs_page *req);
+ extern	int nfs_page_group_lock_subrequests(struct nfs_page *head);
+-extern	void nfs_join_page_group(struct nfs_page *head, struct inode *inode);
++extern void nfs_join_page_group(struct nfs_page *head,
++				struct nfs_commit_info *cinfo,
++				struct inode *inode);
+ extern int nfs_page_group_lock(struct nfs_page *);
+ extern void nfs_page_group_unlock(struct nfs_page *);
+ extern bool nfs_page_group_sync_on_bit(struct nfs_page *, unsigned int);
+diff --git a/include/linux/resume_user_mode.h b/include/linux/resume_user_mode.h
+index 2851894544496..f8f3e958e9cf2 100644
+--- a/include/linux/resume_user_mode.h
++++ b/include/linux/resume_user_mode.h
+@@ -55,7 +55,7 @@ static inline void resume_user_mode_work(struct pt_regs *regs)
+ 	}
+ #endif
+ 
+-	mem_cgroup_handle_over_high();
++	mem_cgroup_handle_over_high(GFP_KERNEL);
+ 	blkcg_maybe_throttle_current();
+ 
+ 	rseq_handle_notify_resume(NULL, regs);
+diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
+index 987a59d977c56..e9bd2f65d7f4e 100644
+--- a/include/linux/seqlock.h
++++ b/include/linux/seqlock.h
+@@ -512,8 +512,8 @@ do {									\
+ 
+ static inline void do_write_seqcount_begin_nested(seqcount_t *s, int subclass)
+ {
+-	do_raw_write_seqcount_begin(s);
+ 	seqcount_acquire(&s->dep_map, subclass, 0, _RET_IP_);
++	do_raw_write_seqcount_begin(s);
+ }
+ 
+ /**
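
Swapping the two calls means the lockdep annotation (which may emit a splat through printk) runs before the sequence count goes odd; once the count is odd, readers spin until the writer finishes, so anything noisy inside the write section can deadlock against a reader sitting in the console path. A stripped-down seqcount showing why readers must wait out an odd count (single-threaded demo; the real primitive relies on the kernel's barriers):

#include <stdatomic.h>
#include <stdio.h>

static atomic_uint seq;	/* odd = write in progress */
static int value;

static void write_begin(void) { atomic_fetch_add_explicit(&seq, 1, memory_order_release); }
static void write_end(void)   { atomic_fetch_add_explicit(&seq, 1, memory_order_release); }

static int read_value(void)
{
	unsigned int s1, s2;
	int v;

	do {
		s1 = atomic_load_explicit(&seq, memory_order_acquire);
		v  = value;
		s2 = atomic_load_explicit(&seq, memory_order_acquire);
	} while ((s1 & 1) || s1 != s2);	/* retry while a writer is inside */
	return v;
}

int main(void)
{
	write_begin();
	value = 7;
	write_end();
	printf("%d\n", read_value());
	return 0;
}
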
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index dd40c75011d25..7c816359d5a98 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1682,7 +1682,7 @@ struct nft_trans_gc {
+ 	struct net		*net;
+ 	struct nft_set		*set;
+ 	u32			seq;
+-	u8			count;
++	u16			count;
+ 	void			*priv[NFT_TRANS_GC_BATCHCOUNT];
+ 	struct rcu_head		rcu;
+ };
+@@ -1700,8 +1700,9 @@ void nft_trans_gc_queue_sync_done(struct nft_trans_gc *trans);
+ 
+ void nft_trans_gc_elem_add(struct nft_trans_gc *gc, void *priv);
+ 
+-struct nft_trans_gc *nft_trans_gc_catchall(struct nft_trans_gc *gc,
+-					   unsigned int gc_seq);
++struct nft_trans_gc *nft_trans_gc_catchall_async(struct nft_trans_gc *gc,
++						 unsigned int gc_seq);
++struct nft_trans_gc *nft_trans_gc_catchall_sync(struct nft_trans_gc *gc);
+ 
+ void nft_setelem_data_deactivate(const struct net *net,
+ 				 const struct nft_set *set,
+diff --git a/include/scsi/scsi.h b/include/scsi/scsi.h
+index ec093594ba53d..4498f845b1122 100644
+--- a/include/scsi/scsi.h
++++ b/include/scsi/scsi.h
+@@ -157,6 +157,9 @@ enum scsi_disposition {
+ #define SCSI_3          4        /* SPC */
+ #define SCSI_SPC_2      5
+ #define SCSI_SPC_3      6
++#define SCSI_SPC_4	7
++#define SCSI_SPC_5	8
++#define SCSI_SPC_6	14
+ 
+ /*
+  * INQ PERIPHERAL QUALIFIERS
+diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
+index b9230b6add041..fd41fdac0a8e6 100644
+--- a/include/scsi/scsi_device.h
++++ b/include/scsi/scsi_device.h
+@@ -161,6 +161,10 @@ struct scsi_device {
+ 				 * pass settings from slave_alloc to scsi
+ 				 * core. */
+ 	unsigned int eh_timeout; /* Error handling timeout */
++
++	bool manage_system_start_stop; /* Let HLD (sd) manage system start/stop */
++	bool manage_runtime_start_stop; /* Let HLD (sd) manage runtime start/stop */
++
+ 	unsigned removable:1;
+ 	unsigned changed:1;	/* Data invalid due to media change */
+ 	unsigned busy:1;	/* Used to prevent races */
+@@ -193,7 +197,6 @@ struct scsi_device {
+ 	unsigned use_192_bytes_for_3f:1; /* ask for 192 bytes from page 0x3f */
+ 	unsigned no_start_on_add:1;	/* do not issue start on add */
+ 	unsigned allow_restart:1; /* issue START_UNIT in error handler */
+-	unsigned manage_start_stop:1;	/* Let HLD (sd) manage start/stop */
+ 	unsigned no_start_on_resume:1; /* Do not issue START_STOP_UNIT on resume */
+ 	unsigned start_stop_pwr_cond:1;	/* Set power cond. in START_STOP_UNIT */
+ 	unsigned no_uld_attach:1; /* disable connecting to upper level drivers */
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index 60a9d59beeabb..25f668165b567 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -1897,7 +1897,9 @@ union bpf_attr {
+  * 		performed again, if the helper is used in combination with
+  * 		direct packet access.
+  * 	Return
+- * 		0 on success, or a negative error in case of failure.
++ * 		0 on success, or a negative error in case of failure. Positive
++ * 		error indicates a potential drop or congestion in the target
++ * 		device. The particular positive error codes are not defined.
+  *
+  * u64 bpf_get_current_pid_tgid(void)
+  * 	Description
+diff --git a/include/uapi/linux/stddef.h b/include/uapi/linux/stddef.h
+index 7837ba4fe7289..5c6c4269f7efe 100644
+--- a/include/uapi/linux/stddef.h
++++ b/include/uapi/linux/stddef.h
+@@ -29,6 +29,11 @@
+ 		struct TAG { MEMBERS } ATTRS NAME; \
+ 	}
+ 
++#ifdef __cplusplus
++/* sizeof(struct{}) is 1 in C++, not 0, can't use C version of the macro. */
++#define __DECLARE_FLEX_ARRAY(T, member)	\
++	T member[0]
++#else
+ /**
+  * __DECLARE_FLEX_ARRAY() - Declare a flexible array usable in a union
+  *
+@@ -45,3 +50,9 @@
+ 		TYPE NAME[]; \
+ 	}
+ #endif
++
++#ifndef __counted_by
++#define __counted_by(m)
++#endif
++
++#endif /* _UAPI_LINUX_STDDEF_H */
+diff --git a/io_uring/fs.c b/io_uring/fs.c
+index f6a69a549fd45..08e3b175469c6 100644
+--- a/io_uring/fs.c
++++ b/io_uring/fs.c
+@@ -243,7 +243,7 @@ int io_linkat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	struct io_link *lnk = io_kiocb_to_cmd(req, struct io_link);
+ 	const char __user *oldf, *newf;
+ 
+-	if (sqe->rw_flags || sqe->buf_index || sqe->splice_fd_in)
++	if (sqe->buf_index || sqe->splice_fd_in)
+ 		return -EINVAL;
+ 	if (unlikely(req->flags & REQ_F_FIXED_FILE))
+ 		return -EBADF;
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 4b38c97990872..197d8252ffc65 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -8498,7 +8498,7 @@ bool btf_nested_type_is_trusted(struct bpf_verifier_log *log,
+ 	tname = btf_name_by_offset(btf, walk_type->name_off);
+ 
+ 	ret = snprintf(safe_tname, sizeof(safe_tname), "%s%s", tname, suffix);
+-	if (ret < 0)
++	if (ret >= sizeof(safe_tname))
+ 		return false;
+ 
+ 	safe_id = btf_find_by_name_kind(btf, safe_tname, BTF_INFO_KIND(walk_type->info));
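
snprintf() returns the length the output would have had, not an error, when the buffer is too small; checking only `ret < 0` therefore let silently truncated type names through to the lookup. The portable truncation check:

#include <stdio.h>

int main(void)
{
	char buf[8];
	int ret = snprintf(buf, sizeof(buf), "%s%s", "very_long", "_suffix");

	if (ret < 0 || (size_t)ret >= sizeof(buf))
		printf("truncated or error (needed %d bytes)\n", ret);
	else
		printf("ok: %s\n", buf);
	return 0;
}
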
+diff --git a/kernel/bpf/offload.c b/kernel/bpf/offload.c
+index 8a26cd8814c1b..e842229123ffc 100644
+--- a/kernel/bpf/offload.c
++++ b/kernel/bpf/offload.c
+@@ -198,12 +198,14 @@ static int __bpf_prog_dev_bound_init(struct bpf_prog *prog, struct net_device *n
+ 	offload->netdev = netdev;
+ 
+ 	ondev = bpf_offload_find_netdev(offload->netdev);
++	/* When the program is offloaded, require the presence of a "true"
++	 * bpf_offload_netdev; avoid the one created for the !ondev case below.
++	 */
++	if (bpf_prog_is_offloaded(prog->aux) && (!ondev || !ondev->offdev)) {
++		err = -EINVAL;
++		goto err_free;
++	}
+ 	if (!ondev) {
+-		if (bpf_prog_is_offloaded(prog->aux)) {
+-			err = -EINVAL;
+-			goto err_free;
+-		}
+-
+ 		/* When only binding to the device, explicitly
+ 		 * create an entry in the hashtable.
+ 		 */
+diff --git a/kernel/bpf/queue_stack_maps.c b/kernel/bpf/queue_stack_maps.c
+index 8d2ddcb7566b7..d869f51ea93a0 100644
+--- a/kernel/bpf/queue_stack_maps.c
++++ b/kernel/bpf/queue_stack_maps.c
+@@ -98,7 +98,12 @@ static long __queue_map_get(struct bpf_map *map, void *value, bool delete)
+ 	int err = 0;
+ 	void *ptr;
+ 
+-	raw_spin_lock_irqsave(&qs->lock, flags);
++	if (in_nmi()) {
++		if (!raw_spin_trylock_irqsave(&qs->lock, flags))
++			return -EBUSY;
++	} else {
++		raw_spin_lock_irqsave(&qs->lock, flags);
++	}
+ 
+ 	if (queue_stack_map_is_empty(qs)) {
+ 		memset(value, 0, qs->map.value_size);
+@@ -128,7 +133,12 @@ static long __stack_map_get(struct bpf_map *map, void *value, bool delete)
+ 	void *ptr;
+ 	u32 index;
+ 
+-	raw_spin_lock_irqsave(&qs->lock, flags);
++	if (in_nmi()) {
++		if (!raw_spin_trylock_irqsave(&qs->lock, flags))
++			return -EBUSY;
++	} else {
++		raw_spin_lock_irqsave(&qs->lock, flags);
++	}
+ 
+ 	if (queue_stack_map_is_empty(qs)) {
+ 		memset(value, 0, qs->map.value_size);
+@@ -193,7 +203,12 @@ static long queue_stack_map_push_elem(struct bpf_map *map, void *value,
+ 	if (flags & BPF_NOEXIST || flags > BPF_EXIST)
+ 		return -EINVAL;
+ 
+-	raw_spin_lock_irqsave(&qs->lock, irq_flags);
++	if (in_nmi()) {
++		if (!raw_spin_trylock_irqsave(&qs->lock, irq_flags))
++			return -EBUSY;
++	} else {
++		raw_spin_lock_irqsave(&qs->lock, irq_flags);
++	}
+ 
+ 	if (queue_stack_map_is_full(qs)) {
+ 		if (!replace) {
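
All three map operations get the same treatment: an NMI can fire while the interrupted CPU already holds qs->lock, and a plain raw_spin_lock_irqsave() from NMI context would then spin on itself forever. Trylock-and-bail is the only safe option there. Userspace analog of the pattern, with a reentrant caller standing in for the NMI (names hypothetical):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* A reentrant caller (think: NMI) must never block on a lock it may own. */
static int try_push(int nmi_like_ctx)
{
	if (nmi_like_ctx) {
		if (pthread_mutex_trylock(&lock) != 0)
			return -EBUSY;	/* would self-deadlock: give up */
	} else {
		pthread_mutex_lock(&lock);
	}
	/* ... update the queue ... */
	pthread_mutex_unlock(&lock);
	return 0;
}

int main(void)
{
	pthread_mutex_lock(&lock);		/* simulate the interrupted holder */
	printf("nmi path:  %d\n", try_push(1));	/* -EBUSY, no deadlock */
	pthread_mutex_unlock(&lock);
	printf("task path: %d\n", try_push(0));	/* 0 */
	return 0;
}
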
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index f190651bcaddc..06366acd27b08 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -637,15 +637,19 @@ static struct dma_debug_entry *__dma_entry_alloc(void)
+ 	return entry;
+ }
+ 
+-static void __dma_entry_alloc_check_leak(void)
++/*
++ * This should be called outside of free_entries_lock scope to avoid potential
++ * deadlocks with serial consoles that use DMA.
++ */
++static void __dma_entry_alloc_check_leak(u32 nr_entries)
+ {
+-	u32 tmp = nr_total_entries % nr_prealloc_entries;
++	u32 tmp = nr_entries % nr_prealloc_entries;
+ 
+ 	/* Shout each time we tick over some multiple of the initial pool */
+ 	if (tmp < DMA_DEBUG_DYNAMIC_ENTRIES) {
+ 		pr_info("dma_debug_entry pool grown to %u (%u00%%)\n",
+-			nr_total_entries,
+-			(nr_total_entries / nr_prealloc_entries));
++			nr_entries,
++			(nr_entries / nr_prealloc_entries));
+ 	}
+ }
+ 
+@@ -656,8 +660,10 @@ static void __dma_entry_alloc_check_leak(void)
+  */
+ static struct dma_debug_entry *dma_entry_alloc(void)
+ {
++	bool alloc_check_leak = false;
+ 	struct dma_debug_entry *entry;
+ 	unsigned long flags;
++	u32 nr_entries;
+ 
+ 	spin_lock_irqsave(&free_entries_lock, flags);
+ 	if (num_free_entries == 0) {
+@@ -667,13 +673,17 @@ static struct dma_debug_entry *dma_entry_alloc(void)
+ 			pr_err("debugging out of memory - disabling\n");
+ 			return NULL;
+ 		}
+-		__dma_entry_alloc_check_leak();
++		alloc_check_leak = true;
++		nr_entries = nr_total_entries;
+ 	}
+ 
+ 	entry = __dma_entry_alloc();
+ 
+ 	spin_unlock_irqrestore(&free_entries_lock, flags);
+ 
++	if (alloc_check_leak)
++		__dma_entry_alloc_check_leak(nr_entries);
++
+ #ifdef CONFIG_STACKTRACE
+ 	entry->stack_len = stack_trace_save(entry->stack_entries,
+ 					    ARRAY_SIZE(entry->stack_entries),
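
The leak warning used to be printed with free_entries_lock held; if the console itself does DMA, the print path can recurse into dma-debug and deadlock, exactly as the new comment says. The fix is the general "decide under the lock, report outside it" pattern: snapshot what the message needs, drop the lock, then print. In miniature (userspace sketch):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int total;

static void alloc_one(void)
{
	int report = 0;
	unsigned int snapshot = 0;

	pthread_mutex_lock(&lock);
	if (++total % 4 == 0) {		/* decide under the lock */
		report = 1;
		snapshot = total;	/* copy what the report needs */
	}
	pthread_mutex_unlock(&lock);

	if (report)			/* slow, possibly reentrant work runs lock-free */
		printf("pool grown to %u\n", snapshot);
}

int main(void)
{
	for (int i = 0; i < 8; i++)
		alloc_one();
	return 0;
}
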
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index 2b83e3ad9dca1..aa0a4a220719a 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -350,14 +350,14 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
+ 	}
+ 
+ 	mem->areas = memblock_alloc(array_size(sizeof(struct io_tlb_area),
+-		default_nareas), SMP_CACHE_BYTES);
++		nareas), SMP_CACHE_BYTES);
+ 	if (!mem->areas) {
+ 		pr_warn("%s: Failed to allocate mem->areas.\n", __func__);
+ 		return;
+ 	}
+ 
+ 	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, flags, false,
+-				default_nareas);
++				nareas);
+ 
+ 	if (flags & SWIOTLB_VERBOSE)
+ 		swiotlb_print_info();
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index c52c2eba7c739..e8f73ff12126c 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -9271,7 +9271,7 @@ void __init init_idle(struct task_struct *idle, int cpu)
+ 	 * PF_KTHREAD should already be set at this point; regardless, make it
+ 	 * look like a proper per-CPU kthread.
+ 	 */
+-	idle->flags |= PF_IDLE | PF_KTHREAD | PF_NO_SETAFFINITY;
++	idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
+ 	kthread_set_per_cpu(idle, cpu);
+ 
+ #ifdef CONFIG_SMP
+diff --git a/kernel/sched/cpupri.c b/kernel/sched/cpupri.c
+index a286e726eb4b8..42c40cfdf8363 100644
+--- a/kernel/sched/cpupri.c
++++ b/kernel/sched/cpupri.c
+@@ -101,6 +101,7 @@ static inline int __cpupri_find(struct cpupri *cp, struct task_struct *p,
+ 
+ 	if (lowest_mask) {
+ 		cpumask_and(lowest_mask, &p->cpus_mask, vec->mask);
++		cpumask_and(lowest_mask, lowest_mask, cpu_active_mask);
+ 
+ 		/*
+ 		 * We have to ensure that we have at least one bit
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index 342f58a329f52..5007b25c5bc65 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -373,6 +373,7 @@ EXPORT_SYMBOL_GPL(play_idle_precise);
+ 
+ void cpu_startup_entry(enum cpuhp_state state)
+ {
++	current->flags |= PF_IDLE;
+ 	arch_cpu_idle_prepare();
+ 	cpuhp_online_idle(state);
+ 	while (1)
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index abf287b2678a1..bb0d8b9c09e7c 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -2772,6 +2772,17 @@ static int get_modules_for_addrs(struct module ***mods, unsigned long *addrs, u3
+ 	return arr.mods_cnt;
+ }
+ 
++static int addrs_check_error_injection_list(unsigned long *addrs, u32 cnt)
++{
++	u32 i;
++
++	for (i = 0; i < cnt; i++) {
++		if (!within_error_injection_list(addrs[i]))
++			return -EINVAL;
++	}
++	return 0;
++}
++
+ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
+ {
+ 	struct bpf_kprobe_multi_link *link = NULL;
+@@ -2849,6 +2860,11 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
+ 			goto error;
+ 	}
+ 
++	if (prog->kprobe_override && addrs_check_error_injection_list(addrs, cnt)) {
++		err = -EINVAL;
++		goto error;
++	}
++
+ 	link = kzalloc(sizeof(*link), GFP_KERNEL);
+ 	if (!link) {
+ 		err = -ENOMEM;
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 52dea5dd5362e..da665764dd4d1 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -354,6 +354,11 @@ static void rb_init_page(struct buffer_data_page *bpage)
+ 	local_set(&bpage->commit, 0);
+ }
+ 
++static __always_inline unsigned int rb_page_commit(struct buffer_page *bpage)
++{
++	return local_read(&bpage->page->commit);
++}
++
+ static void free_buffer_page(struct buffer_page *bpage)
+ {
+ 	free_page((unsigned long)bpage->page);
+@@ -1137,6 +1142,9 @@ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
+ 	if (full) {
+ 		poll_wait(filp, &work->full_waiters, poll_table);
+ 		work->full_waiters_pending = true;
++		if (!cpu_buffer->shortest_full ||
++		    cpu_buffer->shortest_full > full)
++			cpu_buffer->shortest_full = full;
+ 	} else {
+ 		poll_wait(filp, &work->waiters, poll_table);
+ 		work->waiters_pending = true;
+@@ -2011,7 +2019,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
+ 			 * Increment overrun to account for the lost events.
+ 			 */
+ 			local_add(page_entries, &cpu_buffer->overrun);
+-			local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes);
++			local_sub(rb_page_commit(to_remove_page), &cpu_buffer->entries_bytes);
+ 			local_inc(&cpu_buffer->pages_lost);
+ 		}
+ 
+@@ -2206,6 +2214,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ 				err = -ENOMEM;
+ 				goto out_err;
+ 			}
++
++			cond_resched();
+ 		}
+ 
+ 		cpus_read_lock();
+@@ -2373,11 +2383,6 @@ rb_reader_event(struct ring_buffer_per_cpu *cpu_buffer)
+ 			       cpu_buffer->reader_page->read);
+ }
+ 
+-static __always_inline unsigned rb_page_commit(struct buffer_page *bpage)
+-{
+-	return local_read(&bpage->page->commit);
+-}
+-
+ static struct ring_buffer_event *
+ rb_iter_head_event(struct ring_buffer_iter *iter)
+ {
+@@ -2396,6 +2401,11 @@ rb_iter_head_event(struct ring_buffer_iter *iter)
+ 	 */
+ 	commit = rb_page_commit(iter_head_page);
+ 	smp_rmb();
++
++	/* An event needs to be at least 8 bytes in size */
++	if (iter->head > commit - 8)
++		goto reset;
++
+ 	event = __rb_page_index(iter_head_page, iter->head);
+ 	length = rb_event_length(event);
+ 
+@@ -2518,7 +2528,7 @@ rb_handle_head_page(struct ring_buffer_per_cpu *cpu_buffer,
+ 		 * the counters.
+ 		 */
+ 		local_add(entries, &cpu_buffer->overrun);
+-		local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes);
++		local_sub(rb_page_commit(next_page), &cpu_buffer->entries_bytes);
+ 		local_inc(&cpu_buffer->pages_lost);
+ 
+ 		/*
+@@ -2661,9 +2671,6 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer,
+ 
+ 	event = __rb_page_index(tail_page, tail);
+ 
+-	/* account for padding bytes */
+-	local_add(BUF_PAGE_SIZE - tail, &cpu_buffer->entries_bytes);
+-
+ 	/*
+ 	 * Save the original length to the meta data.
+ 	 * This will be used by the reader to add lost event
+@@ -2677,7 +2684,8 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer,
+ 	 * write counter enough to allow another writer to slip
+ 	 * in on this page.
+ 	 * We put in a discarded commit instead, to make sure
+-	 * that this space is not used again.
++	 * that this space is not used again, and this space will
++	 * not be accounted into 'entries_bytes'.
+ 	 *
+ 	 * If we are less than the minimum size, we don't need to
+ 	 * worry about it.
+@@ -2702,6 +2710,9 @@ rb_reset_tail(struct ring_buffer_per_cpu *cpu_buffer,
+ 	/* time delta must be non zero */
+ 	event->time_delta = 1;
+ 
++	/* account for padding bytes */
++	local_add(BUF_PAGE_SIZE - tail, &cpu_buffer->entries_bytes);
++
+ 	/* Make sure the padding is visible before the tail_page->write update */
+ 	smp_wmb();
+ 
+@@ -4216,7 +4227,7 @@ u64 ring_buffer_oldest_event_ts(struct trace_buffer *buffer, int cpu)
+ EXPORT_SYMBOL_GPL(ring_buffer_oldest_event_ts);
+ 
+ /**
+- * ring_buffer_bytes_cpu - get the number of bytes consumed in a cpu buffer
++ * ring_buffer_bytes_cpu - get the number of bytes unconsumed in a cpu buffer
+  * @buffer: The ring buffer
+  * @cpu: The per CPU buffer to read from.
+  */
+@@ -4724,6 +4735,7 @@ static void rb_advance_reader(struct ring_buffer_per_cpu *cpu_buffer)
+ 
+ 	length = rb_event_length(event);
+ 	cpu_buffer->reader_page->read += length;
++	cpu_buffer->read_bytes += length;
+ }
+ 
+ static void rb_advance_iter(struct ring_buffer_iter *iter)
+@@ -5817,7 +5829,7 @@ int ring_buffer_read_page(struct trace_buffer *buffer,
+ 	} else {
+ 		/* update the entry counter */
+ 		cpu_buffer->read += rb_page_entries(reader);
+-		cpu_buffer->read_bytes += BUF_PAGE_SIZE;
++		cpu_buffer->read_bytes += rb_page_commit(reader);
+ 
+ 		/* swap the pages */
+ 		rb_init_page(bpage);
+diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
+index 33cb6af31f395..d4755d4f744cc 100644
+--- a/kernel/trace/trace_events_user.c
++++ b/kernel/trace/trace_events_user.c
+@@ -127,8 +127,13 @@ struct user_event_enabler {
+ /* Bit 7 is for freeing status of enablement */
+ #define ENABLE_VAL_FREEING_BIT 7
+ 
+-/* Only duplicate the bit value */
+-#define ENABLE_VAL_DUP_MASK ENABLE_VAL_BIT_MASK
++/* Bit 8 is for marking 32-bit on 64-bit */
++#define ENABLE_VAL_32_ON_64_BIT 8
++
++#define ENABLE_VAL_COMPAT_MASK (1 << ENABLE_VAL_32_ON_64_BIT)
++
++/* Only duplicate the bit and compat values */
++#define ENABLE_VAL_DUP_MASK (ENABLE_VAL_BIT_MASK | ENABLE_VAL_COMPAT_MASK)
+ 
+ #define ENABLE_BITOPS(e) (&(e)->values)
+ 
+@@ -174,6 +179,30 @@ struct user_event_validator {
+ 	int			flags;
+ };
+ 
++static inline void align_addr_bit(unsigned long *addr, int *bit,
++				  unsigned long *flags)
++{
++	if (IS_ALIGNED(*addr, sizeof(long))) {
++#ifdef __BIG_ENDIAN
++		/* 32 bit on BE 64 bit requires a 32 bit offset when aligned. */
++		if (test_bit(ENABLE_VAL_32_ON_64_BIT, flags))
++			*bit += 32;
++#endif
++		return;
++	}
++
++	*addr = ALIGN_DOWN(*addr, sizeof(long));
++
++	/*
++	 * We only support 32 and 64 bit values. The only time we need
++	 * to align is a 32 bit value on a 64 bit kernel, which on LE
++	 * is always 32 bits, and on BE requires no change when unaligned.
++	 */
++#ifdef __LITTLE_ENDIAN
++	*bit += 32;
++#endif
++}
++
+ typedef void (*user_event_func_t) (struct user_event *user, struct iov_iter *i,
+ 				   void *tpdata, bool *faulted);
+ 
+@@ -482,6 +511,7 @@ static int user_event_enabler_write(struct user_event_mm *mm,
+ 	unsigned long *ptr;
+ 	struct page *page;
+ 	void *kaddr;
++	int bit = ENABLE_BIT(enabler);
+ 	int ret;
+ 
+ 	lockdep_assert_held(&event_mutex);
+@@ -497,6 +527,8 @@ static int user_event_enabler_write(struct user_event_mm *mm,
+ 		     test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler))))
+ 		return -EBUSY;
+ 
++	align_addr_bit(&uaddr, &bit, ENABLE_BITOPS(enabler));
++
+ 	ret = pin_user_pages_remote(mm->mm, uaddr, 1, FOLL_WRITE | FOLL_NOFAULT,
+ 				    &page, NULL);
+ 
+@@ -515,9 +547,9 @@ static int user_event_enabler_write(struct user_event_mm *mm,
+ 
+ 	/* Update bit atomically, user tracers must be atomic as well */
+ 	if (enabler->event && enabler->event->status)
+-		set_bit(ENABLE_BIT(enabler), ptr);
++		set_bit(bit, ptr);
+ 	else
+-		clear_bit(ENABLE_BIT(enabler), ptr);
++		clear_bit(bit, ptr);
+ 
+ 	kunmap_local(kaddr);
+ 	unpin_user_pages_dirty_lock(&page, 1, true);
+@@ -849,6 +881,12 @@ static struct user_event_enabler
+ 	enabler->event = user;
+ 	enabler->addr = uaddr;
+ 	enabler->values = reg->enable_bit;
++
++#if BITS_PER_LONG >= 64
++	if (reg->enable_size == 4)
++		set_bit(ENABLE_VAL_32_ON_64_BIT, ENABLE_BITOPS(enabler));
++#endif
++
+ retry:
+ 	/* Prevents state changes from racing with new enablers */
+ 	mutex_lock(&event_mutex);
+@@ -2376,7 +2414,8 @@ static long user_unreg_get(struct user_unreg __user *ureg,
+ }
+ 
+ static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
+-				   unsigned long uaddr, unsigned char bit)
++				   unsigned long uaddr, unsigned char bit,
++				   unsigned long flags)
+ {
+ 	struct user_event_enabler enabler;
+ 	int result;
+@@ -2384,7 +2423,7 @@ static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
+ 
+ 	memset(&enabler, 0, sizeof(enabler));
+ 	enabler.addr = uaddr;
+-	enabler.values = bit;
++	enabler.values = bit | flags;
+ retry:
+ 	/* Prevents state changes from racing with new enablers */
+ 	mutex_lock(&event_mutex);
+@@ -2414,6 +2453,7 @@ static long user_events_ioctl_unreg(unsigned long uarg)
+ 	struct user_event_mm *mm = current->user_event_mm;
+ 	struct user_event_enabler *enabler, *next;
+ 	struct user_unreg reg;
++	unsigned long flags;
+ 	long ret;
+ 
+ 	ret = user_unreg_get(ureg, &reg);
+@@ -2424,6 +2464,7 @@ static long user_events_ioctl_unreg(unsigned long uarg)
+ 	if (!mm)
+ 		return -ENOENT;
+ 
++	flags = 0;
+ 	ret = -ENOENT;
+ 
+ 	/*
+@@ -2440,6 +2481,9 @@ static long user_events_ioctl_unreg(unsigned long uarg)
+ 		    ENABLE_BIT(enabler) == reg.disable_bit) {
+ 			set_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler));
+ 
++			/* We must keep compat flags for the clear */
++			flags |= enabler->values & ENABLE_VAL_COMPAT_MASK;
++
+ 			if (!test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)))
+ 				user_event_enabler_destroy(enabler, true);
+ 
+@@ -2453,7 +2497,7 @@ static long user_events_ioctl_unreg(unsigned long uarg)
+ 	/* Ensure bit is now cleared for user, regardless of event status */
+ 	if (!ret)
+ 		ret = user_event_mm_clear_bit(mm, reg.disable_addr,
+-					      reg.disable_bit);
++					      reg.disable_bit, flags);
+ 
+ 	return ret;
+ }
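
The user_events change handles 32-bit enable words registered from a 64-bit kernel: set_bit()/clear_bit() operate on naturally aligned longs, so an address that is not long-aligned has to be aligned down and the bit index shifted into the correct half of the word (endianness decides which half). The little-endian arithmetic in isolation (sketch; assumes 64-bit long):

#include <stdio.h>

#define ALIGN_DOWN(x, a) ((x) & ~((unsigned long)(a) - 1))

/* 32-bit enable word addressed inside a 64-bit long, little-endian case. */
static void align_addr_bit(unsigned long *addr, int *bit)
{
	if (*addr % sizeof(long) == 0)
		return;			/* already long-aligned */
	*addr = ALIGN_DOWN(*addr, sizeof(long));
	*bit += 32;			/* the word sits in the upper half */
}

int main(void)
{
	unsigned long addr = 0x1004;	/* 32-bit word at a +4 offset */
	int bit = 1;

	align_addr_bit(&addr, &bit);
	printf("addr=%#lx bit=%d\n", addr, bit);	/* 0x1000, 33 */
	return 0;
}
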
+diff --git a/mm/damon/vaddr-test.h b/mm/damon/vaddr-test.h
+index c4b455b5ee30b..dcf1ca6b31cc4 100644
+--- a/mm/damon/vaddr-test.h
++++ b/mm/damon/vaddr-test.h
+@@ -148,6 +148,8 @@ static void damon_do_test_apply_three_regions(struct kunit *test,
+ 		KUNIT_EXPECT_EQ(test, r->ar.start, expected[i * 2]);
+ 		KUNIT_EXPECT_EQ(test, r->ar.end, expected[i * 2 + 1]);
+ 	}
++
++	damon_destroy_target(t);
+ }
+ 
+ /*
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 4fe5a562d0bbc..339dd2ccc9333 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -2559,7 +2559,7 @@ static unsigned long calculate_high_delay(struct mem_cgroup *memcg,
+  * Scheduled by try_charge() to be executed from the userland return path
+  * and reclaims memory over the high limit.
+  */
+-void mem_cgroup_handle_over_high(void)
++void mem_cgroup_handle_over_high(gfp_t gfp_mask)
+ {
+ 	unsigned long penalty_jiffies;
+ 	unsigned long pflags;
+@@ -2587,7 +2587,7 @@ retry_reclaim:
+ 	 */
+ 	nr_reclaimed = reclaim_high(memcg,
+ 				    in_retry ? SWAP_CLUSTER_MAX : nr_pages,
+-				    GFP_KERNEL);
++				    gfp_mask);
+ 
+ 	/*
+ 	 * memory.high is breached and reclaim is unable to keep up. Throttle
+@@ -2823,7 +2823,7 @@ done_restock:
+ 	if (current->memcg_nr_pages_over_high > MEMCG_CHARGE_BATCH &&
+ 	    !(current->flags & PF_MEMALLOC) &&
+ 	    gfpflags_allow_blocking(gfp_mask)) {
+-		mem_cgroup_handle_over_high();
++		mem_cgroup_handle_over_high(gfp_mask);
+ 	}
+ 	return 0;
+ }
+@@ -3872,8 +3872,11 @@ static ssize_t mem_cgroup_write(struct kernfs_open_file *of,
+ 			ret = mem_cgroup_resize_max(memcg, nr_pages, true);
+ 			break;
+ 		case _KMEM:
+-			/* kmem.limit_in_bytes is deprecated. */
+-			ret = -EOPNOTSUPP;
++			pr_warn_once("kmem.limit_in_bytes is deprecated and will be removed. "
++				     "Writing any value to this file has no effect. "
++				     "Please report your usecase to linux-mm@kvack.org if you "
++				     "depend on this functionality.\n");
++			ret = 0;
+ 			break;
+ 		case _TCP:
+ 			ret = memcg_update_tcp_max(memcg, nr_pages);
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index ec2eaceffd74b..071edec3dca2a 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -426,6 +426,7 @@ struct queue_pages {
+ 	unsigned long start;
+ 	unsigned long end;
+ 	struct vm_area_struct *first;
++	bool has_unmovable;
+ };
+ 
+ /*
+@@ -446,9 +447,8 @@ static inline bool queue_folio_required(struct folio *folio,
+ /*
+  * queue_folios_pmd() has three possible return values:
+  * 0 - folios are placed on the right node or queued successfully, or
+- *     special page is met, i.e. huge zero page.
+- * 1 - there is unmovable folio, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
+- *     specified.
++ *     special page is met, i.e. zero page, or unmovable page is found
++ *     but continue walking (indicated by queue_pages.has_unmovable).
+  * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
+  *        existing folio was already on a node that does not follow the
+  *        policy.
+@@ -479,7 +479,7 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
+ 	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
+ 		if (!vma_migratable(walk->vma) ||
+ 		    migrate_folio_add(folio, qp->pagelist, flags)) {
+-			ret = 1;
++			qp->has_unmovable = true;
+ 			goto unlock;
+ 		}
+ 	} else
+@@ -495,9 +495,8 @@ unlock:
+  *
+  * queue_folios_pte_range() has three possible return values:
+  * 0 - folios are placed on the right node or queued successfully, or
+- *     special page is met, i.e. zero page.
+- * 1 - there is unmovable folio, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
+- *     specified.
++ *     special page is met, i.e. zero page, or unmovable page is found
++ *     but continue walking (indicated by queue_pages.has_unmovable).
+  * -EIO - only MPOL_MF_STRICT was specified and an existing folio was already
+  *        on a node that does not follow the policy.
+  */
+@@ -508,7 +507,6 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
+ 	struct folio *folio;
+ 	struct queue_pages *qp = walk->private;
+ 	unsigned long flags = qp->flags;
+-	bool has_unmovable = false;
+ 	pte_t *pte, *mapped_pte;
+ 	pte_t ptent;
+ 	spinlock_t *ptl;
+@@ -538,11 +536,12 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
+ 		if (!queue_folio_required(folio, qp))
+ 			continue;
+ 		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
+-			/* MPOL_MF_STRICT must be specified if we get here */
+-			if (!vma_migratable(vma)) {
+-				has_unmovable = true;
+-				break;
+-			}
++			/*
++			 * MPOL_MF_STRICT must be specified if we get here.
++			 * Continue walking vmas due to MPOL_MF_MOVE* flags.
++			 */
++			if (!vma_migratable(vma))
++				qp->has_unmovable = true;
+ 
+ 			/*
+ 			 * Do not abort immediately since there may be
+@@ -550,16 +549,13 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
+ 			 * need migrate other LRU pages.
+ 			 */
+ 			if (migrate_folio_add(folio, qp->pagelist, flags))
+-				has_unmovable = true;
++				qp->has_unmovable = true;
+ 		} else
+ 			break;
+ 	}
+ 	pte_unmap_unlock(mapped_pte, ptl);
+ 	cond_resched();
+ 
+-	if (has_unmovable)
+-		return 1;
+-
+ 	return addr != end ? -EIO : 0;
+ }
+ 
+@@ -599,7 +595,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
+ 		 * Detecting misplaced folio but allow migrating folios which
+ 		 * have been queued.
+ 		 */
+-		ret = 1;
++		qp->has_unmovable = true;
+ 		goto unlock;
+ 	}
+ 
+@@ -620,7 +616,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
+ 			 * Failed to isolate folio but allow migrating pages
+ 			 * which have been queued.
+ 			 */
+-			ret = 1;
++			qp->has_unmovable = true;
+ 	}
+ unlock:
+ 	spin_unlock(ptl);
+@@ -756,12 +752,15 @@ queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end,
+ 		.start = start,
+ 		.end = end,
+ 		.first = NULL,
++		.has_unmovable = false,
+ 	};
+ 	const struct mm_walk_ops *ops = lock_vma ?
+ 			&queue_pages_lock_vma_walk_ops : &queue_pages_walk_ops;
+ 
+ 	err = walk_page_range(mm, start, end, ops, &qp);
+ 
++	if (qp.has_unmovable)
++		err = 1;
+ 	if (!qp.first)
+ 		/* whole range in hole */
+ 		err = -EFAULT;
+@@ -1358,7 +1357,7 @@ static long do_mbind(unsigned long start, unsigned long len,
+ 				putback_movable_pages(&pagelist);
+ 		}
+ 
+-		if ((ret > 0) || (nr_failed && (flags & MPOL_MF_STRICT)))
++		if (((ret > 0) || nr_failed) && (flags & MPOL_MF_STRICT))
+ 			err = -EIO;
+ 	} else {
+ up_out:
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 7d3460c7a480b..d322bfae8f69b 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -2438,7 +2438,7 @@ void free_unref_page(struct page *page, unsigned int order)
+ 	struct per_cpu_pages *pcp;
+ 	struct zone *zone;
+ 	unsigned long pfn = page_to_pfn(page);
+-	int migratetype;
++	int migratetype, pcpmigratetype;
+ 
+ 	if (!free_unref_page_prepare(page, pfn, order))
+ 		return;
+@@ -2446,24 +2446,24 @@ void free_unref_page(struct page *page, unsigned int order)
+ 	/*
+ 	 * We only track unmovable, reclaimable and movable on pcp lists.
+ 	 * Place ISOLATE pages on the isolated list because they are being
+-	 * offlined but treat HIGHATOMIC as movable pages so we can get those
+-	 * areas back if necessary. Otherwise, we may have to free
++	 * offlined but treat HIGHATOMIC and CMA as movable pages so we can
++	 * get those areas back if necessary. Otherwise, we may have to free
+ 	 * excessively into the page allocator
+ 	 */
+-	migratetype = get_pcppage_migratetype(page);
++	migratetype = pcpmigratetype = get_pcppage_migratetype(page);
+ 	if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
+ 		if (unlikely(is_migrate_isolate(migratetype))) {
+ 			free_one_page(page_zone(page), page, pfn, order, migratetype, FPI_NONE);
+ 			return;
+ 		}
+-		migratetype = MIGRATE_MOVABLE;
++		pcpmigratetype = MIGRATE_MOVABLE;
+ 	}
+ 
+ 	zone = page_zone(page);
+ 	pcp_trylock_prepare(UP_flags);
+ 	pcp = pcp_spin_trylock(zone->per_cpu_pageset);
+ 	if (pcp) {
+-		free_unref_page_commit(zone, pcp, page, migratetype, order);
++		free_unref_page_commit(zone, pcp, page, pcpmigratetype, order);
+ 		pcp_spin_unlock(pcp);
+ 	} else {
+ 		free_one_page(zone, page, pfn, order, migratetype, FPI_NONE);
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index d1555ea2981ac..5658da50a2d07 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -479,7 +479,7 @@ void slab_kmem_cache_release(struct kmem_cache *s)
+ 
+ void kmem_cache_destroy(struct kmem_cache *s)
+ {
+-	int refcnt;
++	int err = -EBUSY;
+ 	bool rcu_set;
+ 
+ 	if (unlikely(!s) || !kasan_check_byte(s))
+@@ -490,17 +490,17 @@ void kmem_cache_destroy(struct kmem_cache *s)
+ 
+ 	rcu_set = s->flags & SLAB_TYPESAFE_BY_RCU;
+ 
+-	refcnt = --s->refcount;
+-	if (refcnt)
++	s->refcount--;
++	if (s->refcount)
+ 		goto out_unlock;
+ 
+-	WARN(shutdown_cache(s),
+-	     "%s %s: Slab cache still has objects when called from %pS",
++	err = shutdown_cache(s);
++	WARN(err, "%s %s: Slab cache still has objects when called from %pS",
+ 	     __func__, s->name, (void *)_RET_IP_);
+ out_unlock:
+ 	mutex_unlock(&slab_mutex);
+ 	cpus_read_unlock();
+-	if (!refcnt && !rcu_set)
++	if (!err && !rcu_set)
+ 		kmem_cache_release(s);
+ }
+ EXPORT_SYMBOL(kmem_cache_destroy);
+diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c
+index 6116eba1bd891..bb1ab53e54e03 100644
+--- a/net/bridge/br_forward.c
++++ b/net/bridge/br_forward.c
+@@ -124,7 +124,7 @@ static int deliver_clone(const struct net_bridge_port *prev,
+ 
+ 	skb = skb_clone(skb, GFP_ATOMIC);
+ 	if (!skb) {
+-		dev->stats.tx_dropped++;
++		DEV_STATS_INC(dev, tx_dropped);
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -267,7 +267,7 @@ static void maybe_deliver_addr(struct net_bridge_port *p, struct sk_buff *skb,
+ 
+ 	skb = skb_copy(skb, GFP_ATOMIC);
+ 	if (!skb) {
+-		dev->stats.tx_dropped++;
++		DEV_STATS_INC(dev, tx_dropped);
+ 		return;
+ 	}
+ 
+diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c
+index c34a0b0901b07..c729528b5e85f 100644
+--- a/net/bridge/br_input.c
++++ b/net/bridge/br_input.c
+@@ -181,12 +181,12 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
+ 			if ((mdst && mdst->host_joined) ||
+ 			    br_multicast_is_router(brmctx, skb)) {
+ 				local_rcv = true;
+-				br->dev->stats.multicast++;
++				DEV_STATS_INC(br->dev, multicast);
+ 			}
+ 			mcast_hit = true;
+ 		} else {
+ 			local_rcv = true;
+-			br->dev->stats.multicast++;
++			DEV_STATS_INC(br->dev, multicast);
+ 		}
+ 		break;
+ 	case BR_PKT_UNICAST:
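
In both bridge files the counters are bumped without any lock, so the plain `++` is a racy read-modify-write that can lose increments under concurrent drops; DEV_STATS_INC() routes the update through atomic accessors instead. The contrast in C11 terms (userspace sketch, hypothetical names):

#include <stdatomic.h>
#include <stdio.h>

struct stats {
	unsigned long tx_dropped_plain;		/* racy: concurrent ++ can be lost */
	atomic_ulong  tx_dropped_atomic;	/* safe under concurrency */
};

#define STATS_INC(s, f) \
	atomic_fetch_add_explicit(&(s)->f, 1, memory_order_relaxed)

int main(void)
{
	struct stats st = { 0 };

	st.tx_dropped_plain++;			/* fine single-threaded only */
	STATS_INC(&st, tx_dropped_atomic);
	printf("%lu %lu\n", st.tx_dropped_plain,
	       atomic_load(&st.tx_dropped_atomic));
	return 0;
}
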
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 6bed3992df814..aac954d1f757d 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1402,7 +1402,7 @@ proto_again:
+ 			break;
+ 		}
+ 
+-		nhoff += ntohs(hdr->message_length);
++		nhoff += sizeof(struct ptp_header);
+ 		fdret = FLOW_DISSECT_RET_OUT_GOOD;
+ 		break;
+ 	}
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index a5361fb7a415b..fa14eef8f0688 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -255,13 +255,8 @@ static int dccp_v4_err(struct sk_buff *skb, u32 info)
+ 	int err;
+ 	struct net *net = dev_net(skb->dev);
+ 
+-	/* For the first __dccp_basic_hdr_len() check, we only need dh->dccph_x,
+-	 * which is in byte 7 of the dccp header.
+-	 * Our caller (icmp_socket_deliver()) already pulled 8 bytes for us.
+-	 *
+-	 * Later on, we want to access the sequence number fields, which are
+-	 * beyond 8 bytes, so we have to pskb_may_pull() ourselves.
+-	 */
++	if (!pskb_may_pull(skb, offset + sizeof(*dh)))
++		return -EINVAL;
+ 	dh = (struct dccp_hdr *)(skb->data + offset);
+ 	if (!pskb_may_pull(skb, offset + __dccp_basic_hdr_len(dh)))
+ 		return -EINVAL;
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index 33f6ccf6ba77b..c693a570682fb 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -83,13 +83,8 @@ static int dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 	__u64 seq;
+ 	struct net *net = dev_net(skb->dev);
+ 
+-	/* For the first __dccp_basic_hdr_len() check, we only need dh->dccph_x,
+-	 * which is in byte 7 of the dccp header.
+-	 * Our caller (icmpv6_notify()) already pulled 8 bytes for us.
+-	 *
+-	 * Later on, we want to access the sequence number fields, which are
+-	 * beyond 8 bytes, so we have to pskb_may_pull() ourselves.
+-	 */
++	if (!pskb_may_pull(skb, offset + sizeof(*dh)))
++		return -EINVAL;
+ 	dh = (struct dccp_hdr *)(skb->data + offset);
+ 	if (!pskb_may_pull(skb, offset + __dccp_basic_hdr_len(dh)))
+ 		return -EINVAL;
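
Both DCCP error handlers relied on the deleted comment's assumption that the ICMP caller had already pulled enough header bytes, but sizeof(*dh) exceeds the 8 bytes guaranteed there, so fields of dh could be read beyond the linear area. The explicit pskb_may_pull() makes the header's presence a local invariant before anything dereferences it, the same validate-before-read rule the flow-dissector PTP hunk above applies to packet-supplied lengths. In miniature (userspace sketch):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct hdr {
	uint8_t type;
	uint8_t len;	/* attacker-controlled */
};

/* Only read fields after proving they fit in the buffer. */
static int parse(const uint8_t *buf, size_t buflen)
{
	struct hdr h;

	if (buflen < sizeof(h))
		return -1;		/* pskb_may_pull() equivalent */
	memcpy(&h, buf, sizeof(h));

	/* advance by our own sizeof, never by the packet's h.len */
	printf("type=%u (claimed len %u ignored)\n",
	       (unsigned)h.type, (unsigned)h.len);
	return (int)sizeof(h);
}

int main(void)
{
	uint8_t pkt[2] = { 7, 200 };

	printf("advanced %d bytes\n", parse(pkt, sizeof(pkt)));
	return 0;
}
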
+diff --git a/net/handshake/handshake-test.c b/net/handshake/handshake-test.c
+index 6d37bab35c8fc..16ed7bfd29e4f 100644
+--- a/net/handshake/handshake-test.c
++++ b/net/handshake/handshake-test.c
+@@ -235,7 +235,7 @@ static void handshake_req_submit_test4(struct kunit *test)
+ 	KUNIT_EXPECT_PTR_EQ(test, req, result);
+ 
+ 	handshake_req_cancel(sock->sk);
+-	sock_release(sock);
++	fput(filp);
+ }
+ 
+ static void handshake_req_submit_test5(struct kunit *test)
+@@ -272,7 +272,7 @@ static void handshake_req_submit_test5(struct kunit *test)
+ 	/* Assert */
+ 	KUNIT_EXPECT_EQ(test, err, -EAGAIN);
+ 
+-	sock_release(sock);
++	fput(filp);
+ 	hn->hn_pending = saved;
+ }
+ 
+@@ -306,7 +306,7 @@ static void handshake_req_submit_test6(struct kunit *test)
+ 	KUNIT_EXPECT_EQ(test, err, -EBUSY);
+ 
+ 	handshake_req_cancel(sock->sk);
+-	sock_release(sock);
++	fput(filp);
+ }
+ 
+ static void handshake_req_cancel_test1(struct kunit *test)
+@@ -340,7 +340,7 @@ static void handshake_req_cancel_test1(struct kunit *test)
+ 	/* Assert */
+ 	KUNIT_EXPECT_TRUE(test, result);
+ 
+-	sock_release(sock);
++	fput(filp);
+ }
+ 
+ static void handshake_req_cancel_test2(struct kunit *test)
+@@ -382,7 +382,7 @@ static void handshake_req_cancel_test2(struct kunit *test)
+ 	/* Assert */
+ 	KUNIT_EXPECT_TRUE(test, result);
+ 
+-	sock_release(sock);
++	fput(filp);
+ }
+ 
+ static void handshake_req_cancel_test3(struct kunit *test)
+@@ -427,7 +427,7 @@ static void handshake_req_cancel_test3(struct kunit *test)
+ 	/* Assert */
+ 	KUNIT_EXPECT_FALSE(test, result);
+ 
+-	sock_release(sock);
++	fput(filp);
+ }
+ 
+ static struct handshake_req *handshake_req_destroy_test;
+@@ -471,7 +471,7 @@ static void handshake_req_destroy_test1(struct kunit *test)
+ 	handshake_req_cancel(sock->sk);
+ 
+ 	/* Act */
+-	sock_release(sock);
++	fput(filp);
+ 
+ 	/* Assert */
+ 	KUNIT_EXPECT_PTR_EQ(test, handshake_req_destroy_test, req);
+diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
+index b77f1189d19d1..6d14d935ee828 100644
+--- a/net/hsr/hsr_framereg.c
++++ b/net/hsr/hsr_framereg.c
+@@ -288,13 +288,13 @@ void hsr_handle_sup_frame(struct hsr_frame_info *frame)
+ 
+ 	/* And leave the HSR tag. */
+ 	if (ethhdr->h_proto == htons(ETH_P_HSR)) {
+-		pull_size = sizeof(struct ethhdr);
++		pull_size = sizeof(struct hsr_tag);
+ 		skb_pull(skb, pull_size);
+ 		total_pull_size += pull_size;
+ 	}
+ 
+ 	/* And leave the HSR sup tag. */
+-	pull_size = sizeof(struct hsr_tag);
++	pull_size = sizeof(struct hsr_sup_tag);
+ 	skb_pull(skb, pull_size);
+ 	total_pull_size += pull_size;
+ 
+diff --git a/net/hsr/hsr_main.h b/net/hsr/hsr_main.h
+index 6851e33df7d14..18e01791ad799 100644
+--- a/net/hsr/hsr_main.h
++++ b/net/hsr/hsr_main.h
+@@ -83,7 +83,7 @@ struct hsr_vlan_ethhdr {
+ struct hsr_sup_tlv {
+ 	u8		HSR_TLV_type;
+ 	u8		HSR_TLV_length;
+-};
++} __packed;
+ 
+ /* HSR/PRP Supervision Frame data types.
+  * Field names as defined in the IEC:2010 standard for HSR.
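
hsr_sup_tlv is overlaid directly on frame data, and the other wire-format structs around it are declared __packed so the compiler can neither insert padding nor assume natural alignment; this hunk closes the one exception. What __packed buys for on-the-wire layouts, in general (sketch; exact sizes are ABI-dependent):

#include <stdint.h>
#include <stdio.h>

struct wire_plain {		/* compiler may pad to align 'seq' */
	uint8_t  type;
	uint16_t seq;
};

struct wire_packed {		/* exact on-the-wire layout */
	uint8_t  type;
	uint16_t seq;
} __attribute__((__packed__));

int main(void)
{
	printf("plain=%zu packed=%zu\n",
	       sizeof(struct wire_plain), sizeof(struct wire_packed));
	/* typically prints: plain=4 packed=3 */
	return 0;
}
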
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 33626619aee79..0a53ca6ebb0d5 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1213,6 +1213,7 @@ EXPORT_INDIRECT_CALLABLE(ipv4_dst_check);
+ 
+ static void ipv4_send_dest_unreach(struct sk_buff *skb)
+ {
++	struct net_device *dev;
+ 	struct ip_options opt;
+ 	int res;
+ 
+@@ -1230,7 +1231,8 @@ static void ipv4_send_dest_unreach(struct sk_buff *skb)
+ 		opt.optlen = ip_hdr(skb)->ihl * 4 - sizeof(struct iphdr);
+ 
+ 		rcu_read_lock();
+-		res = __ip_options_compile(dev_net(skb->dev), &opt, skb, NULL);
++		dev = skb->dev ? skb->dev : skb_rtable(skb)->dst.dev;
++		res = __ip_options_compile(dev_net(dev), &opt, skb, NULL);
+ 		rcu_read_unlock();
+ 
+ 		if (res)
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index c254accb14dee..cd15ec73073e0 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -1269,12 +1269,13 @@ static void mptcp_set_rwin(struct tcp_sock *tp, struct tcphdr *th)
+ 
+ 			if (rcv_wnd == rcv_wnd_old)
+ 				break;
+-			if (before64(rcv_wnd_new, rcv_wnd)) {
++
++			rcv_wnd_old = rcv_wnd;
++			if (before64(rcv_wnd_new, rcv_wnd_old)) {
+ 				MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_RCVWNDCONFLICTUPDATE);
+ 				goto raise_win;
+ 			}
+ 			MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_RCVWNDCONFLICT);
+-			rcv_wnd_old = rcv_wnd;
+ 		}
+ 		return;
+ 	}
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 40258d9f8c799..6947b4b2519c9 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -772,6 +772,46 @@ static bool __mptcp_ofo_queue(struct mptcp_sock *msk)
+ 	return moved;
+ }
+ 
++static bool __mptcp_subflow_error_report(struct sock *sk, struct sock *ssk)
++{
++	int err = sock_error(ssk);
++	int ssk_state;
++
++	if (!err)
++		return false;
++
++	/* only propagate errors on fallen-back sockets or
++	 * on MPC connect
++	 */
++	if (sk->sk_state != TCP_SYN_SENT && !__mptcp_check_fallback(mptcp_sk(sk)))
++		return false;
++
++	/* We need to propagate only transition to CLOSE state.
++	 * Orphaned socket will see such state change via
++	 * subflow_sched_work_if_closed() and that path will properly
++	 * destroy the msk as needed.
++	 */
++	ssk_state = inet_sk_state_load(ssk);
++	if (ssk_state == TCP_CLOSE && !sock_flag(sk, SOCK_DEAD))
++		inet_sk_state_store(sk, ssk_state);
++	WRITE_ONCE(sk->sk_err, -err);
++
++	/* This barrier is coupled with smp_rmb() in mptcp_poll() */
++	smp_wmb();
++	sk_error_report(sk);
++	return true;
++}
++
++void __mptcp_error_report(struct sock *sk)
++{
++	struct mptcp_subflow_context *subflow;
++	struct mptcp_sock *msk = mptcp_sk(sk);
++
++	mptcp_for_each_subflow(msk, subflow)
++		if (__mptcp_subflow_error_report(sk, mptcp_subflow_tcp_sock(subflow)))
++			break;
++}
++
+ /* In most cases we will be able to lock the mptcp socket.  If its already
+  * owned, we need to defer to the work queue to avoid ABBA deadlock.
+  */
+@@ -2381,6 +2421,7 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
+ 	}
+ 
+ out_release:
++	__mptcp_subflow_error_report(sk, ssk);
+ 	release_sock(ssk);
+ 
+ 	sock_put(ssk);
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 94ae7dd01c65e..c7bd99b8e7b7a 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -1362,42 +1362,6 @@ void mptcp_space(const struct sock *ssk, int *space, int *full_space)
+ 	*full_space = tcp_full_space(sk);
+ }
+ 
+-void __mptcp_error_report(struct sock *sk)
+-{
+-	struct mptcp_subflow_context *subflow;
+-	struct mptcp_sock *msk = mptcp_sk(sk);
+-
+-	mptcp_for_each_subflow(msk, subflow) {
+-		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+-		int err = sock_error(ssk);
+-		int ssk_state;
+-
+-		if (!err)
+-			continue;
+-
+-		/* only propagate errors on fallen-back sockets or
+-		 * on MPC connect
+-		 */
+-		if (sk->sk_state != TCP_SYN_SENT && !__mptcp_check_fallback(msk))
+-			continue;
+-
+-		/* We need to propagate only transition to CLOSE state.
+-		 * Orphaned socket will see such state change via
+-		 * subflow_sched_work_if_closed() and that path will properly
+-		 * destroy the msk as needed.
+-		 */
+-		ssk_state = inet_sk_state_load(ssk);
+-		if (ssk_state == TCP_CLOSE && !sock_flag(sk, SOCK_DEAD))
+-			inet_sk_state_store(sk, ssk_state);
+-		WRITE_ONCE(sk->sk_err, -err);
+-
+-		/* This barrier is coupled with smp_rmb() in mptcp_poll() */
+-		smp_wmb();
+-		sk_error_report(sk);
+-		break;
+-	}
+-}
+-
+ static void subflow_error_report(struct sock *ssk)
+ {
+ 	struct sock *sk = mptcp_subflow_ctx(ssk)->conn;
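
The two MPTCP hunks above factor error propagation into a per-subflow helper so that __mptcp_close_ssk() can flush a pending subflow error before releasing the socket, rather than reporting only from the full subflow-list walk. A standalone sketch of the propagate-once pattern with userspace stand-ins (the real helper also gates on TCP_SYN_SENT/fallback state, omitted here):

#include <stdio.h>
#include <stdatomic.h>
#include <stdbool.h>

struct subflow { int err; };

struct parent {
	atomic_int sk_err;
	struct subflow *flows;
	int nr_flows;
};

static bool propagate_error(struct parent *p, struct subflow *sf)
{
	if (!sf->err)
		return false;
	/* Publish the error before waking readers (models WRITE_ONCE + smp_wmb). */
	atomic_store_explicit(&p->sk_err, sf->err, memory_order_release);
	printf("reported error %d\n", sf->err);
	return true;
}

static void error_report(struct parent *p)
{
	for (int i = 0; i < p->nr_flows; i++)
		if (propagate_error(p, &p->flows[i]))
			break;	/* first pending error wins, as in the patch */
}

int main(void)
{
	struct subflow flows[] = { { 0 }, { 104 /* ECONNRESET */ }, { 110 } };
	struct parent p = { .flows = flows, .nr_flows = 3 };

	error_report(&p);
	return 0;
}
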
+diff --git a/net/ncsi/ncsi-aen.c b/net/ncsi/ncsi-aen.c
+index 62fb1031763d1..f8854bff286cb 100644
+--- a/net/ncsi/ncsi-aen.c
++++ b/net/ncsi/ncsi-aen.c
+@@ -89,6 +89,11 @@ static int ncsi_aen_handler_lsc(struct ncsi_dev_priv *ndp,
+ 	if ((had_link == has_link) || chained)
+ 		return 0;
+ 
++	if (had_link)
++		netif_carrier_off(ndp->ndev.dev);
++	else
++		netif_carrier_on(ndp->ndev.dev);
++
+ 	if (!ndp->multi_package && !nc->package->multi_channel) {
+ 		if (had_link) {
+ 			ndp->flags |= NCSI_DEV_RESHUFFLE;
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index 0b68e2e2824e1..58608460cf6df 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -682,6 +682,14 @@ __ip_set_put(struct ip_set *set)
+ /* set->ref can be swapped out by ip_set_swap, netlink events (like dump) need
+  * a separate reference counter
+  */
++static void
++__ip_set_get_netlink(struct ip_set *set)
++{
++	write_lock_bh(&ip_set_ref_lock);
++	set->ref_netlink++;
++	write_unlock_bh(&ip_set_ref_lock);
++}
++
+ static void
+ __ip_set_put_netlink(struct ip_set *set)
+ {
+@@ -1693,11 +1701,11 @@ call_ad(struct net *net, struct sock *ctnl, struct sk_buff *skb,
+ 
+ 	do {
+ 		if (retried) {
+-			__ip_set_get(set);
++			__ip_set_get_netlink(set);
+ 			nfnl_unlock(NFNL_SUBSYS_IPSET);
+ 			cond_resched();
+ 			nfnl_lock(NFNL_SUBSYS_IPSET);
+-			__ip_set_put(set);
++			__ip_set_put_netlink(set);
+ 		}
+ 
+ 		ip_set_lock(set);
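
The ipset hunk above switches the retry path from the ordinary reference counter to the dedicated ref_netlink counter, so a transient hold taken while the nfnetlink mutex is dropped cannot be confused with the set->ref uses that ip_set_swap() relies on. A standalone model of the split-counter idea (userspace locking, illustrative names):

#include <pthread.h>
#include <stdio.h>

struct set {
	pthread_rwlock_t lock;
	unsigned int ref;          /* checked/swapped by swap and destroy paths */
	unsigned int ref_netlink;  /* transient dump/retry holds */
};

static void get_netlink(struct set *s)
{
	pthread_rwlock_wrlock(&s->lock);
	s->ref_netlink++;
	pthread_rwlock_unlock(&s->lock);
}

static void put_netlink(struct set *s)
{
	pthread_rwlock_wrlock(&s->lock);
	s->ref_netlink--;
	pthread_rwlock_unlock(&s->lock);
}

int main(void)
{
	struct set s = { .lock = PTHREAD_RWLOCK_INITIALIZER };

	get_netlink(&s);
	/* ... drop the subsystem mutex, reschedule, re-take the mutex ... */
	put_netlink(&s);
	printf("ref=%u ref_netlink=%u\n", s.ref, s.ref_netlink);
	return 0;
}
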
+diff --git a/net/netfilter/nf_conntrack_bpf.c b/net/netfilter/nf_conntrack_bpf.c
+index 0d36d7285e3f0..747dc22655018 100644
+--- a/net/netfilter/nf_conntrack_bpf.c
++++ b/net/netfilter/nf_conntrack_bpf.c
+@@ -380,6 +380,8 @@ __bpf_kfunc struct nf_conn *bpf_ct_insert_entry(struct nf_conn___init *nfct_i)
+ 	struct nf_conn *nfct = (struct nf_conn *)nfct_i;
+ 	int err;
+ 
++	if (!nf_ct_is_confirmed(nfct))
++		nfct->timeout += nfct_time_stamp;
+ 	nfct->status |= IPS_CONFIRMED;
+ 	err = nf_conntrack_hash_check_insert(nfct);
+ 	if (err < 0) {
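
The conntrack hunk above converts a template entry's relative timeout into an absolute deadline exactly once, before the entry is confirmed; a raw relative value would otherwise look like an already-expired absolute time. A standalone model (stand-in types and clock):

#include <stdbool.h>
#include <stdio.h>

static unsigned int time_stamp = 1000;	/* models nfct_time_stamp (jiffies) */

struct conn {
	unsigned int timeout;	/* relative before confirm, absolute after */
	bool confirmed;
};

static void insert_entry(struct conn *c)
{
	if (!c->confirmed)
		c->timeout += time_stamp;	/* relative -> absolute, once */
	c->confirmed = true;
}

int main(void)
{
	struct conn c = { .timeout = 30, .confirmed = false };

	insert_entry(&c);
	printf("absolute deadline: %u\n", c.timeout);	/* 1030 */
	return 0;
}
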
+diff --git a/net/netfilter/nf_conntrack_extend.c b/net/netfilter/nf_conntrack_extend.c
+index 0b513f7bf9f39..dd62cc12e7750 100644
+--- a/net/netfilter/nf_conntrack_extend.c
++++ b/net/netfilter/nf_conntrack_extend.c
+@@ -40,10 +40,10 @@ static const u8 nf_ct_ext_type_len[NF_CT_EXT_NUM] = {
+ 	[NF_CT_EXT_ECACHE] = sizeof(struct nf_conntrack_ecache),
+ #endif
+ #ifdef CONFIG_NF_CONNTRACK_TIMESTAMP
+-	[NF_CT_EXT_TSTAMP] = sizeof(struct nf_conn_acct),
++	[NF_CT_EXT_TSTAMP] = sizeof(struct nf_conn_tstamp),
+ #endif
+ #ifdef CONFIG_NF_CONNTRACK_TIMEOUT
+-	[NF_CT_EXT_TIMEOUT] = sizeof(struct nf_conn_tstamp),
++	[NF_CT_EXT_TIMEOUT] = sizeof(struct nf_conn_timeout),
+ #endif
+ #ifdef CONFIG_NF_CONNTRACK_LABELS
+ 	[NF_CT_EXT_LABELS] = sizeof(struct nf_conn_labels),
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index a72934f00804e..976a9b763b9bb 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1219,6 +1219,10 @@ static int nf_tables_updtable(struct nft_ctx *ctx)
+ 	     flags & NFT_TABLE_F_OWNER))
+ 		return -EOPNOTSUPP;
+ 
++	/* No dormant off/on/off/on games in single transaction */
++	if (ctx->table->flags & __NFT_TABLE_F_UPDATE)
++		return -EINVAL;
++
+ 	trans = nft_trans_alloc(ctx, NFT_MSG_NEWTABLE,
+ 				sizeof(struct nft_trans_table));
+ 	if (trans == NULL)
+@@ -1432,7 +1436,7 @@ static int nft_flush_table(struct nft_ctx *ctx)
+ 		if (!nft_is_active_next(ctx->net, chain))
+ 			continue;
+ 
+-		if (nft_chain_is_bound(chain))
++		if (nft_chain_binding(chain))
+ 			continue;
+ 
+ 		ctx->chain = chain;
+@@ -1446,8 +1450,7 @@ static int nft_flush_table(struct nft_ctx *ctx)
+ 		if (!nft_is_active_next(ctx->net, set))
+ 			continue;
+ 
+-		if (nft_set_is_anonymous(set) &&
+-		    !list_empty(&set->bindings))
++		if (nft_set_is_anonymous(set))
+ 			continue;
+ 
+ 		err = nft_delset(ctx, set);
+@@ -1477,7 +1480,7 @@ static int nft_flush_table(struct nft_ctx *ctx)
+ 		if (!nft_is_active_next(ctx->net, chain))
+ 			continue;
+ 
+-		if (nft_chain_is_bound(chain))
++		if (nft_chain_binding(chain))
+ 			continue;
+ 
+ 		ctx->chain = chain;
+@@ -2910,6 +2913,9 @@ static int nf_tables_delchain(struct sk_buff *skb, const struct nfnl_info *info,
+ 		return PTR_ERR(chain);
+ 	}
+ 
++	if (nft_chain_binding(chain))
++		return -EOPNOTSUPP;
++
+ 	nft_ctx_init(&ctx, net, skb, info->nlh, family, table, chain, nla);
+ 
+ 	if (nla[NFTA_CHAIN_HOOK]) {
+@@ -3449,6 +3455,8 @@ static int __nf_tables_dump_rules(struct sk_buff *skb,
+ 	struct net *net = sock_net(skb->sk);
+ 	const struct nft_rule *rule, *prule;
+ 	unsigned int s_idx = cb->args[0];
++	unsigned int entries = 0;
++	int ret = 0;
+ 	u64 handle;
+ 
+ 	prule = NULL;
+@@ -3471,9 +3479,11 @@ static int __nf_tables_dump_rules(struct sk_buff *skb,
+ 					NFT_MSG_NEWRULE,
+ 					NLM_F_MULTI | NLM_F_APPEND,
+ 					table->family,
+-					table, chain, rule, handle, reset) < 0)
+-			return 1;
+-
++					table, chain, rule, handle, reset) < 0) {
++			ret = 1;
++			break;
++		}
++		entries++;
+ 		nl_dump_check_consistent(cb, nlmsg_hdr(skb));
+ cont:
+ 		prule = rule;
+@@ -3481,10 +3491,10 @@ cont_skip:
+ 		(*idx)++;
+ 	}
+ 
+-	if (reset && *idx)
+-		audit_log_rule_reset(table, cb->seq, *idx);
++	if (reset && entries)
++		audit_log_rule_reset(table, cb->seq, entries);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int nf_tables_dump_rules(struct sk_buff *skb,
+@@ -3968,6 +3978,11 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 	}
+ 
+ 	if (info->nlh->nlmsg_flags & NLM_F_REPLACE) {
++		if (nft_chain_binding(chain)) {
++			err = -EOPNOTSUPP;
++			goto err_destroy_flow_rule;
++		}
++
+ 		err = nft_delrule(&ctx, old_rule);
+ 		if (err < 0)
+ 			goto err_destroy_flow_rule;
+@@ -4075,7 +4090,7 @@ static int nf_tables_delrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 			NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN]);
+ 			return PTR_ERR(chain);
+ 		}
+-		if (nft_chain_is_bound(chain))
++		if (nft_chain_binding(chain))
+ 			return -EOPNOTSUPP;
+ 	}
+ 
+@@ -4109,7 +4124,7 @@ static int nf_tables_delrule(struct sk_buff *skb, const struct nfnl_info *info,
+ 		list_for_each_entry(chain, &table->chains, list) {
+ 			if (!nft_is_active_next(net, chain))
+ 				continue;
+-			if (nft_chain_is_bound(chain))
++			if (nft_chain_binding(chain))
+ 				continue;
+ 
+ 			ctx.chain = chain;
+@@ -7180,8 +7195,10 @@ static int nf_tables_delsetelem(struct sk_buff *skb,
+ 	if (IS_ERR(set))
+ 		return PTR_ERR(set);
+ 
+-	if (!list_empty(&set->bindings) &&
+-	    (set->flags & (NFT_SET_CONSTANT | NFT_SET_ANONYMOUS)))
++	if (nft_set_is_anonymous(set))
++		return -EOPNOTSUPP;
++
++	if (!list_empty(&set->bindings) && (set->flags & NFT_SET_CONSTANT))
+ 		return -EBUSY;
+ 
+ 	nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
+@@ -9559,12 +9576,15 @@ static int nft_trans_gc_space(struct nft_trans_gc *trans)
+ struct nft_trans_gc *nft_trans_gc_queue_async(struct nft_trans_gc *gc,
+ 					      unsigned int gc_seq, gfp_t gfp)
+ {
++	struct nft_set *set;
++
+ 	if (nft_trans_gc_space(gc))
+ 		return gc;
+ 
++	set = gc->set;
+ 	nft_trans_gc_queue_work(gc);
+ 
+-	return nft_trans_gc_alloc(gc->set, gc_seq, gfp);
++	return nft_trans_gc_alloc(set, gc_seq, gfp);
+ }
+ 
+ void nft_trans_gc_queue_async_done(struct nft_trans_gc *trans)
+@@ -9579,15 +9599,18 @@ void nft_trans_gc_queue_async_done(struct nft_trans_gc *trans)
+ 
+ struct nft_trans_gc *nft_trans_gc_queue_sync(struct nft_trans_gc *gc, gfp_t gfp)
+ {
++	struct nft_set *set;
++
+ 	if (WARN_ON_ONCE(!lockdep_commit_lock_is_held(gc->net)))
+ 		return NULL;
+ 
+ 	if (nft_trans_gc_space(gc))
+ 		return gc;
+ 
++	set = gc->set;
+ 	call_rcu(&gc->rcu, nft_trans_gc_trans_free);
+ 
+-	return nft_trans_gc_alloc(gc->set, 0, gfp);
++	return nft_trans_gc_alloc(set, 0, gfp);
+ }
+ 
+ void nft_trans_gc_queue_sync_done(struct nft_trans_gc *trans)
+@@ -9602,8 +9625,9 @@ void nft_trans_gc_queue_sync_done(struct nft_trans_gc *trans)
+ 	call_rcu(&trans->rcu, nft_trans_gc_trans_free);
+ }
+ 
+-struct nft_trans_gc *nft_trans_gc_catchall(struct nft_trans_gc *gc,
+-					   unsigned int gc_seq)
++static struct nft_trans_gc *nft_trans_gc_catchall(struct nft_trans_gc *gc,
++						  unsigned int gc_seq,
++						  bool sync)
+ {
+ 	struct nft_set_elem_catchall *catchall;
+ 	const struct nft_set *set = gc->set;
+@@ -9619,7 +9643,11 @@ struct nft_trans_gc *nft_trans_gc_catchall(struct nft_trans_gc *gc,
+ 
+ 		nft_set_elem_dead(ext);
+ dead_elem:
+-		gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC);
++		if (sync)
++			gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
++		else
++			gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC);
++
+ 		if (!gc)
+ 			return NULL;
+ 
+@@ -9629,6 +9657,17 @@ dead_elem:
+ 	return gc;
+ }
+ 
++struct nft_trans_gc *nft_trans_gc_catchall_async(struct nft_trans_gc *gc,
++						 unsigned int gc_seq)
++{
++	return nft_trans_gc_catchall(gc, gc_seq, false);
++}
++
++struct nft_trans_gc *nft_trans_gc_catchall_sync(struct nft_trans_gc *gc)
++{
++	return nft_trans_gc_catchall(gc, 0, true);
++}
++
+ static void nf_tables_module_autoload_cleanup(struct net *net)
+ {
+ 	struct nftables_pernet *nft_net = nft_pernet(net);
+@@ -11048,7 +11087,7 @@ static void __nft_release_table(struct net *net, struct nft_table *table)
+ 	ctx.family = table->family;
+ 	ctx.table = table;
+ 	list_for_each_entry(chain, &table->chains, list) {
+-		if (nft_chain_is_bound(chain))
++		if (nft_chain_binding(chain))
+ 			continue;
+ 
+ 		ctx.chain = chain;
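
Both nft_trans_gc_queue_async() and nft_trans_gc_queue_sync() above now copy gc->set into a local before handing the batch to the worker or to call_rcu(), because the batch may already be freed by the time the replacement allocation needs the set pointer. A standalone model of that save-before-handoff pattern (illustrative types only):

#include <stdio.h>
#include <stdlib.h>

struct set { const char *name; };
struct gc_batch { struct set *set; };

static struct gc_batch *alloc_batch(struct set *set)
{
	struct gc_batch *gc = malloc(sizeof(*gc));

	if (gc)
		gc->set = set;
	return gc;
}

static void queue_work(struct gc_batch *gc)
{
	free(gc);	/* models ownership passing to the worker */
}

static struct gc_batch *requeue(struct gc_batch *gc)
{
	struct set *set = gc->set;	/* save before ownership is lost */

	queue_work(gc);			/* gc must not be touched after this */
	return alloc_batch(set);	/* safe: uses the saved pointer */
}

int main(void)
{
	struct set s = { .name = "example" };
	struct gc_batch *gc = alloc_batch(&s);

	if (!gc)
		return 1;
	gc = requeue(gc);
	if (gc) {
		printf("fresh batch for set %s\n", gc->set->name);
		free(gc);
	}
	return 0;
}
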
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index 524763659f251..2013de934cef0 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -338,12 +338,9 @@ static void nft_rhash_gc(struct work_struct *work)
+ 
+ 	while ((he = rhashtable_walk_next(&hti))) {
+ 		if (IS_ERR(he)) {
+-			if (PTR_ERR(he) != -EAGAIN) {
+-				nft_trans_gc_destroy(gc);
+-				gc = NULL;
+-				goto try_later;
+-			}
+-			continue;
++			nft_trans_gc_destroy(gc);
++			gc = NULL;
++			goto try_later;
+ 		}
+ 
+ 		/* Ruleset has been updated, try later. */
+@@ -372,7 +369,7 @@ dead_elem:
+ 		nft_trans_gc_elem_add(gc, he);
+ 	}
+ 
+-	gc = nft_trans_gc_catchall(gc, gc_seq);
++	gc = nft_trans_gc_catchall_async(gc, gc_seq);
+ 
+ try_later:
+ 	/* catchall list iteration requires rcu read side lock. */
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index 6af9c9ed4b5c3..c0dcc40de358f 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1596,7 +1596,7 @@ static void pipapo_gc(const struct nft_set *_set, struct nft_pipapo_match *m)
+ 
+ 			gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
+ 			if (!gc)
+-				break;
++				return;
+ 
+ 			nft_pipapo_gc_deactivate(net, set, e);
+ 			pipapo_drop(m, rulemap);
+@@ -1610,7 +1610,7 @@ static void pipapo_gc(const struct nft_set *_set, struct nft_pipapo_match *m)
+ 		}
+ 	}
+ 
+-	gc = nft_trans_gc_catchall(gc, 0);
++	gc = nft_trans_gc_catchall_sync(gc);
+ 	if (gc) {
+ 		nft_trans_gc_queue_sync_done(gc);
+ 		priv->last_gc = jiffies;
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index f250b5399344a..487572dcd6144 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -622,8 +622,7 @@ static void nft_rbtree_gc(struct work_struct *work)
+ 	if (!gc)
+ 		goto done;
+ 
+-	write_lock_bh(&priv->lock);
+-	write_seqcount_begin(&priv->count);
++	read_lock_bh(&priv->lock);
+ 	for (node = rb_first(&priv->root); node != NULL; node = rb_next(node)) {
+ 
+ 		/* Ruleset has been updated, try later. */
+@@ -670,11 +669,10 @@ dead_elem:
+ 		nft_trans_gc_elem_add(gc, rbe);
+ 	}
+ 
+-	gc = nft_trans_gc_catchall(gc, gc_seq);
++	gc = nft_trans_gc_catchall_async(gc, gc_seq);
+ 
+ try_later:
+-	write_seqcount_end(&priv->count);
+-	write_unlock_bh(&priv->lock);
++	read_unlock_bh(&priv->lock);
+ 
+ 	if (gc)
+ 		nft_trans_gc_queue_async_done(gc);
+diff --git a/net/rds/rdma_transport.c b/net/rds/rdma_transport.c
+index d36f3f6b43510..b15cf316b23a2 100644
+--- a/net/rds/rdma_transport.c
++++ b/net/rds/rdma_transport.c
+@@ -86,11 +86,13 @@ static int rds_rdma_cm_event_handler_cmn(struct rdma_cm_id *cm_id,
+ 		break;
+ 
+ 	case RDMA_CM_EVENT_ADDR_RESOLVED:
+-		rdma_set_service_type(cm_id, conn->c_tos);
+-		rdma_set_min_rnr_timer(cm_id, IB_RNR_TIMER_000_32);
+-		/* XXX do we need to clean up if this fails? */
+-		ret = rdma_resolve_route(cm_id,
+-					 RDS_RDMA_RESOLVE_TIMEOUT_MS);
++		if (conn) {
++			rdma_set_service_type(cm_id, conn->c_tos);
++			rdma_set_min_rnr_timer(cm_id, IB_RNR_TIMER_000_32);
++			/* XXX do we need to clean up if this fails? */
++			ret = rdma_resolve_route(cm_id,
++						 RDS_RDMA_RESOLVE_TIMEOUT_MS);
++		}
+ 		break;
+ 
+ 	case RDMA_CM_EVENT_ROUTE_RESOLVED:
+diff --git a/net/smc/smc_stats.h b/net/smc/smc_stats.h
+index b60fe1eb37ab6..aa8928975cc63 100644
+--- a/net/smc/smc_stats.h
++++ b/net/smc/smc_stats.h
+@@ -243,8 +243,9 @@ while (0)
+ #define SMC_STAT_SERV_SUCC_INC(net, _ini) \
+ do { \
+ 	typeof(_ini) i = (_ini); \
+-	bool is_v2 = (i->smcd_version & SMC_V2); \
+ 	bool is_smcd = (i->is_smcd); \
++	u8 version = is_smcd ? i->smcd_version : i->smcr_version; \
++	bool is_v2 = (version & SMC_V2); \
+ 	typeof(net->smc.smc_stats) smc_stats = (net)->smc.smc_stats; \
+ 	if (is_v2 && is_smcd) \
+ 		this_cpu_inc(smc_stats->smc[SMC_TYPE_D].srv_v2_succ_cnt); \
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 315bd59dea056..be6be7d785315 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -2474,8 +2474,7 @@ call_status(struct rpc_task *task)
+ 		goto out_exit;
+ 	}
+ 	task->tk_action = call_encode;
+-	if (status != -ECONNRESET && status != -ECONNABORTED)
+-		rpc_check_timeout(task);
++	rpc_check_timeout(task);
+ 	return;
+ out_exit:
+ 	rpc_call_rpcerror(task, status);
+@@ -2748,6 +2747,7 @@ out_msg_denied:
+ 	case rpc_autherr_rejectedverf:
+ 	case rpcsec_gsserr_credproblem:
+ 	case rpcsec_gsserr_ctxproblem:
++		rpcauth_invalcred(task);
+ 		if (!task->tk_cred_retry)
+ 			break;
+ 		task->tk_cred_retry--;
+@@ -2904,19 +2904,22 @@ static const struct rpc_call_ops rpc_cb_add_xprt_call_ops = {
+  * @clnt: pointer to struct rpc_clnt
+  * @xps: pointer to struct rpc_xprt_switch,
+  * @xprt: pointer struct rpc_xprt
+- * @dummy: unused
++ * @in_max_connect: pointer to the max_connect value for the passed in xprt transport
+  */
+ int rpc_clnt_test_and_add_xprt(struct rpc_clnt *clnt,
+ 		struct rpc_xprt_switch *xps, struct rpc_xprt *xprt,
+-		void *dummy)
++		void *in_max_connect)
+ {
+ 	struct rpc_cb_add_xprt_calldata *data;
+ 	struct rpc_task *task;
++	int max_connect = clnt->cl_max_connect;
+ 
+-	if (xps->xps_nunique_destaddr_xprts + 1 > clnt->cl_max_connect) {
++	if (in_max_connect)
++		max_connect = *(int *)in_max_connect;
++	if (xps->xps_nunique_destaddr_xprts + 1 > max_connect) {
+ 		rcu_read_lock();
+ 		pr_warn("SUNRPC: reached max allowed number (%d) did not add "
+-			"transport to server: %s\n", clnt->cl_max_connect,
++			"transport to server: %s\n", max_connect,
+ 			rpc_peeraddr2str(clnt, RPC_DISPLAY_ADDR));
+ 		rcu_read_unlock();
+ 		return -EINVAL;
+diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
+index c0c8a85d7c81b..a45154cefa487 100755
+--- a/scripts/atomic/gen-atomic-fallback.sh
++++ b/scripts/atomic/gen-atomic-fallback.sh
+@@ -102,7 +102,7 @@ gen_proto_order_variant()
+ 	fi
+ 
+ 	# Allow ACQUIRE/RELEASE/RELAXED ops to be defined in terms of FULL ops
+-	if [ ! -z "${order}" ]; then
++	if [ ! -z "${order}" ] && ! meta_is_implicitly_relaxed "${meta}"; then
+ 		printf "#elif defined(arch_${basename})\n"
+ 		printf "\t${retstmt}arch_${basename}(${args});\n"
+ 	fi
+diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
+index 2d3cec908154d..cef48319bd396 100644
+--- a/sound/core/rawmidi.c
++++ b/sound/core/rawmidi.c
+@@ -1770,7 +1770,7 @@ static void snd_rawmidi_proc_info_read(struct snd_info_entry *entry,
+ 	if (IS_ENABLED(CONFIG_SND_UMP))
+ 		snd_iprintf(buffer, "Type: %s\n",
+ 			    rawmidi_is_ump(rmidi) ? "UMP" : "Legacy");
+-	if (rmidi->ops->proc_read)
++	if (rmidi->ops && rmidi->ops->proc_read)
+ 		rmidi->ops->proc_read(entry, buffer);
+ 	mutex_lock(&rmidi->open_mutex);
+ 	if (rmidi->info_flags & SNDRV_RAWMIDI_INFO_OUTPUT) {
+diff --git a/sound/core/seq/seq_ump_client.c b/sound/core/seq/seq_ump_client.c
+index f26a1812dfa73..2db371d79930d 100644
+--- a/sound/core/seq/seq_ump_client.c
++++ b/sound/core/seq/seq_ump_client.c
+@@ -207,7 +207,7 @@ static void fill_port_info(struct snd_seq_port_info *port,
+ 		SNDRV_SEQ_PORT_TYPE_PORT;
+ 	port->midi_channels = 16;
+ 	if (*group->name)
+-		snprintf(port->name, sizeof(port->name), "Group %d (%s)",
++		snprintf(port->name, sizeof(port->name), "Group %d (%.53s)",
+ 			 group->group + 1, group->name);
+ 	else
+ 		sprintf(port->name, "Group %d", group->group + 1);
+@@ -416,6 +416,25 @@ static void setup_client_midi_version(struct seq_ump_client *client)
+ 	snd_seq_kernel_client_put(cptr);
+ }
+ 
++/* set up client's group_filter bitmap */
++static void setup_client_group_filter(struct seq_ump_client *client)
++{
++	struct snd_seq_client *cptr;
++	unsigned int filter;
++	int p;
++
++	cptr = snd_seq_kernel_client_get(client->seq_client);
++	if (!cptr)
++		return;
++	filter = ~(1U << 0); /* always allow groupless messages */
++	for (p = 0; p < SNDRV_UMP_MAX_GROUPS; p++) {
++		if (client->groups[p].active)
++			filter &= ~(1U << (p + 1));
++	}
++	cptr->group_filter = filter;
++	snd_seq_kernel_client_put(cptr);
++}
++
+ /* UMP group change notification */
+ static void handle_group_notify(struct work_struct *work)
+ {
+@@ -424,6 +443,7 @@ static void handle_group_notify(struct work_struct *work)
+ 
+ 	update_group_attrs(client);
+ 	update_port_infos(client);
++	setup_client_group_filter(client);
+ }
+ 
+ /* UMP FB change notification */
+@@ -492,6 +512,8 @@ static int snd_seq_ump_probe(struct device *_dev)
+ 			goto error;
+ 	}
+ 
++	setup_client_group_filter(client);
++
+ 	err = create_ump_endpoint_port(client);
+ 	if (err < 0)
+ 		goto error;
+diff --git a/sound/core/seq/seq_ump_convert.c b/sound/core/seq/seq_ump_convert.c
+index 7cc84e137999c..b141024830ecc 100644
+--- a/sound/core/seq/seq_ump_convert.c
++++ b/sound/core/seq/seq_ump_convert.c
+@@ -1197,6 +1197,8 @@ int snd_seq_deliver_to_ump(struct snd_seq_client *source,
+ 			   struct snd_seq_event *event,
+ 			   int atomic, int hop)
+ {
++	if (dest->group_filter & (1U << dest_port->ump_group))
++		return 0; /* group filtered - skip the event */
+ 	if (event->type == SNDRV_SEQ_EVENT_SYSEX)
+ 		return cvt_sysex_to_ump(dest, dest_port, event, atomic, hop);
+ 	else if (snd_seq_client_is_midi2(dest))
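
The sequencer hunks above build a per-client group_filter bitmap: bit 0 stands for groupless messages and is always left clear, bit N+1 is cleared for each active UMP group, and delivery drops events whose bit is still set. A standalone model of the bitmap (stand-in sizes and names; the port's group index is offset by one, slot 0 meaning groupless):

#include <stdbool.h>
#include <stdio.h>

#define MAX_GROUPS 16

static unsigned int build_filter(const bool active[MAX_GROUPS])
{
	unsigned int filter = ~(1U << 0);	/* always allow groupless traffic */

	for (int p = 0; p < MAX_GROUPS; p++)
		if (active[p])
			filter &= ~(1U << (p + 1));	/* allow active group p */
	return filter;
}

static bool filtered(unsigned int filter, int group)
{
	return filter & (1U << (group + 1));
}

int main(void)
{
	bool active[MAX_GROUPS] = { [0] = true, [3] = true };
	unsigned int filter = build_filter(active);

	printf("group 0 filtered: %d\n", filtered(filter, 0));	/* 0: active */
	printf("group 1 filtered: %d\n", filtered(filter, 1));	/* 1: inactive */
	return 0;
}
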
+diff --git a/sound/hda/intel-sdw-acpi.c b/sound/hda/intel-sdw-acpi.c
+index 5cb92f7ccbcac..b57d72ea4503f 100644
+--- a/sound/hda/intel-sdw-acpi.c
++++ b/sound/hda/intel-sdw-acpi.c
+@@ -23,7 +23,7 @@ static int ctrl_link_mask;
+ module_param_named(sdw_link_mask, ctrl_link_mask, int, 0444);
+ MODULE_PARM_DESC(sdw_link_mask, "Intel link mask (one bit per link)");
+ 
+-static bool is_link_enabled(struct fwnode_handle *fw_node, int i)
++static bool is_link_enabled(struct fwnode_handle *fw_node, u8 idx)
+ {
+ 	struct fwnode_handle *link;
+ 	char name[32];
+@@ -31,7 +31,7 @@ static bool is_link_enabled(struct fwnode_handle *fw_node, int i)
+ 
+ 	/* Find master handle */
+ 	snprintf(name, sizeof(name),
+-		 "mipi-sdw-link-%d-subproperties", i);
++		 "mipi-sdw-link-%hhu-subproperties", idx);
+ 
+ 	link = fwnode_get_named_child_node(fw_node, name);
+ 	if (!link)
+@@ -51,8 +51,8 @@ static int
+ sdw_intel_scan_controller(struct sdw_intel_acpi_info *info)
+ {
+ 	struct acpi_device *adev = acpi_fetch_acpi_dev(info->handle);
+-	int ret, i;
+-	u8 count;
++	u8 count, i;
++	int ret;
+ 
+ 	if (!adev)
+ 		return -EINVAL;
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index ef831770ca7da..5cfd009175dac 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2223,6 +2223,7 @@ static const struct snd_pci_quirk power_save_denylist[] = {
+ 	SND_PCI_QUIRK(0x8086, 0x2068, "Intel NUC7i3BNB", 0),
+ 	/* https://bugzilla.kernel.org/show_bug.cgi?id=198611 */
+ 	SND_PCI_QUIRK(0x17aa, 0x2227, "Lenovo X1 Carbon 3rd Gen", 0),
++	SND_PCI_QUIRK(0x17aa, 0x316e, "Lenovo ThinkCentre M70q", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1689623 */
+ 	SND_PCI_QUIRK(0x17aa, 0x367b, "Lenovo IdeaCentre B550", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1572975 */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index dc7b7a407638a..4a13747b2b0f3 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -9680,7 +9680,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x1d1f, "ASUS ROG Strix G17 2023 (G713PV)", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401),
+ 	SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE),
+-	SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402", ALC245_FIXUP_CS35L41_SPI_2),
++	SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402ZA", ALC245_FIXUP_CS35L41_SPI_2),
++	SND_PCI_QUIRK(0x1043, 0x16a3, "ASUS UX3402VA", ALC245_FIXUP_CS35L41_SPI_2),
+ 	SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
+ 	SND_PCI_QUIRK(0x1043, 0x1e12, "ASUS UM3402", ALC287_FIXUP_CS35L41_I2C_2),
+ 	SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index b304b3562c82b..5cc774b3da05c 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -213,6 +213,20 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "21J6"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "82TL"),
++		}
++	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "82QF"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+@@ -220,6 +234,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "82V2"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "82UG"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+@@ -255,6 +276,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "M6500RC"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Micro-Star International Co., Ltd."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 15 B7ED"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+@@ -325,6 +353,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ 			DMI_MATCH(DMI_BOARD_NAME, "8A22"),
+ 		}
+ 	},
++	{
++		.driver_data = &acp6x_card,
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "HP"),
++			DMI_MATCH(DMI_BOARD_NAME, "8A3E"),
++		}
++	},
+ 	{
+ 		.driver_data = &acp6x_card,
+ 		.matches = {
+diff --git a/sound/soc/codecs/cs35l56-i2c.c b/sound/soc/codecs/cs35l56-i2c.c
+index 40666e6698ba9..b69441ec8d99f 100644
+--- a/sound/soc/codecs/cs35l56-i2c.c
++++ b/sound/soc/codecs/cs35l56-i2c.c
+@@ -27,7 +27,6 @@ static int cs35l56_i2c_probe(struct i2c_client *client)
+ 		return -ENOMEM;
+ 
+ 	cs35l56->dev = dev;
+-	cs35l56->can_hibernate = true;
+ 
+ 	i2c_set_clientdata(client, cs35l56);
+ 	cs35l56->regmap = devm_regmap_init_i2c(client, regmap_config);
+diff --git a/sound/soc/codecs/cs35l56.c b/sound/soc/codecs/cs35l56.c
+index fd06b9f9d496d..7e241908b5f16 100644
+--- a/sound/soc/codecs/cs35l56.c
++++ b/sound/soc/codecs/cs35l56.c
+@@ -1594,6 +1594,7 @@ void cs35l56_remove(struct cs35l56_private *cs35l56)
+ 	flush_workqueue(cs35l56->dsp_wq);
+ 	destroy_workqueue(cs35l56->dsp_wq);
+ 
++	pm_runtime_dont_use_autosuspend(cs35l56->dev);
+ 	pm_runtime_suspend(cs35l56->dev);
+ 	pm_runtime_disable(cs35l56->dev);
+ 
+diff --git a/sound/soc/codecs/cs42l42-sdw.c b/sound/soc/codecs/cs42l42-sdw.c
+index eeab07c850f95..974bae4abfad1 100644
+--- a/sound/soc/codecs/cs42l42-sdw.c
++++ b/sound/soc/codecs/cs42l42-sdw.c
+@@ -344,6 +344,16 @@ static int cs42l42_sdw_update_status(struct sdw_slave *peripheral,
+ 	switch (status) {
+ 	case SDW_SLAVE_ATTACHED:
+ 		dev_dbg(cs42l42->dev, "ATTACHED\n");
++
++		/*
++		 * The SoundWire core can report stale ATTACH notifications
++		 * if we hard-reset CS42L42 in probe() but it had already been
++		 * enumerated. Reject the ATTACH if we haven't yet seen an
++		 * UNATTACH report for the device being in reset.
++		 */
++		if (cs42l42->sdw_waiting_first_unattach)
++			break;
++
+ 		/*
+ 		 * Initialise codec, this only needs to be done once.
+ 		 * When resuming from suspend, resume callback will handle re-init of codec,
+@@ -354,6 +364,16 @@ static int cs42l42_sdw_update_status(struct sdw_slave *peripheral,
+ 		break;
+ 	case SDW_SLAVE_UNATTACHED:
+ 		dev_dbg(cs42l42->dev, "UNATTACHED\n");
++
++		if (cs42l42->sdw_waiting_first_unattach) {
++			/*
++			 * SoundWire core has seen that CS42L42 is not on
++			 * the bus so release RESET and wait for ATTACH.
++			 */
++			cs42l42->sdw_waiting_first_unattach = false;
++			gpiod_set_value_cansleep(cs42l42->reset_gpio, 1);
++		}
++
+ 		break;
+ 	default:
+ 		break;
+diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c
+index a0de0329406a1..2961340f15e2e 100644
+--- a/sound/soc/codecs/cs42l42.c
++++ b/sound/soc/codecs/cs42l42.c
+@@ -2320,7 +2320,26 @@ int cs42l42_common_probe(struct cs42l42_private *cs42l42,
+ 
+ 	if (cs42l42->reset_gpio) {
+ 		dev_dbg(cs42l42->dev, "Found reset GPIO\n");
+-		gpiod_set_value_cansleep(cs42l42->reset_gpio, 1);
++
++		/*
++		 * ACPI can override the default GPIO state we requested
++		 * so ensure that we start with RESET low.
++		 */
++		gpiod_set_value_cansleep(cs42l42->reset_gpio, 0);
++
++		/* Ensure minimum reset pulse width */
++		usleep_range(10, 500);
++
++		/*
++		 * On SoundWire keep the chip in reset until we get an UNATTACH
++		 * notification from the SoundWire core. This acts as a
++		 * synchronization point to reject stale ATTACH notifications
++		 * if the chip was already enumerated before we reset it.
++		 */
++		if (cs42l42->sdw_peripheral)
++			cs42l42->sdw_waiting_first_unattach = true;
++		else
++			gpiod_set_value_cansleep(cs42l42->reset_gpio, 1);
+ 	}
+ 	usleep_range(CS42L42_BOOT_TIME_US, CS42L42_BOOT_TIME_US * 2);
+ 
+diff --git a/sound/soc/codecs/cs42l42.h b/sound/soc/codecs/cs42l42.h
+index 4bd7b85a57471..7785125b73ab9 100644
+--- a/sound/soc/codecs/cs42l42.h
++++ b/sound/soc/codecs/cs42l42.h
+@@ -53,6 +53,7 @@ struct  cs42l42_private {
+ 	u8 stream_use;
+ 	bool hp_adc_up_pending;
+ 	bool suspended;
++	bool sdw_waiting_first_unattach;
+ 	bool init_done;
+ };
+ 
+diff --git a/sound/soc/codecs/rt5640.c b/sound/soc/codecs/rt5640.c
+index eceed82097877..0a05554da3739 100644
+--- a/sound/soc/codecs/rt5640.c
++++ b/sound/soc/codecs/rt5640.c
+@@ -2404,13 +2404,11 @@ static irqreturn_t rt5640_irq(int irq, void *data)
+ 	struct rt5640_priv *rt5640 = data;
+ 	int delay = 0;
+ 
+-	if (rt5640->jd_src == RT5640_JD_SRC_HDA_HEADER) {
+-		cancel_delayed_work_sync(&rt5640->jack_work);
++	if (rt5640->jd_src == RT5640_JD_SRC_HDA_HEADER)
+ 		delay = 100;
+-	}
+ 
+ 	if (rt5640->jack)
+-		queue_delayed_work(system_long_wq, &rt5640->jack_work, delay);
++		mod_delayed_work(system_long_wq, &rt5640->jack_work, delay);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -2566,12 +2564,11 @@ static void rt5640_enable_jack_detect(struct snd_soc_component *component,
+ 	if (jack_data && jack_data->use_platform_clock)
+ 		rt5640->use_platform_clock = jack_data->use_platform_clock;
+ 
+-	ret = devm_request_threaded_irq(component->dev, rt5640->irq,
+-					NULL, rt5640_irq,
+-					IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+-					"rt5640", rt5640);
++	ret = request_irq(rt5640->irq, rt5640_irq,
++			  IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
++			  "rt5640", rt5640);
+ 	if (ret) {
+-		dev_warn(component->dev, "Failed to reguest IRQ %d: %d\n", rt5640->irq, ret);
++		dev_warn(component->dev, "Failed to request IRQ %d: %d\n", rt5640->irq, ret);
+ 		rt5640_disable_jack_detect(component);
+ 		return;
+ 	}
+@@ -2622,14 +2619,14 @@ static void rt5640_enable_hda_jack_detect(
+ 
+ 	rt5640->jack = jack;
+ 
+-	ret = devm_request_threaded_irq(component->dev, rt5640->irq,
+-					NULL, rt5640_irq, IRQF_TRIGGER_RISING | IRQF_ONESHOT,
+-					"rt5640", rt5640);
++	ret = request_irq(rt5640->irq, rt5640_irq,
++			  IRQF_TRIGGER_RISING | IRQF_ONESHOT, "rt5640", rt5640);
+ 	if (ret) {
+-		dev_warn(component->dev, "Failed to reguest IRQ %d: %d\n", rt5640->irq, ret);
+-		rt5640->irq = -ENXIO;
++		dev_warn(component->dev, "Failed to request IRQ %d: %d\n", rt5640->irq, ret);
++		rt5640->jack = NULL;
+ 		return;
+ 	}
++	rt5640->irq_requested = true;
+ 
+ 	/* sync initial jack state */
+ 	queue_delayed_work(system_long_wq, &rt5640->jack_work, 0);
+@@ -2802,12 +2799,12 @@ static int rt5640_suspend(struct snd_soc_component *component)
+ {
+ 	struct rt5640_priv *rt5640 = snd_soc_component_get_drvdata(component);
+ 
+-	if (rt5640->irq) {
++	if (rt5640->jack) {
+ 		/* disable jack interrupts during system suspend */
+ 		disable_irq(rt5640->irq);
++		rt5640_cancel_work(rt5640);
+ 	}
+ 
+-	rt5640_cancel_work(rt5640);
+ 	snd_soc_component_force_bias_level(component, SND_SOC_BIAS_OFF);
+ 	rt5640_reset(component);
+ 	regcache_cache_only(rt5640->regmap, true);
+@@ -2830,9 +2827,6 @@ static int rt5640_resume(struct snd_soc_component *component)
+ 	regcache_cache_only(rt5640->regmap, false);
+ 	regcache_sync(rt5640->regmap);
+ 
+-	if (rt5640->irq)
+-		enable_irq(rt5640->irq);
+-
+ 	if (rt5640->jack) {
+ 		if (rt5640->jd_src == RT5640_JD_SRC_HDA_HEADER) {
+ 			snd_soc_component_update_bits(component,
+@@ -2860,6 +2854,7 @@ static int rt5640_resume(struct snd_soc_component *component)
+ 			}
+ 		}
+ 
++		enable_irq(rt5640->irq);
+ 		queue_delayed_work(system_long_wq, &rt5640->jack_work, 0);
+ 	}
+ 
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index 5a89abfe87846..8c20ff6808941 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -687,7 +687,10 @@ int wm_adsp_write_ctl(struct wm_adsp *dsp, const char *name, int type,
+ 	struct wm_coeff_ctl *ctl;
+ 	int ret;
+ 
++	mutex_lock(&dsp->cs_dsp.pwr_lock);
+ 	ret = cs_dsp_coeff_write_ctrl(cs_ctl, 0, buf, len);
++	mutex_unlock(&dsp->cs_dsp.pwr_lock);
++
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -703,8 +706,14 @@ EXPORT_SYMBOL_GPL(wm_adsp_write_ctl);
+ int wm_adsp_read_ctl(struct wm_adsp *dsp, const char *name, int type,
+ 		     unsigned int alg, void *buf, size_t len)
+ {
+-	return cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(&dsp->cs_dsp, name, type, alg),
+-				      0, buf, len);
++	int ret;
++
++	mutex_lock(&dsp->cs_dsp.pwr_lock);
++	ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(&dsp->cs_dsp, name, type, alg),
++				     0, buf, len);
++	mutex_unlock(&dsp->cs_dsp.pwr_lock);
++
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(wm_adsp_read_ctl);
+ 
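
The wm_adsp hunk above wraps the control read/write helpers in the DSP's pwr_lock so external callers get the same serialization against the firmware power paths that internal users already have. A standalone model of pushing the lock into the accessor (userspace mutex, illustrative struct):

#include <pthread.h>
#include <stdio.h>
#include <string.h>

struct dsp {
	pthread_mutex_t pwr_lock;
	char ctl[16];
};

static int write_ctl(struct dsp *d, const char *buf, size_t len)
{
	int ret = 0;

	/* Serialize against power up/down, exactly as the internal paths do. */
	pthread_mutex_lock(&d->pwr_lock);
	if (len >= sizeof(d->ctl))
		ret = -1;
	else
		memcpy(d->ctl, buf, len + 1);
	pthread_mutex_unlock(&d->pwr_lock);

	return ret;
}

int main(void)
{
	struct dsp d = { .pwr_lock = PTHREAD_MUTEX_INITIALIZER };

	if (!write_ctl(&d, "volume=3", 8))
		printf("ctl now: %s\n", d.ctl);
	return 0;
}
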
+diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c
+index 0b58df56f4daa..aeb81aa61184f 100644
+--- a/sound/soc/fsl/imx-audmix.c
++++ b/sound/soc/fsl/imx-audmix.c
+@@ -315,7 +315,7 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ 	if (IS_ERR(priv->cpu_mclk)) {
+ 		ret = PTR_ERR(priv->cpu_mclk);
+ 		dev_err(&cpu_pdev->dev, "failed to get DAI mclk1: %d\n", ret);
+-		return -EINVAL;
++		return ret;
+ 	}
+ 
+ 	priv->audmix_pdev = audmix_pdev;
+diff --git a/sound/soc/fsl/imx-pcm-rpmsg.c b/sound/soc/fsl/imx-pcm-rpmsg.c
+index 765dad607bf61..5eef1554a93a1 100644
+--- a/sound/soc/fsl/imx-pcm-rpmsg.c
++++ b/sound/soc/fsl/imx-pcm-rpmsg.c
+@@ -19,6 +19,7 @@
+ static struct snd_pcm_hardware imx_rpmsg_pcm_hardware = {
+ 	.info = SNDRV_PCM_INFO_INTERLEAVED |
+ 		SNDRV_PCM_INFO_BLOCK_TRANSFER |
++		SNDRV_PCM_INFO_BATCH |
+ 		SNDRV_PCM_INFO_MMAP |
+ 		SNDRV_PCM_INFO_MMAP_VALID |
+ 		SNDRV_PCM_INFO_NO_PERIOD_WAKEUP |
+diff --git a/sound/soc/fsl/imx-rpmsg.c b/sound/soc/fsl/imx-rpmsg.c
+index 3c7b95db2eacc..b578f9a32d7f1 100644
+--- a/sound/soc/fsl/imx-rpmsg.c
++++ b/sound/soc/fsl/imx-rpmsg.c
+@@ -89,6 +89,14 @@ static int imx_rpmsg_probe(struct platform_device *pdev)
+ 			    SND_SOC_DAIFMT_NB_NF |
+ 			    SND_SOC_DAIFMT_CBC_CFC;
+ 
++	/*
++	 * i.MX rpmsg sound cards work on codec slave mode. MCLK will be
++	 * disabled by CPU DAI driver in hw_free(). Some codec requires MCLK
++	 * present at power up/down sequence. So need to set ignore_pmdown_time
++	 * to power down codec immediately before MCLK is turned off.
++	 */
++	data->dai.ignore_pmdown_time = 1;
++
+ 	/* Optional codec node */
+ 	ret = of_parse_phandle_with_fixed_args(np, "audio-codec", 0, 0, &args);
+ 	if (ret) {
+diff --git a/sound/soc/intel/avs/boards/hdaudio.c b/sound/soc/intel/avs/boards/hdaudio.c
+index cb00bc86ac949..8876558f19a1b 100644
+--- a/sound/soc/intel/avs/boards/hdaudio.c
++++ b/sound/soc/intel/avs/boards/hdaudio.c
+@@ -55,6 +55,9 @@ static int avs_create_dai_links(struct device *dev, struct hda_codec *codec, int
+ 			return -ENOMEM;
+ 
+ 		dl[i].codecs->name = devm_kstrdup(dev, cname, GFP_KERNEL);
++		if (!dl[i].codecs->name)
++			return -ENOMEM;
++
+ 		dl[i].codecs->dai_name = pcm->name;
+ 		dl[i].num_codecs = 1;
+ 		dl[i].num_cpus = 1;
+diff --git a/sound/soc/meson/axg-spdifin.c b/sound/soc/meson/axg-spdifin.c
+index e2cc4c4be7586..97e81ec4a78ce 100644
+--- a/sound/soc/meson/axg-spdifin.c
++++ b/sound/soc/meson/axg-spdifin.c
+@@ -112,34 +112,6 @@ static int axg_spdifin_prepare(struct snd_pcm_substream *substream,
+ 	return 0;
+ }
+ 
+-static int axg_spdifin_startup(struct snd_pcm_substream *substream,
+-			       struct snd_soc_dai *dai)
+-{
+-	struct axg_spdifin *priv = snd_soc_dai_get_drvdata(dai);
+-	int ret;
+-
+-	ret = clk_prepare_enable(priv->refclk);
+-	if (ret) {
+-		dev_err(dai->dev,
+-			"failed to enable spdifin reference clock\n");
+-		return ret;
+-	}
+-
+-	regmap_update_bits(priv->map, SPDIFIN_CTRL0, SPDIFIN_CTRL0_EN,
+-			   SPDIFIN_CTRL0_EN);
+-
+-	return 0;
+-}
+-
+-static void axg_spdifin_shutdown(struct snd_pcm_substream *substream,
+-				 struct snd_soc_dai *dai)
+-{
+-	struct axg_spdifin *priv = snd_soc_dai_get_drvdata(dai);
+-
+-	regmap_update_bits(priv->map, SPDIFIN_CTRL0, SPDIFIN_CTRL0_EN, 0);
+-	clk_disable_unprepare(priv->refclk);
+-}
+-
+ static void axg_spdifin_write_mode_param(struct regmap *map, int mode,
+ 					 unsigned int val,
+ 					 unsigned int num_per_reg,
+@@ -251,25 +223,38 @@ static int axg_spdifin_dai_probe(struct snd_soc_dai *dai)
+ 	ret = axg_spdifin_sample_mode_config(dai, priv);
+ 	if (ret) {
+ 		dev_err(dai->dev, "mode configuration failed\n");
+-		clk_disable_unprepare(priv->pclk);
+-		return ret;
++		goto pclk_err;
+ 	}
+ 
++	ret = clk_prepare_enable(priv->refclk);
++	if (ret) {
++		dev_err(dai->dev,
++			"failed to enable spdifin reference clock\n");
++		goto pclk_err;
++	}
++
++	regmap_update_bits(priv->map, SPDIFIN_CTRL0, SPDIFIN_CTRL0_EN,
++			   SPDIFIN_CTRL0_EN);
++
+ 	return 0;
++
++pclk_err:
++	clk_disable_unprepare(priv->pclk);
++	return ret;
+ }
+ 
+ static int axg_spdifin_dai_remove(struct snd_soc_dai *dai)
+ {
+ 	struct axg_spdifin *priv = snd_soc_dai_get_drvdata(dai);
+ 
++	regmap_update_bits(priv->map, SPDIFIN_CTRL0, SPDIFIN_CTRL0_EN, 0);
++	clk_disable_unprepare(priv->refclk);
+ 	clk_disable_unprepare(priv->pclk);
+ 	return 0;
+ }
+ 
+ static const struct snd_soc_dai_ops axg_spdifin_ops = {
+ 	.prepare	= axg_spdifin_prepare,
+-	.startup	= axg_spdifin_startup,
+-	.shutdown	= axg_spdifin_shutdown,
+ };
+ 
+ static int axg_spdifin_iec958_info(struct snd_kcontrol *kcontrol,
+diff --git a/sound/soc/sof/core.c b/sound/soc/sof/core.c
+index 30db685cc5f4b..2d1616b81485c 100644
+--- a/sound/soc/sof/core.c
++++ b/sound/soc/sof/core.c
+@@ -486,10 +486,9 @@ int snd_sof_device_remove(struct device *dev)
+ 		snd_sof_ipc_free(sdev);
+ 		snd_sof_free_debug(sdev);
+ 		snd_sof_remove(sdev);
++		sof_ops_free(sdev);
+ 	}
+ 
+-	sof_ops_free(sdev);
+-
+ 	/* release firmware */
+ 	snd_sof_fw_unload(sdev);
+ 
+diff --git a/sound/soc/sof/intel/mtl.c b/sound/soc/sof/intel/mtl.c
+index 30fe77fd87bf8..79e9a7ed8feaa 100644
+--- a/sound/soc/sof/intel/mtl.c
++++ b/sound/soc/sof/intel/mtl.c
+@@ -460,7 +460,7 @@ int mtl_dsp_cl_init(struct snd_sof_dev *sdev, int stream_tag, bool imr_boot)
+ 	/* step 3: wait for IPC DONE bit from ROM */
+ 	ret = snd_sof_dsp_read_poll_timeout(sdev, HDA_DSP_BAR, chip->ipc_ack, status,
+ 					    ((status & chip->ipc_ack_mask) == chip->ipc_ack_mask),
+-					    HDA_DSP_REG_POLL_INTERVAL_US, MTL_DSP_PURGE_TIMEOUT_US);
++					    HDA_DSP_REG_POLL_INTERVAL_US, HDA_DSP_INIT_TIMEOUT_US);
+ 	if (ret < 0) {
+ 		if (hda->boot_iteration == HDA_FW_BOOT_ATTEMPTS)
+ 			dev_err(sdev->dev, "timeout waiting for purge IPC done\n");
+diff --git a/sound/soc/sof/intel/mtl.h b/sound/soc/sof/intel/mtl.h
+index 2794fe6e81396..9a0b8b9d8a0c9 100644
+--- a/sound/soc/sof/intel/mtl.h
++++ b/sound/soc/sof/intel/mtl.h
+@@ -62,7 +62,6 @@
+ #define MTL_DSP_IRQSTS_IPC		BIT(0)
+ #define MTL_DSP_IRQSTS_SDW		BIT(6)
+ 
+-#define MTL_DSP_PURGE_TIMEOUT_US	20000000 /* 20s */
+ #define MTL_DSP_REG_POLL_INTERVAL_US	10	/* 10 us */
+ 
+ /* Memory windows */
+diff --git a/sound/soc/sof/ipc4-topology.c b/sound/soc/sof/ipc4-topology.c
+index 11361e1cd6881..8fb6582e568e7 100644
+--- a/sound/soc/sof/ipc4-topology.c
++++ b/sound/soc/sof/ipc4-topology.c
+@@ -218,7 +218,7 @@ static int sof_ipc4_get_audio_fmt(struct snd_soc_component *scomp,
+ 
+ 	ret = sof_update_ipc_object(scomp, available_fmt,
+ 				    SOF_AUDIO_FMT_NUM_TOKENS, swidget->tuples,
+-				    swidget->num_tuples, sizeof(available_fmt), 1);
++				    swidget->num_tuples, sizeof(*available_fmt), 1);
+ 	if (ret) {
+ 		dev_err(scomp->dev, "Failed to parse audio format token count\n");
+ 		return ret;
+diff --git a/sound/soc/sof/sof-audio.c b/sound/soc/sof/sof-audio.c
+index e7ef77012c358..e5405f854a910 100644
+--- a/sound/soc/sof/sof-audio.c
++++ b/sound/soc/sof/sof-audio.c
+@@ -212,7 +212,8 @@ widget_free:
+ 	sof_widget_free_unlocked(sdev, swidget);
+ 	use_count_decremented = true;
+ core_put:
+-	snd_sof_dsp_core_put(sdev, swidget->core);
++	if (!use_count_decremented)
++		snd_sof_dsp_core_put(sdev, swidget->core);
+ pipe_widget_free:
+ 	if (swidget->id != snd_soc_dapm_scheduler)
+ 		sof_widget_free_unlocked(sdev, swidget->spipe->pipe_widget);
+diff --git a/tools/include/linux/btf_ids.h b/tools/include/linux/btf_ids.h
+index 71e54b1e37964..2f882d5cb30f5 100644
+--- a/tools/include/linux/btf_ids.h
++++ b/tools/include/linux/btf_ids.h
+@@ -38,7 +38,7 @@ asm(							\
+ 	____BTF_ID(symbol)
+ 
+ #define __ID(prefix) \
+-	__PASTE(prefix, __COUNTER__)
++	__PASTE(__PASTE(prefix, __COUNTER__), __LINE__)
+ 
+ /*
+  * The BTF_ID defines unique symbol for each ID pointing
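
The btf_ids.h hunk above pastes __LINE__ into the generated symbol in addition to __COUNTER__. Since __COUNTER__ restarts at zero in each compilation, two expansions can otherwise collide on the same generated name; folding the line number in makes such a clash far less likely (stated here as the apparent intent, not quoted from the patch). A standalone illustration:

#include <stdio.h>

#define ___PASTE(a, b) a##b
#define __PASTE(a, b) ___PASTE(a, b)
#define UNIQUE_ID(prefix) __PASTE(__PASTE(prefix, __COUNTER__), __LINE__)

int UNIQUE_ID(tmp_) = 1;	/* expands to e.g. tmp_08 (counter 0, line 8) */
int UNIQUE_ID(tmp_) = 2;	/* expands to e.g. tmp_19 — a distinct symbol */

int main(void)
{
	printf("two distinct generated globals\n");
	return 0;
}
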
+diff --git a/tools/include/linux/mm.h b/tools/include/linux/mm.h
+index a03d9bba51514..f3c82ab5b14cd 100644
+--- a/tools/include/linux/mm.h
++++ b/tools/include/linux/mm.h
+@@ -11,8 +11,6 @@
+ 
+ #define PHYS_ADDR_MAX	(~(phys_addr_t)0)
+ 
+-#define __ALIGN_KERNEL(x, a)		__ALIGN_KERNEL_MASK(x, (typeof(x))(a) - 1)
+-#define __ALIGN_KERNEL_MASK(x, mask)	(((x) + (mask)) & ~(mask))
+ #define ALIGN(x, a)			__ALIGN_KERNEL((x), (a))
+ #define ALIGN_DOWN(x, a)		__ALIGN_KERNEL((x) - ((a) - 1), (a))
+ 
+@@ -29,7 +27,7 @@ static inline void *phys_to_virt(unsigned long address)
+ 	return __va(address);
+ }
+ 
+-void reserve_bootmem_region(phys_addr_t start, phys_addr_t end);
++void reserve_bootmem_region(phys_addr_t start, phys_addr_t end, int nid);
+ 
+ static inline void totalram_pages_inc(void)
+ {
+diff --git a/tools/include/linux/seq_file.h b/tools/include/linux/seq_file.h
+index 102fd9217f1f9..f6bc226af0c1d 100644
+--- a/tools/include/linux/seq_file.h
++++ b/tools/include/linux/seq_file.h
+@@ -1,4 +1,6 @@
+ #ifndef _TOOLS_INCLUDE_LINUX_SEQ_FILE_H
+ #define _TOOLS_INCLUDE_LINUX_SEQ_FILE_H
+ 
++struct seq_file;
++
+ #endif /* _TOOLS_INCLUDE_LINUX_SEQ_FILE_H */
+diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
+index 60a9d59beeabb..25f668165b567 100644
+--- a/tools/include/uapi/linux/bpf.h
++++ b/tools/include/uapi/linux/bpf.h
+@@ -1897,7 +1897,9 @@ union bpf_attr {
+  * 		performed again, if the helper is used in combination with
+  * 		direct packet access.
+  * 	Return
+- * 		0 on success, or a negative error in case of failure.
++ * 		0 on success, or a negative error in case of failure. Positive
++ * 		error indicates a potential drop or congestion in the target
++ * 		device. The particular positive error codes are not defined.
+  *
+  * u64 bpf_get_current_pid_tgid(void)
+  * 	Description
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 1384090530dbe..e308d1ba664ef 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -4333,7 +4333,8 @@ static int validate_ibt_insn(struct objtool_file *file, struct instruction *insn
+ 			continue;
+ 		}
+ 
+-		if (insn_func(dest) && insn_func(dest) == insn_func(insn)) {
++		if (insn_func(dest) && insn_func(insn) &&
++		    insn_func(dest)->pfunc == insn_func(insn)->pfunc) {
+ 			/*
+ 			 * Anything from->to self is either _THIS_IP_ or
+ 			 * IRET-to-self.
+diff --git a/tools/perf/util/Build b/tools/perf/util/Build
+index 96f4ea1d45c56..9c6c4475524b9 100644
+--- a/tools/perf/util/Build
++++ b/tools/perf/util/Build
+@@ -301,6 +301,12 @@ ifeq ($(BISON_GE_35),1)
+ else
+   bison_flags += -w
+ endif
++
++BISON_LT_381 := $(shell expr $(shell $(BISON) --version | grep bison | sed -e 's/.\+ \([0-9]\+\).\([0-9]\+\).\([0-9]\+\)/\1\2\3/g') \< 381)
++ifeq ($(BISON_LT_381),1)
++  bison_flags += -DYYNOMEM=YYABORT
++endif
++
+ CFLAGS_parse-events-bison.o += $(bison_flags)
+ CFLAGS_pmu-bison.o          += -DYYLTYPE_IS_TRIVIAL=0 $(bison_flags)
+ CFLAGS_expr-bison.o         += -DYYLTYPE_IS_TRIVIAL=0 $(bison_flags)
+diff --git a/tools/testing/memblock/internal.h b/tools/testing/memblock/internal.h
+index fdb7f5db73082..f6c6e5474c3af 100644
+--- a/tools/testing/memblock/internal.h
++++ b/tools/testing/memblock/internal.h
+@@ -20,4 +20,8 @@ void memblock_free_pages(struct page *page, unsigned long pfn,
+ {
+ }
+ 
++static inline void accept_memory(phys_addr_t start, phys_addr_t end)
++{
++}
++
+ #endif
+diff --git a/tools/testing/memblock/mmzone.c b/tools/testing/memblock/mmzone.c
+index 7b0909e8b759d..d3d58851864e7 100644
+--- a/tools/testing/memblock/mmzone.c
++++ b/tools/testing/memblock/mmzone.c
+@@ -11,7 +11,7 @@ struct pglist_data *next_online_pgdat(struct pglist_data *pgdat)
+ 	return NULL;
+ }
+ 
+-void reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
++void reserve_bootmem_region(phys_addr_t start, phys_addr_t end, int nid)
+ {
+ }
+ 
+diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
+index 411647094cc37..57bf2688edfd6 100644
+--- a/tools/testing/memblock/tests/basic_api.c
++++ b/tools/testing/memblock/tests/basic_api.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0-or-later
++#include "basic_api.h"
+ #include <string.h>
+ #include <linux/memblock.h>
+-#include "basic_api.h"
+ 
+ #define EXPECTED_MEMBLOCK_REGIONS			128
+ #define FUNC_ADD					"memblock_add"
+diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h
+index 4f23302ee6779..b5ec59aa62d72 100644
+--- a/tools/testing/memblock/tests/common.h
++++ b/tools/testing/memblock/tests/common.h
+@@ -5,6 +5,7 @@
+ #include <stdlib.h>
+ #include <assert.h>
+ #include <linux/types.h>
++#include <linux/seq_file.h>
+ #include <linux/memblock.h>
+ #include <linux/sizes.h>
+ #include <linux/printk.h>
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index 31f1c935cd07d..98107e0452d33 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -1880,7 +1880,7 @@ int main(int argc, char **argv)
+ 		}
+ 	}
+ 
+-	get_unpriv_disabled();
++	unpriv_disabled = get_unpriv_disabled();
+ 	if (unpriv && unpriv_disabled) {
+ 		printf("Cannot run as unprivileged user with sysctl %s.\n",
+ 		       UNPRIV_SYSCTL);
+diff --git a/tools/testing/selftests/ftrace/test.d/instances/instance-event.tc b/tools/testing/selftests/ftrace/test.d/instances/instance-event.tc
+index 0eb47fbb3f44d..42422e4251078 100644
+--- a/tools/testing/selftests/ftrace/test.d/instances/instance-event.tc
++++ b/tools/testing/selftests/ftrace/test.d/instances/instance-event.tc
+@@ -39,7 +39,7 @@ instance_read() {
+ 
+ instance_set() {
+         while :; do
+-                echo 1 > foo/events/sched/sched_switch
++                echo 1 > foo/events/sched/sched_switch/enable
+         done 2> /dev/null
+ }
+ 
+diff --git a/tools/testing/selftests/kselftest_deps.sh b/tools/testing/selftests/kselftest_deps.sh
+index 4bc14d9e8ff1d..de59cc8f03c3f 100755
+--- a/tools/testing/selftests/kselftest_deps.sh
++++ b/tools/testing/selftests/kselftest_deps.sh
+@@ -46,11 +46,11 @@ fi
+ print_targets=0
+ 
+ while getopts "p" arg; do
+-    case $arg in
+-        p)
++	case $arg in
++		p)
+ 		print_targets=1
+ 	shift;;
+-    esac
++	esac
+ done
+ 
+ if [ $# -eq 0 ]
+@@ -92,6 +92,10 @@ pass_cnt=0
+ # Get all TARGETS from selftests Makefile
+ targets=$(grep -E "^TARGETS +|^TARGETS =" Makefile | cut -d "=" -f2)
+ 
++# Initially, in LDLIBS related lines, the dep checker needs
++# to ignore lines containing the following strings:
++filter="\$(VAR_LDLIBS)\|pkg-config\|PKG_CONFIG\|IOURING_EXTRA_LIBS"
++
+ # Single test case
+ if [ $# -eq 2 ]
+ then
+@@ -100,6 +104,8 @@ then
+ 	l1_test $test
+ 	l2_test $test
+ 	l3_test $test
++	l4_test $test
++	l5_test $test
+ 
+ 	print_results $1 $2
+ 	exit $?
+@@ -113,7 +119,7 @@ fi
+ # Append space at the end of the list to append more tests.
+ 
+ l1_tests=$(grep -r --include=Makefile "^LDLIBS" | \
+-		grep -v "VAR_LDLIBS" | awk -F: '{print $1}')
++		grep -v "$filter" | awk -F: '{print $1}' | uniq)
+ 
+ # Level 2: LDLIBS set dynamically.
+ #
+@@ -126,7 +132,7 @@ l1_tests=$(grep -r --include=Makefile "^LDLIBS" | \
+ # Append space at the end of the list to append more tests.
+ 
+ l2_tests=$(grep -r --include=Makefile ": LDLIBS" | \
+-		grep -v "VAR_LDLIBS" | awk -F: '{print $1}')
++		grep -v "$filter" | awk -F: '{print $1}' | uniq)
+ 
+ # Level 3
+ # memfd and others use pkg-config to find mount and fuse libs
+@@ -138,11 +144,32 @@ l2_tests=$(grep -r --include=Makefile ": LDLIBS" | \
+ #	VAR_LDLIBS := $(shell pkg-config fuse --libs 2>/dev/null)
+ 
+ l3_tests=$(grep -r --include=Makefile "^VAR_LDLIBS" | \
+-		grep -v "pkg-config" | awk -F: '{print $1}')
++		grep -v "pkg-config\|PKG_CONFIG" | awk -F: '{print $1}' | uniq)
+ 
+-#echo $l1_tests
+-#echo $l2_1_tests
+-#echo $l3_tests
++# Level 4
++# some tests may fall back to default using `|| echo -l<libname>`
++# if pkg-config doesn't find the libs, instead of using VAR_LDLIBS
++# as per level 3 checks.
++# e.g:
++# netfilter/Makefile
++#	LDLIBS += $(shell $(HOSTPKG_CONFIG) --libs libmnl 2>/dev/null || echo -lmnl)
++l4_tests=$(grep -r --include=Makefile "^LDLIBS" | \
++		grep "pkg-config\|PKG_CONFIG" | awk -F: '{print $1}' | uniq)
++
++# Level 5
++# some tests may use IOURING_EXTRA_LIBS to add extra libs to LDLIBS,
++# which in turn may be defined in a sub-Makefile
++# e.g.:
++# mm/Makefile
++#	$(OUTPUT)/gup_longterm: LDLIBS += $(IOURING_EXTRA_LIBS)
++l5_tests=$(grep -r --include=Makefile "LDLIBS +=.*\$(IOURING_EXTRA_LIBS)" | \
++	awk -F: '{print $1}' | uniq)
++
++#echo l1_tests $l1_tests
++#echo l2_tests $l2_tests
++#echo l3_tests $l3_tests
++#echo l4_tests $l4_tests
++#echo l5_tests $l5_tests
+ 
+ all_tests
+ print_results $1 $2
+@@ -164,24 +191,32 @@ all_tests()
+ 	for test in $l3_tests; do
+ 		l3_test $test
+ 	done
++
++	for test in $l4_tests; do
++		l4_test $test
++	done
++
++	for test in $l5_tests; do
++		l5_test $test
++	done
+ }
+ 
+ # Use same parsing used for l1_tests and pick libraries this time.
+ l1_test()
+ {
+ 	test_libs=$(grep --include=Makefile "^LDLIBS" $test | \
+-			grep -v "VAR_LDLIBS" | \
++			grep -v "$filter" | \
+ 			sed -e 's/\:/ /' | \
+ 			sed -e 's/+/ /' | cut -d "=" -f 2)
+ 
+ 	check_libs $test $test_libs
+ }
+ 
+-# Use same parsing used for l2__tests and pick libraries this time.
++# Use same parsing used for l2_tests and pick libraries this time.
+ l2_test()
+ {
+ 	test_libs=$(grep --include=Makefile ": LDLIBS" $test | \
+-			grep -v "VAR_LDLIBS" | \
++			grep -v "$filter" | \
+ 			sed -e 's/\:/ /' | sed -e 's/+/ /' | \
+ 			cut -d "=" -f 2)
+ 
+@@ -197,6 +232,24 @@ l3_test()
+ 	check_libs $test $test_libs
+ }
+ 
++l4_test()
++{
++	test_libs=$(grep --include=Makefile "^VAR_LDLIBS\|^LDLIBS" $test | \
++			grep "\(pkg-config\|PKG_CONFIG\).*|| echo " | \
++			sed -e 's/.*|| echo //' | sed -e 's/)$//')
++
++	check_libs $test $test_libs
++}
++
++l5_test()
++{
++	tests=$(find $(dirname "$test") -type f -name "*.mk")
++	test_libs=$(grep "^IOURING_EXTRA_LIBS +\?=" $tests | \
++			cut -d "=" -f 2)
++
++	check_libs $test $test_libs
++}
++
+ check_libs()
+ {
+ 
+diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+index a5cb4b09a46c4..0899019a7fcb4 100755
+--- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
++++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+@@ -25,7 +25,7 @@ if [[ "$1" == "-cgroup-v2" ]]; then
+ fi
+ 
+ if [[ $cgroup2 ]]; then
+-  cgroup_path=$(mount -t cgroup2 | head -1 | awk -e '{print $3}')
++  cgroup_path=$(mount -t cgroup2 | head -1 | awk '{print $3}')
+   if [[ -z "$cgroup_path" ]]; then
+     cgroup_path=/dev/cgroup/memory
+     mount -t cgroup2 none $cgroup_path
+@@ -33,7 +33,7 @@ if [[ $cgroup2 ]]; then
+   fi
+   echo "+hugetlb" >$cgroup_path/cgroup.subtree_control
+ else
+-  cgroup_path=$(mount -t cgroup | grep ",hugetlb" | awk -e '{print $3}')
++  cgroup_path=$(mount -t cgroup | grep ",hugetlb" | awk '{print $3}')
+   if [[ -z "$cgroup_path" ]]; then
+     cgroup_path=/dev/cgroup/memory
+     mount -t cgroup memory,hugetlb $cgroup_path
+diff --git a/tools/testing/selftests/mm/hugetlb_reparenting_test.sh b/tools/testing/selftests/mm/hugetlb_reparenting_test.sh
+index bf2d2a684edfd..14d26075c8635 100755
+--- a/tools/testing/selftests/mm/hugetlb_reparenting_test.sh
++++ b/tools/testing/selftests/mm/hugetlb_reparenting_test.sh
+@@ -20,7 +20,7 @@ fi
+ 
+ 
+ if [[ $cgroup2 ]]; then
+-  CGROUP_ROOT=$(mount -t cgroup2 | head -1 | awk -e '{print $3}')
++  CGROUP_ROOT=$(mount -t cgroup2 | head -1 | awk '{print $3}')
+   if [[ -z "$CGROUP_ROOT" ]]; then
+     CGROUP_ROOT=/dev/cgroup/memory
+     mount -t cgroup2 none $CGROUP_ROOT
+@@ -28,7 +28,7 @@ if [[ $cgroup2 ]]; then
+   fi
+   echo "+hugetlb +memory" >$CGROUP_ROOT/cgroup.subtree_control
+ else
+-  CGROUP_ROOT=$(mount -t cgroup | grep ",hugetlb" | awk -e '{print $3}')
++  CGROUP_ROOT=$(mount -t cgroup | grep ",hugetlb" | awk '{print $3}')
+   if [[ -z "$CGROUP_ROOT" ]]; then
+     CGROUP_ROOT=/dev/cgroup/memory
+     mount -t cgroup memory,hugetlb $CGROUP_ROOT
+diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c
+index a3c57004344c6..6ec8b8335bdbf 100644
+--- a/tools/testing/selftests/net/tls.c
++++ b/tools/testing/selftests/net/tls.c
+@@ -552,11 +552,11 @@ TEST_F(tls, sendmsg_large)
+ 
+ 		msg.msg_iov = &vec;
+ 		msg.msg_iovlen = 1;
+-		EXPECT_EQ(sendmsg(self->cfd, &msg, 0), send_len);
++		EXPECT_EQ(sendmsg(self->fd, &msg, 0), send_len);
+ 	}
+ 
+ 	while (recvs++ < sends) {
+-		EXPECT_NE(recv(self->fd, mem, send_len, 0), -1);
++		EXPECT_NE(recv(self->cfd, mem, send_len, 0), -1);
+ 	}
+ 
+ 	free(mem);
+@@ -585,9 +585,9 @@ TEST_F(tls, sendmsg_multiple)
+ 	msg.msg_iov = vec;
+ 	msg.msg_iovlen = iov_len;
+ 
+-	EXPECT_EQ(sendmsg(self->cfd, &msg, 0), total_len);
++	EXPECT_EQ(sendmsg(self->fd, &msg, 0), total_len);
+ 	buf = malloc(total_len);
+-	EXPECT_NE(recv(self->fd, buf, total_len, 0), -1);
++	EXPECT_NE(recv(self->cfd, buf, total_len, 0), -1);
+ 	for (i = 0; i < iov_len; i++) {
+ 		EXPECT_EQ(memcmp(test_strs[i], buf + len_cmp,
+ 				 strlen(test_strs[i])),
+diff --git a/tools/testing/selftests/powerpc/Makefile b/tools/testing/selftests/powerpc/Makefile
+index 49f2ad1793fd9..7ea42fa02eabd 100644
+--- a/tools/testing/selftests/powerpc/Makefile
++++ b/tools/testing/selftests/powerpc/Makefile
+@@ -59,12 +59,11 @@ override define INSTALL_RULE
+ 	done;
+ endef
+ 
+-override define EMIT_TESTS
++emit_tests:
+ 	+@for TARGET in $(SUB_DIRS); do \
+ 		BUILD_TARGET=$(OUTPUT)/$$TARGET;	\
+-		$(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET emit_tests;\
++		$(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET $@;\
+ 	done;
+-endef
+ 
+ override define CLEAN
+ 	+@for TARGET in $(SUB_DIRS); do \
+@@ -77,4 +76,4 @@ endef
+ tags:
+ 	find . -name '*.c' -o -name '*.h' | xargs ctags
+ 
+-.PHONY: tags $(SUB_DIRS)
++.PHONY: tags $(SUB_DIRS) emit_tests
+diff --git a/tools/testing/selftests/powerpc/pmu/Makefile b/tools/testing/selftests/powerpc/pmu/Makefile
+index 2b95e44d20ff9..a284fa874a9f1 100644
+--- a/tools/testing/selftests/powerpc/pmu/Makefile
++++ b/tools/testing/selftests/powerpc/pmu/Makefile
+@@ -30,13 +30,14 @@ override define RUN_TESTS
+ 	+TARGET=event_code_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET run_tests
+ endef
+ 
+-DEFAULT_EMIT_TESTS := $(EMIT_TESTS)
+-override define EMIT_TESTS
+-	$(DEFAULT_EMIT_TESTS)
++emit_tests:
++	for TEST in $(TEST_GEN_PROGS); do \
++		BASENAME_TEST=`basename $$TEST`;	\
++		echo "$(COLLECTION):$$BASENAME_TEST";	\
++	done
+ 	+TARGET=ebb; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET emit_tests
+ 	+TARGET=sampling_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET emit_tests
+ 	+TARGET=event_code_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET emit_tests
+-endef
+ 
+ DEFAULT_INSTALL_RULE := $(INSTALL_RULE)
+ override define INSTALL_RULE
+@@ -64,4 +65,4 @@ sampling_tests:
+ event_code_tests:
+ 	TARGET=$@; BUILD_TARGET=$$OUTPUT/$$TARGET; mkdir -p $$BUILD_TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -k -C $$TARGET all
+ 
+-.PHONY: all run_tests ebb sampling_tests event_code_tests
++.PHONY: all run_tests ebb sampling_tests event_code_tests emit_tests

